This paper systematically examines the introspective abilities of 21 open-source large language models (LLMs) in two domains: grammatical knowledge and word prediction. Because a model's internal linguistic knowledge can, in theory, be grounded in direct measurements of string probability, we assess how faithfully each model's responses to metalinguistic prompts reflect its internal knowledge. We propose a new measure of introspection: the degree to which a model's prompted responses predict its own string probabilities, beyond what would be predicted by another model with nearly identical internal knowledge. While both metalinguistic prompting and direct probability comparisons yield high task accuracy, we find no evidence that LLMs have privileged "self-access." By evaluating a broad range of open-source models and controlling for cross-model similarity, we add new evidence to the argument that LLMs are incapable of introspection and that prompted responses should not be conflated with a model's linguistic generalizations.
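To make the proposed measure concrete, the sketch below illustrates one simplified reading of it: on grammatical minimal pairs, check whether a model's prompted judgments agree with its own string-probability preferences more often than with those of a second, similar model. This is a minimal illustration, not the paper's implementation; the agreement-rate formulation, the helper names, and the stand-in checkpoints (`gpt2`, `distilgpt2`) are all assumptions for the example.

```python
# Minimal sketch of a "self-access" comparison, assuming prompted binary
# judgments for model A have already been collected elsewhere.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def string_logprob(model, tokenizer, text: str) -> float:
    """Total log-probability the model assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Shift so each position predicts the next token.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    return log_probs.gather(1, targets.unsqueeze(1)).sum().item()

def introspection_advantage(judgments, pairs, model_a, model_b, tok_a, tok_b):
    """Compare how often model A's prompted judgments match A's own
    probability preferences vs. model B's, over minimal pairs.

    judgments: list of bools; True if A's prompted response picked sentence 1.
    pairs:     list of (sentence1, sentence2) grammatical minimal pairs.
    Returns (agreement_with_self, agreement_with_other)."""
    agree_self = agree_other = 0
    for judged_first, (s1, s2) in zip(judgments, pairs):
        a_prefers_first = string_logprob(model_a, tok_a, s1) > string_logprob(model_a, tok_a, s2)
        b_prefers_first = string_logprob(model_b, tok_b, s1) > string_logprob(model_b, tok_b, s2)
        agree_self += judged_first == a_prefers_first
        agree_other += judged_first == b_prefers_first
    n = len(pairs)
    return agree_self / n, agree_other / n

if __name__ == "__main__":
    # Illustrative stand-in models; any pair of similar causal LMs would do.
    tok_a = AutoTokenizer.from_pretrained("gpt2")
    model_a = AutoModelForCausalLM.from_pretrained("gpt2")
    tok_b = AutoTokenizer.from_pretrained("distilgpt2")
    model_b = AutoModelForCausalLM.from_pretrained("distilgpt2")
    pairs = [("The keys to the cabinet are here.",
              "The keys to the cabinet is here.")]
    judgments = [True]  # pretend A's prompted response picked sentence 1
    print(introspection_advantage(judgments, pairs, model_a, model_b, tok_a, tok_b))
```

On this reading, privileged self-access would show up as the first returned agreement rate reliably exceeding the second; the paper's finding is that no such gap emerges once model similarity is controlled for.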