This paper examines how large language models (LLMs) such as ChatGPT expose the vulnerabilities of modern knowledge infrastructures by mimicking consistency while bypassing traditional mechanisms of citation, authority, and verification. In response, it proposes the Situated Epistemological Infrastructure (SEI) framework as a diagnostic tool for analyzing how knowledge becomes authoritative across human-machine systems under post-coherence conditions. Rather than relying on stable academic domains or clearly bounded communities of practice, SEI traces how trustworthiness is mediated across institutional, computational, and temporal arrangements. Integrating insights from infrastructure studies, platform theory, and epistemology, the framework emphasizes coordination over classification and underscores the need for predictive and adaptive models of epistemological management. By offering an alternative to representationalist models of scholarly communication, the paper contributes to debates on AI governance, knowledge production, and the ethical design of information systems.