This paper presents a large-scale empirical study of security vulnerabilities in large language model (LLM) services deployed via open-source and commercial frameworks. Through internet-wide measurements, we identified 320,102 publicly accessible LLM services across 15 frameworks and extracted 158 unique API endpoints, which we categorized into 12 functional groups. Our analysis revealed that over 40% of endpoints were served over plain HTTP, and over 210,000 services lacked valid TLS metadata. Several frameworks exhibited highly inconsistent API exposure, responding to over 35% of unauthenticated API requests and potentially leaking model or system information. Overall, we observed widespread use of insecure protocols, improper TLS configurations, and unauthorized access to critical operations, weaknesses that can lead to model leaks, system compromise, and broader unauthorized access.
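The abstract describes classifying each discovered service by transport security (plain HTTP vs. valid TLS) and by whether it answers unauthenticated API requests. The sketch below illustrates that kind of per-service classification; it is a hypothetical illustration, not the authors' actual measurement pipeline, and the `ProbeResult` fields and issue labels are assumptions.

```python
# Hypothetical sketch of per-service classification as suggested by the abstract;
# the field names and issue categories are illustrative, not the study's code.

from dataclasses import dataclass

@dataclass
class ProbeResult:
    scheme: str          # "http" or "https"
    tls_valid: bool      # certificate chain validated (meaningful only for https)
    status_code: int     # HTTP status of an unauthenticated API request

def classify(probe: ProbeResult) -> list:
    """Return the security issues observed for one LLM service endpoint."""
    issues = []
    if probe.scheme == "http":
        issues.append("plaintext-http")          # traffic readable on the wire
    elif not probe.tls_valid:
        issues.append("invalid-tls")             # e.g. self-signed or expired cert
    if 200 <= probe.status_code < 300:
        issues.append("unauthenticated-access")  # API answered without credentials
    return issues

# A plain-HTTP service that answers unauthenticated requests exhibits two issues
print(classify(ProbeResult("http", False, 200)))   # → ['plaintext-http', 'unauthenticated-access']
# A service with valid TLS that rejects unauthenticated requests exhibits none
print(classify(ProbeResult("https", True, 401)))   # → []
```

Aggregating such per-service labels over the full scan would yield exactly the kinds of population statistics the abstract reports (share of plain-HTTP endpoints, count of services without valid TLS, fraction answering unauthenticated requests).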