This paper presents an automated method for measuring the Information Security Awareness (ISA) level of large language model (LLM)-based assistants. ISA encompasses not only the security knowledge of LLMs addressed in previous research, but also the attitudes and behaviors that are crucial for recognizing implicit security context and refusing unsafe requests; our assessment covers 30 categories of a mobile ISA taxonomy. Using real-world scenarios that expose the tension between implicit security risks and user satisfaction, we evaluate the ISA of leading LLMs and find that most models exhibit medium or low ISA levels. Moreover, smaller variants within the same model family are often even riskier, and the absence of consistent ISA improvements across recent versions suggests that providers are not actively addressing this issue. These findings indicate a widespread vulnerability in current LLM deployments: most popular models, including their smaller variants, systematically expose users to security risks. Finally, we propose a practical mitigation strategy that integrates security awareness guidelines into the model's system prompt, helping LLMs better detect and refuse unsafe requests.
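
The proposed mitigation amounts to prepending security-awareness guidelines to the assistant's system prompt before the user's request is processed. The following is a minimal sketch of that idea, assuming a generic chat-message interface; the guideline text and the build_messages helper are illustrative placeholders, not the guidelines evaluated in the paper.

# Illustrative sketch (assumption): the guideline text below is NOT the paper's
# actual prompt; it only shows how security-awareness guidelines can be
# injected into the system prompt of a chat request.

SECURITY_AWARENESS_GUIDELINES = """
Before answering, consider the information-security implications of the request:
- Watch for implicit risks (credential sharing, unverified links, sideloaded apps,
  disabled protections, oversharing of personal data).
- If the request is unsafe, refuse it and briefly explain the risk.
- Otherwise, answer normally but note any relevant security caveats.
""".strip()


def build_messages(user_request: str,
                   base_system_prompt: str = "You are a helpful assistant.") -> list[dict]:
    """Prepend security-awareness guidelines to the assistant's system prompt."""
    system_prompt = f"{base_system_prompt}\n\n{SECURITY_AWARENESS_GUIDELINES}"
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_request},
    ]


if __name__ == "__main__":
    # The resulting message list can be passed to any chat-completion API.
    for message in build_messages("My banking app keeps asking for 2FA. How do I turn it off?"):
        print(message["role"].upper(), "::", message["content"][:80])

Because the guidelines live in the system prompt rather than in fine-tuned weights, this mitigation can be applied by deployers of off-the-shelf models without retraining.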