The widespread adoption of large language models (LLMs) such as ChatGPT, Gemini, and DeepSeek has transformed how people work in educational, professional, and creative fields. This paper investigates how the structure and clarity of user prompts affect the effectiveness and productivity of LLM output. Using data from 243 survey respondents with diverse academic and professional backgrounds, we analyze AI usage habits, prompting strategies, and user satisfaction. Results show that users who craft clear, structured, and context-aware prompts report higher task efficiency and better outcomes. These findings highlight the critical role of prompt engineering in maximizing the value of generative AI and offer practical guidance for everyday use.