In this paper, we present Speaking with Intent (SWI), in which a large language model (LLM) explicitly generates an intent that captures its internal intention and provides a high-level plan to guide subsequent analysis and actions. By mimicking the deliberate, conscious thought process of humans, SWI aims to improve the reasoning ability and generation quality of LLMs. Through extensive experiments on text summarization, multi-task question answering, and mathematical reasoning benchmarks, we demonstrate the effectiveness and generalizability of SWI over direct generation without explicit intent. We further examine SWI under various experimental settings, and verify the consistency, effectiveness, and interpretability of the generated intents through human evaluation. These promising results suggest that equipping LLMs with explicit intent offers a new path to strengthening their generation and reasoning abilities through cognitive concepts.