This paper proposes TransLLM, a unified framework that couples spatiotemporal modeling with a large language model (LLM) to address diverse challenges in urban transportation systems, including traffic prediction, electric vehicle charging demand forecasting, and taxi dispatching. TransLLM encodes spatiotemporal patterns as contextual representations with a lightweight spatiotemporal encoder, constructs personalized prompts that guide LLM inference, and produces task-specific predictions through a specialized output layer. An instance-level prompt routing mechanism, trained via reinforcement learning, dynamically tailors prompts to the characteristics of each input. Experiments on seven datasets across three tasks show that TransLLM performs competitively in both supervised and zero-shot settings, exhibiting strong generalization and cross-task adaptability.
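The abstract describes the pipeline only at a high level (encoder → prompt routing → LLM → output head). A minimal toy sketch of that data flow might look as follows; all dimensions, variable names, and the linear stand-ins for the encoder, the frozen LLM, and the routing policy are hypothetical illustrations, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative only, not from the paper)
N_NODES, T_STEPS, D_MODEL, N_PROMPTS, HORIZON = 8, 12, 16, 4, 3

class TransLLMSketch:
    """Toy sketch: spatiotemporal encoder -> prompt routing -> LLM -> task head."""
    def __init__(self):
        self.W_enc = rng.normal(size=(T_STEPS, D_MODEL)) * 0.1   # lightweight encoder
        self.prompts = rng.normal(size=(N_PROMPTS, D_MODEL))     # learnable prompt pool
        self.W_route = rng.normal(size=(D_MODEL, N_PROMPTS))     # routing policy scores
        self.W_llm = rng.normal(size=(D_MODEL, D_MODEL)) * 0.1   # stand-in for frozen LLM
        self.W_out = rng.normal(size=(D_MODEL, HORIZON)) * 0.1   # task-specific output layer

    def forward(self, x):
        # x: (N_NODES, T_STEPS) history per sensor/node
        h = x @ self.W_enc                       # contextual spatiotemporal representation
        logits = h.mean(axis=0) @ self.W_route   # instance-level routing scores
        k = int(np.argmax(logits))               # greedy action of the (RL-trained) router
        z = h + self.prompts[k]                  # prompt-conditioned input to the "LLM"
        z = np.tanh(z @ self.W_llm)              # LLM inference (placeholder transform)
        return z @ self.W_out, k                 # (N_NODES, HORIZON) predictions, chosen prompt

model = TransLLMSketch()
x = rng.normal(size=(N_NODES, T_STEPS))
y_hat, chosen = model.forward(x)
print(y_hat.shape, chosen)
```

In the actual framework the router is trained with reinforcement learning and the prompts are learnable; here a fixed random projection with a greedy argmax merely stands in for that policy to show where instance-level personalization enters the pipeline.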