This paper addresses the challenge of building a general model capable of analyzing human mobility trajectories across diverse regions and tasks. Existing approaches are typically trained for specific regions or are applicable to only a small number of tasks. To address this challenge, we propose Traj-MLLM, a framework built on multimodal large language models (MLLMs). Traj-MLLM integrates multi-view contexts to transform raw trajectory data into image-text sequences and leverages the reasoning capabilities of MLLMs to perform trajectory analysis. Furthermore, we propose a prompt optimization technique that generates data-invariant prompts for task adaptation. Experimental results show that Traj-MLLM outperforms the best existing models by 48.05%, 15.52%, 51.52%, and 1.83% on travel time prediction, mobility prediction, anomaly detection, and transportation mode identification, respectively. Traj-MLLM achieves these results without fine-tuning the MLLM backbone and without requiring any training data.
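To make the pipeline concrete, the following is a minimal sketch of the general idea of turning a raw trajectory into an image-text pair that an MLLM could reason over. It is not the paper's implementation: the function names (`trajectory_to_image`, `trajectory_to_text`), the sample coordinates, and the prompt wording are all illustrative assumptions, and the final call to an MLLM is left abstract because it depends on the specific model client used.

```python
# Hypothetical sketch: render a raw trajectory as an image plus a textual
# description, forming the kind of image-text prompt an MLLM could analyze.
import io
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# A raw trajectory: (longitude, latitude, unix timestamp) samples (made up).
trajectory = [
    (116.397, 39.908, 1700000000),
    (116.402, 39.913, 1700000300),
    (116.410, 39.920, 1700000600),
]

def trajectory_to_image(points) -> bytes:
    """Plot the spatial footprint of the trajectory and return PNG bytes."""
    lons = [p[0] for p in points]
    lats = [p[1] for p in points]
    fig, ax = plt.subplots(figsize=(3, 3))
    ax.plot(lons, lats, marker="o")
    ax.set_xlabel("longitude")
    ax.set_ylabel("latitude")
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    plt.close(fig)
    return buf.getvalue()

def trajectory_to_text(points) -> str:
    """Describe temporal context (sample count, duration) in plain text."""
    duration_s = points[-1][2] - points[0][2]
    return (f"The trajectory contains {len(points)} GPS samples "
            f"spanning {duration_s // 60} minutes.")

# Combine the image and text into one multimodal prompt payload.
# How this payload is actually submitted depends on the MLLM client in use.
prompt = {
    "image_png": trajectory_to_image(trajectory),
    "text": trajectory_to_text(trajectory)
            + " Based on the image and description, estimate the travel time "
              "for the remaining segment of this trip.",
}
print(prompt["text"])
```

In this sketch the image carries the spatial view and the text carries the temporal view; a task-specific instruction (here, travel time prediction) is appended so the same trajectory encoding can serve different analysis tasks by changing only the prompt.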