In this paper, we propose GURPP (Graph-based Urban Region Pre-training and Prompting), a novel framework for learning urban region representations that supports a variety of urban downstream tasks. Observing that previous studies overlook the fine-grained functional layout semantics of urban regions and adapt poorly to downstream tasks, GURPP constructs an urban region graph and captures heterogeneous, transferable patterns of entity interactions through a subgraph-centric pre-training model. We pre-train knowledge-rich region embeddings via contrastive learning and multi-view learning, and enhance their task adaptability with both manually defined and learnable prompts. Experiments on multiple urban region prediction tasks across several cities demonstrate the superior performance of GURPP.