In this paper, we present REAL (Real Estate Agent Large Language Model Evaluation), the first benchmark designed to assess the agent performance of large language models (LLMs) in the real estate transaction and service domain. REAL contains 5,316 high-quality evaluation items spanning four topics: memory, understanding, reasoning, and hallucination, organized into 14 categories that assess the knowledge and abilities required of LLMs in real estate transaction and service scenarios. Experimental results show that even state-of-the-art LLMs still have significant room for improvement before they can be reliably applied in the real estate domain.