This paper explores how to provide rich context to improve the performance of large language models (LLMs). To address the high computational cost of long prompts and the limited input length of LLMs, we propose PartPrompt, a novel selective compression method that overcomes the limitations of existing generative and selective compression approaches. PartPrompt builds linguistic rule-based syntax trees to compute the information entropy of each node and, on this basis, constructs a global tree that captures the hierarchical structure of the prompt (dependencies among sentences, paragraphs, and sections). Node values are adjusted through bottom-up and top-down propagation over the global tree, and the prompt is then compressed by pruning the tree with a recursive algorithm driven by the adjusted values. Experimental results show that PartPrompt achieves state-of-the-art performance across diverse datasets, evaluation metrics, compression ratios, and target LLMs. It also shows clear advantages in the coherence of the compressed prompts and in extremely long prompt scenarios.
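The abstract outlines a four-step pipeline: score tree nodes with an information-entropy measure, assemble a global tree over sentences, paragraphs, and sections, adjust node values by bottom-up and top-down propagation, and prune the tree recursively. The sketch below illustrates that pipeline under simplifying assumptions; the `Node` class, the length-based value proxy, the decay factor, and the greedy pruning rule are hypothetical stand-ins, not the paper's actual entropy computation or pruning criterion.

```python
# Minimal, hypothetical sketch of tree-based prompt pruning in the spirit of
# PartPrompt. All scoring and pruning rules here are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    text: str = ""                               # leaf text (e.g., a sentence)
    children: List["Node"] = field(default_factory=list)
    value: float = 0.0                           # entropy-like information value


def leaf_value(node: Node) -> float:
    # Assumption: a crude word-count proxy in place of a real per-token
    # information-entropy estimate from a language model.
    return float(len(node.text.split()))


def size(node: Node) -> int:
    """Total number of leaf words under a node."""
    if not node.children:
        return len(node.text.split())
    return sum(size(c) for c in node.children)


def bottom_up(node: Node) -> float:
    """Propagate values from leaves toward the root (sum over children)."""
    if not node.children:
        node.value = leaf_value(node)
    else:
        node.value = sum(bottom_up(child) for child in node.children)
    return node.value


def top_down(node: Node, parent_weight: float = 1.0, decay: float = 0.9) -> None:
    """Re-weight each node by its ancestors so higher-level structure
    (section -> paragraph -> sentence) influences the final score."""
    node.value *= parent_weight
    for child in node.children:
        top_down(child, parent_weight * decay, decay)


def prune(node: Node, budget: int) -> List[str]:
    """Greedily keep the highest-value children that fit in the word budget,
    then emit kept subtrees in their original order to preserve coherence."""
    if not node.children:
        return node.text.split() if size(node) <= budget else []
    kept, remaining = set(), budget
    for child in sorted(node.children, key=lambda c: c.value, reverse=True):
        if size(child) <= remaining:
            kept.add(id(child))
            remaining -= size(child)
    words: List[str] = []
    for child in node.children:              # original document order
        if id(child) in kept:
            words.extend(prune(child, size(child)))
    return words


if __name__ == "__main__":
    # Toy global tree: document -> paragraphs -> sentences (leaves).
    doc = Node(children=[
        Node(children=[Node("PartPrompt builds a global tree over the prompt"),
                       Node("and scores every node with an entropy-like value")]),
        Node(children=[Node("Low-value subtrees are pruned first"),
                       Node("so the compressed prompt stays coherent")]),
    ])
    bottom_up(doc)
    top_down(doc)
    print(" ".join(prune(doc, budget=18)))
```

In this toy run, the first paragraph receives the larger propagated value and fits within the 18-word budget, so it is kept whole while the second paragraph is dropped; a real implementation would score nodes with model-derived entropy and prune at finer granularity.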