This paper presents a method that enables AI agents to understand and adapt to individual preferences so that they can operate effectively, especially in collaborative roles. Unlike prior work that adopts generalized, one-size-fits-all approaches, we develop agents that learn preferences from a small number of trials and then adapt their planning strategies accordingly. Building on the observation that preferences can generalize across a wide range of planning scenarios, even when expressed only implicitly through a few trials, we present a Preference-based Planning (PbP) benchmark featuring hundreds of diverse preferences, ranging from atomic actions to complex sequences. Evaluations of state-of-the-art (SOTA) methods show that while symbol-based approaches are promising in terms of scalability, they still face significant challenges in generating and executing plans that satisfy personalized preferences. Furthermore, we find that incorporating learned preferences as intermediate representations within the plan significantly enhances the agent's ability to construct personalized plans. These results establish preferences as a valuable abstraction layer for adaptive planning and open new avenues for research in preference-based plan generation and execution.