We conducted task-generation experiments with humans and GPT-4o to investigate whether generative agents based on large language models (LLMs) generate tasks in a human-like manner. Our results show that whereas human task generation is consistently shaped by personal values such as openness to experience and by psychological drivers such as cognitive style, LLMs fail to reflect these behavioral patterns even when the psychological drivers are provided explicitly. LLM-generated tasks were less social, less physically demanding, and more focused on abstract topics. Although the LLM-generated tasks were rated as more engaging and novel, this contrast reveals a gap between LLMs' linguistic fluency and their ability to generate human-like, concrete goals. We conclude that there is a fundamental difference between the value-driven, concrete nature of human cognition and the statistical patterns learned by LLMs, and that designing more human-centric agents will require integrating intrinsic motivation and physical grounding.