This study provides an in-depth analysis of the ethical and trustworthiness issues arising from the rapid development of generative artificial intelligence (AI) and proposes a comprehensive framework for their systematic evaluation. While generative AI systems such as ChatGPT demonstrate innovative potential, they also raise ethical and social concerns, including bias, harmfulness, copyright infringement, privacy violations, and hallucination. Existing AI evaluation methodologies, which focus primarily on performance and accuracy, fall short of addressing these multifaceted issues; this study therefore emphasizes the need for new, human-centered criteria that reflect societal impact. To this end, we identify key dimensions for evaluating the ethics and trustworthiness of generative AI, including fairness, transparency, accountability, safety, privacy, accuracy, consistency, robustness, explainability, copyright and intellectual property protection, and traceability. We then develop detailed indicators and evaluation methodologies for each dimension. Furthermore, we present a comparative analysis of AI ethics policies and guidelines in Korea, the United States, the European Union, and China, deriving the key approaches and takeaways of each. The proposed framework applies across the entire AI lifecycle and integrates technical assessment with multidisciplinary perspectives, offering a practical means of identifying and managing ethical risks in real-world settings. Ultimately, this study establishes an academic foundation for the responsible development of generative AI and provides actionable insights for policymakers, developers, users, and other stakeholders, supporting the positive societal contribution of AI technology.