
Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Using LLMs to identify features of personal and professional skills in an open-response situational judgment test

Created by
  • Haebom

Author

Cole Walsh, Rodica Ivan, Muhammad Zafar Iqbal, Colleen Robb

Outline

This paper presents a novel approach to developing an automated scoring system for situational judgment tests (SJTs) using large language models (LLMs). As measuring personal and professional skills grows in importance, so does the need for automated systems that overcome the limitations of human scoring and allow SJTs to be administered at scale. To address the construct-validity concerns of earlier NLP-based systems, the study proposes using LLMs to extract construct-related features from open-response SJT answers, and demonstrates the effectiveness of this approach on the Casper SJT. The work lays a foundation for automated scoring of personal and professional skills.
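The paper does not publish its pipeline, but the core idea of the outline can be sketched as follows. Everything here is an illustrative assumption: the feature names, the prompt wording, and the parsing format are hypothetical, not the authors' actual system; a real deployment would send the prompt to an LLM API rather than parse a canned reply.

```python
# Hypothetical sketch of LLM-based feature extraction for SJT responses.
# Feature names, prompt wording, and the canned LLM reply are illustrative
# assumptions, not the authors' actual pipeline.

FEATURES = ["empathy", "ethical_reasoning", "communication"]

def build_extraction_prompt(scenario: str, response: str) -> str:
    """Assemble a prompt asking an LLM to flag construct-related features."""
    feature_list = ", ".join(FEATURES)
    return (
        "You are scoring an open-response situational judgment test.\n"
        f"Scenario: {scenario}\n"
        f"Response: {response}\n"
        f"For each feature ({feature_list}), answer yes or no, "
        "one per line, as 'feature: yes/no'."
    )

def parse_feature_flags(llm_output: str) -> dict:
    """Parse the LLM's 'feature: yes/no' lines into boolean flags."""
    flags = {}
    for line in llm_output.strip().splitlines():
        if ":" in line:
            name, _, value = line.partition(":")
            flags[name.strip().lower()] = value.strip().lower() == "yes"
    return flags

if __name__ == "__main__":
    prompt = build_extraction_prompt(
        "A teammate misses a deadline.", "I would ask how I can help them."
    )
    # A real system would send `prompt` to an LLM; here we parse a canned reply.
    flags = parse_feature_flags(
        "empathy: yes\nethical_reasoning: no\ncommunication: yes"
    )
    print(flags)
```

The extracted boolean (or scaled) features would then feed a downstream scoring model, which is what separates this design from end-to-end black-box scoring and supports validity arguments.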

Takeaways, Limitations

Takeaways:
Demonstrates the feasibility of an LLM-based automated SJT scoring system
Helps address the scalability and efficiency limitations of human grading
Increases the potential for automating and standardizing assessments of personal and professional skills
Validates the approach empirically on the Casper SJT
Limitations:
Results are reported only for the Casper SJT; further research is needed to establish generalizability to other types of SJTs.
Additional validation of the construct validity of the LLM-based system is needed.
LLM bias and ethical issues require careful consideration.
Building and managing large datasets remains challenging.