This paper examines how students form trust in AI chatbots as these systems become increasingly common in educational settings. Noting that existing models of human-to-human trust and trust in technology do not adequately capture the personified characteristics of AI chatbots, the study uses partial least squares structural equation modeling (PLS-SEM) to analyze how human-like trust and system-like trust affect students' enjoyment, trusting intention, intention to use, and perceived usefulness of AI chatbots. The results show that both types of trust significantly shape students' perceptions, but human-like trust has a stronger effect on trusting intention, while system-like trust has a stronger effect on intention to use and perceived usefulness; the two types of trust affect perceived enjoyment to a similar degree. Based on these findings, we propose a new theoretical framework in which students form a distinct human-AI trust that differs from that described by existing human-human and human-technology trust models.