This paper explores rationality, a central concept in the field of artificial intelligence (AI). Both attempts to simulate human reasoning and attempts to achieve bounded optimality aim to make artificial agents as rational as possible, yet there is no unified definition of what constitutes a rational agent within AI. This paper examines rationality and irrationality in AI and addresses some of the unresolved issues in this field. Specifically, it traces how understandings of rationality in economics, philosophy, and psychology have shaped the concept of rationality within AI. Focusing on the behavior of artificial agents, it then considers irrational behaviors that may nevertheless be optimal in specific scenarios. Although several methods have been developed to identify and interact with irrational agents, research in this area remains limited; methods developed for adversarial scenarios, however, can be applied to interactions with irrational artificial agents. Finally, the paper discusses the role of rationality in human-AI interactions, highlighting the many open questions surrounding the potentially irrational behavior of both humans and artificial agents.