In this paper, we present the Hindi Analogy Test Set (HATS), a novel dataset for evaluating analogical reasoning in Hindi. HATS consists of 405 multiple-choice questions drawn from Indian government exams. Using this benchmark, we evaluate state-of-the-art multilingual LLMs under various prompting strategies, including a grounded Chain of Thought approach informed by cognitive theory, and suggest a method to improve model performance on Hindi analogy tasks. Experimental results show that models perform best with English prompts, regardless of the prompting strategy. This study addresses the critical lack of resources for assessing LLM reasoning capabilities in Hindi.