This paper explores the use of large language models (LLMs) for knowledge-based visual question answering (VQA). Unlike previous studies that prompt LLMs to predict answers directly, this paper proposes a novel framework, PLRH, that leverages rationale heuristics as an intermediate reasoning process. PLRH uses Chain of Thought (CoT) prompting to guide the LLM to generate rationale heuristics, which are then used to predict the answer. Experimental results show that PLRH outperforms existing baselines by 2.2 and 2.1 points on OK-VQA and A-OKVQA, respectively.
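The two-stage process described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `complete` function is a hypothetical stand-in for an LLM API call, and the prompts and returned strings are invented for demonstration.

```python
def complete(prompt: str) -> str:
    # Hypothetical stub standing in for a real LLM call; a real system
    # would send the prompt to a language model here.
    if "step by step" in prompt:
        return "The item on the plate is round and baked, so it is likely bread."
    return "bread"


def answer_with_rationale_heuristic(question: str, caption: str) -> str:
    # Stage 1: elicit a rationale heuristic via chain-of-thought prompting.
    cot_prompt = (
        f"Context: {caption}\nQuestion: {question}\n"
        "Let's think step by step."
    )
    rationale = complete(cot_prompt)
    # Stage 2: condition the final answer prediction on the generated rationale.
    answer_prompt = (
        f"Context: {caption}\nQuestion: {question}\n"
        f"Rationale: {rationale}\nAnswer:"
    )
    return complete(answer_prompt)


print(answer_with_rationale_heuristic(
    "What food is on the plate?",
    "A plate with a round baked item.",
))
```

The key design choice is that the rationale generated in stage 1 is injected into the stage 2 prompt, so the answer prediction is conditioned on the intermediate reasoning rather than on the question alone.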