This paper presents Persistent Workflow Prompting (PWP), a novel prompt engineering methodology for in-depth peer review of scientific papers using large-scale language models (LLMs). PWP addresses challenges such as data limitations and the complexity of expert reasoning by systematically defining expert reasoning processes using a standard LLM chat interface (zero code, no API). This study presents a proof-of-concept of PWP prompts for critical analysis of experimental chemistry papers, and defines detailed analysis workflows through a hierarchical and modular architecture structured in Markdown. PWP prompts are developed by systematically formalizing expert review workflows that include tacit knowledge through iterative meta-prompting techniques and meta-inference. Once submitted at the beginning of a session, PWP prompts provide LLMs with a persistent workflow triggered by follow-up questions, guiding them to perform complex tasks such as parameter inference through text/image/figure analysis, quantitative validity checking, claim and estimate comparison, and prior validity evaluation. We provide a demonstration showing how to identify key methodological flaws in test cases and mitigate LLM input bias, and provide full prompts, detailed demo analysis, and interactive chat logs as supplementary material to ensure transparency and facilitate reproducibility. Beyond specific applications, this study provides insight into the meta-development process itself, highlighting the potential of PWPs for detailed workflow formulations that enable sophisticated analyses using readily available LLMs for complex scientific tasks.