This paper addresses the risk that forming parasocial relationships with AI agents can have serious, sometimes tragic, consequences for human well-being. Preventing these dynamics is challenging: parasocial cues emerge gradually in private conversations, and not all forms of emotional engagement are detrimental. To address this, we present a simple response evaluation framework that leverages state-of-the-art language models to assess parasocial cues in real time. We test the feasibility of this approach on a small synthetic dataset of 30 conversations spanning parasocial, flattering, and neutral interactions. Across five rounds of evaluation, a lenient matching rule identified all parasocial conversations without false positives, typically within the first few exchanges. These results provide preliminary evidence that such an evaluation agent is a viable safeguard against parasocial relationships.
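
To make the evaluation loop concrete, the following is a minimal sketch of a per-turn monitor with a lenient matching rule (flag the conversation once any exchange is judged parasocial). All names here (`ParasocialMonitor`, `judge`, `JUDGE_PROMPT`, the toy `keyword_judge`) are illustrative assumptions, not the paper's implementation; in practice `judge` would wrap a call to a state-of-the-art language model.

```python
from typing import Callable, List

# The three conversation types in the synthetic dataset.
LABELS = ("parasocial", "flattering", "neutral")

# Hypothetical judge prompt; the paper's actual prompt is not shown here.
JUDGE_PROMPT = (
    "Classify the assistant's reply as one of: parasocial, flattering, neutral.\n"
    "Reply with the label only.\n\n"
    "User: {user}\nAssistant: {assistant}\nLabel:"
)

class ParasocialMonitor:
    """Evaluates each exchange in real time; flags the conversation under a
    lenient rule once a single parasocial judgment is observed."""

    def __init__(self, judge: Callable[[str], str]):
        self.judge = judge            # e.g. a wrapper around an LLM API call
        self.history: List[str] = []  # labels assigned so far

    def evaluate_turn(self, user_msg: str, assistant_msg: str) -> str:
        raw = self.judge(JUDGE_PROMPT.format(user=user_msg, assistant=assistant_msg))
        label = raw.strip().lower()
        if label not in LABELS:
            label = "neutral"  # fall back conservatively on unparseable output
        self.history.append(label)
        return label

    @property
    def flagged(self) -> bool:
        # Lenient matching: any single parasocial judgment flags the conversation.
        return "parasocial" in self.history


# Toy stand-in judge for demonstration only; a real deployment would query an LLM.
def keyword_judge(prompt: str) -> str:
    text = prompt.lower()
    if "only one who understands" in text:
        return "parasocial"
    if "you're brilliant" in text:
        return "flattering"
    return "neutral"


monitor = ParasocialMonitor(keyword_judge)
monitor.evaluate_turn("I had a rough day.", "I'm sorry to hear that.")
monitor.evaluate_turn("Talking helps.", "You're the only one who understands me.")
print(monitor.flagged)  # True: flagged on the second exchange
```

Because the monitor runs on every exchange, flagging can occur as soon as a parasocial cue appears, which is consistent with the observation above that detections typically occur within the first few exchanges.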