This paper raises concerns about the increasing incidence of harm to children and adults caused by excessive parasocial ties with, and sycophancy from, AI chatbots. To mitigate these risks, we propose an "AI Guardian Agent" that monitors AI chatbot conversations. Built by repurposing existing state-of-the-art language models, the agent analyzes conversational content to detect excessive parasocial ties and sycophancy. Experimental results on a synthetic dataset show that the AI Guardian Agent identifies all parasocial conversations under majority-rule voting while producing no false positives. These results suggest that the AI Guardian Agent can effectively reduce the risks posed by parasocial ties.