To deepen our understanding of environments in which multiple AI agents interact, this paper conducted experiments using a repeated public goods game. We tested four different models, each with and without reasoning capabilities, under two conditions: one in which the other agent was described as "another AI agent," and one in which it was described as "themselves." We found that telling a large language model (LLM) that the other agent was itself significantly influenced its tendency to cooperate. Although this study focuses on a simple environment, it offers insight into multi-agent settings where cooperation can rise or fall unpredictably because agents "unconsciously" discriminate against one another.