ILR is a novel collaborative learning framework for multi-agent systems (MAS) that studies whether interaction between LLMs can enhance their independent problem-solving ability. It integrates two core components: Dynamic Interaction and Perception Calibration. Dynamic Interaction selects a cooperative or competitive strategy based on question difficulty and model capability, and exchanges information through Idea3 (idea sharing, idea analysis, and idea fusion). Perception Calibration trains LLMs with Group Relative Policy Optimization (GRPO), integrating the reward distribution characteristics of one LLM into the reward function of another to strengthen the cohesion of multi-agent interaction. Experiments with three LLMs of varying scales on mathematical and coding benchmarks show that ILR consistently outperforms single-agent learning.
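To make the reward-calibration idea concrete, below is a minimal Python sketch of how one agent's GRPO advantages could incorporate a peer agent's reward distribution. The function names, the blending coefficient `beta`, and the choice of the peer's group mean as the distributional feature are assumptions of this sketch, not the paper's exact formulation.

```python
import numpy as np

def grpo_advantages(rewards):
    """Standard GRPO step: normalize each sampled response's reward
    against the mean and std of its group of rollouts."""
    rewards = np.asarray(rewards, dtype=float)
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

def calibrated_rewards(own_rewards, peer_rewards, beta=0.1):
    """Hypothetical perception calibration: shift agent A's rewards
    toward the reward distribution observed for agent B on the same
    question, so both policies see a shared notion of difficulty.
    Blending in only the peer's group mean is an assumption here."""
    own = np.asarray(own_rewards, dtype=float)
    peer = np.asarray(peer_rewards, dtype=float)
    return own + beta * (peer.mean() - own.mean())

# Toy usage: outcome rewards (pass/fail) for a group of sampled
# responses per agent on one question.
agent_a = [1.0, 0.0, 1.0, 0.0]
agent_b = [1.0, 1.0, 1.0, 0.0]
advantages = grpo_advantages(calibrated_rewards(agent_a, agent_b))
print(advantages)
```

The intuition mirrors the description above: if the peer finds a question easy (high mean reward) while the agent struggles, the calibrated reward nudges the agent's optimization signal accordingly, coupling the two learners without sharing gradients.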