This paper studies the setting in which generative AI models are trained on Internet data that increasingly contains content generated by other models. Specifically, we study the data-mediated interactions that arise when one model learns from the output of another. We present empirical evidence of this phenomenon, develop a theoretical model of the interaction process, and validate the theory through experiments. We find that while data-mediated interactions can help models learn new concepts, they can also homogenize performance on shared tasks.