This paper explores whether human trust in AI-generated text is limited by biases that go beyond concerns about accuracy. Across three experiments, covering text editing, news article summarization, and persuasive writing, we examined how human raters respond to labeled and unlabeled content. Although raters could not reliably distinguish AI-generated from human-written text in blind tests, they preferred content labeled "human-generated" over content labeled "AI-generated" by more than 30%. The same pattern held when the labels were deliberately swapped. This bias against AI-labeled content has broader social and cognitive implications, including systematic underestimation of AI performance. The study highlights the limitations of human judgment when interacting with AI and lays a foundation for improving human-AI collaboration, particularly in creative fields.