This paper investigates whether culturally biased AI can improve human decision-making through a randomized experiment with 2,500 participants. Relative to a neutral AI, politically biased variants of GPT-4o serving as assistants on information evaluation tasks improved participants' performance, increased their engagement, and reduced their evaluation bias. The effect was particularly pronounced when participants encountered opposing viewpoints. However, this benefit came at a cost to trust: participants underestimated the biased AI and overestimated the neutral system. When participants were shown two AIs whose opposing biases flanked their own perspectives, the gap between perceived and actual performance narrowed. These findings challenge conventional wisdom about AI neutrality and suggest that strategically incorporating diverse cultural biases can foster better and more resilient human decision-making.