This paper examines, through a randomized experiment with 2,500 participants, how existing AI systems that emphasize ideological neutrality can induce automation bias by suppressing human cognitive engagement in decision-making. Participants completed an information evaluation task while interacting with politically biased and neutral variants of GPT-4o. The biased AI assistant outperformed the neutral AI: it increased engagement and reduced evaluation bias, particularly when participants encountered conflicting views. However, this benefit came at the cost of reduced trust: participants underrated the biased AI and overrated the neutral system. This gap between perception and performance was bridged by exposing participants to two AI systems whose biases reflected contrasting human perspectives. These results challenge conventional wisdom about AI neutrality and suggest that strategically incorporating diverse cultural biases can foster more effective and resilient human decision-making.