This study examined how racial bias in artificial intelligence (AI) models affects human hiring decisions through an experiment with 528 participants. Across 1,526 hiring scenarios spanning 16 high- and low-status occupations, participants evaluated applicants alongside recommendations from AI models that were systematically biased by race (White, Black, Hispanic, and Asian). When the AI favored a particular race, participants selected applicants of that race up to 90% of the time. Even participants who rated the AI's recommendations as low quality or unimportant were still influenced by its biases in certain conditions. Administering an Implicit Association Test (IAT) beforehand increased the likelihood of selecting applicants who did not match common race-status stereotypes by 13%.