This paper identifies style-conditional data contamination as a covert vector that amplifies sociolinguistic bias in large-scale language models. Using a small contaminated budget, we pair dialectal prompts, such as those from African American English (AAVE) and Southern dialects, with toxic or stereotypical completions to investigate whether language style can serve as a potential trigger for harmful behavior. Across multiple model families and scales, contaminated exposure increases toxicity and stereotype expression for dialectal input, particularly consistently for AAVE. Standard American English, while relatively low, is not immune. A multi-metric audit combining classifier-based toxicity assessment with LLM-as-a-judge reveals stereotype-laden content even when lexical toxicity appears suppressed, indicating that existing detectors underestimate sociolinguistic harm. Furthermore, contaminated models exhibit rapid escape even without explicit profanity in the toxicity, suggesting weakened alignment rather than memorization.