This paper presents the first empirical study of library and programming-language preferences in code generation across eight diverse large language models (LLMs). We find that LLMs tend to overuse popular libraries such as NumPy, invoking them unnecessarily in up to 48% of cases, and to default to Python as their primary language, choosing it 58% of the time even for high-performance project initialization tasks where Python is not the optimal choice and Rust is never selected. These preferences arise because LLMs prioritize familiarity and popularity over suitability and task-specific optimality, which can introduce security vulnerabilities, accumulate technical debt, and limit exposure to newly developed, more suitable tools and languages. Understanding and addressing these biases is therefore essential for responsibly integrating LLMs into software development workflows.