This paper examines ensembling techniques for large language models (LLMs) based on generative pre-trained transformers (GPTs). Individual LLMs often produce inconsistent outputs and exhibit bias, limiting their ability to adequately represent diverse linguistic patterns. Furthermore, many powerful LLMs are closed-source, which restricts their industrial adoption due to data privacy concerns. Building on their success in text generation, this paper surveys LLM ensemble techniques for code generation and analyzes their capabilities by categorizing them into seven key approaches: weighted merging, knowledge fusion, expert mixing, reward ensemble, output ensemble, routing, and cascading. We highlight key advantages, including enhanced representation of diversity, improved output quality, and increased application flexibility. This categorization aids model selection for practical tasks and lays the foundation for extending ensemble strategies to multimodal LLMs.
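To make the categories concrete, the sketch below illustrates one of the listed approaches, output ensembling, in its simplest form: several models independently generate candidates and a majority vote selects the final answer. The function names and the stand-in model callables are hypothetical and are not taken from the paper; this is a minimal illustration, not the paper's method.

```python
from collections import Counter
from typing import Callable, List

# Minimal sketch of output ensembling: each model independently generates a
# candidate answer, and a simple majority vote picks the final output.
# `models` is a list of hypothetical callables mapping a prompt to a string;
# in practice each would wrap a call to a different LLM.
def output_ensemble(models: List[Callable[[str], str]], prompt: str) -> str:
    candidates = [model(prompt) for model in models]  # query every model
    votes = Counter(candidates)                        # tally identical outputs
    answer, _ = votes.most_common(1)[0]                # take the majority-voted answer
    return answer

if __name__ == "__main__":
    # Stand-in "models" for demonstration only.
    fake_models = [
        lambda p: "def add(a, b): return a + b",
        lambda p: "def add(a, b): return a + b",
        lambda p: "def add(x, y): return x - y",
    ]
    print(output_ensemble(fake_models, "Write an add function in Python."))
```

The other approaches surveyed in the paper (e.g., routing or cascading) replace the voting step with a selection or escalation policy, but the overall pattern of combining multiple models remains the same.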