This paper proposes FedSEA-LLaMA, a transformer-based federated partitioning model that leverages private data in federated learning environments to improve the performance of large language models (LLMs) while addressing data silos and high computational demands. FedSEA-LLaMA preserves data privacy by placing most model parameters on the server (or across distributed clients), with only a small portion kept on each client. To overcome the limitations of existing federated partitioning models, namely the vulnerability of P2P encryption, the high communication overhead caused by sequential training and inference, and the rigidity of fixed split points, we introduce secure vector transmission via Gaussian noise injection, reduced communication costs through attention mask compression and KV cache collaboration, and user-controlled dynamic split point adjustment. Experimental results on natural language understanding, summarization, and conversational question-answering tasks show that FedSEA-LLaMA achieves up to an eightfold speedup in training and inference over centralized LLaMA2 without any performance degradation. Furthermore, privacy attack experiments and an analysis of different split points demonstrate its security and adaptability.
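To make the secure-transmission idea concrete, the sketch below illustrates one plausible reading of Gaussian noise injection in a split setup: the client runs only the first few transformer blocks locally and perturbs the resulting hidden states before sending them onward, so raw activations never leave the device. This is a minimal illustration under assumed choices (the class name `ClientFront`, the layer counts, model width, and the fixed `noise_std` scale are all hypothetical), not the paper's exact mechanism or noise calibration.

```python
import torch
import torch.nn as nn

class ClientFront(nn.Module):
    """Client-side stub: embedding plus the first `n_client_layers` blocks (assumed split)."""
    def __init__(self, vocab_size=32000, d_model=512, n_client_layers=2, noise_std=0.1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        block = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(block, num_layers=n_client_layers)
        self.noise_std = noise_std  # assumed noise scale; the paper may calibrate this differently

    def forward(self, input_ids):
        h = self.blocks(self.embed(input_ids))
        # Inject zero-mean Gaussian noise so only perturbed activations cross the network.
        return h + torch.randn_like(h) * self.noise_std

# Usage: the noisy hidden states are what would be transmitted to the server-side layers.
client = ClientFront()
tokens = torch.randint(0, 32000, (1, 16))
noisy_hidden = client(tokens)
print(noisy_hidden.shape)  # torch.Size([1, 16, 512])
```

In this reading, the privacy benefit comes from never exposing exact intermediate vectors, while the server-side blocks continue training or inference on the perturbed representations.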