Large language models are widely deployed, yet they can unintentionally retain sensitive or harmful information, raising security and privacy concerns. Machine unlearning has emerged to address this issue, but existing training-time unlearning methods struggle to balance knowledge separation and removal against preservation of model utility. This paper proposes FALCON, a representation-based unlearning approach. FALCON employs information-theoretic guidance for efficient parameter selection, a contrastive mechanism to separate the representations of knowledge to be forgotten from knowledge to be retained, and projection of conflicting gradients onto an orthogonal subspace to reconcile the forgetting and retention objectives. Together, these components enhance unlearning effectiveness, maintain model utility, and provide robust resistance to knowledge recovery attempts.
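As a rough illustration of the gradient-projection idea, the following is a minimal PyTorch sketch: when the forgetting gradient conflicts with the retention gradient (negative inner product), the forgetting gradient's component along the retention direction is removed before the update. The function name and the PCGrad-style projection rule here are assumptions for illustration, not FALCON's exact formulation.

```python
import torch

def project_conflicting(g_forget: torch.Tensor, g_retain: torch.Tensor) -> torch.Tensor:
    """Resolve a gradient conflict between forgetting and retention objectives.

    If the forgetting gradient has a negative inner product with the retention
    gradient, subtract its projection onto the retention direction so the
    resulting update lies in the subspace orthogonal to the retention gradient.
    Hypothetical sketch; FALCON's exact rule may differ.
    """
    dot = torch.dot(g_forget.flatten(), g_retain.flatten())
    if dot < 0:  # gradients conflict
        # Remove the component of g_forget along g_retain.
        g_forget = g_forget - (dot / (g_retain.norm() ** 2 + 1e-12)) * g_retain
    return g_forget

# Usage (hypothetical): combine the adjusted forgetting gradient with the
# retention gradient before applying the parameter update.
# g_update = project_conflicting(g_f, g_r) + g_r
```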