This paper proposes CoDiCon, a novel approach that introduces competitive incentives into cooperative scenarios to address a limitation of existing multi-agent reinforcement learning (MARL) methods, which do not adequately consider interactions between agents. Inspired by sociological research suggesting that moderate competition and constructive conflict facilitate group decision-making, we design an intrinsic reward mechanism based on ranking features to incentivize competition. A centralized intrinsic reward module generates and distributes diverse reward values to the agents, maintaining a balance between competition and cooperation. By parameterizing this centralized reward module and optimizing it to maximize the environmental reward, we reframe the constrained bilevel optimization problem so that it aligns with the original task objective. We evaluate CoDiCon against state-of-the-art methods in the StarCraft Multi-Agent Challenge (SMAC) and Google Research Football (GRF) environments, demonstrating that the competitive intrinsic reward effectively promotes diverse and adaptive strategies among cooperative agents and yields superior performance.
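To make the ranking-based competitive incentive concrete, the sketch below shows one way a centralized module could convert per-agent contribution scores into rank-dependent intrinsic rewards that are mixed with the shared environmental reward. It is a minimal illustration under assumed names (`contribution_scores`, `rank_bonus`, mixing weight `alpha`), not the paper's actual implementation, in which the reward module is learned and optimized end-to-end.

```python
import numpy as np


def competitive_intrinsic_rewards(contribution_scores, alpha=0.1):
    """Hypothetical sketch: map per-agent contribution scores to
    rank-based intrinsic rewards (higher rank -> larger bonus).

    contribution_scores: array of shape (n_agents,), e.g. produced by a
        centralized reward module (assumed here, not taken from the paper).
    alpha: weight balancing competition (intrinsic) against cooperation
        (shared extrinsic reward).
    """
    n = len(contribution_scores)
    # Rank agents by score: 0 = lowest contribution, n-1 = highest.
    ranks = np.argsort(np.argsort(contribution_scores))
    # Normalize ranks to [0, 1] so the bonus scale is independent of team size.
    rank_bonus = ranks / max(n - 1, 1)
    return alpha * rank_bonus


def mixed_rewards(extrinsic_reward, contribution_scores, alpha=0.1):
    """Each agent receives the shared environmental reward plus its
    rank-dependent competitive bonus (illustrative mixing scheme)."""
    return extrinsic_reward + competitive_intrinsic_rewards(
        contribution_scores, alpha
    )


# Example: four agents with different estimated contributions.
scores = np.array([0.2, 0.9, 0.5, 0.1])
print(mixed_rewards(extrinsic_reward=1.0, contribution_scores=scores))
```

In this toy version the bonus depends only on an agent's rank rather than its raw score, which keeps the competitive signal bounded and comparable across episodes; the small weight `alpha` is one simple way to keep the shared cooperative objective dominant.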