This paper introduces SON-GOKU, a multi-task learning algorithm that addresses the gradient interference, slow convergence, and degraded model performance caused by conflicting objectives. SON-GOKU estimates pairwise gradient interference between tasks, constructs an interference graph, and partitions mutually compatible tasks into groups using a greedy graph coloring method. At each training step, only the tasks in one group (color class) are activated, and the partition is periodically recomputed as task relationships evolve during training. This scheme ensures that each mini-batch contains only tasks that push the model in a consistent direction, and it improves the effectiveness of baseline multi-task optimizers without additional tuning. Experiments on six datasets show that SON-GOKU consistently outperforms state-of-the-art multi-task optimizers. We further provide a theoretical rationale for why grouping and sequential updates improve multi-task learning, together with guarantees on descent, convergence, and the accurate identification of conflicts and alignments between tasks.
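The grouping step described above (interference graph plus greedy coloring) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the conflict criterion (negative cosine similarity between task gradients), the `threshold` parameter, and all function names are assumptions for the sketch.

```python
import numpy as np

def interference_graph(grads, threshold=0.0):
    """Add an edge (i, j) when task gradients conflict, i.e. their
    cosine similarity falls below the threshold (an assumed criterion)."""
    n = len(grads)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            cos = grads[i] @ grads[j] / (
                np.linalg.norm(grads[i]) * np.linalg.norm(grads[j]))
            if cos < threshold:
                edges.add((i, j))
    return edges

def greedy_coloring(n, edges):
    """Greedily assign each task the smallest color not used by any
    conflicting neighbor, so no color class contains a conflicting pair."""
    neighbors = {i: set() for i in range(n)}
    for i, j in edges:
        neighbors[i].add(j)
        neighbors[j].add(i)
    color = {}
    for v in range(n):
        used = {color[u] for u in neighbors[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# Toy example: tasks 0 and 1 pull in opposite directions; task 2 aligns
# with task 0 but conflicts with task 1.
grads = [np.array([1.0, 0.0]), np.array([-1.0, 0.1]), np.array([0.5, 0.5])]
edges = interference_graph(grads)          # {(0, 1), (1, 2)}
groups = greedy_coloring(len(grads), edges)
```

Training would then cycle through the color classes, activating one group of mutually compatible tasks per step and rebuilding the graph at intervals as gradients evolve.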