This paper proposes Orthogonal Residual Update (ORU) to overcome the limitations of conventional residual connections. Whereas a conventional residual connection adds the full module output to the input stream, largely reinforcing or modulating the stream's existing direction, the proposed method adds only the component of the module output that is orthogonal to the input stream, encouraging the module to learn novel representation directions. This enables richer feature learning and more efficient training. We experimentally demonstrate that our method improves generalization accuracy and training stability across architectures such as ResNetV2 and Vision Transformer, and across datasets including CIFAR-10/100, TinyImageNet, and ImageNet-1k. For example, ORU improves the top-1 accuracy of ViT-B by 4.3% on ImageNet-1k.
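The core update can be sketched as a simple projection step: decompose the module output f(x) into components parallel and orthogonal to the input stream x, and add only the orthogonal part. The NumPy snippet below is a minimal illustration under this reading of the abstract, not the authors' implementation; the per-sample decomposition axis and the `eps` stabilizer are assumptions.

```python
import numpy as np

def orthogonal_residual_update(x, fx, eps=1e-6):
    """Hypothetical sketch: add only the component of f(x)
    orthogonal to the input stream x (last-axis decomposition assumed)."""
    # Projection coefficient <f(x), x> / <x, x> along the feature axis
    coef = np.sum(fx * x, axis=-1, keepdims=True) / (
        np.sum(x * x, axis=-1, keepdims=True) + eps
    )
    fx_parallel = coef * x        # component along the existing stream direction
    fx_orth = fx - fx_parallel    # component introducing a new direction
    return x + fx_orth            # update the stream with the orthogonal part only
```

With this rule, a module output that is entirely parallel to x leaves the stream (nearly) unchanged, while an output orthogonal to x is added in full, matching the abstract's intuition that the module is steered toward new representation directions.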