In this paper, we propose a novel architectural unit, the Learned Augmented Residual Layer (LAuReL), which generalizes the canonical residual connection. LAuReL is designed as a drop-in replacement for the residual connection that improves both model quality and efficiency. On the ResNet-50, ImageNet 1K task, it recovers 60% of the gains obtained by adding an extra layer while increasing the parameter count by only 0.003%, and it matches the extra layer's performance while adding 2.6x fewer parameters. When pre-training LLMs with 1 billion and 4 billion parameters, LAuReL improves performance on a range of downstream tasks by 2.54% to 20.05%, while adding only 0.012% and 0.1% extra parameters, respectively. These results show that LAuReL delivers gains on both vision and language models.
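The abstract does not spell out the layer's exact form, so the sketch below is a hedged illustration of one plausible learned-residual variant in PyTorch: the standard x + f(x) is replaced by alpha * f(x) + g(x), where alpha is a learned scalar and g is a learned low-rank map initialized to the identity. The class name, the rank parameter, and this specific parameterization are assumptions for illustration, not the paper's confirmed method.

```python
# Minimal sketch of a learned augmented residual connection (assumed form).
# Replaces the usual `x + f(x)` with `alpha * f(x) + g(x)`, where alpha is a
# learned scalar and g(x) = x + B(A(x)) is a learned low-rank perturbation of
# the identity. Names and parameterization are hypothetical.

import torch
import torch.nn as nn


class LearnedResidual(nn.Module):
    """Wraps a block f and computes `alpha * f(x) + g(x)` instead of `x + f(x)`."""

    def __init__(self, block: nn.Module, dim: int, rank: int = 4):
        super().__init__()
        self.block = block
        # Learned scalar weight on the block output; initialized to 1 so that
        # training starts from the standard residual connection.
        self.alpha = nn.Parameter(torch.ones(1))
        # Low-rank learned map g(x) = x + B(A(x)); B starts at zero so g is
        # initially the identity.
        self.A = nn.Linear(dim, rank, bias=False)
        self.B = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.B.weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g_x = x + self.B(self.A(x))  # learned generalization of the identity path
        return self.alpha * self.block(x) + g_x
```

Under these assumptions the overhead is only 2 * dim * rank weights plus one scalar per connection, negligible relative to the backbone, which is consistent in spirit with the sub-0.1% parameter overheads reported above.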