This paper proposes Orthogonal Finetuning v2 (OFTv2) to address the main practical limitation of Orthogonal Finetuning (OFT): its high runtime and memory requirements. OFTv2 reduces computational cost by restructuring OFT's core bottleneck, the weight-centric implementation that merges the orthogonal matrix into the weight, into an input-centric one that applies the transform to activations instead. It also introduces the Cayley-Neumann parameterization, an efficient way to parameterize approximately orthogonal matrices. Together, these changes yield up to 10x faster training and roughly 3x lower GPU memory usage. Furthermore, OFTv2 supports fine-tuning of quantized base models and outperforms QLoRA in that setting.
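The two ideas can be illustrated with a short sketch. The code below is not the authors' implementation; the function names (`cayley_neumann`, `oft_forward_input_centric`), the truncation depth, and the dimensions are illustrative assumptions. It shows (i) a Cayley-style orthogonal parameterization in which the exact matrix inverse is replaced by a truncated Neumann series, and (ii) reassociating the forward pass so the orthogonal matrix acts on activations rather than being merged into the frozen weight.

```python
# Minimal sketch of OFTv2's two ingredients (hypothetical names, not the authors' code).
import torch


def cayley_neumann(theta: torch.Tensor, num_terms: int = 5) -> torch.Tensor:
    """Approximate the Cayley transform R = (I + Q)(I - Q)^{-1}, Q skew-symmetric,
    by replacing the inverse with the truncated Neumann series I + Q + ... + Q^k."""
    Q = theta - theta.T                               # skew-symmetric generator
    I = torch.eye(Q.shape[0], dtype=Q.dtype, device=Q.device)
    inv_approx = I.clone()
    Q_power = I.clone()
    for _ in range(num_terms):
        Q_power = Q_power @ Q                         # Q, Q^2, ..., Q^k
        inv_approx = inv_approx + Q_power
    return (I + Q) @ inv_approx                       # approximately orthogonal R


def oft_forward_weight_centric(x, W0, R):
    # Weight-centric OFT: merge R into the frozen weight each step,
    # materializing a full d_out x d_in matrix.
    W = R @ W0
    return x @ W.T


def oft_forward_input_centric(x, W0, R):
    # Input-centric reordering: run the frozen layer first, then rotate the
    # activations, avoiding the d_out x d_in weight merge.
    return (x @ W0.T) @ R.T


if __name__ == "__main__":
    torch.manual_seed(0)
    d_out, d_in, batch = 64, 128, 4
    W0 = torch.randn(d_out, d_in)                     # frozen pretrained weight
    theta = 0.01 * torch.randn(d_out, d_out)          # trainable OFT parameters
    R = cayley_neumann(theta)
    x = torch.randn(batch, d_in)

    y_weight = oft_forward_weight_centric(x, W0, R)
    y_input = oft_forward_input_centric(x, W0, R)
    print(torch.allclose(y_weight, y_input, atol=1e-5))          # same output
    print((R @ R.T - torch.eye(d_out)).abs().max())              # near-orthogonality
```

The two forward functions are mathematically equivalent ((R W0)x = R(W0 x)); the difference is purely in cost, since the input-centric form never forms the merged weight, which is what the summary's speed and memory claims refer to.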