Low-Rank Adaptation (LoRA) is an efficient framework for fine-tuning large foundation models and is widely used for data-driven customization of LLMs. However, switching between LoRAs in a multi-turn setting is inefficient: the KV cache of the entire conversation history must be recomputed with the LoRA weights before generation can begin. To address this issue, this paper proposes Activated LoRA (aLoRA), an adapter architecture that adapts weights only for the tokens in the sequence after the aLoRA is invoked. This change allows aLoRA to reuse the base model's KV cache for the preceding input, so the adapter can be activated instantly at any point in a chain without recomputing earlier keys and values. This in turn enables the construction of specialized models, called "intrinsics," that are invoked to perform well-defined operations on segments of an input chain or conversation. By training aLoRA-based intrinsics models, we achieve accuracy competitive with standard LoRA while substantially improving inference efficiency. The aLoRA implementation has been contributed to the Hugging Face PEFT library.
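To make the core mechanism concrete, the minimal PyTorch sketch below shows one way the idea could be realized: the low-rank update is applied only to token positions at or after the invocation point, so earlier positions, and hence their cached keys and values, remain identical to the base model's. This is an illustrative sketch only; the class name `ALoRALinear` and the `activation_pos` argument are assumptions for exposition, not the actual PEFT API.

```python
import torch
import torch.nn as nn

class ALoRALinear(nn.Module):
    """Illustrative linear layer with an "activated" low-rank update.

    The LoRA delta (scaling * B @ A) is applied only to token positions
    at or after `activation_pos`; earlier positions pass through the
    frozen base weight unchanged, so their keys/values match the base
    model's KV cache and can be reused without recomputation.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # base weights stay frozen
        self.lora_A = nn.Linear(base.in_features, rank, bias=False)
        self.lora_B = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_B.weight)  # delta starts at zero
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor, activation_pos: int) -> torch.Tensor:
        # x: (batch, seq_len, in_features)
        out = self.base(x)
        delta = self.lora_B(self.lora_A(x)) * self.scaling
        # Mask is 0 before the activation point, 1 from it onward.
        seq_len = x.size(1)
        mask = (torch.arange(seq_len, device=x.device) >= activation_pos)
        return out + delta * mask.to(x.dtype).view(1, -1, 1)


# Toy usage: positions before `activation_pos` produce exactly the same
# projections as the frozen base layer, so their KV entries stay valid.
base = nn.Linear(64, 64)
layer = ALoRALinear(base, rank=4)
nn.init.normal_(layer.lora_B.weight, std=0.02)  # make the delta nonzero
x = torch.randn(2, 10, 64)
y = layer(x, activation_pos=6)
assert torch.allclose(y[:, :6], base(x)[:, :6])
```

Because the adapter leaves the pre-invocation hidden states untouched, the base model's KV cache for that prefix can be handed to the adapted model as-is, which is the source of the inference savings described above.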