This paper addresses the problem of recognizing specific keywords in automatic speech recognition (ASR) through context biasing. Existing context-biasing techniques have limitations: they may require additional model training, slow down decoding, or support only a narrow range of ASR model types. In this paper, we propose a general-purpose context-biasing framework that supports all major ASR model types, including CTC, Transducer, and Attention Encoder-Decoder models. Built on GPU-accelerated word boosting trees, the framework operates in shallow fusion mode and introduces no slowdown in greedy and beam search decoding, even with up to 20,000 keywords. Experimental results demonstrate that the proposed method outperforms existing open-source context-biasing techniques in both accuracy and decoding speed. The proposed context-biasing framework has been open-sourced as part of the NeMo toolkit.
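To make the core idea concrete, the sketch below illustrates shallow-fusion word boosting with a prefix tree (trie) over tokenized keywords: tokens that extend a keyword prefix receive an additive score bonus during decoding. This is a minimal, CPU-only illustration under assumed names (`BoostingTrie`, `bonus`, `greedy_decode_with_boosting`); it is not the paper's GPU-accelerated implementation or the NeMo API.

```python
# Minimal sketch of trie-based word boosting in shallow fusion.
# Illustrative only: class/function names and the per-token bonus value
# are assumptions, not the paper's GPU implementation or NeMo's API.
import numpy as np


class BoostingTrie:
    """Prefix tree over tokenized keywords; each matched token earns a bonus."""

    def __init__(self, keyword_token_ids, bonus=2.0):
        self.root = {}
        self.bonus = bonus
        for tokens in keyword_token_ids:
            node = self.root
            for t in tokens:
                node = node.setdefault(t, {})

    def score(self, state, token):
        """Return (score_delta, next_state) for extending `state` with `token`."""
        node = state if state is not None else self.root
        if token in node:
            return self.bonus, node[token]       # token continues a keyword prefix
        if token in self.root:
            return self.bonus, self.root[token]  # token starts a new keyword
        return 0.0, None                         # no active keyword context


def greedy_decode_with_boosting(log_probs, trie):
    """Greedy decoding where boosted tokens receive an additive score bonus."""
    state, hypothesis = None, []
    for frame in log_probs:                      # frame: [vocab] log-probabilities
        boosted = frame.copy()
        for tok in range(len(frame)):
            delta, _ = trie.score(state, tok)
            boosted[tok] += delta
        best = int(np.argmax(boosted))
        _, state = trie.score(state, best)
        hypothesis.append(best)
    return hypothesis
```

In practice, the per-frame scoring loop would be batched over all hypotheses and vocabulary entries on the GPU, which is what allows the approach described in the paper to scale to tens of thousands of keywords without slowing down decoding.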