In this paper, we propose LLMCert-B, the first framework for certifying counterfactual bias in large language models (LLMs). Existing studies fall short of thoroughly assessing the bias of LLM responses across demographic groups: they do not scale to large numbers of inputs and provide no formal guarantees. LLMCert-B produces a certificate consisting of a high-confidence interval for the probability of unbiased LLM responses over a distribution of counterfactual prompts, i.e., prompts that differ only in the demographic groups they mention. We demonstrate counterfactual bias certification for counterfactual prompt distributions generated by prepending prefixes, sampled from a prefix distribution, to a given set of prompts. We consider a prefix distribution consisting of a mixture of random token sequences, manual jailbreaks, and variations of jailbreaks in the LLM's embedding space. We generate non-trivial certificates for state-of-the-art LLMs while exposing their vulnerability to prompt distributions generated from computationally inexpensive prefix distributions.
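To make the shape of such a certificate concrete, the following is a minimal sketch, not the paper's implementation: it assumes hypothetical helpers `sample_prefix` (draws from the prefix mixture), `make_variants` (instantiates a prompt for each demographic group), `query_llm`, and `is_unbiased` (a binary check across the groups' responses), and uses an exact Clopper-Pearson interval as one standard choice of high-confidence bound on a binomial proportion.

```python
import random
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided interval for a binomial
    proportion k/n, at confidence level 1 - alpha."""
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

def certify(prompts, sample_prefix, make_variants, query_llm, is_unbiased,
            n_samples=200, alpha=0.05):
    """Sample counterfactual prompts from the distribution induced by the
    prefix mixture and return a high-confidence interval for the
    probability of unbiased LLM responses."""
    unbiased = 0
    for _ in range(n_samples):
        prompt = sample_prefix() + random.choice(prompts)  # draw from the mixture
        variants = make_variants(prompt)        # one prompt per demographic group
        responses = [query_llm(v) for v in variants]
        if is_unbiased(responses):              # e.g., responses agree across groups
            unbiased += 1
    return clopper_pearson(unbiased, n_samples, alpha)
```

Because the interval holds with probability at least 1 - alpha regardless of the LLM's internals, the guarantee is distribution-specific but model-agnostic: tightening it only requires more samples, not access to model weights.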