Despite efforts to align Large Language Models (LLMs) with human values and safety rules, jailbreak attacks that exploit model vulnerabilities persist. To defend against such attacks, this paper proposes Speculative Safety-Aware Decoding (SSD), a lightweight decode-time approach that equips the target model with additional safety properties. SSD leverages a small, safety-aligned language model and simultaneously accelerates inference: it integrates speculative sampling into the decoding process and quantifies jailbreak risk via the agreement ratio between the small model and the composite model. This signal lets SSD dynamically switch decoding strategies to prioritize either utility or safety, while also bridging the gap between the two models' capacities. Output tokens are then sampled from a new distribution that combines the distributions of the original and small models.
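The abstract does not give SSD's exact formulas, but the overall mechanism can be sketched with toy next-token distributions. In the sketch below, the agreement ratio is taken to be the expected speculative-sampling acceptance rate, sum over tokens of min(p_small, p_large), and the combined distribution is a simple linear mixture; the `threshold` and `alpha` hyperparameters and both formulas are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def agreement_ratio(p_small, p_large):
    # Expected acceptance rate of speculative sampling:
    # sum_t min(p_small(t), p_large(t)), which lies in [0, 1].
    # Low agreement is used here as a proxy for jailbreak risk.
    return float(np.minimum(p_small, p_large).sum())

def ssd_next_token(p_large, p_small, threshold=0.5, alpha=0.5, rng=None):
    """One illustrative SSD-style decoding step.

    p_large: next-token distribution of the original (target) model
    p_small: next-token distribution of the small safety-aligned model
    threshold, alpha: hypothetical hyperparameters for risk switching
    """
    rng = rng or np.random.default_rng(0)
    ratio = agreement_ratio(p_small, p_large)
    if ratio >= threshold:
        # High agreement: benign input, keep the target model's
        # distribution to preserve utility (and speculative speedup).
        p_out = p_large
    else:
        # Low agreement: possible jailbreak, so sample from a
        # combined distribution that pulls toward the safe small model.
        p_out = alpha * p_small + (1 - alpha) * p_large
        p_out = p_out / p_out.sum()
    token = int(rng.choice(len(p_out), p=p_out))
    return token, ratio, p_out

# Toy example: the two models disagree strongly, so the mixture is used.
p_large = np.array([0.7, 0.2, 0.1])
p_small = np.array([0.1, 0.2, 0.7])
token, ratio, p_out = ssd_next_token(p_large, p_small)
```

In a real implementation the small model would also serve as the draft model for speculative sampling, so the same forward passes that accelerate decoding yield the agreement signal at no extra cost.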