Large Language Models (LLMs) offer innovative capabilities for hardware design automation, including Verilog code generation. However, they also pose significant data security challenges, such as Verilog evaluation data contamination, intellectual property (IP) design leakage, and the risk of malicious Verilog generation. This paper introduces SALAD, a comprehensive evaluation that leverages machine unlearning to mitigate these threats. SALAD enables the selective removal of contaminated benchmarks, sensitive IP and design artifacts, or malicious code patterns from pre-trained LLMs without requiring full retraining.
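The core idea of selective removal without full retraining can be illustrated with a minimal gradient-ascent unlearning sketch. This is an illustrative toy (a logistic-regression stand-in, not SALAD's actual procedure or model): the loss on a designated forget set is ascended while the loss on retained data is descended, raising the model's error on the forgotten examples while preserving utility elsewhere.

```python
import numpy as np

# Toy sketch of gradient-ascent machine unlearning (illustrative only;
# all names and hyperparameters here are assumptions, not SALAD's).
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y):
    # Gradient of mean binary cross-entropy w.r.t. weights.
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def loss(w, X, y):
    p = sigmoid(X @ w)
    return float(-np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9)))

# Retain set vs. forget set (e.g. contaminated benchmark items).
X_retain = rng.normal(size=(64, 4)); y_retain = (X_retain[:, 0] > 0).astype(float)
X_forget = rng.normal(size=(16, 4)); y_forget = (X_forget[:, 1] > 0).astype(float)

# Pre-train jointly on all data.
w = np.zeros(4)
X_all = np.vstack([X_retain, X_forget]); y_all = np.concatenate([y_retain, y_forget])
for _ in range(200):
    w -= 0.5 * grad(w, X_all, y_all)

loss_before = loss(w, X_forget, y_forget)
# Unlearning: ascend the forget-set loss, descend the retain-set loss.
for _ in range(50):
    w += 0.1 * grad(w, X_forget, y_forget)   # gradient ascent on forget set
    w -= 0.1 * grad(w, X_retain, y_retain)   # preserve utility on retained data
loss_after = loss(w, X_forget, y_forget)

print(loss_after > loss_before)  # forget-set loss should increase
```

The same ascend/descend pattern underlies many LLM unlearning methods, applied to token-level losses over forget and retain corpora rather than a toy classifier.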