This paper addresses data security challenges in hardware design automation with Large Language Models (LLMs), focusing on Verilog code generation. LLM-based Verilog generation poses serious data security risks, including contamination of Verilog evaluation benchmarks, leakage of intellectual property (IP) designs, and generation of malicious Verilog code. In response, this paper presents SALAD, a comprehensive approach that mitigates these threats through machine unlearning. SALAD selectively removes contaminated benchmark data, sensitive IP and design artifacts, and malicious code patterns from pre-trained LLMs without full retraining. Through a detailed case study, the paper demonstrates how machine unlearning effectively mitigates the data security risks inherent in LLM-aided hardware design.
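To make the idea of unlearning without full retraining concrete, the sketch below illustrates one common family of techniques, gradient-ascent unlearning, on a toy logistic model: the model first learns from all data (including one "contaminated" sample), then ascends the loss on the forget set while continuing to descend on retained data. This is a hypothetical illustration, not SALAD's actual procedure; the data, model, and hyperparameters are all invented for demonstration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, b, x, y):
    # binary cross-entropy for a 1-D logistic model
    p = min(max(sigmoid(w * x + b), 1e-9), 1 - 1e-9)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def sgd_epoch(w, b, data, lr, sign):
    # sign = +1 descends the loss (learn); sign = -1 ascends it (unlearn)
    for x, y in data:
        p = sigmoid(w * x + b)
        w -= sign * lr * (p - y) * x
        b -= sign * lr * (p - y)
    return w, b

retain = [(1.0, 1), (2.0, 1), (-1.0, 0), (-2.0, 0)]  # clean training data
forget = [(-1.5, 1)]  # a "contaminated" sample to be unlearned

# ordinary training on everything, contamination included
w, b = 0.0, 0.0
for _ in range(200):
    w, b = sgd_epoch(w, b, retain + forget, 0.1, +1)

before = sum(loss(w, b, x, y) for x, y in forget)

# unlearning: ascend on the forget set, keep descending on retained data
for _ in range(20):
    w, b = sgd_epoch(w, b, forget, 0.1, -1)
    w, b = sgd_epoch(w, b, retain, 0.1, +1)

after = sum(loss(w, b, x, y) for x, y in forget)
print(after > before)  # forget-set loss rises: the sample's influence is removed
```

The same principle scales to LLMs, where the forget set would contain leaked benchmark items, proprietary IP, or malicious code patterns, and the retain objective preserves general code-generation ability.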