Cybersecurity activities are essential in safety-critical software systems, and risk assessment is among the most important of them. However, many software teams have few or no cybersecurity experts. This increases the workload of the available experts and forces software engineers to perform cybersecurity activities themselves. Tools are therefore needed to support both cybersecurity experts and engineers in assessing vulnerabilities and threats during risk assessment. This paper explores the potential of locally hosted large language models (LLMs) with retrieval-augmented generation to support cybersecurity risk assessment in the forestry domain while complying with data protection and privacy requirements that limit external data sharing. A design science study comprising interviews, interactive sessions, and a survey was conducted with 12 experts within a large-scale project. The results show that LLMs can support cybersecurity experts in generating initial risk assessments, identifying threats, and checking for redundancy. They also highlight the need for human oversight to ensure accuracy and compliance. Despite trust concerns, experts expressed a willingness to use LLMs in specific assessment and support roles rather than relying solely on their generative capabilities. This study provides insights that encourage the use of LLM-based agents to support the risk assessment process for cyber-physical systems in safety-critical domains.