This paper investigates the worst-case risks of releasing the open-weight GPT model (gpt-oss). To elicit gpt-oss's maximum capabilities in the biological and cybersecurity domains, we apply malicious fine-tuning (MFT). To maximize biological risk, we curated tasks related to threat creation and trained gpt-oss with reinforcement learning in an environment with web browsing. To maximize cybersecurity risk, we trained gpt-oss in an agentic coding environment to solve capture-the-flag (CTF) challenges. We then compared the MFT models against other open- and closed-weight large language models. Relative to closed-weight models, MFT gpt-oss underperformed OpenAI o3, a model that itself falls below the Preparedness High capability threshold, on both biological and cybersecurity evaluations. Relative to open-weight models, MFT gpt-oss marginally increased biological capabilities, but not to a significant degree. These results informed our model release decision, and we hope the MFT approach provides useful guidance for assessing the risks of releasing future open-weight models.