Access to accurate, actionable harm reduction information can directly improve health outcomes for people who use drugs (PWUD), yet existing online channels often fail to meet their diverse and dynamic needs due to limited adaptability, poor accessibility, and pervasive stigma. Large language models (LLMs) offer new opportunities to enhance information delivery, but their application in this high-stakes domain remains underexplored and presents distinct sociotechnical challenges. This study investigates how LLMs can responsibly support the information needs of PWUD. Through qualitative workshops with a diverse group of stakeholders, including academics, harm reduction practitioners, and online community moderators, we explore the capabilities of LLMs, identify prospective use cases, and surface key design considerations. Our findings show that while LLMs can address existing information barriers by providing responsive, multilingual, and potentially less stigmatizing interactions, their effectiveness depends on overcoming challenges related to ethical alignment with harm reduction principles, nuanced contextual understanding, effective communication, and clearly defined operational boundaries. We propose a design path that emphasizes collaborative co-design with experts and PWUD to develop LLM systems that are helpful, safe, and responsibly governed. This work provides empirically grounded insights and actionable design considerations for responsibly developing LLMs as a support tool within the harm reduction ecosystem.