This paper proposes a deep learning-based breath sound event detection method that addresses the subjectivity and inter-expert variability of auscultation, a practice crucial to the early diagnosis of respiratory diseases. Existing methods are limited by fixed-length audio processing, inaccurate temporal localization caused by frame-by-frame prediction, and insufficient use of breath sound location information. To overcome these limitations, we present a graph neural network-based framework built on anchor intervals, which supports variable-length audio and accurately localizes abnormal breath sound events in time. Experiments on the SPRSound 2024 and HF Lung V1 datasets demonstrate the effectiveness of the proposed method and the importance of exploiting breath location information. A reference implementation is available on GitHub.