This paper analyzes the performance changes that arise when deploying a pre-trained, large-scale audio neural network on resource-constrained devices such as the Raspberry Pi. We experimentally study the impact of CPU temperature, microphone quality, and audio signal volume on performance, showing that the heat generated by sustained CPU usage triggers the Raspberry Pi's automatic thermal throttling mechanism, which in turn increases inference latency. Furthermore, we demonstrate that the microphone quality and audio signal volume of inexpensive devices such as the Google AIY Voice Kit affect system performance. We also report significant challenges with library compatibility and the Raspberry Pi's ARM processor architecture, which make deployment less straightforward than on a standard desktop computer (PC). These observations can help researchers build more compact machine learning models, design hardware with better heat dissipation, and select appropriate microphones when deploying AI models on edge devices for real-time applications.
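
As a rough illustration of how the thermal effect can be observed in practice, the following sketch (not taken from the paper) logs the SoC temperature, the firmware's throttling flags, and per-inference latency on Raspberry Pi OS. The `run_inference` callable is a hypothetical placeholder for one forward pass of the deployed audio model.

```python
# Minimal monitoring sketch, assuming Raspberry Pi OS with vcgencmd available.
import subprocess
import time


def cpu_temperature_c():
    # Raspberry Pi OS exposes the SoC temperature in millidegrees Celsius.
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read().strip()) / 1000.0


def throttled_flags():
    # `vcgencmd get_throttled` returns a bitmask, e.g. "throttled=0x50000";
    # a non-zero value indicates under-voltage or thermal throttling events.
    out = subprocess.run(["vcgencmd", "get_throttled"],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip().split("=")[1], 16)


def profile(run_inference, n_runs=100):
    # Log temperature, throttling state, and latency for each inference run
    # so that slowdowns under sustained load become visible over time.
    for i in range(n_runs):
        start = time.perf_counter()
        run_inference()  # placeholder: one forward pass of the audio model
        latency_ms = (time.perf_counter() - start) * 1000.0
        print(f"run {i:3d}  temp={cpu_temperature_c():.1f}C  "
              f"throttled=0x{throttled_flags():x}  latency={latency_ms:.1f}ms")
```

Plotting the logged temperature against latency over a long run is one simple way to see whether throttling, rather than the model itself, accounts for latency growth.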