This paper highlights the importance of tuning memory frequency, alongside processor frequency, to address the high latency and energy consumption of deep neural network (DNN) inference in resource-constrained environments. Using model- and data-driven methods, we investigate how co-tuning memory and compute frequencies affects inference time and energy consumption. We also analyze the effectiveness of the co-tuning models obtained by combining the fitted parameters of various DNN models. Finally, we verify through simulation results for local and collaborative inference that co-tuning memory and compute frequencies reduces energy consumption.
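To make the co-tuning idea concrete before the detailed treatment in later sections, the sketch below grid-searches memory/compute frequency pairs against a simple fitted latency/energy model, picking the lowest-energy pair that meets a latency budget. All model forms, coefficients, frequency levels, and function names here are hypothetical placeholders for illustration, not the paper's actual fits.

```python
# Illustrative sketch only: co-tuning memory and compute frequencies by
# exhaustive search over a fitted latency/energy model. Every model form
# and coefficient below is a hypothetical placeholder, not the paper's fit.
import itertools

# Candidate frequency levels (MHz) -- hypothetical DVFS steps.
MEM_FREQS = [800, 1600, 2133, 3200]
CPU_FREQS = [600, 1000, 1400, 1800]

def latency_ms(f_mem, f_cpu, a=4.0e5, b=9.0e5, c=2.0):
    # Assumed model: latency splits into a memory-bound term and a
    # compute-bound term, each inversely proportional to its frequency.
    return a / f_mem + b / f_cpu + c

def energy_mj(f_mem, f_cpu, alpha=1e-6, beta=2e-6, static_mw=50.0):
    # Assumed model: dynamic power grows quadratically with frequency
    # (a common DVFS approximation), plus static power over the runtime.
    t = latency_ms(f_mem, f_cpu)
    dyn_mw = alpha * f_mem**2 + beta * f_cpu**2
    return (dyn_mw + static_mw) * t / 1000.0  # mW * ms -> mJ

def co_tune(latency_budget_ms):
    """Return the (f_mem, f_cpu) pair minimizing predicted energy
    while meeting the latency budget, or None if infeasible."""
    feasible = [
        (f_m, f_c)
        for f_m, f_c in itertools.product(MEM_FREQS, CPU_FREQS)
        if latency_ms(f_m, f_c) <= latency_budget_ms
    ]
    if not feasible:
        return None
    return min(feasible, key=lambda pair: energy_mj(*pair))

if __name__ == "__main__":
    best = co_tune(latency_budget_ms=1500.0)
    if best:
        f_m, f_c = best
        print(f"mem={f_m} MHz, cpu={f_c} MHz, "
              f"latency={latency_ms(f_m, f_c):.1f} ms, "
              f"energy={energy_mj(f_m, f_c):.1f} mJ")
```

Even this toy version shows why the two knobs must be tuned jointly: the energy-optimal pair depends on whether a given DNN's latency is dominated by the memory-bound or the compute-bound term, which is exactly what the fitted per-model parameters capture.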