This paper proposes MGDebugger, a multi-granularity debugger that addresses the limitations of code generated by large language models (LLMs). MGDebugger isolates, identifies, and resolves bugs in generated code at multiple levels of granularity, ranging from low-level syntax errors to high-level algorithmic flaws. It decomposes problematic code into a hierarchical tree of subfunctions, where each level targets errors at a specific granularity. Using an LLM-based Python executor, it traces the execution of each subfunction and monitors variable states to pinpoint errors accurately. Subfunction-level testing combined with bottom-up, iterative bug resolution improves both accuracy and efficiency. Experimental results on the HumanEval and HumanEvalFix datasets demonstrate superior performance compared to existing debugging systems.