This paper addresses the limitations of Code LLMs in inferring runtime behavior and understanding program functionality. Code LLMs struggle to reason about program execution, and semantic information such as execution traces is often represented inconsistently and in fragmented form. To address these challenges, we present a general framework that integrates semantic information (e.g., execution traces) into prompts for code-related tasks, and we comprehensively study its impact on the inference performance of Code LLMs. Specifically, we investigate the effect of trace-based semantic information at both the supervised fine-tuning (SFT) and inference stages. Contrary to previous studies, our experimental results show that semantic information provides limited benefit to Code LLMs during both SFT and test-time inference.