This paper proposes SKA-Bench, a novel benchmark for evaluating the structured knowledge (SK) understanding of large language models (LLMs). SKA-Bench covers four types of SK, namely knowledge graphs (KGs), tables, KG+text, and table+text, and uses a three-stage construction pipeline to produce instances that each consist of a question, its answer, positive knowledge units, and noisy knowledge units. To evaluate LLMs' SK understanding at a finer granularity, these instances are further expanded into four fundamental ability testbeds: noise robustness, order insensitivity, information integration, and negative rejection. Experiments on eight representative LLMs show that existing LLMs still struggle with structured knowledge understanding, and that their performance is affected by factors such as the amount of noise, the order of knowledge units, and hallucinations. The dataset and code are available on GitHub.
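For illustration only, the sketch below shows one plausible way a SKA-Bench-style instance (question, answer, positive knowledge units, noisy knowledge units) could be represented and mixed into an LLM context; the class and field names are assumptions for this example, not the benchmark's actual release format.

```python
from dataclasses import dataclass, field


@dataclass
class SKAInstance:
    """Hypothetical sketch of a SKA-Bench-style instance (field names are illustrative)."""
    question: str
    answer: str
    positive_units: list[str] = field(default_factory=list)  # units needed to answer
    noisy_units: list[str] = field(default_factory=list)     # irrelevant/distracting units


# Toy KG-style example with triples serialized as text (invented data, not from the benchmark).
example = SKAInstance(
    question="Which country hosted the 2016 Summer Olympics?",
    answer="Brazil",
    positive_units=["(2016 Summer Olympics, host_country, Brazil)"],
    noisy_units=["(2012 Summer Olympics, host_city, London)"],
)

# A noise-robustness-style probe could vary how many noisy units are mixed
# into the context before the question is posed to an LLM.
context = example.positive_units + example.noisy_units[:1]
print(context)
```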