This paper proposes SKA-Bench, a novel benchmark for evaluating the structured knowledge (SK) understanding ability of large language models (LLMs). SKA-Bench covers four types of SK: knowledge graphs (KGs), tables, KG+text, and table+text; each instance consists of a question, an answer, positive knowledge units, and negative knowledge units. To evaluate the SK understanding ability of LLMs precisely, we assess four aspects: noise robustness, order sensitivity, information integration, and negative information rejection. Experiments on eight representative LLMs reveal that existing LLMs still struggle with SK understanding and that their performance is affected by factors such as the amount of noise, the order of knowledge units, and hallucination. The dataset and code are available on GitHub.