This paper presents a system for autonomous semantic exploration and dense semantic target mapping in complex, unknown environments using a ground robot equipped with a LiDAR-panoramic camera suite. Existing approaches often struggle to collect high-quality observations from multiple viewpoints while avoiding unnecessary repetitive movements. To address these challenges, we propose a complete system that combines mapping and planning. First, we redefine the task as completing both geometric coverage and semantic multi-view observation, manage semantic and geometric viewpoints separately, and propose a novel priority-based separable region sampler that enables explicit multi-view semantic inspection and voxel coverage without redundant revisits. Building on this, we develop a hierarchical planner that ensures efficient global coverage, together with a safe-aggressive exploration state machine that permits aggressive exploration behavior while preserving robot safety. Furthermore, we provide a plug-and-play semantic target mapping module that integrates seamlessly with state-of-the-art SLAM algorithms. We validate our approach in realistic simulations and through extensive experiments in complex real-world environments. Simulation results show that the planner achieves faster exploration and shorter travel distances while guaranteeing a specified number of multi-view inspections. Real-world experiments further verify that the system produces accurate, dense semantic target maps in unstructured environments.