This paper addresses the problem of interpreting brain activity as visual representations. We highlight the limitations of existing EEG visual decoding methods, which we trace to the Hierarchical Neural Encoding Neglect (HNEN) problem, and propose ViEEG, a novel framework inspired by the hierarchical organization of the visual cortex. ViEEG decomposes each visual stimulus into three biologically aligned components (contours, foreground objects, and background scenes) and encodes the EEG signal with a triple-stream encoder, one stream per component. Cross-attention routing mimics the flow of information from low-level to high-level visual areas, and hierarchical contrastive learning aligns the EEG representations with CLIP embeddings to enable zero-shot object recognition. Experiments on the THINGS-EEG and THINGS-MEG datasets show that ViEEG substantially outperforms existing methods, suggesting a new paradigm for EEG-based brain decoding.
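To make the triple-stream design and the hierarchical alignment objective concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the EEG channel/time shapes (63 x 250), embedding sizes, module names, and the InfoNCE-style loss are assumptions for illustration, and the actual ViEEG encoder, routing, and loss details may differ.

```python
# Minimal sketch (hypothetical shapes and names): three EEG streams, cross-attention
# routing from low- to high-level streams, and a per-level contrastive loss against
# CLIP embeddings of the corresponding image components.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StreamEncoder(nn.Module):
    """One EEG stream: temporal then spatial convolution, pooled and projected to a vector."""
    def __init__(self, channels=63, timepoints=250, dim=256):
        super().__init__()
        self.temporal = nn.Conv2d(1, 32, kernel_size=(1, 25), padding=(0, 12))
        self.spatial = nn.Conv2d(32, 32, kernel_size=(channels, 1))
        self.pool = nn.AvgPool2d((1, 5))
        self.proj = nn.Linear(32 * (timepoints // 5), dim)

    def forward(self, eeg):                        # eeg: (B, channels, timepoints)
        x = self.temporal(eeg.unsqueeze(1))        # (B, 32, C, T)
        x = self.spatial(x)                        # (B, 32, 1, T)
        x = self.pool(x).flatten(1)                # (B, 32 * T // 5)
        return self.proj(x)                        # (B, dim)


class TripleStreamEEG(nn.Module):
    """Contour, object, and scene streams with low-to-high cross-attention routing."""
    def __init__(self, dim=256, clip_dim=1024):
        super().__init__()
        self.contour = StreamEncoder(dim=dim)
        self.object = StreamEncoder(dim=dim)
        self.scene = StreamEncoder(dim=dim)
        self.obj_from_contour = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.scene_from_object = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.heads = nn.ModuleList([nn.Linear(dim, clip_dim) for _ in range(3)])

    def forward(self, eeg):
        h_c = self.contour(eeg).unsqueeze(1)       # (B, 1, dim)
        h_o = self.object(eeg).unsqueeze(1)
        h_s = self.scene(eeg).unsqueeze(1)
        # Route contour (low-level) features into the object stream,
        # then object features into the scene stream.
        h_o, _ = self.obj_from_contour(h_o, h_c, h_c)
        h_s, _ = self.scene_from_object(h_s, h_o, h_o)
        return [head(h.squeeze(1)) for head, h in zip(self.heads, (h_c, h_o, h_s))]


def hierarchical_clip_loss(eeg_embs, clip_embs, tau=0.07):
    """Sum of InfoNCE losses, one per level, aligning each EEG embedding with the
    CLIP embedding of the matching image component (contour / object / scene)."""
    loss = 0.0
    for z, c in zip(eeg_embs, clip_embs):
        z, c = F.normalize(z, dim=-1), F.normalize(c, dim=-1)
        logits = z @ c.t() / tau                   # (B, B) similarity matrix
        labels = torch.arange(z.size(0), device=z.device)
        loss = loss + F.cross_entropy(logits, labels)
    return loss
```

In this sketch, each training batch pairs an EEG trial with three precomputed CLIP image embeddings (contour map, foreground crop, full scene of the stimulus); zero-shot recognition would then rank candidate images by cosine similarity to the scene-level EEG embedding. This pairing and inference procedure is likewise an assumption for illustration.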