EEG2TEXT-CN is one of the first open-vocabulary EEG-to-text generation frameworks for Chinese. Built on a biologically grounded EEG encoder (NICE-EEG) and a compact pre-trained language model (MiniLM), it aligns multi-channel brain signals with natural language representations through masked pre-training and contrastive learning. Using a subset of the ChineseEEG dataset, in which each sentence spans approximately ten Chinese characters and is paired with 128-channel EEG recorded at 256 Hz, the framework segments EEG into character-wise embeddings and predicts whole sentences in a zero-shot setting. The decoder is trained with teacher forcing and padding masks to handle variable-length sequences. Evaluation on over 1,500 training-validation sentences and 300 held-out test samples shows promising lexical alignment, with a maximum BLEU-1 score of 6.38%. Although syntactic fluency remains a challenge, this study demonstrates the feasibility of non-phonetic, cross-modal language decoding from EEG. It opens new directions for multilingual brain-to-text research and lays the foundation for future Chinese-based cognitive-language interfaces.
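
To illustrate the character-level alignment step described above, the following PyTorch sketch pairs per-character EEG segments with text-side character embeddings via a symmetric InfoNCE-style contrastive loss. It is an assumption-laden illustration, not the authors' released code: the convolutional stand-in for the NICE-EEG encoder, the 384-dimensional text space (typical of MiniLM variants), and the temperature of 0.07 are all hypothetical choices.

```python
# Hypothetical sketch of character-level EEG-text contrastive alignment.
# The encoder architecture, embedding dimension, and temperature are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EEGCharProjector(nn.Module):
    """Projects per-character EEG segments into the text embedding space."""

    def __init__(self, n_channels=128, d_text=384):
        super().__init__()
        # Stand-in for the NICE-EEG encoder: a small conv + linear head.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),
            nn.GELU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(64, d_text),
        )

    def forward(self, eeg):  # eeg: (batch, n_channels, n_samples)
        return self.encoder(eeg)


def contrastive_alignment_loss(eeg_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss between EEG and text character embeddings."""
    eeg_emb = F.normalize(eeg_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = eeg_emb @ text_emb.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(logits.size(0))          # matched pairs lie on the diagonal
    loss_e2t = F.cross_entropy(logits, targets)
    loss_t2e = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_e2t + loss_t2e)


if __name__ == "__main__":
    torch.manual_seed(0)
    # Dummy batch: 8 character-level EEG segments (128 channels, 1 s at 256 Hz).
    eeg = torch.randn(8, 128, 256)
    # Placeholder for MiniLM character embeddings (384-dim assumed).
    text_emb = torch.randn(8, 384)

    projector = EEGCharProjector()
    loss = contrastive_alignment_loss(projector(eeg), text_emb)
    print(f"contrastive loss: {loss.item():.4f}")
```

In a full pipeline, the placeholder text embeddings would come from the frozen or fine-tuned MiniLM, and the aligned EEG embeddings would then feed the decoder, which is trained with teacher forcing and padding masks as noted above.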