This paper presents two Knowledge Poisoning Attacks (KPAs) that exploit vulnerabilities in Graph-based Retrieval-Augmented Generation (GraphRAG), a framework that transforms raw text into a structured knowledge graph to improve the accuracy and explainability of LLMs. We investigate whether the knowledge extraction process, in which an LLM builds the graph from raw text, can be maliciously manipulated. The two proposed attacks are the Targeted KPA (TKPA) and the Universal KPA (UKPA). TKPA uses graph-theoretic analysis to identify vulnerable nodes in the generated graph and rewrites their descriptions in the source text, precisely controlling the answers to specific question-answering (QA) queries. UKPA exploits linguistic cues, such as pronouns and dependency relations, to alter globally influential words, thereby destroying the structural integrity of the generated graph. Experimental results demonstrate that even small text modifications can significantly degrade GraphRAG's QA accuracy, and that existing defense techniques fail to detect these attacks.