This paper highlights a limitation of existing spatial reasoning approaches: they fail to consider object orientation, a crucial factor in fine-grained 6-DOF manipulation. Existing pose-representation methods rely on predefined reference frames or templates, which limits their generalization and leaves them without a semantic grounding. To address this, we propose the concept of "semantic orientation," which defines object orientations in natural language without any reference frame (e.g., the "plug-in" orientation of a USB, the "handle" orientation of a cup). We build OrienText300K, a large-scale dataset of 3D objects annotated with semantic orientations, and develop a general model, PointSO, for zero-shot semantic orientation prediction. We then present the SoFar framework, which integrates semantic orientation into a vision-language model (VLM) agent to enable 6-DOF spatial reasoning and generate robot motions. Experimental results demonstrate the effectiveness and generalization of SoFar, which achieves zero-shot success rates of 48.7% on Open6DOR and 74.9% on SIMPLER-Env.
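
To make the notion of semantic orientation concrete, the minimal sketch below illustrates the prediction interface implied above: an object point cloud plus an open-vocabulary prompt map to a single unit direction vector, with no canonical object frame involved. The function name `predict_semantic_orientation` and the centroid-to-farthest-point heuristic are illustrative assumptions for exposition only, not the actual PointSO model.

```python
import numpy as np

def predict_semantic_orientation(points: np.ndarray, prompt: str) -> np.ndarray:
    """Toy stand-in for a semantic-orientation predictor.

    points : (N, 3) object point cloud in the observation frame.
    prompt : open-vocabulary description, e.g. "the handle of the cup".
    Returns a unit 3D direction vector; a learned model such as PointSO
    would regress this from geometric and language features.
    """
    # Placeholder heuristic: point from the centroid toward the farthest
    # surface point, purely to illustrate the input/output contract.
    centroid = points.mean(axis=0)
    farthest = points[np.argmax(np.linalg.norm(points - centroid, axis=1))]
    direction = farthest - centroid
    return direction / np.linalg.norm(direction)


if __name__ == "__main__":
    cloud = np.random.rand(2048, 3)  # stand-in point cloud
    d = predict_semantic_orientation(cloud, "the 'plug-in' direction of the USB")
    print("semantic orientation:", d)  # unit vector, no reference frame required
```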