This paper addresses the occlusion problem in face filters, where hands, hair, accessories, and other elements obscure the face and degrade filter quality. To this end, we propose FaceMat, a novel face matting framework that estimates a high-resolution alpha matte separating occluding elements from the face and that accounts for uncertainty without requiring trimaps. FaceMat is trained with a teacher-student pipeline: the teacher model predicts both an alpha matte and pixel-wise uncertainty, and this uncertainty is then used to guide the student model in a spatially adaptive manner. Unlike existing methods, FaceMat operates without auxiliary inputs such as trimaps or segmentation masks, and it improves the data-synthesis strategy by clearly assigning skin to the foreground and occlusions to the background. Furthermore, we construct CelebAMat, a new large-scale synthetic dataset, and demonstrate that our approach outperforms existing state-of-the-art methods across various benchmarks. The source code and the CelebAMat dataset are publicly available.
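To make the uncertainty-guided distillation concrete, one plausible formulation is sketched below; the heteroscedastic teacher objective and the exponential down-weighting are illustrative assumptions, not necessarily the exact losses used by FaceMat. Here \(\alpha_i\) denotes the ground-truth alpha at pixel \(i\), \(\hat{\alpha}_i^{t}\) and \(\hat{\alpha}_i^{s}\) the teacher and student predictions, and \(\sigma_i\) the teacher's predicted per-pixel uncertainty. The teacher could jointly regress alpha and uncertainty in the style of aleatoric-uncertainty learning,
\[
\mathcal{L}_{\mathrm{teacher}} = \frac{1}{N} \sum_{i=1}^{N} \left( \frac{\lvert \alpha_i - \hat{\alpha}_i^{t} \rvert}{\sigma_i} + \log \sigma_i \right),
\]
while the student's supervision would then be down-weighted wherever the teacher is uncertain,
\[
\mathcal{L}_{\mathrm{student}} = \frac{1}{N} \sum_{i=1}^{N} \exp(-\sigma_i) \, \lvert \alpha_i - \hat{\alpha}_i^{s} \rvert.
\]
Under this sketch, ambiguous regions (e.g., wispy hair over skin) contribute less to the student loss, which is one way a spatially adaptive guidance signal could be realized.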