This paper addresses the problem of point cloud registration, which is crucial for ensuring consistent 3D alignment of multiple local point clouds in 3D reconstruction applications such as remote sensing and digital heritage. Existing learning-based and non-learning-based methods ignore point orientation and point uncertainty, making them vulnerable to noisy inputs and aggressive rotations such as arbitrary orthogonal transformations; consequently, they require extensive training point clouds, including translation augmentation. To address these issues, this paper proposes a surfel-based pose regression learning approach. The proposed method initializes surfels using virtual perspective camera parameters derived from LiDAR point clouds and learns explicit SE(3)-equivariant features that capture both position and rotation via SE(3)-equivariant convolution kernels to predict the relative transformation between source and target scans. The model consists of an equivariant convolution encoder, a cross-attention mechanism for similarity computation, a fully connected decoder, and a nonlinear Huber loss. Experimental results on indoor and outdoor datasets demonstrate the superiority of the proposed model over state-of-the-art methods and its robustness on real point cloud scans.
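As a minimal sketch (not the authors' implementation), the nonlinear Huber loss mentioned above can be written as follows; the residual values are hypothetical and serve only to illustrate its behavior of penalizing small errors quadratically and large errors linearly, which makes training robust to noisy scans:

```python
import numpy as np

def huber(residual, delta=1.0):
    """Huber loss: quadratic for |r| <= delta, linear beyond it.

    delta is an assumed threshold hyperparameter; the paper's
    actual value is not specified here.
    """
    r = np.abs(residual)
    return np.where(r <= delta,
                    0.5 * r ** 2,              # quadratic region
                    delta * (r - 0.5 * delta))  # linear region

# Hypothetical pose-parameter residuals (predicted minus ground truth):
residuals = np.array([0.05, -0.2, 3.0])
loss = huber(residuals).sum()
```

The linear tail limits the influence of large residuals, so a few badly matched regions do not dominate the regression of the relative transformation.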