This paper presents a novel architecture that directly processes gradients, motivated by the central role gradients play in neural network optimization, editing, and analysis. Specifically, we introduce GradMetaNet, built on three principles: (1) an equivariant design that preserves neuron permutation symmetries, (2) processing gradients from multiple data points to capture curvature information, and (3) an efficient gradient representation via rank-1 decomposition. GradMetaNet is composed of simple equivariant blocks; we establish universality results for it and show that existing approaches cannot approximate the natural gradient-based functions that GradMetaNet can express. Finally, we demonstrate GradMetaNet's effectiveness on a diverse set of gradient-based tasks for MLPs and Transformers, including learned optimization, INR editing, and loss landscape curvature estimation.
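To make principle (3) concrete, below is a minimal PyTorch sketch, not taken from the paper, of the rank-1 structure that such a decomposition exploits: for a linear layer y = W x with a scalar per-example loss L, the weight gradient ∂L/∂W factors as the outer product (∂L/∂y) xᵀ, so each per-example gradient can be stored as two vectors instead of a dense matrix. All names and sizes here are illustrative.

```python
import torch

torch.manual_seed(0)
d_in, d_out = 4, 3
W = torch.randn(d_out, d_in, requires_grad=True)

x = torch.randn(d_in)              # a single data point
y = W @ x                          # linear layer: y = W x
loss = y.pow(2).sum()              # any scalar loss
loss.backward()

g_y = 2 * y.detach()               # dL/dy for this particular loss
rank1 = torch.outer(g_y, x)        # rank-1 factorization: (dL/dy) x^T

# The full (d_out x d_in) gradient matrix equals its two-vector
# factorization, so storing (g_y, x) suffices instead of the dense matrix.
assert torch.allclose(W.grad, rank1, atol=1e-6)
print(torch.linalg.matrix_rank(W.grad).item())  # -> 1
```

For a batch of n data points this yields n such rank-1 pairs per layer, which is the kind of multi-gradient input that principles (2) and (3) combine.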