This paper presents evidence that eigenvalue analysis of the empirical neural tangent kernel (eNTK) can identify the features used by trained neural networks. Using two standard toy models from mechanistic interpretability—the toy model of superposition (TMS) and a one-layer multi-layer perceptron (MLP) trained on modular addition—we find that the eNTK exhibits a sharp spectral cliff whose top eigenspace matches the ground-truth features. In TMS, the eNTK recovers ground-truth features in both the sparse (superposed) and dense regimes. In modular arithmetic, the eNTK recovers the Fourier feature families. Furthermore, we provide evidence that layer-by-layer eNTK analysis localizes features to specific layers, and that the evolution of the eNTK spectrum over training can diagnose the grokking phase transition. These results suggest that eNTK analysis can provide a practical tool for feature discovery and phase-transition detection in small models.
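The core quantity above can be illustrated concretely. The following is a minimal sketch (not the paper's actual implementation) of how one might form the eNTK Gram matrix for a tiny one-hidden-layer MLP with a scalar output and inspect its eigenvalue spectrum; the network sizes and random data here are illustrative assumptions.

```python
import numpy as np

# Sketch: eNTK Gram matrix K[i, j] = <grad_theta f(x_i), grad_theta f(x_j)>
# for a one-hidden-layer ReLU MLP with scalar output, then its spectrum.
rng = np.random.default_rng(0)
n, d, h = 32, 4, 16                  # examples, input dim, hidden width (illustrative)
X = rng.normal(size=(n, d))
W1 = rng.normal(size=(d, h)) / np.sqrt(d)
W2 = rng.normal(size=(h, 1)) / np.sqrt(h)

def per_example_grads(x):
    """Gradient of f(x) = relu(x @ W1) @ W2 w.r.t. the parameters (W1, W2)."""
    z = x @ W1                        # pre-activations, shape (h,)
    a = np.maximum(z, 0.0)            # ReLU activations
    gW2 = a                           # d f / d W2
    gW1 = np.outer(x, (z > 0).astype(float) * W2[:, 0])  # d f / d W1
    return np.concatenate([gW1.ravel(), gW2.ravel()])

J = np.stack([per_example_grads(x) for x in X])  # (n, n_params) Jacobian
K = J @ J.T                                      # eNTK Gram matrix, (n, n)
eigvals = np.linalg.eigvalsh(K)[::-1]            # spectrum, descending
print(eigvals[:5])                               # a sharp drop here is the "cliff"
```

The top eigenvectors of `K` span the eigenspace that, per the paper's claim, aligns with the features the trained network uses.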