Paper deep dive
Feature Identification via the Empirical NTK
Jennifer Lin
Models: 1-layer MLP (modular arithmetic), Toy Models of Superposition MLP
Intelligence
Status: succeeded | Model: google/gemini-3.1-flash-lite-preview | Prompt: intel-v1 | Confidence: 94%
Last extracted: 3/12/2026, 6:05:41 PM
Summary
The paper introduces the empirical Neural Tangent Kernel (eNTK) as a tool for mechanistic interpretability. By performing eigenanalysis on the eNTK of trained neural networks, the authors demonstrate that top eigenspaces align with ground-truth features across various models, including Toy Models of Superposition (TMS), 1-layer MLPs, and 1-layer Transformers trained on modular addition. The method effectively localizes features to specific layers and detects phase transitions like grokking.
Entities (5)
Relation Signals (3)
eNTK → recovers → Ground-truth features
confidence 95% · top eigenspaces of the eNTK align with ground-truth features
eNTK → diagnoses → Grokking
confidence 90% · the evolution of the eNTK spectrum can be used to diagnose the grokking phase transition
Layerwise eNTK → localizes → Features
confidence 90% · we provide evidence that a layerwise eNTK localizes features to specific layers
Cypher Suggestions (2)
Find all models where eNTK analysis has been applied to identify features. · confidence 90% · unvalidated
MATCH (m:Model)-[:ANALYZED_BY]->(e:Methodology {name: 'eNTK'}) RETURN m.name
Identify relationships between methodologies and the phenomena they diagnose. · confidence 85% · unvalidated
MATCH (m:Methodology)-[:DIAGNOSES]->(p:Phenomenon) RETURN m.name, p.name
Abstract
We provide evidence that eigenanalysis of the empirical neural tangent kernel (eNTK) can surface the features used by trained neural networks. Across three standard toy models for mechanistic interpretability, Toy Models of Superposition (TMS), a 1-layer MLP trained on modular addition, and a 1-layer Transformer trained on modular addition, we find that top eigenspaces of the eNTK align with ground-truth features. In TMS, the eNTK recovers the ground-truth features in both the sparse (high superposition) and dense regimes. In modular arithmetic, the eNTK can be used to recover Fourier feature families. Moreover, we provide evidence that a layerwise eNTK localizes features to specific layers and that the evolution of the eNTK spectrum can be used to diagnose the grokking phase transition. These results suggest that eNTK analysis may provide a practical handle for feature discovery and for detecting phase changes in small models.
Tags
Links
- Source: https://arxiv.org/abs/2510.00468
- Canonical: https://arxiv.org/abs/2510.00468
Full Text
Feature Identification via the Empirical NTK Jennifer Lin Abstract We provide evidence that eigenanalysis of the empirical neural tangent kernel (eNTK) can surface the features used by trained neural networks. Across three standard toy models for mechanistic interpretability, Toy Models of Superposition (TMS), a 1-layer MLP trained on modular addition and a 1-layer Transformer trained on modular addition, we find that top eigenspaces of the eNTK align with ground-truth features. In TMS, the eNTK recovers the ground-truth features in both the sparse (high superposition) and dense regimes. In modular arithmetic, the eNTK can be used to recover Fourier feature families. Moreover, we provide evidence that a layerwise eNTK localizes features to specific layers and that the evolution of the eNTK spectrum can be used to diagnose the grokking phase transition. These results suggest that eNTK analysis may provide a practical handle for feature discovery and for detecting phase changes in small models. Machine Learning, ICML 1 Introduction Mechanistic interpretability seeks to understand how neural networks process information at inference time (Olah et al., 2020). A major open question is to understand how neural networks represent learned features. Operationally, suppose that an input to a model can either exhibit a particular human-interpretable property (e.g. the color red, a dog snout) or not. Neural networks often seem to learn to detect such properties. For example, they can be probed to classify the presence or absence of such properties after pretraining with higher-than-chance accuracy (Alain & Bengio, 2016). We would like to understand how they do so internally, by finding functions of a model’s activations and weights that detect the presence of the property with high sensitivity and specificity. It’s a priori unclear how to guess a good candidate answer. One strategy is to look to theories of deep learning. 
If there existed a hypothetical complete theory that allowed us to “reverse-compile” any model into human-interpretable code, it would (by definition) solve this problem by telling us which effective features a model uses and how they interact to produce the input-output map (Olah, 2022). Instead of a complete theory, however, at present we have several nascent theories that each explain some modest aspects of what neural networks may be doing, in some corners of the parameter space. [1: See e.g. the examples on page 13 of (Sharkey et al., 2025).] Among such theories in 2025, the neural tangent kernel (NTK) and its cousins (Jacot et al., 2018; Roberts et al., 2022) are unique in giving us a closed-form formula for the function f learned by a neural network in a corner of the parameter space: namely

f_i(x) = \sum_{\alpha_1, \alpha_2} K_{ij}(x, x_{\alpha_1}) K^{-1}_{jk}(x_{\alpha_1}, x_{\alpha_2}) \, y_{k, \alpha_2} ,   (1)

where the index i runs over output neurons (classes), the α’s run over points in the training set, and the data-space kernel

K_{ij}(x_1, x_2) = \sum_\mu \frac{d f_i(x_1)}{d W_\mu} \frac{d f_j(x_2)}{d W_\mu} \Big|_{t = t_0} ,   (2)

evaluated at initialization, with the index μ running over model parameters, is the eponymous NTK. Eq. (1) is a provably good approximation to the function learned by a neural network under gradient descent with MSE loss in a “lazy limit” where the drift in model parameters throughout training is parametrically small. See Appendix A for a review. Realistic models are usually not in this regime. However, it’s been conjectured and empirically verified in some situations (Fort et al., 2020; Atanasov et al., 2021; Ortiz-Jiménez et al., 2021; Mohamadi et al., 2023) - although we don’t yet know in general when the conjecture is and isn’t true - that Eq. (1) with the NTK (2) evaluated at the end of training instead of at initialization,

K_{ij}(x_1, x_2) = \sum_\mu \frac{d f_i(x_1)}{d W_\mu} \frac{d f_j(x_2)}{d W_\mu} \Big|_{t = t_*} ,   (3)

can at times be a good approximation to the function learned by a neural network beyond the lazy limit. This conjecture goes by the name of the empirical NTK hypothesis, with (3) being the empirical NTK (eNTK). In this paper, we will take this conjecture seriously and see if the eNTK can be used to generate new algorithms for mechanistic interpretability. In particular, we examine the hypothesis that eigenvectors of the eNTK may align with interesting features at the end of training. We find that this turns out to be true across toy models commonly studied by the mechanistic interpretability community. Our main contributions are that
• In “Toy Models of Superposition” (Elhage et al., 2022), the eNTK eigenspectrum is anisotropic with a qualitatively large drop at the n-th ordered eigenvalue, for n the number of ground-truth features. The leading n-dimensional eigenspace aligns nearly 1:1 with the ground-truth features at the end of training in both the sparse and dense regimes.
• For a 1L MLP trained on modular arithmetic mod p in the grokking regime, the eNTK eigenspectrum develops two cliffs. The first cliff, localized to the first layer of the model and visible already at initialization, has dimension k = 4⌊p/2⌋ and a span that matches the Fourier features learned by the first layer of the model. The second cliff aligns with the “sum” and “difference” Fourier features learned by the second layer of the model and appears precisely at the grokking phase transition. In this sense, eNTK spectral analysis provides a practical diagnostic that localizes where features live by layer and detects phase transitions.
• For a 1L Transformer trained on modular arithmetic mod p in the grokking regime, across training runs, the leading eigendirections of the eNTK localized to the O/V layers of the attention block, the first (input) layer of the MLP block, the second (output) layer of the MLP block and the unembedding layer all align with Fourier features at the run-dependent “key frequencies” used by the model to implement the modular addition algorithm.
This paper is organized as follows. In Section 2, we explain how we use the eNTK to find features in trained models. In Sections 3, 4 and 5, we use the eNTK to find features in Toy Models of Superposition, a 1L MLP trained on modular arithmetic and a 1L Transformer trained on modular arithmetic, respectively. We conclude with a discussion of future directions in Section 6.
Related work. A few earlier works have explored whether NTK eigenvectors align with interesting directions in trained models. (Loo et al., 2022) and (Tsilivis & Kempe, 2022) visualized kernel eigenvectors as candidate features in small image nets (in the context of using the NTK to study adversarial training), and are the most closely related works. (Atanasov et al., 2021) and (Baratin et al., 2021) argued that NTK tangent features rotate towards directions predictive of the outputs during training. Besides the eNTK, other objects involving gradients/curvature have appeared in the mechanistic interpretability literature. As mentioned above, the eNTK (3) is constructed by contracting a pair of Jacobians J(z) = ∇_W f(z) along the parameter direction, forming a kernel in data space (shorthand: K(z, z′) = J(z) J(z′)^T = ∇_W f(z) ∇_W f(z′)^T).
The Hessian, the second derivative of the loss function wrt parameters, H = ∇²_W 𝓛, can be approximated by a pair of Jacobians contracted the other way (H ≈ \sum_i J(x_i)^T ∇²_f 𝓛(f(x_i), y_i) J(x_i)), and appears frequently in recent work on singular learning theory (e.g. (Bushnaq et al., 2024b)). The Hessian induces an “influence function” kernel on data points, 𝓘(z′, z) = −∇_W 𝓛(z′)^T H^{−1} ∇_W 𝓛(z) (Koh & Liang, 2017), that was recently used to interpret LLMs at scale (Grosse et al., 2023). The Jacobian also played a role in the “Local Interaction Basis” algorithm (Bushnaq et al., 2024a), which proposed to find interpretable directions in activation space in part by aligning would-be directions with singular directions of the intra-layer Jacobian. The idea of approximating a neural network by keeping only kernel modes up to an emergent spectral scale can be viewed as a naive application of renormalization ideas to mechanistic interpretability (Greenspan, 2025), with the spectral scale setting a hard UV cutoff.
2 Method: NTK pipeline for feature identification
The main idea that we pursue in this paper is to treat the eNTK as fundamental and interpret its top eigenvectors. One motivation for this idea is that empirically, the spectrum of the eNTK is often very anisotropic. (We’ll see examples of this below.) When true, it means that if the linearized formula (1) is a good approximation to the function learned by a neural network at the end of training, then the same formula where we replace the kernel with its low-rank spectral truncation should in turn be a good approximation, and the top eigenvectors of the kernel should contain most of the information about how the trained model computes its outputs. Below, for several models, we will eigendecompose the empirical NTK and its variants defined in section 2.1.
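As a concrete reference, the pipeline just sketched (build the eNTK from Jacobians, eigendecompose it, truncate, and compare eigenvectors with feature vectors) can be written in a few lines of NumPy. This is a minimal illustration under our own assumptions, not the paper’s code: the model is an arbitrary callable `f(w, x)` with a flat parameter vector, Jacobians are taken by finite differences, and all function and variable names here are hypothetical.

```python
import numpy as np

def jacobian(f, w, x, eps=1e-5):
    """Finite-difference Jacobian df_i(x)/dW_mu, shape (C, P)."""
    base = f(w, x)
    J = np.zeros((base.size, w.size))
    for mu in range(w.size):
        wp = w.copy()
        wp[mu] += eps
        J[:, mu] = (f(wp, x) - base) / eps
    return J

def flattened_entk(f, w, xs):
    """Flattened eNTK (Eq. (3)): stack per-point Jacobians and contract
    along the parameter axis, giving an (N*C, N*C) kernel matrix."""
    J = np.concatenate([jacobian(f, w, x) for x in xs], axis=0)
    return J @ J.T

def entk_predict(f, w, x_train, y_train, x_test, rank=None):
    """Linearized prediction (Eq. (1)), optionally replacing the train-train
    kernel by its rank-`rank` spectral truncation."""
    K = flattened_entk(f, w, x_train)
    lam, V = np.linalg.eigh(K)
    lam, V = lam[::-1], V[:, ::-1]                 # sort eigenvalues descending
    r = K.shape[0] if rank is None else rank
    K_inv = (V[:, :r] / lam[:r]) @ V[:, :r].T      # (pseudo-)inverse of the truncation
    J_test = jacobian(f, w, x_test)
    J_train = np.concatenate([jacobian(f, w, x) for x in x_train], axis=0)
    return (J_test @ J_train.T) @ K_inv @ np.concatenate(y_train)

def cliff_end(eigvals, ratio=2.0):
    """First index i (eigenvalues sorted descending) whose eigengap exceeds
    `ratio` times the mean of the two adjacent gaps (footnote 2's criterion)."""
    lam = np.sort(np.asarray(eigvals))[::-1]
    gaps = lam[:-1] - lam[1:]
    for i in range(1, len(gaps) - 1):
        if gaps[i] > ratio * 0.5 * (gaps[i - 1] + gaps[i + 1]):
            return i
    return None

def alignment_heatmap(eigvecs, features):
    """|cosine similarity| between eNTK eigenvectors and feature vectors
    (both passed as columns of length N)."""
    U = eigvecs / np.linalg.norm(eigvecs, axis=0)
    F = features / np.linalg.norm(features, axis=0)
    return np.abs(U.T @ F)
```

For a model that is linear in its parameters, the linearization (1) is exact, which makes a convenient sanity check for the whole pipeline.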
We define a cliff to be a qualitatively large drop in the ordered eigenvalues [2: Alternatively, a more precise definition of a cliff as ending at an eigenvalue λ_i if the immediate eigengap is more than twice as large as the adjacent gaps, (λ_{i+1} − λ_i) / [½(λ_i − λ_{i−1} + λ_{i+2} − λ_{i+1})] > 2, holds across all examples discussed throughout this paper. However, we believe that the visual evidence we’ll present of spectra with large gaps, often far in excess of a ratio of 2, provides a more compelling, if qualitative, illustration of the phenomenon.], and a NTK feature subspace to be the span of eigenvectors corresponding to the eigenvalues in a cliff. To compute whether a length-N eNTK eigenvector, for N the dataset size, aligns with a ground-truth feature, we can take the absolute value of its cosine similarity with a length-N “feature vector” whose entries consist of that feature’s activation on each data point. This gives a scalar for each (eigenvector, feature) pair. To visualize how the eigenvectors across a NTK feature subspace align with a selection of features, we will render heatmaps that show this correlation across all pairs.
2.1 Variants of the empirical NTK used in this paper
The eNTK evaluated over all points in a dataset has shape (N, N, C, C) for N the size of the dataset and C the number of output neurons (classes). To do an eigendecomposition, we must reduce it to two dimensions. Below, we’ll use two collapses of K:
1. Per-class eNTK. For a class c ∈ {1, …, C}, we can study the kernel K_{cc}(x_1, x_2).
2. Flattened eNTK. Alternatively, we can form an NC-by-NC matrix by stacking the per-class blocks {K_{ij}}_{i,j=1}^C row-major in i and column-major in j.
Another variant that we’ll use is the layerwise NTK, where rather than sum over all model parameters in (3), we sum only over those parameters belonging to a particular layer, before applying either collapse above.
This will prove useful for attributing features to a layer.
3 Results for “Toy Models of Superposition”
In our first experiment, we study the alignment of eNTK eigenvectors with ground-truth features in Toy Models of Superposition (TMS) (Elhage et al., 2022).
3.1 Review of the setup
In TMS, we train a 1L autoencoder with a smaller hidden layer size m than input/output layer size n and tied weights to learn the identity map on a dataset of n synthetic features. Explicitly, the model architecture is

f(x) = ReLU(W^T W x + b) ;   (4)

the dataset consists of N length-n vectors whose i-th entry is set to 0 with probability S, and otherwise randomly drawn from U[0, 1] [3: By using a fixed-size dataset, we’re actually using the setup of (Henighan et al., 2023).]; and the loss function is MSE loss modified by a per-feature importance hyperparameter I_i,

𝓛 = \sum_i I_i (x_i − f_i(x))² .   (5)

The claim to fame of this setup is that it’s a maximally simple toy model of superposition, i.e., the idea that if models store features as linear combinations of within-layer neurons, then a model might learn to represent more features than it has neurons by storing them non-orthogonally in the neural activations. In this experiment, the n input/output directions are the analogs of ground-truth features, and one would like to see if the autoencoder is able to reconstruct them. Upon training this class of model, one finds that when the features are dense (small S), the model learns to reconstruct the top m features only, but as one dials up the sparsity, the model is able to recover more and more features, at the cost of encoding them non-orthogonally. Our motivation is somewhat different. Namely, given a TMS model at any (m, n) (i.e. in either the sparse or dense regime), we would like to see if the eNTK eigenvectors at the end of training line up with the ground-truth features.
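The TMS setup above fits in a few lines. The sketch below (our own names, NumPy only) generates the synthetic sparse data, applies the tied-weight autoencoder of Eq. (4), and evaluates the importance-weighted loss of Eq. (5); it is an illustrative reimplementation, not the paper’s code.

```python
import numpy as np

def tms_batch(N, n, S, rng):
    """Synthetic TMS data: each entry is 0 with probability S,
    otherwise drawn from Uniform[0, 1]."""
    x = rng.uniform(0.0, 1.0, size=(N, n))
    mask = rng.uniform(size=(N, n)) >= S     # keep an entry with prob 1 - S
    return x * mask

def tms_forward(W, b, x):
    """Tied-weight autoencoder of Eq. (4): f(x) = ReLU(W^T W x + b),
    with W of shape (m, n) for hidden size m < n."""
    return np.maximum(W.T @ (W @ x) + b, 0.0)

def importance_loss(x, fx, I):
    """Importance-weighted MSE of Eq. (5): sum_i I_i (x_i - f_i(x))^2."""
    return float(np.sum(I * (x - fx) ** 2))
```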
Below we report results for models trained to convergence on a 500-point training set and hyperparameters n = 50 and I_i = 0.8^i (with varying m and S). However, other choices of hyperparameters lead to qualitatively similar behaviors.
Figure 1: Eigenvalue spectrum of the flattened eNTK (top row) and column-normalized, importance-rescaled flattened eNTK eigenvector-feature indicator heatmaps for TMS models with n = 50 ground-truth features, importance I_i = 0.8^i, (hidden-layer size, sparsity) m = 10, S = 0.3 (left column); m = 10, S = 0.9 (middle column) or m = 40, S = 0.9 (right column); and importance rescaling parameters β = 1 (middle row) and β = 0.3 (bottom row), as described in the main text. The center column is an example of a TMS model in the sparse regime (high superposition), while the left and right columns are two different ways to push the model to the dense regime (low superposition).
3.2 Results
Spectral structure and cliffs. Across TMS settings (dense vs. sparse regime and different widths m), the flattened eNTK exhibits a pronounced spectral cliff structure separating a small set of leading modes from a long tail (Fig. 1, row 1). In all cases that we tested, the leading eigenspace contains two cliffs, with the second drop appearing at k = 50, equal to the number of ground-truth features in the dataset. A first cliff differentiates the number of features that the model is able to reconstruct from those it cannot. [4: Quantitatively, in each example the number of entries along the diagonal of W^T W exceeding the threshold of 0.75, for W the trained weight matrix, equals the size of one of the two cliffs.]
Alignment with ground-truth features. As discussed in section 2, one way to visualize how well a length-N eigenvector, for N the dataset size, lines up with a ground-truth feature is to compute its inner product with a “feature vector” whose entries consist of that feature’s activation on each data point.
Here, we chose instead to work with the flattened eNTK, which is an NC-by-NC matrix. At the same time, the output classes C correspond to ground-truth features in TMS. To construct heatmaps displaying how well the flattened eNTK eigenvectors align with ground-truth features in TMS, we therefore first build an expanded data matrix of shape (NC, C) whose entries are zero unless the class indices match, then compute and display the inner product of the expanded data matrix with the top flattened eNTK eigenvectors. Before rendering the heatmaps, we also apply an importance rescaling to the eNTK, multiplying each Jacobian entry df_i(x)/dW_μ in (3) by I_i^{β/2}. This directionally corrects for the fact that the derivation of Eq. (1) assumed a standard MSE loss (see Appendix A), but TMS uses an importance-weighted MSE loss, (5). The choice β = 1 would allow us to absorb I into the NTK in eq. (13), allowing the rest of the derivation in Appendix A to proceed with standard MSE, while β = 0 corresponds to turning off the importance rescaling. In practical terms, because the authors of TMS did not model the importance distribution of realistic data, it’s unclear which choice of β best matches a realistic data distribution. With β = 1, we find that the leading eigenvectors align with high-importance features (middle row of Fig. 1). As β decreases, lower-importance features become increasingly visible in the heatmaps, until around β = 0.3 (bottom row of Fig. 1), the leading eigendirections of the importance-rescaled eNTK appear to align nearly one-to-one with all of the ground-truth features of TMS in both the sparse and dense regimes. This experiment suggests that the raw kernel also contains information about less-important features, but down-weights them in line with the objective.
In the sparse (high superposition) regime, the map between top eNTK eigenvectors and ground-truth features appears to split into two lines sorted by importance (center and bottom middle plots in Fig. 1). It could be interesting - and likely is possible given the simplicity of the kernel (see Appendix B for its closed form) - to try to understand why this happens. However, we will leave this for future work.
4 Results for an MLP trained on modular arithmetic
Next, we study the alignment of top eNTK eigenvectors with the ground-truth Fourier features in an MLP trained on modular arithmetic (Gromov, 2023).
4.1 Review of the setup
In this experiment, for a fixed integer p we construct a dataset consisting of the p² unique pairs of concatenated, one-hot encoded integers from (0, …, p−1) (each input of length 2p), labeled by their length-p one-hot-encoded sum mod p. We split the dataset into non-overlapping train and test sets controlled by a fractional hyperparameter α = |D_train| / p². We then train a 1L MLP with 2p input neurons, n hidden neurons, p output neurons, a quadratic activation function, and no biases to learn the training set with a MSE loss function and AdamW optimizer. Explicitly, the model architecture is

f(x) = W^{(2)} (W^{(1)} x)² .   (6)

In the experiments below, we report results for a particular training run at p = 29, n = 512 and α = 0.7. However, we find qualitatively similar results for different seeds and other values of p. (See Appendix F for a robustness check.) With these hyperparameters, the model exhibits grokking (Power et al., 2022; Nanda et al., 2023). Namely, some time after reaching 100% accuracy on the training set, it suddenly goes from 0% to 100% accuracy on the test set (see the left panel of Fig. 3), suggesting that it switched to using a more generalizable representation of the underlying modular arithmetic algorithm.
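The dataset construction just described can be sketched as follows (an illustrative reimplementation with our own names; the paper does not specify its data-loading code):

```python
import numpy as np

def modular_dataset(p):
    """All p^2 pairs (a, b) as concatenated one-hot vectors of length 2p,
    labeled by the one-hot encoding of (a + b) mod p."""
    X = np.zeros((p * p, 2 * p))
    Y = np.zeros((p * p, p))
    for a in range(p):
        for b in range(p):
            idx = a * p + b
            X[idx, a] = 1.0          # one-hot a in the first block
            X[idx, p + b] = 1.0      # one-hot b in the second block
            Y[idx, (a + b) % p] = 1.0
    return X, Y

def train_test_split(X, Y, alpha, rng):
    """Non-overlapping random split with |train| = alpha * p^2."""
    perm = rng.permutation(len(X))
    cut = int(alpha * len(X))
    tr, te = perm[:cut], perm[cut:]
    return X[tr], Y[tr], X[te], Y[te]
```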
Moreover, in this case we know the exact solution that the model learns after grokking (Gromov, 2023). At the end of training, the model solves modular addition by learning the weights

W^{(1)}_{k(a,b)} = ( \cos(2πk a / p + φ_k^{(1)}) , \cos(2πk b / p + φ_k^{(2)}) )^T ,   (9)
W^{(2)}_{qk} = \cos(−2πk q / p − φ_k^{(3)}) ,   (10)

where we represent W^{(1)}_{k(a,b)} as a row of two n × p matrices, with total shape (n, 2p); k ∈ {0, …, n−1} runs over the hidden-layer neurons and a, b, q ∈ {0, …, p−1} over the two input and output integers, respectively; and the phases φ_k^{(i)} are random but should satisfy the constraint φ_k^{(1)} + φ_k^{(2)} = φ_k^{(3)}. The weights (9), (10) fully determine the neural activations at each level. In particular, given input integers a and b, the first-layer preactivations are simply

h_k^{(1)}(a, b) = \cos(2πk a / p + φ_k^{(1)}) + \cos(2πk b / p + φ_k^{(2)}) ,   (11)

and the first-layer activations are their squares (see (6)). Hence, linear combinations of neural activations form Fourier features \cos(2πk a / p), \cos(2πk b / p) up to phase (below: the “cos a” and “cos b” feature families), for k ∈ {1, …, ⌊p/2⌋}. [5: Fourier modes with k > ⌊p/2⌋ are redundant because of trigonometric identities relating arguments x with 2π − x.] After applying trigonometric identities, the second-layer activations turn out to be the sum of many terms, each with the schematic form \sum_{k=1}^n \cos(2πk s / p + φ_k), for s a linear combination of {a, b, q} and φ_k a linear combination of {φ_k^{(1)}, φ_k^{(2)}, φ_k^{(3)}}. For generic phases these sums cancel, but one such term turns out to use exactly the combination φ_k^{(1)} + φ_k^{(2)} − φ_k^{(3)} = 0. Hence, its phase drops out and the remaining \sum_{k=1}^n \cos(2πk s / p) acts as a modular δ function on s = a + b − q, firing when a + b ≡ q (mod p).
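The grokked solution of Eqs. (9)-(11) can be checked numerically. The sketch below (our own names and hyperparameters) builds the cosine weights with random phases satisfying φ^{(1)} + φ^{(2)} = φ^{(3)} and runs the forward pass of Eq. (6). One caveat on the assumptions: we take n to be a multiple of p so that the phase-free sum \sum_k \cos(2πk s / p) vanishes exactly unless s ≡ 0 (mod p); the remaining random-phase terms only cancel approximately, so we check accuracy rather than exact logits.

```python
import numpy as np

def grokked_weights(p, n, rng):
    """Weights of Eqs. (9)-(10): neuron k carries frequency 2*pi*k/p,
    with random phases satisfying phi1 + phi2 = phi3."""
    k = np.arange(n)[:, None]
    phi1 = rng.uniform(0, 2 * np.pi, size=(n, 1))
    phi2 = rng.uniform(0, 2 * np.pi, size=(n, 1))
    phi3 = phi1 + phi2                      # the phase constraint
    ab = np.arange(p)[None, :]
    W1 = np.concatenate([np.cos(2 * np.pi * k * ab / p + phi1),
                         np.cos(2 * np.pi * k * ab / p + phi2)], axis=1)  # (n, 2p)
    W2 = np.cos(-2 * np.pi * k * ab / p - phi3).T                          # (p, n)
    return W1, W2

def mlp_logits(W1, W2, a, b, p):
    """f(x) = W2 (W1 x)^2 of Eq. (6) on the concatenated one-hot input."""
    x = np.zeros(2 * p)
    x[a] = 1.0
    x[p + b] = 1.0
    return W2 @ (W1 @ x) ** 2
```

With n a large multiple of p (e.g. n = 40p), the phase-free term dominates the logits and argmax_q f_q(a, b) recovers (a + b) mod p on essentially every pair.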
4.2 Results
Figure 2: Full and layerwise eNTK spectrum of the MLP trained on modular arithmetic after training to convergence (500 epochs). The left column displays the spectrum of the full eNTK, the middle column the spectrum of the layerwise eNTK wrt the parameters (9) in layer 1, and the right column the spectrum of the layerwise eNTK wrt the parameters (10) in layer 2.
The per-class averaged eNTK spectrum of the modular-arithmetic model trained to convergence is shown in the left panel of Fig. 2. It contains two cliffs, each of size 56 = 4⌊p/2⌋ for our choice of p = 29.
Layerwise localization. Using the layerwise NTK (where we restrict the sum in (3) to parameters in a single layer), we find that the first cliff localizes to layer 1 (middle panel of Fig. 2), while the second is most cleanly visible if we keep parameters from both layers 1 and 2.
Time evolution of the spectrum. The second cliff is absent before grokking and appears at the grokking phase transition (see the right panel of Fig. 3). This suggests that, feature-finding aside, perhaps the eNTK can be used to find phase transitions. In contrast to the eigenvalues, the eigenvectors evolve smoothly on either side of the phase transition. We comment on the evolution of the eigenvectors in Appendix C. On the other hand, the first cliff is present already at initialization (see the r.h.s. of Fig. 3). This follows from the structure of the Layer 1 kernel, as we explain briefly in Appendix B, and suggests (in combination with the discussion below) that this model’s use of Fourier features could perhaps have been predicted at initialization from the eNTK.
Figure 3: Spectral change at the grokking phase transition for the MLP trained on modular arithmetic. Left: Train and test accuracy show evidence for a grokking phase transition (sudden onset of generalization) around epoch 90. Right: at the same training time, the eNTK spectrum develops a kink at the base of the second cliff.
Alignment with Fourier features. Since the cliffs have dimension 4⌊p/2⌋, it’s a priori plausible that they contain Fourier feature families. More precisely, each cosine feature family has size ⌊p/2⌋ and comes paired with a sine family, for a count of {cos, sin} × {k = 1, …, ⌊p/2⌋} = 2⌊p/2⌋, so 4⌊p/2⌋ has the right counting to contain two such families.
Table 1: Frobenius norm of the eigendirections in each cliff with different Fourier feature families, as well as a control of normalized random vectors of the same dimension. A score of 28, which is the dimension of each Fourier feature family, would denote perfect alignment. This exercise shows strong alignment of the first cliff with the a and b Fourier feature families, and of the second cliff with the a ± b feature families.

Cliff |     a |     b |   sum |  diff | ctrl
    1 | 27.54 | 27.56 |  0.13 |  0.38 | 1.83
    2 |  0.27 |  0.25 | 26.53 | 24.86 | 1.80

To check whether the span of each cliff contains a particular Fourier feature family, we compute the Frobenius norm between the cliff directions and the feature family, i.e. the sum of squared inner products between all pairs of directions of a normalized basis for the cliff and each feature family. The results, shown in Table 1, strongly suggest that the span of the first cliff contains the a and b Fourier families and the span of the second cliff contains the a ± b Fourier families. Unlike in the TMS experiment, however, the heat map of inner products of the eNTK eigenvectors in each cliff with feature vectors for the Fourier families is not 1:1. This degeneracy reflects the exact shift symmetry of the dataset under shifts in a and b. Given knowledge from Table 1 of which Fourier families are contained within each cliff, we can surface the exact linear combinations of eNTK eigenvectors that align with each Fourier mode by ordering directions in each cliff eigenspace according to how smoothly they vary along the input axes in the directions specified in Table 1. We discuss this in Appendix C.
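The Frobenius-norm score just described (sum of squared inner products between a normalized cliff basis and a feature family) can be sketched as follows. The function name is ours, and we assume both sets of vectors are orthonormalized via QR before comparison:

```python
import numpy as np

def family_score(cliff_vecs, feature_vecs):
    """Sum of squared inner products between orthonormal bases of the cliff
    eigenspace (N, k) and a feature family (N, m). Equals m exactly when the
    family's span lies entirely inside the cliff's span, and ~ k*m/N for a
    random family."""
    U, _ = np.linalg.qr(cliff_vecs)      # orthonormal basis for the cliff
    F, _ = np.linalg.qr(feature_vecs)    # orthonormal basis for the family
    return float(np.linalg.norm(U.T @ F) ** 2)
```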
5 Results for a Transformer trained on modular arithmetic
In our third example, we study the alignment of eNTK eigenvectors with ground-truth Fourier features in a Transformer trained on modular arithmetic with cross-entropy loss (Nanda et al., 2023). This model learns a different ground-truth algorithm than the MLP trained on modular arithmetic that was the subject of section 4.
5.1 Review of the setup
In this experiment, for a fixed integer p we construct a dataset consisting of the p² unique pairs of integers from (0, …, p−1) represented as a sequence of three tokens “[a, b, =]”, labeled by their one-hot-encoded sum mod p at the final token position. As in the previous experiment, we split the dataset into non-overlapping train and test sets controlled by a fractional hyperparameter α. We then train a 1-layer GPT-type Transformer, without LayerNorm and with untied embedding/unembedding matrices, to learn the training set with cross-entropy loss and an AdamW optimizer. For a review of the exact architecture, see Appendix A of (Nanda et al., 2023). Below, we report results for p = 29, d_model = 64, 4 attention heads of dimension d_model/4 = 16, and n = 256 hidden units in the MLP. We use learning rate 3e-4, weight decay parameter 1.0, and α = 0.5. With these hyperparameters, the model exhibits grokking across several seeds. The ground-truth modular arithmetic algorithm learned by this model after grokking is different from the one learned by the MLP in section 4 (Nanda et al., 2023). One major difference is that rather than learn Fourier modes at all intermediate frequencies, the model learns only Fourier modes at a sparse set of random, seed-dependent frequencies. Specifically, at the embedding step, the model maps the inputs a and b to sines and cosines at a set of key random frequencies ω_k = 2πk/p, k ∈ ℕ. See Fig. 4 for an example of key frequencies in a particular training run, obtained by applying a Fourier transform along the input dimension of the embedding matrix W_E, then computing the L2 norm in the other dimension. We will analyze this training run in the rest of this section. However, we observed similar phenomena at other random seeds. (See Appendix G for a robustness check.) After the embedding step isolates the key frequencies, the model then computes the trigonometric sum identities for \cos(ω_k(a+b)) and \sin(ω_k(a+b)) on those frequencies only in the attention and MLP layers, and the trigonometric difference identity \cos(ω_k(a+b−c)) on those frequencies only in the MLP output and unembedding layers. Finally, it uses the unembedding matrix to add together the \cos(ω_k(a+b−c))’s for different k’s. This causes destructive interference unless a + b − c = 0.
Figure 4: The L2 norms of the Fourier components of the embedding matrix W_E in a training run of the Transformer model with the hyperparameters mentioned in the main text. Different seeds give different key frequencies. In this training run, there are two key frequencies at k = 1 and 12, since modes with k > ⌊p/2⌋ are redundant.
5.2 Results
Figure 5: Layerwise eNTK spectra of the Transformer trained on modular arithmetic wrt the “O” + “V” weight matrices in the attention block; the input matrix W_in of the MLP layer; the output matrix W_out of the MLP layer; and the unembedding matrix (from left to right, respectively), in a run with clear spectral separation. For additional seeds, see the discussion in Appendix G.
Leading layerwise eNTK eigenspaces align with Fourier features at key frequencies.
Because we did not use LayerNorm in this experiment (in keeping with the conventions of (Nanda et al., 2023)), the eNTK spectra of different layers of this model differ from each other by orders of magnitude, and the full eNTK spectrum is dominated by the largest such contributions. It’s therefore more meaningful to look at the layerwise eNTK spectra. In this run, we find that the leading five eigendirections of the layerwise eNTK associated with the O+V parameters in the attention block, the input matrix W_in of the MLP layer, and the output matrix W_out of the MLP layer each overlap significantly with the summed a and b Fourier features {\cos(ω_k a) + \cos(ω_k b), \sin(ω_k a) + \sin(ω_k b)} at the key frequencies shown in Fig. 4, while the top five eigendirections of the eNTK for the unembedding matrix overlap significantly with the ‘sum’ Fourier features. To see this, we compute the Frobenius norm between said eNTK eigendirections and the corresponding feature families at the key frequencies in Table 2. The number five is one more than twice the number of key frequencies in this run, suggesting that the eigendirections mix precisely those Fourier features with a zero mode or similar. [6: Across our earlier examples in sections 3 and 4, there was also a leading mode with a qualitatively large eigenvalue. We leave the question of why the leading direction here has a relatively smaller eigenvalue, so as to mix with the other top eigendirections, for future work.] We also find that the next four eigendirections of the W_out and unembedding layers overlap with the ‘sum’ and the a and b Fourier feature families, respectively. In Appendix G, we show that this result generalizes across training runs with different (numbers of) key frequencies.
Intermittent emergence of sharp cliffs in the eNTK spectrum. In this run, the per-class averaged eNTK spectra at different layers are shown in Fig. 5.
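The key-frequency diagnostic of Fig. 4 (a Fourier transform of W_E along the vocabulary axis, followed by an L2 norm over the model dimension) can be sketched as follows. The function name and the relative-threshold heuristic are ours; we assume W_E has shape (p, d_model):

```python
import numpy as np

def key_frequencies(W_E, thresh=0.5):
    """Fourier-transform the embedding along the vocab (input) axis, take the
    L2 norm along the model dimension, and return the frequency indices
    k <= p//2 whose power exceeds `thresh` times the maximum."""
    spec = np.linalg.norm(np.fft.rfft(W_E, axis=0), axis=1)  # (p//2 + 1,)
    return np.flatnonzero(spec > thresh * spec.max())
```

On an embedding built from exact sinusoids at known frequencies, the peaks land exactly on those frequency bins.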
Each of these spectra contains a leading cliff ending at the fifth eigenvalue, while the $W_{out}$ and unembedding layers also contain secondary spectral cliffs of dimension 4. However, we do not find that sharp spectral gaps emerge consistently across other seeds trained at our fixed budget (see Appendix G). We leave the question of when sharp spectral gaps emerge in this model for future work.

Table 2: Frobenius norm of the eigendirections mentioned in the text against different Fourier feature families at the key frequencies, as well as a control of normalized random vectors of the same dimension. A score of 4 would denote perfect overlap with the Fourier feature directions.

layer      | A+B  | sum  | diff | ctrl
OV         | 3.86 | 0.12 | 0.04 | 0.16
MLPin      | 3.89 | 0.05 | 0.01 | 0.14
MLPout (1) | 3.90 | 0.02 | 0.03 | 0.14
MLPout (2) | 0.02 | 3.94 | 0.01 | 0.14
U (1)      | 0.02 | 3.97 | 0.02 | 0.14
U (2)      | 3.92 | 0.00 | 0.02 | 0.10

6 Discussion

To summarize, we have provided evidence that eigenanalysis of the eNTK can surface learned features across three toy benchmarks commonly used by the mechanistic interpretability community. For the toy models studied in this paper, it is easy to write down closed-form expressions for the kernels in terms of the weights and data vectors (see Appendices B and D). It could be interesting to understand analytically which properties of said weights and data are needed to produce the clean mechanistic signals presented here. Another direction is to promote our checks that eNTK eigenvectors align with known ground-truth feature vectors to an approach that unsupervisedly matches a candidate eNTK eigendirection with a human-interpretable semantic meaning. To make progress on this problem, we can repurpose work on unsupervised feature discovery in the interpretability literature.
In Appendix E, we show that for the modular arithmetic example in section 4, gradient ascent on a random input, chosen to maximize a scalar score built from an eNTK eigenvector corresponding to a Fourier mode at a particular frequency, yields synthetic inputs that resemble sinusoids of the same frequency. This provides a proof-of-concept example of how one could unsupervisedly assign semantic meanings to eNTK eigendirections. Finally, we would ultimately like to make contact with realistic models, understanding if and how the eNTK can slot into an end-to-end pipeline for unsupervised feature discovery at scale. To this end, it would be interesting to compute and study the leading eNTK eigendirections in larger models, e.g. (Bricken et al., 2023), OthelloGPT (Li et al., 2023) and eventually realistic LLMs. (In larger models, we would presumably want to use approximation methods to compute the top eigendirections without ever materializing the full eNTK.) We hope to report on this in the future.

Acknowledgments

I am grateful to Ari Brill, Tom Ingebretsen Carlson, Lauren Greenspan, Andrew Mack, Logan Smith, Lucas Teixeira and Dmitry Vaintrob for discussions and comments on the manuscript. This work was supported by PIBBSS and the Long-Term Future Fund.

References

Alain & Bengio (2016) Alain, G. and Bengio, Y. Understanding intermediate layers using linear classifier probes. arXiv preprint arXiv:1610.01644, 2016.
Atanasov et al. (2021) Atanasov, A., Bordelon, B., and Pehlevan, C. Neural networks as kernel learners: The silent alignment effect. ICLR 2022, 11 2021. URL https://arxiv.org/pdf/2111.00034.pdf.
Baratin et al. (2021) Baratin, A., George, T., Laurent, C., Hjelm, R. D., Lajoie, G., Vincent, P., and Lacoste-Julien, S. Implicit regularization via neural feature alignment. In International Conference on Artificial Intelligence and Statistics, p. 2269–2277. PMLR, 2021.
Bricken et al.
(2023) Bricken, T., Templeton, A., Batson, J., Chen, B., Jermyn, A., Conerly, T., Turner, N., Anil, C., Denison, C., Askell, A., et al. Towards monosemanticity: Decomposing language models with dictionary learning. Transformer Circuits Thread, 2, 2023.
Bushnaq et al. (2024a) Bushnaq, L., Heimersheim, S., Goldowsky-Dill, N., Braun, D., Mendel, J., Hänni, K., Griffin, A., Stöhler, J., Wache, M., and Hobbhahn, M. The local interaction basis: Identifying computationally-relevant and sparsely interacting features in neural networks. arXiv preprint arXiv:2405.10928, 2024a.
Bushnaq et al. (2024b) Bushnaq, L., Mendel, J., Heimersheim, S., Braun, D., Goldowsky-Dill, N., Hänni, K., Wu, C., and Hobbhahn, M. Using degeneracy in the loss landscape for mechanistic interpretability. In ICML 2024 Workshop on Mechanistic Interpretability, 2024b.
Elhage et al. (2022) Elhage, N., Hume, T., Olsson, C., Schiefer, N., Henighan, T., Kravec, S., Hatfield-Dodds, Z., Lasenby, R., Drain, D., Chen, C., Grosse, R., McCandlish, S., Kaplan, J., Amodei, D., Wattenberg, M., and Olah, C. Toy models of superposition. Transformer Circuits Thread, 2022.
Fort et al. (2020) Fort, S., Dziugaite, G. K., Paul, M., Kharaghani, S., Roy, D. M., and Ganguli, S. Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the neural tangent kernel. Advances in Neural Information Processing Systems, 33:5850–5861, 2020.
Greenspan (2025) Greenspan, L. Renormalizing interpretability. URL https://www.lesswrong.com/s/3fknGqkujhGrnRodA, 2025.
Gromov (2023) Gromov, A. Grokking modular arithmetic. arXiv preprint arXiv:2301.02679, 2023.
Grosse et al. (2023) Grosse, R., Bae, J., Anil, C., Elhage, N., Tamkin, A., Tajdini, A., Steiner, B., Li, D., Durmus, E., Perez, E., et al. Studying large language model generalization with influence functions. arXiv preprint arXiv:2308.03296, 2023.
Henighan et al.
(2023) Henighan, T., Carter, S., Hume, T., Elhage, N., Lasenby, R., Fort, S., Schiefer, N., and Olah, C. Superposition, memorization, and double descent. Transformer Circuits Thread, 6:24, 2023.
Jacot et al. (2018) Jacot, A., Gabriel, F., and Hongler, C. Neural tangent kernel: Convergence and generalization in neural networks. Advances in Neural Information Processing Systems, 31, 2018.
Koh & Liang (2017) Koh, P. W. and Liang, P. Understanding black-box predictions via influence functions. In International Conference on Machine Learning, p. 1885–1894. PMLR, 2017.
Li et al. (2023) Li, K., Hopkins, A. K., Bau, D., Viégas, F., Pfister, H., and Wattenberg, M. Emergent world representations: Exploring a sequence model trained on a synthetic task. ICLR, 2023.
Loo et al. (2022) Loo, N., Hasani, R., Amini, A., and Rus, D. Evolution of neural tangent kernels under benign and adversarial training. Advances in Neural Information Processing Systems, 35:11642–11657, 2022.
Mohamadi et al. (2023) Mohamadi, M. A., Bae, W., and Sutherland, D. J. A fast, well-founded approximation to the empirical neural tangent kernel. In International Conference on Machine Learning, p. 25061–25081. PMLR, 2023.
Nanda et al. (2023) Nanda, N., Chan, L., Lieberum, T., Smith, J., and Steinhardt, J. Progress measures for grokking via mechanistic interpretability. arXiv preprint arXiv:2301.05217, 2023.
Olah (2022) Olah, C. Mechanistic interpretability, variables, and the importance of interpretable bases. URL https://transformer-circuits.pub/2022/mech-interp-essay, 2022.
Olah et al. (2020) Olah, C., Cammarata, N., Schubert, L., Goh, G., Petrov, M., and Carter, S. Zoom in: An introduction to circuits. Distill, 2020. doi: 10.23915/distill.00024.001. https://distill.pub/2020/circuits/zoom-in.
Ortiz-Jiménez et al. (2021) Ortiz-Jiménez, G., Moosavi-Dezfooli, S.-M., and Frossard, P. What can linearized neural networks actually say about generalization?
Advances in Neural Information Processing Systems, 34:8998–9010, 2021.
Power et al. (2022) Power, A., Burda, Y., Edwards, H., Babuschkin, I., and Misra, V. Grokking: Generalization beyond overfitting on small algorithmic datasets. arXiv preprint arXiv:2201.02177, 2022.
Roberts et al. (2022) Roberts, D. A., Yaida, S., and Hanin, B. The principles of deep learning theory, volume 46. Cambridge University Press, Cambridge, MA, USA, 2022.
Sharkey et al. (2025) Sharkey, L., Chughtai, B., Batson, J., Lindsey, J., Wu, J., Bushnaq, L., Goldowsky-Dill, N., Heimersheim, S., Ortega, A., Bloom, J., et al. Open problems in mechanistic interpretability. arXiv preprint arXiv:2501.16496, 2025.
Tsilivis & Kempe (2022) Tsilivis, N. and Kempe, J. What can the neural tangent kernel tell us about adversarial robustness? Advances in Neural Information Processing Systems, 35:18116–18130, 2022.

Appendix A Review of NTK theory

In this section, we review the derivation of Eq. (1), which motivates this line of study. As mentioned in the Introduction, a main claim of NTK theory is that a neural network trained under gradient flow with MSE loss learns the function (1), (2) at the end of training, as long as its trajectory stays in a "lazy limit" where the drift in parameters throughout training is parametrically small. The idea behind the derivation is that if we take a limit where the changes in a model's weights $W_\mu$ during training are small enough that we can truncate a Taylor expansion of the model in its weights to first order while maintaining a good approximation, then perhaps we can resum the linearized approximation to how the model changes at each step of gradient descent to obtain a closed-form formula for the function learned by the model.
In equations, given a neural network $f_i(x, W_\mu)$ with index $i$ running over the output neurons, the Taylor expansion for the change in the model after a step of gradient descent is
$$f_i(x, W_\mu(t+1)) - f_i(x, W_\mu(t)) = \sum_\mu \left.\frac{df_i(x)}{dW_\mu}\right|_{W_\mu = W_\mu(t)} dW_\mu + \ldots \tag{12}$$
where the $\ldots$ denote terms quadratic and higher in $dW_\mu$. Using the definition of gradient descent to replace $dW_\mu \to -\eta\, d\mathcal{L}/dW_\mu$, for $\eta$ the learning rate and $\mathcal{L}$ the loss function, as well as the fact that the loss function itself depends on the model evaluated on points in the training set, we can massage (12) into
$$f_i(x, W_\mu(t+1)) - f_i(x, W_\mu(t)) = -\eta \sum_j \sum_{\alpha} \frac{d\mathcal{L}}{df_j(x_\alpha)} \left[\sum_\mu \frac{df_i(x)}{dW_\mu}\frac{df_j(x_\alpha)}{dW_\mu}\right]_{W_\mu = W_\mu(t)} + \ldots\,. \tag{13}$$
The term in brackets on the RHS is the NTK, (2). (Throughout this section we also use that the NTK itself is time independent in the lazy limit.) Now suppose that we are allowed to truncate the $\ldots$ while maintaining a controlled approximation to the function learned by the neural network. Moreover, let's specialize to MSE loss. This yields
$$f_i(x, W_\mu(t+1)) - f_i(x, W_\mu(t)) = -\eta \sum_j \sum_{\alpha} \left(f_j(x_\alpha, t) - y_{j,\alpha}\right) K_{ij}(x, x_\alpha)\,. \tag{14}$$
If we could sum the LHS of (14) over all time steps $t$, we would get the desired closed-form formula for the function learned by the neural network in the linearized approximation. What remains is to remove the $t$-dependence on the RHS of (14). To do this, we apply (14) itself to the special case where the argument $x$ on the LHS is a point $x_\alpha$ in the training set, yielding roughly $f_j(x_\alpha, t) - y_{j,\alpha} = (1 - \eta K)^{t+1}\left(f_j(x_\alpha, t=0) - y_{j,\alpha}\right)$ up to indices. This turns the sum over time steps into a geometric series in $(1 - \eta K)$ whose resummation yields Eq. (1).
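The residual dynamics above can be checked numerically in a model that is exactly linear in its weights, where the lazy limit holds identically and $(1-\eta K)^t$ is exact rather than approximate. A minimal sketch under that assumption (this toy setup is ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 5, 8
X = rng.normal(size=(N, d))      # training inputs
y = rng.normal(size=N)           # training targets
w = rng.normal(size=d)           # initial weights of f(x) = w . x

K = X @ X.T                      # NTK of the linear model (constant in time)
eta, T = 0.01, 100

# run T steps of gradient descent on the MSE loss 0.5 * sum (f - y)^2
w_t = w.copy()
for _ in range(T):
    w_t -= eta * X.T @ (X @ w_t - y)

# linearized prediction: residual_T = (1 - eta K)^T residual_0
pred = np.linalg.matrix_power(np.eye(N) - eta * K, T) @ (X @ w - y)
```

For a genuinely nonlinear network the same comparison holds only approximately, and only while training stays lazy.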
The mathematical assumption required for these steps to go through is that we must be able to discard the higher-order terms in eqs. (12) and (13). Roughly, this can be imposed at the level of the model architecture in a scaling limit where we make the model much wider than it is deep, while initializing the weights with a $1/\text{width}$ scaling. See (Roberts et al., 2022) for a recent pedagogical review. We emphasize that the experiments in the main text go beyond the regime of validity of the derivation in this appendix in multiple ways (e.g. the experiments in the text are outside of the lazy regime, and in section 5 we use cross-entropy loss instead of MSE loss), so the theory should be taken only as initial motivation.

Appendix B Kernels used in the MLPs studied in this paper

For TMS, the model
$$f(x) = \mathrm{ReLU}(W^T W x + b) \tag{4}$$
implies that
$$\frac{\partial f_i(x)}{\partial W_{ai'}} = g_i(x)\left(W_{ai} x_{i'} + \delta_{ii'} W_{ak} x_k\right) \tag{15}$$
for $g_i = 1$ if $(W^T W x + b)_i > 0$ and $0$ otherwise, and the kernel is
$$K_{ij}(x_1, x_2) = g_i(x_1)\, g_j(x_2)\left[(W^T W)_{ij}(x_1 \cdot x_2) + (x_1)_j (W^T W x_2)_i + (x_2)_i (W^T W x_1)_j + \delta_{ij}(W x_1 \cdot W x_2)\right]. \tag{16}$$
For the MLP/modular arithmetic experiment,
$$f(x) = W^{(2)}(W^{(1)} x)^2 \tag{6}$$
implies the layer-1 Jacobian
$$\frac{\partial f_i(x)}{\partial W^{(1)}_{km}} = 2 W^{(2)}_{ki}(W^{(1)} x)_k\, x_m \tag{17}$$
and hence the layer-1 kernel
$$K^{(1)}_{ij}(x_1, x_2) = 4 \sum_k W^{(2)}_{ki} W^{(2)}_{kj} (W^{(1)} x_1)_k (W^{(1)} x_2)_k\, (x_1 \cdot x_2)\,. \tag{18}$$
The layer-2 Jacobian is
$$\frac{\partial f_i(x)}{\partial W^{(2)}_{ki'}} = \delta_{ii'}(W^{(1)} x)^2_k\,, \tag{19}$$
hence the layer-2 kernel is
$$K^{(2)}_{ij}(x_1, x_2) = \delta_{ij} \sum_k (W^{(1)} x_1)_k^2 (W^{(1)} x_2)_k^2\,. \tag{20}$$
The total kernel is their sum, $K = K^{(1)} + K^{(2)}$.
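For the quadratic MLP, the closed-form kernels (18) and (20) can be verified by contracting the analytic Jacobian (17) over parameters. A minimal sketch (shapes, names, and the random weights are illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)
c, h, d = 3, 4, 6                    # classes, hidden width, input dim
W1 = rng.normal(size=(h, d))
W2 = rng.normal(size=(c, h))
x1, x2 = rng.normal(size=d), rng.normal(size=d)

def layer1_kernel(x1, x2):
    """Eq. (18): K1_ij = 4 sum_k W2_ik W2_jk (W1 x1)_k (W1 x2)_k (x1 . x2)."""
    return 4 * (x1 @ x2) * (W2 * (W1 @ x1) * (W1 @ x2)) @ W2.T

def layer2_kernel(x1, x2):
    """Eq. (20): K2_ij = delta_ij sum_k (W1 x1)_k^2 (W1 x2)_k^2."""
    return np.eye(c) * np.sum((W1 @ x1) ** 2 * (W1 @ x2) ** 2)

def layer1_jacobian(x):
    """Eq. (17): df_i / dW1_km = 2 W2_ik (W1 x)_k x_m, shape (c, h, d)."""
    return 2 * W2[:, :, None] * (W1 @ x)[None, :, None] * x[None, None, :]

# contracting the Jacobians over parameters reproduces the layer-1 kernel
K1_from_jac = np.einsum('ikm,jkm->ij', layer1_jacobian(x1), layer1_jacobian(x2))
```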
It could be interesting, and probably tractable, to understand which properties of the weights and data lead to the mechanistic structure found in the main text. In general, we leave this for future work. As an appetizer, however, note that an especially striking result in the main text, the fact that for the MLP trained on modular arithmetic the Layer 1 eigenvalue cliff was present already at initialization, so that the use of Fourier features could perhaps have been predicted at initialization with the eNTK, is easily explained by the structure of Eq. (18). Eq. (18) contains a parameter-agnostic factor of $x_1 \cdot x_2$, which is zero for one-hot-encoded data points $(a, b)$ and $(a', b')$ unless either $a = a'$ or $b = b'$. This makes the Layer 1 kernel nonzero along each fixed-a and fixed-b line, and zero elsewhere. Moreover, the nonzero kernel entries at initialization are approximately identical for each choice of a or b by symmetry. Hence, the kernel at initialization has an approximate shift symmetry along the a and b axes, so it is approximately diagonalized by Fourier waves on each axis, with (nonzero) eigenvalues related to the weight initializations. In this example, the $x_1 \cdot x_2$ factor in (18) relied on the model architecture (specifically, the fact that each one-hot-encoded data point only touches two columns of the Layer 1 weight matrix), and its conversion to a shift-symmetric kernel relied on the shift symmetry of the dataset. This illustrates a broader point: the NTK (and its eigenstructure) inherits information about both already at initialization.
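The dot-product structure can be seen directly: with the one-hot pair encoding of $(a, b)$ as a concatenation of two one-hots (an encoding convention we assume here for illustration), the Gram matrix of the inputs is nonzero only along fixed-a and fixed-b lines, and Fourier waves on each axis are eigenvectors of the resulting shift-symmetric matrix:

```python
import numpy as np

p = 7

def encode(a, b):
    """One-hot pair encoding of the input (a, b)."""
    x = np.zeros(2 * p)
    x[a], x[p + b] = 1.0, 1.0
    return x

# Gram matrix: x_(a,b) . x_(a',b') = [a == a'] + [b == b'],
# flattening (a, b) -> a * p + b
G = np.kron(np.eye(p), np.ones((p, p))) + np.kron(np.ones((p, p)), np.eye(p))
G_check = np.array([[encode(a, b) @ encode(ap, bp)
                     for ap in range(p) for bp in range(p)]
                    for a in range(p) for b in range(p)])

# a Fourier wave along the a axis, v_(a,b) = cos(2 pi k a / p),
# is an eigenvector of the shift-symmetric Gram matrix (eigenvalue p for k != 0)
k = 2
v = np.cos(2 * np.pi * k * np.arange(p) / p)[:, None].repeat(p, axis=1).ravel()
```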
Appendix C Algorithm to find a basis of cliff eigenvectors that line up with frequency-sorted Fourier features in the modular arithmetic experiment

In section 4, by computing the Frobenius norm of the span of cliff eigenvectors with select Fourier feature families, we saw that the first cliff in the eNTK spectrum for a MLP trained on modular arithmetic contains the a and b Fourier families. However, the heat map of inner products of naive eNTK eigenvectors in the first cliff with feature vectors for the a and b Fourier families is not 1:1. See the left-hand side of Fig. 6, where in each square we plot the squared norm of the inner product of the $i$th eNTK eigenvector in the cliff with the $(\cos a, \sin a)$ or $(\cos b, \sin b)$ feature families. (This equals the maximal correlation with a phase-shifted cosine in (11), i.e. the best match over possible phases $\varphi_k^{(1)}$, $\varphi_k^{(2)}$.) This is because the cliff eigenvalues are almost exactly degenerate, due to the exact shift symmetry of the dataset, so the eigenvectors are only defined up to a random rotation. To find an orthogonal basis within the Layer 1 cliff that lines up 1:1 with the Fourier features, we can apply the following two-stage "graph smoothness" algorithm:

Figure 6: Results from applying the two-stage graph smoothness algorithm to rotate the eNTK eigenvector basis in the Layer 1 cliff. Left: heatmap showing the squared norm of the inner product of naive eNTK eigenvectors in the cliff with the $(\cos a, \sin a)$ or $(\cos b, \sin b)$ feature families. Right: the same heatmap for the basis vectors in the Layer 1 cliff found by applying the two-stage graph smoothness algorithm.

1. On the underlying two-torus of possible inputs $(a, b)$ for the modular arithmetic problem, we first construct a (size $(p^2, p^2)$) unnormalized Laplacian matrix for each cycle to be the matrix such that
$$v^T L_a v = \sum_{a,b}\left(v_{a,b} - v_{a+1 \bmod p,\, b}\right)^2\,, \tag{21}$$
$$v^T L_b v = \sum_{a,b}\left(v_{a,b} - v_{a,\, b+1 \bmod p}\right)^2\,, \tag{22}$$
penalizing cases where entries in the (length $p^2$) vector $v$ corresponding to lattice points that differ by 1 in the $a$, $b$ directions respectively are not the same. In practice, the matrices $L_a$, $L_b$ that realize (21), (22) are $L_1 \otimes I_p$ and $I_p \otimes L_1$ respectively, for $L_1$ the dimension $(p, p)$ matrix with 2 on the diagonal and $-1$ along the diagonals offset by 1 and at the wrap-around entries $(1, p)$ and $(p, 1)$.

2. We then diagonalize one of the two Laplacians, WLOG $L_a$, inside the span of the Layer 1 cliff. Concretely, we first compress $L_a$ by computing $A = c^T L_a c$ for $c \in \mathbb{R}^{N,k}$ the column-orthonormalized collection of eNTK eigenvectors in the Layer 1 cliff, then eigendecompose the compressed Laplacian, finding $A = U \Sigma U^T$ with $U \in \mathbb{R}^{k,k}$, and finally rotate the eNTK cliff by the eigenvector matrix of the compressed Laplacian, $c' = cU$. This sorts the directions in the cliff by $a$-smoothness. (Directions that minimize $v^T L_a v$ in a given vector space are the smoothest directions in that space by the operational definition (21) of the Laplacian, and we can minimize it via an eigendecomposition by the Rayleigh-Ritz theorem.)

3. Finally, we take the directions in the cliff corresponding to the bottom half of the $L_a$ eigenvalues (i.e., the bottom $2\lfloor p/2 \rfloor$ directions in practice) and diagonalize $L_b$ inside that span, using the same procedure as in step 2. This sorts those directions by $b$-smoothness.
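Steps 1-2 above can be sketched as follows (the function name and shapes are our own; the Laplacians follow Eqs. (21)-(22)):

```python
import numpy as np

p = 7

# 1D cyclic Laplacian: 2 on the diagonal, -1 on neighbours with wrap-around
L1 = 2 * np.eye(p) - np.roll(np.eye(p), 1, axis=0) - np.roll(np.eye(p), -1, axis=0)

# step 1: Laplacians on the two-torus, flattening (a, b) -> a * p + b
La = np.kron(L1, np.eye(p))   # penalizes variation in the a direction
Lb = np.kron(np.eye(p), L1)   # penalizes variation in the b direction

def diagonalize_in_span(L, c):
    """Step 2: compress L into the span of the column-orthonormal basis c,
    eigendecompose, and rotate c so its columns are sorted from smoothest
    to roughest wrt L (Rayleigh-Ritz)."""
    evals, U = np.linalg.eigh(c.T @ L @ c)   # eigh sorts eigenvalues ascending
    return c @ U, evals
```

Step 3 is the same call to `diagonalize_in_span` with `Lb`, restricted to the columns of the rotated basis with the smallest `La` eigenvalues.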
The logic is that if the Layer 1 cliff were spanned exactly by the $(\cos a, \sin a)$, $(\cos b, \sin b)$ Fourier families, then we would expect the spectrum wrt $L_a$ to contain degenerate near-zero values corresponding to the 'b' Fourier modes that are constant in the 'a' direction, followed by a tower of increasing pairs of nonzero eigenvalues corresponding to the sorted 'a' Fourier modes. Applying this algorithm to the span of the Layer 1 cliff, we indeed find a set of basis directions in 1:1 correspondence with frequency-sorted a and b Fourier modes. This is shown on the right-hand side of Fig. 6. An analogous rotation works to surface the 'sum' and 'difference' Fourier feature vectors as linear combinations of eigenvectors in the second eNTK cliff (see the left-hand column of Fig. 2). In this case, we apply the two-stage graph smoothness algorithm to the sum and difference directions, first constructing Laplacians
$$v^T L_{\mathrm{sum}} v = \sum_{a,b}\left(v_{a,b} - v_{a+1 \bmod p,\, b+1 \bmod p}\right)^2\,, \tag{23}$$
$$v^T L_{\mathrm{diff}} v = \sum_{a,b}\left(v_{a,b} - v_{a+1 \bmod p,\, b-1 \bmod p}\right)^2\,, \tag{24}$$
that penalize variation along the diagonals, then diagonalizing first the sum and then the difference Laplacian within the span of the second eigenvalue cliff. This situation is equivalent to two-stage diagonalization in the a and b directions up to a 45-degree rotation. In particular, $(a, b)$ and $(a+1, b+1)$ have the same $a - b$, so the difference family is flat wrt $L_{\mathrm{sum}}$, and vice versa.

Figure 7: Results from applying the two-stage graph smoothness algorithm wrt the sum and difference Laplacians (23), (24) to rotate the eNTK eigenvectors in the second cliff of the MLP/modular arithmetic experiment in section 4 at different epochs. Each plot is a heatmap showing the squared norm of the inner product of rotated basis vectors in the second cliff with the $(\cos(a+b), \sin(a+b))$ or $(\cos(a-b), \sin(a-b))$ feature families.

The results are shown in Fig. 7.
From the bottom right plot showing the results of the rotation at the end of training, we see that the procedure indeed surfaces a basis in the second eigenvalue cliff that (approximately) lines up with the sum and difference Fourier features. The heatmaps for each family form a ">" shape because stepping across the diagonal advances the argument of each Fourier mode by $2 \cdot 2\pi k/p$, so the Laplacian energy is minimal near $k \approx 0$ and $k \approx p/2$, and maximal near $k \approx p/4$. (Note that this rotation is imperfect because the low-energy "$k=1$" and "$k=14$" sum modes mix with the $L_{\mathrm{diff}}$ modes. Indeed, one can make the heatmaps in Fig. 7 appear closer to 1:1 by diagonalizing $L_{\mathrm{diff}}$ over the bottom half plus 4, instead of the bottom half, of the $L_{\mathrm{sum}}$ eigenvalues to unmix them, although we have not included the resulting plots here.) Moreover, in this case the rotation algorithm reveals that the eNTK eigenvectors align continuously with the Fourier directions on either side of the grokking phase transition. From comparing the change in the heatmaps between epochs 80-100 (when grokking happens in this experiment) and the other epochs, we see that nothing special happens at the grokking phase transition at the level of the eigenvectors. Instead, incipient alignment of the second cliff eigenvectors with the sum/diff modes was visible well before the phase transition, and continues after it. As discussed in the main text, the eigenvalues associated with the second cliff do jump in magnitude at the grokking phase transition.
Appendix D Analytic formulas for the layerwise eNTK of a Transformer

For the 1L Transformer/modular arithmetic experiment in section 5, layerwise eNTK formulas for the late layers of the model are as follows (with $i$ a class index in $d_{vocab}$, $a, b$ indices in $d_{model}$, and $m, n$ indices in $d_{mlp}$):

• For the unembedding matrix (of shape $(d_{vocab}, d_{model})$), the per-class layerwise eNTK is a diagonal delta function in class indices times the data-data Gram matrix of the residual stream at the final sequence ('=') position before unembedding, i.e., the shape $(N, N)$ tensor whose $ij$th entry is the inner product of the residual stream at the '=' position when we evaluate the model on the $i$th point in the dataset and the residual stream at the '=' position when we evaluate it on the $j$th point in the dataset. To see this, let $r(x)$ be the residual stream at the final sequence position before unembedding. The model output is
$$f_i(x) = (W_U)_{ia}\, r_a(x)\,, \tag{25}$$
hence
$$\frac{df_i(x)}{d(W_U)_{ja}} = \delta_{ij}\, r_a(x)\,. \tag{26}$$
Hence,
$$K^{W_U}_{ii'}(x, x') = \sum_{j,a} \frac{df_i(x)}{d(W_U)_{ja}} \frac{df_{i'}(x')}{d(W_U)_{ja}} = \delta_{ii'} \sum_a r_a(x)\, r_a(x')\,. \tag{27}$$

• For the output matrix $W_{out}$ of the MLP layer (of shape $(d_{model}, d_{mlp})$), the layerwise eNTK is the data-data Gram matrix of the MLP hidden-layer activation at the final sequence ('=') position times a fixed class-dependent rescaling from $W_U$. To see this, note that the MLP block contributes $(W_{out})_{am}\, \varphi(W_{in} r)_m$ additively to the residual stream, which gets multiplied by $(W_U)_{ia}$ to contribute to the final model output, $f_i(x)$. (The model output $f_i(x)$ also contains earlier contributions to the residual stream, but those don't depend on $W_{out}$.)
Hence,
$$\frac{df_i(x)}{d(W_{out})_{am}} = (W_U)_{ia}\, \varphi(W_{in} r(x))_m\,, \tag{28}$$
and
$$K^{W_{out}}_{ii'}(x, x') = \sum_{a,m} \frac{df_i(x)}{d(W_{out})_{am}} \frac{df_{i'}(x')}{d(W_{out})_{am}} = (W_U W_U^T)_{ii'} \sum_m \varphi(W_{in} r(x))_m\, \varphi(W_{in} r(x'))_m\,. \tag{29}$$

• For the input matrix $W_{in}$ of the MLP layer (of shape $(d_{mlp}, d_{model})$), the layerwise eNTK is the data-data Gram matrix of the residual stream going into the MLP layer at the final sequence ('=') position times a data-dependent prefactor from the part of the model "after" $W_{in}$ in the inference-time direction of the model. To see this, we again take the starting point that the MLP block contributes $(W_U)_{ib}(W_{out})_{bn}\, \varphi((W_{in})_{nc} r_c)$ to the model output $f_i(x)$. Hence,
$$\frac{df_i(x)}{d(W_{in})_{ma}} = (W_U W_{out})_{im}\, \varphi'(W_{in} r(x))_m\, r_a(x)\,. \tag{30}$$
Hence,
$$K^{W_{in}}_{ii'}(x, x') = \sum_{a,m} \frac{df_i(x)}{d(W_{in})_{ma}} \frac{df_{i'}(x')}{d(W_{in})_{ma}} = \sum_m (W_U W_{out})_{im}\, (W_U W_{out})_{i'm}\, \varphi'(W_{in} r(x))_m\, \varphi'(W_{in} r(x'))_m \sum_a r_a(x)\, r_a(x')\,. \tag{31}$$
If the activation function is a ReLU, the data-dependence of this prefactor is just a binary gate that sets entries in $K^{W_{in}}_{ii'}$ to 0 if the arguments are negative. Note that across the examples (27), (29), (31), the layerwise eNTK factorizes into a kernel built from Jacobians of the model wrt the model activation after the layer times a data-data Gram matrix of the model activations preceding the layer. In fact, this pattern generalizes to deeper Transformers. In a model with 2+ Transformer blocks, the dependence of the output on a given parameter matrix $W_c$ takes the schematic form
$$f_i = W_U * \big((\text{current block after } W_c) \circ W_c * r_{in}\big) + W_U * F\big((\text{current block after } W_c) \circ W_c * r_{in}\big)\,, \tag{32}$$
where $r_{in}$ parametrizes the activations preceding $W_c$.
(Here we have suppressed the index structure, but $W_U$ carries the class index $i$ and all other indices are contracted.) The first term on the RHS of (32) comes from the direct contribution of the current block to the residual stream, and the second, which is absent in eqs. (27)-(31), comes from how later blocks use the output of the current block as inputs. Differentiating, we get schematically that
$$\frac{df_i(x)}{dW_c} = W_U * \left(1 + \frac{dF}{d(\text{current block})}\right) * \frac{d(\text{rest of current block})}{dW_c} * r_{in}(x)\,. \tag{33}$$
If we "square" (33) to form the layerwise eNTK, we again find the same pattern, with the bracketed term giving rise to a kernel built from per-class Jacobians of the model wrt the model activation right after $W_c$, along with a class-independent data-data Gram matrix of the model activation right before $W_c$.

Appendix E Towards unsupervised feature detection with the eNTK

In the main text of this paper, we demonstrated a posteriori that eNTK eigenvectors align with ground-truth features by computing their alignment with feature vectors in toy models. It would be interesting, and practically useful, to promote our checks to an approach that unsupervisedly matches a candidate eNTK eigendirection with a human-interpretable semantic meaning. In this appendix, we show how one can do this for the MLP/modular arithmetic model discussed in section 4. Much has been written in the interpretability literature about how to unsupervisedly assign a semantic meaning to a candidate feature that maps a model input to a scalar score, $F(\text{model input}) \to \mathbb{R}$. (For example, the activations of individual model neurons, or SAE neurons, have this type signature.) A selection of strategies for this problem is discussed in (Olah et al., 2020). One concrete "hill-climbing" idea is to generate, then try to interpret, synthetic inputs that locally maximize the scalar score.
That is, given such a function, one can feed the model random noise; do gradient ascent on the scalar score over input directions until we reach a local maximum; then see if the resulting inputs are interpretable, over a collection of different starting points. The (per-class) eigenvectors of the eNTK have length $N$, for $N$ the size of the dataset used to construct the eNTK. Given such an eigendirection $u$, a natural way to convert it to a scalar score on a model input $x_a$ is to take the inner product between the direction and the eNTK itself evaluated on $x_a$ and the points in the dataset,
$$s_u(x_a) = \sum_i u_i\, K(x_i, x_a)\,. \tag{34}$$
(Thanks to Tom Ingebretsen Carlson for discussions of this point.) Applying the "hill-climbing" idea to the score (34) for linear combinations of eNTK eigendirections found in Appendix C to correspond to b Fourier features at different frequencies (the results for eigendirections corresponding to a Fourier features are similar, localized to the other half of the input directions), we find that a random input vector, after hill-climbing wrt a mode of frequency $k$, converges on a configuration with $k$ peaks localized to the b half of the input directions. See Figure 8.

Figure 8: Results from doing gradient ascent to extremize the score (34) wrt eNTK eigendirections corresponding to b Fourier modes with the listed frequencies, starting from the random input in the left-hand panel.

Hence, if somebody had handed us one of the eigendirections discussed in Appendix C without a priori knowledge that the model used Fourier modes to implement the modular arithmetic algorithm, this method could plausibly have been used to discover that each eNTK direction corresponds to Fourier modes on the correct arguments and frequencies.
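A minimal version of this hill-climbing loop, using central finite differences as a stand-in for autodiff (the helper and its arguments are our own illustrative choices, not the paper's code):

```python
import numpy as np

def hill_climb(score, x0, lr=0.1, steps=300, eps=1e-4):
    """Gradient ascent on a scalar score over input directions,
    estimating the gradient with central finite differences."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        grad = np.zeros_like(x)
        for i in range(x.size):
            dx = np.zeros_like(x)
            dx[i] = eps
            grad[i] = (score(x + dx) - score(x - dx)) / (2 * eps)
        x += lr * grad
    return x

# toy check: the score is maximized at a known target input
target = np.array([1.0, -2.0, 0.5])
score = lambda x: -np.sum((x - target) ** 2)
x_star = hill_climb(score, np.zeros(3))
```

For the score (34), `score` would evaluate the eNTK between the candidate input and the dataset points and contract the result with the eigendirection $u$.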
Appendix F Robustness check that the second eNTK cliff appears at grokking for the MLP trained on modular arithmetic

Figure 9: eNTK spectra for a MLP trained on modular arithmetic around the time of grokking at different values of p (= 19, 23, 31). In each case, we find that the model has one cliff of size $4\lfloor p/2 \rfloor$ before grokking, and develops a second cliff of the same size after grokking.

In this section, we reproduce Fig. 3 for the MLP trained on modular arithmetic at different values of p, showing that the eNTK spectrum develops a second cliff at the grokking phase transition across different values of p. We focus on this result as a robustness check of section 4 because the first cliff is essentially symmetry-protected and has to be there across examples on theoretical grounds, as explained in Appendix B. In Fig. 9 we render the analog of Fig. 3 at the different values of p = (19, 23, 31) and α shown in the figure captions. In each case, we find that the model has one cliff of size $4\lfloor p/2 \rfloor$ before the epoch where grokking takes place for the run, and develops a second cliff of the same size after grokking.

Appendix G Robustness check for overlap of leading eNTK eigendirections and Fourier features at key frequencies for the Transformer trained on modular arithmetic

In this section, we reproduce Table 2 of the Frobenius overlap between the leading eigendirections of the eNTK for the Transformer/modular arithmetic experiment and Fourier families at the (training-run-dependent) key frequencies, for the first four random seeds in sequential order besides the one reported in the main text.
In each case, we train the model to 4000 epochs and report the number of key frequencies $n$ that appeared in each training run; the test accuracy after 4000 epochs; the Frobenius overlap between the first $n+1$ eNTK eigenvectors at each layer and various Fourier families at the key frequencies appearing in each respective training run; and, in the case of the MLPout and unembedding layers, the Frobenius overlap between the next $n$ eNTK eigenvectors and the Fourier families at the key frequencies appearing in each training run.

Table 3: Frobenius overlap of leading layerwise eNTK eigendirections in the Transformer/modular arithmetic experiment of section 5 with Fourier families at key frequencies across random seeds. Bolded numbers denote >50% alignment with a particular feature family. See the text in Appendix G for details on how we define the overlaps in the table.

Run 1 (4 key frequencies, test acc. 96.9%):
layer      | A+B    | sum    | diff | ctrl
OV         | 6.65/8 | 0.29   | 0.09 | 0.21
MLPin      | 7.34/8 | 0.06   | 0.04 | 0.24
MLPout (1) | 7.28/8 | 0.10   | 0.10 | 0.24
MLPout (2) | 0.13   | 4.35/8 | 2.43 | 0.27
U (1)      | 0.41   | 6.41/8 | 0.07 | 0.22
U (2)      | 6.90/8 | 0.28   | 0.16 | 0.21

Run 2 (4 key frequencies, test acc. 67.0%):
layer      | A+B    | sum    | diff | ctrl
OV         | 5.03/8 | 0.20   | 0.06 | 0.29
MLPin      | 5.45/8 | 0.21   | 0.03 | 0.14
MLPout (1) | 5.33/8 | 0.83   | 0.50 | 0.30
MLPout (2) | 0.88   | 3.43   | 0.77 | 0.23
U (1)      | 1.63   | 4.36/8 | 0.14 | 0.26
U (2)      | 4.32/8 | 0.72   | 0.32 | 0.26

Run 3 (2 key frequencies, test acc. 100%):
layer      | A+B    | sum    | diff | ctrl
OV         | 3.73/4 | 0.22   | 0.03 | 0.15
MLPin      | 3.82/4 | 0.11   | 0.01 | 0.12
MLPout (1) | 2.92/4 | 1.00   | 0.02 | 0.13
MLPout (2) | 0.98   | 2.70/4 | 0.23 | 0.14
U (1)      | 0.00   | 3.94/4 | 0.01 | 0.12
U (2)      | 3.94/4 | 0.00   | 0.02 | 0.15

Run 4 (3 key frequencies, test acc. 91.9%):
layer      | A+B    | sum    | diff | ctrl
OV         | 4.65/6 | 0.18   | 0.10 | 0.11
MLPin      | 4.63/6 | 0.09   | 0.03 | 0.20
MLPout (1) | 3.72/6 | 2.01   | 0.08 | 0.19
MLPout (2) | 1.00   | 2.30   | 1.59 | 0.20
U (1)      | 1.08   | 4.26/6 | 0.14 | 0.20
U (2)      | 3.62/6 | 0.12   | 1.42 | 0.21

Upon doing so, we find that the overlap of leading layerwise eigendirections with specific Fourier families at the key frequencies is robust across training runs.
In each case, the first $n+1$ eNTK eigendirections of the layerwise spectrum for the O+V matrices, MLPin, and MLPout, together with the next $n$ eigendirections of the unembedding layer, primarily overlap with the sum of a and b Fourier features at the respective key frequencies, while the first $n+1$ eigendirections of the unembedding layer primarily overlap with the 'sum' Fourier features, generalizing Table 2. However, after 4000 epochs these runs do not generically develop sharp gaps in the eigenspectrum. We speculate that this is related to our not having reached convergence, so that the overlaps reported in Table 3 represent incipient alignment of the type discussed in section C. We save a full across-seed analysis for future work.
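As a closing note on methodology, the Frobenius overlaps reported in Tables 2 and 3 can be computed as the squared Frobenius norm of the matrix of inner products between a set of eigendirections and a feature family. A minimal sketch (the function name, shapes, and orthonormality convention are our own assumptions, not taken from the paper's code):

```python
import numpy as np

def frobenius_overlap(eigvecs, features):
    """Overlap between the span of column-orthonormal eNTK eigendirections
    (shape (N, k)) and column-orthonormal feature vectors (shape (N, m)):
    the squared Frobenius norm of the matrix of inner products. A score of
    m means the feature family lies entirely inside the eigenspace; a score
    near 0 means the two subspaces are orthogonal."""
    return float(np.linalg.norm(eigvecs.T @ features, ord="fro") ** 2)

# sanity check with a random orthonormal basis
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(20, 6)))
eigvecs = Q[:, :5]      # a 5-dimensional "cliff"
inside = Q[:, :4]       # 4 feature directions inside the cliff
outside = Q[:, 5:6]     # 1 feature direction orthogonal to it
```

Under this convention, a value such as 3.86 out of a maximum of 4 in the 'A+B' column of Table 2 indicates near-complete containment of the four feature directions in the eigenspace.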