Paper deep dive

The Geometry of Self-Verification in a Task-Specific Reasoning Model

Andrew Lee, Lihao Sun, Chris Wendler, Fernanda Viégas, Martin Wattenberg

Year: 2025 · Venue: arXiv preprint · Area: Mechanistic Interp. · Type: Empirical · Embeddings: 87

Models: Qwen2.5-3B

Intelligence

Status: succeeded | Model: google/gemini-3.1-flash-lite-preview | Prompt: intel-v1 | Confidence: 96%

Last extracted: 3/12/2026, 6:37:44 PM

Summary

The paper investigates the internal mechanisms of self-verification in reasoning models using a task-specific model trained on the CountDown task. By combining a top-down analysis (linear probes that surface verification-related GLU weight vectors) with a bottom-up analysis (identifying previous-token attention heads), the authors localize a verification circuit and demonstrate that disabling as few as three attention heads deactivates the verification-related GLU vectors, effectively disabling the model's self-verification capability.

Entities (5)

CountDown · task · 100%
DeepSeek-R1 · model · 100%
Gated Linear Unit · architecture-component · 100%
R1_CountDown · model · 100%
Previous-token heads · attention-mechanism · 95%

Relation Signals (3)

R1_CountDown → performs task → CountDown

confidence 100% · We study this question by training a model using DeepSeek R1's recipe on the CountDown task.

Previous-token heads → enables → Self-Verification

confidence 95% · Through causal analyses, we find that disabling previous-token heads disables model verification.

Gated Linear Unit → encodes → Verification Tokens

confidence 90% · Top-down, we find Gated Linear Unit (GLU) weights encoding verification-related tokens.

Cypher Suggestions (2)

Identify models that utilize the CountDown task for verification analysis. · confidence 95% · unvalidated

MATCH (m:Model)-[:PERFORMS_TASK]->(t:Task {name: 'CountDown'}) RETURN m

Find all components identified as part of the verification circuit. · confidence 90% · unvalidated

MATCH (c:Component)-[:PART_OF]->(v:Circuit {name: 'Verification Circuit'}) RETURN c

Abstract

How do reasoning models verify their own answers? We study this question by training a model using DeepSeek R1's recipe on the CountDown task. We leverage the fact that preference tuning leads to mode collapse, yielding a model that always produces highly structured chain-of-thought sequences. With this setup, we do top-down and bottom-up analyses to reverse-engineer how the model verifies its outputs. Top-down, we find Gated Linear Unit (GLU) weights encoding verification-related tokens, such as "success" or "incorrect". Bottom-up, we find that "previous-token heads" are mainly responsible for self-verification in our setup. Our analyses meet in the middle: drawing inspiration from inter-layer communication channels, we use the identified GLU weights to localize as few as three attention heads that can disable self-verification, pointing to a necessary component of a potentially larger verification circuit. Finally, we verify that similar verification components exist in our base model and a general reasoning DeepSeek-R1 model.

Tags

ai-safety (imported, 100%) · empirical (suggested, 88%) · mechanistic-interp (suggested, 92%)


Full Text

86,256 characters extracted from source content.


The Geometry of Self-Verification in a Task-Specific Reasoning Model

Andrew Lee (Harvard University), Lihao Sun (University of Chicago), Chris Wendler (Northeastern University), Fernanda Viégas (Harvard University, Google DeepMind), Martin Wattenberg (Harvard University, Google DeepMind)

Contact: andrewlee@g.harvard.edu. Work done entirely at Harvard. This project was done as part of ARBOR; for more information, see https://arborproject.github.io/

Abstract

How do reasoning models verify their own answers? We study this question by training a model using DeepSeek R1's recipe on the CountDown task. We leverage the fact that preference tuning leads to mode collapse, yielding a model that always produces highly structured chain-of-thought sequences. With this setup, we do top-down and bottom-up analyses to reverse-engineer how the model verifies its outputs. Top-down, we find Gated Linear Unit (GLU) weights encoding verification-related tokens, such as "success" or "incorrect". Bottom-up, we find that "previous-token heads" are mainly responsible for self-verification in our setup. Our analyses meet in the middle: drawing inspiration from inter-layer communication channels, we use the identified GLU weights to localize as few as three attention heads that can disable self-verification, pointing to a necessary component of a potentially larger verification circuit. Finally, we verify that similar verification components exist in our base model and a general reasoning DeepSeek-R1 model.

1 Introduction

Recent language models like OpenAI's o-series [27] and DeepSeek R1 [11] demonstrate impressive reasoning capabilities. Such models are trained with reinforcement learning (RL), in which they are rewarded when their final outputs are correct. Behaviorally, these models generate long chain-of-thought (CoT) [38] reasoning traces. There is an open question on whether monitoring their CoT is worthwhile, given a growing line of work suggesting that CoTs do not faithfully reflect a model's inner computations [36, 14, 2]. Can we monitor their hidden states instead? We take a step towards investigating this question by studying a model's inner mechanism for a crucial reasoning step: self-verification.

General reasoning entails a broad range of tasks, requiring a diverse set of skills. In order to conduct a systematic study, we train and analyze a task-specific reasoning model using the same recipe as DeepSeek R1. We limit the scope of our study to a specific task that requires search, a core reasoning skill broadly applicable to many reasoning tasks. We also select a task in which we can anticipate the verification mechanism ahead of time, making our analyses easier. Namely, we study CountDown [10, 9, 32, 39], in which a set of numbers (operands) and a target number are given, and the model must find the right arithmetic combination of the operands to reach the target number. Because the target number is specified in the context, we can expect attention heads to play a role in verification and shed light on other relevant weights and subspaces pertaining to self-verification. Studying a task-specific model has a second, non-obvious benefit: training language models with RL (i.e., with preference signals) can lead to mode collapse towards majority preferences, significantly reducing the diversity of their outputs [13, 29, 21, 35].
Luckily, in the context of model interpretability, this means that our task-specific model converges to always generating well-structured CoT sequences, allowing us to easily and systematically parse its reasoning trace (e.g., see Table 1). Given our setup, we conduct "top-down" and "bottom-up" analyses to reverse-engineer how the model verifies its own predictions. Our two analyses meet in the middle, revealing key subspaces relevant for model verification.

Going top-down, we leverage linear probes to find Gated Linear Unit (GLU) vectors in late layers that often encode tokens relevant for verification. Interestingly, these vectors also seem to correlate with English or Chinese tokens, like "success" or "不完" ("failed"). Furthermore, the antipodal directions of these vectors also encode the antonyms of correct or incorrect tokens.

Going bottom-up, given the nature of our task, we hypothesize and verify that attention heads play a significant role. We find "previous-token heads" (attention heads that attend to previous occurrences of the current token) that attend to the provided solution in the context. Previous-token heads have been studied before, for instance in the context of induction heads [26]. Through causal analyses, we find that disabling previous-token heads disables model verification.

Our two analyses meet in the middle: we find that disabling previous-token heads also deactivates our GLU vectors. Inspired by inter-layer communication channels [7, 19], we look for previous-token heads that most align with the "receptive field" of our GLU vectors, allowing us to localize as few as three attention heads that reliably disable model verification. Thus our work finds necessary components of a potentially larger verification circuit. Finally, we verify that similar verification components exist in our base model prior to RL, as well as in a general reasoning model, DeepSeek-R1-Distill-Qwen-14B.

Obviously, most reasoning tasks do not provide an easily verifiable solution in the context. However, by illustrating a thorough mechanism of verification in our simplified setup, we take a step towards the possibility of monitoring and interpreting a model's inner computations in its hidden states.

2 Notations, Key Terminologies

We first establish key terminologies and notations. A Transformer's forward pass first embeds the input using weights $W_E \in \mathbb{R}^{d \times V}$. The embeddings go through $L$ Transformer blocks, yielding hidden states $x^\ell \in \mathbb{R}^d$, $\ell \in [L-1]$. The last layer's hidden state, $x^{L-1}$, is "unembedded", or projected back to the token embedding space using $W_E$, and the nearest neighboring token embedding of $W_E^\top x^{L-1}$ is output. Each block consists of attention heads and Gated Linear Units (GLUs) [34].

Attention. Each attention head consists of key ($W_K$), query ($W_Q$), value ($W_V$), and output ($W_O$) weights.
An attention pattern $A$ is computed using key and query weights:

$A = \mathrm{softmax}(x_i^\top W_Q^\top W_K x_i)$   (1)

where $W_Q^\top W_K$ is sometimes referred to as a "QK circuit". $A$ is used to scale the "OV circuit" ($W_O W_V$) to produce an output for each head:

$h(x) = (A \otimes W_O W_V) \cdot x$   (2)

Gated Linear Units and GLU_out Vectors. Given a Gated Linear Unit (GLU) block:

$\mathrm{GLU}(x) = (\sigma(W_{gate} x) \odot W_{up} x)\, W_{out}$   (3)

where $W_{gate}, W_{up}, W_{out} \in \mathbb{R}^{d_{glu} \times d}$, we decompose it as follows:

$M = \sigma(W_{gate} x) \odot W_{up} x, \quad M \in \mathbb{R}^{d_{glu}}$   (4)

$\mathrm{GLU}(x) = M^\top W_{out} = \sum_j^{d_{glu}} m_j v_j$   (5)

where $m_j$ is the j-th entry of $M$ and $v_j$ is the j-th row of $W_{out}$. Importantly, the output of each GLU block can be thought of as a weighted sum of the vectors $v_j$ (the rows of $W_{out}$), where the weights are determined by the activations $M$. We refer to elements of $M$ as GLU neurons and to the vectors $v_j$ as GLU_out vectors. For more details, we refer readers to [7].
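To make the decomposition in Eqs. 3-5 concrete, here is a minimal PyTorch sketch (not the authors' code; dimensions are illustrative, and sigma is taken to be SiLU as in Qwen2.5) that computes a GLU block directly and as a weighted sum of GLU_out vectors, then checks that the two forms agree:

import torch
import torch.nn.functional as F

d, d_glu = 8, 32                       # illustrative dimensions
W_gate = torch.randn(d_glu, d)
W_up = torch.randn(d_glu, d)
W_out = torch.randn(d_glu, d)
x = torch.randn(d)

# Eq. 4: per-neuron activations M (sigma = SiLU)
M = F.silu(W_gate @ x) * (W_up @ x)

# Eq. 3: the standard GLU forward pass
glu_direct = M @ W_out

# Eq. 5: the same output as a weighted sum of GLU_out vectors v_j (rows of W_out)
glu_sum = torch.stack([M[j] * W_out[j] for j in range(d_glu)]).sum(dim=0)

assert torch.allclose(glu_direct, glu_sum, atol=1e-5)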
A conversation between User and Assistant. The user asks a question, and the Assistant solves it. The assistant first thinks about the reasoning process and then provides the user with the answer.

User: Using the numbers [20, 14, 40], create an equation that equals 28. You can use basic arithmetic operations (+, -, *, /) and each number can only be used once. Show your work in <think> </think> tags. And return the final answer in <answer> </answer> tags, for example <answer> (1 + 2) / 3 </answer>.

Assistant: Let me solve this step by step. <think> We have the numbers 20, 14, and 40. We need to use these numbers to make an equation that equals 28 using basic arithmetic operations. Let's try different combinations:
40 - 14 - 20 = 10 - 20 = -10 (not 28)
40 - 14 + 20 = 26 + 20 = 46 (not 28)
40 + 14 - 20 = 54 - 20 = 34 (not 28)
40 + 14 + 20 = 54 + 20 = 74 (not 28)
40 * 14 / 20 = 560 / 20 = 28 (this works)
So, the equation that equals 28 is 40 * 14 / 20. </think> <answer> (40 * 14) / 20 </answer>

Table 1: CountDown Task. The model must find an arithmetic combination of the operands to reach the specified target number. We leverage the fact that preference tuning leads to mode collapse, resulting in a model that consistently generates structured CoT tokens that we can easily parse.

3 Training Task-Specific Reasoning Models

We apply DeepSeek R1-Zero's setup with Qwen2.5-3B as our base model (hyperparameters: Appendix A; we use TinyZero: https://github.com/Jiayi-Pan/TinyZero/tree/main). Our task, CountDown, is a simple testbed frequently used to study recent reasoning models [10, 9, 39, 32]: given a set of 3 or 4 operands (e.g., 19, 36, 55, 7) and a target number (e.g., 65), the task is to find the right arithmetic combination of the operands that reaches the target number (i.e., 55 + 36 - 7 - 19). The model is given two rewards: an accuracy reward for reaching the correct final answer, and a format reward for generating its CoT tokens between "<think>" and "</think>" tokens. For more details on how R1-Zero is trained, see [11]. We refer to our task-specific model as R1_CountDown.

One advantage of studying a specific task is that preference training leads to mode collapse [13, 29, 21, 35], reducing generation diversity. In our context, this is desirable, as the model converges to generating a highly structured CoT sequence (see Table 1). This allows us to easily parse the model's CoT. Namely, the model enumerates through many attempts, always marking each attempt as either "(this works)" or "(not ans)". Thus, we can study the model's hidden states at specific timesteps, such as right before it produces either "this" or "not", which we refer to as t_valid and t_invalid. We refer to the hidden states at these timesteps as x_Valid and x_Invalid. We refer to the timestep in the prompt at which the target number is specified as t_ans.

4 Components for Self-Verification in CountDown

Here we present a series of analyses to identify weights and subspaces relevant for verification. We do a "top-down" analysis to find relevant GLU vectors in late layers, and a "bottom-up" analysis to find relevant attention heads in early layers. Our analyses meet in the middle to identify relevant subspaces for verification. We verify the role of these weights and subspaces via causal experiments.

4.1 Top-Down: Finding Verification-Related GLU Vectors

[Figure 1: Averaged LogitLens from 300 samples. We see tokens related to verification ("success", "不合") in the last few layers. (a), (b) show the top tokens when (in)correct solutions are reached. (c), (d) show results from intervening on either GLU weights or attention heads, given a correct solution. For (c), while the model is less certain (P("this") drops from 0.98 to 0.54), we still see tokens such as "success" showing up. For (d), we no longer see any tokens related to "success", and the model's final next-token predictions closely resemble those when the model has not found a solution (b).]

LogitLens. We start our analysis by applying LogitLens [23] to compare the hidden states x_Valid and x_Invalid on a sample size of 300. We apply the unembedding layer to the hidden states $x^\ell$ at all intermediate layers and inspect the resulting nearest neighboring tokens across the 300 samples. Figure 1(a, b) shows our results in the late layers (see Appendix Figure 5 for more layers). Interestingly, we see tokens such as "SUCCESS", "yes", and "bingo" show up for x_Valid, and "不符合" ("does not conform"), "not", and "不合" ("incompatible") for x_Invalid.
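The LogitLens readout just described reduces to projecting each intermediate hidden state through the unembedding matrix. A minimal sketch with random stand-ins (the real analysis uses R1_CountDown's hidden states and tokenizer; all names and sizes here are illustrative):

import torch

L, d, V = 36, 8, 100                                 # illustrative sizes
hidden_states = [torch.randn(d) for _ in range(L)]   # x^ell at one timestep (e.g., t_valid)
W_E = torch.randn(V, d)                              # token (un)embedding matrix

for ell, x in enumerate(hidden_states):
    logits = W_E @ x                                 # unembed the intermediate hidden state
    top5 = torch.topk(logits, k=5).indices
    # Decoding top5 with the tokenizer reveals which tokens each layer promotes;
    # in the paper, "success"-like tokens surface in the last few layers.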
Interestingly, we often observe English tokens for x_Valid and Chinese tokens for x_Invalid. What drives these tokens to appear?

Probing. To answer this question, we train linear probes $W^\ell \in \mathbb{R}^{2 \times d}$ at every layer $\ell$, using hidden states from the timesteps right before "this" or "not" is predicted (i.e., when an attempt has been produced and the open-parenthesis token "(" has been generated). $W^\ell$ is a linear mapping from the hidden states $x^\ell$ to a binary label of whether the model has found the solution. Our training data is $D = \{(x^\ell_i, y_i)\}_{i=0}^{N-1}$, $y_i \in \{0\ (\text{"not"}),\ 1\ (\text{"this"})\}$, with N = 327,680. We solve for $W^\ell$ to fit $y = \mathrm{softmax}(W^\ell x^\ell)$ using gradient descent (hyperparameters in Appendix C). Validation accuracy (N = 512) per layer is provided in the Appendix (Figure 6), with accuracy usually staying above 90% after the first few layers. High accuracy suggests that our probing vectors W[0], W[1] identify a direction in the model's activation space that linearly separates points of x_Valid from points of x_Invalid (i.e., linearly separable subspaces).

Such vectors can steer the model. Simply adding W[0] or W[1] into the hidden states can push x towards x_Valid or x_Invalid, and change the model's output to indicate that it has (or has not) found a solution, even when it has not (or has). We provide qualitative examples of steering results in Appendix E.

GLU_Valid, GLU_Invalid Vectors. Our probe W tells us that mid-layer activations can be linearly separated to identify solved cases (x_Valid) from unsolved cases (x_Invalid), but it also serves a secondary purpose. Namely, we can use W to identify GLU_out vectors of interest [15]. Per layer, we select the top k (= 50) GLU_out vectors by their cosine similarity to W^ℓ[0] or W^ℓ[1]. One can consider these vectors as the weights that contribute most towards the W^ℓ[0] (no solution) or W^ℓ[1] (found solution) directions. We refer to them as GLU_Invalid and GLU_Valid vectors. This results in k × L × 2 GLU_Valid/Invalid vectors (0.9% of the model's GLU_out vectors).
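A sketch of the probing and selection steps just described, using synthetic data and illustrative sizes (the paper trains on N = 327,680 real hidden states and takes k = 50 per layer; the Adam-style optimizer settings follow Appendix C but are otherwise an assumption):

import torch
import torch.nn.functional as F

d, N = 8, 1024
X = torch.randn(N, d)                    # stand-ins for hidden states at t_valid / t_invalid
y = torch.randint(0, 2, (N,))            # 0 = "not", 1 = "this"

# Fit the probe W^ell with gradient descent.
W = torch.zeros(2, d, requires_grad=True)
opt = torch.optim.AdamW([W], lr=1e-4, weight_decay=0.01)
for _ in range(200):
    opt.zero_grad()
    loss = F.cross_entropy(X @ W.T, y)
    loss.backward()
    opt.step()

# Select the GLU_out vectors most similar to W[1] (the "found solution" direction).
W_out = torch.randn(32, d)               # stand-in for one layer's GLU_out rows
sims = F.cosine_similarity(W_out, W[1].detach().unsqueeze(0), dim=-1)
glu_valid_idx = torch.topk(sims, k=5).indices   # paper: top k = 50 per layer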
Vector | Nearest Neighbors
W[0] | 不完 (unfinished), 不了 (unable), 不 (not), 不在 (absent), 不该 (should not)
W[1] | Exactly, >(, =yes, =YES, =:, ===, quis, esac, ####
(26, 744) | 未能 (failed), 不够 (not enough), nicht (not), 不像 (not like), 达不到 (can't reach)
(26, 6619) | 缺乏 (lack), 缺少 (lack), 不方便 (inconvenient), lacks, 难以 (difficult), 未能 (failed)
(27, 9766) | 是不可能 (impossible), neither, 看不到 (can't see), 不存在 (doesn't exist)
(27, 4971) | inefficient, 没能 (failed), 不方便 (inconvenient), Danger, disadvantage, 不利于 (not conducive)
(29, 6676) | yes, Yes, Bindable, exactly, Yes, "Yes, yes, Yep, Exactly, included
(27, 10388) | mirac, 乐观 (optimism), 安然 (safely), Relief, 幸 (fortunate), .isSuccess
(30, 8233) | correctly, 正确 (correct), 恰当 (appropriate), accurately, 符合 (conform)
-1×(26, 744) | 慎 (careful), 足 (sufficient), 同等 (equal), tend, ONDON, 足以 (enough)
-1×(26, 6619) | 不仅能 (not only can), 不错的 (good), 具有良好 (have good), 总算 (finally)
-1×(27, 9766) | might, maybe, may, 有时候 (sometimes), 部分地区 (some areas), .some
-1×(27, 4971) | successfully, successful, 顺利 (smooth), 成功 (successful), 删除成功 (deletion successful)
-1×(29, 6676) | 都不 (neither), 不太 (not quite), neither, 不予 (not given), 没见过 (never seen)
-1×(27, 10388) | 失败 (failure), failure, 不良 (bad), 不利 (unfavorable), 糟糕 (bad), 失误 (mistake)
-1×(30, 8233) | wrong, 不良 (bad), incorrect, wrong, invalid, bad, inappropriate, invalid

Table 2: GLU_out vectors relevant to verification, and their nearest neighbors. W[0], W[1] indicate our probe model. "(x, y)" indicates the GLU_out vector at layer x, index y. "-1×(x, y)" (marked in red in the paper) indicates the antipode of the GLU_out vector at layer x, index y. Interestingly, we observe a correlation between valid/invalid vectors and English versus Chinese tokens.

Unembedding the GLU_Valid/Invalid vectors reveals which tokens get promoted when they are activated. Table 2 shows their nearest neighbors in the model's token embedding space. We observe that most interpretable GLU_Valid/Invalid neurons occur in the second half of the layers. Interestingly, we again note that there seems to be a correlation between GLU_Valid/Invalid and English versus Chinese tokens, hinting at the underlying geometry of x_Valid/Invalid and the model's embedding space.

While the GLU_Valid/Invalid vectors encode verification-related tokens, what role do they play? This can be partially answered by applying LogitLens again on 300 samples, but now "turning off" the GLU_Valid vectors (< 1% of total GLU vectors) by scaling them to zero. Figure 1(c) shows the results: while the probability of verification-related tokens drops (e.g., P("this") drops from 0.98 to 0.70 in layer 36), the end behavior remains the same (i.e., "this" is still the top-1 token). This tells us that GLUs do not fully explain self-verification. We present a more thorough causal analysis in Section 4.4.
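The nearest-neighbor readout behind Table 2 is a similarity lookup against the token embedding matrix. A minimal sketch (sizes and names are illustrative, not the authors' code):

import torch
import torch.nn.functional as F

V, d = 100, 8
W_E = torch.randn(V, d)                  # token embedding matrix
v = torch.randn(d)                       # one GLU_out vector, e.g. (30, 8233)

def nearest_tokens(vec, k=5):
    # Cosine similarity of vec against every token embedding.
    sims = F.cosine_similarity(W_E, vec.unsqueeze(0), dim=-1)
    return torch.topk(sims, k=k).indices  # decode with the tokenizer in practice

top = nearest_tokens(v)                  # e.g. "correct"-like tokens
top_antipode = nearest_tokens(-v)        # the antipode often encodes antonyms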
4.2 Bottom-Up: Previous-Token Attention Heads for Verification (A_Prev)

We next inspect the role of attention heads in verification. One motivation for choosing CountDown as our task is that the task specifies the target number in the context. Thus we can posit that a Transformer could verify its CoT tokens by comparing them against the specified target number (at timestep t_ans). Such a hypothesis provides an entry point for our bottom-up analysis. We test our hypothesis by inspecting the attention patterns whenever the model's CoT produces the correct answer. We filter for attention heads that spend at least 10% of their attention on t_ans, and refer to these as previous-token heads (notated A_Prev). Previous-token heads are not new: they were first discussed in the context of induction heads [26]. We identify 33 previous-token heads (out of a total of 576 heads). Interestingly, most previous-token heads occur roughly in the first half of the layers (except for one at layer 31, all are at or before layer 22). In Section 4.4 we demonstrate via causal interventions that disabling previous-token heads can disable model verification. But first, what is the relationship between the GLU_Valid/Invalid vectors and the A_Prev heads? Below we adapt inter-layer communication channels to understand their relationship.
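Identifying A_Prev reduces to thresholding attention mass on t_ans. A sketch over synthetic attention patterns (the shapes and the 10% threshold follow the text; everything else is illustrative):

import torch

n_layers, n_heads, seq_len = 36, 16, 64
# Attention row from the current (verification) timestep over the context,
# for every head: shape [n_layers, n_heads, seq_len].
attn = torch.softmax(torch.randn(n_layers, n_heads, seq_len), dim=-1)
t_ans = 12                               # prompt position of the target number

# Previous-token heads: heads spending at least 10% of attention on t_ans.
is_prev = attn[..., t_ans] >= 0.10
A_prev = [(l, h) for l in range(n_layers) for h in range(n_heads) if is_prev[l, h]]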
4.3 Putting GLU_Valid and A_Prev Together: Identifying Verification Subspaces (Polytopes)

We identify subspaces for self-verification by studying the relationship between GLU_Valid vectors and A_Prev attention heads. As a reminder, A_Prev heads usually occur in the first half of the layers (1 to 22), while GLU_Valid vectors usually occur in the later half (18 to 36). We hypothesize and empirically verify that A_Prev activates the GLU_Valid vectors.

First, we borrow from neuroscience to define receptive fields [25]. Consider a single neuron k, which computes an activation function $f^k: \mathbb{R}^d \to \mathbb{R}$. The receptive field of neuron k is defined as

$S_k = \{ x \in \mathbb{R}^d \mid f^k(x) > 0 \}$   (6)

In simpler terms, $S_k$ is the subspace that triggers the neuron. In the context of GLUs, this means

$S_k = \{ x \in \mathbb{R}^d \mid \sigma(W^k_{gate} x) \odot W^k_{up} x > 0 \}$   (7)

Now consider a set of neurons K and the intersection of all their receptive fields, $S_K = \bigcap_i^{|K|} S_i$. $S_K$ can be considered a polytope in the model's activation space, parameterized by $\{W^i_{gate}, W^i_{up}\}_{i=1}^{|K|}$. Here we demonstrate that the receptive fields of GLU_Valid yield polytopes for self-verification. Namely, we identify a small subset of as few as three previous-token heads that can disable self-verification. To do so, we check how strongly the weights of each previous-token head (as opposed to hidden states) activate the GLU_Valid neurons.

The output of each head is its OV circuit (i.e., $W_O W_V$), scaled according to some attention distribution. Meanwhile, the strength of each GLU_Valid activation is determined by its gating weights $W_{gate}$ and up-projection weights $W_{up}$. We score each previous-token head by how strongly it activates the GLU_Valid vectors on average:

$\mathrm{score}(A, \widehat{\mathrm{GLU}}_{Valid}) = \frac{1}{N} \sum_i^N \sigma(W^i_{gate} W_O W_V) \cdot W^i_{up} W_O W_V$   (8)

where $\widehat{\mathrm{GLU}}_{Valid} = \{W^i_{gate}, W^i_{up}\}_{i=0}^{N-1}$, $N = |\widehat{\mathrm{GLU}}_{Valid}|$, $W_O W_V$ is the OV circuit of attention head A, and $W^i_{gate}, W^i_{up} \in \mathbb{R}^d$. Note that this is akin to the inter-layer communication channels between model components studied in [7, 19], which score how strongly two components in different layers communicate via a "composition score":

$CS(W_1, W_2) = \frac{\lVert W_1 W_2 \rVert_F}{\lVert W_1 \rVert_F \cdot \lVert W_2 \rVert_F}$   (9)

where $W_1$ might be the OV component of one head and $W_2$ the QK component of another head at a later layer. Our formulation can be considered a composition score between attention heads and GLUs, using both $W_{gate}$ and $W_{up}$ in place of $W_2$, with some additional steps in between. Once we score each previous-token head using Eq. 8, we incrementally ablate one head at a time until we achieve perfect intervention scores (Section 4.4). Using this approach, we identify as few as three attention heads that can disable model verification. We notate this subset as A_Verif.

To summarize, we claim that the model has subspace(s) (polytope(s)), S_GLU_Valid, for self-verification. The model's hidden state enters this subspace when it has verified its solution. In our setting, given the nature of our task, previous-token heads A_Prev take the hidden state into this subspace; for other tasks, different components may be used. This subspace also activates verification-related GLU weights, promoting the likelihood of tokens such as "success" being predicted (Figure 3). We find that alternative hyperparameters or scoring functions can yield different subsets of previous-token heads that also disable self-verification; we discuss these results in Appendix F. This suggests that we have not identified a full circuit, but rather a critical component for verification. Also note that our scoring function makes simplifications: it ignores possible interactive effects across heads, as well as transformations (layer norms, GLUs) across layers. Regardless, our finding remains robust: a small subset of previous-token heads can disable verification.
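One reading of Eq. 8 as code: treat the head's OV circuit as a d×d matrix, push it through each GLU_Valid neuron's gate and up-projection rows, and average. This is a sketch of that reading under illustrative shapes, not the authors' implementation:

import torch
import torch.nn.functional as F

d, N = 8, 50                             # hidden size; number of GLU_Valid neurons
W_OV = torch.randn(d, d)                 # OV circuit W_O W_V of one head
W_gate = torch.randn(N, d)               # gate rows of the GLU_Valid neurons
W_up = torch.randn(N, d)                 # up-projection rows

gate_resp = F.silu(W_gate @ W_OV)        # (N, d): sigma(W_gate^i W_O W_V)
up_resp = W_up @ W_OV                    # (N, d): W_up^i W_O W_V
score = (gate_resp * up_resp).sum(dim=-1).mean()   # Eq. 8, averaged over neurons

# Heads are then ranked by this score and ablated greedily until the
# intervention succeeds (Section 4.4), yielding the A_Verif subset.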
4.4 Causal Interventions

We study the role of each component above with causal interventions. Our test set consists of 300 samples in which the model originally finds and correctly validates its solutions. For each test case, the model generates 100 tokens. Every time an attempt at a solution is made (i.e., "(this works)" or "(not ans)" is about to be predicted), we turn off some of the model weights as described below. We measure the intervention success rate: the percentage of times the model fails to validate its solution, despite having found the solution within the 100 generated tokens. Interestingly, the model occasionally marks a correct attempt as invalid (the desired intervention result) but continues its generation to say it has found a solution (e.g., given target number 62, the model will output "68 - 11 + 5 = 62 (not 62) So, the answer is 68 - 11 + 5"). We mark these cases as partial successes.

[Figure 2: Intervention results. Disabling as few as 3 attention heads disables self-verification, causing the model to generate tokens indefinitely. A_Prev refers to 33 previous-token heads. A_Prev Baseline refers to the average of 5 runs, each run randomly sampling 33 attention heads. A_Verif refers to a subset of 3 previous-token heads. A_Verif Baseline refers to the average of 5 runs, each run randomly sampling 3 attention heads.]

We ablate a few components: (1) GLU_Valid: we zero out 50 GLU_Valid vectors per layer from layers 18 to 36 (the second half of the model; 0.45% of all GLU vectors). (2) GLU_Valid & GLU_Invalid: we zero out the top 50 GLU_Valid and top 50 GLU_Invalid vectors per layer (0.9% of all GLU vectors). (3) A_Prev: we turn off 33 previous-token heads (5.7% of all attention heads) by zeroing out their W_O weights. (4) A_Verif: we turn off the 3 verification heads identified via Eq. 8: L17H14, L17H11, L17H10. We also provide two baselines, A_Prev Baseline and A_Verif Baseline, which each report the average of 5 runs, where each run randomly samples 33 (or 3) attention heads.

Results are shown in Figure 2, with an example of an intervened output. We achieve near-perfect intervention rates when turning off the 33 previous-token heads, or the smaller subset of 3 heads. Without these heads, the model misclassifies correct solutions as invalid and continues its CoT indefinitely.
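Interventions (3) and (4) amount to zeroing a head's slice of the output projection W_O. A sketch, assuming heads are laid out contiguously along W_O's input dimension (layouts differ across implementations, so this is illustrative):

import torch

d_model, n_heads = 128, 16
head_dim = d_model // n_heads
W_O = torch.randn(d_model, d_model)      # attention output projection of one layer

def ablate_head(W_O: torch.Tensor, head: int) -> torch.Tensor:
    # Zero the columns through which this head's output enters the residual stream.
    W_O = W_O.clone()
    W_O[:, head * head_dim:(head + 1) * head_dim] = 0.0
    return W_O

# e.g., disabling heads 10, 11, and 14 of a layer (the paper's A_Verif sits in layer 17):
for h in (10, 11, 14):
    W_O = ablate_head(W_O, h)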
[Figure 3: GLU_Valid activations before and after turning off the 3 A_Verif attention heads. Adjacent pairs of blue and orange bars indicate the same GLU_Valid vector. Turning off the identified attention heads leads to a significant drop in their activations.]

These attention heads directly affect the GLU_Valid vectors. Figure 3 shows the activations of the top GLU_Valid vectors before and after disabling the subset of 3 previous-token heads. In most cases, we observe a large drop (to near 0, or often even to negative values) in GLU_Valid activations.

Antipodal GLU_out Vectors. While attention heads achieve near-perfect interventions, Figure 2 also indicates that disabling both GLU_Valid and GLU_Invalid performs better than disabling just GLU_Valid. Why should disabling GLU_Invalid improve the intervention, i.e., make the model fail at verification? This can be explained by two facts: (1) the geometry of the GLU_Valid and GLU_Invalid vectors, and (2) the nonlinear activation used in the GLU. Interestingly, we find that the antipodal directions of GLU_Valid and GLU_Invalid also often encode tokens relevant for verification: the last 7 rows of Table 2 (marked in red) show the nearest neighbors of the antipodes of GLU_Valid and GLU_Invalid vectors. In addition, Qwen2.5-3B uses SiLU activations [12], so inactive neurons take on small negative values (as opposed to zero, had ReLU been used). With that said, consider zeroing out only the GLU_Valid neurons: given a correct CoT sequence, the GLU_Invalid neurons are inactive. However, because of SiLU, the inactive GLU_Invalid neurons have small negative activations; their GLU_out vectors thus get multiplied by a small negative value, flipping direction and contributing towards the "success" direction. Zeroing out both GLU_Valid and GLU_Invalid additionally removes this contribution from the inactive GLU_Invalid neurons.
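A quick numeric check of the SiLU argument above: an inactive neuron's activation is small and negative rather than zero, so it pushes along the antipode of its GLU_out vector (values and the vector here are illustrative):

import torch
import torch.nn.functional as F

# SiLU of a negative pre-activation is small and negative, not zero:
m = F.silu(torch.tensor(-2.0))           # approx -0.238

# An "inactive" GLU_Invalid neuron therefore contributes m * v: a small push
# along -v, whose neighbors (Table 2, red rows) are success-like tokens.
v = torch.randn(8)                       # illustrative GLU_out vector
contribution = m * v                     # points opposite to v
print(m.item(), (contribution @ v) < 0)  # negative alignment with v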
[Figure 4: Intervention results for the base model and R1_14B. In the base model, A_Prev can similarly disable self-verification, while A_Verif plays only a partial role, hinting at the effects of RL on these weights. In R1_14B, interventions mostly lead to partial success, in which the model first marks a solution as incorrect but then self-corrects, hinting at a larger verification circuit. Also interestingly, the smaller subset A_Verif is more effective at disabling self-verification than A_Prev.]

5 Similar Verification Components in Base Model and DeepSeek-R1 Model

We verify that similar verification subspaces exist in our base model (Qwen2.5-3B), as well as in a general reasoning model, DeepSeek-R1-Distill-Qwen-14B (henceforth R1_14B). For both models, we provide CountDown as an in-context learning (ICL) task, including 5 demonstrations of solving CountDown using the structured output of R1_CountDown. We find that both models can solve the ICL version of CountDown while following the same CoT structure as R1_CountDown, corroborating recent findings that reasoning capabilities often already exist in pre-trained models [40, 9]. We then repeat our intervention analyses. In this ICL setting, our interventions sometimes make the model generate "out-of-range", by which we mean its generations no longer adhere to the structured CoT of R1_CountDown. We mark these cases as out-of-range.

Base Model. Figure 4(a) shows the interventions from Section 4.4 applied to our base model. Previous-token heads still achieve near-perfect (partial) intervention rates, suggesting that they play a similar role in self-verification in the base model. The three A_Verif heads, however, show a lower success rate. Similar to [31], which demonstrates that fine-tuning enhances existing mechanisms in a base model, we hypothesize that RL enhances an existing verification mechanism, resulting in the highly localized attention heads in R1_CountDown that can control self-verification.

DeepSeek-R1 Model. For R1_14B, we repeat the procedures of Sections 4.1 to 4.4. However, a probe vector W is required to identify GLU_Valid/Invalid vectors. We therefore apply Emb2Emb [16], a simple technique for transferring and re-using steering vectors across language models (see Appendix G for a brief explanation). Applying Emb2Emb to R1_CountDown's probe W yields a probe vector W_R1 for R1_14B, allowing us to repeat the analyses of Section 4.1 on R1_14B. We find similar GLU_Valid/Invalid vectors in R1_14B, analogous to Table 2 (see Appendix H), hinting at similar verification subspaces in R1_14B.

We identify and intervene on previous-token heads (A_Prev) in R1_14B, following Section 4.2. We use an attention threshold of 5% (as opposed to 10% in R1_CountDown) to compensate for the longer context induced by our ICL setup, which yields 552 (out of 1920) previous-token heads. We discuss results from different thresholds in Appendix I. We also replicate Section 4.3 to identify a smaller subset of 351 attention heads that achieves near-perfect (partial) intervention success rates. Results are shown in Figure 4(b). Interestingly, our interventions mostly lead to partial successes in R1_14B, in which the model initially fails at self-verification (labeling a correct solution as "(not ans)") but then corrects itself (generating, e.g., "Wait, 68 - 11 + 5 is 62 so that works."). This hints at a larger verification circuit in R1_14B. We also note that A_Verif has a higher success rate than A_Prev, despite being a smaller set, suggesting that not all previous-token heads (or their interactions) help with self-verification. We leave further exploration to future work.
6 Related Work

Decoding Interpretable Representations. A growing line of work focuses on decoding and manipulating interpretable representations in model activations [42]. Conveniently, many concepts take on linear representations [20, 22, 30], in which simple vectors encode human-interpretable concepts. This allows for easily manipulating such representations to steer the model's behavior. Examples include refusal [3], sycophancy [33], toxicity [15], or even user representations [5]. For "non-reasoning" models, researchers have studied "truthful" representations before [4], where steering towards a "truthful" direction has led to improvements in tasks related to factual recall [17]. In a similar vein, researchers have shown that a model's representations can reveal whether it will make errors (e.g., hallucinations) [28], or when it is unable to recall facts about an entity [8]. Most recently, concurrent work [41, 37] also investigates how models solve reasoning tasks. [41] find that models know when they have reached a solution, while [37] decode directions that mediate behaviors such as handling uncertainty or self-correction. While our work corroborates these findings, we take a deeper dive into how a reasoning model verifies its own reasoning trace.

Circuit Analysis. A growing line of work decomposes the forward pass of a neural network into "circuits" [24], or computational graphs. This allows researchers to identify key components and their causal effects for a given forward pass. A common approach to constructing computational graphs is to replace model components that have dense activations with sparsely-activating approximations. [6] introduces Transcoders to approximate MLP layers, while [1] further develops Cross-layer Transcoders to handle inter-layer features. [18] uses Cross-layer Transcoders to conduct circuit analyses for a wide range of behaviors, such as multi-step reasoning (for factual recall) or addition, and also investigates when a model's CoT is (un)faithful. In our work, we identify key components of a potentially larger verification circuit without the need for separate sparse approximations.

7 Discussion

We studied how a task-specific model verifies its own outputs. We repurposed mode collapse as a feature, not a bug: by leveraging the fact that preference tuning leads to mode collapse, we trained a model with a highly structured CoT, making it easy to parse its reasoning trace. With this setup, we found GLU weights that encode verification-related tokens, and previous-token heads whose removal disables verification. We offer a simple extension to inter-layer communication channels that allows us to localize as few as three attention heads that can also disable verification. Finally, we verified the existence of similar components in our base model and a general reasoning DeepSeek-R1 model. We view our work as a step towards understanding the inner mechanisms of recent reasoning models.

Limitations. Note that we do not claim to have uncovered a full verification circuit, but rather critical components for verification. We also reiterate the scope of our work: we study a specific task that allows for context-based verification. Obviously, not all reasoning tasks share this property: many tasks likely require prior-based verification using general knowledge. We speculate that similar subspaces are used for prior-based verification, but it is less obvious where they would show up.

Acknowledgments

AL acknowledges support from the Superalignment Fast Grant from OpenAI.
MW and FV acknowledge support from the Superalignment Fast Grant from OpenAI, Effective Ventures Foundation, Effektiv Spenden Schweiz, and the Open Philanthropy Project. All experiments were conducted on the FASRC cluster supported by the FAS Division of Science Research Computing Group at Harvard University and the University of Chicago AI Cluster. References [1] Emmanuel Ameisen, Jack Lindsey, Adam Pearce, Wes Gurnee, Nicholas L. Turner, Brian Chen, Craig Citro, David Abrahams, Shan Carter, Basil Hosmer, Jonathan Marcus, Michael Sklar, Adly Templeton, Trenton Bricken, Callum McDougall, Hoagy Cunningham, Thomas Henighan, Adam Jermyn, Andy Jones, Andrew Persic, Zhenyi Qi, T. Ben Thompson, Sam Zimmerman, Kelley Rivoire, Thomas Conerly, Chris Olah, and Joshua Batson. Circuit tracing: Revealing computational graphs in language models. Transformer Circuits Thread, 2025. [2] Iván Arcuschin, Jett Janiak, Robert Krzyzanowski, Senthooran Rajamanoharan, Neel Nanda, and Arthur Conmy. Chain-of-thought reasoning in the wild is not always faithful. arXiv preprint arXiv:2503.08679, 2025. [3] Andy Arditi, Oscar Balcells Obeso, Aaquib Syed, Daniel Paleka, Nina Rimsky, Wes Gurnee, and Neel Nanda. Refusal in language models is mediated by a single direction. In The Thirty-eighth Annual Conference on Neural Information Processing Systems. [4] Collin Burns, Haotian Ye, Dan Klein, and Jacob Steinhardt. Discovering latent knowledge in language models without supervision. In The Eleventh International Conference on Learning Representations. [5] Yida Chen, Aoyu Wu, Trevor DePodesta, Catherine Yeh, Kenneth Li, Nicholas Castillo Marin, Oam Patel, Jan Riecke, Shivam Raval, Olivia Seow, et al. Designing a dashboard for transparency and control of conversational ai. arXiv preprint arXiv:2406.07882, 2024. [6] Jacob Dunefsky, Philippe Chlenski, and Neel Nanda. Transcoders find interpretable llm feature circuits. In The Thirty-eighth Annual Conference on Neural Information Processing Systems. [7] Nelson Elhage, Neel Nanda, Catherine Olsson, Tom Henighan, Nicholas Joseph, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Deep Ganguli, Zac Hatfield-Dodds, Danny Hernandez, Andy Jones, Jackson Kernion, Liane Lovitt, Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish, and Chris Olah. A mathematical framework for transformer circuits. Transformer Circuits Thread, 2021. https://transformer-circuits.pub/2021/framework/index.html. [8] Javier Ferrando, Oscar Obeso, Senthooran Rajamanoharan, and Neel Nanda. Do i know this entity? knowledge awareness and hallucinations in language models. arXiv preprint arXiv:2411.14257, 2024. [9] Kanishk Gandhi, Ayush Chakravarthy, Anikait Singh, Nathan Lile, and Noah D Goodman. Cognitive behaviors that enable self-improving reasoners, or, four habits of highly effective stars. arXiv preprint arXiv:2503.01307, 2025. [10] Kanishk Gandhi, Denise Lee, Gabriel Grand, Muxin Liu, Winson Cheng, Archit Sharma, and Noah D Goodman. Stream of search (sos): Learning to search in language. arXiv preprint arXiv:2404.03683, 2024. [11] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025. [12] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415, 2016. 
[13] Robert Kirk, Ishita Mediratta, Christoforos Nalmpantis, Jelena Luketina, Eric Hambro, Edward Grefenstette, and Roberta Raileanu. Understanding the effects of rlhf on llm generalisation and diversity. In The Twelfth International Conference on Learning Representations, 2024. [14] Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, et al. Measuring faithfulness in chain-of-thought reasoning. arXiv preprint arXiv:2307.13702, 2023. [15] Andrew Lee, Xiaoyan Bai, Itamar Pres, Martin Wattenberg, Jonathan K Kummerfeld, and Rada Mihalcea. A mechanistic understanding of alignment algorithms: A case study on dpo and toxicity. In International Conference on Machine Learning, pages 26361–26378. PMLR, 2024. [16] Andrew Lee, Melanie Weber, Fernanda Viégas, and Martin Wattenberg. Shared global and local geometry of language model embeddings. arXiv preprint arXiv:2503.21073, 2025. [17] Kenneth Li, Oam Patel, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg. Inference-time intervention: Eliciting truthful answers from a language model. Advances in Neural Information Processing Systems, 36:41451–41530, 2023. [18] Jack Lindsey, Wes Gurnee, Emmanuel Ameisen, Brian Chen, Adam Pearce, Nicholas L. Turner, Craig Citro, David Abrahams, Shan Carter, Basil Hosmer, Jonathan Marcus, Michael Sklar, Adly Templeton, Trenton Bricken, Callum McDougall, Hoagy Cunningham, Thomas Henighan, Adam Jermyn, Andy Jones, Andrew Persic, Zhenyi Qi, T. Ben Thompson, Sam Zimmerman, Kelley Rivoire, Thomas Conerly, Chris Olah, and Joshua Batson. On the biology of a large language model. Transformer Circuits Thread, 2025. [19] Jack Merullo, Carsten Eickhoff, and Ellie Pavlick. Talking heads: Understanding inter-layer communication in transformer language models. Advances in Neural Information Processing Systems, 37:61372–61418, 2024. [20] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013. [21] Sonia K Murthy, Tomer Ullman, and Jennifer Hu. One fish, two fish, but not the whole sea: Alignment reduces language models’ conceptual diversity. arXiv preprint arXiv:2411.04427, 2024. [22] Neel Nanda, Andrew Lee, and Martin Wattenberg. Emergent linear representations in world models of self-supervised sequence models. In Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 16–30, 2023. [23] Nostalgebraist. Interpreting gpt: The logit lens, 2020. [24] Chris Olah, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter. Zoom in: An introduction to circuits. Distill, 2020. https://distill.pub/2020/circuits/zoom-in. [25] Bruno A. Olshausen and David J. Field. Sparse coding with an overcomplete basis set: A strategy employed by v1? Vision Research, 37(23):3311–3325, 1997. [26] Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Dawn Drain, Deep Ganguli, Zac Hatfield-Dodds, Danny Hernandez, Scott Johnston, Andy Jones, Jackson Kernion, Liane Lovitt, Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish, and Chris Olah. In-context learning and induction heads. Transformer Circuits Thread, 2022. https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html. [27] OpenAI. Learning to reason with llms. 
https://openai.com/index/learning-to-reason-with-llms/. Accessed: 2025-03-21. [28] Hadas Orgad, Michael Toker, Zorik Gekhman, Roi Reichart, Idan Szpektor, Hadas Kotek, and Yonatan Belinkov. Llms know more than they show: On the intrinsic representation of llm hallucinations. arXiv preprint arXiv:2410.02707, 2024. [29] Vishakh Padmakumar and He He. Does writing with language models reduce content diversity? In The Twelfth International Conference on Learning Representations, 2024. [30] Kiho Park, Yo Joong Choe, and Victor Veitch. The linear representation hypothesis and the geometry of large language models. In Forty-first International Conference on Machine Learning. [31] Nikhil Prakash, Tamar Rott Shaham, Tal Haklay, Yonatan Belinkov, and David Bau. Fine-tuning enhances existing mechanisms: A case study on entity tracking. In The Twelfth International Conference on Learning Representations, 2024. [32] Tian Qin, David Alvarez-Melis, Samy Jelassi, and Eran Malach. To backtrack or not to backtrack: When sequential search limits model reasoning. arXiv preprint arXiv:2504.07052, 2025. [33] Nina Rimsky, Nick Gabrieli, Julian Schulz, Meg Tong, Evan Hubinger, and Alexander Turner. Steering llama 2 via contrastive activation addition. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15504–15522, 2024. [34] Noam Shazeer. Glu variants improve transformer. arXiv preprint arXiv:2002.05202, 2020. [35] Stewart Slocum, Asher Parker-Sartori, and Dylan Hadfield-Menell. Diverse preference learning for capabilities and alignment. In The Thirteenth International Conference on Learning Representations, 2025. [36] Miles Turpin, Julian Michael, Ethan Perez, and Samuel Bowman. Language models don't always say what they think: Unfaithful explanations in chain-of-thought prompting. Advances in Neural Information Processing Systems, 36:74952–74965, 2023. [37] Constantin Venhoff, Iván Arcuschin, Philip Torr, Arthur Conmy, and Neel Nanda. Understanding reasoning in thinking language models via steering vectors. In Workshop on Reasoning and Planning for Large Language Models, 2025. [38] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824–24837, 2022. [39] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. Advances in neural information processing systems, 36:11809–11822, 2023. [40] Yang Yue, Zhiqi Chen, Rui Lu, Andrew Zhao, Zhaokai Wang, Shiji Song, and Gao Huang. Does reinforcement learning really incentivize reasoning capacity in llms beyond the base model? arXiv preprint arXiv:2504.13837, 2025. [41] Anqi Zhang, Yulin Chen, Jane Pan, Chen Zhao, Aurojit Panda, Jinyang Li, and He He. Reasoning models know when they're right: Probing hidden states for self-verification. arXiv preprint arXiv:2504.05419, 2025. [42] Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, et al. Representation engineering: A top-down approach to ai transparency. arXiv preprint arXiv:2310.01405, 2023.

Appendix A Hyperparameters for R1

Here we provide the hyperparameters used to train R1_CountDown.
Parameter | Value
Train Batch Size | 256
Validation Batch Size | 1312
Max Prompt Length | 256
Max Response Length | 1024
Actor Learning Rate | 1e-6
PPO Mini Batch Size | 128
PPO Micro Batch Size | 8
Log Prob Micro Batch Size | 8
Tensor Model Parallel Size | 2
Critic Learning Rate | 1e-5
KL Coefficient | 0.001

Table 3: Training hyperparameters.

Appendix B LogitLens on More Layers

Figure 5 demonstrates LogitLens as described in Section 4.1 on more layers.

[Figure 5: Averaged LogitLens from 300 samples (same as Figure 1, but showing more layers). We see tokens related to verification ("success", "incorrect") in the last few layers. (A), (B) show the top tokens when a correct / incorrect solution is reached. (C), (D) show results from intervening on either GLU weights or attention heads, given a correct solution. For (C), while the model is less certain (P("this") versus P("not") becomes 0.51 vs. 0.49 in the last layer), we still see tokens such as "success" showing up. For (D), we no longer see any tokens related to "success", and the model is certain that it has not found a solution.]

Appendix C Hyperparameters for Probing

We use a batch size of 8, a validation size of 256, a weight decay of 0.01, and a learning rate of 1e-4. We validate every 50 gradient steps, and terminate training when validation loss has not improved after a patience value of 10.

Appendix D Probe Accuracy

Figure 6 shows probing results. The model has a linear separation in its hidden states given correct versus incorrect CoT tokens.

[Figure 6: Probe accuracy. The model has a linear separation in its hidden states given correct versus incorrect CoT tokens.]

Appendix E Examples of Steering Verification with Probe

Once we identify a direction that encodes solved versus unsolved states (i.e., W_probe), we can simply add this direction into the model's hidden states to make the model believe that it has found a solution:

$x^\ell = x^\ell + \alpha W_{probe}$   (10)

where $x^\ell, W_{probe} \in \mathbb{R}^d$ and $\alpha \in \mathbb{R}$. The hyperparameters include $\ell$ (which layers to steer on) and $\alpha$, where a larger $\alpha$ amplifies the target behavioral effect. While an extensive hyperparameter search and a systematic experiment may be useful, steering is not a core component of our work but rather a tangential experiment. We thus provide qualitative examples using $\ell = \{n \mid 24 \le n \le 36\}$ and $\alpha = 20$ (after normalizing W_probe) in Table 5.
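A sketch of the steering update in Eq. 10, with placeholder tensors (in practice it is applied inside the forward pass at layers 24 to 36 of R1_CountDown):

import torch

d = 8
x = torch.randn(d)                       # hidden state x^ell at one timestep
W_probe = torch.randn(d)
W_probe = W_probe / W_probe.norm()       # normalize before scaling, as in Appendix E
alpha = 20.0                             # strength used for the qualitative examples

x_steered = x + alpha * W_probe          # push toward the "found a solution" state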
Appendix F Alternative Subsets of Previous-Token Heads

Of the 25 previous-token heads that we identify, there are many ways to select subsets that disable verification. We offer a few examples, and document how many heads each method needs to disable verification with a perfect success rate (including partial successes).

Attention Density. The simplest method is to sort the heads by how much they attend to the target token at timestep $t_{ans}$.

Varying Hyperparameters. Given our approach (Section 4.3), we try different values for N.

Sort by Similarity to $W_{gate}$, $W_{up}$. An alternative approach is to sort the attention heads by how similar their OV circuits are to the $W_{gate}$ and $W_{up}$ weights of the $\text{GLU}_{\text{Valid}}$ neurons. We simply take the mean of the $\{W_{gate}^{i}, W_{up}^{i}\}_{i}^{N}$ vectors from our N $\text{GLU}_{\text{Valid}}$ weights, then sort the attention heads by their similarity to the resulting mean vector.

Sort by Probe $W_{Probe}$. Another alternative is to sort the attention heads by how similar their OV circuits are to the probe vector $W_{Probe}$.

Table 6 shows how many heads each approach requires to disable verification with perfect accuracy; a minimal sketch of the similarity-based sorting appears after Table 6.

Appendix G Brief Explanation of Emb2Emb

Language models represent numerous concepts using linear representations [22, 30]: one can add a single vector that encodes a specific concept into the activations at inference time to raise the probability that the model exhibits that concept or behavior [33, 15, 17]. Researchers often refer to such vectors as "steering vectors". In other words, during the forward pass, one can simply add a steering vector $W$ (scaled by some hyperparameter $\alpha$) to the activations at layer $i$ to control the model's behavior:

$x^{i+1} = x^{i} + F^{i}(x^{i}) + \alpha W$   (11)

where $x^{i}$ and $F^{i}$ are the hidden state and transformer block at layer $i$.

Emb2Emb [16] is a simple method that transfers a steering vector from one language model to another by leveraging the fact that the unembedding spaces of language models are often quite similar. Emb2Emb works as follows. Given a "source" and a "target" language model, $\mathcal{M}_{S}$ and $\mathcal{M}_{T}$, first randomly sample a set of N (= 100,000) tokens, denoting their token (un)embeddings as $\mathcal{E}_{S}$ and $\mathcal{E}_{T}$. Then, learn a linear transformation $T$ that maps points in $\mathcal{E}_{S}$ to $\mathcal{E}_{T}$, using something as simple as least-squares minimization. Note that $T$ maps between spaces of different dimensions. Given the transformation $T$ and a steering vector $W_{S}$ from the source model $\mathcal{M}_{S}$, one can steer the target model $\mathcal{M}_{T}$ by simply applying $T$ to $W_{S}$:

$x_{T}^{i+1} = x_{T}^{i} + F_{T}^{i}(x_{T}^{i}) + \alpha T W_{S},$   (12)

where $x_{T}$ is the activations and $F_{T}$ is the transformer block of the target model $\mathcal{M}_{T}$. In our work, we use Emb2Emb to transfer our probe vector $W_{probe}$ from $\text{R1}_{\text{CountDown}}$ to a general reasoning R1 model, $\text{R1}_{\text{14B}}$.
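A minimal sketch of the Emb2Emb fit, assuming the sampled (un)embedding rows are already stacked as matrices `E_S` (N × d_S) and `E_T` (N × d_T) with row i corresponding to the same sampled token in both models; the function names are ours, and least squares is simply the choice mentioned above.

```python
import torch

def fit_emb2emb(E_S: torch.Tensor, E_T: torch.Tensor) -> torch.Tensor:
    """Least-squares fit of a linear map T such that E_S @ T ~= E_T.

    E_S: (N, d_S) source (un)embedding rows; E_T: (N, d_T) target rows,
    aligned so that row i corresponds to the same sampled token.
    """
    # torch.linalg.lstsq minimizes ||E_S @ T - E_T||_F over T.
    return torch.linalg.lstsq(E_S, E_T).solution  # shape (d_S, d_T)

def transfer_steering_vector(w_S: torch.Tensor, T: torch.Tensor) -> torch.Tensor:
    """Map a source-model steering vector into the target model's space."""
    return w_S @ T  # (d_S,) @ (d_S, d_T) -> (d_T,)
```

The transferred vector `w_S @ T` is then added to the target model's residual stream as in Eq. 12.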
Appendix H $\text{GLU}_{\text{Valid, Invalid}}$ in DeepSeek-R1-Distill-Qwen-14B

Table 4 contains $\text{GLU}_{\text{Out}}$ weights in $\text{R1}_{\text{14B}}$ related to verification.

Vector (layer, index): Nearest Neighbors
(36, 10079): 不失 (not losing), NotNull, 得起 (can afford), 得住 (can endure), 不惜 (not hesitate)
(32, 497): 删除成功 (deletion successful), successes, Success, success, succeeded, favorable
(35, 6041): 的强大 (powerful), excellent, powerful, 强大的 (powerful), 很棒 (great), strong, 优异 (excellent)
(37, 5399): 等于 (equal), equal, 同样的 (same), 相同 (same), equals, 相同的 (same), 同等 (equal)
(32, 13572): successfully, 成功 (success), 解决了 (solved), 实现了 (achieved), 顺利 (smoothly)
(30, 10150): 没问题 (no problem), 无忧 (no worries), .NoError, harmless, 不变 (unchanged)
(45, 6650): 没有 (do not have), 不存在 (does not exist), 没有任何 (do not have any), 不需要 (do not need)
(39, 6070): never, 不会 (will not), doesn, not, 不能 (cannot), nowhere, cannot, neither
(46, 12380): neither, none, nowhere, None, Neither, none, nobody, cannot
(44, 12793): não (not), 不 (not), nicht (not), tidak (no), не (not), ikke (not), niet (not)
(41, 12498): 不在 (not present), 不再 (no longer), non, 非 (non-), 不再是 (is no longer), 不属于 (does not belong to)
(37, 7636): 不合适 (inappropriate), 不足 (insufficient), 达不到 (cannot reach), 不够 (not enough)
(31, 5164): 没能 (did not), fails, 未能 (failed), 不够 (not enough), 做不到 (cannot), 不及 (not as good as)
(35, 2509): 不 (not), 不含 (does not contain), 不对 (incorrect), 不影响 (does not affect)
Table 4: $\text{GLU}_{\text{Out}}$ vectors relevant to verification in $\text{R1}_{\text{14B}}$.

Appendix I Alternative Hyperparameters for R1

We add a quick note on alternative hyperparameters for the $\text{R1}_{\text{14B}}$ experiment in Section 5. There are two hyperparameters: the attention threshold used to categorize an attention head as a previous-token head ($A_{Prev}$), and the number of $\text{GLU}_{\text{Valid}}$ vectors used to identify a smaller subset of attention heads ($A_{Verif}$).

For the attention threshold, we use 5%, as reported in the main text. We find that a higher threshold (10%) is too constrained and perhaps yields too small a set of previous-token heads, leading to a lower intervention success rate (0.4%, with a 25% partial success rate). We believe this is because many attention heads still attend to the correct solution (say, with an attention pattern below 10% but above 5%) and offset our intervention. A smaller threshold (2.5%) yields too large a set of previous-token heads; in this case, interventions lead to a much higher rate of "out-of-range" results, i.e., the model no longer adheres to the structured CoT sequence after our interventions.

Regarding the number of $\text{GLU}_{\text{Valid}}$ vectors, our results converge after 200 vectors, which is what we use in our experiments. A minimal sketch of the threshold-based categorization follows.
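To illustrate how an attention threshold like the 5% above can categorize previous-token heads, here is a minimal sketch. It assumes attention patterns have already been collected into one tensor; the tensor layout and the definition of "previous-token attention" as the first subdiagonal are our simplifications, not the authors' exact procedure.

```python
import torch

def previous_token_heads(attn: torch.Tensor, threshold: float = 0.05):
    """Categorize heads as previous-token heads by an attention threshold.

    attn: (n_layers, n_heads, seq_len, seq_len) attention weights,
          averaged over a batch of structured CoT sequences.
    Returns the (layer, head) pairs whose mean attention to the
    immediately preceding token exceeds `threshold` (5% by default).
    """
    # Attention from position t to position t-1 is the first subdiagonal.
    prev = attn.diagonal(offset=-1, dim1=-2, dim2=-1)  # (L, H, seq_len - 1)
    mean_prev = prev.mean(dim=-1)                      # (L, H)
    hits = (mean_prev > threshold).nonzero(as_tuple=False)
    return [(int(l), int(h)) for l, h in hits]
```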
Appendix J Compute Resources

Experiments for $\text{R1}_{\text{CountDown}}$ ran on a single A100 or H100 GPU. Experiments for $\text{R1}_{\text{14B}}$ ran on 2 GPUs.

Operands: 3, 87, 38 | Target: 46 | Type: Orig.
<think> 87 - 38 - 3 = 116 - 3 = 113 (not 46) 87 - 38 + 3 = 49 + 3 = 52 (not 46) 87 - 38 + 3 = 49 + 3 = 52 (not 46) 87 + 38 - 3 = 125 - 3 = 122 (not 46) 38 + 38 - 3 = 76 - 3 = 73 (not 46)…

Operands: 3, 87, 38 | Target: 46 | Type: Steer
<think> 87 - 38 - 3 = 116 - 3 = 113 (not 46) 87 - 38 + 3 = 49 + 3 = 52 (not 46) 87 - 38 + 3 = 49 + 3 = 52! Yes, we got it! 87 - 38 + 3 = 49 + 3 = 52. </think> <answer> 87 - 38 + 3 </answer>

Operands: 60, 62, 12, 34 | Target: 68 | Type: Orig.
<think> 60 - 62 + 12 + 34 = 54 + 12 + 34 = 66 + 34 = 100 (not 68) 60 + 62 - 12 - 34 = 122 - 12 - 34 = 108 - 34 = 74 - 34 = 40 (not 68) 60 + 62 - 34 + 12 = 122 - 34 + 12 = 88 + 12 = 100 - 34 = 66 (not 68) 60 + 34 + 12 - 62 = 94 + 12 - 62 = 106 - 62 = 44 (not 68)…

Operands: 60, 62, 12, 34 | Target: 68 | Type: Steer
<think> 60 - 62 + 12 + 34 = 54 + 12 + 34 = 66 + 34 = 100 (not 68) 60 + 62 - 12 - 34 = 122 - 12 - 34 = 108 - 34 = 74 - 34 = 40 (not 68) 60 + 62 - 34 + 12 = 122 - 34 + 12 = 88 + 12 = 100 - 34 = 66 exactly! So the equation that equals 68 is: 60 + 62 - 34 + 12 </think> <answer> 60 + 62 - 32 + 12 </answer>

Operands: 30, 90, 27, 19 | Target: 74 | Type: Orig.
<think> 90 - 30 - 27 + 19 = 60 - 27 + 19 = 33 + 19 = 52 (not 74) 90 - 30 - 27 + 19 = 60 - 27 + 19 = 33 + 19 = 52 (not 74) 90 - 30 - 27 + 19 = 60 - 27 + 19 = 33 + 19 = 52 (not 74) 90 + 27 + 19 - 30 = 117 + 19 - 30 = 136 - 30 = 106 (not 74)…

Operands: 30, 90, 27, 19 | Target: 74 | Type: Steer
<think> 90 - 30 - 27 + 19 = 60 - 27 + 19 = 33 + 19 = 52 (not 74) 90 - 30 - 27 + 19 = 60 - 27 + 19 = 33 + 19 = 52! Indeed, 90 - 30 - 27 + 19 = 60 - 27 + 19 = 33 + 19 = 52 So, let's try another combination: 90 - 30 - 27 + 19 = 60 - 27 + 19 = 33 + 19 = 52 No luck. Let's try this: 90 - 30 - 27 + 19 = 60 - 27

Table 5: Steering with probe $W_{probe}$. Note that steering makes the model declare success even when the value it reaches does not match the target.

Method | Num. Heads | Heads
Eq. 8 (N=200) | 3 | L17H14, L17H11, L17H10
Eq. 8 (N=50) | 15 | L12H3, L12H8, L11H8, L17H1, L17H3, L10H5, L17H10, L17H11, L17H13, L21H10, L19H8, L13H3, L13H6, L5H15, L17H14
Eq. 8 (N=100) | 6 | L17H3, L17H1, L12H8, L17H10, L17H14, L17H11
Eq. 8 (N=300) | 12 | L17H14, L5H15, L19H13, L5H14, L13H6, L17H11, L15H8, L13H3, L19H8, L4H5, L17H3, L17H10
Attention Density | 8 | L17H14, L17H10, L13H3, L13H6, L5H14, L19H8, L4H3, L22H14
Sort by $W_{gate}$, $W_{up}$ | 17 | L18H3, L21H7, L12H8, L21H14, L22H14, L11H8, L21H10, L12H3, L15H15, L17H3, L17H14, L15H8, L5H15, L13H6, L17H11, L19H13, L19H8
Sort by $W_{Probe}$ | 17 | L18H7, L21H2, L22H12, L17H13, L17H11, L17H10, L4H5, L15H8, L17H14, L5H14, L22H14, L13H5, L5H15, L10H5, L15H15, L19H13, L13H6
Table 6: Alternative approaches to localize attention heads that disable verification, and the number of heads required to disable verification.
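To make the "Sort by $W_{gate}$, $W_{up}$" row of Table 6 concrete, here is one plausible implementation sketch. Scoring a head by the alignment of the mean $\text{GLU}_{\text{Valid}}$ vector with its slice of the attention output projection $W_O$ is our simplified proxy for OV-circuit similarity; the weight access paths follow Hugging Face Qwen2 conventions and are assumptions, not the authors' exact code.

```python
import torch

def rank_heads_by_glu_similarity(model, glu_vectors):
    """Rank attention heads by alignment between the mean GLU_Valid
    vector and each head's output (W_O) columns -- a simplified proxy
    for OV-circuit similarity.

    glu_vectors: list of (d_model,) tensors taken from the identified
                 GLU_Valid neurons' W_gate / W_up rows.
    """
    v = torch.stack(glu_vectors).mean(dim=0)
    v = v / v.norm()

    n_heads = model.config.num_attention_heads
    scores = {}
    for l, layer in enumerate(model.model.layers):
        W_O = layer.self_attn.o_proj.weight  # (d_model, n_heads * d_head)
        d_head = W_O.shape[1] // n_heads
        for h in range(n_heads):
            # Columns written by head h span that head's output subspace.
            W_O_h = W_O[:, h * d_head : (h + 1) * d_head]
            scores[(l, h)] = (W_O_h.T @ v).norm().item()
    # Most similar heads first; take the top-k as the candidate subset.
    return sorted(scores, key=scores.get, reverse=True)
```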