Paper deep dive
Observable Propagation: Uncovering Feature Vectors in Transformers
Jacob Dunefsky, Arman Cohan
Models: GPT-Neo-1.3B
Summary
The paper introduces 'Observable Propagation' (ObProp), a novel, data-efficient method for identifying linear feature vectors in transformer language models. By defining 'observables' as linear functionals on model logits, ObProp uses first-order Taylor approximations to propagate these observables backward through the model's computational paths, allowing for the identification of feature vectors without requiring large labeled datasets. The authors demonstrate the method's effectiveness in analyzing gendered occupational bias, political party prediction, and programming language detection, showing it outperforms traditional probing methods in low-data regimes.
Entities (5)
Relation Signals (3)
Observable Propagation → identifies → Feature Vector
confidence 95% · ObProp, for finding linear features used by transformer language models
Observable Propagation → utilizes → Observable
confidence 95% · Our paradigm centers on the concept of 'observables', linear functionals corresponding to given tasks.
Coupling Coefficient → measures similarity of → Feature Vector
confidence 90% · a similarity metric between feature vectors called the coupling coefficient
Cypher Suggestions (2)
Map the relationship between methods and the concepts they analyze · confidence 85% · unvalidated
MATCH (m:Method)-[r:IDENTIFIES|ANALYZES]->(c:Concept) RETURN m.name, type(r), c.name
Find all methods related to interpretability in transformers · confidence 80% · unvalidated
MATCH (m:Method)-[:USED_IN]->(t:ModelArchitecture {name: 'Transformer'}) RETURN m
Abstract
A key goal of current mechanistic interpretability research in NLP is to find linear features (also called "feature vectors") for transformers: directions in activation space corresponding to concepts that are used by a given model in its computation. Present state-of-the-art methods for finding linear features require large amounts of labelled data -- both laborious to acquire and computationally expensive to utilize. In this work, we introduce a novel method, called "observable propagation" (in short: ObProp), for finding linear features used by transformer language models in computing a given task -- using almost no data. Our paradigm centers on the concept of "observables", linear functionals corresponding to given tasks. We then introduce a mathematical theory for the analysis of feature vectors, including a similarity metric between feature vectors called the coupling coefficient which estimates the degree to which one feature's output correlates with another's. We use ObProp to perform extensive qualitative investigations into several tasks, including gendered occupational bias, political party prediction, and programming language detection. Our results suggest that ObProp surpasses traditional approaches for finding feature vectors in the low-data regime, and that ObProp can be used to better understand the mechanisms responsible for bias in large language models.
Full Text
Observable Propagation: Uncovering Feature Vectors in Transformers
Jacob Dunefsky, Arman Cohan

1 Introduction
When a large language model predicts that the next token in a sentence is far more likely to be "him" than "her", what is causing it to make this decision? The field of mechanistic interpretability aims to answer such questions by investigating how to decompose the computation carried out by a model into human-understandable pieces.
This helps us predict model behavior on real-world data distributions, identify and correct discrepancies between intended and actual behavior, align models with our objectives, and assess their trustworthiness in high-risk applications (Olah et al., 2018). One important notion in mechanistic interpretability is that of "features". A feature can be thought of as a simple function of the activations at a particular layer of the model, the value of which is important for the model's computation at that layer. For instance, in the textual domain, features used by a language model at some layer might reflect whether a token is an adverb, whether the language of the token is French, or other such characteristics. Possibly the most sought-after type of feature is a "linear feature", or "feature vector": a fixed vector in embedding space that the model utilizes by determining how much the input embedding points in the direction of that vector. Linear features are in some sense the holy grail of features: they are both easy for humans to interpret and amenable to mathematical analysis (Olah, 2022).

Contributions. Our primary contribution is a method, which we call "observable propagation" (ObProp in short), for both finding feature vectors in large language models corresponding to given tasks, and analyzing these features in order to understand how they affect other tasks. Unlike non-feature-based interpretability methods such as saliency methods (Simonyan et al., 2013; Jacovi et al., 2021; Wallace et al., 2019) or circuit discovery methods (Conmy et al., 2023; Wang et al., 2022), observable propagation reveals the specific information from the model's internal activations that is responsible for its output, rather than merely the tokens or model components that are relevant.
And unlike methods for finding feature vectors such as probing (Gurnee et al., 2023; Li et al., 2023; Elazar et al., 2021) or sparse autoencoders (Cunningham et al., 2023), observable propagation can find these feature vectors without having to store large datasets of embeddings, perform many expensive forward passes, or utilize vast quantities of labeled data. In addition, we present the following contributions:
• We develop a detailed theoretical analysis of feature vectors. In Theorem 1, we provide theoretical motivation explaining why LayerNorm sublayers do not affect the direction of feature vectors, making progress towards answering the question of the extent to which LayerNorms are used in computation in transformers, which has been raised in mechanistic interpretability (Winsor, 2022). In Theorem 2, we introduce and motivate a measurement of feature vector similarity called the "coupling coefficient", which can be used to determine the extent to which the model's output on one task is coupled with the model's output on another task.
• In order to determine the effectiveness of ObProp in understanding the causes of bias in large language models, we investigate gendered pronoun prediction (§4.1) and occupational gender bias (§4.2). By using observable propagation, we show that the model uses the same features to predict gendered pronouns given a name as it does to predict an occupation given a name; this is supported by further experiments on both artificial and natural datasets.
• We perform a quantitative comparison between ObProp and probing methods for finding feature vectors on diverse tasks (subject pronoun prediction, programming language detection, political party prediction). We find that ObProp achieves superior performance to these traditional data-heavy approaches in low-data regimes (§4.3).
All code used in this paper is provided at https://github.com/jacobdunefsky/ObservablePropagation.
1.1 Background and Related Work
In interpretability for NLP applications, there are a number of saliency-based methods that attempt to determine which tokens in the input are relevant to the model's prediction (Simonyan et al., 2013; Jacovi et al., 2021; Wallace et al., 2019). Additionally, recent circuit-based mechanistic interpretability research has involved determining which components of a model are most relevant to the model's computation on a given task (Conmy et al., 2023; Wang et al., 2022; Yu et al., 2023). Our work goes beyond these two approaches by considering not just relevant tokens, and not just relevant model components, but relevant feature vectors, which can be analyzed and compared to understand all the intermediate information used by models in their computation. A separate line of research aims to find feature vectors by performing supervised training of probes to find directions in embedding space that correspond to labels (Gurnee et al., 2023; Li et al., 2023; Elazar et al., 2021; Tigges et al., 2023), or by using unsupervised autoencoders on model embeddings to find feature vectors (Bricken et al., 2023; Cunningham et al., 2023). ObProp does not rely on any training, and importantly, it exhibits greater fidelity to the model's actual computation, because it directly uses the model weights to find feature vectors. A number of recent studies in interpretability involve finding feature vectors by decomposing transformer weight matrices into a set of basis vectors and projecting these vectors into token space (Dar et al., 2023; Millidge & Black, 2023). ObProp goes beyond this by taking into account nonlinearities, by finding precise feature vectors for tasks (rather than being limited to choosing from among a fixed set of vectors), and by formulating the concept of "observables", which is more general than the tasks considered in these prior works.
Another approach to mechanistic interpretability involves making causal interventions on the model to determine whether a given abstraction accurately characterizes model behavior (Lepori et al., 2023; Wu et al., 2024). In contrast, ObProp directly finds specific feature vectors responsible for model behavior, rather than requiring humans to impose a given ontology onto the model to be tested. Recently, Hernandez et al. (2023) used differentiation to obtain linear approximations to computations in language models responsible for encoding relations (such as "plays musical instrument") between entities. ObProp also uses differentiation to obtain linear approximations, but in contrast with this recent work, we seek to find feature vectors (rather than linear transformations) that are shared across multiple tasks, rather than merely the means by which a single task is implemented. The concurrent work of Park et al. (2023) also investigates linear functionals on language models' logit vectors and the connection between them and vectors in the model's embedding space. However, their approach relies on sampling from the data distribution in order to estimate the covariance of unembedding vectors; this is in contrast to the data-free approach presented here.

2 Observable Propagation: From Tasks to Feature Vectors
In this section, we present our method, which we call "observable propagation" (ObProp), for finding feature vectors directly corresponding to a given task. We begin by introducing the concept of "observables", which is central to our paradigm. We then explain observable propagation for simple cases, and build up to a general understanding.

2.1 Our Paradigm: Observables
Often, in mechanistic interpretability, we care about interpreting the model's computation on a specific task. In particular, the model's behavior on a task can frequently be expressed as the difference between the logits of two tokens.
For instance, Mathwin et al. (2023) attempt to interpret the model's understanding of gendered pronouns, and as such, measure the difference between the logits for the tokens " she" and " he". This has been identified as a general pattern of taking "logit differences" that appears in mechanistic interpretability work (Nanda, 2022). The first insight that we introduce is that each of these logit differences corresponds to a linear functional on the logits. That is, if the logits are represented by the vector y, then each logit difference can be represented by n^T y for some vector n. For instance, if e_token is the one-hot vector with a one in the position corresponding to the token token, then the logit difference between " she" and " he" corresponds to the linear functional n = e_" she" − e_" he". We thus define an observable to be a linear functional on the logits of a language model. More formally:

Definition 2.1. An observable is a linear functional n: R^{d_vocab} → R, where d_vocab is the number of tokens in the model's vocabulary. (Because all observables are linear functionals on a finite-dimensional vector space, they can be written as row vectors; as such, it is often convenient to abuse notation and associate an observable with its corresponding vector.)

We refer to the action of taking the inner product of the model's output logits with an observable n as getting the output of the model under the observable n. Returning to the previous example, the output of the model under the observable e_" she" − e_" he" corresponds to the logit difference between the " she" token and the " he" token.
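As a concrete illustration, an observable of this form is just a sparse vector whose inner product with the logits yields the logit difference. The sketch below uses a toy vocabulary; the token ids and logits are invented for the example.

```python
import numpy as np

def observable(token_a: int, token_b: int, d_vocab: int) -> np.ndarray:
    """Return the observable n = e_a - e_b as a dense vector."""
    n = np.zeros(d_vocab)
    n[token_a] = 1.0
    n[token_b] = -1.0
    return n

d_vocab = 10
SHE, HE = 3, 7                 # hypothetical token ids for " she" and " he"
n = observable(SHE, HE, d_vocab)

logits = np.arange(d_vocab, dtype=float)   # toy logit vector
logit_diff = n @ logits        # equals logits[SHE] - logits[HE]
```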
In defining observables in this way, we no longer consider logit differences as merely a part of the process of performing an interpretability experiment; rather, we consider the broader class of linear functionals as objects amenable to study in their own right. (And as we will see in §4.2, these linear functionals are not limited to those corresponding to logit differences between two tokens.) Next, we will demonstrate how concretizing observables like this enables us to find sets of feature vectors corresponding to different observables.

2.2 Observable Propagation for Attention Sublayers
First, let us consider a linear model f(x) = Wx. Given an observable n, we can compute the measurement associated with n as n^T f(x), which is just n^T W x. But now, notice that n^T W x = (W^T n)^T x. In other words, W^T n is a feature vector in the domain, such that the dot product of the input x with the feature vector W^T n directly gives the output measurement n^T f(x). Next, let us consider how to extend this idea to address attention sublayers in transformers. Attention sublayers combine information across tokens. They can be decomposed into two parts: the part that determines from which tokens information is taken (query-key interaction), and the part that determines what information is taken from each token to form the output (output-value). Elhage et al. (2021) refer to the former part as the "QK circuit" of attention, and the latter part as the "OV circuit".
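The identity n^T W x = (W^T n)^T x for the purely linear case can be checked directly; a minimal numpy sketch with arbitrary shapes:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in = 6, 4
W = rng.normal(size=(d_out, d_in))   # linear model f(x) = W x
n = rng.normal(size=d_out)           # observable on the output space
x = rng.normal(size=d_in)            # arbitrary input

feature = W.T @ n                    # feature vector in the input space
# Dotting the input with the feature vector reproduces the
# output measurement n^T f(x) exactly.
out_direct = n @ (W @ x)
out_feature = feature @ x
```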
Following their formulation, each attention layer can be written as

x_j^{l+1} = x_j^l + Σ_{h=1}^{H} Σ_{i=1}^{S} score_{l,h}(x_i^l, x_j^l) W^OV_{l,h} x_i^l

where x_j^l is the residual stream for token j ∈ {1, …, S} at layer l, score_{l,h}(x_i^l, x_j^l) is the attention score at layer l associated with attention head h ∈ {1, …, H} for tokens x_i^l and x_j^l, and W^OV_{l,h} is the combined output-value weight matrix for attention head h at layer l. In each term of this sum, the score_{l,h}(x_i^l, x_j^l) factor corresponds to the QK circuit, and the W^OV_{l,h} x_i^l factor corresponds to the OV circuit. Note that the primary nonlinearity in attention layers comes from the computation of the attention scores and their multiplication with the W^OV_{l,h} x_i^l terms. As such, as Elhage et al.
(2021) note, if we consider attention scores to be fixed constants, then the contribution of an attention layer to the residual stream is just a weighted sum of linear terms for each token and each attention head. This means that if we restrict our analysis to the OV circuit, then we can find feature vectors using the method described for linear models. While this restricts the scope of computation, analyzing OV circuits in isolation is still very valuable: doing so tells us what sort of information, at each stage of the model's computation, corresponds to our observable. From this point of view, if we have an attention head h at layer l, then the direct effect of that attention head on the output logits of the model is proportional to W_U W^OV_{l,h} x_i^l for token i, where W_U is the model unembedding matrix (which projects the model's final activations into logit space). We thus have that the feature vector corresponding to the OV circuit for this attention head is given by (W_U W^OV_{l,h})^T n. This feature vector corresponds to the direct contribution that the attention head has to the output. But an earlier-layer attention head's output can then be used as the input to a later-layer attention head. For attention heads h, h′ in layers l, l′ respectively with l < l′, the computational path starting at token i in layer l is first passed as the input to attention head h; the output of this head for that token is then used as the input to head h′ in layer l′.
Then by the same reasoning, the feature vector for this path is (W_U W^OV_{l′,h′} W^OV_{l,h})^T n. Note that this process can be repeated ad infinitum.

2.3 General Form: Addressing MLPs and LayerNorms
Along with attention sublayers, transformers also contain nonlinear MLP sublayers and LayerNorm nonlinearities before each sublayer. One main challenge in interpretability for large models has been the difficulty of understanding the MLP sublayers, due to the polysemantic nature of their neurons (Olah et al., 2020; Elhage et al., 2022). One prior approach to address this is modifying the model architecture to increase the interpretability of MLP neurons (Elhage et al., 2022). Instead of architecture modification, we address these nonlinearities by approximating them as linear functions using their first-order Taylor approximations. This approach is reminiscent of that presented by Nanda et al. (2023), who use linearizations of language models to speed up the process of activation patching (Wang et al., 2022); we go beyond this by recognizing that the gradients used in these linearizations act as feature vectors that can be independently studied and interpreted, rather than merely making activation patching more efficient. Taking this into account, the general form of observable propagation, including first-order approximations of nonlinearities, can be implemented as follows. Consider a computational path P in the model through sublayers l_1 < l_2 < ⋯ < l_k. Then for a given observable n, the feature vector corresponding to a sublayer in P can be computed according to Algorithm 1. (For details on how to choose x_0 in line 14, see App. C.) Note that before every sublayer, there is a nonlinear LayerNorm operation.
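The general form just described can be sketched as a backward pass over the path: multiply by transposed OV matrices for attention heads, and take the gradient of y^T f(x) through nonlinearities. The sketch below is an illustration of the structure, not the authors' implementation: it uses finite differences in place of automatic differentiation, a toy elementwise nonlinearity, and a single shared linearization point x0.

```python
import numpy as np

def grad_yTf(f, y, x0, eps=1e-6):
    """Finite-difference gradient of x -> y^T f(x) at x0
    (stands in for automatic differentiation in this sketch)."""
    g = np.zeros_like(x0)
    for i in range(len(x0)):
        dx = np.zeros_like(x0)
        dx[i] = eps
        g[i] = (y @ f(x0 + dx) - y @ f(x0 - dx)) / (2 * eps)
    return g

def obprop(n, W_U, path, x0):
    """Propagate observable n backward through a computational path.
    `path` lists sublayers from earliest to latest, each an
    ('attn', W_OV) or ('nonlin', f) entry; x0 is the linearization
    point used for every nonlinearity (a simplification)."""
    y = W_U.T @ n                    # start from the unembedding
    for kind, op in reversed(path):  # walk the path back to front
        if kind == 'attn':
            y = op.T @ y             # linear OV circuit: multiply by W_OV^T
        else:
            y = grad_yTf(op, y, x0)  # first-order Taylor approximation
    return y

rng = np.random.default_rng(1)
d_model, d_vocab = 5, 3
W_U = rng.normal(size=(d_vocab, d_model))
W_OV = rng.normal(size=(d_model, d_model))
n = np.array([1.0, -1.0, 0.0])       # a logit-difference observable
x0 = rng.normal(size=d_model)
feature = obprop(n, W_U, [('attn', W_OV), ('nonlin', np.tanh)], x0)
```

On a path containing only attention heads, this reduces exactly to the transposed matrix products from §2.2.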
For greatest accuracy, one can find the feature vector corresponding to this LayerNorm by taking its gradient as described above. But as shown in Theorem 1, if one only cares about the directions of the feature vectors and not their magnitudes, then the LayerNorms can be ignored entirely.

Algorithm 1 Observable propagation
1: Input: observable n
2: Let W_U be the model unembedding matrix.
3: if there exists a LayerNorm operation x ↦ f(x) before the unembedding operation then
4:   y ← ∇(n^T W_U f(x))|_{x=x_0} for some suitable value of x_0
5: else
6:   y ← (W_U)^T n
7: end if
8: for k ∈ |P|, …, 1, starting at the end do
9:   if l_k is an attention head then
10:    Let W_k be the OV matrix for l_k.
11:    y ← W_k^T y
12:  end if
13:  if l_k is a nonlinearity that maps x ↦ f(x) then
14:    y ← ∇(y^T f(x))|_{x=x_0} for some suitable value of x_0
15:  end if
16: end for
17: Output: feature vector y

2.4 The Effect of LayerNorms on Feature Vectors
LayerNorm nonlinearities are ubiquitous in transformers, appearing before every MLP and attention sublayer, and before the final unembedding matrix. Therefore, it is worth investigating how they affect feature vectors; if LayerNorms were highly nonlinear, this would cause trouble for ObProp. Nanda et al. (2023) provide intuition for why we should expect that in high-dimensional spaces, LayerNorm is approximately linear.
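This near-linearity claim can be probed numerically. In the sketch below (LayerNorm here is the unparameterized centering-and-normalizing map, and the dimension is chosen arbitrarily), the gradient of x ↦ n · LayerNorm(x) at a random point stays nearly parallel to n in high dimension:

```python
import numpy as np

def layernorm(x):
    """Unparameterized LayerNorm: center, then normalize to unit norm."""
    c = x - x.mean()
    return c / np.linalg.norm(c)

def grad(f_scalar, x0, eps=1e-6):
    """Central finite-difference gradient (stands in for autodiff)."""
    g = np.zeros_like(x0)
    for i in range(len(x0)):
        dx = np.zeros_like(x0)
        dx[i] = eps
        g[i] = (f_scalar(x0 + dx) - f_scalar(x0 - dx)) / (2 * eps)
    return g

rng = np.random.default_rng(2)
d = 512                      # arbitrary "high" dimension for the check
n = rng.normal(size=d)       # random feature direction
x = rng.normal(size=d)       # random input point

g = grad(lambda v: n @ layernorm(v), x)
cos = (n @ g) / (np.linalg.norm(n) * np.linalg.norm(g))
# cos stays extremely close to 1: differentiating through LayerNorm
# barely rotates the feature direction when d is large.
```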
However, the gradient of LayerNorm depends on the norm of its input, so we cannot consider LayerNorm gradients to be constant for inputs of different norms. Nevertheless, empirically, we found that LayerNorms had almost no impact on the direction of feature vectors. In particular, for the feature vectors discussed in §4.3, we looked at the cosine similarities between the feature vectors computed by differentiating through LayerNorm and those computed by ignoring LayerNorm; the minimum cosine similarity was as high as 0.998. Further details can be found in Appendix E.6. The following statement, which we prove in Appendix E.5, provides further theoretical underpinning for this behavior. (Note that while this theorem assumes that observables are normally distributed, which is not necessarily the case, we believe that it nevertheless provides useful theoretical motivation for explaining our empirical findings.)

Theorem 1. Define LayerNorm(x) = (x − μ(x)1) / ‖x − μ(x)1‖, where 1 is the vector of all ones and μ(x) is the mean of the entries of x. For a feature vector n, define f(x; n) = n · LayerNorm(x). Define θ(x; n) = arccos( (n · ∇_x f(x; n)) / (‖n‖ ‖∇_x f(x; n)‖) ); that is, θ(x; n) is the angle between n and ∇_x f(x; n).
Then if n ∼ N(0, I) in R^d and d ≥ 8, then

E[θ(x; n)] < 2 arccos(√(1 − 1/(d − 1))).

3 Data-free Analysis of Feature Vectors
Once we have used observable propagation to obtain a given set of feature vectors, we can then perform some preliminary analyses on them, using solely the vectors themselves. This can give us insights into the behavior of the model without having to run forward passes of the model on data.

Feature vector norms. One technique that can be used to assess the relative importance of model components is investigating the norms of the feature vectors associated with those components. To see why, recall that if y is the feature vector associated with observable n for a model component that implements function f, then for an input x, we have n · f(x) = y · x. Now, if we have no prior knowledge regarding the distribution of inputs to this model component, we can expect y · x to be proportional to ‖y‖. Thus, components with larger feature vectors should have larger outputs; this is borne out in experiments (see §4.1). Note that when calculating the norm of a feature vector for a computational path starting with a LayerNorm, one must multiply the norm by an estimated norm of the LayerNorm's input (see Appendix E.3 for an explanation).

Coupling coefficients. An important question that we might want to ask about a model's behavior is the following: given two separate tasks, to what extent should we expect that the model will have a high output on one task whenever it has a high output on the other task? In other words, to what extent are the model's outputs on the two tasks coupled? Let us translate this problem into the language of feature vectors.
If n_1 and n_2 are observables with feature vectors y_1 and y_2 for a function f, then for inputs x, we have n_1 · f(x) = y_1 · x and n_2 · f(x) = y_2 · x. Now, if we constrain our input x to have norm s, and constrain x · y_1 = k, then what is the expected value of x · y_2? And what are the maximum and minimum values of x · y_2? We present the following theorem to provide theoretical grounding towards answering both questions:

Theorem 2. Let y_1, y_2 ∈ R^d. Let x be uniformly distributed on the hypersphere defined by the constraints ‖x‖ = s and x · y_1 = k. Then we have

E[x · y_2] = k (y_1 · y_2) / ‖y_1‖²

and the maximum and minimum values of x · y_2 are given by

(‖y_2‖ / ‖y_1‖) (k cos(θ) ± sin(θ) √(s²‖y_1‖² − k²))

where θ is the angle between y_1 and y_2. We denote the value (y_1 · y_2) / ‖y_1‖² by C(y_1, y_2), and call it the "coupling coefficient from y_1 to y_2".
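The expectation in Theorem 2 can be checked by direct sampling (a sketch; the dimension, s, and k are arbitrary). We sample x on the constraint set by fixing its component along y_1 and drawing the orthogonal component as a uniformly random direction:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 8
y1 = rng.normal(size=d)
y2 = rng.normal(size=d)
s, k = 2.0, 1.0                            # constraints: ||x|| = s, x . y1 = k

# Coupling coefficient C(y1, y2) = (y1 . y2) / ||y1||^2
C = (y1 @ y2) / (y1 @ y1)

def sample_x():
    """Uniform sample from the set {x : ||x|| = s, x . y1 = k}."""
    base = (k / (y1 @ y1)) * y1            # fixed component along y1
    w = rng.normal(size=d)
    w -= ((w @ y1) / (y1 @ y1)) * y1       # project out the y1 direction
    w /= np.linalg.norm(w)                 # uniform direction orthogonal to y1
    r = np.sqrt(s**2 - k**2 / (y1 @ y1))   # radius forced by ||x|| = s
    return base + r * w

dots = np.array([sample_x() @ y2 for _ in range(50_000)])
mc_mean = dots.mean()   # should approach k * C(y1, y2) by Theorem 2
```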
Intuitively, C(y_1, y_2) measures the expected dot product between a vector and y_2, given that the vector has a dot product of 1 with y_1. Additionally, note that Theorem 2 also implies that the coupling coefficient becomes a more accurate estimator as the cosine similarity between y_1 and y_2 increases. Returning to our original motivation, the coupling coefficient can be interpreted as estimating the constant of proportionality between a model's outputs on two tasks (where each task corresponds to an observable and a feature vector); the cosine similarity can be interpreted as quantifying the extent to which the model's outputs might deviate from this proportional relationship. In this manner, the coupling coefficient helps us predict the model's behavior on unseen tasks. Note that while Theorem 2 does assume that hidden states x are spherically-symmetrically distributed, experimental evidence nevertheless bears out that the coupling coefficient is an accurate estimator of the expected constant of proportionality between activations of different features, with accuracy increasing as the cosine similarity increases; see §4.1 for results.

4 Experiments
Armed with our observable propagation toolkit for obtaining and analyzing feature vectors, we now turn our attention to the problem of gender bias in LLMs, in order to determine the extent to which these tools can be used to diagnose the causes of unwanted behavior.

4.1 Gendered Pronouns Prediction
We first consider the related question of understanding how a large language model predicts gendered pronouns. Specifically, given a sentence prefix including a traditionally gendered name (for example, "Mike" is often associated with males and "Jane" is often associated with females), how does the model predict what kind of pronoun should come after the sentence prefix?
We will later see that understanding the mechanisms driving the model's behavior on this benign task will yield insights for understanding gender-biased behavior of the model. Additionally, this investigation also provides an opportunity to test the ability of ObProp to accurately predict model behavior. The gendered pronoun prediction problem was previously considered by Mathwin et al. (2023), who used the "Automated Circuit Discovery" tool presented by Conmy et al. (2023) to investigate the flow of information between different components of GPT-2-small (Radford et al., 2019) in predicting subject pronouns (i.e. "he", "she", etc.). We extend the problem setting in various ways. We investigate both the subject pronoun case (in which the model is to predict the token "she" versus "he") and the object pronoun case (in which the model is to predict "her" versus "him"). Additionally, we seek to understand the underlying features responsible for this task, rather than just the model components involved, so that we can compare these features with the features that the model uses in producing gender-biased output.

Problem setting. We consider two observables, corresponding to the subject pronoun prediction task and the object pronoun prediction task. The observable for the subject pronoun task, n_subj, is given by e_" she" − e_" he", where e_token is the one-hot vector with a one in the position corresponding to the token token. This corresponds to the logit difference between the tokens " she" and " he", and indicates how strongly the model predicts the next token to be " she" versus " he". Similarly, the observable for the object pronoun task, n_obj, is given by e_" her" − e_" him".
We investigate the model GPT-Neo-1.3B (Black et al., 2021), which has approximately 1.3B parameters, 24 layers, 16 attention heads, an embedding dimension of 2048, and an MLP hidden dimension of 8192. Note that ObProp is able to work with models that are significantly larger than those previously explored, such as GPT-2-small (117M parameters) (Radford et al., 2019), which has been the focus of recent interpretability work by Wang et al. (2022), inter alia.

Additionally, a note on notation: the attention head with index h at layer l will be written "l::h". For instance, 17::14 refers to attention head 14 at layer 17. Furthermore, the MLP at layer L will be written "mlpL". For instance, mlp1 refers to the MLP at layer 1.

Feature vector norms for single attention heads. We begin by analyzing the norms of the feature vectors corresponding to n_subj and n_obj for each attention head in the model. We then used path patching (Goldowsky-Dill et al., 2023) to measure the mean degree to which each attention head contributes to the model's output on a dataset of male/female prompt pairs. If our method is effective, then we would expect the heads with the greatest feature norms to be those identified by path patching as most important to model behavior. The results are given in Table 1.
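Schematically, the feature vector ObProp associates with a single attention head's OV circuit is the observable pulled back through the unembedding and the head's value-output matrices, and heads can then be ranked by feature-vector norm. The sketch below uses random matrices in place of real weights and ignores LayerNorm and attention patterns (both assumptions of this illustration; shapes follow the Elhage et al. (2021) convention):

```python
import numpy as np

def head_feature_vector(W_V, W_O, W_U, n):
    """y = W_V W_O W_U n: the residual-stream direction whose dot product
    with a head's input approximates that head's contribution to the
    observable n (first-order sketch; nonlinearities ignored)."""
    return W_V @ (W_O @ (W_U @ n))

rng = np.random.default_rng(0)
d_model, d_head, d_vocab, n_heads = 32, 8, 50, 4
W_U = rng.normal(size=(d_model, d_vocab))          # toy unembedding
heads = [(rng.normal(size=(d_model, d_head)),      # W_V per head
          rng.normal(size=(d_head, d_model)))      # W_O per head
         for _ in range(n_heads)]

n = np.zeros(d_vocab)
n[0], n[1] = 1.0, -1.0                             # toy logit-diff observable

norms = [float(np.linalg.norm(head_feature_vector(W_V, W_O, W_U, n)))
         for (W_V, W_O) in heads]
ranking = sorted(range(n_heads), key=lambda h: -norms[h])  # largest norm first
```

With real model weights, the heads at the top of `ranking` would be the candidates to compare against path-patching attributions, as in Table 1.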
    Observable | Heads with greatest feature norms | Feature vector norms
    n_subj     | 18::11, 17::14, 13::11, 15::13    | 237.3, 236.2, 186.4, 145.4
    n_obj      | 17::14, 18::11, 13::11, 15::13    | 159.2, 157.0, 145.0, 112.3

    Observable | Heads with greatest attributions  | Path patching attributions
    n_subj     | 17::14, 13::11, 15::13, 13::3     | 5.004, 3.050, 1.199, 0.584
    n_obj      | 17::14, 13::11, 15::13, 22::2     | 2.949, 1.885, 1.863, 0.365

Table 1: The four attention heads with the greatest feature norms and path patching attributions (corrupted-clean logit differences) for both the n_subj and n_obj observables. n_subj is the observable measuring the difference between the logits for " she" and " he"; n_obj is the observable measuring the difference between the logits for " her" and " him". "l::k" denotes the attention head with index k at the layer with index l. "Feature vector norms" refers to the norm of the feature vector associated with the attention head; "Path patching attributions" refers to the difference between the model's output for the given observable when the given attention head's activations were patched, and the model's output for that observable when the attention head was not patched.

We see that three of the four attention heads with the highest feature norms – that is, 17::14, 15::13, and 13::11 – also have very high attributions for both the subject and object pronoun cases. (Interestingly, head 18::11 does not have a high attribution in either case despite having a large feature norm; this may be due to effects involving the model's QK circuit.) This indicates that observable propagation was largely successful in predicting the most important attention heads, despite only using one forward pass per observable (to estimate LayerNorm gradients).
Cosine similarities and coupling coefficients. Next, we investigated the cosine similarities between feature vectors for n_subj and n_obj. We found that the four heads with the highest cosine similarities between their n_subj feature vector and their n_obj feature vector are 17::14, 18::11, 15::13, and 13::11, with cosine similarities of 0.9882, 0.9831, 0.9816, and 0.9352, respectively. The high cosine similarities of these feature vectors indicate that the model uses the same underlying features for both the task of predicting subject pronoun genders and the task of predicting object pronoun genders. We also looked at the feature vectors for the computational paths 6::6→9::1→13::11 for n_subj and 6::6→13::11 for n_obj, because performing path patching on a pair of prompts suggested that these computational paths were relevant. The feature vectors for these paths had a cosine similarity of 0.9521. We then computed the coupling coefficients between the n_subj and n_obj feature vectors for heads 17::14, 15::13, and 13::11, because these heads were present among the heads with the highest cosine similarities, highest feature norms, and highest patching attributions for both the n_subj and n_obj cases. After this, we tested the extent to which the coupling coefficients accurately predicted the constant of proportionality between the dot products of different feature vectors with their inputs. We ran the model on approximately 1M tokens taken from The Pile dataset (Gao et al., 2020) and recorded the dot product of each token's embedding with these feature vectors.
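This comparison amounts to a least-squares fit of one feature's dot products against the other's. A minimal numerical sketch, assuming the coupling coefficient takes the form C(y_1, y_2) = (y_1 · y_2)/||y_1||^2 (consistent with its interpretation as the expected dot product with y_2 given a dot product of 1 with y_1), and using synthetic isotropic Gaussian vectors in place of real hidden states:

```python
import numpy as np

def coupling_coefficient(y1, y2):
    # Assumed form: E[x.y2 | x.y1 = 1] = (y1.y2)/||y1||^2 for
    # spherically symmetric x (illustrative sketch, not the paper's code).
    return float(y1 @ y2) / float(y1 @ y1)

def cosine_similarity(y1, y2):
    return float(y1 @ y2) / float(np.linalg.norm(y1) * np.linalg.norm(y2))

def slope_and_r2(a, b):
    # Least-squares fit b ≈ m*a + c over paired dot products.
    m, _ = np.polyfit(a, b, 1)
    r = np.corrcoef(a, b)[0, 1]
    return float(m), float(r ** 2)

rng = np.random.default_rng(0)
d = 16
y1 = rng.normal(size=d)
y2 = y1 + 0.2 * rng.normal(size=d)   # two highly similar feature directions

X = rng.normal(size=(200_000, d))    # stand-in for token hidden states
m, r2 = slope_and_r2(X @ y1, X @ y2)
# m should be close to coupling_coefficient(y1, y2), and r2 should be
# high because cosine_similarity(y1, y2) is high.
```

For isotropic activations, the regression slope converges to exactly the assumed coupling-coefficient form, and the correlation equals the cosine similarity, mirroring the Theorem 2 relationship discussed above.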
We then computed the least-squares best-fit line that predicts the n_obj values given the n_subj values, and compared the slope of the line to the coupling coefficients. The results are given in Table 2. We find that the coupling coefficients are accurate estimators of the empirical dot products between feature vectors and that, in accordance with Theorem 2, the dot products between vectors with greater cosine similarity exhibited greater correlation.

    Head    | Coupling coefficient | Cosine similarity | Best-fit slope | r^2
    17::14  | 0.7123               | 0.9882            | 0.7692         | 0.9567
    15::13  | 0.8011               | 0.9816            | 0.8003         | 0.9523
    13::11  | 0.7478               | 0.9352            | 0.7632         | 0.8189
    6::6→…  | –                    | 0.9521            | –              | 0.8613

Table 2: Coupling coefficients and cosine similarities, compared to the slope of the best-fit line for empirical dot products with feature vectors of n_subj versus n_obj. Note that for the 6::6→… feature vectors, we do not investigate coupling coefficients, because these earlier-layer attention heads are involved in many computational paths, so the magnitudes obtained for these feature vectors along one computational path do not reflect the importance along the sum total of computational paths.

4.2 Occupational Gender Bias

Now that we have understood some of the features relevant to predicting gendered pronouns, we more directly consider the setting of occupational gender bias in language models, a widely-investigated problem (Bolukbasi et al., 2016; Vig et al., 2020). For a prompt like "My friend [NAME] is an excellent …", an LM which hasn't been aligned using e.g.
RLHF (Ouyang et al., 2022) is more likely to predict that the next token is " programmer" than " nurse" if [NAME] is replaced with a male name, and vice-versa for a female name (Brown et al., 2020). We applied observable propagation to this problem in order to go beyond prior work and understand the features responsible for this behavior. In particular, we considered the observable n_bias = (e_" nurse" + e_" teacher" + e_" secretary") − (e_" programmer" + e_" engineer" + e_" doctor"); this observable represents the extent to which the model predicts stereotypically-female occupations instead of stereotypically-male ones.

The same features are used to predict gendered pronouns and occupations. We ran path patching on a single pair of prompts in order to determine the computational paths relevant to n_bias. The results were computational paths beginning with mlp1→6::6→9::1→… and 6::6→9::1→…, which began on the token in the prompt associated with the gendered name. Even though there were many relevant computational paths beginning with these prefixes, and even though these computational paths passed through multiple later-layer MLPs, the feature vectors for these different paths nevertheless had high cosine similarity with one another. More surprising is that the feature vector for n_bias for 6::6→9::1→… had a cosine similarity of 0.966 with the feature vector for n_subj for 6::6→9::1→13::11. Similarly, the n_bias feature vector for mlp1→6::6→9::1→… had a cosine similarity of 0.977 with the n_subj feature vector for mlp1→6::6→9::1→13::11.
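Observables need not be single-token differences: the bias observable above sums several occupation tokens on each side. A minimal sketch of such a grouped observable (the toy vocabulary indices are hypothetical):

```python
import numpy as np

def group_diff_observable(vocab, pos_tokens, neg_tokens):
    """n = sum of e_t over pos_tokens minus sum of e_t over neg_tokens."""
    n = np.zeros(len(vocab))
    for t in pos_tokens:
        n[vocab[t]] += 1.0
    for t in neg_tokens:
        n[vocab[t]] -= 1.0
    return n

vocab = {" nurse": 0, " teacher": 1, " secretary": 2,
         " programmer": 3, " engineer": 4, " doctor": 5}  # toy indices
n_bias = group_diff_observable(
    vocab,
    [" nurse", " teacher", " secretary"],
    [" programmer", " engineer", " doctor"])
```

Dotting n_bias with the logits then measures how strongly the model favors the stereotypically-female occupation tokens over the stereotypically-male ones.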
This indicates that the model uses the same features to identify both gendered pronouns and likely occupations, given a traditionally-gendered name. To determine the extent to which these feature vectors reflected model behavior, we ran the model on an artificial dataset of 600 prompts involving gendered names (see Appendix A), recorded the dot product of the model's activations on the name token with the feature vectors, and recorded the model's output with respect to the observables. The results can be found in Figure 1. Note that the correlation coefficient r^2 between the dot product with the n_bias feature vector and the actual model output is 0.88, indicating that the feature vector is a very good predictor of model output.

Figure 1: The dot product of model activations with (normalized) feature vectors, compared to the model's output for observables. (a) Dot products with the n_bias feature vector for 6::6→9::1→…, versus the model's output with respect to n_bias. (b) Dot products with the n_subj feature vector for 6::6→9::1→13::11, versus the model's output with respect to n_subj.

We then investigated the tokens in a 1M-token subset of The Pile that maximally activated the n_bias feature vector. (If y is a feature vector, then the maximally-activating tokens for y are given by argmax_i (y^T x_i), where x_i is the model's hidden state on the i-th token in the dataset.) These tokens were primarily female names: tokens like " Rita", " Catherine", and " Mary", along with female name suffixes like "a" (as in "Phillipa"), "ine" (as in "Josephine"), and "ia" (as in "Antonia").
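The maximally-activating-tokens check described in the parenthetical above is a simple argmax over dot products. A sketch with toy data (the hidden states and token strings here are illustrative, not drawn from The Pile):

```python
import numpy as np

def top_activating_tokens(hidden_states, tokens, y, k=2):
    """Return the k tokens whose hidden states have the largest dot
    product with the feature vector y."""
    scores = hidden_states @ y
    top = np.argsort(scores)[::-1][:k]
    return [tokens[i] for i in top]

y = np.array([1.0, 0.0])  # toy feature direction
hidden = np.array([[0.9, 0.1],
                   [-0.8, 0.3],
                   [0.5, -0.2],
                   [-0.1, 0.7]])
tokens = [" Mary", " husband", " Rita", " the"]
top = top_activating_tokens(hidden, tokens, y, k=2)
```

Sorting ascending instead (or taking the k smallest scores) yields the least-activating tokens discussed next.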
Surprisingly, the least-activating tokens were generally male common nouns, such as " husband", " brother", and " son" – but also words like " his", and even " male". This evidence further supports the hypothesis that the model specifically uses gendered features to determine which occupations are most likely to be associated with a name. However, it is worth noting that part of the power of ObProp is that it allows us to test hypotheses such as this without needing to run the model on large datasets and record the tokens with the highest feature vector activations: simply by virtue of the extremely high cosine similarity between the n_subj feature vector and the n_bias feature vector, we could infer that the model was using gendered information to predict occupations. As such, looking at the maximally-activating tokens primarily served as a "sanity check", verifying that the feature vectors returned by ObProp are human-interpretable.

4.3 Quantitative Analysis Across Observables

We now evaluate ObProp's performance across a broader variety of tasks: subject pronoun prediction, identifying American politicians' party affiliations, and distinguishing between C and Python code. We use ObProp to find feature vectors for each task; for comparison, we also find feature vectors using the more data-intensive methods of linear/logistic regression and mean difference, standard methods used by Kim et al. (2018), Tigges et al. (2023), and many others. For the pronoun prediction task, we use the same artificial dataset used in the subject pronoun prediction experiments; for the political affiliation task, we use an artificial dataset comprising 40 Democratic politicians' and 40 Republican politicians' names, and consider the model's logit difference between the tokens " Democrat" and " Republican". For the programming language classification task, we use a natural dataset of code.
For each feature vector y, we look at the dot product y^T x across inputs x. For the former two tasks, we evaluate the correlation between y^T x and the model's output; for the latter task, we apply the AUC-ROC metric to evaluate the accuracy of y^T x in differentiating between C and Python code. The results are given in Table 3. For the subject pronoun prediction task, in order for the feature vector found by linear regression to match the performance of the ObProp feature vector, 60 prompts' worth of embeddings had to be used for training; similarly, for the C vs. Python classification task, the logistic regression had to be trained on 50 code snippets' worth of embeddings to obtain equal performance. In the political party prediction task, even when training on 3/4 of the dataset, the linear regression feature vector's performance on the test set was well below that of the ObProp feature vector's performance on the whole dataset. This suggests that ObProp can match the performance of prior methods for finding feature vectors, and outcompete them especially in the low-data regime. For more details on these experiments, refer to Appendix B.

    Task              | ObProp       | Logistic regression (trained) | Mean difference vector (trained) | Training set size
    Subject pronouns  | r^2 ≈ 0.945  | r^2 ≈ 0.945                   | r^2 ≈ 0.899                      | 60 prompts
    Political parties | r^2 ≈ 0.427  | r^2 ≈ 0.295                   | r^2 ≈ 0.0605                     | 60 prompts (3/4 of dataset)
    C vs. Python      | AUC ≈ 0.9974 | AUC ≈ 0.9971                  | AUC ≈ 0.9052                     | 50 code snippets

Table 3: Accuracy of regression-derived feature vectors vs. ObProp feature vectors.
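The AUC-ROC of raw dot-product scores can be computed with the standard rank-based estimator, i.e. the probability that a randomly chosen positive example outscores a randomly chosen negative one. A self-contained sketch (it assumes no tied scores, a simplification):

```python
import numpy as np

def auc_roc(scores, labels):
    """Rank-based AUC: P(score of a positive > score of a negative).
    Assumes no tied scores (illustrative sketch)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # 1-based ranks
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    # Mann-Whitney U statistic, normalized to [0, 1]
    return float((ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))
```

For example, `auc_roc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])` counts three of the four positive/negative pairs as correctly ordered.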
5 Conclusion and Discussion

In this paper, we introduced observable propagation (or ObProp for short), a novel method for finding feature vectors in transformer models using little to no data. We developed a theory for analyzing the feature vectors yielded by ObProp, and demonstrated this method's utility for understanding the internal computations carried out by a model. In our case studies, we found that the norms of feature vectors obtained via ObProp can predict the attention heads relevant to a task without running the model on any data; that ObProp can be used to understand when two different tasks utilize the same feature; that coupling coefficients can show the extent to which a high output for one observable implies a high output for another on a general distribution of data; and that the feature vectors returned by ObProp accurately predict model behavior. We also demonstrated that in data-scarce settings, ObProp outperforms traditional data-heavy probing approaches for finding feature vectors. This culminated in a demonstration that the model specifically uses the feature of "gender" to predict the occupation associated with a name. Notably, even though experiments on larger datasets further supported this claim, observable propagation alone was able to provide striking evidence of this using minimal amounts of data. We hope that our approach, being largely independent of data, can democratize interpretability research and facilitate broader-scale investigations.

Furthermore, the conclusion that the model uses the same mechanisms to predict grammatical gender as it does to predict occupations portends difficulties in attempting to "debias" the model: inexpensive inference-time attempts to remove bias from the model will likely also decrease model performance on desired tasks like correct gendered pronoun prediction (see Appendix G for additional experiments).
This reveals a clear direction for future work: developing more powerful methods to ensure that models are both unbiased and useful.

Note that although ObProp demonstrates significant promise in cheaply unlocking the internal computations of language models, it does have limitations. In particular, ObProp currently primarily addresses the OV circuits of Transformers, ignoring computations in QK circuits responsible for mechanisms such as "induction heads" (Elhage et al., 2021). However, even though QK circuits are responsible for moving information around in Transformers, OV circuits are where computation on this information occurs. Thus, whenever we want to understand what sort of information the model uses to predict one token as opposed to another, the answer lies in the model's OV circuits, and ObProp can provide such answers. Given the power that the current formulation of ObProp has demonstrated in our experiments, we are very excited about the potential for this method, and methods building upon it, to yield even greater insights in the near future.

A note on SAEs. Sparse autoencoders (SAEs), as described by Cunningham et al. (2023) and Bricken et al. (2023), have recently made waves in the mechanistic interpretability community. SAEs can be trained unsupervised on model hidden states to yield a set of feature vectors that can represent any hidden state as a sparse linear combination of these feature vectors. Although we do find SAEs to be very promising, we fear that SAEs might incur some philosophical risks by assuming the existence of a set of "ground-truth features" that can be found. Plausibly, SAEs might fail to account for cases where later-layer features depend on a dense subset of earlier-layer features, or where feature vectors are composed in unintuitive ways to compute a task.
As such, ObProp takes a different approach that privileges fidelity to computation over data: ObProp aims to find feature vectors (approximately) corresponding to the computation of human-interpretable tasks, whose behavior can then be quantifiably understood. In this manner, we hope to provide a complementary approach to the SAE paradigm (and one that could perhaps be integrated with it) in order to account for potential shortcomings of the latter.

Supplementary Statements

Impact Statement

In this work, we present observable propagation, our method for finding feature vectors used by large language models in their computation of a given task. We demonstrate in an experiment that observable propagation can be used to pinpoint specific features that are responsible for gender bias in large language models, suggesting that observable propagation might prove useful in mechanistically understanding how to debias language models. Additionally, the data-efficient nature of observable propagation allows this sort of inquiry into model bias to be democratized, conducted by researchers who might not have access to the compute or data required by other methods. However, it is important to note that observable propagation does not necessarily make perfect judgments about model bias or the lack thereof; a model might be biased even if observable propagation fails to find specific feature vectors responsible for that bias. As such, it is incumbent upon researchers, practitioners, and organizations working with large language models to continue to perform deeper investigations into model bias, and to be aware of the ways in which it might affect their results.

Reproducibility Statement

A proof of Theorem 1 is given in Appendix E.5; a proof of Theorem 2 is given in Appendix H. Details on the datasets that we used in our experiments can be found in Appendix A. Further details regarding the experiments in Section 4.3 can be found in Appendix B.
Details on how we chose the point x_0 used to approximate nonlinearities (as described in §2.3) can be found in Appendix C; for LayerNorm linear approximations, we used the estimation method described in Appendix E.3. Code is available at https://github.com/jacobdunefsky/ObservablePropagation.

References

AI@Meta. Llama 3. 2024. URL https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md.

Ainslie, J., Lee-Thorp, J., de Jong, M., Zemlyanskiy, Y., Lebrón, F., and Sanghai, S. GQA: Training generalized multi-query transformer models from multi-head checkpoints, 2023.

Black, S., Gao, L., Wang, P., Leahy, C., and Biderman, S. GPT-Neo: Large scale autoregressive language modeling with mesh-tensorflow, 2021. URL http://github.com/eleutherai/gpt-neo.

Bolukbasi, T., Chang, K.-W., Zou, J. Y., Saligrama, V., and Kalai, A. T. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in Neural Information Processing Systems, 29, 2016.

Bricken, T., Templeton, A., Batson, J., Chen, B., Jermyn, A., Conerly, T., Turner, N., Anil, C., Denison, C., Askell, A., Lasenby, R., Wu, Y., Kravec, S., Schiefer, N., Maxwell, T., Joseph, N., Hatfield-Dodds, Z., Tamkin, A., Nguyen, K., McLean, B., Burke, J. E., Hume, T., Carter, S., Henighan, T., and Olah, C. Towards monosemanticity: Decomposing language models with dictionary learning. Transformer Circuits Thread, 2023. https://transformer-circuits.pub/2023/monosemantic-features/index.html.

Brody, S., Alon, U., and Yahav, E. On the expressivity role of LayerNorm in transformers' attention. arXiv preprint arXiv:2305.02582, 2023.

Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.
D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. Language models are few-shot learners. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 1877–1901. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.

Conmy, A., Mavor-Parker, A. N., Lynch, A., Heimersheim, S., and Garriga-Alonso, A. Towards automated circuit discovery for mechanistic interpretability. arXiv preprint arXiv:2304.14997, 2023.

Crowson, K. mdmm, 2021. URL http://github.com/crowsonkb/mdmm.

Cunningham, H., Ewart, A., Riggs, L., Huben, R., and Sharkey, L. Sparse autoencoders find highly interpretable features in language models, 2023.

Dar, G., Geva, M., Gupta, A., and Berant, J. Analyzing transformers in embedding space. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 16124–16170, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.893. URL https://aclanthology.org/2023.acl-long.893.

Elazar, Y., Ravfogel, S., Jacovi, A., and Goldberg, Y. Amnesic probing: Behavioral explanation with amnesic counterfactuals. Transactions of the Association for Computational Linguistics, 9:160–175, 03 2021. ISSN 2307-387X. doi: 10.1162/tacl_a_00359. URL https://doi.org/10.1162/tacl_a_00359.

Elhage, N., Nanda, N., Olsson, C., Henighan, T., Joseph, N., Mann, B., Askell, A., Bai, Y., Chen, A., Conerly, T., DasSarma, N., Drain, D., Ganguli, D., Hatfield-Dodds, Z., Hernandez, D., Jones, A., Kernion, J., Lovitt, L., Ndousse, K., Amodei, D., Brown, T., Clark, J., Kaplan, J., McCandlish, S., and Olah, C. A mathematical framework for transformer circuits. Transformer Circuits Thread, 2021. https://transformer-circuits.pub/2021/framework/index.html.

Elhage, N., Hume, T., Olsson, C., Nanda, N., Henighan, T., Johnston, S., ElShowk, S., Joseph, N., DasSarma, N., Mann, B., Hernandez, D., Askell, A., Ndousse, K., Jones, A., Drain, D., Chen, A., Bai, Y., Ganguli, D., Lovitt, L., Hatfield-Dodds, Z., Kernion, J., Conerly, T., Kravec, S., Fort, S., Kadavath, S., Jacobson, J., Tran-Johnson, E., Kaplan, J., Clark, J., Brown, T., McCandlish, S., Amodei, D., and Olah, C. Softmax linear units. Transformer Circuits Thread, 2022. https://transformer-circuits.pub/2022/solu/index.html.

Gao, L., Biderman, S., Black, S., Golding, L., Hoppe, T., Foster, C., Phang, J., He, H., Thite, A., Nabeshima, N., Presser, S., and Leahy, C. The Pile: An 800GB dataset of diverse text for language modeling, 2020.

Goldowsky-Dill, N., MacLeod, C., Sato, L., and Arora, A. Localizing model behavior with path patching. arXiv preprint arXiv:2304.05969, 2023.

Gurnee, W., Nanda, N., Pauly, M., Harvey, K., Troitskii, D., and Bertsimas, D. Finding neurons in a haystack: Case studies with sparse probing, 2023.

Hernandez, E., Sharma, A. S., Haklay, T., Meng, K., Wattenberg, M., Andreas, J., Belinkov, Y., and Bau, D. Linearity of relation decoding in transformer language models, 2023.

Jacovi, A., Swayamdipta, S., Ravfogel, S., Elazar, Y., Choi, Y., and Goldberg, Y. Contrastive explanations for model interpretability.
In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 1597–1611, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.120. URL https://aclanthology.org/2021.emnlp-main.120.

Kim, B., Wattenberg, M., Gilmer, J., Cai, C., Wexler, J., Viegas, F., and Sayres, R. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). In Dy, J. and Krause, A. (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 2668–2677. PMLR, 10–15 Jul 2018. URL https://proceedings.mlr.press/v80/kim18d.html.

Lepori, M. A., Serre, T., and Pavlick, E. Uncovering intermediate variables in transformers using circuit probing, 2023.

Li, K., Patel, O., Viégas, F., Pfister, H., and Wattenberg, M. Inference-time intervention: Eliciting truthful answers from a language model, 2023.

Mathwin, C., Corlouer, G., Kran, E., Barez, F., and Nanda, N. Identifying a preliminary circuit for predicting gendered pronouns in GPT-2 small. Apart Research Alignment Jam #4 (Mechanistic Interpretability), 2023. URL https://cmathw.itch.io/identifying-a-preliminary-circuit-for-predicting-gendered-pronouns-in-gpt-2-smal.

Millidge, B. and Black, S. The singular value decompositions of transformer weight matrices are highly interpretable. 2023. URL https://www.lesswrong.com/posts/mkbGjzxD8d8XqKHzA/the-singular-value-decompositions-of-transformer-weight.

Nanda, N. A comprehensive mechanistic interpretability explainer & glossary, 2022. URL https://www.neelnanda.io/mechanistic-interpretability/glossary.

Nanda, N., Olah, C., Olsson, C., Elhage, N., and Tristan, H. Attribution patching: Activation patching at industrial scale. 2023. URL https://www.neelnanda.io/mechanistic-interpretability/attribution-patching.

Olah, C. Mechanistic interpretability, variables, and the importance of interpretable bases, 2022. URL https://transformer-circuits.pub/2022/mech-interp-essay/index.html.

Olah, C., Satyanarayan, A., Johnson, I., Carter, S., Schubert, L., Ye, K., and Mordvintsev, A. The building blocks of interpretability. Distill, 2018. doi: 10.23915/distill.00010. https://distill.pub/2018/building-blocks.

Olah, C., Cammarata, N., Schubert, L., Goh, G., Petrov, M., and Carter, S. Zoom in: An introduction to circuits. Distill, 2020. doi: 10.23915/distill.00024.001. https://distill.pub/2020/circuits/zoom-in.

Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.

Park, K., Choe, Y. J., and Veitch, V. The linear representation hypothesis and the geometry of large language models, 2023.

Petersen, K. and Pedersen, M. The matrix cookbook, version 2012/11/15. Technical Univ. Denmark, Kongens Lyngby, Denmark, Tech. Rep, 3274, 2012.

Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.

Simonyan, K., Vedaldi, A., and Zisserman, A. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013.

Syed, A., Rager, C., and Conmy, A. Attribution patching outperforms automated circuit discovery, 2023.

Tigges, C., Hollinsworth, O. J., Geiger, A., and Nanda, N. Linear representations of sentiment in large language models. arXiv preprint arXiv:2310.15154, 2023.

UCI Machine Learning Repository. Gender by Name. UCI Machine Learning Repository, 2020. DOI: https://doi.org/10.24432/C55G7X.

Vig, J., Gehrmann, S., Belinkov, Y., Qian, S., Nevo, D., Singer, Y., and Shieber, S. Investigating gender bias in language models using causal mediation analysis. Advances in Neural Information Processing Systems, 33:12388–12401, 2020.

Wallace, E., Tuyls, J., Wang, J., Subramanian, S., Gardner, M., and Singh, S. AllenNLP Interpret: A framework for explaining predictions of NLP models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, pp. 7–12, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-3002. URL https://aclanthology.org/D19-3002.

Wang, K., Variengien, A., Conmy, A., Shlegeris, B., and Steinhardt, J. Interpretability in the wild: a circuit for indirect object identification in GPT-2 small, 2022.

Winsor, E. Re-examining LayerNorm. 2022. URL https://www.lesswrong.com/posts/jfG6vdJZCwTQmG7kb/re-examining-layernorm.

Wu, Z., Geiger, A., Potts, C., and Goodman, N. D. Interpretability at scale: Identifying causal mechanisms in Alpaca, 2024.

Yu, Q., Merullo, J., and Pavlick, E. Characterizing mechanisms for factual recall in language models, 2023.

Zhang, B. and Sennrich, R. Root mean square layer normalization. Advances in Neural Information Processing Systems, 32, 2019.

Appendix A Datasets

In our experiments, we made use of an artificial dataset along with a natural dataset. The natural dataset was processed by taking the first 1,000,111 tokens of The Pile (Gao et al., 2020) and then splitting them into prompts of length at most 128 tokens.
This yielded 7,680 prompts. To construct the artificial dataset, we wrote three prompt templates for the $n_{\text{subj}}$ observable, three prompt templates for the $n_{\text{bias}}$ observable, and three prompt templates for the $n_{\text{obj}}$ observable. The prompt templates are as follows:

• Prompt templates for $n_{\text{subj}}$ (inspired by Mathwin et al. (2023)):
1. "<|endoftext|>So, [NAME] really is a great friend, isn’t"
2. "<|endoftext|>Man, [NAME] is so funny, isn’t"
3. "<|endoftext|>Really, [NAME] always works so hard, doesn’t"
• Prompt templates for $n_{\text{obj}}$:
1. "<|endoftext|>What do I think about [NAME]? Well, to be honest, I love"
2. "<|endoftext|>When it comes to [NAME], I gotta say, I really hate"
3. "<|endoftext|>This is a present for [NAME]. Tomorrow, I’m gonna give it to"
• Prompt templates for $n_{\text{bias}}$:
1. "<|endoftext|>My friend [NAME] is an excellent"
2. "<|endoftext|>Recently, [NAME] has been recognized as a great"
3. "<|endoftext|>His cousin [NAME] works hard at being a great"

A dataset of prompts was then generated by replacing the [NAME] substring in each prompt template with a name from a set of traditionally-male names and a set of traditionally-female names. These names were obtained from the "Gender by Name" dataset (UCI Machine Learning Repository, 2020), which provides a list of names, the gender traditionally associated with each name, and a measure of each name's frequency. The top 100 single-token traditionally-male names and the top 100 single-token traditionally-female names from this dataset were collected; this comprised the list of names that we used.

Appendix B Experimental Details for Section 4.3

B.1 Datasets

The dataset used in the subject pronoun prediction task is the same artificial dataset described in Appendix A. The dataset used in the C vs.
Python classification task consists of 730 code snippets, each 128 tokens long, taken from the C and Python subsets of the GitHub component of The Pile (Gao et al., 2020). The dataset used in the American political party prediction task is an artificial dataset consisting of prompts of the form "[NAME] is a", where [NAME] is replaced by the name of a politician drawn from a list of 40 Democratic Party politicians and 40 Republican Party politicians. These politicians were chosen according to the lists of "the most famous Democrats" and "the most famous Republicans" for Q3 2023 compiled by YouGov, available at https://today.yougov.com/ratings/politics/fame/Democrats/all and https://today.yougov.com/ratings/politics/fame/Democrats/all. The intuition behind this choice of dataset is that the model is more likely to identify the political affiliation of well-known politicians, because better-known politicians are more likely to occur in its training data; this is also the primary reason that a smaller dataset is used.

B.2 Task Definition

The subject pronoun prediction task involves the model predicting the correct token for each prompt. The target score is the difference between the model's logit for the token " she" and the model's logit for the token " he". The political party prediction task likewise involves the model predicting the correct token for each prompt; the target score is the difference between the model's logit for the token " Democrat" and the model's logit for the token " Republican". For the C vs. Python classification task, because the data is drawn from a diverse corpus of code, the task is treated as a binary classification task instead of a token prediction task.
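Concretely, the target scores described above are just logit differences. A minimal sketch (the vocabulary and token ids below are hypothetical placeholders, not GPT-Neo's actual tokenization):

```python
import numpy as np

def target_score(logits: np.ndarray, pos_token: int, neg_token: int) -> float:
    """Target score for a prompt: the difference between the model's logit
    for one token (e.g. " she") and its logit for another (e.g. " he")."""
    return float(logits[pos_token] - logits[neg_token])

# Toy final-position logits over a 5-token vocabulary; token ids 2 and 4
# stand in for " she" and " he" (hypothetical ids).
logits = np.array([0.1, -0.3, 2.0, 0.5, 1.2])
score = target_score(logits, pos_token=2, neg_token=4)
```

A positive score indicates the model prefers the first token; averaging such scores over a dataset gives the per-class behavior analyzed in Section 4.3.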
B.3 Feature Vectors

The ObProp feature vector used for the pronoun prediction task is the feature vector corresponding to the computational path 6::6 → 9::1 → 13::1 for the $n_{\text{subj}}$ observable. The ObProp feature vector used for the political party prediction task is the feature vector corresponding to attention head 15::8 for the observable defined by $e_{\text{" Democrat"}} - e_{\text{" Republican"}}$. The ObProp feature vector used for the C versus Python classification task is the feature vector corresponding to attention head 16::9 for the observable defined by $e_{\text{"):"}} - e_{\text{")"}}$. (The intuition behind this observable is that in Python, function definitions look like def foo(bar, baz):, whereas in C, function definitions look like int foo(float bar, char* baz). Notice how the former line ends in the token "):", whereas the latter line ends in the token ")".)

The regression feature vectors for each task were trained on model embeddings at the same layer as the ObProp feature vectors for that task. Thus, for example, the linear regression feature vector for the pronoun prediction task was trained on model embeddings at layer 6. The "mean difference" feature vectors for each task were calculated as follows. First, run the model on inputs from one class (e.g. female names, Democratic politicians, C code) and compute the mean, across these inputs, of the model embeddings at the same layer as the ObProp feature vector. Then, run the model on inputs from the other class (e.g. male names, Republican politicians, Python code) and compute the corresponding mean vector. The "mean difference" feature vector is simply the difference between these two mean vectors.
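The "mean difference" baseline just described can be sketched in a few lines. The arrays here are random stand-ins for real layer embeddings; only the shape convention (inputs × model dimension) is assumed:

```python
import numpy as np

def mean_difference_feature(class_a: np.ndarray, class_b: np.ndarray) -> np.ndarray:
    """Mean-difference baseline feature vector: the difference between the
    per-class means of model embeddings at a chosen layer.

    class_a, class_b: arrays of shape (num_inputs, d_model).
    """
    return class_a.mean(axis=0) - class_b.mean(axis=0)

rng = np.random.default_rng(0)
a = rng.normal(loc=1.0, size=(50, 8))   # stand-in for one class's embeddings
b = rng.normal(loc=-1.0, size=(50, 8))  # stand-in for the other class's embeddings
v = mean_difference_feature(a, b)        # shape (8,)
```

With real data, `a` and `b` would hold the model's layer activations on the two input classes rather than synthetic draws.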
B.4 Task Evaluation

For the pronoun prediction task, the predicted score was the dot product of the feature vector with the model's embedding at layer 6 for the name token in the prompt. For the political party prediction task, the predicted score was the dot product of the feature vector with the model's embedding at layer 15 for the last token of the politician's name in each prompt. For the C versus Python classification task, the predicted score for each code snippet was obtained by taking the mean of the model's embeddings at layer 16 over all tokens in the snippet, and then taking the dot product of the feature vector with that mean embedding.

Appendix C Details on Linear Approximations for MLPs

Finding feature vectors for MLPs is a relatively straightforward application of the first-order Taylor approximation. However, there is a concern that if one takes the gradient at the wrong point, the local gradient will not reflect the larger-scale behavior of the MLP. For example, the output of the MLP with respect to a given observable might be saturated at a certain point: the gradient there might be very small, and might even point in a direction inconsistent with the MLP's gradient in the unsaturated regime. To alleviate this, we use the following method. Define $g(x) = n^T \mathrm{MLP}(x)$, where $n$ is a given observable. If this observable $n$ represents the logit difference between two tokens, then we should be able to find an input on which this difference is very negative, along with an input on which this difference is very positive. For example, if $n$ represents the logit difference between the token " her" and the token " him", then an input containing a male name should make this difference very negative, and an input containing a female name should make this difference very positive.
Thus, we have two points $x_-$ and $x_+$ such that $g(x_-) < 0$ and $g(x_+) > 0$. Since MLPs are continuous, there must be some point $x_0$ on the line segment between $x_-$ and $x_+$ at which $g(x_0) = 0$; this point lies at the intersection of that segment with the MLP's decision boundary. It stands to reason that the gradient at this decision boundary is more likely to capture the larger-scale behavior of the MLP, and less likely to be saturated, than the gradient at more "extreme" points like $x_-$ and $x_+$. Such an $x_0$ can be found using constrained optimization methods; we use the Python library MDMM (Crowson, 2021) to do so. The approach given here is used in Line 14 of Algorithm 1 for dealing with MLP nonlinearities. For LayerNorms, we simply take the gradient at an input point $x_-$ or $x_+$.

Appendix D Details on Path Patching for Finding Important Computational Paths

In Section 4, we use path patching (Goldowsky-Dill et al., 2023) to determine which computational paths in the model are most important for a given task. Here, we provide more details on our implementation of path patching for measuring the importance of a single computational path, along with more details on how we used path patching to find a set of important computational paths.

D.1 Path Patching

Path patching is a causal method for determining the importance of a computational path in the model according to a given metric. This metric can in general be any function of the model's outputs (e.g. the metric could be the cross-entropy next-token-prediction loss), but in this paper, our metrics are the dot product of the model's logits with a given observable.
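The metric just described — the dot product of the model's logits with an observable — can be sketched as follows; the vocabulary size and token ids are toy placeholders:

```python
import numpy as np

def observable_metric(logits: np.ndarray, observable: np.ndarray) -> float:
    """Path-patching metric: dot product of the model's final-position
    logits with an observable (a linear functional on the logits)."""
    return float(logits @ observable)

# An observable for a logit difference is a one-hot difference vector.
vocab = 6
n = np.zeros(vocab)
n[1], n[3] = 1.0, -1.0          # hypothetical token ids for the two classes
logits = np.array([0.0, 2.0, 0.0, 0.5, 0.0, 0.0])
m = observable_metric(logits, n)
```

The path-patching score for a path is then the change in this metric between the clean run and the run with patched hidden states.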
We thus use path patching to determine how important a computational path is to the observable corresponding to a given task. The high-level approach to path patching (which is common to other causal methods) is as follows. We consider two inputs to the model, a clean input and a dirty input, which display different behavior with respect to the metric. (For instance, if the metric is the dot product of the model's logits with the $n_{\text{subj}}$ observable, then the clean input would be a prompt containing a traditionally-male name, and the dirty input would be a prompt containing a traditionally-female name.) First, run the model on both the clean and the dirty input, storing the model's hidden states on both inputs. Then, take the hidden states from the clean run, replace certain hidden states with the corresponding ones from the dirty run, and re-run the model on this modified set of hidden states. After doing so, measure the difference, with respect to the given metric, between the model's output on the clean input and the model's output on the modified hidden states.

Our implementation of path patching differs slightly from the implementation described in Goldowsky-Dill et al. (2023): the implementation described here avoids the costly "Treeify" function from the original. First, we describe how to perform path patching for a single-edge path. Given an earlier-layer component $c$ (a component is an attention head or an MLP sublayer) and a later-layer component $c'$, let $o_{\text{clean}}$ be the output of $c$ on the clean input, and let $x'_{\text{clean}}$ be the hidden state on the clean input before $c'$. Similarly, let $o_{\text{dirty}}$ be the output of $c$ on the dirty input. Now, as explained by Elhage et al.
(2021), a transformer's hidden state before any component can be decomposed as the sum of all previous components' outputs (along with the original token embedding and positional embedding). Thus, to measure the direct effect of the computational edge from $c$ to $c'$, we replace $x'_{\text{clean}}$ with $x'_{\text{clean}} - o_{\text{clean}} + o_{\text{dirty}}$. This corresponds to replacing the contribution of $c$ to the input of $c'$ with the output of $c$ on the dirty input. The path patching score is then computed by running the model with this modified hidden state and measuring the difference between its output and the clean output.

This can be extended to longer computational paths as follows. Given a computational path of components $c^{(1)}, \dots, c^{(k)}$, let $x^{(i)}_{\text{clean}}$ be the hidden state on the clean input before $c^{(i)}$, let $o^{(i)}_{\text{clean}}$ be the output of $c^{(i)}$ on the clean input, and let $o^{(i)}_{\text{dirty}}$ be the output of $c^{(i)}$ on the dirty input. Just as before, replace $x^{(2)}_{\text{clean}}$ with $x^{(2)}_{\text{dirty}} = x^{(2)}_{\text{clean}} - o^{(1)}_{\text{clean}} + o^{(1)}_{\text{dirty}}$. Run $c^{(2)}$ on $x^{(2)}_{\text{dirty}}$ and store the output as $o^{(2)}_{\text{dirty}}$.
Repeat this process for the later components: replace $x^{(i)}_{\text{clean}}$ with $x^{(i)}_{\text{dirty}} = x^{(i)}_{\text{clean}} - o^{(i-1)}_{\text{clean}} + o^{(i-1)}_{\text{dirty}}$, run $c^{(i)}$ on $x^{(i)}_{\text{dirty}}$, and store the output as $o^{(i)}_{\text{dirty}}$. The path patching score is then given by the difference between the model's output on these dirty hidden states and the model's original output on the clean input.

D.2 Finding Important Computational Paths

Now that we can use path patching to determine the importance of a given computational path, we use the following greedy method in conjunction with path patching to find a set of important computational paths. First, use path patching to identify the $k$ most important single-edge computational paths. Then, for each of those paths, identify the $k$ most important paths with the current path as a suffix; this gives $k^2$ total paths. Now, take the top $k$ paths from these $k^2$ total paths, and repeat for as many iterations as needed (until one has paths of the desired length). The complexity of this process is $O(nk(pm + m \log m))$, where $n$ is the number of iterations, $k$ is the number of paths, $m$ is the number of nodes in the full computational graph of the model, and $p$ is the cost of path patching for one computational path. (To see this: at each iteration, we perform path patching on $k$ parent paths, at cost $p$ for each of the $m$ computational nodes in the model. Then, we sort the $m$ nodes to get the top $k$ paths, which gives the $m \log m$ term.)
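The greedy search just described can be sketched as follows. This is a simplified illustration: `score_path` stands in for a real path-patching score, the "nodes" are toy integers rather than attention heads and MLPs, and paths here start from single nodes rather than single edges:

```python
import heapq

def greedy_path_search(nodes, score_path, k, iterations):
    """Greedy beam search over computational paths: keep the k highest-scoring
    paths at each iteration, extending each by one earlier-layer node."""
    # Start with the k most important single-node path suffixes.
    paths = heapq.nlargest(k, ([n] for n in nodes), key=score_path)
    for _ in range(iterations):
        # Prepend every candidate node to every kept path (k * m extensions),
        # then keep the k best of these candidates.
        candidates = [[n] + p for p in paths for n in nodes]
        paths = heapq.nlargest(k, candidates, key=score_path)
    return paths

# Toy score: prefer paths whose node values sum high (not a real metric).
best = greedy_path_search(range(5), score_path=sum, k=2, iterations=2)
```

In a real implementation, `score_path` would invoke the path-patching procedure of Appendix D.1 on the corresponding component path.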
Note that this process can be made more efficient by using faster alternatives to path patching, such as edge attribution patching (Syed et al., 2023).

Appendix E More on LayerNorms

In this section, we put forth various results relevant to the discussion of LayerNorm gradients in §2.4.

E.1 LayerNorm Input Norms per Layer

We calculated the average norms of the inputs to each LayerNorm sublayer in the model, over the activations obtained from the 600 subject pronoun prompts in the artificial dataset described in §A. The results can be found in Figure 2. The wide variation in input norms across layers implies that input norms must be taken into account in any approximation of LayerNorm gradients.

Figure 2: Mean norms of activations before each LayerNorm

E.2 LayerNorm Weight Values Are Very Similar

In §2.4, the LayerNorm nonlinearity is defined as $\mathrm{LayerNorm}(x) = \frac{x - \frac{1}{d}(\vec{1}^T x)\vec{1}}{\left\|x - \frac{1}{d}(\vec{1}^T x)\vec{1}\right\|}$, where $\vec{1}$ is the vector of all ones. However, in actual models, after every LayerNorm operation as defined above, the output is multiplied by a fixed scalar constant equal to $\sqrt{d}$ (where $d$ is the embedding dimension), multiplied by a learned diagonal matrix $W$, and then added to a learned bias vector $b$. Thus, the actual operation implemented is $\sqrt{d}\, W \mathrm{LayerNorm}(x) + b$, where $W$ is the learned diagonal matrix and $b$ is the learned bias vector.
This is important with regard to our earlier discussion of the extent to which LayerNorm affects feature vector directions: although $b$ does not affect the gradient, if the entries of $W$ differ from one another, the gradient could point in a different direction from the original feature vector. Empirically, however, we find that most of the entries of $W$ are very close to one another. This suggests that we can approximate $W$ as a scalar, meaning that $W$ primarily scales the gradient rather than changing its direction. Therefore, if we want to analyze the directions of feature vectors rather than their magnitudes, we can largely do so without worrying about LayerNorms. In particular, we found that the average variance of the scaling matrix entries across all LayerNorms in GPT-Neo-1.3B is 0.007827. To determine whether this variance is large, we calculated the ratio of the variance of each LayerNorm's weight matrix entries to the mean absolute value of each layer's embedding entries. The results can be found in Figure 3. The highest value found was 0.0714, at Layer 0 – meaning that the average entry in the Layer 0 embeddings was over 14.01 times larger than the variance between entries in that layer's ln_1 LayerNorm weight. This supports our assertion that LayerNorm scaling matrices can largely be treated as constants.

Figure 3: Ratio between LayerNorm weight matrix variances and mean absolute entries of each layer's embeddings

One possible guess as to why this behavior occurs: much of the computation taking place in the model does not occur with respect to basis directions in activation space, but the diagonal LayerNorm weight matrices can only act on these very basis directions. Therefore, the weight matrices end up "settling" on the same nearly-constant value in all entries.
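The claim that a near-constant diagonal $W$ mostly scales rather than rotates can be illustrated numerically. The variance 0.0078 below matches the figure reported above, but the vectors are random toy data, not the model's actual weights or activations:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
d = 512
# Diagonal LayerNorm weight whose entries have small variance across
# coordinates, mimicking the near-constant values observed empirically.
w = 1.0 + rng.normal(scale=np.sqrt(0.0078), size=d)
n = rng.normal(size=d)                # a random feature direction
# Compare applying the full diagonal W against its scalar approximation.
sim = cosine(w * n, np.mean(w) * n)
```

The cosine similarity comes out very close to 1, consistent with treating $W$ as a scalar when only feature directions matter.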
E.3 LayerNorm Gradients Are Inversely Proportional to Input Norms

In §2.4, it was stated that LayerNorm gradients are not constant, but instead depend on the norm of the input to the LayerNorm. To elaborate: the gradient of $n^T(\sqrt{d}\, W \mathrm{LayerNorm}(x) + b)$ can be shown to be $\frac{\sqrt{d}\, W}{\|Px\|} P \left(I - \frac{(Px)(Px)^T}{\|Px\|^2}\right) n$ (see Appendix E.5). $P$ and $\left(I - \frac{(Px)(Px)^T}{\|Px\|^2}\right)$ are both orthogonal projections that leave $\|n\|$ relatively untouched, so the term most responsible for affecting the norm of the feature vector is the $\frac{\sqrt{d}\, W}{\|Px\|}$ factor. Now, by Lemma 1 in Appendix E.5, we have $\frac{\sqrt{d}\, W}{\|Px\|} \approx \frac{\sqrt{d}\, W}{\|x\|}$. Thus, if $\widetilde{\|x\|}$ is a good estimate of $\|x\|$ for a given set of input prompts at a given layer, then a good approximation of the gradient of a LayerNorm sublayer is given by $\left(\sqrt{d}\, W / \widetilde{\|x\|}\right) n$. This approximation can be used to speed up the computation of gradients for LayerNorms.

E.4 Feature Vector Norms with LayerNorms

In §4.1, we explained that looking at the norms of feature vectors can provide a fast and reasonable guess of which model components will be the most important for a given task.
However, there is a caveat that must be taken into account regarding LayerNorms. As shown in Appendix E.3, the gradient of a LayerNorm sublayer is approximately inversely proportional to the norm of the input to the LayerNorm sublayer. Now, assume that we have a computational path beginning at a LayerNorm, where $\widetilde{\|x\|}$ is an estimate of the norm of the inputs to that LayerNorm. Let $y$ be the feature vector for this computational path. Then we have $y \approx \left(\sqrt{d}\, W / \widetilde{\|x\|}\right) y'$, where $y'$ is the feature vector for the "tail" of the computational path that comes after the initial LayerNorm. Given an input $x$, we have

$y \cdot x \approx \left(\sqrt{d}\, W / \widetilde{\|x\|}\right) y' \cdot x = \frac{\sqrt{d}}{\widetilde{\|x\|}} \|W y'\| \|x\| \cos\theta \approx \sqrt{d}\, \|W y'\| \cos\theta$,

where the last step uses $\|x\| \approx \widetilde{\|x\|}$. Therefore, the dot product of an input vector with the feature vector $y$ is approximately proportional to $\sqrt{d}\, \|W y'\|$ – not $\sqrt{d}\, \|W y'\| / \widetilde{\|x\|}$. As such, if one wants to use feature vector norms to predict which feature vectors will have the highest dot products with their inputs, then the feature vector must not be multiplied by $1 / \widetilde{\|x\|}$. A convenient consequence of this is that when analyzing computational paths that do not involve any compositionality (e.g. analyzing a single attention head or a single MLP), ignoring LayerNorms entirely still provides an accurate idea of the relative importance of attention heads.
This is because the only time that a $\left(\sqrt{d}\, W / \widetilde{\|x\|}\right)$ term appears with the factor of $1 / \widetilde{\|x\|}$ included is for the final LayerNorm before the logits output. As such, since this factor does not depend on the layer of the component being analyzed, it can be ignored.

E.5 Proof of Theorem 1

Theorem 1. Define $f(x; n) = n \cdot \mathrm{LayerNorm}(x)$. Define $\theta(x; n) = \arccos\left(\frac{n \cdot \nabla_x f(x;n)}{\|n\| \|\nabla_x f(x;n)\|}\right)$ – that is, $\theta(x;n)$ is the angle between $n$ and $\nabla_x f(x;n)$. Then if $n \sim N(0, I)$ in $\mathbb{R}^d$ and $d \ge 8$, we have $\mathbb{E}[\theta(x;n)] < 2 \arccos\left(\sqrt{1 - \frac{1}{d-1}}\right)$.

To prove this, we will introduce a lemma.

Lemma 1. Let $y$ be an arbitrary vector, and let $A = I - \frac{v v^T}{\|v\|^2}$ be the orthogonal projection onto the hyperplane normal to $v$. Then the cosine similarity between $y$ and $Ay$ is given by $\sqrt{1 - \cos(\theta)^2}$, where $\cos(\theta)$ is the cosine similarity between $y$ and $v$.

Proof. Assume without loss of generality that $y$ is a unit vector. (Otherwise, we could rescale it without affecting the angle between $y$ and $v$, or the angle between $y$ and $Ay$.) We have $Ay = y - \frac{y \cdot v}{\|v\|^2} v$.
Then,

$y \cdot Ay = y \cdot \left(y - \frac{y \cdot v}{\|v\|^2} v\right) = \|y\|^2 - \frac{(y \cdot v)^2}{\|v\|^2} = 1 - \frac{(y \cdot v)^2}{\|v\|^2}$

and

$\|Ay\|^2 = \left(y - \frac{y \cdot v}{\|v\|^2} v\right) \cdot \left(y - \frac{y \cdot v}{\|v\|^2} v\right)$
$= y \cdot \left(y - \frac{y \cdot v}{\|v\|^2} v\right) - \frac{y \cdot v}{\|v\|^2} v \cdot \left(y - \frac{y \cdot v}{\|v\|^2} v\right)$
$= y \cdot Ay - \frac{y \cdot v}{\|v\|^2} v \cdot \left(y - \frac{y \cdot v}{\|v\|^2} v\right)$
$= y \cdot Ay - \frac{(y \cdot v)^2}{\|v\|^2} + \left\|\frac{y \cdot v}{\|v\|^2} v\right\|^2$
$= y \cdot Ay - \frac{(y \cdot v)^2}{\|v\|^2} + \frac{(y \cdot v)^2}{\|v\|^4} \|v\|^2$
$= y \cdot Ay - \frac{(y \cdot v)^2}{\|v\|^2} + \frac{(y \cdot v)^2}{\|v\|^2} = y \cdot Ay$

Now, the cosine similarity between $y$ and $Ay$ is given by

$\frac{y \cdot Ay}{\|y\| \|Ay\|} = \frac{y \cdot Ay}{\|Ay\|} = \frac{\|Ay\|^2}{\|Ay\|} = \|Ay\|$

At this point, note that $\|Ay\| = \sqrt{y \cdot Ay} = \sqrt{1 - \frac{(y \cdot v)^2}{\|v\|^2}}$. But $\frac{y \cdot v}{\|v\|}$ is just the cosine similarity between $y$ and $v$ (since $y$ is a unit vector). Denoting the angle between $y$ and $v$ by $\theta$, we thus have

$\|Ay\| = \sqrt{1 - \frac{(y \cdot v)^2}{\|v\|^2}} = \sqrt{1 - \cos(\theta)^2}$. ∎

Now, we are ready to prove Theorem 1.

Proof. First, as noted by Brody et al. (2023), we have $\mathrm{LayerNorm}(x) = \frac{Px}{\|Px\|}$, where $P = I - \frac{1}{d}\vec{1}\vec{1}^T$ is the orthogonal projection onto the hyperplane normal to $\vec{1}$, the vector of all ones.
Thus, we have $f(x;n) = n^T\left(\frac{Px}{\|Px\|}\right)$. Using the multivariate chain rule, along with the rule that the derivative of $\frac{x}{\|x\|}$ is $\frac{I}{\|x\|} - \frac{x x^T}{\|x\|^3}$ (see §2.6.1 of Petersen & Pedersen (2012)), we have

$\nabla_x f(x;n) = \left(n^T \left(\frac{I}{\|Px\|} - \frac{(Px)(Px)^T}{\|Px\|^3}\right) P\right)^T$
$= \left(\frac{1}{\|Px\|} n^T \left(I - \frac{(Px)(Px)^T}{\|Px\|^2}\right) P\right)^T$
$= \frac{1}{\|Px\|} P \left(I - \frac{(Px)(Px)^T}{\|Px\|^2}\right) n$

because $P$ is symmetric. Denote $Q = I - \frac{(Px)(Px)^T}{\|Px\|^2}$; note that this is the orthogonal projection onto the hyperplane normal to $Px$. We now have $\nabla_x f(x;n) = \frac{1}{\|Px\|} P Q n$.
Because we only care about the angle between $n$ and $\nabla_x f(x;n)$, it suffices to look at the angle between $n$ and $PQn$, ignoring the $\frac{1}{\|Px\|}$ term. Denote the angle between $n$ and $PQn$ as $\theta(x, n)$. (Note that $\theta$ is also a function of $x$, because $Q$ is a function of $x$.) Then if $\theta_Q(x,n)$ is the angle between $n$ and $Qn$, and $\theta_P(x,n)$ is the angle between $Qn$ and $PQn$, then $\theta(x,n) \le \theta_Q(x,n) + \theta_P(x,n)$, so $\mathbb{E}[\theta(x,n)] \le \mathbb{E}[\theta_Q(x,n)] + \mathbb{E}[\theta_P(x,n)]$.

Using Lemma 1, we have $\theta_Q(x,n) = \arccos\left(\sqrt{1 - \cos(\phi(n, Px))^2}\right)$, where $\phi(n, Px)$ is the angle between $n$ and $Px$. Now, because $n \sim N(0, I)$, we have $\mathbb{E}[\cos(\phi(n, Px))^2] = 1/d$, using the well-known fact that the expected squared dot product between a uniformly distributed unit vector in $\mathbb{R}^d$ and a given unit vector in $\mathbb{R}^d$ is $1/d$.

At this point, define $g(t) = \arccos\left(\sqrt{1-t}\right)$ and $h(t) = g'\left(\frac{1}{d-1}\right)\left(t - \frac{1}{d-1}\right) + g\left(\frac{1}{d-1}\right)$, the tangent line to $g$ at $t = \frac{1}{d-1}$.
Then if $\frac{1}{d-1} < c$, where $c$ is the least solution to $g'(c) = \frac{\pi - 2g(c)}{2(1 - c)}$, then $h(t) \ge g(t)$. (Note that $g(t)$ is convex on $(0, 0.5]$ and concave on $[0.5, 1)$. Therefore, there are exactly two solutions to $g'(c) = \frac{\pi - 2g(c)}{2(1 - c)}$. The lesser of the two solutions is the value at which $g'(c)$ equals the slope of the line between $(c, g(c))$ and $(1, \pi/2)$ – the latter point being the maximum of $g$ – at the same time that $g''(c) \ge 0$.) One can compute $c \approx 0.155241\ldots$, so if $d \ge 8$, then $1/(d-1) < c$ is satisfied, so $h(t) \ge g(t)$. Thus, we have the following inequality:
\begin{align*}
h(1/(d-1)) &> h(1/d) \\
&= h\left(\mathbb{E}\left[\cos^2(\phi(n, Px))\right]\right) \\
&= \mathbb{E}\left[h\left(\cos^2(\phi(n, Px))\right)\right] && \text{due to linearity} \\
&\ge \mathbb{E}\left[g\left(\cos^2(\phi(n, Px))\right)\right] && \text{because $h(t) \ge g(t)$ for all $t$} \\
&= \mathbb{E}[\theta_Q(x, n)]
\end{align*}
Now, $h(1/(d-1)) = g(1/(d-1)) = \arccos\left(\sqrt{1 - \frac{1}{d-1}}\right)$.
Thus, we have that $\arccos\left(\sqrt{1 - \frac{1}{d-1}}\right) > \mathbb{E}[\theta_Q(x, n)]$.

The next step is to determine an upper bound for $\mathbb{E}[\theta_P(x, n)]$. By Lemma 1, we have that $\theta_P(x, n) = \arccos\left(\sqrt{1 - \cos^2(\phi(Qn, \vec{1}))}\right)$. Now, note that because $n \sim \mathcal{N}(0, I)$, $Qn$ is distributed according to a unit Gaussian in $\operatorname{Im} Q$, the $(d-1)$-dimensional hyperplane orthogonal to $Px$. Because $\vec{1}$ is orthogonal to $Px$ (by the definition of $P$) and $Px$ is orthogonal to $\operatorname{Im} Q$, this means that $\vec{1} \in \operatorname{Im} Q$. Now, let us apply the same fact from earlier: the expected squared dot product between a uniformly distributed unit vector in $\mathbb{R}^{d-1}$ and a given unit vector in $\mathbb{R}^{d-1}$ is $1/(d-1)$. Thus, we have that $\mathbb{E}\left[\cos^2(\phi(Qn, \vec{1}))\right] = 1/(d-1)$. From this, by the same logic as in the previous case, $\arccos\left(\sqrt{1 - \frac{1}{d-1}}\right) \ge \mathbb{E}[\theta_P(x, n)]$.
Adding this inequality to the inequality for $\mathbb{E}[\theta_Q(x, n)]$, we have
$$2\arccos\left(\sqrt{1 - \tfrac{1}{d-1}}\right) > \mathbb{E}[\theta_Q(x, n)] + \mathbb{E}[\theta_P(x, n)] \ge \mathbb{E}[\theta(x, n)]. \quad \blacksquare$$

E.6 Empirical Results Regarding LayerNorm Gradients

In Section 2.4, we mention that we empirically found that feature vectors computed by differentiating through LayerNorms had high cosine similarities with feature vectors computed while ignoring LayerNorms. In particular, for the feature vectors considered in Section 4.3, these cosine similarities and angles are given in Table 4.

Task                                           Cosine similarity   Angle (radians)
Subject pronoun prediction (attention 6::6)    0.99779             0.0664
C vs. Python                                   0.99936             0.0358
Political party prediction                     0.99900             0.0447

Table 4: Cosine similarities between the feature vectors used in Section 4.3, computed with and without LayerNorms

Theorem 1 can be used to estimate the expected angle in radians between a feature vector computed with LayerNorm and a feature vector computed without LayerNorm. Given that the model has dimensionality 512, this expected angle is approximately 0.0442 radians. This is a decent estimate of the empirical values that we found – especially considering that Theorem 1 assumes that feature vectors are normally distributed, and that it does not take into account the scaling matrix after the LayerNorm described in Appendix E.
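To make the bound concrete, the following sketch (ours, not from the paper) Monte Carlo estimates $\mathbb{E}[\theta(x,n)]$ against the Theorem 1 bound at a small $d$ for speed, and evaluates the single-term estimate $\arccos(\sqrt{1 - 1/(d-1)})$ quoted above for $d = 512$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # smaller than the model's d = 512, purely to keep the check fast
P = np.eye(d) - np.ones((d, d)) / d
x = rng.normal(size=d)
Px = P @ x
u = Px / np.linalg.norm(Px)
Q = np.eye(d) - np.outer(u, u)

def angle(a, b):
    c = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(c, -1.0, 1.0))

# Monte Carlo estimate of E[theta(x, n)] over n ~ N(0, I)
mean_theta = np.mean([angle(n, P @ Q @ n) for n in rng.normal(size=(2000, d))])
bound = 2 * np.arccos(np.sqrt(1 - 1 / (d - 1)))
assert mean_theta <= bound

# The single-term estimate for d = 512 quoted in the text
estimate = np.arccos(np.sqrt(1 - 1 / 511))
assert abs(estimate - 0.0442) < 5e-4
```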
Additionally, while the feature vectors for subject pronoun prediction have a higher angle between them of 0.0664 radians, this can possibly be attributed to the fact that the circuit for these feature vectors goes through multiple LayerNorms.

Appendix F Applicability to Modified Transformer Architectures

Many modern transformers include architectural modifications relative to older transformer models such as GPT2 (Radford et al., 2019). For instance, the open-source Llama 3 family of models (AI@Meta, 2024) modifies attention sublayers by using grouped-query attention (GQA) (Ainslie et al., 2023), and uses RMSNorm (Zhang & Sennrich, 2019) instead of LayerNorm. It is thus natural to consider the extent to which ObProp generalizes to models incorporating these architectural changes.

Happily, ObProp functions in exactly the same way in the presence of GQA and RMSNorm. Because GQA only affects the QK circuit of attention, and ObProp only addresses the OV circuit of attention, GQA does not change the operation of our method. As for RMSNorm: from a practical perspective, Algorithm 1 still works the same (because RMSNorm can simply be treated as another nonlinearity). And from a theoretical perspective, RMSNorm is actually easier to handle than vanilla LayerNorm: for Theorem 1, the bound becomes tighter ($\arccos(\sqrt{1 - 1/d})$), the normality assumption can be changed to apply to $x$ instead of $n$, and the proof follows almost immediately from Lemma 1. (This is because RMSNorm does not include the $P$ projection found in the proof of Theorem 1.)

Appendix G Further Debiasing Experiments

We ran further experiments on the artificial dataset described in Appendix A, in order to determine the extent to which the feature vectors yielded by observable propagation could be used for debiasing the model’s outputs. The idea is similar to that presented by Li et al. (2023): by adding a feature vector to the activations at a given layer for the name token, we can hopefully shift the model’s output to be less biased.

Specifically, we used the following methodology. We paired each of the 300 female-name prompts for $n_{\text{bias}}$ with one of the 300 male-name prompts for $n_{\text{bias}}$. For each prompt pair, we ran the model on the female-name prompt and on the male-name prompt, recording the scores with respect to the $n_{\text{bias}}$ observable. We then ran the model on the male-name prompt – but added a multiple of the 6::6 feature vector for $n_{\text{bias}}$ described in §4.2 to the model’s activations for the name token before the LayerNorm preceding the layer 6 attention sublayer. In particular, let $y$ be the unit 6::6 feature vector for $n_{\text{bias}}$, let $x_{\text{female}}$ be the activation vector for the name token at that layer for the female prompt, and let $x_{\text{male}}$ be the activation vector for the name token at that layer for the male prompt. Then we added the vector $y' = ((x_{\text{female}} - x_{\text{male}}) \cdot y)\, y$ to $x_{\text{male}}$.

If the model were a linear model whose output was solely determined by the dot product of the input at this layer with the feature vector $y$, then the output of the model with $y'$ added to the male embeddings would be the same as the output of the model on the female prompt. Therefore, the difference between this “patched” output and the model’s output on the female prompt can be viewed as an indicator of the extent to which the feature vector is affected by nonlinearity in the model. We also ran this same experiment adding $2y'$ instead of $y'$ to the male embeddings, in order to get a stronger debiasing effect. The results are given in Table 5.
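The patch itself is a rank-one edit along the feature direction. A minimal sketch of the arithmetic (ours, with random stand-ins for the real activations and feature vector, which are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 512
# Hypothetical stand-ins: in the actual experiment, y is the unit 6::6
# feature vector and x_female / x_male are name-token activations.
y = rng.normal(size=d)
y /= np.linalg.norm(y)
x_female = rng.normal(size=d)
x_male = rng.normal(size=d)

# y' = ((x_female - x_male) . y) y : shifts x_male along y so that its
# projection onto the feature direction matches x_female's
y_prime = ((x_female - x_male) @ y) * y
x_patched = x_male + y_prime

assert np.isclose(x_patched @ y, x_female @ y)
# all directions orthogonal to y are left untouched
assert np.allclose(x_patched - (x_patched @ y) * y,
                   x_male - (x_male @ y) * y)
```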
We see that adding $y'$ to the activations for the male prompts is in fact able to cause the model’s output to become closer to that of the female prompts – although not as much as it would if the model were linear. But adding $2y'$ to the male prompts’ activations brings the model’s output to within an average of 1.3180 logits of the model’s output on the female prompts. And when the mean difference between the patched male prompt outputs and the female prompt outputs is calculated without taking the absolute value, this difference becomes even smaller – only 0.1316 logits on average – which indicates that adding $2y'$ to the male prompts’ activations sometimes even overshoots the model’s behavior on the female prompts. As such, we can infer that this feature vector obtained via observable propagation has utility in debiasing the model.

                                               Male prompt   Female prompt   Male patched +y'   Male patched +2y'
Mean n_bias score                              -2.947        5.011           -0.5489            4.879
Mean absolute difference with female scores    7.9903        0               5.6245             1.3180
Mean difference from female scores             7.9583        0               5.5599             0.1316

Table 5: The results of the debiasing experiments for the $n_{\text{bias}}$ observable. “Mean absolute difference with female scores” refers to the mean absolute difference between the $n_{\text{bias}}$ score for each male prompt (or male prompt with patched activations) and the score for the corresponding female prompt. “Mean difference from female scores” refers to the mean difference, without taking the absolute value, between the $n_{\text{bias}}$ score for the female prompt and the score for the corresponding (patched) male prompt.
We then wanted to investigate the extent to which adding this “debiasing vector” would harm the model’s performance on the pronoun prediction task. As such, we repeated these experiments on the dataset of prompts for $n_{\text{subj}}$, adding $2y'$ to the male activations. The results can be found in Table 6. They show that adding the “debiasing vector” to the male name embeddings also causes the model’s ability to correctly predict gendered pronouns to drop dramatically. This suggests that in cases such as this one, where the model uses the same features for undesirable outputs as it does for desirable outputs, inference-time interventions such as that presented by Li et al. (2023) may cause an inevitable decrease in model quality.

                                               Male prompt   Female prompt   Male patched +2y'
Mean n_subj score                              -5.1393       5.0404          4.8794
Mean absolute difference with female scores    10.180        0               2.148
Mean difference from female scores             10.180        0               0.161

Table 6: The results of the debiasing experiments for the $n_{\text{subj}}$ observable, adding $2y'$ to the male prompts’ activations.

Appendix H Proof of Theorem 2

Theorem 2. Let $y_1, y_2 \in \mathbb{R}^d$. Let $x$ be uniformly distributed on the hypersphere defined by the constraints $\|x\| = s$ and $x \cdot y_1 = k$.
Then we have
$$\mathbb{E}[x \cdot y_2] = k\,\frac{y_1 \cdot y_2}{\|y_1\|^2}$$
and the maximum and minimum values of $x \cdot y_2$ are given by
$$\frac{\|y_2\|}{\|y_1\|}\left(k\cos(\theta) \pm \sin(\theta)\sqrt{s^2\|y_1\|^2 - k^2}\right)$$
where $\theta$ is the angle between $y_1$ and $y_2$.

Before proving Theorem 2, we will prove a quick lemma.

Lemma 2. Let $S$ be a hypersphere with radius $r$ and center $c$. Then for a given vector $y$, the mean squared distance from $y$ to the sphere, $\mathbb{E}_{s \in S}\left[\|y - c\|^2\right]$, is given by $\|y - c\|^2 + r^2$.

Proof. Without loss of generality, assume that $S$ is centered at the origin (so $\|y - c\|^2 = \|y\|^2$). Induct on the dimension of $S$. As our base case, let $S$ be the 0-sphere consisting of a point in $\mathbb{R}^1$ at $-r$ and a point at $r$. Then
$$\mathbb{E}_{s \in S}\left[|y - s|^2\right] = \frac{(y - r)^2 + (y - (-r))^2}{2} = y^2 + r^2.$$
For our inductive step, assume the inductive hypothesis for spheres of dimension $d - 2$; we will prove the lemma for spheres of dimension $d - 1$ in an ambient space of dimension $d$.
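Before continuing the induction, Lemma 2’s statement is easy to check by Monte Carlo sampling. The following sketch (ours, not from the paper) draws points uniformly from a sphere and compares the empirical mean squared distance to $\|y - c\|^2 + r^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2.5
c = rng.normal(size=d)  # sphere center
y = rng.normal(size=d)

# Draw points uniformly from the sphere of radius r centered at c
g = rng.normal(size=(200_000, d))
s = c + r * g / np.linalg.norm(g, axis=1, keepdims=True)

empirical = np.mean(np.sum((y - s) ** 2, axis=1))
predicted = np.sum((y - c) ** 2) + r ** 2
assert abs(empirical - predicted) / predicted < 0.01
```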
Without loss of generality, let $y$ lie on the $x$-axis, so that we have $y = \begin{bmatrix} y_1 & 0 & 0 & \ldots \end{bmatrix}^T$. Next, divide $S$ into slices along the $x$-axis, denoting the slice at position $x = x_0$ as $S_{x_0}$. Then $S_{x_0}$ is a $(d-2)$-sphere centered at $\begin{bmatrix} x_0 & 0 & 0 & \ldots \end{bmatrix}^T$ with radius $\sqrt{r^2 - x_0^2}$. Now, by the law of total expectation,
$$\mathbb{E}_{s \in S}\left[\|y - s\|^2\right] = \mathbb{E}_{-r \le x \le r}\left[\mathbb{E}_{s' \in S_x}\left[\|y - s'\|^2\right]\right].$$
We then have that
\begin{align*}
\mathbb{E}_{s' \in S_x}\left[\|y - s'\|^2\right] &= \mathbb{E}\left[(y_1 - x)^2 + s_2^2 + s_3^2 + \cdots\right] \\
&= (y_1 - x)^2 + \mathbb{E}\left[s_2^2 + s_3^2 + \cdots\right]
\end{align*}
Once again, $S_x$ is a $(d-2)$-sphere defined by $s_2^2 + s_3^2 + \cdots = r^2 - x^2$. This means that, by the inductive hypothesis, $\mathbb{E}\left[s_2^2 + s_3^2 + \cdots\right] = r^2 - x^2$. Thus, we have
\begin{align*}
\mathbb{E}_{s' \in S_x}\left[\|y - s'\|^2\right] &= (y_1 - x)^2 + r^2 - x^2 \\
\mathbb{E}_{s \in S}\left[\|y - s\|^2\right] &= \mathbb{E}_{-r \le x \le r}\left[(y_1 - x)^2 + r^2 - x^2\right] \\
&= \frac{1}{2r}\int_{-r}^{r}\left[(y_1 - x)^2 + r^2 - x^2\right]dx \\
&= r^2 + y_1^2. \quad \blacksquare
\end{align*}
We are now ready to begin the main proof.

Proof. First, assume that $\|x\| = 1$. Now, the intersection of the $(d-1)$-sphere defined by $\|x\| = 1$ and the hyperplane $x \cdot y_1 = k$ is a hypersphere of dimension $d - 2$, oriented in the hyperplane $x \cdot y_1 = k$ and centered at $c_1 y_1$, where $c_1 = k / \|y_1\|^2$. Denote this $(d-2)$-sphere as $S$, and denote its radius by $r$. Next, define $c_2 = \frac{k}{y_2 \cdot y_1}$. Then $c_2 y_2 \cdot y_1 = k$, so $c_2 y_2$ lies in the same hyperplane as $S$. Additionally, because $c_1 y_1$ is in this hyperplane, and $c_1 y_1$ is also the normal vector for this hyperplane, the vectors $c_1 y_1$, $c_2 y_2$, and $c_1 y_1 - c_2 y_2$ form a right triangle, where $c_2 y_2$ is the hypotenuse and $c_1 y_1 - c_2 y_2$ is the leg opposite the angle $\theta$ between $y_1$ and $y_2$. As such, we have that $\|c_1 y_1 - c_2 y_2\| = \sin(\theta)\|c_2 y_2\|$.
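The right-triangle relation can be verified numerically for arbitrary $y_1$, $y_2$, $k$ (an illustrative check of ours, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
y1, y2 = rng.normal(size=d), rng.normal(size=d)
k = 0.7

c1 = k / (y1 @ y1)
c2 = k / (y2 @ y1)

# c1 * y1 and c2 * y2 both lie in the hyperplane {x : x . y1 = k}
assert np.isclose((c1 * y1) @ y1, k)
assert np.isclose((c2 * y2) @ y1, k)

# right-triangle relation: ||c1 y1 - c2 y2|| = sin(theta) ||c2 y2||
cos_t = y1 @ y2 / (np.linalg.norm(y1) * np.linalg.norm(y2))
sin_t = np.sqrt(1 - cos_t ** 2)
assert np.isclose(np.linalg.norm(c1 * y1 - c2 * y2),
                  sin_t * np.linalg.norm(c2 * y2))
```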
Furthermore, we have that $c_1 y_1 \cdot c_2 y_2 = \frac{k^2}{\|y_1\|^2}$, that $\|c_1 y_1\| = \frac{|k|}{\|y_1\|}$, and that $\|c_2 y_2\| = \frac{|k|}{\|y_1\| \, |\cos\theta|}$.

We will now begin to prove that the maximum and minimum values of $y_2 \cdot x$ are given by $\frac{\|y_2\|}{\|y_1\|}\left(k\cos(\theta) \pm |\sin(\theta)|\sqrt{s^2\|y_1\|^2 - k^2}\right)$. To start, note that the nearest point on $S$ to $c_2 y_2$ and the farthest point on $S$ from $c_2 y_2$ are located at the intersection of $S$ with the line between $c_2 y_2$ and $c_1 y_1$. To see this, let $x_+$ be at the intersection of $S$ and the line between $c_2 y_2$ and $c_1 y_1$. We will show that $x_+$ is the nearest point on $S$ to $c_2 y_2$. Let $x'_+ \in S$ with $x'_+ \ne x_+$. Then we have the following cases:

• Case 1: $c_2 y_2$ is outside of $S$.
Then $\|c_2 y_2 - c_1 y_1\| = \|c_2 y_2 - x_+\| + \|x_+ - c_1 y_1\|$, because $c_2 y_2$, $x_+$, and $c_1 y_1$ are collinear – so $\|c_2 y_2 - c_1 y_1\| = \|c_2 y_2 - x_+\| + r$ (because $x_+ \in S$). By the triangle inequality, we have $\|c_2 y_2 - c_1 y_1\| \le \|c_2 y_2 - x'_+\| + \|x'_+ - c_1 y_1\| = \|c_2 y_2 - x'_+\| + r$. But this means that $\|c_2 y_2 - x_+\| \le \|c_2 y_2 - x'_+\|$.

• Case 2: $c_2 y_2$ is inside of $S$. Then $\|c_2 y_2 - c_1 y_1\| = \|x_+ - c_1 y_1\| - \|c_2 y_2 - x_+\|$, because $c_2 y_2$, $x_+$, and $c_1 y_1$ are collinear – so $\|c_2 y_2 - c_1 y_1\| = r - \|c_2 y_2 - x_+\|$.
By the triangle inequality, we have $\|x'_+ - c_1 y_1\| \le \|c_2 y_2 - x'_+\| + \|c_2 y_2 - c_1 y_1\|$, so $\|x'_+ - c_1 y_1\| \le \|c_2 y_2 - x'_+\| + r - \|c_2 y_2 - x_+\|$. But since $\|x'_+ - c_1 y_1\| = r$, this means that $\|c_2 y_2 - x_+\| \le \|c_2 y_2 - x'_+\|$.

A similar argument will show that $x_-$, the farthest point on $S$ from $c_2 y_2$, is also located at the intersection of $S$ with the line between $c_2 y_2$ and $c_1 y_1$.

Now, let us find the values of $x_+$ and $x_-$. The line between $c_2 y_2$ and $c_1 y_1$ can be parameterized by a scalar $t$ as $c_1 y_1 + t(c_2 y_2 - c_1 y_1)$. Then $x_+$ and $x_-$ are given by $c_1 y_1 + t^*(c_2 y_2 - c_1 y_1)$, where the $t^*$ are the solutions to the equation $\|c_1 y_1 + t(c_2 y_2 - c_1 y_1)\| = 1$.
We have the following:
\begin{align*}
1 &= \|c_1 y_1 + t(c_2 y_2 - c_1 y_1)\|^2 \\
&= \|c_1 y_1\|^2 + 2t\left(c_1 y_1 \cdot (c_2 y_2 - c_1 y_1)\right) + t^2\|c_2 y_2 - c_1 y_1\|^2 \\
&= \|c_1 y_1\|^2 + 2t\left((c_1 y_1 \cdot c_2 y_2) - \|c_1 y_1\|^2\right) + t^2\|c_2 y_2\|^2\sin^2\theta \\
&= \frac{k^2}{\|y_1\|^2} + 2t\left(\frac{k^2}{\|y_1\|^2} - \frac{k^2}{\|y_1\|^2}\right) + t^2\,\frac{k^2}{\|y_1\|^2\cos^2\theta}\,\sin^2\theta \\
&= \frac{k^2}{\|y_1\|^2}\left(t^2\tan^2\theta + 1\right)
\end{align*}
Thus, solving for $t$, we have that $t^* = \pm\frac{\sqrt{\|y_1\|^2 - k^2}}{|k|\tan\theta}$.
Therefore, we have that
\begin{align*}
x_+, x_- &= c_1 y_1 + t^*(c_2 y_2 - c_1 y_1) \\
&= \frac{k\, y_1}{\|y_1\|^2} + \left(\frac{\pm\sqrt{\|y_1\|^2 - k^2}}{|k|\tan\theta}\right)\left(\frac{k\, y_2}{y_1 \cdot y_2} - \frac{k\, y_1}{\|y_1\|^2}\right) \\
&= k\left[\frac{y_1}{\|y_1\|^2} \pm \left(\frac{\sqrt{\|y_1\|^2 - k^2}}{|k|\tan\theta}\right)\left(\frac{y_2}{y_1 \cdot y_2} - \frac{y_1}{\|y_1\|^2}\right)\right]
\end{align*}
and hence
\begin{align*}
y_2 \cdot x_+,\; y_2 \cdot x_- &= y_2 \cdot k\left[\frac{y_1}{\|y_1\|^2} \pm \left(\frac{\sqrt{\|y_1\|^2 - k^2}}{|k|\tan\theta}\right)\left(\frac{y_2}{y_1 \cdot y_2} - \frac{y_1}{\|y_1\|^2}\right)\right] \\
&= \frac{k\, y_1 \cdot y_2}{\|y_1\|^2} \pm \left(\cot\theta\sqrt{\|y_1\|^2 - k^2}\right)\left(\frac{y_2 \cdot y_2}{y_1 \cdot y_2} - \frac{y_1 \cdot y_2}{\|y_1\|^2}\right) \\
&= \frac{k\, y_1 \cdot y_2}{\|y_1\|^2} \pm \left(\cot\theta\sqrt{\|y_1\|^2 - k^2}\right)\left(\frac{\|y_2\|}{\|y_1\|\cos\theta} - \frac{\|y_2\|\cos\theta}{\|y_1\|}\right) \\
&= \frac{k\, y_1 \cdot y_2}{\|y_1\|^2} \pm \left(\cot\theta\sqrt{\|y_1\|^2 - k^2}\right)\frac{\|y_2\|}{\|y_1\|}\left(\frac{1}{\cos\theta} - \cos\theta\right) \\
&= \frac{k\, y_1 \cdot y_2}{\|y_1\|^2} \pm \left(\cot\theta\sqrt{\|y_1\|^2 - k^2}\right)\frac{\|y_2\|}{\|y_1\|}\sin\theta\tan\theta \\
&= \frac{k\, y_1 \cdot y_2}{\|y_1\|^2} \pm \frac{\|y_2\|}{\|y_1\|}\sin\theta\sqrt{\|y_1\|^2 - k^2} \\
&= \frac{\|y_2\|}{\|y_1\|}\left(k\cos(\theta) \pm \sin(\theta)\sqrt{\|y_1\|^2 - k^2}\right)
\end{align*}
We will now prove that $\mathbb{E}[y_2 \cdot x] = k\,\frac{y_1 \cdot y_2}{\|y_1\|^2}$. Before we do, note that we can also use our value of $t^*$ to determine the squared radius of $S$.
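As a numerical check of the extrema formula in the $\|x\| = 1$ (i.e. $s = 1$) case handled by the proof, one can sample uniformly from the constraint set and compare against the closed form (a sketch of ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6
y1, y2 = rng.normal(size=d), rng.normal(size=d)
k = 0.4  # constraint x . y1 = k, with ||x|| = 1 (the s = 1 case)

# Uniform samples from {x : ||x|| = 1, x . y1 = k}: center c1*y1 plus a
# uniform direction of length r in the hyperplane orthogonal to y1
c1 = k / (y1 @ y1)
r = np.sqrt(1 - k ** 2 / (y1 @ y1))
M = np.column_stack([y1, rng.normal(size=(d, d - 1))])
basis = np.linalg.qr(M)[0][:, 1:]  # orthonormal basis of y1's complement
z = rng.normal(size=(100_000, d - 1))
x = c1 * y1 + r * (z / np.linalg.norm(z, axis=1, keepdims=True)) @ basis.T

theta = np.arccos(y1 @ y2 / (np.linalg.norm(y1) * np.linalg.norm(y2)))
scale = np.linalg.norm(y2) / np.linalg.norm(y1)
spread = np.sin(theta) * np.sqrt(y1 @ y1 - k ** 2)
hi = scale * (k * np.cos(theta) + spread)
lo = scale * (k * np.cos(theta) - spread)

vals = x @ y2
assert np.all(vals <= hi + 1e-9) and np.all(vals >= lo - 1e-9)
assert abs(vals.mean() - k * (y1 @ y2) / (y1 @ y1)) < 0.02
```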
We have that the squared radius of $S$ is given by
\begin{align*}
r^2 &= \|t^*(c_2 y_2 - c_1 y_1)\|^2\\
&= (t^*)^2 \|c_2 y_2 - c_1 y_1\|^2\\
&= (t^*)^2 \sin^2\theta\, \|c_2 y_2\|^2\\
&= \frac{\sin^2(\theta)\, k^2 / (\|y_1\|^2 \cos^2\theta)}{k^2 \tan^2\theta}\left(\|y_1\|^2 - k^2\right)\\
&= 1 - \frac{k^2}{\|y_1\|^2}
\end{align*}
We will use this result soon. Now, on to the main event. Begin by noting that $y_2 \cdot x = \|y_2\|\|x\|\cos(y_2, x) = \|y_2\|\cos(y_2, x)$, where $\cos(y_2, x)$ is the cosine of the angle between $y_2$ and $x$. Now, $\cos(y_2, x) = \operatorname{signum}(c_2)\cos(c_2 y_2, x)$.
And we have that
\begin{align*}
\|x - c_2 y_2\|^2 &= \|x\|^2 + \|c_2 y_2\|^2 - 2\|x\|\|c_2 y_2\|\cos(c_2 y_2, x)\\
&= 1 + \|c_2 y_2\|^2 - 2\|c_2 y_2\|\cos(c_2 y_2, x).
\end{align*}
Therefore, we have
\begin{align*}
\cos(y_2, x) &= \operatorname{signum}(c_2)\cos(c_2 y_2, x)\\
&= \operatorname{signum}(c_2)\,\frac{\|x - c_2 y_2\|^2 - 1 - \|c_2 y_2\|^2}{-2\|c_2 y_2\|}\\
&= \operatorname{signum}(c_2)\,\frac{1 + \|c_2 y_2\|^2 - \|x - c_2 y_2\|^2}{2\|c_2 y_2\|}
\end{align*}
\begin{align*}
y_2 \cdot x &= \|y_2\|\cos(y_2, x)\\
&= \operatorname{signum}(c_2)\,\|y_2\|\,\frac{1 + \|c_2 y_2\|^2 - \|x - c_2 y_2\|^2}{2\|c_2 y_2\|}
\end{align*}
\begin{align*}
\mathbb{E}[y_2 \cdot x] &= \mathbb{E}\left[\operatorname{signum}(c_2)\,\|y_2\|\,\frac{1 + \|c_2 y_2\|^2 - \|x - c_2 y_2\|^2}{2\|c_2 y_2\|}\right]\\
&= \operatorname{signum}(c_2)\,\|y_2\|\,\frac{1 + \|c_2 y_2\|^2 - \mathbb{E}\left[\|x - c_2 y_2\|^2\right]}{2\|c_2 y_2\|}\\
&= \operatorname{signum}(c_2)\,\|y_2\|\,\frac{1 + \|c_2 y_2\|^2 - \left(1 - \frac{k^2}{\|y_1\|^2} + \|c_1 y_1 - c_2 y_2\|^2\right)}{2\|c_2 y_2\|}
\end{align*}
This last line uses Lemma 2: $c_1 y_1$ is the center of $S$, so the expected squared distance between $c_2 y_2$ and a point on $S$ is given by $1 - \frac{k^2}{\|y_1\|^2} + \|c_1 y_1 - c_2 y_2\|^2$, where $1 - \frac{k^2}{\|y_1\|^2}$ is the squared radius of $S$ and $\|c_1 y_1 - c_2 y_2\|^2$ is the squared distance from $c_2 y_2$ to the center. We can use this lemma because $c_2 y_2$ is in the same hyperplane as $S$, so we can treat this situation as being set in a space of dimension $d - 1$. Now, continue to simplify:
\begin{align*}
\mathbb{E}[y_2 \cdot x] &= \operatorname{signum}(c_2)\,\|y_2\|\,\frac{1 + \|c_2 y_2\|^2 - \left(1 - \frac{k^2}{\|y_1\|^2} + \|c_1 y_1 - c_2 y_2\|^2\right)}{2\|c_2 y_2\|}\\
&= \operatorname{signum}(c_2)\,\|y_2\|\,\frac{\|c_2 y_2\|^2 + \frac{k^2}{\|y_1\|^2} - \sin^2\theta\,\|c_2 y_2\|^2}{2\|c_2 y_2\|}\\
&= \operatorname{signum}(c_2)\,\|y_2\|\,\frac{\|c_2 y_2\|^2\cos^2\theta + \frac{k^2}{\|y_1\|^2}}{2\|c_2 y_2\|}\\
&= \operatorname{signum}(c_2)\,\|y_2\|\,\frac{1}{2}\left(\|c_2 y_2\|\cos^2\theta + \frac{|k||\cos\theta|}{\|y_1\|}\right)\\
&= \operatorname{signum}(c_2)\,\|y_2\|\,\frac{1}{2}\left(\frac{|k||\cos\theta|}{\|y_1\|} + \frac{|k||\cos\theta|}{\|y_1\|}\right)\\
&= \operatorname{signum}(c_2)\,|k|\,\frac{\|y_2\|}{\|y_1\|}\,|\cos\theta|\\
&= k\,\frac{\|y_2\|}{\|y_1\|}\cos\theta\\
&= \frac{k\, y_1 \cdot y_2}{\|y_1\|^2}
\end{align*}
The last thing to do is to note that the above formulas are only valid when $\|x\| = 1$. But if $\|x\| = s$, this is equivalent to the case when $\|x\| = 1$ if we scale $y_1$ and $y_2$ by $s$. Scaling those two vectors by $s$ gives us the final formulas in Theorem 2. ∎

Appendix I Top Activating Tokens on 1M Tokens from The Pile for the n_bias and n_subj 6::6 Feature Vectors

In §4.2, in order to confirm that the feature vectors that we found for attention head 6::6 corresponded to notions of gender, we looked at the tokens from a dataset of 1M tokens from The Pile (see Appendix A) that maximally and minimally activated these feature vectors.

I.1 n_bias Feature Vector

The thirty highest-activating tokens, along with the prompts from which they came, and their scores, are given below:

1.
Highest-activating token #1: • Excerpt from prompt: "son Tower** and in front of it a beautiful statue of St Edmund by Dame Elisabeth Frink (1976). The rest of the abbey spreads eastward like a r" • Token: "abeth" • Score: 18.372 2. Highest-activating token #2: • Excerpt from prompt: " a gorgeous hammerbeam roof and a striking sculpture of the crucified Christ by Dame Elisabeth Frink in the north transept. impressive entrance porch has a" • Token: "abeth" • Score: 17.388 3. Highest-activating token #3: • Excerpt from prompt: " the elaborate Portuguese silver service or the impressive Egyptian service, a divorce present from Napoleon to Josephine" • Token: "ine" • Score: 16.815 4. Highest-activating token #4: • Excerpt from prompt: " rocky beach of **Priest’s Cove**, while nearby are the ruins of **St Helen’s Oratory**, supposedly one of the first Christian chapels built in West Cornwall" • Token: " Helen" • Score: 16.309 5. Highest-activating token #5: • Excerpt from prompt: ", and opened in 1892, this brainchild of his Parisian actress wife, Josephine, was built by French architect Jules Pellechet to display a collection the Bow" • Token: "ine" • Score: 16.267 6. Highest-activating token #6: • Excerpt from prompt: " the film _Bridget Jones’s Diary;_ a local house was used as Bridget’s parents’ home. 1Sights TowerTOWER" • Token: "idget" • Score: 16.171 7. Highest-activating token #7: • Excerpt from prompt: ") by his side and a loyal band of followers in support. Arthur went on to slay Rita Gawr, a giant who butchered" • Token: " Rita" • Score: 16.079 8. Highest-activating token #8: • Excerpt from prompt: " for the fact that Sir Robert Walpole’s grandson sold the estate’s splendid art collection to Catherine the Great of Russia to stave off debts – those paintings formed the foundation of the" • Token: " Catherine" • Score: 16.039 9. 
Highest-activating token #9: • Excerpt from prompt: " Highlights include the magnificent gold coach of 1762 and the 1910 Glass Coach (Prince William and Catherine Middleton actually used the 1902 State Landau for their wedding in 2011). " • Token: " Catherine" • Score: 15.967 10. Highest-activating token #10: • Excerpt from prompt: " by Canaletto, El Greco and Goya as well as 55 paintings by Josephine herself. Among the 15,000 other objets d’art are incredible dresses from" • Token: "ine" • Score: 15.906 11. Highest-activating token #11: • Excerpt from prompt: " looks like something from a children’s storybook (a fact not unnoticed by the author Antonia Barber, who set her much-loved fairy-tale _The Mousehole Cat" • Token: "ia" • Score: 15.582 12. Highest-activating token #12: • Excerpt from prompt: ". Precious little now remains save for a few nave walls, the ruined **St Mary’s chapel**, and the crossing arches, which may" • Token: " Mary" • Score: 15.443 13. Highest-activating token #13: • Excerpt from prompt: ". northern terminus of the Welsh Highland Railway is on St Helen’s Rd. Trains run to Porthmadog (£35 return, 2½" • Token: " Helen" • Score: 15.374 14. Highest-activating token #14: • Excerpt from prompt: "2 ### KING RICHARD I ’s an amazing story. Philippa Langley, a member of the Richard I Society, spent four-and-a" • Token: "a" • Score: 15.358 15. Highest-activating token #15: • Excerpt from prompt: " pit (which can still be seen) from the granary above. In 1566, Mary, Queen of Scots famously visited the wounded tenant of the castle, Lord Bothwell," • Token: " Mary" • Score: 15.312 16. Highest-activating token #16: • Excerpt from prompt: " Richard I, Henry VIII and Charles I. It is most famous as the home of Catherine Parr (Henry VIII’s widow) and her second husband, Thomas Seymour. Princess" • Token: " Catherine" • Score: 15.275 17. 
Highest-activating token #17: • Excerpt from prompt: " Peninsula #### Bodmin Moor #### Isles of Scilly #### St Mary’s #### Tresco #### Bryher #### St Martin" • Token: " Mary" • Score: 15.246 18. Highest-activating token #18: • Excerpt from prompt: "’. the cathedral’s eastern end is the grave of the WWI heroine Edith" • Token: "ith" • Score: 15.182 19. Highest-activating token #19: • Excerpt from prompt: " many people visit for the region’s literary connections; William Wordsworth, Beatrix Potter, Arthur Ransome and John Ruskin all found inspiration here. " • Token: "rix" • Score: 15.135 20. Highest-activating token #20: • Excerpt from prompt: " Peninsula #### Bodmin Moor #### Isles of Scilly #### St Mary’s #### Tresco #### Bryher #### St Martin" • Token: " Mary" • Score: 15.111 21. Highest-activating token #21: • Excerpt from prompt: " _Mayor of Casterbridge_ locations hidden among modern Dorchester. They include **Lucetta’s House**, a grand Georgian affair with ornate door posts in Trinity St," • Token: "etta" • Score: 15.053 22. Highest-activating token #22: • Excerpt from prompt: " leads down to this little cove and the remains of the small Tudor fort of **St Catherine’s Castle**. BeachBEACH ( G" • Token: " Catherine" • Score: 15.051 23. Highest-activating token #23: • Excerpt from prompt: "-century **St Catherine’s Lighthouse** and its 14th-century counterpart, **St Catherine’s Or" • Token: " Catherine" • Score: 14.979 24. Highest-activating token #24: • Excerpt from prompt: " ) ; Castle Yard) stands behind a 15th-century gate near the church of St Mary de Castro ( MAP GOOGLE MAP ) ; Castle St)," • Token: " Mary" • Score: 14.968 25. Highest-activating token #25: • Excerpt from prompt: " the Glasgow School of Art. It was there that he met the also influential artist and designer Margaret Macdonald, whom he married; they collaborated on many projects and were major influences on" • Token: " Margaret" • Score: 14.930 26. 
Highest-activating token #26: • Excerpt from prompt: " Nov-Mar ) raising of the 16th-century warship the _Mary Rose_ in 1982 was an extraordinary feat of marine archaeology. Now the new £" • Token: "Mary" • Score: 14.699 27. Highest-activating token #27: • Excerpt from prompt: " was claimed by the Boleyn family and passed through the generations to Thomas, father of Anne Boleyn. Anne was executed by her husband Henry VIII in 1533, who" • Token: " Anne" • Score: 14.686 28. Highest-activating token #28: • Excerpt from prompt: ". The village has literary cachet too – Wordsworth went to school here, and Beatrix Potter’s husband, William Heelis, worked here as a solicitor for" • Token: "rix" • Score: 14.658 29. Highest-activating token #29: • Excerpt from prompt: " are William MacTaggart’s Impressionistic Scottish landscapes and a gem by Thomas Millie Dow. There’s also a special collection of James McNeill Whistler’s lim" • Token: "ie" • Score: 14.626 30. Highest-activating token #30: • Excerpt from prompt: " Stay House Fell Rosa Hotel ## Yorkshire Highlights" • Token: "aina" • Score: 14.578 The thirty lowest-activating tokens, along with the prompts from which they came, and their scores, are given below: 1. Lowest-activating token #1: • Excerpt from prompt: " recounted the sighting of a disturbance in the loch by Mrs Aldie Mackay and her husband: ’There the creature disported itself, rolling and plunging for fully a minute" • Token: " husband" • Score: -12.129 2. Lowest-activating token #2: • Excerpt from prompt: " paid time off work during menstruation • (often from male workers, who viewed the employment of women as competition) women should not be employed in" • Token: " male" • Score: -11.344 3. Lowest-activating token #3: • Excerpt from prompt: " family was devastated, but things quickly got worse. Emily fell ill with tuberculosis soon after her brother’s funeral; she never left the house again, and died on 19 December. 
Anne" • Token: " brother" • Score: -11.146 4. Lowest-activating token #4: • Excerpt from prompt: " handsome Jacobean town house belonging to Shakespeare’s daughter Susanna and her husband, respected doctor John Hall, stands south of the centre. The exhibition offers fascinating insights" • Token: " husband" • Score: -11.016 5. Lowest-activating token #5: • Excerpt from prompt: " hall was home to the 16th-century’s second-most powerful woman, Elizabeth, Countess of Shrewsbury – known to all as Bess of Hardwick –" • Token: " Count" • Score: -10.793 6. Lowest-activating token #6: • Excerpt from prompt: " haunted places, with spectres from a phantom funeral to Lady Mary Berkeley seeking her errant husband. Owner Sir Humphrey Wakefield has passionately restored the castle’s extravagant medieval stater" • Token: " husband" • Score: -10.682 7. Lowest-activating token #7: • Excerpt from prompt: " Windsor Castle in 1861, Queen Victoria ordered its elaborate redecoration as a tribute to her husband. A major feature of the restoration is the magnificent vaulted roof, whose gold mosaic" • Token: " husband" • Score: -10.577 8. Lowest-activating token #8: • Excerpt from prompt: "Ornate Plas Newydd was home to Lady Eleanor Butler and Miss Sarah Ponsonby, two society ladies who ran away from Ireland to Wales disguised as men, and" • Token: "onson" • Score: -10.503 9. Lowest-activating token #9: • Excerpt from prompt: " with DVD players, with tremendous views across the bay from the largest two. Bridget and Derek really give this place a ’home away from home’ ambience, and can arrange" • Token: " Derek" • Score: -10.483 10. Lowest-activating token #10: • Excerpt from prompt: " of adultery, debauchery, crime and edgy romance, and is filled with Chaucer’s witty observations about human nature. ’s past" • Token: "cer" • Score: -10.296 11. Lowest-activating token #11: • Excerpt from prompt: " the city in 1645. 
Legend has it that the disease-ridden inhabitants of **Mary King’s Close** (a lane on the northern side of the Royal Mile on the site" • Token: " King" • Score: -10.294 12. Lowest-activating token #12: • Excerpt from prompt: " manor was founded in 1552 by the formidable Bess of Hardwick and her second husband, William Cavendish, who earned grace and favour by helping Henry VIII dissolve the English" • Token: " husband" • Score: -10.251 13. Lowest-activating token #13: • Excerpt from prompt: " Apartments** is the bedchamber where Mary, Queen of Scots gave birth to her son James VI, who was to unite the crowns of Scotland and England in 1603" • Token: " son" • Score: -10.148 14. Lowest-activating token #14: • Excerpt from prompt: "s at the behest of Queen Victoria, the monarch grieved here for many years after her husband’s death. Extravagant rooms include the opulent Royal Apartments and Dur" • Token: " husband" • Score: -10.112 15. Lowest-activating token #15: • Excerpt from prompt: "am-5pm Mar-Oct) ambitious three-dimensional interpretation of Chaucer’s classic tales using jerky animatronics and audioguides is certainly entertaining" • Token: "cer" • Score: -10.053 16. Lowest-activating token #16: • Excerpt from prompt: " his death, in the hard-to-decipher Middle English of the day, Chaucer’s _Tales_ is an unfinished series of 24 vivid stories told by a party" • Token: "cer" • Score: -10.050 17. Lowest-activating token #17: • Excerpt from prompt: " especially in **Poets’ Corner**, where you’l find the resting places of Chaucer, Dickens, Hardy, Tennyson, Dr Johnson and Kipling, as well as" • Token: "cer" • Score: -10.033 18. Lowest-activating token #18: • Excerpt from prompt: " her? She’s up here saying his intent was this. ¶ 35 Trujillo objected on the basis" • Token: " his" • Score: -10.031 19. 
Lowest-activating token #19: • Excerpt from prompt: " lived here happily with his sister Dorothy, wife Mary and three children John, Dora and Thomas until 1808, when the family moved to another nearby house at Allen Bank, and" • Token: " Thomas" • Score: -9.934 20. Lowest-activating token #20: • Excerpt from prompt: " home of Queen Isabella, who (allegedly) arranged the gruesome murder of her husband, Edward I. Hall" • Token: " husband" • Score: -9.932 21. Lowest-activating token #21: • Excerpt from prompt: " Saturday, four on Sunday). Victoria bought Sandringham in 1862 for her son, the Prince of Wales (later Edward VII), and the features and furnishings remain" • Token: " son" • Score: -9.883 22. Lowest-activating token #22: • Excerpt from prompt: " the palace, which contains Mary’s Bed Chamber, connected by a secret stairway to her husband’s bedroom, and ends with the ruins of Holyrood Abbey. " • Token: " husband" • Score: -9.824 23. Lowest-activating token #23: • Excerpt from prompt: " holidays. two-hour tour includes the **Throne Room**, with his-and-hers pink chairs initialed ’ER’ and ’P’. Access is" • Token: " his" • Score: -9.717 24. Lowest-activating token #24: • Excerpt from prompt: " is packed with all manner of Highland memorabilia. Look out for the secret portrait of Bonnie Prince Charlie – after the Jacobite rebellions all things Highland were banned, including pictures of" • Token: " Prince" • Score: -9.691 25. Lowest-activating token #25: • Excerpt from prompt: " the last college to let women study there; when they were finally admitted in 1988, some male students wore black armbands and flew the college flag at half mast. " • Token: " male" • Score: -9.652 26. Lowest-activating token #26: • Excerpt from prompt: "oh, Michael Bond’s Paddington Bear, Beatrix Potter’s Peter Rabbit, Roald Dahl’s Willy Wonka and JK Rowling’s Harry Potter are perennially popular" • Token: "ald" • Score: -9.613 27. 
Lowest-activating token #27: • Excerpt from prompt: " one of the rooms. In 2003 the close was opened to the public as the Real Mary King’s Close. ### SCOTTISH PARLIAMENT BUILDING " • Token: " King" • Score: -9.405 28. Lowest-activating token #28: • Excerpt from prompt: ", Mary, Dorothy and all three children. Samuel Taylor Coleridge’s son Hartley is also buried here. " • Token: " Samuel" • Score: -9.372 29. Lowest-activating token #29: • Excerpt from prompt: ", the town became northern Europe’s most important pilgrimage destination, which in turn prompted Geoffrey Chaucer’s _The Canterbury Tales,_ one of the most outstanding works in English literature." • Token: "cer" • Score: -9.351 30. Lowest-activating token #30: • Excerpt from prompt: " Queen Isabella, who (allegedly) arranged the gruesome murder of her husband, Edward I. Hall" • Token: " Edward" • Score: -9.272

I.2 n_subj Feature Vector

The thirty highest-activating tokens, along with the prompts from which they came, and their scores, are given below: 1. Highest-activating token #1: • Excerpt from prompt: "son Tower** and in front of it a beautiful statue of St Edmund by Dame Elisabeth Frink (1976). The rest of the abbey spreads eastward like a r" • Token: "abeth" • Score: 18.372 2. Highest-activating token #2: • Excerpt from prompt: " a gorgeous hammerbeam roof and a striking sculpture of the crucified Christ by Dame Elisabeth Frink in the north transept. impressive entrance porch has a" • Token: "abeth" • Score: 17.388 3. Highest-activating token #3: • Excerpt from prompt: " the elaborate Portuguese silver service or the impressive Egyptian service, a divorce present from Napoleon to Josephine" • Token: "ine" • Score: 16.815 4. Highest-activating token #4: • Excerpt from prompt: " rocky beach of **Priest’s Cove**, while nearby are the ruins of **St Helen’s Oratory**, supposedly one of the first Christian chapels built in West Cornwall" • Token: " Helen" • Score: 16.309 5.
Highest-activating token #5: • Excerpt from prompt: ", and opened in 1892, this brainchild of his Parisian actress wife, Josephine, was built by French architect Jules Pellechet to display a collection the Bow" • Token: "ine" • Score: 16.267 6. Highest-activating token #6: • Excerpt from prompt: " the film _Bridget Jones’s Diary;_ a local house was used as Bridget’s parents’ home. 1Sights TowerTOWER" • Token: "idget" • Score: 16.171 7. Highest-activating token #7: • Excerpt from prompt: ") by his side and a loyal band of followers in support. Arthur went on to slay Rita Gawr, a giant who butchered" • Token: " Rita" • Score: 16.079 8. Highest-activating token #8: • Excerpt from prompt: " for the fact that Sir Robert Walpole’s grandson sold the estate’s splendid art collection to Catherine the Great of Russia to stave off debts – those paintings formed the foundation of the" • Token: " Catherine" • Score: 16.039 9. Highest-activating token #9: • Excerpt from prompt: " Highlights include the magnificent gold coach of 1762 and the 1910 Glass Coach (Prince William and Catherine Middleton actually used the 1902 State Landau for their wedding in 2011). " • Token: " Catherine" • Score: 15.967 10. Highest-activating token #10: • Excerpt from prompt: " by Canaletto, El Greco and Goya as well as 55 paintings by Josephine herself. Among the 15,000 other objets d’art are incredible dresses from" • Token: "ine" • Score: 15.906 11. Highest-activating token #11: • Excerpt from prompt: " looks like something from a children’s storybook (a fact not unnoticed by the author Antonia Barber, who set her much-loved fairy-tale _The Mousehole Cat" • Token: "ia" • Score: 15.582 12. Highest-activating token #12: • Excerpt from prompt: ". Precious little now remains save for a few nave walls, the ruined **St Mary’s chapel**, and the crossing arches, which may" • Token: " Mary" • Score: 15.443 13. Highest-activating token #13: • Excerpt from prompt: ". 
northern terminus of the Welsh Highland Railway is on St Helen’s Rd. Trains run to Porthmadog (£35 return, 2½" • Token: " Helen" • Score: 15.374 14. Highest-activating token #14: • Excerpt from prompt: "2 ### KING RICHARD I ’s an amazing story. Philippa Langley, a member of the Richard I Society, spent four-and-a" • Token: "a" • Score: 15.358 15. Highest-activating token #15: • Excerpt from prompt: " pit (which can still be seen) from the granary above. In 1566, Mary, Queen of Scots famously visited the wounded tenant of the castle, Lord Bothwell," • Token: " Mary" • Score: 15.312 16. Highest-activating token #16: • Excerpt from prompt: " Richard I, Henry VIII and Charles I. It is most famous as the home of Catherine Parr (Henry VIII’s widow) and her second husband, Thomas Seymour. Princess" • Token: " Catherine" • Score: 15.275 17. Highest-activating token #17: • Excerpt from prompt: " Peninsula #### Bodmin Moor #### Isles of Scilly #### St Mary’s #### Tresco #### Bryher #### St Martin" • Token: " Mary" • Score: 15.246 18. Highest-activating token #18: • Excerpt from prompt: "’. the cathedral’s eastern end is the grave of the WWI heroine Edith" • Token: "ith" • Score: 15.182 19. Highest-activating token #19: • Excerpt from prompt: " many people visit for the region’s literary connections; William Wordsworth, Beatrix Potter, Arthur Ransome and John Ruskin all found inspiration here. " • Token: "rix" • Score: 15.135 20. Highest-activating token #20: • Excerpt from prompt: " Peninsula #### Bodmin Moor #### Isles of Scilly #### St Mary’s #### Tresco #### Bryher #### St Martin" • Token: " Mary" • Score: 15.111 21. Highest-activating token #21: • Excerpt from prompt: " _Mayor of Casterbridge_ locations hidden among modern Dorchester. They include **Lucetta’s House**, a grand Georgian affair with ornate door posts in Trinity St," • Token: "etta" • Score: 15.053 22. 
22. Highest-activating token #22: • Excerpt from prompt: " leads down to this little cove and the remains of the small Tudor fort of **St Catherine’s Castle**. BeachBEACH ( G" • Token: " Catherine" • Score: 15.051
23. Highest-activating token #23: • Excerpt from prompt: "-century **St Catherine’s Lighthouse** and its 14th-century counterpart, **St Catherine’s Or" • Token: " Catherine" • Score: 14.979
24. Highest-activating token #24: • Excerpt from prompt: " ) ; Castle Yard) stands behind a 15th-century gate near the church of St Mary de Castro ( MAP GOOGLE MAP ) ; Castle St)," • Token: " Mary" • Score: 14.968
25. Highest-activating token #25: • Excerpt from prompt: " the Glasgow School of Art. It was there that he met the also influential artist and designer Margaret Macdonald, whom he married; they collaborated on many projects and were major influences on" • Token: " Margaret" • Score: 14.930
26. Highest-activating token #26: • Excerpt from prompt: " Nov-Mar ) raising of the 16th-century warship the _Mary Rose_ in 1982 was an extraordinary feat of marine archaeology. Now the new £" • Token: "Mary" • Score: 14.699
27. Highest-activating token #27: • Excerpt from prompt: " was claimed by the Boleyn family and passed through the generations to Thomas, father of Anne Boleyn. Anne was executed by her husband Henry VIII in 1533, who" • Token: " Anne" • Score: 14.686
28. Highest-activating token #28: • Excerpt from prompt: ". The village has literary cachet too – Wordsworth went to school here, and Beatrix Potter’s husband, William Heelis, worked here as a solicitor for" • Token: "rix" • Score: 14.658
29. Highest-activating token #29: • Excerpt from prompt: " are William MacTaggart’s Impressionistic Scottish landscapes and a gem by Thomas Millie Dow. There’s also a special collection of James McNeill Whistler’s lim" • Token: "ie" • Score: 14.626
30. Highest-activating token #30: • Excerpt from prompt: " Stay House Fell Rosa Hotel ## Yorkshire Highlights" • Token: "aina" • Score: 14.578

The thirty lowest-activating tokens, along with the prompts from which they came, and their scores, are given below:

1. Lowest-activating token #1: • Excerpt from prompt: " family was devastated, but things quickly got worse. Emily fell ill with tuberculosis soon after her brother’s funeral; she never left the house again, and died on 19 December. Anne" • Token: " brother" • Score: -11.732
2. Lowest-activating token #2: • Excerpt from prompt: " recounted the sighting of a disturbance in the loch by Mrs Aldie Mackay and her husband: ’There the creature disported itself, rolling and plunging for fully a minute" • Token: " husband" • Score: -11.608
3. Lowest-activating token #3: • Excerpt from prompt: " paid time off work during menstruation • (often from male workers, who viewed the employment of women as competition) women should not be employed in" • Token: " male" • Score: -11.324
4. Lowest-activating token #4: • Excerpt from prompt: "Ornate Plas Newydd was home to Lady Eleanor Butler and Miss Sarah Ponsonby, two society ladies who ran away from Ireland to Wales disguised as men, and" • Token: "onson" • Score: -11.228
5. Lowest-activating token #5: • Excerpt from prompt: " of adultery, debauchery, crime and edgy romance, and is filled with Chaucer’s witty observations about human nature. ’s past" • Token: "cer" • Score: -11.007
6. Lowest-activating token #6: • Excerpt from prompt: " Apartments** is the bedchamber where Mary, Queen of Scots gave birth to her son James VI, who was to unite the crowns of Scotland and England in 1603" • Token: " son" • Score: -10.971
7. Lowest-activating token #7: • Excerpt from prompt: " handsome Jacobean town house belonging to Shakespeare’s daughter Susanna and her husband, respected doctor John Hall, stands south of the centre. The exhibition offers fascinating insights" • Token: " husband" • Score: -10.884
8. Lowest-activating token #8: • Excerpt from prompt: " his death, in the hard-to-decipher Middle English of the day, Chaucer’s _Tales_ is an unfinished series of 24 vivid stories told by a party" • Token: "cer" • Score: -10.854
9. Lowest-activating token #9: • Excerpt from prompt: " especially in **Poets’ Corner**, where you’l find the resting places of Chaucer, Dickens, Hardy, Tennyson, Dr Johnson and Kipling, as well as" • Token: "cer" • Score: -10.794
10. Lowest-activating token #10: • Excerpt from prompt: "am-5pm Mar-Oct) ambitious three-dimensional interpretation of Chaucer’s classic tales using jerky animatronics and audioguides is certainly entertaining" • Token: "cer" • Score: -10.793
11. Lowest-activating token #11: • Excerpt from prompt: " haunted places, with spectres from a phantom funeral to Lady Mary Berkeley seeking her errant husband. Owner Sir Humphrey Wakefield has passionately restored the castle’s extravagant medieval stater" • Token: " husband" • Score: -10.696
12. Lowest-activating token #12: • Excerpt from prompt: " Windsor Castle in 1861, Queen Victoria ordered its elaborate redecoration as a tribute to her husband. A major feature of the restoration is the magnificent vaulted roof, whose gold mosaic" • Token: " husband" • Score: -10.673
13. Lowest-activating token #13: • Excerpt from prompt: " hall was home to the 16th-century’s second-most powerful woman, Elizabeth, Countess of Shrewsbury – known to all as Bess of Hardwick –" • Token: " Count" • Score: -10.617
14. Lowest-activating token #14: • Excerpt from prompt: " Saturday, four on Sunday). Victoria bought Sandringham in 1862 for her son, the Prince of Wales (later Edward VII), and the features and furnishings remain" • Token: " son" • Score: -10.556
15. Lowest-activating token #15: • Excerpt from prompt: " is packed with all manner of Highland memorabilia. Look out for the secret portrait of Bonnie Prince Charlie – after the Jacobite rebellions all things Highland were banned, including pictures of" • Token: " Prince" • Score: -10.424
16. Lowest-activating token #16: • Excerpt from prompt: " beautiful, time-worn rooms hold fascinating relics, including the cradle used by Mary for her son, James VI of Scotland (who also became James I of England), and fascinating letters" • Token: " son" • Score: -10.266
17. Lowest-activating token #17: • Excerpt from prompt: ", the town became northern Europe’s most important pilgrimage destination, which in turn prompted Geoffrey Chaucer’s _The Canterbury Tales,_ one of the most outstanding works in English literature." • Token: "cer" • Score: -10.250
18. Lowest-activating token #18: • Excerpt from prompt: " the city in 1645. Legend has it that the disease-ridden inhabitants of **Mary King’s Close** (a lane on the northern side of the Royal Mile on the site" • Token: " King" • Score: -10.177
19. Lowest-activating token #19: • Excerpt from prompt: " with DVD players, with tremendous views across the bay from the largest two. Bridget and Derek really give this place a ’home away from home’ ambience, and can arrange" • Token: " Derek" • Score: -10.124
20. Lowest-activating token #20: • Excerpt from prompt: " her? She’s up here saying his intent was this. ¶ 35 Trujillo objected on the basis" • Token: " his" • Score: -10.113
21. Lowest-activating token #21: • Excerpt from prompt: " the last college to let women study there; when they were finally admitted in 1988, some male students wore black armbands and flew the college flag at half mast. " • Token: " male" • Score: -10.058
22. Lowest-activating token #22: • Excerpt from prompt: "s at the behest of Queen Victoria, the monarch grieved here for many years after her husband’s death. Extravagant rooms include the opulent Royal Apartments and Dur" • Token: " husband" • Score: -10.018
23. Lowest-activating token #23: • Excerpt from prompt: " home of Queen Isabella, who (allegedly) arranged the gruesome murder of her husband, Edward I. Hall" • Token: " husband" • Score: -9.989
24. Lowest-activating token #24: • Excerpt from prompt: ", Van Dyck, Vermeer, El Greco, Poussin, Rembrandt, Gainsborough, Turner, Constable, Monet, Pissarro," • Token: "brand" • Score: -9.937
25. Lowest-activating token #25: • Excerpt from prompt: " 24 vivid stories told by a party of pilgrims journeying between London and Canterbury. Chaucer successfully created the illusion that the pilgrims, not Chaucer (though he appears in the" • Token: "cer" • Score: -9.909
26. Lowest-activating token #26: • Excerpt from prompt: " the palace, which contains Mary’s Bed Chamber, connected by a secret stairway to her husband’s bedroom, and ends with the ruins of Holyrood Abbey. " • Token: " husband" • Score: -9.862
27. Lowest-activating token #27: • Excerpt from prompt: " lived here happily with his sister Dorothy, wife Mary and three children John, Dora and Thomas until 1808, when the family moved to another nearby house at Allen Bank, and" • Token: " Thomas" • Score: -9.842
28. Lowest-activating token #28: • Excerpt from prompt: " 19 prime ministers, countless princes, kings and maharajahs, famous explorers, authors and" • Token: " prime" • Score: -9.733
29. Lowest-activating token #29: • Excerpt from prompt: " held court in the Palace of Holyroodhouse for six brief years, but when her son James VI succeeded to the English throne in 1603, he moved his court to London" • Token: " son" • Score: -9.711
30. Lowest-activating token #30: • Excerpt from prompt: ", Mary, Dorothy and all three children. Samuel Taylor Coleridge’s son Hartley is also buried here. " • Token: " Samuel" • Score: -9.654