Paper deep dive
Superscopes: Amplifying Internal Feature Representations for Language Model Interpretation
Jonathan Jacobi, Gal Niv
Intelligence
Status: succeeded | Model: google/gemini-3.1-flash-lite-preview | Prompt: intel-v1 | Confidence: 95%
Last extracted: 3/12/2026, 5:36:26 PM
Summary
Superscopes is a novel mechanistic interpretability technique that amplifies weak, superposed features in LLM internal representations (MLP outputs and hidden states) to enable human-readable self-interpretation. By applying a technique analogous to Classifier-Free Guidance (CFG) from diffusion models, Superscopes allows researchers to extract meaningful explanations from internal activations that previously yielded nonsensical results, without requiring additional training.
Entities (5)
Relation Signals (3)
Superscopes → extends → Patchscopes
confidence 95% · Superscopes, a method that extends existing self-interpretation techniques
Superscopes → interprets → MLP
confidence 95% · enabling the interpretation of internal representations that previous methods failed to explain
Superscopes → inspiredby → Classifier-Free Guidance
confidence 90% · Inspired by the 'features as directions' perspective and the Classifier-Free Guidance (CFG) approach
Cypher Suggestions (2)
Identify components interpreted by Superscopes · confidence 90% · unvalidated
MATCH (s:Method {name: 'Superscopes'})-[:INTERPRETS]->(c:Component) RETURN c
Find all methods related to mechanistic interpretability · confidence 85% · unvalidated
MATCH (m:Method)-[:USED_IN]->(f:Field {name: 'Mechanistic Interpretability'}) RETURN m
Abstract
Abstract: Understanding and interpreting the internal representations of large language models (LLMs) remains an open challenge. Patchscopes introduced a method for probing internal activations by patching them into new prompts, prompting models to self-explain their hidden representations. We introduce Superscopes, a technique that systematically amplifies superposed features in MLP (multilayer perceptron) outputs and hidden states before patching them into new contexts. Inspired by the "features as directions" perspective and the Classifier-Free Guidance (CFG) approach from diffusion models, Superscopes amplifies weak but meaningful features, enabling the interpretation of internal representations that previous methods failed to explain—all without requiring additional training. This approach provides new insights into how LLMs build context and represent complex concepts, further advancing mechanistic interpretability.
Tags
Links
Full Text
59,795 characters extracted from source content.
Superscopes: Amplifying Internal Feature Representations for Language Model Interpretation
Jonathan Jacobi (Independent Researcher) and Gal Niv (Independent Researcher)
Abstract Understanding and interpreting the internal representations of large language models (LLMs) remains an open challenge. Patchscopes (Ghandeharioun et al. (2024)) introduced a method for probing internal activations by patching them into new prompts, prompting models to self-explain their hidden representations. We introduce Superscopes, a technique that systematically amplifies superposed features in MLP (multilayer perceptron) outputs and hidden states before patching them into new contexts. Inspired by the “features as directions” perspective (Elhage et al. (2022)) and the Classifier-Free Guidance (CFG) approach from diffusion models, Superscopes amplifies weak but meaningful features, enabling the interpretation of internal representations that previous methods failed to explain—all without requiring additional training. This approach provides new insights into how LLMs build context and represent complex concepts, further advancing mechanistic interpretability.
Figure 1: Using Superscopes to interpret a hidden state of the token "Wales" in an advanced layer. Step 1: Using Patchscopes (magnifying glass) to extract the residual stream’s meaning before applying the MLP, a sensible explanation is extracted. Step 2: Applying Patchscopes to inspect the meaning of the MLP output yields a nonsensical explanation ("Tesla, Car Company"). Step 3: Amplifying the MLP output ("Superscoping") and inspecting it using Patchscopes yields "Royalty Title of a Woman" - a logical explanation given the examined prompt. Step 4: The transformer adds the MLP output to the residual stream. Step 5: Using Patchscopes to interpret the new residual stream reveals that the resulting hidden state encodes the concept "British Princess".
This demonstrates how: "Wales" + "Royalty Title of a Woman" = "British Princess", thereby clarifying the model’s inner contextualization and thinking process. 1 Introduction Over the past few years, large language models (LLMs) have been researched and advanced significantly, leading to technological advancements that previously seemed impossible. However, due to the black-box nature of these models, much effort has been focused (Casper et al. (2022); Madsen et al. (2022); Patel & Pavlick (2021); Nanda et al. (2023)) on understanding the inner workings of the models and achieving greater clarity on their internal decision-making process. Interpretation of a model’s inner workings has proved critical for various reasons, such as AI alignment and safety (Bereska et al. (2024)) and understanding the underlying logic behind a model’s reasoning (Arkoudas et al. (2023)). Many different interpretability methods have been developed over the past few years, such as training-based methods that train linear classifiers (probes) on internal hidden states (Belinkov & Glass (2019); Belinkov (2022)), projecting hidden states to the vocabulary space (nostalgebraist (2020); Belrose et al. (2023)), and intervening mid-computation to identify hidden states with significant effect on predictions (Meng et al. (2022a); Wallat et al. (2020); Wang et al. (2022); Geva et al. (2023)). Although these methods showed great progress, research continued toward a novel idea: models self-interpreting their internal representations. The core concept behind self-interpretation is that different parts of a model can be fed back to the model as input in ways that allow translating those components into natural language. A key piece of research in the field is Patchscopes (Ghandeharioun et al.
(2024)), introducing a framework for patching hidden states from different layers of a source prompt into a target prompt, enabling the extraction of meaning from the patched-in hidden states. Another significant work is Speaking Probes (Dar (2023)), which introduced the idea of patching and interpreting the model’s weights, specifically the feed-forward keys, revealing the meaning behind certain parameters of the model. An additional approach is SelfIE (Chen et al. (2023)), which introduced ways to leverage language-based interpretability methods in various applications. Building on the concepts behind Patchscopes, Speaking Probes, and SelfIE (we call these "self-interpreting methods"), it is natural to apply these techniques to the outputs of the MLP and Attention modules, as they are added to the residual stream and constitute the majority of the hidden state’s value. Existing self-interpretation techniques, such as Patchscopes, often yield good results in interpreting hidden states. However, further research reveals that they sometimes fail. For instance, applying these methods to the MLP and Attention outputs almost never produces valid explanations, and in some cases they also fail to interpret ordinary hidden states. In this work, we introduce Superscopes, a method that extends existing self-interpretation techniques and enables the extraction of meaning from various internal representations, such as MLP outputs and hidden states, where previous methods have failed. Our proposed method treats internal representations as instances of features as directions (Elhage et al. (2022)). We suggest that in many cases where existing self-interpretation methods fail, the underlying reason is that the vectors we aim to interpret contain features that are too weak, meaning the model does not consider them significant enough—resulting in incorrect self-interpretation.
Superscopes addresses this by amplifying internal feature representations, significantly improving self-interpretation methods and enabling the model to successfully explain even seemingly weak features. Figure 1 shows Superscopes’s interpretation process of an MLP output. Moreover, we suggest that our method of amplification closely resembles Classifier-Free Guidance (CFG) (Ho et al. (2022)), a widely used technique in diffusion models. We conduct a series of experiments (§4) to evaluate the benefits and effectiveness of Superscopes compared to prior methods, demonstrating its success across various prompts, layers, and amplifiers. Leveraging the new capabilities introduced by Superscopes, we developed a highly flexible framework (§5) for interpreting different types of internal representations—including MLP outputs, the post-attention-pre-MLP residual stream, and hidden states. The Superscopes framework introduces several key features that enhance interpretability by providing greater flexibility and control over internal representation inspection in language models. One of the core capabilities of the framework is the automatic identification of the ideal amplifier. This feature scans multiple amplification configurations and selects the most effective one for revealing meaningful features within the model’s internal activations. By automating this process, researchers can focus on interpretation without the need for manual tuning, ensuring optimal results with minimal effort. In addition to automatic selection, the framework allows for manual inspection of interpretations with different amplifiers. Researchers can explore how different amplification strengths and configurations affect interpretation results, providing deeper insights into the influence of specific activations. This feature is particularly useful for understanding how sensitive a model is to various internal signals and for validating findings across different amplifier settings.
Another powerful aspect of the framework is the ability to patch activations into different layers of the target prompt. Similarly to Patchscopes, our layer-patching mechanism enables researchers to test how amplifying specific representations affects downstream computations. By injecting amplified signals into different layers, researchers can investigate how information flows through the model and how different levels of amplification contribute to final predictions. Finally, the framework includes a flexible selection mechanism for choosing specific tokens and layers to interpret. Researchers can dynamically configure which layer’s outputs they wish to analyze and which token representations to focus on. This fine-grained control allows for targeted investigations into specific behaviors, making it easier to study phenomena such as token interactions, attention dynamics, and the role of hidden state transformations at different processing stages. Together, these features make Superscopes a powerful tool for mechanistic interpretability, providing both automation and hands-on control to uncover hidden structures in large language models. To conclude, Superscopes’s perspective, amplification techniques, and observations open up opportunities for further novel inspection techniques. 2 Related Work 2.1 Transformer Interpretability Transformer interpretability has become an increasingly prominent research focus as large language models continue to grow in complexity. Early efforts centered on analyzing attention patterns, but recent methods study deeper mechanisms within hidden states to illuminate how transformers process and store information (Elhage et al. (2021)). Understanding these internal representations is crucial for explaining model predictions and mitigating undesirable behaviors.
Additional techniques included training linear classifiers, called probes, on top of hidden representations (Alain & Bengio (2017); Belinkov & Glass (2019); Belinkov (2022)). Other approaches intervened mid-computation in order to identify whether a representation is critical for certain predictions (Meng et al. (2022a); Wallat et al. (2020); Wang et al. (2022); Conmy et al. (2023); Geva et al. (2023)). 2.2 Vocabulary Space Projection A notable direction in interpretability is vocabulary space projection, where hidden layers are mapped back to the token distribution. Logit Lens (nostalgebraist (2020)) popularized this idea by inspecting intermediate logits, while Tuned Lens (Belrose et al. (2023)) refined it with learned transformations to make these projections more accurate. Variants of such approaches have shown how tracking evolving token-level distributions across layers provides insights into the gradual construction of meaning. 2.3 Activation Patching and Self-Interpretation Methods Another line of work explores activation patching, where researchers modify or swap hidden states to examine the causal role of specific activations. Patchscopes (Ghandeharioun et al. (2024)) introduced this by allowing the model to express patched-in hidden representations in human-like natural language, rather than restricting them to probability distributions, proving to be an effective method for interpretation. SelfIE (Chen et al. (2023)) further develops this approach, demonstrating its applicability across various tasks. Speaking Probes (Dar (2023)) takes a related approach by patching model parameters—specifically, the MLP keys (sub-components of the first matrix in the MLP component)—and interpreting them in natural language. Patchscopes introduced the mapping function f, which is applied to hidden states before patching them into the target prompt.
As mentioned earlier, Superscopes can be used to interpret hidden states, serving as Patchscopes’ mapping function in this context. Unlike existing mapping functions, Superscopes follows the “features as directions” approach, amplifying features rather than merely projecting them. Beyond hidden states, Superscopes also effectively interprets MLP outputs—a use case that Patchscopes and other self-interpreting methods were not originally designed for and rarely succeed in, despite its resemblance to their intended functionality. 2.4 Features as Directions and The Superposition Hypothesis The features-as-directions perspective has gained increasing attention in the field of Mechanistic Interpretability, proposing that concepts within models correspond to directions in high-dimensional embeddings (Elhage et al. (2022)). When multiple conceptual directions combine, they form "superposed" representations, complicating simple linear interpretations. The Superposition Hypothesis suggests that neural networks “want to represent more features than they have neurons”, so they exploit a property of high-dimensional spaces to simulate a model with many more neurons ("Superposition"; Arora et al. (2018); Olah et al. (2020)) - a behavior that differentiates it from classic PCA (Principal Component Analysis) approaches that rely on orthogonality. Understanding how these directional features combine and interfere is essential for disentangling how models encode and blend various attributes in a single activation vector. Additional significant research from Anthropic has explored using Sparse Autoencoders to interpret models, particularly targeting MLP outputs, likewise viewing activation vectors as a decomposition into features (Bricken et al. (2023)). While Sparse Autoencoders yield strong results, they require extensive additional training—something Superscopes does not. 2.5 Classifier-Free Guidance In diffusion models, Classifier-Free Guidance (CFG) (Ho et al.
(2022)) enhances the alignment of generated outputs with a given condition (such as a text prompt). It is widely used in diffusion-based models (Chen et al. (2024)). CFG operates by amplifying the direction that represents the condition, where the difference between conditional and unconditional predictions defines this direction in the model’s latent space. By amplifying this directional shift, the model is effectively steered toward outputs that better align with the given condition. Reapplying this shift further reinforces alignment, emphasizing the semantic meaning embedded in the conditional guidance. This technique is particularly useful in text-to-image generation models like Stable Diffusion and GLIDE, as it improves adherence to prompts without requiring an external classifier. 2.6 Diffusion Models Interpretability Diffusion models have also attracted significant interest from interpretability researchers. In recent years, various approaches have been proposed to analyze the latent space of diffusion models (Haas et al. (2024)). Further work has focused on interpreting latent directions within diffusion models and leveraging these interpretations for diverse applications, such as alignment and safety (Li et al. (2024)), and utilizing diffusion models in order to decode textual concepts to their main components (Chefer et al. (2023)). 3 Superscopes In this section, we introduce Superscopes and demonstrate how it amplifies feature meanings in internal representations, enhancing interpretability. Additionally, we show how Superscopes parallels techniques from diffusion models and can be viewed as their counterpart in mechanistic interpretability. 3.1 Recap: Patchscopes The core idea behind Patchscopes is to leverage the human-like generation capabilities of LLMs to "translate" internal representations into human-readable text.
More specifically, Patchscopes extracts a hidden representation from one forward pass and patches it into a different inference pass using a prompt that encourages the model to interpret the patched-in representation. While Patchscopes supports cross-model patching, in Superscopes we focus on same-model patching—where the hidden state is injected into a forward pass of the same model but with a different input. Formally, given an input sequence of n tokens S = ⟨s_1, …, s_n⟩ and a model M with L layers, h_i^ℓ denotes the hidden representation obtained at layer ℓ ∈ [1, …, L] and position i ∈ [1, …, n] when running M on S. In order to extract the meaning of h_i^ℓ, we consider a separate inference pass of the model M on a target sequence T = ⟨t_1, …, t_m⟩ of m tokens. (Patchscopes allows using a different model as the target model; recall that we focus on same-model patching.) The target sequence is designed to encourage the model to "translate" the token at position i* ∈ [1, …, m]. A simple example of a target prompt is: "The meaning of X is:", where i* corresponds to the token representing "X" in the sentence. To perform interpretation, Patchscopes selects a hidden state h̄_{i*}^{ℓ*} at layer ℓ* ∈ [1, …, L] during the execution of M on T. During inference, it dynamically replaces h̄_{i*}^{ℓ*} with h_i^ℓ.
As M continues generating tokens on T, the structure of our target prompt encourages it to produce human-readable text that translates the hidden state h_i^ℓ into words. In addition, Patchscopes introduces a mapping function f(h; θ): ℝ^d → ℝ^d, parameterized by θ, that operates on hidden representations of M and can be applied to h_i^ℓ before patching it into the target prompt. Patchscopes suggests that this function can be the identity function, a linear or affine transformation learned from task-specific representation pairs, or a more complex function incorporating additional data sources. 3.2 Challenges of Self-Interpretation Methods for MLP Outputs vs. Hidden States A natural approach to take, following Patchscopes and other self-interpreting techniques, could be to apply the techniques to other types of internal representations, namely the Attention and MLP outputs. In this work we focus on MLP outputs. Formally, we denote mlp_i^ℓ as the MLP output before it is added to the residual stream, where ℓ ∈ [1, …, L] indicates the layer from which this output is taken, and i ∈ [1, …, n] represents the position of the token for which this MLP output is generated in the source prompt. Furthermore, we denote h_PreMLP_i^ℓ = h_i^ℓ − mlp_i^ℓ as the value of the residual stream before the addition of the MLP output. The naive approach of applying Patchscopes to MLP outputs turns out to be ineffective, as the interpretation almost never appears to be meaningfully related.
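The patching step described above can be sketched in a few lines. This is a minimal toy sketch, not the Patchscopes codebase: the function names (`patch_hidden_state`, `f_identity`) and the 2-dimensional vectors are illustrative assumptions, and a real implementation would hook into a transformer's forward pass.

```python
# Minimal sketch of the Patchscopes patching step on a toy "model" whose
# layer activations are plain lists of vectors.

def f_identity(h):
    """Mapping function f(h; θ): here the identity, one option Patchscopes allows."""
    return h

def patch_hidden_state(target_hiddens, i_star, source_hidden, f=f_identity):
    """Replace the target-prompt hidden state at position i_star (e.g. the
    placeholder token "X") with the mapped source hidden state h_i^ℓ."""
    patched = list(target_hiddens)       # copy: leave the original run intact
    patched[i_star] = f(source_hidden)   # dynamic replacement during inference
    return patched

# Toy usage: 5-token target prompt "The meaning of X is:", with "X" at position 3.
target = [[0.1, 0.2], [0.0, 0.5], [0.3, 0.3], [9.9, 9.9], [0.2, 0.1]]
h_source = [1.0, -2.0]                   # hidden state extracted from the source pass
patched = patch_hidden_state(target, 3, h_source)
```

After patching, the model would continue generating from the modified activations, producing text that "translates" the injected representation.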
Selecting the appropriate token as the target for MLP-output interpretation follows existing approaches (Meng et al. (2022a); Ghandeharioun et al. (2024)) for analyzing entity resolution. These studies suggest that the model constructs a subject representation at the final token of the entity name. Examining the residual stream before the addition of MLP outputs (h_PreMLP_i^ℓ) at the final token of the entity name reveals an interesting phenomenon: applying Patchscopes to h_PreMLP_i^ℓ yields a meaningful explanation. Similarly, applying Patchscopes to the hidden state of the same layer, h_i^ℓ, also produces a meaningful explanation—one that is much more contextualized than the pre-MLP residual stream. However, applying Patchscopes to the MLP output, mlp_i^ℓ, results in a meaningless interpretation—a rather odd outcome, given that we know this MLP output has altered h_PreMLP_i^ℓ in a way that changes its meaning. See §4 for further analysis. This phenomenon, where the MLP output clearly influences the hidden state’s meaning yet lacks an interpretable meaning on its own, is noteworthy. In this work, we analyze this behavior in depth, explore its implications, and demonstrate how to interpret the MLP output itself. 3.3 Features as Directions and Superpositions To examine the hypothesis underlying MLP output interpretation through self-interpretation methods, we first recap the concept of “features as directions” and the idea behind “The Superposition Hypothesis” (Elhage et al. (2022); Arora et al. (2018); Olah et al. (2020)). The core concept behind “features as directions” is that features of hidden states are represented as directions in activation space.
Although this is not a trivial claim, research has made significant progress in supporting it. Several major results and approaches reinforce this idea. A particularly notable one is vector arithmetic on word embeddings. Many are already familiar with the famous work of Mikolov et al. (2013), which demonstrates that vector arithmetic applies to word embeddings. Specifically, the following identity holds (see also Levy et al. (2014)): V("King") − V("Man") + V("Woman") = V("Queen"). Some other major works include: similar "vector arithmetic" and interpretable direction results found in generative adversarial networks (Radford et al. (2016)); a significant body of research identifying neurons with interpretable behavior in RNNs (Karpathy et al. (2015)), CNNs (Zhou et al. (2015)), and GANs (Bau et al. (2020)). Additionally, other approaches explore the concept of universality—the idea that analogous neurons responding to the same properties can be found across different networks (Schubert et al. (2021))—alongside many more findings in the field. The concept of Superposition suggests that models encode more features than available dimensions, forcing some concepts and features to overlap or interfere with one another. 3.4 Superscopes: Amplifying Weak Features Given that different internal representations can be viewed as instances of features as directions and superposition theory, our work suggests that the failed self-interpretation of MLP outputs can be explained by the following hypothesis: MLP outputs encode changes that are exceedingly weak, making it impossible for the model to translate them on their own. To address this, we propose amplifying the effect of these features, thereby increasing the significance of each direction, meaning that each feature carries more weight.
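The King − Man + Woman vector-arithmetic identity mentioned above can be reproduced with hand-made toy embeddings (axis 0 encoding "royalty", axis 1 "male"). The vectors below are invented for this sketch; real word2vec embeddings satisfy the identity only approximately, via nearest-neighbor search.

```python
# Toy illustration of V("King") − V("Man") + V("Woman") = V("Queen")
# with invented 2-D embeddings: axis 0 = royalty, axis 1 = male.

V = {
    "King":  [1.0,  1.0],
    "Man":   [0.0,  1.0],
    "Woman": [0.0, -1.0],
    "Queen": [1.0, -1.0],
}

def vec_arith(a, b, c):
    """Compute V(a) − V(b) + V(c) component-wise."""
    return [x - y + z for x, y, z in zip(V[a], V[b], V[c])]

result = vec_arith("King", "Man", "Woman")

# Nearest neighbor by squared Euclidean distance, as done with real embeddings:
nearest = min(V, key=lambda w: sum((r - v) ** 2 for r, v in zip(result, V[w])))
```

With these toy vectors the arithmetic lands exactly on the "Queen" embedding, which is the "features as directions" intuition in its simplest form.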
Formally, we define the amplification of the MLP outputs we aim to interpret as: mlp+_i^ℓ = α · mlp_i^ℓ. Using Patchscopes now involves replacing h̄_{i*}^{ℓ*} with mlp+_i^ℓ, the amplified MLP output. Our results demonstrate that amplifying internal representations produces interpretable meanings. See §4 for further analysis. Similarly, applying amplification to certain hidden states that Patchscopes fails to interpret also produces interpretable meanings. Formally, this means that we replace h̄_{i*}^{ℓ*} with α · h_i^ℓ. The selection of α parallels the choice of the guidance scale in Classifier-Free Guidance (CFG)—a larger α enhances feature emphasis. However, setting α too high may distort the vector, leading to less interpretable and distorted results. The Superscopes framework automatically determines α values based on cosine similarity. A detailed analysis of our scaling approach is provided in §4. 3.5 Superscopes as a variation of Classifier-Free Guidance (CFG) Classifier-Free Guidance (CFG) (Ho et al. (2022)) is a widely used technique in diffusion models that improves the alignment of generated samples with a given condition, such as a text prompt in text-to-image generation. It has been broadly adopted in diffusion-based models (Chen et al. (2024)). CFG operates by first computing an unconditional output, followed by a conditioned one. It then determines the difference between the conditional and unconditional outputs, defining a direction in the model’s latent space.
By amplifying this direction, the model is effectively steered toward outputs that better align with the given condition. Increasing the scale factor further reinforces alignment, emphasizing the semantic meaning embedded in the conditional guidance. The CFG amplification is expressed as: ε = ε_uncond + w · (ε_cond − ε_uncond), where:
• ε_uncond represents the model’s noise prediction when no conditioning information is provided.
• ε_cond represents the noise prediction when conditioning information (e.g., a text prompt) is included.
• w is the guidance scale, which controls the strength of conditioning in the generated output.
The equation essentially performs an amplified directional shift in prediction space, pushing the generated output closer to the conditioned estimate. By adjusting w:
• If w = 0, the model generates samples unconditionally, without any reliance on the condition.
• If w = 1, the model follows the standard conditional generation process.
• If w > 1, the model exaggerates the influence of the condition, leading to more precise alignment with the input prompt.
The core idea is that when given a condition (e.g., a text prompt like "blue fluffy dog") and comparing the output to an unconditioned one, subtracting the two produces a direction that represents "blue fluffy dog". By repeatedly adding this direction (controlled by the guidance scale factor), the model emphasizes the condition more strongly, resulting in an output that better aligns with the given prompt. Figure 2: An example of Stable Diffusion with the text guidance "dog" and varying CFG guidance scales. As shown in Figure 2, a higher guidance scale strengthens the emphasis on the "dog."
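The CFG update can be checked numerically with toy noise-prediction vectors. The values below are invented for illustration; real predictions would come from a diffusion model's denoising network.

```python
# Numeric sketch of the CFG update ε = ε_uncond + w·(ε_cond − ε_uncond)
# on invented 2-D "noise prediction" vectors.

def cfg(eps_uncond, eps_cond, w):
    """Shift the unconditional prediction along the conditioning direction."""
    return [u + w * (c - u) for u, c in zip(eps_uncond, eps_cond)]

eps_u = [0.0, 1.0]   # toy unconditional prediction
eps_c = [1.0, 3.0]   # toy conditional prediction

assert cfg(eps_u, eps_c, 0.0) == eps_u   # w = 0: fully unconditional
assert cfg(eps_u, eps_c, 1.0) == eps_c   # w = 1: standard conditional generation
exaggerated = cfg(eps_u, eps_c, 7.5)     # w > 1: exaggerate the condition
```

The w = 7.5 case pushes the output well past the conditional estimate along the conditioning direction, which is exactly the amplification behavior discussed next.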
However, an excessively high guidance scale is also suboptimal, as demonstrated in the figure. A similar example from Superscopes is presented below, where we examine the amplification of the MLP output for the token "ppers" in the prompt "Red Hot Chili Peppers". Interpreting it with a low amplification factor yields a nonsensical interpretation: Superscopes(mlp, α=3, ℓ=5) = "Chemical Compound". Increasing the amplification factor allows it to capture more meaning: Superscopes(mlp, α=9, ℓ=5) = "Rock band". Further increasing it encapsulates even more meaning: Superscopes(mlp, α=12, ℓ=5) = "Band from California". Examining this further, we observe a clear similarity between CFG and Superscopes, particularly in the interpretation of MLP outputs. MLP outputs play a significant role in shaping meaning, analogous to the directional shift induced by guidance in diffusion models. This behavior is notably similar, as increasing the scaling factor further emphasizes the features of the amplified vector. We also observe a formal similarity. As previously mentioned, Patchscopes commonly succeeds in interpreting h_PreMLP_i^ℓ and h_i^ℓ but fails to interpret mlp_i^ℓ.
Thus, we define mlp_i^ℓ as: mlp_i^ℓ = h_i^ℓ − h_PreMLP_i^ℓ. This formulation essentially views mlp_i^ℓ as the "directional shift" needed to move from h_PreMLP_i^ℓ to h_i^ℓ. Conceptually, this closely resembles the directional shift used in CFG: ε_cond − ε_uncond. This resemblance highlights both the logical and formal similarity between Superscopes and Classifier-Free Guidance, as both amplify meaning by scaling a directional shift in a similar manner. 4 Experiments Figure 3: A graph measuring the number of successful interpretations of MLP outputs, by Superscopes and Patchscopes, over different layers. See §4.1 for further analysis.
Layer | Original Hidden State Interpretation | Original MLP Output Interpretation | ✓/✗ | Amplified MLP Output Interpretation | ✓/✗
2 | (Live) Aid: A charity single recorded by… | shortened form of the word "privile…" (incorrect interpretation) | ✗ | A live television variety show (α amplifier = 15) | ✓
3 | American sketch comedy and variety show | – | – | – | –
Table 1: An example of applying Patchscopes to the token "Live" from the prompt "Saturday Night Live" at layer 2, before hidden state contextualization occurs (first observed at layer 3). The raw MLP output appears nonsensical; however, once amplified, it correctly reflects the contextual meaning of the sentence.
Layer | Original Hidden State Interpretation | Original MLP Output Interpretation | ✓/✗ | Amplified MLP Output Interpretation | ✓/✗
1 | Ahead of one's time | Time period after the present (incorrect interpretation) | ✗ | 1985 film, Mart… (α amplifier = 9) | ✓
2 | Science fiction film trilogy, and the … | – | – | – | –
Table 2: Another example of MLP output interpretation.
This example applies Patchscopes to the token "Future" from the prompt "Back to the Future" at layer 1, before hidden-state contextualization occurs (first observed at layer 2). The raw MLP output relates to "Future" but lacks contextualized meaning. However, once amplified, it correctly reflects the contextual meaning of the sentence, referring to the 1985 science fiction movie Back to the Future, starring Marty McFly.

Layer | Original Hidden State Interpretation | ✓/✗ | Amplified Hidden State Interpretation | ✓/✗
4 | Barack Obama: 44th President (incorrect interpretation) | ✗ | Ancient Greek king (α = 3) | ✓
5 | Barack Obama: American politician (incorrect interpretation) | ✗ | Ancient Greek king of Macedon (α = 3) | ✓
6 | Ancient Greek king of Macedon | ✓ | – | –

Table 3: An example of applying Patchscopes to the token "Great" from the prompt "Alexander the Great". At layer 4, Patchscopes yields "Barack Obama", while the Superscoped interpretation yields a correct one ("Ancient Greek king"). Similarly, at layer 5, Patchscopes again yields "Barack Obama" while Superscopes yields an even more precise interpretation ("of Macedon").

This section presents an evaluation of the Superscopes technique by examining the degree of contextualization in MLP outputs (§4.1) and hidden state representations (§4.2) when these components are amplified, as compared to the original internal representation. To further substantiate the versatility of the Superscopes technique, we patched the amplified internal representations into various target layers. Specifically, patches were applied to both the initial layer (ℓ* = 0) and the originating layer of the internal representation we wish to interpret (ℓ = ℓ*). Note that we use Patchscopes as our self-interpretation method of choice, but similar approaches, such as SelfIE, can also be used.
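The core operation evaluated in these experiments, scaling the MLP "directional shift" before patching it into a target prompt, can be sketched as below. This is a minimal NumPy sketch under the paper's definitions, not the authors' implementation; the function name and toy vectors are illustrative.

```python
import numpy as np

def superscope_mlp(h_pre_mlp: np.ndarray, h_post: np.ndarray, alpha: float) -> np.ndarray:
    """Amplify the MLP output, i.e. the directional shift from the pre-MLP
    residual stream to the post-MLP hidden state, by a factor alpha,
    mirroring how CFG scales (eps_cond - eps_uncond)."""
    mlp_out = h_post - h_pre_mlp   # mlp_i^l = h_i^l - h_{PreMLP,i}^l
    return alpha * mlp_out         # amplified vector to patch into the target prompt

# Toy vectors; with alpha = 1 the raw MLP output is recovered,
# which corresponds to the Patchscopes baseline.
h_pre = np.array([0.2, -0.1, 0.5])
h_post = np.array([0.5, 0.1, 0.4])
amplified = superscope_mlp(h_pre, h_post, 9.0)
```

Larger α values emphasize the features carried by this vector, as in the "Red Hot Chili Peppers" example above.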
4.1 Amplifying MLP Outputs

In this experiment, we demonstrate that amplified MLP outputs, as interpreted through Patchscopes, exhibit meaningful contextualization even when the original MLP output yields nonsensical results. Notably, we show that in some cases, these amplified outputs carry contextualized meaning before the first layer where the hidden state encodes the resolved entity.

Methods

We evaluated various amplification levels of MLP outputs, including a non-amplified baseline, to identify the most interpretable vectors using Patchscopes. As detailed in §3.4, let ℓ_c denote the first layer where the hidden state encodes the resolved entity (as identified by Patchscopes); we interpret mlp⁺_i^ℓ starting from ℓ = ℓ_c − 1. To determine an optimal amplification value, we conducted our tests using α ∈ {1, 3, 6, 9, 12, 15}, in addition to testing intermediate values between them. To derive ℓ_c, we followed the methodology from Patchscopes, analyzing how large language models contextualize entity names. This involved crafting a target prompt to generate a subject description and applying it to the hidden representation at the last subject position in the source prompt, where the model forms the subject representation (Geva et al. (2023); Hernandez et al. (2023)). This approach allows us to observe how the model describes the subject at each layer and identify the first layer where contextualization occurs.

Measurement Metrics

To measure the performance of Superscopes on MLP outputs, we designed two distinct experiments.
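The derivation of ℓ_c described in the Methods above, scanning layers for the first Patchscopes interpretation that matches the resolved entity, can be sketched as follows. A hypothetical keyword match stands in for the semantic judgment described in the text; all names and strings are illustrative.

```python
def first_contextualized_layer(layer_interpretations, entity_keywords):
    """Return l_c: the index of the first layer whose Patchscopes
    interpretation mentions the resolved entity, or None if no layer does."""
    for layer, text in enumerate(layer_interpretations):
        if any(k.lower() in text.lower() for k in entity_keywords):
            return layer
    return None

# Illustrative per-layer interpretations for a token like "ppers":
interps = ["a chemical compound", "a spicy vegetable", "rock band from California"]
l_c = first_contextualized_layer(interps, ["band"])  # contextualization first appears at index 2
```

Amplified representations are then interpreted starting from the layer before ℓ_c.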
In the first experiment, we sought manual results that emphasize the interpretability of amplified MLP outputs, specifically examining how, in certain prompts, nearly the entire context of a sentence can be inferred from an amplified MLP output prior to hidden-state contextualization. In the second experiment, we aimed to demonstrate the superiority of Superscopes over Patchscopes using a larger set of examples. We applied Superscopes to each prompt for ℓ ∈ {1, …, 7} with the previously defined α values, where α = 1 corresponds to Patchscopes. Performance was evaluated using all-MiniLM-L6-v2, a model trained for semantic similarity and sentence representations. When computing the cosine-similarity score, we set a threshold of 0.3 as a reasonable indicator of success, based on empirical observations. While this value works well in our experiments, other thresholds can be chosen to achieve similar results.

Results

Using a set of short prompts ("Diana, Princess of Wales", "Back to the Future", "Saturday Night Live", …), we demonstrate that Superscopes enables the interpretation of MLP outputs that initially appear nonsensical before amplification. Tables 1 and 2 illustrate two examples of a hidden state in a layer before contextualization, while also showing how the fully contextualized meaning is successfully extracted from the amplified MLP output at the same layer. Additionally, the illustration on the front page (Figure 1) and the Superscopes application (§5) serve as further examples of successful Superscopes interpretations. Through the second experiment, we demonstrated significantly superior results (see Figure 3), thereby validating the effectiveness of the Superscopes method.

4.2 Amplifying Hidden State Representations

Applying Superscopes to hidden state representations also yields exciting results.
Patchscopes (§3.1) also introduced a mapping function f, and suggests that this function can be the identity function, a linear or affine transformation learned from task-specific representation pairs, or a more complex function incorporating additional data sources. We suggest that this function can also simply be the amplification of hidden states, as introduced by Superscopes. In addition to successfully interpreting MLP outputs, Superscopes also improves the ability to interpret hidden representations, which we examine in this section.

Methods

Similarly to the MLP output evaluation, we evaluated various amplification levels of hidden states, including a non-amplified baseline, to identify the most interpretable vectors using Patchscopes. As detailed in §3.4, let ℓ_c denote the first layer where the hidden state encodes the resolved entity (as identified by Patchscopes); we interpret h_i^ℓ, starting from ℓ = ℓ_c − 1 and moving backwards through the layers. To identify the optimal amplification value, we conducted our tests across a wider range of α values, highlighting the precision necessary for effective hidden-state Superscopes interpretation. To derive ℓ_c, we followed the same methodology as in the MLP output evaluation (§4.1), crafting a prompt that encourages the model to "translate" a hidden representation (see §3.1 for further details).

Measurement Metrics

To evaluate the performance of Superscopes on hidden state representations, we examined various short prompts and attempted to interpret hidden state representations from earlier layers than the ones interpreted with broader context through Patchscopes. A successful attempt was defined as one in which we were able to interpret layers preceding the contextualized layer analyzed using Patchscopes.
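The mapping-function view described at the start of this subsection, taking Patchscopes' f to be plain amplification with no learned transformation, can be sketched as below. This is an illustrative sketch; `make_amplifier` is not a name from the paper's codebase.

```python
import numpy as np

def make_amplifier(alpha: float):
    """The Patchscopes mapping function f taken to be simple amplification:
    f(h) = alpha * h, requiring no training and no task-specific pairs."""
    return lambda h: alpha * h

f = make_amplifier(3.0)        # alpha = 3, the value used in the Table 3 examples
h = np.array([0.1, -0.2, 0.4])
h_amplified = f(h)             # scaled hidden state, patched in place of h
```

The appeal of this choice is that it preserves the feature's direction and changes only its magnitude, consistent with the features-as-directions perspective.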
Results

Similarly to our evaluation of MLP output amplification, using a set of short prompts ("Alexander the Great", "Red Hot Chili Peppers", "Florence and the Machine"), we demonstrate that Superscopes also enables the interpretation of hidden states that appeared nonsensical before amplification. Table 3 presents an example of hidden state representations successfully interpreted using Superscopes two layers prior to contextualization.

5 Applications

5.1 The Superscopes Framework

Building on our findings, we recognize the potential of using amplification for early exits and for improving the understanding of language models' thought process and contextualization. To support this, we introduce Superscopes: an all-in-one, fully open-source utility designed to evaluate outcomes across various amplification levels, target layers, and prompts.

Functionality and Features

The application specifically focuses on the pre-MLP residual stream (also referred to as the "Post-Attention-Pre-MLP residual stream"), MLP outputs, and final hidden-state values. Earlier, we demonstrated (§4.1 and §4.2) that amplified MLP outputs and hidden representations can be interpreted more effectively when Superscoped. To facilitate this, we developed the Superscopes framework to identify and "fine-tune" such cases. Superscopes allows adjustment of the following parameters:

1. Inspected prompt – Fully adjustable and configurable.
2. Inspected token – Any input token can be examined using Superscopes.
3. Source layer range – Fully adjustable and configurable.
4. Target layer – Can be configured between same-layer patching and patch-to-layer-zero configurations.
5. Automatic amplifier detection – Using the methods described in §4.1 (based on cosine-similarity scores), Superscopes automatically identifies the best amplifier for each scenario while also displaying all tested amplifiers.
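Automatic amplifier detection, combined with the cosine-similarity success criterion from §4.1, can be sketched as below. A pure-NumPy cosine similarity stands in for all-MiniLM-L6-v2 sentence embeddings; the function names and toy vectors are illustrative, not from the framework's code.

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def best_amplifier(embeds_by_alpha: dict, reference: np.ndarray, threshold: float = 0.3):
    """Score each tested amplifier by the cosine similarity between its
    interpretation's embedding and a reference embedding, then return the
    best alpha and whether it clears the empirical 0.3 success threshold."""
    scores = {a: cosine(e, reference) for a, e in embeds_by_alpha.items()}
    alpha = max(scores, key=scores.get)
    return alpha, scores[alpha] >= threshold

ref = np.array([1.0, 1.0, 0.0])             # embedding of the expected meaning
embeds = {1: np.array([0.0, 0.1, 1.0]),     # near-orthogonal interpretation: fails
          9: np.array([0.9, 1.1, 0.1])}     # close to the reference: passes
alpha, success = best_amplifier(embeds, ref)
```

In the framework, all tested amplifiers are displayed alongside the automatically selected one.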
Figure 4: The Superscopes Framework

Key Examples

Leveraging the flexibility of our framework, we uncovered several key observations. One notable finding emerged for the source prompt "Diana, Princess of Wales". Using the "Show Best Results" option, compared to standard Patchscopes, we found that Superscopes successfully revealed (Figure 6) meaning for both the hidden state and the MLP output, interpretations that were otherwise inaccessible (Figure 5).

Figure 5: Using an amplifier of α = 1 (Patchscopes), we obtain partial and incorrect interpretations for both the Pre-MLP residual and the MLP output.

Figure 6: Using Superscopes's automatic amplifier detection, we obtain correct interpretations for both the Pre-MLP residual and the MLP output.

The Superscopes framework provides an accessible platform for testing various amplification levels, target layers, and prompts, enabling a deeper exploration of their impact on different interpretations.

6 Conclusion and Future Work

In this paper, we introduced Superscopes, a method that extends Patchscopes by amplifying hidden states and MLP outputs before patching. We demonstrated that even hidden states previously considered uninterpretable (e.g., MLP residual outputs in large Transformer layers) can reveal coherent semantic content when appropriately scaled. Our results reinforce the concept of features as directions in large language models, highlighting that interpretability can be improved by controlling the magnitude of these directions. Furthermore, we show that Superscopes can also be understood as a variant of Classifier-Free Guidance (CFG), exhibiting similar behaviors in internal representations. We also introduce the Superscopes Framework to facilitate further research and exploration of the effects of amplification on different components of a model.
Further research directions include extracting meaning from attention outputs, which initially appear to exhibit a "denser form" of superposition (partially due to attention heads). Another avenue for future work is refining the Superscopes methodology: one possibility is developing a more effective approach for selecting the optimal amplifier, while others involve exploring alternative ways to amplify meaning within Superscopes. Additionally, similar to Tuned Lens (Belrose et al. (2023)), there may be a more suitable "base" vector than directly patching the MLP output. Other directions for future work include searching for better ways to patch different internal representations (e.g., MLP outputs). Our work also explored adding MLP outputs across multiple layers in a single inference pass; however, future research may uncover new methods to make this approach more effective.

7 Acknowledgments

We would like to thank Guy Dar, Shahar Satamkar, Hila Chefer, Yossi Gandelsman and Dr. Eli David for their valuable feedback and support throughout our work. The code for this project is available on GitHub at Superscopes.

References

Chen et al. (2023) H. Chen, C. Vondrick, and C. Mao. SelfIE: A self-interpreting neural architecture. URL https://arxiv.org/pdf/2403.10949.
Dar (2023) G. Dar. Speaking probes: Self-interpreting models. URL https://medium.com/towards-data-science/speaking-probes-self-interpreting-models-7a3dc6cb33d6.
Ghandeharioun et al. (2024) A. Ghandeharioun, A. Caciularu, A. Pearce, L. Dixon, and M. Geva. Patchscopes: A unifying framework for inspecting hidden representations of language models. In ICML, 2024. URL https://arxiv.org/abs/2401.06102.
Elhage et al.
(2022) Elhage, N., Hume, T., Olsson, C., Schiefer, N., Henighan, T., Kravec, S., Hatfield-Dodds, Z., Lasenby, R., Drain, D., Chen, C., Grosse, R., McCandlish, S., Kaplan, J., Amodei, D., Wattenberg, M., and Olah, C. Toy models of superposition. Transformer Circuits Thread, 2022. URL https://transformer-circuits.pub/2022/toy_model/index.html.
Casper et al. (2022) Casper, S., Rauker, T., Ho, A., and Hadfield-Menell, D. SoK: Toward transparent AI: A survey on interpreting the inner structures of deep neural networks. In First IEEE Conference on Secure and Trustworthy Machine Learning, 2022. URL https://arxiv.org/abs/2207.13243.
Madsen et al. (2022) Madsen, A., Reddy, S., and Chandar, S. Post-hoc interpretability for neural NLP: A survey. ACM Computing Surveys, 55(8):1–42, 2022. URL https://arxiv.org/abs/2108.04840.
Patel & Pavlick (2021) Patel, R. and Pavlick, E. Mapping language models to grounded conceptual spaces. In International Conference on Learning Representations, 2021. URL https://www.semanticscholar.org/paper/Mapping-Language-Models-to-Grounded-Conceptual-Patel-Pavlick/57db9833549247241decf442fcc30f6eb414981b.
Nanda et al. (2023) Nanda, N., Lee, A., and Wattenberg, M. Emergent linear representations in world models of self-supervised sequence models. In Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pp. 16–30, Singapore, December 2023. Association for Computational Linguistics. URL https://aclanthology.org/2023.blackboxnlp-1.2.
Bereska & Gavves (2024) L. Bereska and E. Gavves. Mechanistic interpretability for AI safety: A review. URL https://arxiv.org/abs/2404.14082.
Arkoudas (2023) K. Arkoudas. GPT-4 can't reason. URL https://arxiv.org/abs/2308.03762.
Belinkov & Glass (2019) Belinkov, Y. and Glass, J.
Analysis methods in neural language processing: A survey. Transactions of the Association for Computational Linguistics, 7:49–72, 2019. URL https://aclanthology.org/Q19-1004/.
Belinkov (2022) Belinkov, Y. Probing classifiers: Promises, shortcomings, and advances. Computational Linguistics, 48(1):207–219, 2022. URL https://aclanthology.org/2022.cl-1.7/.
nostalgebraist (2020) nostalgebraist. interpreting GPT: the logit lens. LessWrong, 2020. URL https://www.lesswrong.com/posts/AcKRB8wDpdaN6v6ru/interpreting-gpt-the-logit-lens.
Belrose et al. (2023) Belrose, N., Furman, Z., Smith, L., Halawi, D., Ostrovsky, I., McKinney, L., Biderman, S., and Steinhardt, J. Eliciting latent predictions from transformers with the tuned lens. arXiv preprint arXiv:2303.08112, 2023. URL https://arxiv.org/abs/2303.08112.
Meng et al. (2022) Meng, K., Bau, D., Andonian, A., and Belinkov, Y. Locating and editing factual associations in GPT. Advances in Neural Information Processing Systems, 35:17359–17372, 2022. URL https://arxiv.org/abs/2202.05262.
Wallat et al. (2020) Wallat, J., Singh, J., and Anand, A. BERTnesia: Investigating the capture and forgetting of knowledge in BERT. URL https://aclanthology.org/2020.blackboxnlp-1.17/.
Wang et al. (2022) Wang, K. R., Variengien, A., Conmy, A., Shlegeris, B., and Steinhardt, J. Interpretability in the wild: a circuit for indirect object identification in GPT-2 small. In The Eleventh International Conference on Learning Representations, 2022. URL https://arxiv.org/abs/2211.00593.
Geva et al. (2023) Geva, M., Bastings, J., Filippova, K., and Globerson, A. Dissecting recall of factual associations in auto-regressive language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 12216–12235, Singapore, December 2023. Association for Computational Linguistics. doi:10.18653/v1/2023.emnlp-main.751. URL https://aclanthology.org/2023.emnlp-main.751.
Elhage et al.
(2021) Elhage, N., Nanda, N., Olsson, C., Henighan, T., Joseph, N., Mann, B., Askell, A., Bai, Y., Chen, A., Conerly, T., et al. A mathematical framework for transformer circuits. Transformer Circuits Thread, 2021. URL https://transformer-circuits.pub/2021/framework/index.html.
Alain & Bengio (2017) Alain, G. and Bengio, Y. Understanding intermediate layers using linear classifier probes. In 5th International Conference on Learning Representations, Workshop Track Proceedings, 2017. URL https://arxiv.org/abs/1610.01644.
Conmy et al. (2023) Conmy, A., Mavor-Parker, A. N., Lynch, A., Heimersheim, S., and Garriga-Alonso, A. Towards automated circuit discovery for mechanistic interpretability. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://arxiv.org/abs/2304.14997.
Bricken et al. (2023) T. Bricken, A. Templeton, J. Batson, B. Chen, A. Jermyn, T. Conerly, N. Turner, C. Anil, C. Denison, A. Askell, R. Lasenby, Y. Wu, S. Kravec, N. Schiefer, T. Maxwell, N. Joseph, Z. Hatfield-Dodds, A. Tamkin, K. Nguyen, B. McLean, J. E. Burke, T. Hume, S. Carter, T. Henighan, and C. Olah. Towards monosemanticity: Decomposing language models with dictionary learning. Transformer Circuits Thread, 2023. URL https://transformer-circuits.pub/2023/monosemantic-features/index.html.
Ho & Salimans (2022) J. Ho and T. Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022. URL https://arxiv.org/abs/2207.12598.
Chen et al. (2024) M. Chen, S. Mei, J. Fan, and M. Wang. An overview of diffusion models: Applications, guided generation, statistical rates, and optimization. arXiv preprint arXiv:2404.07771, 2024. URL https://arxiv.org/abs/2404.07771.
Haas et al. (2024) R. Haas, I. Huberman-Spiegelglas, R. Mulayoff, S. Graßhof, S. S. Brandt, and T. Michaeli. Discovering interpretable directions in the semantic latent space of diffusion models. arXiv preprint arXiv:2303.11073, 2024.
URL https://arxiv.org/abs/2303.11073.
Li et al. (2024) H. Li, C. Shen, P. Torr, V. Tresp, and J. Gu. Self-discovering interpretable diffusion latent directions for responsible text-to-image generation. arXiv preprint arXiv:2311.17216, 2024. URL https://arxiv.org/abs/2311.17216.
Chefer et al. (2023) H. Chefer, O. Lang, M. Geva, V. Polosukhin, A. Shocher, M. Irani, I. Mosseri, and L. Wolf. The hidden language of diffusion models. arXiv preprint arXiv:2306.00966, 2023. URL https://arxiv.org/abs/2306.00966.
Arora et al. (2018) S. Arora, Y. Li, Y. Liang, T. Ma, and A. Risteski. Linear algebraic structure of word senses, with applications to polysemy. arXiv preprint arXiv:1601.03764, 2018. URL https://arxiv.org/abs/1601.03764.
Olah et al. (2020) C. Olah, N. Cammarata, L. Schubert, G. Goh, M. Petrov, and S. Carter. Zoom In: An introduction to circuits. Distill, 2020. doi:10.23915/distill.00024.001. URL https://distill.pub/2020/circuits/zoom-in.
Mikolov et al. (2013) T. Mikolov, W.-t. Yih, and G. Zweig. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 746–751, Atlanta, Georgia, June 2013. Association for Computational Linguistics. URL https://aclanthology.org/N13-1090/.
Levy & Goldberg (2014) O. Levy and Y. Goldberg. Linguistic regularities in sparse and explicit word representations. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pp. 171–180, Ann Arbor, Michigan, June 2014. Association for Computational Linguistics. doi:10.3115/v1/W14-1618. URL https://aclanthology.org/W14-1618/.
Radford et al. (2016) A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2016.
URL https://arxiv.org/abs/1511.06434.
Karpathy et al. (2015) A. Karpathy, J. Johnson, and L. Fei-Fei. Visualizing and understanding recurrent networks. arXiv preprint arXiv:1506.02078, 2015. URL https://arxiv.org/abs/1506.02078.
Zhou et al. (2015) B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Object detectors emerge in deep scene CNNs. arXiv preprint arXiv:1412.6856, 2015. URL https://arxiv.org/abs/1412.6856.
Bau et al. (2020) D. Bau, J. Zhu, H. Strobelt, A. Lapedriza, B. Zhou, and A. Torralba. Understanding the role of individual units in a deep neural network. Proceedings of the National Academy of Sciences, 117(48):30071–30078, 2020. doi:10.1073/pnas.1907375117. URL https://www.pnas.org/doi/10.1073/pnas.1907375117.
Schubert et al. (2021) L. Schubert, C. Voss, N. Cammarata, G. Goh, and C. Olah. High-low frequency detectors. Distill, 2021. doi:10.23915/distill.00024.005. URL https://distill.pub/2020/circuits/frequency-edges.
Hernandez et al. (2023) Hernandez, E., Li, B. Z., and Andreas, J. Measuring and manipulating knowledge representations in language models. arXiv preprint arXiv:2304.00740, 2023. URL https://arxiv.org/abs/2304.00740.