
Paper deep dive

SEM: Sparse Embedding Modulation for Post-Hoc Debiasing of Vision-Language Models

Quentin Guimard, Federico Bartsch, Simone Caldarella, Rahaf Aljundi, Elisa Ricci, Massimiliano Mancini

Year: 2026 · Venue: arXiv preprint · Area: cs.CV · Type: Preprint · Embeddings: 75

Intelligence

Status: succeeded | Model: google/gemini-3.1-flash-lite-preview | Prompt: intel-v1 | Confidence: 96%

Last extracted: 3/22/2026, 6:12:03 AM

Summary

The paper introduces Sparse Embedding Modulation (SEM), a post-hoc, zero-shot debiasing framework for vision-language models like CLIP. By leveraging Sparse Autoencoders (SAEs) to decompose dense, entangled CLIP embeddings into a sparse, disentangled latent space, SEM enables precise, neuron-level interventions to mitigate social and spurious biases while preserving semantic fidelity.

Entities (5)

CLIP · vision-language-model · 100%
SAE · neural-architecture · 100%
SEM · framework · 100%
Matryoshka SAE · neural-architecture · 95%
BendVLM · debiasing-method · 90%

Relation Signals (3)

SEM debiases CLIP

confidence 100% · SEM, a post-hoc, zero-shot debiasing framework... for post-hoc debiasing of vision-language models.

SEM operates in SAE

confidence 100% · SEM... operates in a Sparse Autoencoder (SAE) latent space.

SEM complements BendVLM

confidence 90% · we show that SEM can further improve the results of BendVLM

Cypher Suggestions (2)

Identify the architecture used by the SEM framework. · confidence 95% · unvalidated

MATCH (s:Framework {name: 'SEM'})-[:OPERATES_IN]->(a:Architecture) RETURN a.name

Find all debiasing methods and the models they target. · confidence 90% · unvalidated

MATCH (m:Model)<-[:DEBIASES]-(d:Method) RETURN m.name, d.name

Abstract

Abstract:Models that bridge vision and language, such as CLIP, are key components of multimodal AI, yet their large-scale, uncurated training data introduce severe social and spurious biases. Existing post-hoc debiasing methods often operate directly in the dense CLIP embedding space, where bias and task-relevant information are highly entangled. This entanglement limits their ability to remove bias without degrading semantic fidelity. In this work, we propose Sparse Embedding Modulation (SEM), a post-hoc, zero-shot debiasing framework that operates in a Sparse Autoencoder (SAE) latent space. By decomposing CLIP text embeddings into disentangled features, SEM identifies and modulates bias-relevant neurons while preserving query-relevant ones. This enables more precise, non-linear interventions. Across four benchmark datasets and two CLIP backbones, SEM achieves substantial fairness gains in retrieval and zero-shot classification. Our results demonstrate that sparse latent representations provide an effective foundation for post-hoc debiasing of vision-language models.

Tags

ai-safety (imported, 100%)
cs.CV (suggested, 92%)
preprint (suggested, 88%)

Links


Full Text

74,531 characters extracted from source content.


SEM: Sparse Embedding Modulation for Post-Hoc Debiasing of Vision-Language Models

Quentin Guimard 1,*, Federico Bartsch 1,*, Simone Caldarella 1, Rahaf Aljundi 2, Elisa Ricci 1,3, Massimiliano Mancini 1
1 University of Trento, 2 Toyota Motor Europe, 3 Fondazione Bruno Kessler
https://sparse-embedding-modulation.github.io/

Abstract

Models that bridge vision and language, such as CLIP, are key components of multimodal AI, yet their large-scale, uncurated training data introduce severe social and spurious biases. Existing post-hoc debiasing methods often operate directly in the dense CLIP embedding space, where bias and task-relevant information are highly entangled. This entanglement limits their ability to remove bias without degrading semantic fidelity. In this work, we propose SPARSE EMBEDDING MODULATION (SEM), a post-hoc, zero-shot debiasing framework that operates in a Sparse Autoencoder (SAE) latent space. By decomposing CLIP text embeddings into disentangled features, SEM identifies and modulates bias-relevant neurons while preserving query-relevant ones. This enables more precise, non-linear interventions. Across four benchmark datasets and two CLIP backbones, SEM achieves substantial fairness gains in retrieval and zero-shot classification. Our results demonstrate that sparse latent representations provide an effective foundation for post-hoc debiasing of vision-language models.

1. Introduction

Contrastive vision-language models [26,31] have become foundational tools in multimodal AI, learning a shared embedding space that aligns visual and textual semantics. Their text embeddings are a versatile interface for downstream tasks like cross-modal retrieval and classification. Despite their capabilities, the large-scale, uncurated nature of their training data introduces profound biases [6]. Consequently, models trained on this data inherit and amplify societal stereotypes and other spurious correlations [2,14,17].
This leads to critical failures: models associate 'doctor' with 'male' and 'nurse' with 'female' [14], link concepts like 'criminal' or 'thief' with specific ethnicities [14], or become over-reliant on context, correctly identifying a "fire hydrant" in a "street scene" but failing to see it in an unusual context like a warehouse [17]. Worse, the mere presence of a "street scene" can cause models to hallucinate a fire hydrant that isn't there [17]. These failures degrade model reliability and fairness in downstream applications, raising concerns about their wide adoption.

* Equal contribution
arXiv:2603.19028v1 [cs.CV] 19 Mar 2026

Figure 1. SAEs decompose entangled embeddings for precise intervention. (a) Standard methods operate directly on the dense, entangled CLIP embedding space. (b) Our SEM first projects the embedding into a sparse, disentangled latent space via an SAE. This enables a precise intervention on specific features, resolving the limitations of dense-space manipulation.

Existing bias mitigation methods are often impractical or insufficient. Methods that involve retraining the model, either fully [3] or through fine-tuning on balanced, group-annotated data [27], are computationally prohibitive and not feasible for practitioners using pre-trained models. Other post-hoc methods, while more flexible, still require training additional, complex modules on top of the frozen VLM [16,19,29]. This approach introduces significant training overhead, is not zero-shot, and may require retraining for new tasks or biases. We focus on debiasing the text embeddings, which is highly efficient for text-to-image retrieval. This text-only approach is effective, with performance comparable to methods debiasing image embeddings [9,11,16].
While zero-shot methods [1,9] offer greater flexibility, they typically identify a single bias subspace and remove it via orthogonal projection. This approach assumes that a single linear direction can model a complex, high-dimensional bias, an oversimplification for concepts like gender or ethnicity. This coarse-grained manipulation, acting on the entire dense embedding, fails to disentangle bias from content. This is reflected in our experiments (Tab. 3), where these methods struggle to improve performance for the most biased subgroups (i.e., worst-group accuracy) and show inconsistent fairness gains (Tab. 2). This highlights the fundamental limitation of intervening on dense, entangled embeddings.

To overcome this challenge, our method leverages a Sparse Autoencoder (SAE) [18,30] to decompose CLIP text embeddings into a high-dimensional, sparse feature space (Fig. 1). As confirmed by a preliminary analysis (Sec. 3.1), this sparse latent space is significantly more disentangled than the original dense embeddings, isolating concepts into more separable, individual features. This decomposition enables a precise, non-linear intervention at the feature level, moving beyond the limitations of single-subspace projection.

Building on this, we propose SPARSE EMBEDDING MODULATION (SEM), a novel post-hoc debiasing framework. SEM is zero-shot, requiring no task-specific fine-tuning. It relies on a single, pre-trained SAE (trained only once on a general-purpose text corpus) to perform its intervention. A key strength of SEM is its flexibility; it operates in three distinct settings based on the available information:
• SEM_i (Bias-Agnostic): Uses paraphrases generated with large language models (LLMs) to obtain a robust estimation of content-relevant neurons and then attenuates all other (likely spurious) features.
• SEM_b (Bias-Aware): Uses a list of bias prompts to perform structured, bias-specific neuron identification.
• SEM_bi (Full): Combines both approaches.

We validate SEM on two CLIP backbones across four challenging datasets, covering both social (ethnicity, gender) and spurious (background) biases. Our results show significant fairness gains in retrieval and zero-shot classification. Specifically, our method substantially improves worst-group accuracy, resolving the fairness–performance trade-off at the subgroup level where prior approaches often fall short. Moreover, its benefits are complementary to other approaches: we show that SEM can further improve the results of BendVLM [11], demonstrating its modularity.

Our contribution is threefold:
• We propose SEM, a new post-hoc, zero-shot debiasing framework that leverages SAEs to perform precise, neuron-level intervention on CLIP text embeddings.
• We demonstrate the versatility of SEM through three distinct variants (SEM_i, SEM_b, SEM_bi) that adapt to different levels of available information. Our framework is modular and can complement other methods to improve their results.
• We show that our approach overcomes a key limitation of previous zero-shot methods, achieving a significant improvement in worst-group accuracy (Tab. 3).

2. Related Work

Bias discovery. The presence of societal biases in machine learning models is a well-documented problem, with foundational work identifying significant gender and ethnic disparities in NLP and computer vision [7,8,15]. These biases are particularly pronounced in large-scale Vision-Language Models, which inherit and often amplify malignant stereotypes from uncurated web-scale data [2,6,14]. Given the opaque nature of these models, a significant line of work has focused on bias detection, e.g., using large language models and visual question answering to audit Text-to-Image models [10] or performing unsupervised bias detection in classifiers [13] to uncover structured biases in the form of attributes and classes (e.g., 'gender': 'male', 'female').
Our work builds on this structured understanding of bias, moving from detection to intervention.

Debiasing Vision-Language Models. Approaches to mitigate bias in VLMs can be broadly categorized by their point of intervention. Training-Time debiasing methods modify the model's training process. This includes classical group robustness techniques that require group-labeled data [21,27] or model-specific retraining [3,23]. Other approaches reduce computational burden by training lightweight modules on top of a frozen VLM, e.g., with adversarial learning [5], counterfactual data [32], or predefined bias corpora [16,19,29]. PRISM [24] learns a linear projection using only LLM-generated data, but requires training a new projection for every specific task and bias, limiting its scalability. To overcome computational burdens, a more flexible alternative is Post-Hoc Intervention on pre-trained models. The most common approaches are training-free and operate directly on the embeddings. For example, projection-based debiasing [9] uses "biased prompts" to identify a single bias subspace, which is then removed via orthogonal projection. Similarly, RoboShot [1] uses LLM-generated prompts to identify and remove "harmful" conceptual features. While simple, these methods treat the embedding as an uninterpretable vector and assume the bias is linearly separable. This coarse-grained manipulation, which operates on the entire dense embedding, struggles to disentangle bias from content. This is reflected in our experiments, where these methods show only marginal improvements for the most biased subgroups (i.e., worst-group accuracy) and have inconsistent fairness gains. BendVLM [11] attempts to refine this but introduces a significant constraint by requiring a labeled reference set of images at test time. Our work, SEM, is a post-hoc, zero-shot method that overcomes the limitations of prior projection methods.
Instead of treating the embedding as an entangled vector, SEM first decomposes it into a sparse set of high-dimensional features. This enables a precise, non-linear intervention at the neuron level, which is critical for addressing entangled biases and significantly improving worst-group performance where linear methods show limited gains (Sec. 4).

Sparse Autoencoders for Feature Decomposition. Our method is enabled by Sparse Autoencoders (SAEs), a tool for learning disentangled representations in an unsupervised manner. An SAE is trained to reconstruct a model's dense embedding from a high-dimensional, sparse latent vector [18]. This approach forces the SAE to learn a sparse dictionary of features that represent the original embedding as a sparse, non-linear combination of its dictionary atoms. This decomposition of a dense, entangled embedding into a sparse set of features is powerful because it allows for the identification and targeted modulation of specific features in a way that is not possible in the original dense space. While much SAE work focuses on exploring the internal activations of LLMs, we operate on the final text embeddings of CLIP. We specifically employ a Matryoshka SAE (MSAE) [30], a hierarchical architecture designed to learn representations at multiple granularities. This model establishes a state-of-the-art Pareto frontier between reconstruction quality and sparsity, which is essential for our method: it provides a high-fidelity decomposition of the CLIP embedding that is safe to intervene on. While concurrent work has begun to explore SAEs for fairness [4,28], our work, SEM, is the first to propose a principled, post-hoc intervention framework based on this technique.

3. Sparse Embedding Modulation

In this section, we introduce SPARSE EMBEDDING MODULATION (SEM), a post-hoc debiasing method that operates on the latent activations of a Sparse Autoencoder.
We begin with a motivating analysis supporting SAEs as a tool for disentanglement (Sec. 3.1), then formalize the problem (Sec. 3.2). We next describe our neuron-scoring framework for content relevance (Sec. 3.3) and bias sensitivity (Sec. 3.4), followed by our steering algorithm that produces debiased embeddings (Sec. 3.5).

3.1. Motivation: Quantifying Disentanglement

Before detailing our method, we first motivate our choice of Sparse Autoencoders (SAEs) as the foundational representation for debiasing. A primary challenge in post-hoc debiasing is that semantic concepts (e.g., 'profession') and bias attributes (e.g., 'race' or 'gender') are often entangled in the original embedding space of models like CLIP.

Figure 2. SAEs Significantly Improve Feature Disentanglement. We plot our Disentanglement Score (higher is better), which measures a profession probe's ability to avoid capturing bias. Standard CLIP embeddings (blue) show low disentanglement, while our SAE latent space (orange) consistently increases the score. (Panels: ViT-B/16 and ViT-L/14@336px; attributes: Race and Gender.)

To quantify this, we conduct a study on concept entanglement (details in Supp. Mat.) where, for fairness, we ensure the training set for all probes is perfectly balanced (i.e., each profession has an equal number of samples from each bias class). Furthermore, we first verify that both the main task ('profession') and the bias attributes are equally and near-perfectly decodable from both the CLIP and SAE spaces (see Supp. Mat.), establishing a valid baseline. We first train a linear probe (P_p) to predict 'profession' from a set of features (either standard CLIP embeddings or SAE latents). We then train a second sequential probe (P_{b←p}) to predict a 'bias attribute' using only the logits of P_p as input.
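The two probe accuracies, together with a chance baseline, feed the Disentanglement Score of Eq. (1). A minimal sketch of that score as a helper function (the function name and the illustrative accuracy values are ours, not the paper's):

```python
def disentanglement_score(acc_seq: float, acc_direct: float, acc_chance: float) -> float:
    """Eq. (1): D = 1 - (acc_seq - acc_chance) / (acc_direct - acc_chance).

    acc_seq    -- accuracy of the sequential probe P_{b<-p} (bias from profession logits)
    acc_direct -- accuracy acc_b of a probe trained directly on the features
    acc_chance -- random-guess baseline for the bias attribute
    """
    return 1.0 - (acc_seq - acc_chance) / (acc_direct - acc_chance)

# Sequential probe at chance: the profession logits leak no bias information.
print(disentanglement_score(0.50, 0.98, 0.50))  # -> 1.0
# Sequential probe matches the direct probe: all available bias information leaks.
print(disentanglement_score(0.98, 0.98, 0.50))  # -> 0.0
```

Normalizing by the directly decodable accuracy makes scores comparable across feature spaces where the bias attribute is equally decodable, which is the baseline the paper verifies first.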
We then propose a Disentanglement Score D ∈ [0, 1], where 1 signifies perfect disentanglement (the profession logits contain no bias information) and 0 signifies perfect entanglement (the profession logits contain all the bias information that was originally available in the features):

D = 1 − (acc_{b←p} − acc_b^chance) / (acc_b − acc_b^chance)    (1)

where acc_{b←p} is the sequential probe's accuracy, acc_b is the accuracy of a probe trained directly on the features, and acc_b^chance is the random-guess baseline.

As illustrated in Fig. 2, the original CLIP embeddings are highly entangled, with Disentanglement Scores remaining low (as low as 5-15%). In contrast, the SAE latent space improves disentanglement by 1.7-2.6× for the Gender attribute and by 5.6-5.7× for the more complex, multi-class Race attribute. This demonstrates that the SAE successfully disentangles the profession features from the bias features, enabling a targeted intervention. We therefore build our debiasing method on this SAE latent space, as formally introduced in the following section.

Figure 3. Overview of the SEM framework. Our method operates in two stages: (a) Scoring: The CLIP embedding is projected into the SAE latent space. Neurons are then scored for content relevance (Sec. 3.3) and bias sensitivity (Sec. 3.4) by comparing their activations to pre-computed prompt sets. (b) Steering: The scores are combined into a modulation coefficient M that attenuates bias neurons and boosts content neurons (Sec. 3.5). The final, debiased embedding is reconstructed from this modulated latent vector.

3.2. Problem Formulation

Given a prompt, our goal is to modify the model's behavior toward fairness, reducing biases. Formally, let us consider a contrastive VLM (i.e., CLIP [26]) as a dual-encoder architecture, with E_txt being the text encoder and E_vis the visual one. The two encoders map images in the space I and text in the space T to a shared multimodal space R^d, i.e., E_txt : T → R^d and E_vis : I → R^d. Moreover, let us define with C_a = {c^a_1, ..., c^a_n} a set of n bias classes (e.g., 'male', 'female') belonging to the bias attribute a (e.g., gender). Let us assume that for each class c^a_i, we have a test dataset D^a_i (e.g., images of male people). Critically, we assume these datasets are otherwise identical, e.g., they contain the same distribution of semantic concepts (like professions). Assuming that we can measure performance on the downstream task with a metric A, our desired behavior is:

A(E_txt, E_vis, D_i) = A(E_txt, E_vis, D_j), ∀i, j ∈ C_a,    (2)

i.e., performance is equal regardless of the input's bias class. Unfortunately, this does not happen in practice, due to the biased nature of the large-scale datasets the VLM was trained on. Therefore, we seek to modify the VLM in such a way that it can perform consistently across bias classes. Following previous works [9,11,16], we seek to achieve this desideratum by modifying the output text embeddings z = E_txt(x) in a post-hoc manner, leaving the pretrained encoders E_txt and E_vis frozen. A key challenge, however, is that the dimensions of the original embedding space R^d represent entangled semantics. Simply steering these representations directly can compromise their core semantic structure. To side-step this issue, we first project the embeddings into a high-dimensional, sparse latent space using a Sparse Autoencoder (SAE) [18,30], perform our manipulation in that space, and then reconstruct the embedding.

Sparse Autoencoders.
Given a text encoder E_txt and an input x ∈ T, we first obtain its embedding z = E_txt(x) ∈ R^d. A trained Sparse Autoencoder (in our case, a Matryoshka SAE [30]), S, maps this embedding into a high-dimensional, sparse latent representation h ∈ R^s (where s ≫ d) via an encoder W_e and a centering bias b_pre:

h = S_enc(z) = ReLU(W_e (z − b_pre)).    (3)

The encoder weights W_e and bias b_pre are trained to minimize a reconstruction loss (e.g., L_2) while enforcing sparsity on the activations h, either via an L_1 penalty or, in the case of MSAE, a TopK ReLU at different granularities. The original embedding can then be approximately reconstructed via a linear decoder W_d:

ẑ = S_dec(h) = W_d h + b_pre.    (4)

Our method operates by computing a modified latent vector h_debias and reconstructing a new, debiased embedding z_debias = S_dec(h_debias). As illustrated in Fig. 3, this process has two main stages. First (Fig. 3a), we analyze the SAE latent space to score neurons based on their content relevance (Sec. 3.3) and bias sensitivity (Sec. 3.4). Second (Fig. 3b), we use these scores to modulate the latent activations, an algorithm we detail as score-aware steering (Sec. 3.5).

3.3. Scoring Neurons: Content Relevance

The first step in our method is to identify which SAE neurons are semantically relevant to the input query q (e.g., 'person' or 'doctor'). To isolate these "content" neurons, we must distinguish their activation from a baseline. We establish this baseline by pre-computing the latent activations {h_p | p ∈ P_div} for a set of diverse, neutral prompts P_div. This set contains a wide variety of neutral sentences, allowing us to estimate the generic activation patterns of the neurons. Let h_q = S_enc(E_txt(q)) be the query's latent representation.
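The SAE mapping of Eqs. (3)-(4) amounts to one affine map with a ReLU, plus a linear decoder. A toy numpy sketch with random, untrained stand-in weights (dimensions follow the paper's setup; the MSAE's TopK sparsity machinery is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
d, s = 512, 16384               # CLIP embedding dim d; SAE latent dim s >> d

# Random stand-ins for trained SAE parameters.
W_e = rng.normal(scale=0.02, size=(s, d))    # encoder weights
W_d = rng.normal(scale=0.02, size=(d, s))    # decoder weights
b_pre = rng.normal(scale=0.01, size=d)       # centering bias

def sae_encode(z: np.ndarray) -> np.ndarray:
    """Eq. (3): h = ReLU(W_e (z - b_pre)); non-negative latent activations."""
    return np.maximum(W_e @ (z - b_pre), 0.0)

def sae_decode(h: np.ndarray) -> np.ndarray:
    """Eq. (4): z_hat = W_d h + b_pre; linear reconstruction of the embedding."""
    return W_d @ h + b_pre

z = rng.normal(size=d)          # stands in for a CLIP text embedding E_txt(x)
h = sae_encode(z)
z_hat = sae_decode(h)
print(h.shape, z_hat.shape)     # (16384,) (512,)
```

With trained weights, ẑ approximates z; here the point is only the shape of the computation that SEM intervenes on.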
We quantify the relevance of a neuron j by computing its percentile rank relative to the diverse activations:

S_concept(j) = (1 / |P_div|) Σ_{p ∈ P_div} 1[h_q(j) > h_p(j)],    (5)

where h_p = S_enc(E_txt(p)) and 1[·] is the indicator function. A high S_concept(j) score indicates that the neuron's high activation is "anomalous" for this specific query, suggesting it is semantically relevant to the query's core content.

Exploiting Augmentations. The score from Eq. (5) can be sensitive to the specific phrasing of the query q. To create a more robust estimate, we can augment the query with a set of LLM-generated paraphrases, P_q, akin to prior work, e.g., [1]. Specifically, we compute the latent activations for all paraphrases, H_q = {S_enc(E_txt(p)) | p ∈ P_q}, and extract a single content vector m_q as the element-wise median: m_q(j) = median(H_q(j)). The vector m_q is then used in place of h_q in Eq. (5). This strategy provides a more stable content estimation, less sensitive to linguistic variations, and better capturing the core semantics of the query.

3.4. Scoring Neurons: Bias Sensitivity

While the score in Eq. (5) identifies content-relevant neurons, it is bias-agnostic. However, we may refine this score provided a set of prompts [9] P_bias that describe the specific attributes we wish to mitigate. For instance, to mitigate the bias attribute 'gender', the prompts in P_bias will explicitly refer to the bias classes (e.g., 'male') of that attribute (e.g., "a photo of a man."). We believe that when comparing activations, the structure within a bias (i.e., classes and attributes) is crucial. Comparing activations of one class against the others permits distinguishing a specific bias neuron (e.g., activating only for 'male') from a general-concept neuron (e.g., activating for 'person', and thus all classes within 'gender'). This structured formulation finds neurons that are both strongly active for and specific to a given bias class. Following the notation in Sec.
3.2, for each class c ∈ C_a, we define its set of prompts as P_c ⊂ P_bias. We compute their latent activations H_c = {S_enc(E_txt(p)) | p ∈ P_c} and define a bias signature m_c as the element-wise median of these activations: m_c(j) = median(H_c(j)). This signature captures the expected activation for that specific bias class. From this signature, we compute two scores. The first is the general score, S_c^gen, measuring how the bias signature m_c activates relative to the neutral prompts P_div:

S_c^gen(j) = (1 / |P_div|) Σ_{p ∈ P_div} 1[m_c(j) > h_p(j)].    (6)

The second is the specific score, S_c^spec, which measures how strongly m_c activates relative to all other bias classes in P_bias, capturing the neuron's specificity:

S_c^spec(j) = (1 / |P_c̄|) Σ_{p ∈ P_c̄} 1[m_c(j) > h_p(j)],    (7)

where P_c̄ = P_bias \ P_c. Our goal is to isolate neurons that are highly active for a specific bias class but not for other bias classes or general concepts. We therefore combine these two scores using a minimum operation. The final bias sensitivity for a neuron j, S_bias(j), is its highest score across any bias class:

S_bias(j) = max_{c ∈ C_a} min(S_c^gen(j), S_c^spec(j)).    (8)

The min operation ensures we only select neurons that are both generally strong (vs. neutral) and specific (vs. other biases), while the max operation identifies any neuron that is specific to any of the bias classes.

3.5. Steering via Activation Modulation

The scores from Sec. 3.3 and Sec. 3.4 are combined into a final modulation coefficient M(j) for each neuron j. This coefficient is designed to amplify content-relevant neurons and attenuate bias-specific ones. The computation of M(j) depends on the available information.

Bias-Agnostic Modulation (SEM_i). In the bias-agnostic setting (using only P_div and P_q), we can only compute S_concept(j).
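The indicator sums of Eqs. (5)-(8) and the steering of Sec. 3.5 reduce to a few array operations. A toy numpy sketch with random activations (all variable names are ours; for brevity, Eq. (7)'s comparison against other classes' prompts is approximated by comparing against their median signatures):

```python
import numpy as np

def percentile_rank(v, baseline):
    """Per-neuron fraction of baseline activations that v exceeds (Eqs. (5)-(7))."""
    return (v[None, :] > baseline).mean(axis=0)

rng = np.random.default_rng(1)
s = 8                                        # tiny latent dim for illustration
h_div = rng.random((50, s))                  # activations of neutral prompts P_div
m_q = rng.random(s)                          # content vector (median over paraphrases)

S_concept = percentile_rank(m_q, h_div)      # Eq. (5): content relevance

# Bias sensitivity (Eqs. (6)-(8)) over two bias classes with median signatures m_c.
signatures = {"male": rng.random(s), "female": rng.random(s)}
S_bias = np.zeros(s)
for c, m_c in signatures.items():
    others = np.stack([m for k, m in signatures.items() if k != c])
    s_gen = percentile_rank(m_c, h_div)      # Eq. (6): strong vs. neutral prompts
    s_spec = percentile_rank(m_c, others)    # Eq. (7): specific vs. other classes
    S_bias = np.maximum(S_bias, np.minimum(s_gen, s_spec))  # Eq. (8): max_c min(.,.)

# Sec. 3.5 steering: modulate, then interpolate toward the neutral median activation.
h_q = rng.random(s)
m_div = np.median(h_div, axis=0)
M = (1.0 + S_concept - S_bias) ** 2          # bias-aware modulation coefficient
h_debias = h_q * M + (1.0 - M) * m_div      # Eq. (11)
```

A real deployment would use SAE latents from CLIP text embeddings rather than random vectors; the sketch only shows how scoring and steering compose.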
The modulation coefficient is thus defined to preserve high-relevance neurons and attenuate low-relevance (and thus, likely spurious) ones:

M(j) = S_concept(j)².    (9)

We denote this version as SEM_i. As noted in Sec. 3.3, it exclusively uses the augmented content score (derived from m_q) for S_concept(j). The importance of this attenuation is validated in our ablation study (Tab. 4), which shows that removing it causes a severe drop in worst-group accuracy.

Bias-Aware Modulation (SEM_b and SEM_bi). When P_bias is available, we compute both scores and merge them into our full modulation coefficient:

M(j) = (1 + S_concept(j) − S_bias(j))².    (10)

This formulation naturally handles all cases: it amplifies neurons where S_concept > S_bias (M > 1), attenuates neurons where S_concept < S_bias (M < 1), and preserves neurons where S_concept ≈ S_bias (M ≈ 1). As shown in our ablations (Tab. 4), the content-boosting term (+S_concept) is critical for preventing performance collapse on challenging spurious correlation tasks like Waterbirds, as it preserves essential, entangled content features. We denote this as SEM_b when using the base S_concept (from h_q) and SEM_bi when using the augmented S_concept (from m_q).

Steering and Reconstruction. From M(j), we compute the final debiased latent representation h_debias via interpolation:

h_debias = h_q ⊙ M + (1 − M) ⊙ m_div,    (11)

where ⊙ is the element-wise product and m_div = median({S_enc(E_txt(p)) | p ∈ P_div}) is the pre-computed median activation of the diverse prompts. This m_div acts as a neutral activation vector, replacing the activations of attenuated neurons. As a final implementation detail, for the SEM_i variant, we found it beneficial to replace the h_q in Eq. (11) with the robust median activation m_q. For SEM_b and SEM_bi, we use the original h_q. Once steered, the debiased embedding z_debias is reconstructed using the SAE decoder as defined in Eq. (4): z_debias = S_dec(h_debias).

Figure 4. Visualizing Debiasing on Entangled Concepts. (a) A 2D PCA of original CLIP embeddings for 100 professions. Gender clusters ('female', 'male') are clearly separated, but the 'neutral' and 'male' ones incorrectly overlap. (b) ORTH-PROJ achieves a partial overlap between 'male' and 'female' clusters, but fails to merge the 'neutral' cluster and appears to disrupt the data's underlying structure. (c) SEM_b successfully merges all three clusters ('male', 'female', and 'neutral') into a cohesive distribution with a consistent structure.

Table 1. Quantitative analysis of debiasing methods. Ideally, methods should have high Content Preservation and high Bias Neutralization. ORTH-PROJ fails at content preservation.

Method      | Content Preservation (↑) | Bias Neutralization (↑)
ORTH-PROJ   | 0.415                    | 0.916
SEM_b       | 0.878                    | 0.974

4. Experiments

4.1. Experimental Setup

Models and Baselines. We evaluate our method on two pre-trained CLIP backbones: ViT-B/16 and ViT-L/14@336px. We compare SEM against state-of-the-art post-hoc debiasing methods, grouped by the information they require at test time: (i) ROBOSHOT [1], which is bias-agnostic and uses input-specific prompts; (ii) ORTH-PROJ [9] and PRISM-MINI [24], which use bias prompts only; (iii) ORTH-CALI [9], which uses both bias and input-specific prompts; and (iv) BENDVLM [11], which uses both prompt types as well as labeled images.

Tasks and Datasets. We evaluate all methods on two tasks across four standard benchmarks. For cross-modal retrieval, we follow the protocol from Gerych et al. [11], using Stereotype Queries (e.g., "a photo of a criminal") on FairFace [20], UTKFace [33], and CelebA [22], and Hair Color Queries on CelebA.
For zero-shot classification, we evaluate on the "Blond Hair" attribute of CelebA and on the Waterbirds [27] spurious correlation benchmark.

Metrics. For retrieval, we report KL Divergence@500 (KL, ↓), MaxSkew@500 (MS, ↓), and Precision@500 (Prec., ↑). For zero-shot classification, we report Accuracy (Acc., ↑), Worst-Group Accuracy (WG, ↑), and Gap (↓).

Evaluation Protocol. Following Gerych et al. [11], all results are averaged over 10-fold cross-validation. Each fold's test set is randomly split into a 50% reference set (for methods requiring it, like BendVLM) and a 50% evaluation set.

SAE Training. We train a Matryoshka Sparse Autoencoder (MSAE) [30] for each CLIP backbone on 8.5M captions from the CC12M-cleaned dataset [25]. The SAEs use a latent dimension of 16384. Full details on the architecture, training objective (L_2 loss with reverse weighting), and hyperparameters are provided in the Supp. Mat.

4.2. Qualitative Study: Entanglement

Before presenting our main quantitative results, we first conduct a targeted study to analyze how different methods handle explicitly entangled prompts. This analysis provides a concrete illustration of the limitations of operating directly on the dense, entangled embedding space.

Study-Specific Setup. We use a set of 100 profession prompts, each paired with a gender (e.g., "a photo of a female doctor") and a neutral counterpart (e.g., "a photo of a doctor"). We compare the PCA of base embeddings (ViT-B/16), ORTH-PROJ [9], and our SEM_b with the content score (S_class) from the neutral profession prompt (see Supp. Mat.).

Visual Analysis. As shown in Fig. 4, the original CLIP space is clearly biased, with the 'neutral' profession embeddings overlapping the 'male' cluster. ORTH-PROJ achieves a large overlap between the 'male' and 'female' clusters but fails to properly merge the 'neutral' concepts, which remain separated. Furthermore, the three distributions have dissimilar structures.
In contrast, our SEM_b successfully achieves an almost full overlap between all three clusters. Crucially, all groups now share a similar overlapping structure, hinting that the underlying profession was better preserved.

Table 2. Measuring race and gender bias for Stereotype queries on FairFace and UTKFace. Bold: Best in setting (row group) and better than BASE CLIP. Underline: Best in setting, but not improving over BASE CLIP. Gray: Method is not zero-shot. Each entry is KL(↓)/MS(↓); B/16 = ViT-B/16, L/14 = ViT-L/14@336px.

BASE CLIP: FairFace B/16 Race 0.237/0.795, Gender 0.139/0.346; FairFace L/14 Race 0.244/0.798, Gender 0.114/0.326; UTKFace B/16 Race 0.124/0.475, Gender 0.134/0.321; UTKFace L/14 Race 0.124/0.461, Gender 0.040/0.185

Bias-agnostic + input-specific prompts:
ROBOSHOT: FairFace B/16 Race 0.327/0.891, Gender 0.349/0.508; FairFace L/14 Race 0.304/0.926, Gender 0.324/0.519; UTKFace B/16 Race 0.220/0.681, Gender 0.247/0.396; UTKFace L/14 Race 0.236/0.742, Gender 0.269/0.467
SEM_i: FairFace B/16 Race 0.170/0.691, Gender 0.087/0.268; FairFace L/14 Race 0.146/0.624, Gender 0.122/0.328; UTKFace B/16 Race 0.096/0.407, Gender 0.064/0.241; UTKFace L/14 Race 0.058/0.451, Gender 0.033/0.186

Bias prompts only:
ORTH-PROJ: FairFace B/16 Race 0.313/0.818, Gender 0.335/0.521; FairFace L/14 Race 0.213/0.783, Gender 0.034/0.164; UTKFace B/16 Race 0.281/0.541, Gender 0.196/0.387; UTKFace L/14 Race 0.200/0.493, Gender 0.050/0.220
PRISM-MINI: FairFace B/16 Race 0.301/0.805, Gender 0.340/0.522; FairFace L/14 Race 0.209/0.779, Gender 0.035/0.165; UTKFace B/16 Race 0.276/0.538, Gender 0.197/0.389; UTKFace L/14 Race 0.197/0.492, Gender 0.051/0.222
SEM_b: FairFace B/16 Race 0.231/0.749, Gender 0.097/0.277; FairFace L/14 Race 0.194/0.706, Gender 0.097/0.298; UTKFace B/16 Race 0.145/0.501, Gender 0.124/0.320; UTKFace L/14 Race 0.137/0.446, Gender 0.047/0.201
ZSDEBIAS: FairFace B/16 Race 0.198/0.785, Gender 0.123/0.320; FairFace L/14 Race 0.178/0.693, Gender 0.113/0.322; UTKFace B/16 Race 0.129/0.627, Gender 0.070/0.247; UTKFace L/14 Race 0.165/0.478, Gender 0.112/0.332

Bias prompts + input-specific prompts:
ORTH-CALI: FairFace B/16 Race 0.267/0.787, Gender 0.415/0.596; FairFace L/14 Race 0.169/0.657, Gender 0.052/0.206; UTKFace B/16 Race 0.242/0.517, Gender 0.266/0.457; UTKFace L/14 Race 0.180/0.527, Gender 0.040/0.201
SEM_bi: FairFace B/16 Race 0.217/0.749, Gender 0.088/0.256; FairFace L/14 Race 0.155/0.624, Gender 0.109/0.299; UTKFace B/16 Race 0.137/0.498, Gender 0.119/0.319; UTKFace L/14 Race 0.118/0.419, Gender 0.055/0.217
PRISM: FairFace B/16 Race 0.152/0.643, Gender 0.085/0.284; FairFace L/14 Race 0.147/0.614, Gender 0.051/0.230; UTKFace B/16 Race 0.142/0.508, Gender 0.093/0.293; UTKFace L/14 Race 0.159/0.543, Gender 0.038/0.198

Bias prompts + input-specific prompts + labeled images:
BENDVLM: FairFace B/16 Race 0.098/0.494, Gender 0.009/0.105; FairFace L/14 Race 0.106/0.577, Gender 0.005/0.080; UTKFace B/16 Race 0.099/0.416, Gender 0.009/0.101; UTKFace L/14 Race 0.089/0.484, Gender 0.009/0.106
BENDSEM_bi: FairFace B/16 Race 0.055/0.436, Gender 0.007/0.092; FairFace L/14 Race 0.063/0.436, Gender 0.007/0.087; UTKFace B/16 Race 0.054/0.422, Gender 0.005/0.078; UTKFace L/14 Race 0.045/0.330, Gender 0.006/0.081

Quantitative Analysis. We complement our visual analysis with a quantitative evaluation.
In particular, a successful debiasing method should achieve two goals: (1) Content Preservation: it must preserve the high cosine similarity of gendered prompts (e.g., "female doctor") to the neutral concept ("doctor"). (2) Bias Neutralization: it must push the cosine similarity between opposite-gender prompts (e.g., "female doctor" vs. "male doctor") above that of the original model (ideally towards 1.0). In Tab. 1, we quantitatively evaluate ORTH-PROJ and SEM_b against these two goals. ORTH-PROJ exhibits a severe degradation in content preservation, with its similarity to the neutral concept dropping to 0.415. Furthermore, it fails the debiasing objective, as the similarity between gendered pairs (0.916) is even lower than the original baseline (0.956). In contrast, our SEM_b retains a high degree of content similarity (0.878) while simultaneously succeeding at the debiasing goal, increasing the similarity between the 'female' and 'male' versions of a profession to 0.974.

4.3. Main Quantitative Results and Discussion

We present our main quantitative results in Tab. 2 (Retrieval) and Tab. 3 (Zero-Shot Classification). The tables are grouped by the information required by each method at test time, allowing for a fair comparison of our SEM variants against the baselines in each category. We include PRISM [24] and ZSDEBIAS [19] (both in gray) for reference, but note that these methods require bias-specific training (they are not zero-shot).

SEM Significantly Improves Zero-Shot Robustness. A primary failure of prior zero-shot methods is their inability to resolve strong spurious correlations. This is evident in Tab. 3 on the Waterbirds dataset, where methods like ORTH-PROJ and ROBOSHOT offer only marginal gains over BASE CLIP in Worst-Group (WG) accuracy. Our SEM variants, in contrast, provide a substantial improvement.
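The classification metrics discussed here (Accuracy, Worst-Group Accuracy, Gap) can be computed with a short sketch. Note that the Gap definition used below (best-group minus worst-group accuracy) is an assumption, as the paper's formal definition is not included in this excerpt.

```python
import numpy as np

def group_metrics(preds, labels, groups):
    """Overall accuracy, worst-group accuracy, and the gap between
    the best- and worst-performing groups."""
    preds, labels, groups = map(np.asarray, (preds, labels, groups))
    acc = float((preds == labels).mean())
    per_group = [float((preds[groups == g] == labels[groups == g]).mean())
                 for g in np.unique(groups)]
    wg = min(per_group)            # worst-group accuracy
    gap = max(per_group) - wg      # assumed gap definition
    return acc, wg, gap
```

For example, `group_metrics([1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 1])` returns an overall accuracy of 0.75 but a worst-group accuracy of 0.5, illustrating how spurious correlations can hide behind a high average.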
For instance, on Waterbirds (ViT-L/14), our SEM_bi improves WG accuracy from 0.396 (BASE CLIP) to 0.676, a 28-point gain that effectively addresses the core spurious correlation problem. Similarly, on the CelebA social bias task, SEM_b and SEM_bi consistently achieve the highest WG accuracy and the lowest Gap among all zero-shot methods.

SEM Achieves SOTA Fairness in Retrieval. On the Stereotype retrieval tasks shown in Tab. 2, SEM demonstrates strong and consistent fairness improvements. Our bias-agnostic SEM_i is the state of the art in its category, significantly outperforming ROBOSHOT, which often degrades fairness. For example, on FairFace Race (ViT-B/16), SEM_i improves the KL divergence from 0.237 to 0.170, while ROBOSHOT worsens it to 0.327. Our bias-aware methods, SEM_b and SEM_bi, are also highly competitive, outperforming or matching other zero-shot methods on 12 out of 16 social bias metrics. As shown in the Supp. Mat., our methods also achieve the highest retrieval precision on the CelebA Hair Color query, demonstrating that we improve fairness without sacrificing semantic accuracy.

SEM is Modular and Complementary. Finally, our feature-level intervention is modular and can be combined with other methods. The last row in each table shows the result of feeding our debiased SEM_bi embedding into BENDVLM [11], a method that requires a reference image set. This combined BENDSEM_bi approach sets a new state of the art, outperforming BENDVLM alone in 24 out of 28 metrics. The improvements are significant: on Waterbirds (ViT-L/14), WG accuracy increases from 0.416 to 0.745 (+32.9 points), and on UTKFace Race (ViT-L/14), KL divergence is reduced by 50.6% (from 0.087 to 0.043).

Table 3. Measuring zero-shot classification fairness on CelebA and Waterbirds. Bold: Best in setting (row group) and better than BASE CLIP. Underline: Best in setting, but not improving over BASE CLIP. Gray: Method is not zero-shot. Cells report Acc. (↑) / WG (↑) / Gap (↓); CA = CelebA (Gender), WB = Waterbirds (Background), B/16 = ViT-B/16, L/14 = ViT-L/14@336px.

Method        CA B/16            CA L/14            WB B/16            WB L/14
BASE CLIP     0.748/0.612/0.136  0.869/0.780/0.090  0.829/0.250/0.579  0.862/0.396/0.466
Bias-agnostic + input-specific prompts:
ROBOSHOT      0.788/0.693/0.095  0.848/0.812/0.036  0.806/0.262/0.545  0.862/0.485/0.377
SEM_i         0.736/0.610/0.125  0.791/0.744/0.047  0.801/0.496/0.305  0.832/0.523/0.309
Bias prompts only:
ORTH-PROJ     0.743/0.609/0.134  0.861/0.785/0.076  0.817/0.288/0.529  0.858/0.477/0.381
PRISM-MINI    0.743/0.609/0.134  0.861/0.785/0.076  0.817/0.288/0.529  0.858/0.477/0.381
SEM_b         0.796/0.709/0.086  0.856/0.824/0.032  0.825/0.430/0.395  0.855/0.631/0.225
ZSDEBIAS      0.713/0.498/0.215  0.829/0.733/0.096  0.812/0.222/0.590  0.838/0.350/0.488
Bias prompts + input-specific prompts:
ORTH-CALI     0.746/0.619/0.126  0.852/0.814/0.037  0.826/0.371/0.456  0.844/0.482/0.362
SEM_bi        0.794/0.718/0.076  0.851/0.820/0.031  0.807/0.545/0.262  0.835/0.684/0.151
PRISM         0.772/0.679/0.093  0.863/0.835/0.028  0.886/0.603/0.283  0.918/0.657/0.261
Bias prompts + input-specific prompts + labeled images:
BENDVLM       0.750/0.684/0.066  0.836/0.762/0.074  0.816/0.240/0.576  0.819/0.421/0.398
BENDSEM_bi    0.796/0.747/0.049  0.846/0.827/0.019  0.780/0.636/0.144  0.808/0.741/0.067

Table 4. Ablation study on zero-shot classification tasks (ViT-L/14@336px). Our full methods (SEM_i and SEM_b) provide the most robust, balanced performance. Bold: Best in setting. Cells report Acc. (↑) / WG (↑) / Gap (↓).

Method Variant              CelebA (Gender)    Waterbirds (Background)
SEM_i variants (bias-agnostic):
SEM_i (Full)                0.791/0.745/0.046  0.832/0.523/0.309
– M(j) = 1                  0.729/0.640/0.089  0.872/0.357/0.515
– median CLIP               0.687/0.558/0.129  0.879/0.400/0.479
SEM_b variants (bias-aware):
SEM_b (Full)                0.856/0.824/0.032  0.855/0.624/0.231
– M(j) = (1 − S_bias)²      0.833/0.812/0.021  0.848/0.445/0.403
– S_bias = S_gen only       0.846/0.818/0.028  0.856/0.647/0.209
– S_bias = S_spec only      0.853/0.822/0.031  0.849/0.662/0.187
This proves our approach is not a standalone competitor but a complementary framework that can enhance other methods.

4.4. Ablation Study

To validate our design choices, we conduct an ablation study focusing on our two zero-shot classification tasks (CelebA and Waterbirds) with the ViT-L/14@336px backbone. We present the results in Tab. 4 and provide full results for all tasks and backbones in the Supp. Mat.

Analysis of SEM_i. As shown in Tab. 4, our full SEM_i method provides the best overall performance in its category. Removing our relevance-based attenuation ("M(j) = 1") leads to a significant degradation in Worst-Group (WG) accuracy on both CelebA (from 0.745 to 0.640) and Waterbirds (from 0.523 to 0.357). This confirms that simply using the median activation is insufficient; our relevance-based attenuation is critical. Removing the SAE entirely and operating on dense CLIP embeddings ("median CLIP") results in an even larger performance drop on the CelebA task.

Analysis of SEM_b. The ablations for SEM_b highlight the importance of our full, balanced formulation. The variant without content boosting ("M(j) = (1 − S_bias)²") shows a severe drop in WG accuracy on the Waterbirds task, falling from 0.624 to 0.445. This indicates that our content-boosting term (S_class) is critical for preserving entangled content features. Furthermore, while the "specific only" variant achieves the best WG performance on Waterbirds (0.662), it does so at the cost of WG accuracy on CelebA (0.822 vs. our 0.824). Similarly, the "general only" variant is strong on Waterbirds but weaker on CelebA. Our full SEM_b, using both bias scores and content boosting, provides the most robust performance, achieving the highest WG accuracy on the social bias task (CelebA) while remaining highly competitive on the spurious correlation one (Waterbirds).

5.
Conclusion

We introduced SPARSE EMBEDDING MODULATION (SEM), a flexible, post-hoc, and zero-shot framework for mitigating biases in Vision-Language Models. SEM decomposes text embeddings into a high-dimensional, disentangled latent space using a Sparse Autoencoder, enabling precise, non-linear interventions. We presented three variants (SEM_i, SEM_b, SEM_bi) that adapt to different levels of available information, from bias-agnostic to bias-aware settings. Across four benchmarks, SEM consistently improves fairness and worst-group accuracy, resolving a key failure of prior methods. Finally, we demonstrate its modularity by combining it with BENDVLM to further improve its results, highlighting the benefits of sparse, feature-level debiasing.

Acknowledgments. The authors acknowledge the CINECA award under the ISCRA initiative for the availability of high-performance computing resources and support. This work was supported by the EU Horizon ELIAS (No. 101120237), ELLIOT (No. 101214398), and TURING (No. 101215032) projects.

References

[1] Dyah Adila, Changho Shin, Linrong Cai, and Frederic Sala. Zero-Shot Robustification of Zero-Shot Models. In ICLR, 2024.
[2] Sandhini Agarwal, Gretchen Krueger, Jack Clark, Alec Radford, Jong Wook Kim, and Miles Brundage. Evaluating CLIP: towards characterization of broader capabilities and downstream implications. arXiv preprint arXiv:2108.02818, 2021.
[3] Ibrahim Alabdulmohsin, Xiao Wang, Andreas Peter Steiner, Priya Goyal, Alexander D'Amour, and Xiaohua Zhai. CLIP the Bias: How Useful is Balancing Data in Multimodal Learning? In ICLR, 2024.
[4] Antonio Barbalau, Cristian Daniel Paduraru, Teodor Poncu, Alexandru Tifrea, and Elena Burceanu. Rethinking Sparse Autoencoders: Select-and-Project for Fairness and Control from Encoder Features Alone. In NeurIPS-WS, 2025.
[5] Hugo Berg, Siobhan Hall, Yash Bhalgat, Hannah Kirk, Aleksandar Shtedritski, and Max Bain.
A Prompt Array Keeps the Bias Away: Debiasing Vision-Language Models with Adversarial Learning. In IJCNLP, 2022.
[6] Abeba Birhane, Vinay Uday Prabhu, and Emmanuel Kahembwe. Multimodal datasets: misogyny, pornography, and malignant stereotypes. arXiv preprint arXiv:2110.01963, 2021.
[7] Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam T. Kalai. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. In NeurIPS, 2016.
[8] Joy Buolamwini and Timnit Gebru. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In Conference on Fairness, Accountability and Transparency, 2018.
[9] Ching-Yao Chuang, Varun Jampani, Yuanzhen Li, Antonio Torralba, and Stefanie Jegelka. Debiasing vision-language models via biased prompts. arXiv preprint arXiv:2302.00070, 2023.
[10] Moreno D'Incà, Elia Peruzzo, Massimiliano Mancini, Dejia Xu, Vidit Goel, Xingqian Xu, Zhangyang Wang, Humphrey Shi, and Nicu Sebe. OpenBias: Open-set Bias Detection in Text-to-Image Generative Models. In CVPR, 2024.
[11] Walter Gerych, Haoran Zhang, Kimia Hamidieh, Eileen Pan, Maanas K. Sharma, Tom Hartvigsen, and Marzyeh Ghassemi. BendVLM: Test-time debiasing of vision-language embeddings. In NeurIPS, 2024.
[12] Google. Gemini 2.5: Our most intelligent AI model, 2025. Accessed: 2025-09-01.
[13] Quentin Guimard, Moreno D'Incà, Massimiliano Mancini, and Elisa Ricci. Classifier-to-Bias: Toward Unsupervised Automatic Bias Detection for Visual Classifiers. In CVPR, 2025.
[14] Kimia Hamidieh, Haoran Zhang, Walter Gerych, Thomas Hartvigsen, and Marzyeh Ghassemi. Identifying implicit social biases in vision-language models. In AAAI/ACM Conference on AI, Ethics, and Society, 2024.
[15] Lisa Anne Hendricks, Kaylee Burns, Kate Saenko, Trevor Darrell, and Anna Rohrbach. Women also Snowboard: Overcoming Bias in Captioning Models. In ECCV, 2018.
[16] Yusuke Hirota, Min-Hung Chen, Chien-Yi Wang, Yuta Nakashima, Yu-Chiang Frank Wang, and Ryo Hachiuma. SANER: Annotation-free Societal Attribute Neutralizer for Debiasing CLIP. In ICLR, 2025.
[17] Parsa Hosseini, Sumit Nawathe, Mazda Moayeri, Sriram Balasubramanian, and Soheil Feizi. Seeing What's Not There: Spurious Correlation in Multimodal LLMs. arXiv preprint arXiv:2503.08884, 2025.
[18] Robert Huben, Hoagy Cunningham, Logan Riggs Smith, Aidan Ewart, and Lee Sharkey. Sparse Autoencoders Find Highly Interpretable Features in Language Models. In ICLR, 2024.
[19] Taeuk Jang, Hoin Jung, and Xiaoqian Wang. Target Bias Is All You Need: Zero-Shot Debiasing of Vision-Language Models with Bias Corpus. In ICCV, 2025.
[20] Kimmo Karkkainen and Jungseock Joo. FairFace: Face attribute dataset for balanced race, gender, and age for bias measurement and mitigation. In WACV, 2021.
[21] Evan Z. Liu, Behzad Haghgoo, Annie S. Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn. Just Train Twice: Improving Group Robustness without Training Group Information. In ICML, 2021.
[22] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In ICCV, 2015.
[23] Yan Luo, Min Shi, Muhammad Osama Khan, Muhammad Muneeb Afzal, Hao Huang, Shuaihang Yuan, Yu Tian, Luo Song, Ava Kouhana, Tobias Elze, Yi Fang, and Mengyu Wang. FairCLIP: Harnessing Fairness in Vision-Language Learning. In CVPR, 2024.
[24] Mahdiyar Molahasani, Azadeh Motamedi, Michael Greenspan, Il-Min Kim, and Ali Etemad. PRISM: Reducing Spurious Implicit Biases in Vision-Language Models with LLM-Guided Embedding Projection. In ICCV, 2025.
[25] OpenDiffusionAI. C12M-cleaned, 2023. Accessed: 2025-10-01.
[26] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.
Learning transferable visual models from natural language supervision. In ICML, 2021.
[27] Shiori Sagawa*, Pang Wei Koh*, Tatsunori B. Hashimoto, and Percy Liang. Distributionally Robust Neural Networks. In ICLR, 2020.
[28] Kuleen Sasse, Shan Chen, Jackson Pond, Danielle Bitterman, and John Osborne. debiaSAE: Benchmarking and Mitigating Vision-Language Model Bias. arXiv preprint arXiv:2410.13146, 2024.
[29] Ashish Seth, Mayur Hemani, and Chirag Agarwal. DeAR: Debiasing vision-language models with additive residuals. In CVPR, 2023.
[30] Vladimir Zaigrajew, Hubert Baniecki, and Przemyslaw Biecek. Interpreting CLIP with Hierarchical Sparse Autoencoders. In ICML, 2025.
[31] Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid Loss for Language Image Pre-Training. In ICCV, 2023.
[32] Haoyu Zhang, Yangyang Guo, and Mohan Kankanhalli. Joint Vision-Language Social Bias Removal for CLIP. In CVPR, 2025.
[33] Zhifei Zhang, Yang Song, and Hairong Qi. Age progression/regression by conditional adversarial autoencoder. In CVPR, 2017.

SEM: Sparse Embedding Modulation for Post-Hoc Debiasing of Vision-Language Models
Supplementary Material

A. SAE Training Details

As outlined in the main paper, we train a separate Sparse Autoencoder for each CLIP backbone (ViT-B/16 and ViT-L/14@336px). Below, we detail the architecture, objective, and optimization hyperparameters used.

Architecture and Objective. We employ the Matryoshka Sparse Autoencoder (MSAE) architecture proposed by Zaigrajew et al. [30]. Unlike standard SAEs, the MSAE is designed to learn hierarchically structured features. We set the total latent dimensionality to 16384. The model is trained to minimize the reconstruction error (MSE) computed at specific nested granularities, specifically g ∈ {256, 512}. To enforce the hierarchical structure, we apply Reverse Weighting (RW) to the loss function.
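A minimal sketch of a Matryoshka-style reconstruction objective with reverse weighting is shown below. It assumes the coarser granularity receives the larger weight; the exact weighting scheme of Zaigrajew et al. [30] is given in their paper, so the 1/g weights here are purely illustrative.

```python
import numpy as np

def msae_loss(x, z, W_d, b_pre, granularities=(256, 512), weights=None):
    """Matryoshka-style reconstruction loss (sketch): reconstruct the
    input from nested prefixes of the latent code, weighting the MSE of
    coarser prefixes more heavily ("reverse weighting").
    x: (B, D) inputs, z: (B, L) sparse latents, W_d: (L, D) decoder,
    b_pre: (D,) learned centering added back after decoding."""
    if weights is None:
        # Assumed reverse weighting: smaller granularity -> larger weight.
        weights = [1.0 / g for g in granularities]
    loss = 0.0
    for g, w in zip(granularities, weights):
        x_hat = z[:, :g] @ W_d[:g] + b_pre  # decode from the first g latents only
        loss += w * np.mean((x_hat - x) ** 2)
    return loss
```

Because each granularity decodes from a prefix of the same latent vector, the most salient concepts are pushed into the earliest dimensions, as described above.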
This weighting scheme assigns higher importance to errors at lower granularities (i.e., the top-256 features), ensuring that the most salient semantic concepts are captured by the earlier latent dimensions before finer-grained details are learned in the higher dimensions.

Initialization. We use a learned centering parameter b_pre, which is subtracted from the input embedding before encoding and added back after decoding. This parameter is initialized to the geometric mean of the training embeddings. For the weights, we follow standard SAE best practices: the decoder weights W_d are initialized using Kaiming uniform initialization and scaled, while the encoder weights W_e are initialized as the transpose of the decoder weights (W_e = W_d^T). The encoder bias is initialized to zero.

Optimization and Data. All models are optimized using the AdamW optimizer with a learning rate of 1 × 10^-4 and a batch size of 2048. We utilize a linear-decay learning rate scheduler, which maintains a constant learning rate for the initial portion of training before decaying linearly to zero. We use the C12M-cleaned dataset [25], split into 90% for training and 10% for validation.

Computational Resources. Training was performed on a shared high-performance cluster node equipped with a single NVIDIA A100 GPU (64GB HBM2e), 8 CPU cores, and 128 GB of RAM. Under this setup, training a single SAE takes approximately 1.5 hours.

B. Details on Disentanglement Study

In this section, we provide the full experimental details and results for the disentanglement study presented in Sec. 3.1 of the main paper.

B.1. Experimental Setup

Dataset Generation. To construct the probing dataset, we combine a set of templates with specific attributes. We use:
• Bias Attributes:
  – Gender (2 classes): 'male', 'female'.
  – Race (7 classes): 'Black', 'East Asian', 'Indian', 'Latino/Hispanic', 'Middle Eastern', 'Southeast Asian', 'White'.
• Main Attribute: Profession (100 classes).
The complete list is provided in Tab. 5.
• Templates: 20 diverse prompt templates (listed in Tab. 6) that vary syntactic structure while retaining the semantic content slots for bias and profession.

We generate all possible combinations of (Template × Bias × Profession), resulting in a balanced dataset where every profession is equally represented across all bias classes.

Probing Methodology. We use Logistic Regression classifiers as linear probes. To ensure a rigorous evaluation:
1. Data Split: We use 5-fold stratified cross-validation. The splits are stratified by the main task (profession) to ensure all classes are represented in training and testing.
2. Scaling: Feature inputs (CLIP embeddings or SAE latents) are standardized (zero mean, unit variance) using statistics computed on the training set of each fold.
3. Training: The probes are trained using the L-BFGS solver with a maximum of 1000 iterations to ensure convergence.

B.2. Two-Stage Disentanglement Experiment

We use a sequential probing setup to quantify conceptual entanglement:
1. Stage 1 (Main Task): We train a probe P_p to predict the 'profession' label from the features. We report its accuracy as acc_p.
2. Control (Bias Task): We train a probe P_b to predict the 'bias' label directly from the features. We report its accuracy as acc_b. This serves as an upper bound on the extractable bias information.
3. Stage 2 (Entanglement): We freeze P_p and use it to generate logits for the test set. We then train a second probe P_{b←p} to predict the 'bias' label using only these profession logits as input. We report its accuracy as acc_{b←p}.

A high acc_{b←p} indicates that the profession classifier relies on features that are entangled with the bias attribute. Ideally, if the embeddings are perfectly disentangled, the profession classifier should make its predictions without relying on any gender-related information.

Table 5. Complete list of 100 professions used in both the disentanglement and qualitative studies.

Accountant, Actor, Architect, Astronaut, Audiologist, Author, Baker, Barber, Biologist, Blacksmith, Bricklayer, Bus driver, Butcher, Carpenter, Chef, Chemist, Civil engineer, Cleaner, Clerk, Coach, Comedian, Computer programmer, Construction worker, Consultant, Counselor, Dancer, Dentist, Designer, Detective, Dietitian, DJ, Doctor, Driver, Economist, Editor, Electrician, Engineer, Entrepreneur, Farmer, Firefighter, Florist, Graphic designer, Hairdresser, Historian, Interpreter, Journalist, Judge, Lawyer, Librarian, Magician, Makeup artist, Manager, Marine biologist, Mathematician, Mechanic, Model, Musician, Nanny, Nurse, Nutritionist, Optician, Painter, Paramedic, Pastry chef, Pediatrician, Pharmacist, Photographer, Physicist, Pilot, Plumber, Police officer, Politician, Professor, Psychologist, Real estate agent, Receptionist, Recruiter, Reporter, Researcher, Sailor, Salesperson, Scientist, Security guard, Singer, Social worker, Software developer, Statistician, Surgeon, Surveyor, Teacher, Technician, Therapist, Tour guide, Translator, Vet, Videographer, Waiter, Web developer, Writer, Zoologist.

Table 6. Prompt templates used for the disentanglement study, with {bias} and {profession} placeholders.

A photo of a {bias} {profession}.
A {bias} {profession} at work.
An image of a {bias} {profession}.
An illustration of a {bias} {profession}.
A {bias} {profession}.
A studio shot of a {bias} {profession}.
A portrait of a {bias} {profession}.
A {bias} professional who works as a {profession}.
This is a {bias} {profession}.
A close-up of a {bias} {profession}.
Here is a {bias} {profession}.
A {bias} {profession} on the job.
A picture depicting a {bias} {profession}.
A {bias} individual employed as a {profession}.
A {bias} person who is a {profession}.
We can see a {bias} {profession} here.
A {bias} person working as a {profession}.
A {bias} {profession} posing for the camera.
This image shows a {bias} {profession}.
A depiction of a {bias} {profession}.

B.3. Full Results

Tab. 7 presents the detailed accuracies for all stages.
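The two-stage probing protocol above can be sketched with scikit-learn. This is a simplified sketch: it uses a single train/test split and omits the per-fold standardization, whereas the paper uses 5-fold stratified cross-validation with standardized features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sequential_probe(features, prof_labels, bias_labels):
    """Stage 1: train probe P_p on the main (profession) task.
    Stage 2: train P_(b<-p) to predict the bias label from P_p's
    logits alone. High second-stage accuracy means the main-task
    probe relies on bias-entangled features."""
    n = len(features)
    idx = np.random.RandomState(0).permutation(n)
    tr, te = idx[: n // 2], idx[n // 2:]

    p_p = LogisticRegression(max_iter=1000).fit(features[tr], prof_labels[tr])
    acc_p = p_p.score(features[te], prof_labels[te])

    logits = p_p.decision_function(features)  # frozen main-task probe
    if logits.ndim == 1:                      # binary case: make 2-D
        logits = logits[:, None]
    p_bp = LogisticRegression(max_iter=1000).fit(logits[tr], bias_labels[tr])
    acc_b_from_p = p_bp.score(logits[te], bias_labels[te])
    return acc_p, acc_b_from_p
```

When the bias attribute is statistically independent of the features used by P_p, the second-stage accuracy stays near chance; entangled representations push it well above.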
As noted in the main paper, both CLIP and SAE representations allow for near-perfect performance on the main task (acc_p > 0.99). However, the sequential probe accuracy (acc_{b←p}) is significantly lower for the SAE latent space compared to the dense CLIP embedding space. This quantitative gap drives the higher Disentanglement Score (D) reported in the main paper, confirming that the SAE effectively separates bias information from task-relevant semantics.

C. Details on Qualitative Study

In Sec. 4.2 of the main paper, we presented a qualitative analysis of conceptual entanglement. Here, we provide the detailed experimental setup, dataset construction, and formal definitions of the metrics used for that study.

C.1. Dataset Construction

To study the entanglement of bias and content, we constructed a targeted dataset of 100 profession prompts. The professions are the same as those listed in Tab. 5 (e.g., accountant, doctor, engineer). For each profession p, we generate three prompt variants:
1. Female: "A photo of a female {profession}."
2. Male: "A photo of a male {profession}."
3. Neutral: "A photo of a {profession}."
This results in a total of 300 prompts. This controlled set allows us to isolate the effect of the gender attribute on the profession semantics.

C.2. Methodology

Models and Baselines. We compute embeddings for all 300 prompts using the ViT-L/14@336px backbone, matching the quantitative results reported in Sec. 4.3. We compare three sets of embeddings:
• BASE CLIP: The original, unperturbed embeddings.
• ORTH-PROJ [9]: Embeddings debiased by projecting out the gender subspace.
• SEM_b: Embeddings debiased using our proposed sparse modulation. For this specific experiment, to ensure maximum content preservation, the content score S_concept was computed using the neutral profession prompt as the reference.

PCA Visualization. To generate the visualization in the main paper (Fig.
4), we apply Principal Component Analysis (PCA) to the set of 300 embeddings for each method independently. We project the embeddings onto their first two principal components. This allows us to visualize the geometric structure of the 'male', 'female', and 'neutral' clusters for each method without the projection being dominated by the global variance of the original space.

Table 7. Full Probing Results. Mean accuracies for profession prediction (acc_p), direct bias prediction (acc_b), and sequential entanglement probe (acc_{b←p}) across Race and Gender settings. Lower entanglement (acc_{b←p}) indicates better disentanglement. Cells report acc_p (↑) / acc_b (↑) / acc_{b←p} (↓).

Method     ViT-B/16 Race      ViT-B/16 Gender    ViT-L/14@336px Race  ViT-L/14@336px Gender
BASE CLIP  1.000/1.000/0.957  1.000/1.000/0.923  1.000/1.000/0.949    1.000/1.000/0.852
SAE        0.996/1.000/0.755  0.995/0.997/0.800  0.994/0.998/0.710    0.993/0.996/0.748

Metric Definitions. To quantify the visual observations, we defined two metrics based on cosine similarity. Let z_p^{neut,orig} denote the original BASE CLIP embedding for the neutral prompt of profession p. Let z_p^g denote the debiased embedding for profession p with gender attribute g ∈ G = {male, female}.

• Content Preservation (CP): This metric measures how well the gendered embeddings retain the semantics of the original neutral concept after debiasing. It is computed as the average cosine similarity between the gendered embeddings and the original neutral anchor:

CP = (1 / (|P||G|)) Σ_{p∈P} Σ_{g∈G} cos(z_p^g, z_p^{neut,orig})    (12)

A CP value close to the baseline (BASE CLIP) indicates that the method has preserved the core semantic identity of the profession. A significant drop indicates concept corruption.

• Bias Neutralization (BN): This metric measures the alignment between the male and female representations of the same profession. Higher similarity implies that the gender information distinguishing them has been removed (i.e., the embeddings have merged).

BN = (1 / |P|) Σ_{p∈P} cos(z_p^{male}, z_p^{female})    (13)

An ideal debiasing method should maximize BN (pushing it towards 1.0) while maintaining high CP.

D. Text Prompts

In this section, we provide details on the prompt sets used in our experiments: bias prompts (P_bias), diverse prompts (P_div), and augmented query prompts (P_q). All prompts were generated using Google Gemini 2.5 Pro [12].

D.1. Bias Prompts

For each bias attribute we aim to mitigate (e.g., gender, race), we define a corresponding set of bias classes C_a (e.g., 'male', 'female' for gender; the seven ethnicity categories used in the main paper for race). To populate P_bias, we prompt the LLM to generate 20 natural language captions for each class that describe the attribute with syntactic variety but without introducing confounding concepts. For example:
• Gender: "A portrait of a man.", "A close-up of a woman's face.".
• Race: "A photo of a Black person from the side.", "A person with East Asian facial features.".

D.2. Diverse Prompts

To effectively identify bias neurons, it is crucial to measure activations relative to a neutral baseline rather than in absolute terms. This allows us to distinguish neurons specific to a bias concept from those that activate generally. We generate a set of 328 diverse, neutral text prompts (P_div) designed to cover a broad range of semantic concepts with a roughly uniform distribution. These captions span various scenes, activities, objects, animals, and environments to ensure wide coverage of the semantic space. Examples are provided in Tab. 8.

Table 8. Examples of diverse prompts used to establish a baseline activation distribution.

A firefighter in full gear holding a water hose.
A musician playing a guitar on a dimly lit stage.
A group of puppies tumbling and playing together.
A modern skyscraper made of glass and steel.
A golden retriever fetching a stick in a park.
A panoramic skyline of a modern city at night.
A rocky canyon carved by a river.
A close-up of moss growing on a tree trunk.

D.3. Augmented Query Prompts

To improve robustness in both retrieval and zero-shot classification, we generate augmented prompts (P_q) for each query using an LLM.
• Retrieval: For each input query (e.g., "A photo of a criminal"), the LLM generates 10 paraphrases (e.g., "An image of a criminal", "A person who committed a crime") to enhance semantic diversity and reduce sensitivity to specific wording.

Table 9. Measuring gender bias for Stereotype and Hair Color queries on CelebA. Bold: Best in setting (row group) and better than BASE CLIP. Underline: Best in setting, but not improving over BASE CLIP. Gray: Method is not zero-shot. Stereotype cells report KL (↓) / MS (↓); Hair Color cells report KL (↓) / MS (↓) / Prec. (↑); B/16 = ViT-B/16, L/14 = ViT-L/14@336px.

Method        B/16 Stereotype  B/16 Hair Color    L/14 Stereotype  L/14 Hair Color
BASE CLIP     0.314/0.555      0.179/0.409/0.629  0.237/0.536      0.148/0.359/0.622
Bias-agnostic + input-specific prompts:
ROBOSHOT      0.189/0.355      0.144/0.244/0.633  0.195/0.394      0.276/0.429/0.675
SEM_i         0.173/0.443      0.191/0.345/0.678  0.153/0.413      0.237/0.458/0.698
Bias prompts only:
ORTH-PROJ     0.188/0.382      0.189/0.378/0.659  0.099/0.355      0.144/0.373/0.692
PRISM-MINI    0.190/0.384      0.188/0.377/0.658  0.099/0.357      0.143/0.366/0.696
SEM_b         0.240/0.496      0.172/0.395/0.728  0.199/0.481      0.135/0.366/0.698
ZSDEBIAS      0.196/0.441      0.193/0.377/0.522  0.256/0.556      0.118/0.353/0.509
Bias prompts + input-specific prompts:
ORTH-CALI     0.236/0.408      0.148/0.375/0.684  0.054/0.266      0.107/0.312/0.688
SEM_bi        0.223/0.488      0.181/0.399/0.733  0.209/0.490      0.168/0.402/0.733
PRISM         0.143/0.377      0.060/0.186/0.669  0.061/0.245      0.171/0.299/0.659
Bias prompts + input-specific prompts + labeled images:
BENDVLM       0.035/0.238      0.028/0.173/0.656  0.030/0.217      0.028/0.164/0.680
BENDSEM_bi    0.030/0.224      0.029/0.158/0.750  0.042/0.261      0.032/0.187/0.685

• Zero-Shot Classification: For each target class label (e.g., "landbird"), the LLM generates 10 descriptive paraphrases (e.g., "This is a picture of a landbird", "A depiction of a bird that
lives on land").

We compute the median activation across these augmented sets to obtain a stable, noise-resistant representation of the query content (m_q), improving semantic generalization.

E. Extended Retrieval Results

In this section, we present the additional quantitative results for the retrieval task on CelebA, using both Stereotype and Hair Color queries, which were omitted from the main paper due to space constraints. We present these results in Tab. 9.

Fairness vs. Precision Trade-off. In the Bias-agnostic and Bias prompts only settings, our methods (SEM_i and SEM_b) demonstrate a competitive balance. While baselines like ROBOSHOT and ORTH-PROJ sometimes achieve better (lower) fairness scores (KL/MS) on this specific dataset, they often do so at the cost of retrieval quality. In contrast, our methods consistently maintain higher retrieval precision. For instance, on the ViT-B/16 backbone, SEM_i surpasses ROBOSHOT in Hair Color precision (0.679 vs. 0.632), and SEM_b outperforms ORTH-PROJ (0.729 vs. 0.660). This indicates that our method prioritizes preserving the query semantics while still reducing bias, avoiding the "over-correction" seen in prior methods that can degrade downstream task performance.

Modularity Improves Semantic Consistency. This advantage is most notable in the Bias prompts + input-specific prompts + labeled images setting. Here, the combination of our method with the baseline (BENDSEM_bi) provides a distinct advantage in semantic consistency. While BENDSEM_bi achieves fairness scores comparable to BENDVLM alone, it boosts retrieval precision by 9.5% (from 0.656 to 0.751) on the ViT-B/16 backbone. This confirms that integrating our sparse, feature-level modulation helps traditional debiasing methods retain critical semantic information, ensuring that the debiased embeddings remain accurate and useful for downstream tasks.

F.
Extended Ablation Study

We provide the complete ablation results across all datasets and backbones in Tab. 10 (FairFace/UTKFace Retrieval), Tab. 11 (Zero-Shot Classification), and Tab. 12 (CelebA Retrieval). These results strongly support the design choices discussed in Sec. 4.4 of the main paper, confirming that our full methods, SEM_i and SEM_b, offer the most robust performance across diverse tasks.

Analysis of SEM_i. The zero-shot classification results (Tab. 11) reveal that removing our relevance-based attenuation leads to consistent and substantial drops in Worst-Group (WG) accuracy across all datasets and backbones. For instance, on Waterbirds (ViT-B/16), WG accuracy collapses from 0.498 to 0.210, underscoring the critical role of modulating spurious features. Operating directly in the dense CLIP space ("median CLIP") also proves unreliable. While this baseline performs well on the specific Waterbirds task (ViT-B/16), it is highly unstable elsewhere. It suffers significant performance drops on CelebA ZS (ViT-L/14) and consistently fails to mitigate gender bias in retrieval tasks, particularly on ViT-B/16. Specifically, compared to our SAE-based approach, the dense baseline yields substantially worse gender fairness metrics on FairFace, UTKFace, and CelebA Stereotype retrieval for the ViT-B/16 backbone

Table 10. Extended ablation study for retrieval on FairFace and UTKFace. Bold: Best in setting.
Method Variant FairFaceUTKFace ViT-B/16ViT-L/14@336pxViT-B/16ViT-L/14@336px RACEGENDERRACEGENDERRACEGENDERRACEGENDER KL(↓)MS(↓)KL(↓)MS(↓)KL(↓)MS(↓)KL(↓)MS(↓)KL(↓)MS(↓)KL(↓)MS(↓)KL(↓)MS(↓)KL(↓)MS(↓) SEM i Variants (Bias-Agnostic) SEM i (Full)0.1700.6910.0880.2690.1470.6250.1230.3280.0960.4070.0650.2450.0590.4420.0320.185 – M(j) = 10.1390.6590.0780.2430.1240.5730.0930.2880.0750.3970.0880.2780.0610.3680.0380.196 – median CLIP0.1430.6690.1310.3250.1360.6260.0870.2620.0950.4480.1310.3260.0610.4200.0320.188 SEM b Variants (Bias-Aware) SEM b (Full)0.2320.7490.0980.2770.1940.7060.0980.2980.1480.5100.1230.3200.1370.4450.0470.202 – M(j) = (1− S bias ) 2 0.2050.7380.0950.2880.2980.8770.1190.3430.0720.4000.0630.2150.1310.4370.0230.151 – S bias = S gen only0.2010.7540.1050.2940.2110.7260.0920.2850.1330.5010.1290.3310.1580.4610.0450.201 – S bias = S spec only0.2530.7630.1020.2820.1850.7000.1020.3030.1590.5200.1290.3240.1110.4350.0470.200 Table 11. Extended ablation study for zero-shot classification on CelebA and Waterbirds. Bold: Best in setting. Method Variant CelebA (Gender)Waterbirds (Background) ViT-B/16ViT-L/14@336pxViT-B/16ViT-L/14@336px ACC.(↑)WG(↑)GAP(↓)ACC.(↑)WG(↑)GAP(↓)ACC.(↑)WG(↑)GAP(↓)ACC.(↑)WG(↑)GAP(↓) SEM i Variants (Bias-Agnostic) SEM i (Full)0.7360.6110.1250.7910.7450.0460.8010.4980.3030.8320.5230.309 – M (j) = 10.7340.6090.1250.7290.6400.0890.8340.2100.6240.8720.3570.515 – median CLIP0.7280.6010.1270.6870.5580.1290.8400.5630.2770.8790.4000.479 SEM b Variants (Bias-Aware) SEM b (Full)0.7970.7110.0860.8560.8240.0320.8250.4330.3920.8550.6240.231 – M (j) = (1− S bias ) 2 0.8180.7500.0680.8330.8120.0210.7880.0810.7070.8480.4450.403 – S bias = S gen only0.8090.7360.0730.8460.8180.0280.8300.4740.3560.8560.6470.209 – S bias = S spec only0.7890.6960.0930.8530.8220.0310.8220.4700.3520.8490.6620.187 Table 12. Extended ablation study for retrieval on CelebA. Bold: Best in setting. 
Method Variant ViT-B/16ViT-L/14@336px STEREOTYPEHAIR COLORSTEREOTYPEHAIR COLOR KL(↓)MS(↓)KL(↓)MS(↓)PREC.(↑)KL(↓)MS(↓)KL(↓)MS(↓)PREC.(↑) SEM i Variants (Bias-Agnostic) SEM i (Full)0.1730.4430.1910.3440.6790.1530.4130.2360.4580.698 – M (j) = 10.1850.4560.1930.3710.6720.1950.4900.2740.4840.709 – median CLIP0.2500.5030.1810.3820.6890.1550.4110.2990.5000.708 SEM b Variants (Bias-Aware) SEM b (Full)0.2400.4950.1720.3960.7290.1990.4810.1360.3660.699 – M (j) = (1− S bias ) 2 0.1100.3340.1220.3120.6410.1930.4860.1520.3380.545 – S bias = S gen only0.2380.4930.1820.4050.7350.1850.4620.1420.3720.712 – S bias = S spec only0.2480.5010.1810.4050.7260.1990.4800.1350.3550.688 (Tabs. 10 and 12), as well as on CelebA Hair Color retrieval for both backbones (Tab. 12). In contrast, our full SEM i method consistently achieves the best balance of fairness and performance across all benchmarks. Analysis of SEM b . The extended ablations highlight the necessity of our content-boosting term. While removing content boosting can sometimes improve retrieval fairness (notably on ViT-B/16), it leads to severe failures in several in- stances. For example, on Waterbirds (ViT-B/16), its WG ac- curacy plummets to 0.081 (Tab. 11); on FairFace (ViT-L/14), its KL divergence for the race attribute worsens significantly compared to the full method (0.298 vs. 0.194, Tab. 10); and crucially, removing content boosting severely degrades retrieval precision on CelebA across both backbones (drop- ping from 0.729 to 0.641 on ViT-B/16, and 0.699 to 0.545 on ViT-L/14). Furthermore, relying solely on either the general or specific bias score leads to inconsistent results. The “gen- eral only” variant often degrades social bias fairness (e.g., race debiasing on ViT-L/14 or gender debiasing on ViT-B/16, Tab. 10), while the “specific only” variant struggles with se- mantic consistency in some settings (e.g., yielding the worst CelebA accuracy and WG accuracy for ViT-B/16). 
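The ablation variants compared above differ only in how the per-neuron modulation M(j) over the SAE latents is formed. The following is a minimal sketch of those variants; the rule for combining the two bias scores and the additive content-boosting term are illustrative assumptions, not the paper's exact formulas:

```python
def modulation(s_gen, s_spec, s_content, variant="full"):
    """Per-neuron modulation factors for a sparse SAE latent vector.

    s_gen, s_spec: general/specific bias scores per neuron, in [0, 1].
    s_content: query-relevance (content) score per neuron, in [0, 1].
    The combination rule and boost term below are illustrative assumptions.
    """
    if variant == "m1":                      # ablation: M(j) = 1, no modulation
        return [1.0] * len(s_gen)
    out = []
    for g, s, c in zip(s_gen, s_spec, s_content):
        if variant == "gen_only":            # ablation: S_bias = S_gen only
            s_bias = g
        elif variant == "spec_only":         # ablation: S_bias = S_spec only
            s_bias = s
        else:                                # assumed combination of both scores
            s_bias = max(g, s)
        m = (1.0 - s_bias) ** 2              # the "(1 - S_bias)^2" ablation stops here
        if variant != "no_boost":
            m = min(m + c, 1.0)              # assumed content-boosting term
        out.append(m)
    return out
```

Setting `variant="m1"` reproduces the no-modulation baseline, while `"no_boost"` corresponds to keeping only the (1 − S_bias)² attenuation, i.e. the variant whose Waterbirds WG accuracy collapses in Tab. 11.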
Our full SEM_b formulation, which combines these scores, avoids these pitfalls, maintaining robust performance across both classification and retrieval.

G. Extended Results on ResNet Backbones

To demonstrate that our feature-level debiasing framework generalizes beyond Vision Transformer (ViT) architectures, we extend our evaluation to convolutional neural networks. In this section, we benchmark our methods using the ResNet-50 and ResNet-101 CLIP backbones. The experimental setup, datasets, and metrics remain identical to those used for the ViT evaluations in the main paper. The results are presented in Tab. 13 (FairFace and UTKFace retrieval), Tab. 14 (zero-shot classification), and Tab. 15 (CelebA retrieval).

Consistent State-of-the-Art Fairness in Retrieval. The retrieval results in Tab. 13 confirm that our methods maintain their state-of-the-art fairness mitigation on convolutional backbones. In the bias-agnostic setting, SEM_i drastically reduces KL divergence and MaxSkew compared to both the baseline and ROBOSHOT. For example, on FairFace Race (ResNet-50), SEM_i lowers KL divergence to 0.126 (compared to 0.215 for BASE CLIP and ROBOSHOT). In the bias-aware settings, SEM_b and SEM_bi reliably achieve the best fairness metrics across almost all evaluated demographics and datasets, outperforming projection-based baselines like ORTH-PROJ.

SEM Significantly Improves Zero-Shot Robustness. As shown in Tab. 14, SEM exhibits exceptional performance on zero-shot classification with ResNet backbones. Most notably, almost every “best in setting” result achieved by a SEM variant strictly improves over the BASE CLIP baseline, effectively addressing both social biases (CelebA) and spurious correlations (Waterbirds). For instance, on Waterbirds (ResNet-50), SEM_b improves WG accuracy from 0.394 (BASE CLIP) to 0.577 (+18.3 points), substantially outperforming both ROBOSHOT (0.458) and ORTH-PROJ (0.457). Similarly, SEM_bi consistently achieves the lowest fairness Gap on CelebA across both ResNet models while maintaining high overall accuracy.

Maintaining the Fairness vs. Precision Trade-off. Tab. 15 details the performance on CelebA using both Stereotype and Hair Color queries. While SEM_i achieves exceptional fairness scores (lowering Stereotype KL to 0.050 on ResNet-50), it does exhibit a drop in Hair Color precision (0.508). However, our bias-aware variants, SEM_b and SEM_bi, successfully navigate this trade-off. They significantly reduce Stereotype bias metrics compared to BASE CLIP while maintaining highly competitive precision scores (e.g., 0.700 precision for SEM_b on ResNet-50, nearing the baseline precision of 0.735).

Modularity with ResNets. Consistent with our ViT findings, our sparse modulation is highly complementary to existing methods when applied to ResNets. When integrating our SEM_bi embeddings into the BENDVLM framework, the resulting BENDSEM_bi approach establishes new state-of-the-art results in the labeled images setting. On ResNet-101 zero-shot classification (Tab. 14), BENDSEM_bi pushes Waterbirds WG accuracy to 0.638, significantly outperforming BENDVLM alone (0.194). Similarly, it provides the lowest social bias metrics across nearly all retrieval benchmarks (Tabs. 13 and 15).

Table 13. Measuring race and gender bias for Stereotype queries on FairFace and UTKFace (ResNet backbones). Bold: Best in setting (row group) and better than BASE CLIP. Underline: Best in setting, but not improving over BASE CLIP. Gray: Method is not zero-shot.

ResNet-50:

| Method | FairFace Race KL(↓) | FairFace Race MS(↓) | FairFace Gender KL(↓) | FairFace Gender MS(↓) | UTKFace Race KL(↓) | UTKFace Race MS(↓) | UTKFace Gender KL(↓) | UTKFace Gender MS(↓) |
|---|---|---|---|---|---|---|---|---|
| BASE CLIP | 0.215 | 0.735 | 0.170 | 0.351 | 0.127 | 0.477 | 0.153 | 0.340 |
| Bias-agnostic + input-specific prompts: |  |  |  |  |  |  |  |  |
| ROBOSHOT | 0.215 | 0.706 | 0.299 | 0.445 | 0.152 | 0.586 | 0.258 | 0.414 |
| SEM_i | 0.126 | 0.563 | 0.031 | 0.206 | 0.039 | 0.265 | 0.111 | 0.383 |
| Bias prompts only: |  |  |  |  |  |  |  |  |
| ORTH-PROJ | 0.464 | 0.996 | 0.111 | 0.288 | 0.340 | 0.609 | 0.117 | 0.312 |
| PRISM-MINI | 0.454 | 0.983 | 0.113 | 0.291 | 0.336 | 0.608 | 0.117 | 0.311 |
| SEM_b | 0.171 | 0.652 | 0.041 | 0.196 | 0.107 | 0.411 | 0.077 | 0.283 |
| ZSDEBIAS | 0.046 | 0.383 | 0.049 | 0.217 | 0.027 | 0.339 | 0.036 | 0.183 |
| Bias prompts + input-specific prompts: |  |  |  |  |  |  |  |  |
| ORTH-CALI | 0.411 | 0.910 | 0.141 | 0.357 | 0.307 | 0.582 | 0.086 | 0.257 |
| SEM_bi | 0.153 | 0.626 | 0.044 | 0.193 | 0.107 | 0.406 | 0.081 | 0.281 |
| PRISM | 0.157 | 0.632 | 0.069 | 0.245 | 0.134 | 0.523 | 0.088 | 0.265 |
| Bias prompts + input-specific prompts + labeled images: |  |  |  |  |  |  |  |  |
| BENDVLM | 0.150 | 0.581 | 0.006 | 0.081 | 0.101 | 0.444 | 0.008 | 0.093 |
| BENDSEM_bi | 0.067 | 0.455 | 0.005 | 0.079 | 0.042 | 0.371 | 0.009 | 0.102 |

ResNet-101:

| Method | FairFace Race KL(↓) | FairFace Race MS(↓) | FairFace Gender KL(↓) | FairFace Gender MS(↓) | UTKFace Race KL(↓) | UTKFace Race MS(↓) | UTKFace Gender KL(↓) | UTKFace Gender MS(↓) |
|---|---|---|---|---|---|---|---|---|
| BASE CLIP | 0.203 | 0.744 | 0.144 | 0.335 | 0.152 | 0.496 | 0.136 | 0.333 |
| Bias-agnostic + input-specific prompts: |  |  |  |  |  |  |  |  |
| ROBOSHOT | 0.222 | 0.798 | 0.338 | 0.494 | 0.206 | 0.652 | 0.323 | 0.492 |
| SEM_i | 0.111 | 0.566 | 0.037 | 0.201 | 0.110 | 0.401 | 0.041 | 0.214 |
| Bias prompts only: |  |  |  |  |  |  |  |  |
| ORTH-PROJ | 0.322 | 0.843 | 0.213 | 0.409 | 0.322 | 0.583 | 0.163 | 0.360 |
| PRISM-MINI | 0.313 | 0.837 | 0.215 | 0.411 | 0.315 | 0.582 | 0.168 | 0.363 |
| SEM_b | 0.152 | 0.638 | 0.079 | 0.258 | 0.084 | 0.340 | 0.070 | 0.245 |
| ZSDEBIAS | 0.082 | 0.588 | 0.030 | 0.186 | 0.091 | 0.567 | 0.022 | 0.164 |
| Bias prompts + input-specific prompts: |  |  |  |  |  |  |  |  |
| ORTH-CALI | 0.297 | 0.842 | 0.278 | 0.470 | 0.302 | 0.574 | 0.204 | 0.397 |
| SEM_bi | 0.140 | 0.623 | 0.079 | 0.259 | 0.085 | 0.348 | 0.069 | 0.245 |
| PRISM | 0.152 | 0.594 | 0.107 | 0.282 | 0.133 | 0.532 | 0.127 | 0.314 |
| Bias prompts + input-specific prompts + labeled images: |  |  |  |  |  |  |  |  |
| BENDVLM | 0.125 | 0.583 | 0.010 | 0.107 | 0.126 | 0.542 | 0.013 | 0.123 |
| BENDSEM_bi | 0.059 | 0.425 | 0.006 | 0.087 | 0.035 | 0.367 | 0.012 | 0.126 |

Table 14. Measuring zero-shot classification fairness on CelebA and Waterbirds (ResNet backbones). Bold: Best in setting (row group) and better than BASE CLIP. Underline: Best in setting, but not improving over BASE CLIP. Gray: Method is not zero-shot.
ResNet-50:

| Method | CelebA (Gender) Acc.(↑) | CelebA (Gender) WG(↑) | CelebA (Gender) Gap(↓) | Waterbirds (Background) Acc.(↑) | Waterbirds (Background) WG(↑) | Waterbirds (Background) Gap(↓) |
|---|---|---|---|---|---|---|
| BASE CLIP | 0.820 | 0.768 | 0.053 | 0.837 | 0.394 | 0.442 |
| Bias-agnostic + input-specific prompts: |  |  |  |  |  |  |
| ROBOSHOT | 0.841 | 0.806 | 0.035 | 0.762 | 0.458 | 0.304 |
| SEM_i | 0.835 | 0.799 | 0.036 | 0.851 | 0.557 | 0.295 |
| Bias prompts only: |  |  |  |  |  |  |
| ORTH-PROJ | 0.795 | 0.722 | 0.073 | 0.859 | 0.457 | 0.402 |
| PRISM-MINI | 0.795 | 0.722 | 0.073 | 0.859 | 0.457 | 0.402 |
| SEM_b | 0.847 | 0.798 | 0.049 | 0.845 | 0.577 | 0.269 |
| ZSDEBIAS | 0.695 | 0.589 | 0.106 | 0.802 | 0.148 | 0.654 |
| Bias prompts + input-specific prompts: |  |  |  |  |  |  |
| ORTH-CALI | 0.831 | 0.801 | 0.030 | 0.808 | 0.704 | 0.104 |
| SEM_bi | 0.851 | 0.803 | 0.048 | 0.864 | 0.525 | 0.338 |
| PRISM | 0.824 | 0.763 | 0.061 | 0.886 | 0.634 | 0.252 |
| Bias prompts + input-specific prompts + labeled images: |  |  |  |  |  |  |
| BENDVLM | 0.809 | 0.715 | 0.094 | 0.826 | 0.611 | 0.215 |
| BENDSEM_bi | 0.848 | 0.815 | 0.033 | 0.856 | 0.648 | 0.208 |

ResNet-101:

| Method | CelebA (Gender) Acc.(↑) | CelebA (Gender) WG(↑) | CelebA (Gender) Gap(↓) | Waterbirds (Background) Acc.(↑) | Waterbirds (Background) WG(↑) | Waterbirds (Background) Gap(↓) |
|---|---|---|---|---|---|---|
| BASE CLIP | 0.689 | 0.502 | 0.188 | 0.801 | 0.499 | 0.301 |
| Bias-agnostic + input-specific prompts: |  |  |  |  |  |  |
| ROBOSHOT | 0.737 | 0.596 | 0.140 | 0.761 | 0.450 | 0.310 |
| SEM_i | 0.811 | 0.758 | 0.052 | 0.843 | 0.581 | 0.262 |
| Bias prompts only: |  |  |  |  |  |  |
| ORTH-PROJ | 0.675 | 0.486 | 0.189 | 0.858 | 0.401 | 0.457 |
| PRISM-MINI | 0.675 | 0.486 | 0.189 | 0.858 | 0.401 | 0.457 |
| SEM_b | 0.795 | 0.750 | 0.045 | 0.846 | 0.588 | 0.258 |
| ZSDEBIAS | 0.565 | 0.460 | 0.106 | 0.774 | 0.398 | 0.376 |
| Bias prompts + input-specific prompts: |  |  |  |  |  |  |
| ORTH-CALI | 0.679 | 0.505 | 0.174 | 0.823 | 0.554 | 0.269 |
| SEM_bi | 0.791 | 0.741 | 0.049 | 0.871 | 0.541 | 0.330 |
| PRISM | 0.788 | 0.688 | 0.100 | 0.840 | 0.672 | 0.168 |
| Bias prompts + input-specific prompts + labeled images: |  |  |  |  |  |  |
| BENDVLM | 0.702 | 0.490 | 0.212 | 0.812 | 0.194 | 0.618 |
| BENDSEM_bi | 0.784 | 0.699 | 0.086 | 0.881 | 0.638 | 0.243 |

Table 15. Measuring gender bias for Stereotype and Hair Color queries on CelebA (ResNet backbones). Bold: Best in setting (row group) and better than BASE CLIP. Underline: Best in setting, but not improving over BASE CLIP. Gray: Method is not zero-shot.
ResNet-50:

| Method | Stereotype KL(↓) | Stereotype MS(↓) | Hair Color KL(↓) | Hair Color MS(↓) | Hair Color Prec.(↑) |
|---|---|---|---|---|---|
| BASE CLIP | 0.389 | 0.622 | 0.187 | 0.367 | 0.735 |
| Bias-agnostic + input-specific prompts: |  |  |  |  |  |
| ROBOSHOT | 0.190 | 0.337 | 0.364 | 0.550 | 0.762 |
| SEM_i | 0.050 | 0.193 | 0.246 | 0.369 | 0.508 |
| Bias prompts only: |  |  |  |  |  |
| ORTH-PROJ | 0.145 | 0.383 | 0.136 | 0.343 | 0.783 |
| PRISM-MINI | 0.143 | 0.379 | 0.136 | 0.339 | 0.783 |
| SEM_b | 0.111 | 0.311 | 0.263 | 0.453 | 0.700 |
| ZSDEBIAS | 0.058 | 0.237 | 0.129 | 0.291 | 0.436 |
| Bias prompts + input-specific prompts: |  |  |  |  |  |
| ORTH-CALI | 0.069 | 0.239 | 0.116 | 0.305 | 0.774 |
| SEM_bi | 0.110 | 0.307 | 0.283 | 0.468 | 0.700 |
| PRISM | 0.170 | 0.397 | 0.187 | 0.330 | 0.679 |
| Bias prompts + input-specific prompts + labeled images: |  |  |  |  |  |
| BENDVLM | 0.029 | 0.218 | 0.025 | 0.169 | 0.754 |
| BENDSEM_bi | 0.010 | 0.119 | 0.010 | 0.086 | 0.619 |

ResNet-101:

| Method | Stereotype KL(↓) | Stereotype MS(↓) | Hair Color KL(↓) | Hair Color MS(↓) | Hair Color Prec.(↑) |
|---|---|---|---|---|---|
| BASE CLIP | 0.300 | 0.560 | 0.205 | 0.414 | 0.718 |
| Bias-agnostic + input-specific prompts: |  |  |  |  |  |
| ROBOSHOT | 0.294 | 0.454 | 0.274 | 0.459 | 0.723 |
| SEM_i | 0.041 | 0.185 | 0.301 | 0.508 | 0.688 |
| Bias prompts only: |  |  |  |  |  |
| ORTH-PROJ | 0.171 | 0.372 | 0.325 | 0.506 | 0.752 |
| PRISM-MINI | 0.172 | 0.374 | 0.321 | 0.499 | 0.752 |
| SEM_b | 0.195 | 0.448 | 0.232 | 0.447 | 0.767 |
| ZSDEBIAS | 0.016 | 0.119 | 0.046 | 0.187 | 0.317 |
| Bias prompts + input-specific prompts: |  |  |  |  |  |
| ORTH-CALI | 0.191 | 0.352 | 0.313 | 0.502 | 0.751 |
| SEM_bi | 0.165 | 0.395 | 0.240 | 0.444 | 0.766 |
| PRISM | 0.162 | 0.361 | 0.187 | 0.342 | 0.707 |
| Bias prompts + input-specific prompts + labeled images: |  |  |  |  |  |
| BENDVLM | 0.019 | 0.173 | 0.013 | 0.125 | 0.704 |
| BENDSEM_bi | 0.021 | 0.184 | 0.018 | 0.140 | 0.723 |
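For reference, the fairness metrics reported throughout these tables can be computed along the following lines. This is a generic sketch using standard definitions (KL divergence of the top-k retrieved demographic distribution against a uniform target, MaxSkew as the maximum log-ratio to that target, worst-group accuracy with a best-minus-worst gap); the paper's exact implementation may differ in details such as the target distribution or the gap definition:

```python
import math
from collections import Counter

def kl_and_maxskew(retrieved_groups, all_groups):
    """KL divergence and MaxSkew of the demographic distribution of the
    top-k retrieved items against a uniform target distribution."""
    k = len(retrieved_groups)
    counts = Counter(retrieved_groups)
    target = 1.0 / len(all_groups)  # uniform target share per group
    eps = 1e-12                     # avoid log(0) for groups absent from the top-k
    kl, max_skew = 0.0, -math.inf
    for g in all_groups:
        p = counts.get(g, 0) / k
        kl += p * math.log((p + eps) / target)
        max_skew = max(max_skew, math.log((p + eps) / target))
    return kl, max_skew

def worst_group_stats(per_group_accuracy):
    """Worst-group (WG) accuracy and the accuracy gap, taken here as
    best group minus worst group (one common definition)."""
    worst = min(per_group_accuracy.values())
    best = max(per_group_accuracy.values())
    return worst, best - worst
```

For a top-4 retrieval returning three items from one of two groups, `kl_and_maxskew` yields a positive KL against the uniform target and a MaxSkew of log(0.75/0.5); a perfectly balanced top-k drives both toward zero.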