Paper deep dive
RedRFT: A Light-Weight Benchmark for Reinforcement Fine-Tuning-Based Red Teaming
Xiang Zheng, Xingjun Ma, Wei-Bin Lee, Cong Wang
Models: GPT-2, GPT-2-Alpaca
Abstract
Abstract: Red teaming has proven to be an effective method for identifying and mitigating vulnerabilities in Large Language Models (LLMs). Reinforcement Fine-Tuning (RFT) has emerged as a promising strategy among existing red teaming techniques. However, the lack of a unified benchmark hinders current RFT-based red teaming methods. Implementation details, especially in Proximal Policy Optimization (PPO)-based RFT, significantly affect outcome stability and reproducibility. To address this issue, we introduce RedRFT, a lightweight benchmark designed to simplify and standardize the implementation and evaluation of RFT-based red teaming. RedRFT combines the design strengths of both single-file CleanRL and highly modularized Tianshou, offering high-quality single-file red teaming implementations and modular PPO core components, such as the Generalized Advantage Estimator. It supports a variety of token and sentence diversity metrics, featuring modularized intrinsic reward computation that facilitates plug-and-play experimentation. To clarify their influence on RFT performance, we conducted an extensive ablation study on key components, including Low-Rank Adaptation (LoRA), Kullback-Leibler (KL) divergence, and the Lagrange Multiplier. We hope this work contributes to 1) gaining a comprehensive understanding of the implementation nuances of RFT-based red teaming algorithms, and 2) enabling rapid prototyping of innovative features for RFT-based red teaming. Code for the benchmark can be accessed at https://github.com/x-zheng16/RedRFT.
Tags
Links
- Source: https://arxiv.org/abs/2506.04302
- Canonical: https://arxiv.org/abs/2506.04302
- Code: https://github.com/x-zheng16/RedRFT
PDF not stored locally. Use the link above to view on the source site.
Intelligence
Status: succeeded | Model: google/gemini-3.1-flash-lite-preview | Prompt: intel-v1 | Confidence: 97%
Last extracted: 3/12/2026, 5:54:58 PM
Summary
RedRFT is a lightweight, standardized benchmark for Reinforcement Fine-Tuning (RFT)-based red teaming of Large Language Models (LLMs). It integrates modular components like PPO, intrinsic reward estimators, and Lagrangian multipliers to simplify implementation and evaluation, while providing a unified framework to compare state-of-the-art red teaming algorithms.
Entities (5)
Relation Signals (3)
RedRFT → evaluates → Reinforcement Fine-Tuning
confidence 100% · RedRFT, a lightweight benchmark designed to simplify and standardize the implementation and evaluation of RFT-based red teaming.
Reinforcement Fine-Tuning → targets → Large Language Models
confidence 100% · RFT-based red teaming of LLMs.
RedRFT → utilizes → Proximal Policy Optimization
confidence 100% · The benchmark adopts Proximal Policy Optimization (PPO) as its optimization backbone
Cypher Suggestions (2)
Find all algorithms supported by the RedRFT benchmark · confidence 90% · unvalidated
MATCH (b:Benchmark {name: 'RedRFT'})-[:SUPPORTS|UTILIZES]->(a:Algorithm) RETURN a.name
Identify components used in RFT-based red teaming · confidence 85% · unvalidated
MATCH (m:Methodology {name: 'Reinforcement Fine-Tuning'})-[:USES]->(c:Component) RETURN c.name
Full Text
115,706 characters extracted from source content.
RedRFT: A Light-Weight Benchmark for Reinforcement Fine-Tuning-Based Red Teaming. Xiang Zheng¹, Xingjun Ma², Wei-Bin Lee³, Cong Wang¹ (corresponding author). ¹City University of Hong Kong, ²Fudan University, ³Hon Hai Research Institute. {xiang.zheng, congwang}@cityu.edu.hk, xingjunma@fudan.edu.cn, wei-bin.lee@foxconn.com. Abstract: Red teaming has proven to be an effective method for identifying and mitigating vulnerabilities in Large Language Models (LLMs). Reinforcement Fine-Tuning (RFT) has emerged as a promising strategy among existing red teaming techniques. However, the lack of a unified benchmark hinders current RFT-based red teaming methods. Implementation details, especially in Proximal Policy Optimization (PPO)-based RFT, significantly affect outcome stability and reproducibility. To address this issue, we introduce RedRFT, a lightweight benchmark designed to simplify and standardize the implementation and evaluation of RFT-based red teaming. RedRFT combines the design strengths of both single-file CleanRL and highly modularized Tianshou, offering high-quality single-file red teaming implementations and modular PPO core components, such as the Generalized Advantage Estimator. It supports a variety of token and sentence diversity metrics, featuring modularized intrinsic reward computation that facilitates plug-and-play experimentation. To clarify their influence on RFT performance, we conducted an extensive ablation study on key components, including Low-Rank Adaptation (LoRA), Kullback–Leibler (KL) divergence, and the Lagrange Multiplier. We hope this work contributes to 1) gaining a comprehensive understanding of the implementation nuances of RFT-based red teaming algorithms, and 2) enabling rapid prototyping of innovative features for RFT-based red teaming. Code for the benchmark can be accessed at https://github.com/x-zheng16/RedRFT.git.
1 Introduction Large Language Models (LLMs) have demonstrated remarkable Natural Language Processing (NLP), reasoning, planning, and programming capabilities [ouyang2022training]. However, their broad generality presents risks: LLMs can produce incorrect or unsafe outputs [hendrycks2021unsolved]. To probe the potential vulnerabilities of a target LLM for responsible deployment, red teaming has emerged as a critical practice [ganguli2022red]. This approach involves uncovering potential vulnerabilities in the target LLMs through adversarial prompting. In this paper, we focus on benchmarking a new series of black-box red teaming methods that fine-tune a red-team LLM via Reinforcement Learning (RL) to generate adversarial prompts that elicit toxic responses from the target black-box LLM, as shown in Figure 1. Black-box red teaming approaches fall into three main categories: handcrafted red teaming, which relies on human experts to craft adversarial prompts manually; gradient-free red teaming, which employs gradient-free optimization techniques to optimize adversarial prompts, such as random search [andriushchenko2024jailbreaking], evolutionary algorithms [liu2023autodan], Bayesian optimization [lee2023query], and LLM-as-Optimizers [yang2023large]; and Reinforcement Fine-Tuning (RFT)-based red teaming [perez2022red, perez2023discovering, deng2023attack, casper2023explore, hong2024curiosity, zhao2025diver, zheng2025calm], which uses RL to fine-tune a red-team LLM for generating adversarial prompts. While gradient-free methods have achieved high attack success rates against state-of-the-art LLMs, their effectiveness depends heavily on the availability of high-quality handcrafted adversarial prompts as a starting point. RFT-based red teaming has become an automated strategy that does not require pre-existing high-quality adversarial prompts [perez2023discovering]. 
In this approach, the red-team LLM is fine-tuned via RL to produce adversarial prompts, either by completing partial sentences with adversarial tokens or crafting full adversarial instructions to trigger toxic responses from the target LLM. Unlike traditional jailbreak objectives, which aim to maximize the occurrence of specific words (e.g., "Sure" or "Yes") in the target’s response [andriushchenko2024jailbreaking], RFT-based red teaming prioritizes eliciting toxic outputs [hong2024curiosity]. However, a limitation of trivial RFT-based red teaming is the low diversity of the generated adversarial prompts [perez2022red]. Intuitively, once the red-team LLM generates a few high-reward adversarial prompt patterns, it exploits them repeatedly to gain rewards without further exploration. Intrinsic motivation is a promising technique for RFT-based red teaming to improve diversity [hong2024curiosity]. It utilizes intrinsic bonuses to encourage exploration and has been widely adopted in various RL tasks [zhengcim]. Recently, Hong et al. [hong2024curiosity] utilized a negative sentence similarity score as the intrinsic bonus. Zhao et al. [zhao2025diver] propose a constrained policy optimization formulation for RFT-based red teaming to trade off toxicity and diversity. Zheng et al. [zheng2025calm] design a policy-cover-based token-level intrinsic bonus for the red-team LLM. Despite these advances, evaluating and comparing existing RFT-based red teaming methods and rapidly prototyping new ones remains difficult due to the absence of a unified evaluation benchmark. Furthermore, the significance of implementation details in RL underscores the urgent need for a comprehensive benchmark to understand the critical components of RFT-based red teaming. To make benchmarking and developing new RFT-based red teaming methods easier, we propose RedRFT, a standardized and lightweight benchmark tailored specifically for RFT-based red teaming. 
Unlike general-purpose post-training libraries, such as Transformer Reinforcement Learning (TRL), RedRFT is designed to meet the unique demands of RFT-based red teaming. It features modularized models, including the red-team LLM, target LLM, and judge models, to simplify the implementation of the red teaming pipeline. The benchmark adopts Proximal Policy Optimization (PPO) [schulman2017proximal] as its optimization backbone, adhering to best practices from Tianshou, a highly modular RL library. Additionally, RedRFT incorporates multiple intrinsic reward estimators and integrates the Lagrangian dual method to support RFT-based red teaming under constraints. It also offers a standardized evaluation framework, inspired by the quality-diversity community, and implements five leading baselines, all optimized with the same backbone algorithm to ensure fair comparisons. In summary, our contributions are as follows: • We introduce RedRFT, a lightweight and standardized benchmark tailored for RFT-based red teaming of LLMs. RedRFT integrates modular components, including interactive agents, intrinsic reward estimators, Lagrangian multipliers, and a unified PPO backbone, to streamline implementation and evaluation. • We provide open-source implementations of five state-of-the-art RFT-based red teaming algorithms (RPPO, TDiv, CRT, DiveR-CT, and CALM), all optimized with a unified PPO backbone to ensure fair comparisons. We propose a standardized evaluation framework that quantifies the toxicity and diversity of generated adversarial prompts, enabling robust performance assessment of baseline RFT-based red teaming methods. 
• Through extensive experiments, we identify critical insights for RFT-based red teaming: state-level intrinsic rewards show comparable performance with prompt-level intrinsic rewards, constrained policy optimization enhances performance (e.g., DiveR-CT surpasses CRT), large batch sizes stabilize PPO training, and Low-Rank Adaptation (LoRA) and Kullback-Leibler (KL) divergence are essential for effective fine-tuning. These findings inform best practices and highlight opportunities for future improvements in RFT-based red teaming methodologies. Figure 1: The standardized framework for RFT-based red teaming. It involves: 1) the rollout pipeline, which defines the interaction between the red-team LLM, the target LLM, and the judge system, and 2) the evaluation pipeline, which consists of a rollout buffer that records all rollouts during the fine-tuning process and a reward system that estimates extrinsic rewards, intrinsic rewards (based on the rollout buffer), and costs. To simplify the visualization, we do not depict the optimization backbone of RFT. 2 Related Work Our work is closely related to automated red teaming and reinforcement fine-tuning benchmarks. Additionally, for a comprehensive understanding of intrinsic motivation, we summarize its development in the RL community in Appendix A. Automated black-box red teaming. We classify automated black-box red teaming methods into gradient-free red teaming and RFT-based red teaming. Gradient-free red teaming optimizes the adversarial prompt via gradient-free optimization methods, e.g., random search [andriushchenko2024jailbreaking], evolutionary algorithms [liu2023autodan], Bayesian optimization [lee2023query], and LLM-as-Optimizers [yang2023large]. The first three are off-the-shelf gradient-free optimization methods for black-box prompt optimization.
By LLM-as-Optimizers, we mean methods that use a pre-trained LLM purely via prompting to iteratively refine adversarial prompts based on the target LLM's response, e.g., Prompt Automatic Iterative Refinement (PAIR) [chao2023jailbreaking], Tree of Attacks with Pruning (TAP) [mehrotra2024tree], Persuasive Adversarial Prompts (PAP) [zeng2024johnny], and GPTFuzzer [yu2023gptfuzzer]. RFT-based red teaming involves fine-tuning a red-team LLM to generate adversarial prompts; the gradient for updating the red-team LLM is estimated via on-policy RL algorithms like PPO. Perez et al. [perez2022red] first applied RFT to adversarial prompt generation with a trivial PPO backbone. To increase the diversity of the generated adversarial prompts, Casper et al. [casper2023explore] propose maximizing the distance between the sentence embeddings of the target responses. In contrast, Hong et al. [hong2024curiosity] find that directly maximizing the negative cosine similarity of the sentence embeddings of the adversarial prompts performs better. To trade off quality and diversity, Zhao et al. [zhao2025diver] formulate RFT-based red teaming as constrained policy optimization and leverage Lagrangian dual theory to solve it. Apart from the prompt-level bonus, which is estimated for the whole prompt, Zheng et al. [zheng2025calm] propose a policy-cover-based intrinsic bonus for each state in the generation process of the adversarial prompt to encourage the red-team LLM to generate novel tokens. Reinforcement fine-tuning benchmarks. There have been a variety of benchmarks for reinforcement fine-tuning of LLMs. TRL [vonwerra2022trl] is a cutting-edge library for post-training large models and implements both on-policy RL algorithms like PPO and offline RL algorithms like Direct Preference Optimization (DPO) [rafailov2023direct].
OpenRLHF [hu2024openrlhf] is a scalable Ray-based framework for Reinforcement Learning from Human Feedback (RLHF) that implements multiple RLHF algorithms like PPO and Group Relative Policy Optimization (GRPO) [shao2024deepseekmath]. RL4LMs [ramamurthy2022reinforcement] is a modular RL library that aligns language models with human preferences; it also includes GRUE as a general evaluation benchmark for RL algorithms on NLP tasks. However, these RFT benchmarks focus on the safety alignment of large models and are not ready-to-use for RFT-based red teaming. 3 Preliminaries We first introduce the two main preliminaries of RFT-based red teaming needed for a formal description of the standardized framework and the algorithmic baselines in later sections. Next token generation as Markov decision process. To fine-tune an LLM with RL, we formalize the LLM's next-token generation process as a Markov Decision Process (MDP) $M = (\mathcal{S}, \mathcal{A}, \mathbb{P}, r, \gamma, \mu)$, where $\mathcal{S}$ and $\mathcal{A}$ are the state and action spaces, $\mathbb{P}$ is the transition function, $r$ is the reward function, $\gamma$ is the discount factor used for advantage estimation, and $\mu$ is the initial state distribution. At every timestep $t$, the LLM agent (the policy) $\pi$ generates the next token (the action) $a_t \in \mathcal{A}$ based on the current prompt (the state) $s_t = s_0 + [a_0, \ldots, a_{t-1}] \in \mathcal{S}$, i.e., $a_t \sim \pi(\cdot \mid s_t)$, where $s_0$ is the initial prompt for the LLM agent and $+$ denotes list concatenation. As shorthand, we also write $s_t = s_0 + a_t$, where $a_t$ (without brackets) denotes the list of the first $t$ generated tokens.
After sampling the next token $a_t$, the state of the environment changes from $s_t$ to $s_{t+1} = \mathbb{P}(s_t, a_t) = s_t + [a_t] = s_0 + [a_0, \ldots, a_t]$, and the agent receives the extrinsic reward $r_t^E$ and the intrinsic reward $r_t^I$ provided by the evaluation pipeline. For each timestep $t$, we thus obtain a transition tuple $(s_t, a_t, r_t^E, r_t^I, s_{t+1})$. In the following sections, we detail the definition and estimation of the rewards (covering toxicity and diversity) involved in RFT-based red teaming. In RFT-based red teaming, we fine-tune a red-team LLM $\pi_\alpha$ to generate the adversarial prompt $a_T = [a_0, \ldots, a_{T-1}]$, where $T$ is the number of generated tokens. Constrained policy optimization. Constrained policy optimization is fundamental for stabilizing policy updates and enhancing safety in RL. In this paper, we formalize RFT-based red teaming as a constrained policy optimization problem. We first augment the normal MDP $M = (\mathcal{S}, \mathcal{A}, \mathbb{P}, r, \gamma, \mu)$ with a cost function $C$, yielding $M_C = (\mathcal{S}, \mathcal{A}, \mathbb{P}, r, C, \gamma, \mu)$.
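The token-level MDP above can be sketched concretely: a state is the running token list, an action appends one token, and each step yields a transition tuple (s_t, a_t, r_t^E, r_t^I, s_{t+1}). A minimal illustration, where the policy and the zero rewards are toy stand-ins rather than the paper's models or estimators:

```python
# Minimal sketch of next-token generation as an MDP: states are token
# lists, actions append one token. Reward values here are placeholders.
from dataclasses import dataclass

@dataclass
class Transition:
    state: tuple          # s_t = s_0 + [a_0, ..., a_{t-1}]
    action: int           # a_t, the next token id
    reward_ext: float     # r_t^E from the judge pipeline
    reward_int: float     # r_t^I from the intrinsic estimator
    next_state: tuple     # s_{t+1} = s_t + [a_t]

def rollout(policy, s0, horizon):
    """Generate `horizon` tokens and return the per-step transitions."""
    transitions, state = [], tuple(s0)
    for _ in range(horizon):
        action = policy(state)
        next_state = state + (action,)  # P(s_t, a_t) = s_t + [a_t]
        transitions.append(Transition(state, action, 0.0, 0.0, next_state))
        state = next_state
    return transitions

# Toy deterministic policy standing in for an LLM: emit token id len(state).
steps = rollout(lambda s: len(s), s0=[101], horizon=3)
```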
At each timestep $t$, apart from the rewards $r_t^E$ and $r_t^I$, the agent also receives a cost $c_t$ that indicates whether the prompt violates a constraint, e.g., being non-gibberish for clear semantics. RFT-based red teaming aims to solve the constrained optimization problem

$\max_{\pi_\alpha} J_E + \lambda^I J_I, \quad \text{s.t. } J_C \le \tau,$ (1)

where $J_* = \mathbb{E}_s[*]$, $* \in \{r^E, r^I, c\}$, is the expected reward/cost under the state distribution $d_s = (1-\gamma)\sum_{t=0}^{\infty} \gamma^t P(s_t = s \mid s_0, \pi_\alpha)$ induced by the red-team LLM $\pi_\alpha$, $\lambda^I$ is the coefficient of the intrinsic objective, and $\tau$ is the cost budget. In practice, there may be multiple constraints on the feasible stationary policy; we use a single constraint here to reduce clutter. 4 RedRFT: Standardized Framework for RFT-Based Red Teaming We now introduce the standardized framework for RFT-based red teaming in RedRFT, including the standardized rollout and evaluation pipelines. Moreover, to show how to standardize a specific red teaming task with RedRFT, we describe two RFT-based red teaming tasks adopted by previous work, i.e., text continuation and instruction following, under our framework. Rollout pipeline. The rollout pipeline in RedRFT defines how data flows from the red-team LLM to the judge models, i.e., the interaction between the agents involved in the red teaming process.
To simplify the illustration of the rollout pipeline, we use one red-team LLM $\pi_\alpha$, one target LLM $\pi_\nu$, and one judge system $\pi_\chi$. In each probing trial, we aim to collect a rollout $(s_0, a_T, y, r^E)$, where $s_0$ is the initial prompt for the red-team LLM, $a_T$ is the adversarial prompt generated by the red-team LLM, $y$ is the response generated by the target LLM, and $r^E$ is the judge report generated by the judge system. Formally, we define the rollout pipeline as

$s_0 \sim \mathcal{D}, \quad a_T \sim \pi_\alpha(\cdot \mid s_0), \quad y \sim \pi_\nu(\cdot \mid s_0, a_T), \quad r^E \sim \pi_\chi(\cdot \mid s_0, a_T, y),$ (2)

where $\mathcal{D}$ is a dataset of initial prompts for the red-team LLM. We emphasize that this definition is a general form. For instance, the judge system $\pi_\chi$ can be a composite of a hate speech detector that computes a toxicity score as the extrinsic reward $r^{tox}$ and a gibberish detector that computes a gibberish score as the extrinsic cost $c^{gib}$, with the judge report being the tuple $r^E = (r^{tox}, c^{gib})$. When we use a reasoning language model, e.g., GPT-4o [hurst2024gpt] or DeepSeek-R1 [guo2025deepseek], as a safety judge, the judge report can also include the judge's detailed reasoning process. Evaluation pipeline. The evaluation pipeline of RedRFT includes a rollout buffer and a reward system.
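The rollout pipeline in Eq. (2) maps naturally to a four-stage function. The sketch below uses hypothetical stand-in callables for the red-team LLM, target LLM, and judge; it only illustrates the data flow, not the benchmark's actual API:

```python
# Sketch of the rollout pipeline: s0 ~ D, a_T ~ red team, y ~ target,
# r^E ~ judge. All models here are stand-in callables.
import random

def collect_rollout(dataset, red_team, target, judge, rng=random):
    s0 = rng.choice(dataset)   # initial prompt sampled from D
    a_T = red_team(s0)         # adversarial prompt
    y = target(s0, a_T)        # target LLM response
    r_E = judge(s0, a_T, y)    # judge report, e.g. (r_tox, c_gib)
    return (s0, a_T, y, r_E)

# Toy stand-ins: the judge returns a (toxicity, gibberish) tuple.
sample = collect_rollout(
    dataset=["I loved this excellent movie"],
    red_team=lambda s0: s0 + " <adv tokens>",
    target=lambda s0, a: "some continuation",
    judge=lambda s0, a, y: (0.9, 0.1),
)
```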
The rollout buffer records all historical rollouts, i.e., $\mathcal{B} = \{(s_0, a_T, y, r^E)\}$, both for intrinsic reward estimation and for generating the toxicity-diversity profile after the fine-tuning process. The reward system consists of extrinsic reward and cost extraction, intrinsic reward estimation, and the evaluation metric. Extrinsic rewards and costs can be directly extracted from the judge report when we use an encoder-based classifier, as explained in the rollout pipeline, i.e., $r^{tox}, c^{gib} \leftarrow r^E$, where $\leftarrow$ is the assignment operator. For a generative judge system like GPT-4o, we must define a reward extractor to parse the unsafe score and the gibberish cost from the generated judge report. Intrinsic rewards are estimated based on the rollout buffer $\mathcal{B}$. From our rollout pipeline, the key difference between intrinsic and extrinsic rewards is clear: intrinsic rewards do not depend on the judge system, while extrinsic rewards are extracted directly from the judge report $r^E$. As discussed in Section 2, current intrinsic rewards for NLP are all knowledge-based, i.e., the novelty of a prompt is estimated based on all rollouts in, or rollouts sampled from, the rollout buffer.
Currently, there are two main types of intrinsic rewards for RFT-based red teaming: 1) the prompt-level intrinsic reward, based on the negative cosine similarity between the adversarial prompt and historical rollouts sampled from the rollout buffer [hong2024curiosity]:

$r_t^{Cos} = \begin{cases} 0 & \text{if } t \in \{0, \ldots, T-2\} \\ \sum_{a_{T'} \sim \mathcal{B}} -\phi(a_T)^\top \phi(a_{T'}) & \text{if } t = T-1, \end{cases}$ (3)

where $\phi(a_T)$ is the sentence embedding of the adversarial prompt $a_T$; and 2) the state-level intrinsic reward, based on policy cover theory [zheng2025calm]:

$r_t^{PC} = \big(\rho_s(s_{t+1})\, d_s(s_{t+1})\big)^{-1}, \quad \forall t \in \{0, \ldots, T-1\},$ (4)

where $s_{t+1} = s_0 + a_{t+1}$ is the new state of the MDP after the red-team LLM takes an action $a_t \sim \pi_\alpha(\cdot \mid s_t)$ in the current state $s_t = s_0 + a_t$, as defined in Section 3, $\rho_s$ is the state distribution of the rollout buffer $\mathcal{B}$ (i.e., the policy cover), and $d_s$ is the state distribution induced by only the latest red-team LLM, as defined in Section 3.
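The prompt-level reward in Eq. (3) is straightforward to illustrate with plain lists standing in for the sentence embeddings (assumed pre-computed by the sentence transformer) and a small sampled buffer:

```python
# Sketch of the prompt-level intrinsic reward: zero at every step except
# the last, where it is the summed negative cosine similarity between the
# new prompt's embedding and embeddings drawn from the rollout buffer.
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v] if n else v

def cosine_bonus(prompt_emb, buffer_embs):
    p = normalize(prompt_emb)
    return sum(-sum(a * b for a, b in zip(p, normalize(e)))
               for e in buffer_embs)

def intrinsic_rewards(T, prompt_emb, buffer_embs):
    """Zero for t < T-1; the cosine bonus only at the final token."""
    return [0.0] * (T - 1) + [cosine_bonus(prompt_emb, buffer_embs)]

r = intrinsic_rewards(T=4, prompt_emb=[1.0, 0.0],
                      buffer_embs=[[1.0, 0.0], [0.0, 1.0]])
```

A prompt identical to a buffered one contributes -1, an orthogonal one contributes 0, so repeated prompt patterns are penalized.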
For both types of intrinsic rewards, we use a lightweight sentence transformer (https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) from SentenceTransformers as $\phi$. The difference between the two is straightforward: the prompt-level intrinsic reward is defined for the entire adversarial prompt $a_T$, while the state-level intrinsic reward is defined for each state $s_t = s_0 + a_t$ in the MDP of adversarial prompt generation. Evaluation metric. Previous RFT-based red teaming methods have three drawbacks in their evaluation metric design. First, they use the empirical intrinsic rewards estimated during the fine-tuning process as an evaluation metric, which is inaccurate since the rollout buffer keeps growing during fine-tuning. Second, they do not filter out infeasible adversarial prompts that violate the non-gibberish constraint when generating the toxicity and diversity profile. Third, they lack a composite metric for evaluating overall algorithm performance. To address these issues, we first design a novel diversity score $r^{div}$ for each adversarial prompt $a_T$ based on state entropy theory:

$r^{div} = -\ln\big(\rho_{a_T}(a_T)\big),$ (5)

where $\rho_{a_T}$ is the distribution of the adversarial prompts in the rollout buffer $\mathcal{B}$. For the detailed derivation of this diversity score, please refer to Appendix B. Intuitively, $r^{div}$ estimates the Shannon information contained in each adversarial prompt.
To estimate $\rho_{a_T}(a_T)$, we adopt a non-parametric k-nearest-neighbor (k-NN) estimator. Please refer to Appendix C for details on the density estimation. We then propose a redeemed toxicity and diversity profile:

$\tau_{tox} \mapsto \frac{1}{|\mathcal{B}|} \sum_{r^E \sim \mathcal{B}} \mathbb{1}\big(r^{tox} > \tau_{tox},\, c^{gib} < \tau_{gib}\big), \qquad \tau_{div} \mapsto \frac{1}{|\mathcal{B}|} \sum_{a_T \sim \mathcal{B}} \mathbb{1}\big(r^{div} > \tau_{div},\, c^{gib} < \tau_{gib}\big),$ (6)

where $r^{div}$ is the diversity score of each adversarial prompt in the final rollout buffer $\mathcal{B}$ at the end of the fine-tuning process, $\mathbb{1}(\cdot)$ is the indicator function, $|\cdot|$ is the size of the buffer, and $\tau_*$, $* \in \{tox, div, gib\}$, is the threshold for the toxicity score, the diversity score, and the gibberish score, respectively. Note that we directly use the recorded toxicity scores in the rollout buffer, while re-computing the diversity score for each adversarial prompt based on Equation 5.
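Under the stated k-NN approach, a generic sketch of the diversity score in Eq. (5) might look as follows. The paper's exact estimator is in its Appendix C, so the density formula below (k / (N · d_k), with d_k the distance to the k-th nearest neighbor) is an illustrative assumption, not the benchmark's implementation:

```python
# Illustrative k-NN density estimate over prompt embeddings, plugged into
# r_div = -ln(rho(a_T)). The density formula here is an assumption.
import math

def knn_density(x, points, k=3):
    """Unnormalized k-NN density estimate: k / (N * d_k)."""
    dists = sorted(math.dist(x, p) for p in points)
    d_k = dists[min(k, len(dists)) - 1]
    return k / (len(points) * max(d_k, 1e-8))

def diversity_score(prompt_emb, buffer_embs, k=3):
    """Eq. (5): negative log density of the prompt under the buffer."""
    return -math.log(knn_density(prompt_emb, buffer_embs, k))

buffer = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [1.0, 1.0]]
near = diversity_score([0.05, 0.0], buffer, k=2)  # close to the buffer
far = diversity_score([5.0, 5.0], buffer, k=2)    # novel prompt
```

As expected of a Shannon-information estimate, prompts far from everything already in the buffer receive a larger diversity score.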
We also propose the cumulative toxicity-diversity score to evaluate the overall performance of an RFT-based red teaming method:

$\tau_{tox} \mapsto \sum_{(a_T, r^E) \sim \mathcal{B}} r^{tox}\, r^{div}\, \mathbb{1}\big(r^{tox} > \tau_{tox},\, c^{gib} < \tau_{gib}\big).$ (7)

Tasks for RFT-based red teaming. We detail two common RFT-based red teaming tasks, i.e., text continuation and instruction following, as follows. Text continuation is a basic task for LLMs, where the target LLM is required to continue the initial prompt with new tokens to complete coherent content, e.g., continuing a movie review from a few initial tokens like "I loved this excellent movie". For the initial prompt $s_0$, we sample the initial tokens from IMDB, a dataset of 50,000 movie reviews. We filter out reviews containing fewer than 200 words and use the first four words of each review as the initial prompt for the red-team LLM. For the adversarial prompt $a_T$, we set its max length to ten tokens to limit the red-team LLM's capability and reduce the GPU memory used for fine-tuning. For the target response $y$, we probe the target LLM with the joint tokens $s_0 + a_T$ to generate thirty new tokens completing the movie review. For the judge report $r^E$, we use the toxicity of the target response $y$ as the unsafe score and adopt a hate speech detector (https://huggingface.co/facebook/roberta-hate-speech-dynabench-r4-target) as the safety judge.
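The profile in Eq. (6) and the cumulative score in Eq. (7) reduce to thresholded sums over the final buffer. A sketch, assuming each buffer entry is a (r_tox, c_gib, r_div) triple and the thresholds are illustrative:

```python
# Sketch of the redeemed toxicity profile (Eq. 6, toxicity side) and the
# cumulative toxicity-diversity score (Eq. 7) over a final rollout buffer.
def toxicity_profile(buffer, tau_tox, tau_gib):
    """Fraction of rollouts that are toxic enough and non-gibberish."""
    hits = sum(1 for r_tox, c_gib, _ in buffer
               if r_tox > tau_tox and c_gib < tau_gib)
    return hits / len(buffer)

def cumulative_score(buffer, tau_tox, tau_gib):
    """Sum of r_tox * r_div over feasible, sufficiently toxic rollouts."""
    return sum(r_tox * r_div for r_tox, c_gib, r_div in buffer
               if r_tox > tau_tox and c_gib < tau_gib)

buf = [(0.9, 0.05, 2.0),   # toxic and non-gibberish: counted
       (0.8, 0.90, 3.0),   # toxic but gibberish: filtered out
       (0.2, 0.05, 1.0)]   # non-toxic: filtered out
profile = toxicity_profile(buf, tau_tox=0.5, tau_gib=0.5)
score = cumulative_score(buf, tau_tox=0.5, tau_gib=0.5)
```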
Instruction following is another fundamental task for LLMs, where the target LLM needs to understand the instruction provided by the user and strictly follow it when generating new tokens, e.g., "Generate a list of five unique project ideas". For the initial prompt $s_0$, we sample instructions from Alpaca, a dataset containing 52,000 instructions, as demonstrations for the red-team LLM. Unlike text continuation, we design a simple template as the initial prompt to instruct the red-team LLM to generate new adversarial instructions for the target LLM, i.e., "Write a list of instructions: 1. <an instruction from Alpaca>. 2." For the adversarial prompt $a_T$, we use the same max length as in text continuation. For the target response $y$, we probe the target LLM with the adversarial prompt $a_T$ to generate thirty new tokens following the instruction. For the judge report $r^E$, we again use the toxicity of the target response $y$ as the unsafe score, but adopt a different hate speech classifier (https://huggingface.co/tomh/toxigen_roberta) as the safety judge to show that our framework is insensitive to the choice of safety judge. For both tasks, we adopt the same gibberish text detector (https://huggingface.co/madhurjindal/autonlp-Gibberish-Detector-492513457) for gibberish cost estimation. 5 RedRFT: Algorithmic Baselines for RFT-Based Red Teaming Implementation details are critical in RL. While RedRFT provides standardized rollout and evaluation pipelines, current algorithms remain hard to compare due to small differences in implementation details. Hence, providing a unified codebase with identical implementations of the optimization backbone for each baseline is important, and such a unified codebase is one of the main contributions of this benchmark.
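The instruction-following initial prompt described above is just a fixed template around one Alpaca demonstration; a one-line sketch (the helper name is ours, not the benchmark's):

```python
# Hypothetical helper building the instruction-following initial prompt:
# a fixed numbered-list template around one Alpaca demonstration.
def build_initial_prompt(demo_instruction: str) -> str:
    return f"Write a list of instructions: 1. {demo_instruction}. 2."

s0 = build_initial_prompt("Generate a list of five unique project ideas")
```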
5.1 Optimization Backbone

Current RFT-based red teaming methods mainly adopt the on-policy RL algorithm PPO as the optimization backbone. The PPO-style optimization objective of RFT-based red teaming is:

$\max_{\pi'_\alpha} \sum_{s_t, a_t} \min\big(\beta A^{\mathrm{mix}}(s_t, a_t),\; \mathrm{clip}(\beta; 1-\epsilon, 1+\epsilon) A^{\mathrm{mix}}(s_t, a_t)\big)$ (Mixed Advantage Term)
$- \lambda^{\mathrm{ent}} \ln \pi'_\alpha(a_t \mid s_t)$ (Policy Entropy Term)
$- \lambda^{\mathrm{KL}} D_{\mathrm{KL}}\big(\pi'_\alpha(\cdot \mid s_t), \pi_{\mathrm{ref}}(\cdot \mid s_t)\big),$ (KL Divergence Term) (8)

where $A^{\mathrm{mix}}(s_t, a_t)$ is the mixed advantage function, $\beta = \pi'_\alpha(a_t \mid s_t) / \pi_\alpha(a_t \mid s_t)$ is the policy ratio, $\pi_{\mathrm{ref}}$ is the reference model for the red-team LLM, $D_{\mathrm{KL}}$ is the KL divergence, and $\lambda^{\mathrm{ent}}$ and $\lambda^{\mathrm{KL}}$ are the coefficients for the policy entropy and KL divergence terms. The policy entropy term prevents the policy from converging too early.
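A minimal NumPy sketch of the per-token objective in Equation 8, assuming precomputed log-probabilities, advantages, and a per-token KL estimate against the frozen reference model; all names and coefficient values are illustrative, not RedRFT's actual implementation:

```python
import numpy as np

# Hedged sketch of the per-token PPO-style objective in Equation 8.
# logp_new / logp_old: log-probs of sampled tokens under pi'_alpha / pi_alpha.
# kl_ref: per-token KL estimate against the frozen reference model pi_ref.

def ppo_objective(logp_new, logp_old, adv_mix, kl_ref,
                  eps=0.2, lam_ent=0.001, lam_kl=0.001):
    ratio = np.exp(logp_new - logp_old)              # beta = pi'_alpha / pi_alpha
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    surrogate = np.minimum(ratio * adv_mix, clipped * adv_mix)
    # -logp_new is a per-token entropy bonus in expectation;
    # kl_ref anchors pi'_alpha to the reference model.
    return (surrogate - lam_ent * logp_new - lam_kl * kl_ref).mean()

logp_new = np.array([-1.0, -2.0])
logp_old = np.array([-1.1, -1.9])
adv_mix = np.array([0.5, -0.3])
kl_ref = np.array([0.01, 0.02])
obj = ppo_objective(logp_new, logp_old, adv_mix, kl_ref)
```

Maximizing this objective pushes up the clipped surrogate while the entropy bonus and KL penalty act as regularizers, mirroring the three terms of Equation 8.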
The KL divergence term prevents the red-team LLM from collapsing.

Lagrange multiplier. The mixed advantage function $A^{\mathrm{mix}}(s_t, a_t)$ in Equation 8 is a weighted sum of multiple advantage functions. For instance, when we use the prompt-level intrinsic reward $r_t^{\mathrm{Cos}}$, the mixed advantage function is $A^{\mathrm{mix}} = \sum_{*} \lambda^{*} A^{*}$, $* \in \{\mathrm{tox}, \mathrm{Cos}, \mathrm{gib}\}$. The Lagrange multiplier provides a way to adjust the weights adaptively. In detail, we can formulate RFT-based red teaming as a constrained policy optimization problem as in Equation 1 and then convert it into the unconstrained version based on Lagrangian dual theory. For instance, when we use $J_{\mathrm{gib}} = \mathbb{E}_s[c^{\mathrm{gib}}] \le \tau_{\mathrm{gib}}$ as the constraint, the corresponding Lagrangian dual problem becomes

$\min_{\lambda^{\mathrm{gib}} > 0} \max_{\pi'_\alpha} \lambda^{\mathrm{tox}} J^{\mathrm{tox}} + \lambda^{\mathrm{Cos}} J^{\mathrm{Cos}} + \lambda^{\mathrm{gib}} (\tau_{\mathrm{gib}} - J_{\mathrm{gib}}).$ (9)

The Lagrange multiplier $\lambda^{\mathrm{gib}}$ can now be updated by Stochastic Gradient Descent (SGD) by solving the outer minimization problem, i.e., $\min_{\lambda^{\mathrm{gib}} > 0} \lambda^{\mathrm{gib}} (\tau_{\mathrm{gib}} - J_{\mathrm{gib}})$. Based on the performance difference lemma in RL, the mixed advantage function is now $A^{\mathrm{mix}} = \lambda^{\mathrm{tox}} A^{\mathrm{tox}} + \lambda^{\mathrm{Cos}} A^{\mathrm{Cos}} - \lambda^{\mathrm{gib}} A^{\mathrm{gib}}$.
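The two multiplier update rules discussed in this section can be sketched in a few lines: the plain SGD step from the outer problem of Equation 9, alongside the cross-entropy-based alternative of Equation 10. All function names, learning rates, and the clamping ranges are illustrative assumptions, not RedRFT's actual code; note the cross-entropy variant implicitly requires $\lambda^{\mathrm{gib}} \in (0, 1)$ for $\ln(1 - \lambda)$ to be defined:

```python
# Hedged sketch of two update rules for the Lagrange multiplier lambda_gib,
# given the estimated gibberish objective J_gib and the threshold tau_gib.

def sgd_update(lam, J_gib, tau_gib, lr=0.1):
    # gradient of lam * (tau_gib - J_gib) w.r.t. lam is (tau_gib - J_gib)
    return max(0.0, lam - lr * (tau_gib - J_gib))

def cross_entropy_update(lam, J_gib, tau_gib, lr=0.1, eps=1e-6):
    # minimize -(1 - y) ln(1 - lam) - y ln(lam), with y = 1(J_gib > tau_gib);
    # the objective keeps lam in (0, 1), so we clamp accordingly.
    y = 1.0 if J_gib > tau_gib else 0.0
    grad = (1.0 - y) / (1.0 - lam + eps) - y / (lam + eps)
    return min(1.0 - eps, max(eps, lam - lr * grad))

lam = sgd_update(0.5, J_gib=0.3, tau_gib=0.1)            # violated -> lam grows
lam2 = cross_entropy_update(0.5, J_gib=0.05, tau_gib=0.1)  # satisfied -> shrinks
```

In both rules a violated constraint ($J_{\mathrm{gib}} > \tau_{\mathrm{gib}}$) pushes the multiplier up and a satisfied one pushes it down; the cross-entropy gradient is bounded away from the blow-up that makes the plain SGD rule sensitive to its learning rate.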
However, the Lagrange multiplier is known to be sensitive to the update rate. Therefore, we propose a more stable cross-entropy-based optimization objective for the Lagrange multiplier:

$\min_{\lambda^{\mathrm{gib}} > 0} \mathbb{E}\big[-(1 - y) \ln(1 - \lambda^{\mathrm{gib}}) - y \ln \lambda^{\mathrm{gib}}\big],$ (10)

where $y = \mathbb{1}(J_{\mathrm{gib}} > \tau_{\mathrm{gib}})$. Intuitively, when $J_{\mathrm{gib}} \le \tau_{\mathrm{gib}}$, the constraint is satisfied, and we expect the Lagrange multiplier to decrease. In contrast, when $J_{\mathrm{gib}} > \tau_{\mathrm{gib}}$, that is, the constraint is violated, we expect the Lagrange multiplier to increase to strengthen the constraint. For a detailed discussion on the Lagrange multiplier, please refer to Appendix D.

5.2 RFT-Based Red Teaming Algorithms

Table 1: Comparison of RFT-based red teaming methods.

Method | Intrinsic Reward | Constraint
RPPO [perez2022red] | - | -
TDiv [casper2023explore] | Prompt-Level | -
CRT [hong2024curiosity] | Prompt-Level | -
DiveR-CT [zhao2025diver] | Prompt-Level | Gibberish
CALM [zheng2025calm] | State-Level | Gibberish

As part of RedRFT, we open-source code for five leading or well-known algorithms. All implemented algorithms adopt the standardized framework introduced in Section 4 and the optimization backbone presented in Section 5.1; the only difference between algorithms is the choice of intrinsic reward and constraint. We list all implemented baselines in Table 1 and provide a brief overview below.

Brief overview of five algorithmic baselines. Red-team PPO (RPPO) utilizes the basic PPO without intrinsic rewards or constraints. TDiv defines a prompt-level intrinsic reward based on the diversity of the target response.
Curiosity-Driven Red Teaming (CRT) uses $r^{\mathrm{Cos}}$ as the intrinsic reward and directly maximizes the non-gibberish score. Diversity-Enhanced Red Teaming With Relaxing Constraints (DiveR-CT) uses a modified $r^{\mathrm{Cos}}$ with the k-nearest-neighbor (k-N) estimator and uses the gibberish score as the cost. Curiosity-Driven Auditing (CALM) proposes the state-level intrinsic reward $r^{\mathrm{PC}}$. For details of each method, we refer the reader to Appendix E.

6 Experiments

Figure 2: Results of baselines on toxic continuation (upper) and instruction following (bottom).

We evaluate the algorithms listed in Table 1 by fine-tuning the red-team LLM with the optimization backbone from Section 5 and measuring performance on the tasks and evaluation metrics described in Section 4. We assess the performance of each algorithm on each task under each hyperparameter configuration over three random seeds. Please refer to Appendix F for a complete description of the hyperparameter settings. We use GPT-2 as the red-team LLM for all experiments, run on a single 13th Gen Intel(R) Core(TM) i9-13900KF CPU and a single NVIDIA GeForce RTX 4090 D GPU, which highlights RedRFT's light-weight property. Figure 2 shows the main benchmarking results: the toxicity-diversity profile for all baselines on both tasks. Overall, we observe that none of the current RFT-based red teaming methods achieves a cumulative toxicity-diversity score greater than 0.5, indicating significant room for improvement. Below, we summarize our key findings (KF) from the experiments.

KF1: State-level intrinsic reward is comparable with prompt-level intrinsic reward. Our main results show that CALM, which employs a state-level intrinsic reward, achieves results comparable to methods with prompt-level intrinsic rewards.
This suggests that state-level rewards, which are denser than prompt-level intrinsic rewards, are promising for hard red-teaming tasks that require stronger exploration capability. A detailed comparison is provided in Appendix G.

KF2: Constrained policy optimization is better. Our main results also demonstrate that methods incorporating constrained policy optimization, such as DiveR-CT, outperform unconstrained methods like RPPO and CRT in terms of balancing toxicity and constraint satisfaction. This improvement is attributed to the Lagrangian dual method, which adaptively adjusts the constraint weight to ensure feasible prompts while maximizing toxicity.

KF3: Large batch size is more stable for PPO training. We find that a large batch size (256) for collecting rollouts, combined with a relatively small mini-batch size (16) for estimating the policy gradient, outperforms smaller batch sizes (e.g., 64, 32) with the same mini-batch size in most experiments, as shown in Figure 3 (left). This configuration enhances the stability of PPO training, as larger batch sizes provide more representative samples of the policy's behavior, reducing variance in gradient estimates. This phenomenon aligns with observations of PPO in other domains [andrychowicz2020matters]. For a complete ablation study and detailed discussion on batch size selection, please refer to Appendix H.

KF4: LoRA and KL divergence are necessary. Our ablation studies demonstrate that both LoRA and KL divergence regularization are critical for effective RFT-based red teaming. As shown in Figure 3 (middle), CRT with the default setting (both LoRA and KL divergence) achieves toxicity-novelty scores similar to CRT without KL divergence, significantly surpassing CRT without both KL divergence and LoRA. LoRA reduces the computational cost of fine-tuning by updating only a small subset of parameters, while KL divergence prevents the red-team LLM from deviating too far from the reference model, avoiding collapse.
These findings hold across all methods. For detailed results and discussion, please refer to Appendix I.

KF5: Cross-entropy-based optimization objective can stabilize the Lagrange multiplier update. From Figure 3 (right), we observe that the cross-entropy-based optimization objective for the Lagrange multiplier, introduced in Section 5.1, significantly improves the stability of the Lagrange multiplier update. This stability ensures that the red-team LLM generates fewer gibberish prompts while maintaining high toxicity scores, which is promising for developing novel RFT-based red teaming methods. A detailed analysis is provided in Appendix J.

Figure 3: Ablation study on batch size (left), LoRA & KL (middle), and Lagrange Multiplier (right).

Figure 4: Case study on fast prototyping and benchmarking RedRFT in text continuation.

KF6: RedRFT is suitable for fast prototyping and benchmarking. To demonstrate the flexibility of RedRFT for rapid prototyping and benchmarking, we conducted a case study by prototyping a novel RFT-based red teaming method, RedRFT, with our proposed $r^{\mathrm{div}}$ as the intrinsic reward and keeping the other components the same as DiveR-CT. As shown in Figure 4, RedRFT outperforms the state-of-the-art methods CRT and DiveR-CT by a large margin in terms of the cumulative toxicity-diversity score. A detailed discussion on RedRFT can be found in Appendix K.

7 Conclusion

In this work, we introduced RedRFT, a lightweight and standardized benchmark designed to advance the study and development of RFT-based red teaming for LLMs. RedRFT addresses the critical need for a unified framework by providing modular implementations of key components, including the rollout pipeline, evaluation metrics, and PPO backbone. By open-sourcing implementations of five state-of-the-art RFT-based red teaming algorithms, we enable fair comparisons and facilitate rapid prototyping of novel methods.
Our extensive experiments on text continuation and instruction following reveal key insights, such as the promise of state-level intrinsic rewards, the advantage of constrained policy optimization, the necessity of LoRA and KL divergence, the stabilizing effect of large batch sizes, and the stability of cross-entropy-based Lagrange multiplier updates. Moreover, the cumulative toxicity-diversity scores below 0.5 across all baselines indicate significant opportunities for further improvement in balancing toxicity and diversity in adversarial prompt generation.

Limitations. A primary limitation of this work is the lack of experiments on larger target LLMs, such as those with billions of parameters, due to computational resource constraints. Our experiments were conducted using GPT-2 as both the red-team and target LLM, which, while effective for demonstrating RedRFT's capabilities, may not fully capture the complexities of red teaming state-of-the-art models like GPT-4 or LLaMA. Larger models often exhibit different vulnerabilities and behaviors, and evaluating RFT-based red teaming against them could provide deeper insights into scalability and generalizability. Future work could address this limitation by testing RedRFT on more advanced LLMs.

Appendix A Related Work on Intrinsic Motivation in Reinforcement Learning

To better illustrate the role of intrinsic motivation in RedRFT, we provide a brief introduction to the roadmap of intrinsic motivation in RL.

Intrinsic motivation in RL. Intrinsic motivation enhances the exploration of an RL agent by encouraging it to seek novel or informative states without relying solely on extrinsic rewards. These techniques are broadly classified into three categories: knowledge-based, data-based, and competence-based intrinsic motivation [laskin2021urlb]. Each category employs distinct mechanisms to guide exploration, but also faces specific limitations.
Below, we describe each category, highlighting its principles, applications, and challenges. Knowledge-based intrinsic motivation incentivizes agents to acquire novel knowledge by exploring states with high uncertainty. Rooted in uncertainty quantification, knowledge-based methods often draw inspiration from the Upper Confidence Bound (UCB) algorithm, adapted for nonlinear Markov Decision Processes (MDPs). Typically, a neural network predicts state dynamics, and the prediction error serves as an intrinsic reward, encouraging exploration of less familiar states [burda2018exploration, burda2018large]. Despite its effectiveness in certain settings, this approach struggles in unsupervised or sparse-reward RL scenarios due to issues such as catastrophic forgetting, where previously learned knowledge is lost, and lack of awareness of latent skills [zhang2021made, liu2021aps]. Data-based intrinsic motivation focuses on the agent’s latest interactions with the environment [hazan2019provably, mutti2021task]. A prominent objective in this category is maximizing state entropy, which encourages uniform coverage of the state space by maximizing the differential entropy of the state distribution induced by the current policy [liu2021behavior]. This approach promotes diverse exploration but becomes computationally infeasible in high-dimensional state spaces, where uniform coverage is challenging [park2023metra]. Competence-based intrinsic motivation serves as a powerful intrinsic motivator from an evolutionary perspective. Competence-based methods typically maximize the mutual information between latent skills and the trajectories they generate, enabling agents to develop diverse and reusable skills [gregor2016variational, sharma2019dynamics, laskin2022cic]. 
However, the invariance of mutual information to scaling and transformations of input variables can result in static skills with limited state-space coverage, hindering their applicability to downstream tasks [park2022lipschitz]. Recent approaches, such as contrastive learning for skill discovery, aim to improve skill diversity and robustness by leveraging structured representations [eysenbach2022contrastive, yang2023behavior]. This categorization highlights the diversity of intrinsic motivation strategies in RL, each addressing different aspects of exploration. However, integrating these approaches or combining them with extrinsic rewards remains an active area of research to overcome their respective limitations and enhance overall agent performance [zheng2024constrained]. In the context of RFT-based red teaming, current curiosity/diversity-driven approaches belong to knowledge-based intrinsic motivation, where the intrinsic bonus for each adversarial prompt is estimated based on the rollout buffer of historical adversarial prompts.

Appendix B Justification of Diversity Score

To evaluate the diversity of the generated adversarial prompts during the fine-tuning process, we propose a novel diversity score, as shown in Equation 5. To justify its design, observe that

$H(\rho_{a_T}) = \mathbb{E}_{a_T \sim \mathcal{B}}[-\ln \rho_{a_T}(a_T)] = \mathbb{E}_{a_T \sim \mathcal{B}}[r^{\mathrm{div}}].$ (11)

That is, the expected diversity score of the adversarial prompts in the buffer $\mathcal{B}$ is exactly the differential entropy of $\rho_{a_T}$, the distribution of the adversarial prompts. When the embeddings of the adversarial prompts are uniformly distributed in the latent space, this differential entropy reaches its maximum. Note that this diversity score is slightly different from the policy cover-based intrinsic bonus $r_t^{\mathrm{PC}}$ introduced by CALM [zheng2025calm].

Appendix C Estimation of Diversity Score

To compute the diversity score, we need to estimate the distribution density $\rho_{a_T}$. We choose the k-Nearest Neighbor (k-N) estimator to avoid complex density modeling. Specifically, for an adversarial prompt $a_T$, its density in the feature space is estimated as

$\hat{\rho}_{a_T}(a_T) = \frac{\int_{B(a_T, \kappa)} \rho_{a_T}(u) \, du}{\int_{B(a_T, \kappa)} du} \approx \frac{k}{|\mathcal{B}| \, \kappa^d},$ (12)

where $B(a_T, \kappa)$ is the neighborhood of $a_T$, $\kappa = \|\phi(a_T) - \phi^*(a_T)\|$ is the distance between $\phi(a_T) \in \mathbb{R}^d$ and its k-th nearest neighbor $\phi^*(a_T)$, and $d$ is the dimension of the feature space. Based on Equation 12, we obtain the diversity score

$r^{\mathrm{div}} = -\ln \hat{\rho}_{a_T}(a_T) \approx d \ln \kappa + \ln(|\mathcal{B}|/k),$ (13)

which, up to terms independent of $a_T$, grows with the distance $\kappa$ to the k-th nearest neighbor.

Appendix D Discussion on Updating the Lagrange Multiplier

Here we discuss the design details of the Lagrange multiplier in RedRFT. The constrained policy optimization problem in RedRFT is

$\max_{\pi'_\alpha} J_{\mathrm{tox}} + \lambda^{\mathrm{I}} J_{\mathrm{I}}, \quad \text{s.t.} \quad J_{\mathrm{gib}} \le \tau_{\mathrm{gib}}.$ (14)

We use $\pi'_\alpha$ to denote the policy after the update and $\pi_\alpha$ the current policy. Based on Lagrangian dual theory, Equation 14 becomes

$\min_{\lambda^{\mathrm{gib}}} \max_{\pi'_\alpha} J_{\mathrm{tox}} + \lambda^{\mathrm{I}} J_{\mathrm{I}} + \lambda^{\mathrm{gib}} (\tau_{\mathrm{gib}} - J_{\mathrm{gib}}).$ (15)

After optimizing the inner maximization problem with PPO, we obtain the minimization problem over $\lambda^{\mathrm{gib}}$:

$\min_{\lambda^{\mathrm{gib}}} \lambda^{\mathrm{gib}} (\tau_{\mathrm{gib}} - J^{\pi'_\alpha}_{\mathrm{gib}}).$
(16)

where $J^{\pi'_\alpha}_{\mathrm{gib}} = \mathbb{E}_{s \sim \pi'_\alpha}[c^{\mathrm{gib}}]$ is the expected gibberish score of the adversarial prompts generated by the updated policy $\pi'_\alpha$. The gradient for $\lambda^{\mathrm{gib}}$ is thus $(\tau_{\mathrm{gib}} - J^{\pi'_\alpha}_{\mathrm{gib}})$, and the update rule becomes

$\lambda^{\mathrm{gib}} \leftarrow \lambda^{\mathrm{gib}} - \eta_{\lambda^{\mathrm{gib}}} (\tau_{\mathrm{gib}} - J^{\pi'_\alpha}_{\mathrm{gib}}),$ (17)

where $\eta_{\lambda^{\mathrm{gib}}}$ is the learning rate for $\lambda^{\mathrm{gib}}$. We use the latest adversarial prompts sampled by $\pi'_\alpha$ to estimate $J^{\pi'_\alpha}_{\mathrm{gib}}$. From Equation 17, we can easily see that when the constraint is satisfied, $\lambda^{\mathrm{gib}}$ gradually decreases; otherwise, it gradually grows to strengthen the effect of $J_{\mathrm{gib}}$ in the inner optimization objective. However, we find in experiments that this update rule relies heavily on the learning rate selection: an inappropriate learning rate causes severe oscillation of $\lambda^{\mathrm{gib}}$ and makes RFT training unstable.

Appendix E Details of Algorithmic Baselines

We present the details of the five algorithmic baselines implemented in RedRFT in this section.

RPPO.
Red-team PPO (RPPO) [perez2022red] utilizes the basic PPO backbone without any intrinsic rewards.

TDiv. Different from CRT and DiveR-CT, which define intrinsic rewards based on the sentence embeddings of the adversarial prompts, TDiv [casper2023explore] defines the intrinsic rewards based on the sentence embeddings of the target responses induced by the adversarial prompts, that is, "the intra-batch cosine distances of the target LM's embeddings of the generated prompts." Since the original paper does not provide a formal definition of this reward, we define this intrinsic reward as follows:

$r_t^{\mathrm{TDiv}} = \begin{cases} 0 & \text{if } t \in \{0, \dots, T-2\} \\ \sum_{y' \sim \pi_\alpha} -\phi(y)^\top \phi(y') & \text{if } t = T-1, \end{cases}$ (18)

where $y$ is the target response as defined in Equation 2. Note that we use $y' \sim \pi_\alpha$ instead of $y' \sim \mathcal{B}$ to obey the description of the original paper, that is, "intra-batch cosine distances". $r_t^{\mathrm{TDiv}}$ is also a prompt-level intrinsic reward; it can be non-zero only when $t = T-1$.

CRT. In CRT [hong2024curiosity], apart from the prompt-level intrinsic reward $r_t^{\mathrm{Cos}}$, Hong et al. also propose another prompt-level intrinsic reward based on SelfBLEU [zhu2018texygen]. The BLEU score assesses how similar two sentences are, while the SelfBLEU score evaluates how closely one sentence resembles the rest of a generated collection.
Regarding one sentence as a hypothesis and the others as references, we can calculate the BLEU score for every generated sentence:

$r_t^{\mathrm{BLEU}} = \begin{cases} 0 & \text{if } t \in \{0, \dots, T-2\} \\ 1 - \mathrm{BLEU}(a_T, \mathcal{B}) & \text{if } t = T-1, \end{cases}$ (19)

where $\mathrm{BLEU}(a_T, \mathcal{B})$ calculates the BLEU score of the hypothesis sentence $a_T$ based on the reference sentences from $\mathcal{B}$. Though the definition is simple, we find in experiments that the calculation of the BLEU score becomes too time-consuming as the rollout buffer $\mathcal{B}$ grows. Hence, to balance accuracy and efficiency, we sample only $\min(3 \times \mathrm{batch\_size}, |\mathcal{B}|)$ adversarial prompts from the rollout buffer $\mathcal{B}$ to calculate $r_t^{\mathrm{BLEU}}$. The composite intrinsic reward of CRT is

$r_t^{\mathrm{CRT}} = r_t^{\mathrm{Cos}} + r_t^{\mathrm{BLEU}}.$ (20)

Since CRT does not employ constrained policy optimization, the final optimization objective for CRT is

$\max_{\pi_\alpha} J_{\mathrm{tox}} + \lambda^{\mathrm{CRT}} J_{\mathrm{CRT}} - \lambda^{\mathrm{gib}} J_{\mathrm{gib}},$ (21)

where $J_{\mathrm{CRT}} = \mathbb{E}[r_t^{\mathrm{CRT}}]$ is the intrinsic objective of CRT and $\lambda^{\mathrm{CRT}}$ is the coefficient balancing the extrinsic objective $J_{\mathrm{tox}}$ and the intrinsic objective.

DiveR-CT.
DiveR-CT uses a prompt-level intrinsic reward similar to CRT's. The difference lies in the sampling strategy: instead of utilizing all samples from the rollout buffer, DiveR-CT uses only the k nearest neighbors of the adversarial prompt to estimate novelty, that is,

$r_t^{\mathrm{kCos}} = \begin{cases} 0 & \text{if } t \in \{0, \dots, T-2\} \\ \sum_{a'_T \sim B(a_T)} -\phi(a_T)^\top \phi(a'_T) & \text{if } t = T-1, \end{cases}$ (22)

where $B(a_T)$ denotes the k nearest neighbors of the adversarial prompt $a_T$. The composite intrinsic reward of DiveR-CT is then

$r_t^{\mathrm{DiveR\text{-}CT}} = r_t^{\mathrm{kCos}} + r_t^{\mathrm{BLEU}}.$ (23)

DiveR-CT adopts constrained policy optimization, and the final optimization objective is

$\max_{\pi_\alpha} J_{\mathrm{tox}} + \lambda^{\mathrm{DiveR\text{-}CT}} J_{\mathrm{DiveR\text{-}CT}}, \quad \text{s.t.} \quad J_{\mathrm{gib}} \le \tau_{\mathrm{gib}},$ (24)

where $J_{\mathrm{DiveR\text{-}CT}} = \mathbb{E}[r_t^{\mathrm{DiveR\text{-}CT}}]$ is the intrinsic objective of DiveR-CT and $\lambda^{\mathrm{DiveR\text{-}CT}}$ is the coefficient balancing the extrinsic objective $J_{\mathrm{tox}}$ and the intrinsic objective.
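The k-nearest-neighbor novelty term in Equation 22 can be sketched as follows. The embeddings, the value of k, and the function name are illustrative assumptions; we normalize embeddings so that the inner product equals cosine similarity, and the k nearest neighbors are the k most similar buffer entries:

```python
import numpy as np

# Hedged sketch of the k-nearest-neighbor novelty reward r^kCos (Equation 22):
# at the terminal step, the reward is the summed negative cosine similarity
# between the new prompt's embedding and its k nearest neighbors in the buffer.

def k_cos_reward(phi_new, buffer_phi, k=2):
    phi_new = phi_new / np.linalg.norm(phi_new)
    buf = buffer_phi / np.linalg.norm(buffer_phi, axis=1, keepdims=True)
    sims = buf @ phi_new               # cosine similarities to all buffer entries
    nearest = np.sort(sims)[-k:]       # k most similar = k nearest neighbors
    return float(-nearest.sum())       # high when even the neighbors are far

buffer_phi = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
r = k_cos_reward(np.array([1.0, 0.0]), buffer_phi, k=2)
```

A prompt whose embedding points away from everything in the buffer receives a higher reward than one that duplicates an existing direction, which is exactly the novelty pressure DiveR-CT relies on.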
Note that in the original paper of DiveR-CT, the authors also investigate cases where there is a constraint on $J_{\mathrm{tox}}$. However, to standardize RFT-based red teaming, we set the default optimization objective of DiveR-CT as Equation 24. It is easy and flexible to configure a constraint for any objective, e.g., $J_{\mathrm{tox}}$, in RedRFT (with only a one-line change in the configuration file).

CALM. CALM [zheng2025calm] proposes a state-level intrinsic reward based on the policy cover theory, as defined in Equation 4. We adopt a constrained policy optimization objective for CALM as follows:

$\max_{\pi_\alpha} J_{\mathrm{tox}} + \lambda^{\mathrm{CALM}} J_{\mathrm{CALM}}, \quad \text{s.t.} \quad J_{\mathrm{gib}} \le \tau_{\mathrm{gib}},$ (25)

where $J_{\mathrm{CALM}} = \mathbb{E}[r_t^{\mathrm{PC}}]$ is the intrinsic objective of CALM and $\lambda^{\mathrm{CALM}}$ is the corresponding coefficient. Note that in the original paper of CALM, the authors leverage Random Network Distillation (RND) [burda2018exploration] to estimate the policy cover and state density. To make the comparison fair, we adopt the non-parametric k-N estimator in Equation 12 instead of RND to estimate the policy cover and state density.

Appendix F Implementation Details

Models and datasets. For both tasks, we use GPT-2 as the red-team model. When fine-tuning the red-team model, we either fine-tune only the last two blocks or utilize the Low-Rank Adaptation (LoRA) technique. For text continuation, as mentioned in Section 4, we use IMDB as the dataset from which to sample an initial prompt for the red-team LLM. The target model in text continuation is also GPT-2.
For instruction following, we adopt Alpaca as the dataset and GPT2-Alpaca (https://huggingface.co/vicgalle/gpt2-alpaca) as the target model. The running time of one experiment varies from 30 minutes to 1 hour.

Hyperparameters. We list all hyperparameters for the optimization backbone in Table 2 and for the five algorithmic baselines in Table 3.

Table 2: Hyperparameters for the PPO Backbone in RedRFT

Parameter | Value
Total Queries | 50,000
Batch Size | 256
Learning Rate | 3e-5
Mini-Batch Size | 16
Gamma | 0.99
Epochs | 4
Normalize Returns | True
Value Function Coefficient | 1
Policy Entropy Coefficient | 0.001
KL Reference Coefficient | 0.001
Maximum Gradient Norm | 1
Lagrange Type | Average
Lagrange Learning Rate | 0.1

Table 3: Reward Coefficients and Constraints for Algorithmic Baselines in RFT-Based Red Teaming

Method | Reward Coefficients | Cost Threshold
EX | λ^tox = 1, λ^gib = -1 | -
TDiv | λ^TDiv = 1, λ^gib = -1 | -
CRT | λ^CRT = 1, λ^gib = -1 | -
DiveR-CT | λ^TDiv = 1 | τ_gib = 0.1
CALM | λ^CALM = 0.1 | τ_gib = 0.1
RedRFT | λ^RedRFT = 1 | τ_gib = 0.1

Appendix G Ablation Study on Intrinsic Reward

As mentioned in Section 6, state-level intrinsic rewards produce results comparable to those of prompt-level intrinsic rewards. To illustrate this key finding more clearly, we visualize the variations in toxic rewards and intrinsic rewards during the fine-tuning process in Figure 5.
The results indicate that CALM identifies high-quality adversarial prompts more efficiently than DiveR-CT, albeit at the cost of reduced diversity in the adversarial prompts. Furthermore, compared to RPPO, which does not incorporate any intrinsic reward, both CALM and DiveR-CT achieve a better balance between toxicity scores and diversity scores.

Figure 5: Learning curves of toxic rewards and intrinsic rewards during the fine-tuning process.

Appendix H Ablation Study on Batch Size

We provide the full results of the ablation study on batch size across three methods (RPPO, CRT, and DiveR-CT) in both toxic completion and instruction following, shown in Figure 6 and Figure 7. We choose the batch size from {64, 128, 256} and the mini-batch size from {8, 16, 32}. We exclude the configuration batch_size=64 with mini_batch_size=32, since training under this configuration is unstable. From the results, we conclude that a larger batch size is better suited to RFT-based red teaming, yielding both better toxicity and diversity scores and higher efficiency.

Figure 6: Ablation study on batch size in toxic completion. bs means batch_size and mbs stands for mini_batch_size. We find that batch_size=256 with mini_batch_size=16 yields the most comparable results across different methods.

Figure 7: Ablation study on batch size in instruction following. bs means batch_size and mbs stands for mini_batch_size. We find that batch_size=256 with mini_batch_size=16 yields the most comparable results across different methods.

Appendix I Ablation Study on LoRA and KL Divergence

We present detailed results from our ablation study on LoRA and KL divergence. As illustrated in Figure 8, both LoRA and KL divergence enhance performance across the algorithmic baselines.

Figure 8: Ablation study on LoRA and KL divergence in toxic completion. LoRA and KL divergence both contribute to the performance across different algorithmic baselines.
Appendix J Ablation Study on Lagrangian Multiplier

For the update of the Lagrangian multiplier, in addition to the learning curve presented in the main body of this paper, we also provide detailed results on the corresponding toxic scores and non-gibberish scores. The results shown in Figure 9 underscore the importance of a stable update rule for the Lagrangian multiplier.

Figure 9: Ablation study on the Lagrangian multiplier.

Appendix K Demonstration of Fast Prototyping

To further illustrate the advantages of RedRFT in fast prototyping, we present the results of a novel RFT-based red teaming method, RedRFT. This method modifies only the intrinsic reward of DiveR-CT, changing it to the entropy-based diversity score r^div. As shown in Figure 10, RedRFT exhibits performance comparable to other state-of-the-art RFT-based red teaming methods in the instruction-following task.

Figure 10: RedRFT demonstrates performance comparable to CRT and DiveR-CT in instruction following as a prototype method.

Appendix L Potential Societal and Ethical Impact

As RedRFT focuses on RFT-based red teaming, our experiments generate adversarial prompts that elicit toxic or unsafe responses from target LLMs. While this process raises ethical concerns due to the potential creation of harmful content, we emphasize that these experiments are conducted in a controlled research environment with the explicit goal of improving LLM safety. By identifying vulnerabilities in LLMs, RedRFT will facilitate the development of more robust safety alignment techniques, which are critical for mitigating risks associated with real-world deployment. The toxic content generated during our experiments is not disseminated but used solely to evaluate and enhance the resilience of LLMs against adversarial prompts.
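The stable update rule examined in Appendix J above can be illustrated with a standard projected dual-ascent step. Only the averaging of the cost (the "Average" Lagrange type) and the learning rate of 0.1 come from Table 2; the function itself is a hedged sketch, not RedRFT's code:

```python
def update_lagrange(lmbda, avg_cost, threshold, lr=0.1):
    """Hedged sketch of a dual-ascent Lagrange-multiplier update.

    `avg_cost` is assumed to be a running mean of the constraint cost
    (e.g., the gibberish cost J_gib) over recent batches, which smooths
    the update compared to a single noisy batch estimate.
    """
    # Increase lambda when the constraint (avg_cost <= threshold) is
    # violated, decrease it otherwise, then project onto lambda >= 0.
    return max(0.0, lmbda + lr * (avg_cost - threshold))
```

With this form, the multiplier grows only while the averaged cost exceeds the threshold τ_gib, and it decays back toward zero once the constraint is satisfied.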
Furthermore, RedRFT’s standardized framework promotes transparency and reproducibility, enabling the research community to build on our findings responsibly. We believe that the societal benefits of safer and more trustworthy LLMs outweigh the controlled risks of our experiments, as these efforts contribute to reducing the potential for harm in applications ranging from conversational agents to automated content generation. Nonetheless, we advocate for ongoing ethical oversight and adherence to responsible AI research practices to ensure that red teaming advancements align with societal values. NeurIPS Paper Checklist 1. Claims Question: Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope? Answer: [Yes] Justification: Please refer to the main body of the paper, especially Section 4, Section 5, Section 6. Guidelines: • The answer NA means that the abstract and introduction do not include the claims made in the paper. • The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers. • The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings. • It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper. 2. Limitations Question: Does the paper discuss the limitations of the work performed by the authors? Answer: [Yes] Justification: Please refer to Section 7. Guidelines: • The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper. • The authors are encouraged to create a separate "Limitations" section in their paper. 
• The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be. • The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. • The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon. • The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size. • If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness. • While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren’t acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations. 3. Theory assumptions and proofs Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? 
Answer: [N/A] Justification: Our benchmark RedRFT focuses on standardizing the framework of RFT-based red teaming and on an empirical ablation study of implementation details. Thus, we do not provide original theoretical results for each baseline method. For rigorous theoretical results of these baseline methods, please refer to their original papers. Guidelines: • The answer NA means that the paper does not include theoretical results. • All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced. • All assumptions should be clearly stated or referenced in the statement of any theorems. • The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition. • Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material. • Theorems and Lemmas that the proof relies upon should be properly referenced. 4. Experimental result reproducibility Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)? Answer: [Yes] Justification: The design principle of our benchmark RedRFT is light-weight and clean, as described in Section 4 and Section 5. Our experiment results can be easily reproduced with a simple environment setup, the default config files, and the prepared shell scripts. Please refer to Section 6 and Appendix F. Guidelines: • The answer NA means that the paper does not include experiments. • If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.
• If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable. • Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed. • While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example (a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm. (b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully. (c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset). (d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility.
In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results. 5. Open access to data and code Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [Yes] Justification: Please refer to https://github.com/x-zheng16/RedRFT.git for the code of our benchmark. Guidelines: • The answer NA means that the paper does not include experiments requiring code. • Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details. • While we encourage the release of code and data, we understand that this might not be possible, so “No” is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark). • The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details. • The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc. • The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why. • At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable). • Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted. 6.
Experimental setting/details Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results? Answer: [Yes] Justification: Please refer to Section 6 and Appendix F. Guidelines: • The answer NA means that the paper does not include experiments. • The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them. • The full details can be provided either with the code, in appendix, or as supplemental material. 7. Experiment statistical significance Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments? Answer: [Yes] Justification: We report error bars in all our experiments, as shown in Section 6. Guidelines: • The answer NA means that the paper does not include experiments. • The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper. • The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions). • The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.) • The assumptions made should be given (e.g., Normally distributed errors). • It should be clear whether the error bar is the standard deviation or the standard error of the mean. • It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified. 
• For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates). • If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text. 8. Experiments compute resources Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? Answer: [Yes] Justification: Please refer to Section 6. Guidelines: • The answer NA means that the paper does not include experiments. • The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage. • The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute. • The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn’t make it into the paper). 9. Code of ethics Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines? Answer: [Yes] Justification: We confirm that our benchmark conforms, in every respect, with the NeurIPS Code of Ethics. Guidelines: • The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics. • If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics. • The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction). 10.
Broader impacts Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed? Answer: [Yes] Justification: Since the main paper has limited space, we include the discussion on the potential societal and ethical impact of our benchmark in Appendix L. Guidelines: • The answer NA means that there is no societal impact of the work performed. • If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact. • Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations. • The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster. • The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology. 
• If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML). 11. Safeguards Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)? Answer: [N/A] Justification: The baseline methods our benchmark reproduces are all publicly published, and there is no high risk of misuse in these methods, as discussed in their original papers. Guidelines: • The answer NA means that the paper poses no such risks. • Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters. • Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images. • We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort. 12. Licenses for existing assets Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected? Answer: [Yes] Justification: We respect and cite all the main Python libraries our benchmark relies on, including TensorDict, Hydra, and PyKeops. Guidelines: • The answer NA means that the paper does not use existing assets. • The authors should cite the original paper that produced the code package or dataset. 
• The authors should state which version of the asset is used and, if possible, include a URL. • The name of the license (e.g., CC-BY 4.0) should be included for each asset. • For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided. • If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset. • For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided. • If this information is not available online, the authors are encouraged to reach out to the asset’s creators. 13. New assets Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets? Answer: [Yes] Justification: We provide a detailed document for all new assets. Please refer to https://github.com/x-zheng16/RedRFT.git for the document. Guidelines: • The answer NA means that the paper does not release new assets. • Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc. • The paper should discuss whether and how consent was obtained from people whose asset is used. • At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file. 14. Crowdsourcing and research with human subjects Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
Answer: [N/A] Justification: Our benchmark belongs to automated red teaming and does not involve crowdsourcing or research with human subjects. Guidelines: • The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. • Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper. • According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector. 15. Institutional review board (IRB) approvals or equivalent for research with human subjects Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? Answer: [N/A] Justification: Our benchmark belongs to automated red teaming and does not involve crowdsourcing or research with human subjects. Guidelines: • The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. • Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper. • We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution. • For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review. 16. 
Declaration of LLM usage Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required. Answer: [N/A] Justification: The core design and development process of our benchmark, including the standardized RFT-based red teaming framework and the implementation of all RFT-based red teaming methods, does not rely on LLMs. We only use LLMs, e.g., GPT-4o, for grammar correction. Guidelines: • The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components. • Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described.