
Paper deep dive

Contract And Conquer: How to Provably Compute Adversarial Examples for a Black-Box Model?

Anna Chistyakova, Mikhail Pautov

Year: 2026 · Venue: arXiv preprint · Area: cs.LG · Type: Preprint · Embeddings: 62

Abstract

Black-box adversarial attacks are widely used as tools to test the robustness of deep neural networks against malicious perturbations of input data aimed at a specific change in the output of the model. Such methods, although they remain empirically effective, usually do not guarantee that an adversarial example can be found for a particular model. In this paper, we propose Contract And Conquer (CAC), an approach to provably compute adversarial examples for neural networks in a black-box manner. The method is based on knowledge distillation of a black-box model on an expanding distillation dataset and precise contraction of the adversarial example search space. CAC is supported by a transferability guarantee: we prove that the method yields an adversarial example for the black-box model within a fixed number of algorithm iterations. Experimentally, we demonstrate that the proposed approach outperforms existing state-of-the-art black-box attack methods on the ImageNet dataset for different target models, including vision transformers.

Tags

adversarial-robustness (suggested, 80%) · ai-safety (imported, 100%) · cslg (suggested, 92%) · preprint (suggested, 88%)




Full Text



Keywords: adversarial robustness, black-box attacks

Figure 1: Illustration of the contraction of the adversarial example search space. Given the algorithm iteration number $j$, the adversarial example search space at iteration $j$, namely $U_\delta(x)_j$, is the intersection of the $\rho_j$-vicinity of an adversarial example $z_j$ with the initial attack search space $U_\delta(x)$. Formally, $U_\delta(x)_j = U_\delta(x) \cap U_{\rho_j}(z_j)$. The quantity $\rho_j$ is defined in Eq. 7. For each algorithm iteration, the adversarial example search space is represented by the intersection of bold circles.
1 Introduction

Evaluating and enhancing the robustness of neural networks to malicious perturbations of input data, called adversarial attacks, is crucial in safety-critical applications, such as medicine or autonomous systems. It has long been known that a small, often imperceptible perturbation of an image (Goodfellow et al., 2014) or a minor paraphrase of an input prompt (Zhu et al., 2023) can cause a desired change in the output of the corresponding model. Notably, the effectiveness of adversarial attacks has been experimentally confirmed in the black-box setting, when the attacker has limited access to the model, namely, when they can query the model and receive its output in a fixed format (Qi et al., 2023; Maheshwary et al., 2021; Guo et al., 2019).

Figure 2: Schematic representation of the proposed method. Given alternation iteration $j$ and the target model $T$, we prepare the distillation dataset $D(S)$ and train the surrogate model $S_j$. Then, $S_j$ is attacked at the target point $x$ in the white-box setting, and an adversarial example $z_j$ is computed. If $z_j$ is transferable to $T$, the algorithm returns $z_j$ and stops; otherwise, the adversarial example search space is contracted as shown in Fig. 1, $(z_j, T(z_j))$ is added to the distillation dataset, and the next instance of the surrogate model, $S_{j+1}$, is obtained.

Starting from the seminal work (Szegedy et al., 2014), the majority of research in the field of adversarial machine learning has focused on developing methods to compute adversarial examples and empirical approaches to defend models against them. Mainly, the methods of computing adversarial examples are based on utilizing information about the target model's outputs and gradients (Carlini and Wagner, 2017; Madry et al., 2018; Andriushchenko et al., 2020; Park et al., 2024) or estimates thereof (Guo et al., 2019; Chen et al., 2017; Cheng et al., 2024; Han et al., 2024).
In parallel, empirical defense methods are mainly based on adversarial training (Madry et al., 2018; Bai et al., 2021), where the model is trained on generated adversarial examples, gradient regularization (Ross and Doshi-Velez, 2018), calibration (Stutz et al., 2020), or weight perturbation (Wu et al., 2020; Xu et al., 2022). It is worth mentioning that the arms race between empirical defenses and adversarial attacks is concerning for security-critical applications: specifically, it cannot be guaranteed that recently developed empirical defense mechanisms will remain effective against novel attack methods, and vice versa. Thus, the effectiveness of applying empirical methods from adversarial machine learning to evaluate robustness in safety-critical settings is questionable. Moreover, a variety of regulatory acts for artificial intelligence systems are currently under development, for example, the EU AI Act or the US National AI Initiative Act. These frameworks, among other things, are designed to establish standards for the robustness of machine learning algorithms and services against adversarial attacks. As a consequence, to deploy a machine learning system in a specific setting, one will have to verify that it complies with the aforementioned standards. To ground the evaluation of the resilience of machine learning methods to adversarial attacks, it may be reasonable to focus on certified robustness methods. Instead of relying on the heuristics used in empirical defense approaches, certified robustness methods aim to provide mathematical guarantees about a model's behavior when its input is subjected to a certain perturbation.
The methods of certified robustness are usually based on randomized smoothing (Cohen et al., 2019; Pautov et al., 2022a; Voracek, 2024), set propagation techniques (Gowal et al., 2018; Mao et al., 2024), convex relaxation (Anderson et al., 2025; Kim and Pilanci, 2024), or probabilistic certification (Weng et al., 2019; Pautov et al., 2022b; Feng et al., 2025). These approaches can yield sample-level or population-level guarantees that no adversarial example exists, given the type of perturbation and the perturbation budget. Unfortunately, certified robustness comes at the cost of computationally expensive inference (Cohen et al., 2019), may require significant changes to both training and inference, may limit the available model architectures (Cullen et al., 2025), or may lead to a notable performance degradation of the certifiably robust model. The aforementioned drawbacks are among those that limit the adoption of certified robustness in state-of-the-art machine learning services: for example, integrating a randomized smoothing defense into medical diagnostics or into digital services that flag harmful content may significantly degrade performance on benign input data or severely slow down the system. As a consequence, to both retain practical effectiveness and align with the upcoming AI regulatory acts, developers will probably seek alternatives to certified robustness. At the same time, a complementary research question arises: how can one guarantee that a given black-box machine learning model is not robust? Specifically, a method to prove that a given model is not robust might be an important tool for the assessment of robustness, especially from the perspective of compliance with prospective standards. In this paper, we focus on this research question and propose Contract And Conquer (CAC), an iterative method to compute adversarial examples for black-box models with convergence guarantees.
By design, CAC is an alternation of two processes: (i) knowledge distillation (Hinton, 2015) of the black-box target model by a small surrogate model and (ii) a white-box adversarial attack on the surrogate model within a vicinity of the target input point. The intuition behind CAC is simple: knowledge distillation forces the surrogate model to replicate the predictions of the target model in a closed vicinity of the target point, where a white-box attack on the surrogate model is used to craft adversarial examples; careful alternation of these operations, together with a small contraction of the vicinity of the target point, yields an upper bound on the number of alternations needed to compute an adversarial example for the black-box target model. Our contributions are summarized as follows:

• A novel iterative transfer-based adversarial attack, Contract and Conquer (CAC), is proposed. The method is based on knowledge distillation of the target model on an expanding dataset and a white-box attack on the surrogate model within a contracting adversarial example search space.

• We theoretically demonstrate that, under mild assumptions on the surrogate model, the proposed transfer-based attack is guaranteed to yield an adversarial example for the black-box target model within a fixed number of algorithm iterations.

• We experimentally show that CAC outperforms state-of-the-art black-box attack methods on popular image benchmarks for different target models, including vision transformers.

2 Related Work

2.1 Adversarial Attacks

Soon after the vulnerability of neural networks to adversarial perturbations was established (Goodfellow et al., 2014; Szegedy et al., 2014), many attack methods were proposed (Moosavi-Dezfooli et al., 2016; Carlini and Wagner, 2017; Chen et al., 2020). One way to categorize attack methods is by the degree of access to the target model available to an adversary.
White-box attacks, which assume full access to the target model, including its internal weights, gradients, and/or training data, are broadly gradient-based (Goodfellow et al., 2014; Carlini and Wagner, 2017; Madry et al., 2018) or surrogate loss-based (Zhang et al., 2022b, a; Wang et al., 2023). Gradient-based attacks exploit information about the target model's gradients and hence tend to be of superior effectiveness; at the same time, the transferability of gradient-based adversarial attacks, i.e., the ability of adversarial examples to generalize across models, is usually modest, since the examples are computed for a specific model instance (Madry et al., 2018; Qin et al., 2022). To enhance the transferability of adversarial examples, some methods redesign objective functions, utilizing, among other things, information from the hidden layers of the target model, for example, by improving the similarity between the features of the adversarial example and its benign preimage (Huang et al., 2019), enhancing the invariance of the adversarial noise w.r.t. input objects (Liu and Wang, 2025), or by disrupting a subset of important object-aware features of the target model (Wang et al., 2021b). In contrast, black-box attacks assume that an adversary has only query access to the target model, and hence can be used to evaluate the robustness of machine learning services in real-world setups (Papernot et al., 2017; Zhang et al., 2021; Ma et al., 2025). These methods can be coarsely divided into score-based (Uesato et al., 2018; Andriushchenko et al., 2020; Bai et al., 2020), decision-based (Rahmati et al., 2020; Maho et al., 2021; Wang et al., 2022), and transfer-based (Liu et al., 2017; Xie et al., 2019; Naseer et al., 2022; Li et al., 2023; Chen et al., 2024) categories.
While decision-based and score-based methods utilize the outputs of the target model to conduct an attack, transfer-based ones rely on training surrogate models against which a white-box adversarial attack is then conducted. These approaches, in particular, tend to show better transferability of adversarial examples from one model to another, mainly by the design of the optimization procedure and due to the different heuristics used (Debicha et al., 2023; Xie et al., 2025). We want to highlight that although transfer-based black-box adversarial attacks demonstrate remarkable transferability of adversarial examples from surrogate models to target models, they do not provide any guarantees of the success of an attack; in general, this important disadvantage is shared by known black-box attack methods.

2.2 Adversarial Defenses

To counter the threat of adversarial attacks, plenty of defense methods have been proposed. They can be divided into two categories, namely, empirical ones and certified ones. While empirical methods mainly rely on data-driven and architecture-level heuristics, the certified ones are equipped with formal guarantees: for example, they allow one to formally prove that no adversarial example exists in a particular vicinity of the target point (Gowal et al., 2018; Cohen et al., 2019). Among empirical methods, adversarial training (Goodfellow et al., 2014; Madry et al., 2018) and its modifications (Shafahi et al., 2019; Wong et al., 2020) stand out. These approaches enhance the robustness of neural networks by jointly training them on benign samples and adversarial examples generated by certain attack methods, exposing the population of adversarial examples that the model has to be defended from; it is noteworthy that adversarial training methods offer the strongest empirical robustness.
Other methods pre-process the data before feeding it to the target network (Guo et al., 2018; Nesti et al., 2021), adopt image purification techniques (Nie et al., 2022; Wei et al., 2025), use auxiliary methods to detect and correct adversarial perturbations (Liu et al., 2019; Aldahdooh et al., 2022; Che et al., 2025), or modify the defended model (Yu et al., 2021; Allouah et al., 2025; Zhao et al., 2025). Among the certified methods, randomized smoothing (Cohen et al., 2019; Lecuyer et al., 2019) and its variants (Yang et al., 2020; Bansal et al., 2022; Korzh et al., 2025) are used to provide the state-of-the-art worst-case guarantees on the robustness of neural networks in different setups. Instead of providing the output for a single input sample, these methods aggregate predictions over a large number of perturbed input samples. Other certified defense methods include, but are not limited to, set propagation techniques (Gowal et al., 2018; Wang et al., 2021a; Mao et al., 2024) and formal verification methods (Tjeng et al., 2019; Shi et al., 2020). It is worth mentioning that the application of provable, effective, but computationally expensive defense methods in real-world AI systems is selective and incremental rather than rigorous and complete, since in many setups speed, performance, and utility may outweigh robustness.

3 Methodology

In this section, we provide background and motivation, followed by the description of the proposed method. Afterwards, we introduce theoretical justifications of the method.

3.1 Background and Motivation

In this work, we separately consider hard-label and soft-label settings. Specifically, let $T$ be the target black-box model that takes a real-valued image tensor $x \in [0,1]^d$ as input and returns, in the hard-label setting, a class index $y \in \{1, \dots, K\}$, where $K$ is the number of classes; in the soft-label setting, it returns the vector of class probabilities $p \in [0,1]^K$.
Here and below, we represent the prediction label assigned by the black-box model $T$ for input $x$ as

$T(x) = y$ for the hard-label case, and $T(x) = \arg\max_{i \in \{1,\dots,K\}} T(x)_i$ for the soft-label case. (1)

Let $S: [0,1]^d \to [0,1]^K$ be the white-box model that maps an input tensor to a class index as $y = \arg\max_{i \in \{1,\dots,K\}} S(x)_i$. In this work, we focus on the simplest formalism of an adversarial attack, given in the definitions below.

Definition 3.1. Let $x$ be a sample correctly classified by the model $T$, $y = T(x)$, and let $\delta > 0$ be a fixed constant. Then the object $x'$ with $\|x - x'\|_\infty \le \delta$ is called an adversarial example for $T$ at point $x$ if $T(x') \ne T(x)$. If $T(x') = y'$ for some predefined class $y' \ne y$, then $x'$ is called a targeted adversarial example.

From here on, we refer to $U_\delta(x) = \{x' : \|x - x'\|_\infty \le \delta\}$ as the initial adversarial example search space. Following the well-established notion (Madry et al., 2018), we treat the $\ell_\infty$ constraint as the measure of invisibility of adversarial examples.

Definition 3.2. Let $x' \in U_\delta(x)$ be an adversarial example computed for the white-box model $S$ at point $x$, and let $T$ be a separate black-box model. Then $x'$ is called transferable from $S$ to $T$ if

$\arg\max_{i \in \{1,\dots,K\}} S(x)_i = T(x)$ and $\arg\max_{i \in \{1,\dots,K\}} S(x')_i = T(x')$. (2)

The goal of this work is to propose an approach to compute adversarial examples for the target model $T$ that is supported by a mathematical guarantee of the success of an attack. To do so, we apply the transfer-based attack paradigm.
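Definitions 3.1 and 3.2 reduce to three small predicates. The sketch below is our paraphrase, not the authors' code: the models are stood in by plain callables (a hard-label model returning a class index, a white-box model returning a probability vector).

```python
import numpy as np

def in_linf_ball(x_adv, x, delta):
    """Membership in U_delta(x) = {x' : ||x - x'||_inf <= delta}."""
    return np.max(np.abs(x_adv - x)) <= delta

def is_adversarial(T_label, x, x_adv, delta):
    """Def. 3.1: x_adv lies in U_delta(x) and flips the hard label of T."""
    return in_linf_ball(x_adv, x, delta) and T_label(x_adv) != T_label(x)

def is_transferable(S_probs, T_label, x, x_adv):
    """Eq. 2: S and T agree on both the benign point and the adversarial one."""
    return (int(np.argmax(S_probs(x))) == T_label(x)
            and int(np.argmax(S_probs(x_adv))) == T_label(x_adv))
```

A toy hard-label model makes the checks concrete: any perturbation inside the $\ell_\infty$ ball that crosses the model's decision boundary satisfies both predicates.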
In a nutshell, instead of attacking the target model explicitly, we apply knowledge distillation to obtain a smaller surrogate model, $S$, to attack in the white-box setting; then we demonstrate experimentally and formally prove that, under mild assumptions on the surrogate model and controllable contraction of the adversarial example search space, we are guaranteed to compute an adversarial example for $T$ within a fixed number of iterations. In the next section, we provide a detailed description of the proposed method.

3.2 Description of CAC

3.2.1 Surrogate Model and White-box Attack

Suppose that the black-box model $T$, the target point $x$ of class $y$, and the initial adversarial attack search space $U_\delta(x)$ are fixed. We first obtain the surrogate model $S$ by applying knowledge distillation to $T$. The distillation dataset for the surrogate model, $D(S)$, consists of pairs $(x_k, T(x_k))$, where $\{x_k\}_{k=1}^{m-1}$ is a subset of a hold-out dataset. This subset is formed as follows: first, a random subset $\{x_k\}_{k=1}^{N_{init}}$ is sampled from a hold-out dataset; then, among the $N_{init}$ points, we choose the $m-1$ closest to the target point $x$. The target pair $(x, T(x))$ is included in $D(S)$. Consequently, knowledge distillation is performed by training $S$ on $D(S)$ by minimizing the empirical risk

$L(S, D(S)) = \frac{1}{|D(S)|} \sum_{(x_k, y_k) \in D(S)} l(S, x_k, y_k)$, (3)

where $l(S, x_k, y_k)$ is the cross-entropy loss function. In the experiments, we use $N_{init} = 10000$ and $m = 300$. We assume that the surrogate model has enough learning capacity to match the predictions of the target model on $D(S)$, which is formalized as

$T(x_k) = \arg\max_{i \in \{1,\dots,K\}} S(x_k)_i = y_k$ and $\frac{1}{2}\left[S(x_k)_{y_k} - \max_{i \ne y_k} S(x_k)_i\right] > \varepsilon$ (4)

for all $(x_k, y_k) \in D(S)$.
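The dataset-construction step (sample $N_{init}$ hold-out points, keep the $m-1$ nearest to $x$, label them with the black-box model, add the target pair) can be sketched as follows. This is our paraphrase: the paper does not state which metric defines "closest", so $\ell_\infty$ distance is an assumption here.

```python
import numpy as np

def build_distillation_set(T_label, x, holdout, n_init=10_000, m=300, rng=None):
    """Form D(S): sample n_init hold-out points, keep the m-1 nearest to x
    (l_inf nearness assumed), label them with the black-box model T_label,
    and append the target pair (x, T(x))."""
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(holdout), size=min(n_init, len(holdout)), replace=False)
    pool = holdout[idx]
    # l_inf distance of every sampled point to the target point x
    dists = np.max(np.abs(pool - x), axis=tuple(range(1, pool.ndim)))
    nearest = pool[np.argsort(dists)[: m - 1]]
    data = np.concatenate([nearest, x[None]], axis=0)
    labels = np.array([T_label(p) for p in data])
    return data, labels
```

Each call to `T_label` inside the loop corresponds to one query to the black-box model, which is why Algorithm 1 deducts $m$ from the query budget after initialization.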
Here, the second inequality reflects the confidence of the surrogate model, and $\varepsilon > 0$ is a constant. Once the surrogate model is trained, we attack it in a white-box manner. Specifically, we apply MI-FGSM (Dong et al., 2018) to find an adversarial example for $S$ within the initial adversarial attack search space $U_\delta(x)$:

$g_{t+1} = \mu g_t + \frac{\nabla_{x'_t} l'(S, x'_t, y)}{\|\nabla_{x'_t} l'(S, x'_t, y)\|_1}$, $\quad x'_{t+1} = \mathrm{Proj}_{U_\delta(x)}\left[x'_t + \alpha(\delta)\,\mathrm{sign}(g_{t+1})\right]$, $\quad x'_1 = x$, $g_1 = 0$. (5)

Here, $\mu$ is the momentum parameter, $\alpha(\delta) > 0$ is the gradient step, $\mathrm{Proj}_{U_\delta(x)}$ is the projection onto the attack search space, $M$ is the maximum number of gradient steps, and $l'$ is a loss function specified later. We refer to the process of distillation followed by the search for an adversarial example for the surrogate model as a single alternation.

3.2.2 Adjustment of Attack Parameters

Let $j$ be the number of the current alternation. We assume that for some iteration number $t \in \{1, \dots, M\}$, an adversarial example for the surrogate model, $z_j = x'_t$, is found. Then the target model $T$ is queried with $z_j$ to check whether it is transferable from $S$ to $T$. If so, the algorithm yields $z_j$ as an adversarial example for $T$; otherwise, we adjust the adversarial attack procedure: first, $(z_j, T(z_j))$ is included in the distillation dataset $D(S)$; second, the adversarial example search space is contracted as follows:

$U_\delta(x)_j \leftarrow U_\delta(x) \cap U_{\rho_j}(z_j)$, (6)

where

$\rho_j = t\,\|z_j - z_{j-1}\|_\infty$ (7)

is the contracted distance between the two previous adversarial examples. Here, $U_\delta(x)_j$ is the adversarial example search space after the $j$-th alternation iteration, $t \in (0,1)$ is the contraction parameter, $U_{\rho_j}(z_j) = \{a : \|a - z_j\|_\infty \le \rho_j\}$, and $z_0 = x$.
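Eq. 5 and the contraction step of Eqs. 6-7 can be sketched in NumPy. This is a schematic paraphrase, not the authors' code: `grad_fn` stands in for the autodiff gradient $\nabla_{x'} l'(S, x', y)$, and since both sets in Eq. 6 are axis-aligned $\ell_\infty$ balls, their intersection is represented by elementwise [lower, upper] bounds.

```python
import numpy as np

def mi_fgsm(grad_fn, x, delta, mu=1.0, alpha=None, M=3, lo=0.0, hi=1.0):
    """Momentum iterative FGSM (Eq. 5): accumulate the l1-normalized gradient
    with momentum mu, take sign steps of size alpha, and project back onto
    U_delta(x) intersected with the valid pixel range [lo, hi]."""
    alpha = delta / M if alpha is None else alpha
    x_adv, g = x.astype(float).copy(), np.zeros_like(x, dtype=float)
    for _ in range(M):
        grad = grad_fn(x_adv)
        g = mu * g + grad / max(np.sum(np.abs(grad)), 1e-12)
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - delta, x + delta)
        x_adv = np.clip(x_adv, lo, hi)
    return x_adv

def contract_search_space(x, delta, z_j, z_prev, t=0.99):
    """Eqs. 6-7: intersect U_delta(x) with the rho_j-ball around z_j, where
    rho_j = t * ||z_j - z_prev||_inf. Both sets are axis-aligned boxes, so
    the intersection is the elementwise [lower, upper] box."""
    rho_j = t * np.max(np.abs(z_j - z_prev))
    lower = np.maximum(x - delta, z_j - rho_j)
    upper = np.minimum(x + delta, z_j + rho_j)
    return lower, upper, rho_j
```

With a constant positive gradient, the iterate walks to the boundary of the $\ell_\infty$ ball in $M$ steps; the contracted box then shrinks toward $z_j$ while staying inside $U_\delta(x)$.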
After these two adjustments, the algorithm proceeds to the next alternation described in Section 3.2.1, but with the updated distillation dataset and adversarial example search space. The procedures from Sections 3.2.1 and 3.2.2 are summarized in Algorithm 1. The adversarial example search space contraction is schematically presented in Figure 1.

Algorithm 1 Contract and Conquer

Input: black-box target model T, target point x of class y, distance threshold δ, momentum parameter μ, maximum number of MI-FGSM iterations M, maximum number of queries to the target model N, initial size of the distillation dataset m, hold-out data points {x_k}_{k=1}^{m−1}, contraction parameter t
Output: surrogate model S, adversarial example (z, T(z)) for the target model T
1: D(S) ← {(x_k, T(x_k))}_{k=1}^{m−1} ∪ {(x, y)}  ▷ initialize the distillation dataset
2: N ← N − m  ▷ the remaining number of queries decreases since m were spent to initialize the distillation dataset
3: z_0 ← x, U_δ(x)_0 ← U_δ(x), α ← δ/M
4: j ← 1
5: while N ≥ 0 do
6:   train S on the distillation dataset D(S)
7:   (z_j, argmax_i S(z_j)_i) ← MI-FGSM(S, α, μ, U_δ(x)_{j−1}, M, (x, y))  ▷ compute an adversarial example for the surrogate model according to Eq. 5
8:   if argmax_i S(z_j)_i = h(T, z_j) then
9:     return S, (z_j, h(T, z_j))
10:  else
11:    D(S) ← D(S) ∪ {(z_j, T(z_j))}
12:    ρ_j ← t‖z_j − z_{j−1}‖_∞
13:    U_δ(x)_j ← U_δ(x) ∩ U_{ρ_j}(z_j)  ▷ contract the adversarial example search space according to Eq. 6
14:    α ← ρ_j/M  ▷ update the gradient step
15:  end if
16:  N ← N − 1  ▷ one query is spent to compute T(z_j)
17:  j ← j + 1
18: end while

Remark 3.3.
CAC is, in fact, not tied to a specific white-box attack; the usage of MI-FGSM is motivated by its simplicity and efficiency. The procedure in Algorithm 1 is described for a single white-box adversarial example to ease the notation. In practice, the algorithm computes a batch of $n_{adv} = 10$ adversarial examples for speed-up. To ensure the variety of these examples, each example is computed for the target point $z_0 = x + \varepsilon_k$ and search space $U_\delta(x)$, where $\{\varepsilon_k\}_{k=1}^{n_{adv}} \sim U[-\delta, \delta]$. Additionally, if an adversarial example for the target model is found and the maximum number of queries to the target model has not been exhausted, (i) the radius of the initial adversarial example search space, $\delta$, decreases, and (ii) the algorithm restarts to possibly yield an adversarial example closer to the target point.

3.3 Convergence Guarantee

In this section, we introduce the theoretical justification of CAC and justify the assumptions made. The following lemma represents the convergence guarantee of the method.

Lemma 3.4. Fix an input sample $x$ and the initial adversarial attack search space $U_\delta(x) = \{a : \|x - a\|_\infty \le \delta\}$. Suppose that for every $j \in \mathbb{Z}_+$, the white-box attack in Algorithm 1 yields an adversarial example for the model $S$. Let $S$ be a differentiable function with bounded gradients in $U_\delta(x)$ for every $j \in \mathbb{Z}_+$, and let

$\gamma = \sup_{j \in \mathbb{Z}_+} \sup_{x' \in U_\delta(x)} \|\nabla S(x')\|_{op,\infty}$, (8)

where $\|\cdot\|_{op,\infty}$ is the operator norm induced by the $\ell_\infty$ norm of vectors. Let the surrogate model $S$ be trained according to Eq. 4, meaning that if $y_k = \arg\max_{i \in \{1,\dots,K\}} S(x_k)_i$, then

$\frac{1}{2}\left[S(x_k)_{y_k} - \max_{i \ne y_k} S(x_k)_i\right] > \varepsilon$ (9)

for all $(x_k, y_k) \in D(S)$. Then Algorithm 1 yields an adversarial example for the model $S$ which is transferable to $T$ at most at the $(n-1)$-th alternation iteration, where

$(n-1)\ln t \le \ln\varepsilon - \ln\delta - \ln\gamma$. (10)

Remark 3.5.
The proof is deferred to the appendix so as not to distract the reader. Here, we briefly motivate the assumptions made in Lemma 3.4. First, the boundedness of the gradient of $S$ is achieved by constructing $S$ out of layers with bounded gradients and by using activation functions with bounded gradients, which is done in our case. Second, the assumptions about the learning capabilities of the surrogate model formalized in Eq. 4 and the possibility of computing an adversarial example for the surrogate model at each alternation iteration can be achieved simultaneously by an appropriate choice of the architecture of $S$ and its training; these two assumptions are practically verifiable.

4 Experiments

In this section, we provide a technical description of the experiments, datasets and model architectures, baseline methods, and the comparison methodology.

4.1 Setup of Experiments

4.1.1 Datasets and Target Models

In our experiments, we use the CIFAR-10 (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009) datasets to train the surrogate models. For the baseline experiments, we choose the ResNet-50 (He et al., 2016) and ViT-B (Dosovitskiy et al., 2021) architectures as target models. The accuracy of ResNet-50 is 80.13% on ImageNet and 94.65% on CIFAR-10; the accuracy of ViT-B is 85.21% on ImageNet and 96.89% on CIFAR-10.

4.1.2 Surrogate Models and White-Box Attack

We use ResNet-18 as the architecture of the white-box surrogate model. The knowledge distillation is conducted for 100 epochs using the Adam optimizer with a constant learning rate of 10^{-3}. We conduct the white-box attack on the surrogate models with the following parameters: the number of MI-FGSM iterations is M = 3, the momentum parameter of the attack is μ = 1.0, the contraction parameter is t = 0.99, the initial adversarial example search space radius is δ = 0.125, and the gradient step is α = δ/M.
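Eq. 10 of Lemma 3.4 gives an explicit worst-case alternation count: since $t \in (0,1)$, $\ln t < 0$, so dividing flips the inequality and $n - 1 \ge (\ln\varepsilon - \ln\delta - \ln\gamma)/\ln t$. A small helper makes this concrete; the values of $\varepsilon$ and $\gamma$ below are illustrative assumptions, not numbers reported in the paper, while $t = 0.99$ and $\delta = 0.125$ match the experimental setup above.

```python
import math

def cac_iteration_bound(eps, delta, gamma, t):
    """Smallest n satisfying Eq. 10: (n-1) ln t <= ln(eps) - ln(delta) - ln(gamma).
    For t in (0,1), ln t < 0, so dividing by ln t flips the inequality."""
    assert 0 < t < 1 and eps > 0 and delta > 0 and gamma > 0
    n_minus_1 = (math.log(eps) - math.log(delta) - math.log(gamma)) / math.log(t)
    return 1 + max(0, math.ceil(n_minus_1))

# Illustrative: eps = 0.01, gamma = 10 are assumed values.
bound = cac_iteration_bound(eps=0.01, delta=0.125, gamma=10.0, t=0.99)
```

The bound grows as the surrogate's margin $\varepsilon$ shrinks and as the contraction parameter $t$ approaches 1, matching the intuition that slower contraction may require more alternations.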
The loss function $l'(S, x'_t, y)$ from Eq. 5 is the cross-entropy loss in the hard-label setting and the MSE loss in the soft-label setting. To quantitatively evaluate the effectiveness of the method, we randomly choose a subset of 100 target points from the test subset of the corresponding dataset that are initially correctly classified by the target model.

4.1.3 Baseline Methods

We evaluate the proposed method against the HopSkipJump (Chen et al., 2020), Sign-OPT (Cheng et al., 2020), GeoDA (Rahmati et al., 2020), SquareAttack (Andriushchenko et al., 2020), SparseRS (Croce et al., 2022), PAR (Shi et al., 2022), and AdvViT (Zhou et al., 2025) methods. HopSkipJump, Sign-OPT, and GeoDA are regarded as query-efficient competitive benchmarks in the hard-label black-box setting; SparseRS and SquareAttack are among the most efficient in the soft-label setting. At the same time, AdvViT and PAR are state-of-the-art hard-label black-box attacks designed specifically for transformer architectures. Additionally, we evaluate CAC against combinations of HopSkipJump and Sign-OPT with PAR, where the latter is used as an initialization for the baseline methods. The hyperparameters of the baseline methods are reported in the appendix.

Table 1: Quantitative comparison of attack methods, hard-label setting, the target model is ResNet-50, the dataset is ImageNet.

Method | ASR | AQN | Avg ℓ2 | Std ℓ2 | Avg ℓ∞ | Std ℓ∞
CAC (ours) | 1.00 | 487.95 | 35.074 | 18.833 | 0.153 | 0.080
HopSkipJump ℓ2 | 1.00 | 500.31 | 48.838 | 29.118 | 0.539 | 0.280
HopSkipJump ℓ∞ | 1.00 | 500.01 | 73.255 | 35.856 | 0.361 | 0.202
Sign-OPT | 1.00 | 548.24 | 48.047 | 28.467 | 0.551 | 0.283
GeoDA | 1.00 | 524.98 | 49.658 | 31.117 | 0.180 | 0.094

Table 2: Quantitative comparison of attack methods, hard-label setting, the target model is ViT-B, the dataset is ImageNet.
Method | ASR | AQN | Avg ℓ2 | Std ℓ2 | Avg ℓ∞ | Std ℓ∞
CAC (ours) | 1.00 | 488.91 | 49.282 | 26.488 | 0.165 | 0.091
HopSkipJump ℓ2 | 1.00 | 500.34 | 70.122 | 38.343 | 0.685 | 0.318
HopSkipJump ℓ∞ | 1.00 | 500.01 | 106.142 | 48.455 | 0.563 | 0.292
Sign-OPT | 1.00 | 557.31 | 74.744 | 44.850 | 0.708 | 0.338
GeoDA | 1.00 | 540.21 | 65.471 | 40.497 | 0.190 | 0.124
PAR | 1.00 | 322.38 | 38.751 | 25.745 | 0.889 | 0.233
AdvViT | 0.75 | 461.04 | 34.520 | 20.257 | 0.584 | 0.301
Sign-OPT + PAR | 1.00 | 467.64 | 51.468 | 37.941 | 0.625 | 0.276
HopSkipJump ℓ2 + PAR | 1.00 | 500.36 | 56.514 | 40.454 | 0.665 | 0.328
HopSkipJump ℓ∞ + PAR | 1.00 | 500.09 | 102.909 | 49.018 | 0.543 | 0.287

Table 3: Quantitative comparison of attack methods, soft-label setting, the target model is ResNet-50, the dataset is ImageNet.

Method | ASR | AQN | Avg ℓ2 | Std ℓ2 | Avg ℓ∞ | Std ℓ∞
CAC (ours) | 1.00 | 489.93 | 36.396 | 19.038 | 0.122 | 0.068
SquareAttack ℓ∞ | 0.98 | 500.00 | 89.292 | 4.953 | 0.250 | 0.000
SparseRS | 0.94 | 500.00 | 44.470 | 2.574 | 0.994 | 0.017

Table 4: Quantitative comparison of attack methods, soft-label setting, the target model is ViT-B, the dataset is ImageNet.

Method | ASR | AQN | Avg ℓ2 | Std ℓ2 | Avg ℓ∞ | Std ℓ∞
CAC (ours) | 1.00 | 488.60 | 41.370 | 23.579 | 0.144 | 0.084
SquareAttack ℓ∞ | 0.26 | 500.00 | 90.103 | 4.602 | 0.250 | 0.000
SparseRS | 0.79 | 500.00 | 44.335 | 2.444 | 0.993 | 0.017

Table 5: Quantitative comparison of attack methods, hard-label setting, the target model is ResNet-50, the dataset is CIFAR-10.
| Method | ASR | AQN | Avg $l_2$ | Std $l_2$ | Avg $l_\infty$ | Std $l_\infty$ |
|---|---|---|---|---|---|---|
| CAC (ours) | 1.00 | 291.0 | **2.675** | 1.091 | **0.061** | 0.025 |
| HopSkipJump $l_2$ | 1.00 | 300.07 | 2.704 | 2.634 | 0.174 | 0.161 |
| HopSkipJump $l_\infty$ | 1.00 | 310.66 | 3.281 | 3.232 | 0.082 | 0.085 |
| Sign-OPT | 0.92 | 288.59 | 3.642 | 3.351 | 0.242 | 0.209 |
| GeoDA | 0.96 | 300.81 | 3.388 | 3.440 | 0.071 | 0.071 |

Table 6: Quantitative comparison of attack methods, hard-label setting; the target model is ViT-B, the dataset is CIFAR-10.

| Method | ASR | AQN | Avg $l_2$ | Std $l_2$ | Avg $l_\infty$ | Std $l_\infty$ |
|---|---|---|---|---|---|---|
| CAC (ours) | 1.00 | 489.89 | 21.625 | 11.990 | **0.070** | 0.044 |
| HopSkipJump $l_2$ | 0.99 | 496.30 | 40.160 | 33.460 | 0.417 | 0.281 |
| HopSkipJump $l_\infty$ | 1.00 | 500.01 | 60.742 | 40.929 | 0.292 | 0.226 |
| Sign-OPT | 0.96 | 532.76 | 40.653 | 33.664 | 0.426 | 0.265 |
| GeoDA | 0.94 | 604.32 | 25.871 | 23.517 | 0.071 | 0.078 |
| PAR | 1.00 | 281.56 | 20.526 | 17.515 | 0.645 | 0.236 |
| AdvViT | 0.96 | 530.21 | **17.741** | 16.191 | 0.319 | 0.215 |
| Sign-OPT + PAR | 1.00 | 481.18 | 26.968 | 27.324 | 0.454 | 0.221 |
| HopSkipJump $l_2$ + PAR | 1.00 | 500.20 | 30.352 | 25.589 | 0.438 | 0.244 |
| HopSkipJump $l_\infty$ + PAR | 1.00 | 500.04 | 53.656 | 33.299 | 0.253 | 0.180 |

Table 7: Quantitative comparison of attack methods, soft-label setting; the target model is ResNet-50, the dataset is CIFAR-10.
| Method | ASR | AQN | Avg $l_2$ | Std $l_2$ | Avg $l_\infty$ | Std $l_\infty$ |
|---|---|---|---|---|---|---|
| CAC (ours) | 1.00 | 291.00 | **2.468** | 1.075 | **0.056** | 0.025 |
| SquareAttack $l_\infty$ | 0.82 | 300.00 | 13.028 | 0.637 | 0.250 | 0.000 |
| SparseRS | 0.96 | 300.00 | 4.371 | 0.348 | 0.920 | 0.065 |

Table 8: Quantitative comparison of attack methods, soft-label setting; the target model is ViT-B, the dataset is CIFAR-10.

| Method | ASR | AQN | Avg $l_2$ | Std $l_2$ | Avg $l_\infty$ | Std $l_\infty$ |
|---|---|---|---|---|---|---|
| CAC (ours) | 1.00 | 489.50 | **15.745** | 9.850 | **0.050** | 0.037 |
| SquareAttack $l_\infty$ | 0.85 | 500.00 | 92.182 | 3.57 | 0.250 | 0.000 |
| SparseRS | 0.98 | 500.00 | 43.198 | 1.86 | 0.974 | 0.032 |

4.1.4 Comparison Methodology

To align CAC with the baseline methods for comparison, we fix the maximum number of queries to the target model and the initial adversarial example search space for each target point, and evaluate each method by computing its attack success rate. We report the average distance between the target point and the closest corresponding adversarial example, as well as the average query number, AQN, required to compute an adversarial example. AQN denotes the number of requests to the target model used by a method to generate an adversarial example for a target point, averaged over all target points. The attack success rate is the fraction of target points for which a method successfully computes an adversarial example within the maximum number of queries. For all methods except CAC, we soften the maximum number of queries: specifically, we terminate a method after the iteration during which the maximum number of queries was exceeded.

4.2 Results of Experiments

We report the results separately for the soft-label and hard-label settings, different architectures of the target models, and datasets.
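The metrics defined in the comparison methodology can be sketched in code. The record layout and all names below are our illustration, not part of the paper; whether the average distance is taken over successful points only is our reading.

```python
# Hypothetical per-target-point attack records (layout and names are ours).
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AttackRecord:
    queries_used: int         # queries to the target model spent on this point
    l2_dist: Optional[float]  # ||x_adv - x||_2 of the found example, None on failure

def attack_success_rate(records: List[AttackRecord]) -> float:
    """ASR: fraction of target points with a successful adversarial example."""
    return sum(r.l2_dist is not None for r in records) / len(records)

def average_query_number(records: List[AttackRecord]) -> float:
    """AQN: number of requests to the target model, averaged over all points."""
    return sum(r.queries_used for r in records) / len(records)

def average_distance(records: List[AttackRecord]) -> float:
    """Average l2 distance over the successfully attacked points."""
    dists = [r.l2_dist for r in records if r.l2_dist is not None]
    return sum(dists) / len(dists)

runs = [AttackRecord(480, 35.0), AttackRecord(500, None), AttackRecord(460, 37.0)]
```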
In Tables 1, 2, 3, 4 we report the aforementioned quantities for the subset of ImageNet and indicate, where applicable, which norm constraint ($l_2$ or $l_\infty$) was used in the internal procedures of the methods. In Tables 5, 6, 7, 8 we report the results for CIFAR-10. We highlight the best results in terms of the closeness of the adversarial examples to the target points. Tables 1–8 show that CAC yields adversarial examples closer to the initial target points than the other methods in all experimental setups in terms of the $l_\infty$ norm, and in almost all setups in terms of the $l_2$ norm. At the same time, being supported by the convergence guarantee, the method shows a high attack success rate; it should be mentioned that although the other methods show high success rates as well, they are not backed by formal guarantees.

5 Conclusion and Future Work

In this paper, we propose Contract and Conquer, a framework to compute adversarial perturbations for black-box neural networks with convergence guarantees. We conduct the attack in the transfer-based paradigm: we apply knowledge distillation to obtain a smaller surrogate model that is attacked in the white-box setting. We theoretically show that, under mild assumptions on the surrogate model and with controllable contraction of the adversarial example search space, the method is guaranteed to yield an adversarial example for the target black-box model within a fixed number of iterations. Experimentally, we demonstrate that the method both achieves a high attack success rate and yields adversarial examples from a smaller vicinity of the target points than the competing methods. Future work includes relaxing the practical assumptions, specifically the requirement that an adversarial example for the surrogate model can be computed on each algorithm iteration, and building a theoretical framework to assess the compliance of AI models with prospective robustness standards.
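The distill–attack–query–contract loop summarized above can be sketched on a one-dimensional toy problem. The nearest-neighbour "surrogate", the grid-search "white-box attack", and all names are our stand-ins for the paper's distilled network and attack procedure; only the control flow mirrors the method.

```python
# Runnable toy sketch of the alternation loop: distill a surrogate on an
# expanding dataset, attack it in the contracted ball, query the black box,
# and contract the search space when the example does not transfer.

def target(z: float) -> int:
    """Stand-in hard-label black box: the class flips at z = 0.5."""
    return int(z > 0.5)

def distill(dataset):
    """Toy 'knowledge distillation': a 1-nearest-neighbour student."""
    def surrogate(z: float) -> int:
        return min(dataset, key=lambda p: abs(p[0] - z))[1]
    return surrogate

def attack_surrogate(surrogate, x, y, radius, steps=200):
    """White-box step: closest grid point in the ball that flips the surrogate."""
    for k in range(1, steps + 1):
        for z in (x + radius * k / steps, x - radius * k / steps):
            if surrogate(z) != y:
                return z
    return None

def cac_sketch(x: float, delta: float, t: float, max_iters: int):
    y = target(x)
    # expanding distillation dataset, seeded with a few queried points
    dataset = [(x, y), (x + delta, target(x + delta)), (x - delta, target(x - delta))]
    radius = delta
    for _ in range(max_iters):
        surrogate = distill(dataset)              # re-distill on the grown dataset
        z = attack_surrogate(surrogate, x, y, radius)
        if z is not None:
            dataset.append((z, target(z)))        # one query to the black box
            if target(z) != y:
                return z                          # the example transferred
        radius *= t                               # contract the search space
    return None

adv = cac_sketch(x=0.3, delta=0.4, t=0.9, max_iters=20)
assert adv is not None and target(adv) != target(0.3)
```

On this toy instance the first surrogate-adversarial candidate lands exactly on the class boundary and fails to transfer; the queried label is added to the dataset, the radius contracts, and the retrained surrogate's next candidate transfers.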
Impact Statement

This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.

References

A. Aldahdooh, W. Hamidouche, S. A. Fezza, and O. Déforges (2022) Adversarial example detection for DNN models: a review and experimental comparison. Artificial Intelligence Review 55 (6), pp. 4403–4462.
Y. Allouah, R. Guerraoui, N. Gupta, A. Jellouli, G. Rizk, and J. Stephan (2025) Adaptive gradient clipping for robust federated learning. In The Thirteenth International Conference on Learning Representations.
B. G. Anderson, Z. Ma, J. Li, and S. Sojoudi (2025) Towards optimal branching of linear and semidefinite relaxations for neural network robustness certification. Journal of Machine Learning Research 26 (81), pp. 1–59.
M. Andriushchenko, F. Croce, N. Flammarion, and M. Hein (2020) Square attack: a query-efficient black-box adversarial attack via random search. In European Conference on Computer Vision, pp. 484–501.
T. Bai, J. Luo, J. Zhao, B. Wen, and Q. Wang (2021) Recent advances in adversarial training for adversarial robustness. In International Joint Conference on Artificial Intelligence, pp. 4312–4321.
Y. Bai, Y. Zeng, Y. Jiang, Y. Wang, S. Xia, and W. Guo (2020) Improving query efficiency of black-box adversarial attack. In European Conference on Computer Vision, pp. 101–116.
A. Bansal, P. Chiang, M. J. Curry, R. Jain, C. Wigington, V. Manjunatha, J. P. Dickerson, and T. Goldstein (2022) Certified neural network watermarks with randomized smoothing. In International Conference on Machine Learning, pp. 1450–1465.
N. Carlini and D. Wagner (2017) Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57.
L. Che, C. Wu, and Y. Hou (2025) Large language model text adversarial defense method based on disturbance detection and error correction. Electronics 14 (11), p. 2267.
H. Chen, Y. Zhang, Y. Dong, X. Yang, H. Su, and J. Zhu (2024) Rethinking model ensemble in transfer-based adversarial attacks. In The Twelfth International Conference on Learning Representations.
J. Chen, M. I. Jordan, and M. J. Wainwright (2020) HopSkipJumpAttack: a query-efficient decision-based attack. In 2020 IEEE Symposium on Security and Privacy (SP), pp. 1277–1294.
P. Chen, H. Zhang, Y. Sharma, J. Yi, and C. Hsieh (2017) ZOO: zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 15–26.
M. Cheng, S. Singh, P. H. Chen, P. Chen, S. Liu, and C. Hsieh (2020) Sign-OPT: a query-efficient hard-label adversarial attack. In International Conference on Learning Representations.
S. Cheng, Y. Miao, Y. Dong, X. Yang, X. Gao, and J. Zhu (2024) Efficient black-box adversarial attacks via Bayesian optimization guided by a function prior. In International Conference on Machine Learning, pp. 8163–8183.
J. Cohen, E. Rosenfeld, and Z. Kolter (2019) Certified adversarial robustness via randomized smoothing. In International Conference on Machine Learning, pp. 1310–1320.
F. Croce, M. Andriushchenko, N. D. Singh, N. Flammarion, and M. Hein (2022) Sparse-RS: a versatile framework for query-efficient sparse black-box adversarial attacks. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 36, pp. 6437–6445.
A. C. Cullen, P. Montague, S. M. Erfani, and B. I. Rubinstein (2025) Position: certified robustness does not (yet) imply model security. In Forty-second International Conference on Machine Learning Position Paper Track.
I. Debicha, R. Bauwens, T. Debatty, J. Dricot, T. Kenaza, and W. Mees (2023) TAD: transfer learning-based multi-adversarial detection of evasion attacks against network intrusion detection systems. Future Generation Computer Systems 138, pp. 185–197.
J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) ImageNet: a large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255.
Y. Dong, F. Liao, T. Pang, H. Su, J. Zhu, X. Hu, and J. Li (2018) Boosting adversarial attacks with momentum. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9185–9193.
A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby (2021) An image is worth 16x16 words: transformers for image recognition at scale. In International Conference on Learning Representations.
C. Feng, Z. Liu, Z. Zhi, I. Bogunovic, C. Gerner-Beuerle, and M. Rodrigues (2025) PROSAC: provably safe certification for machine learning models under adversarial attacks. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 39, pp. 2933–2941.
I. J. Goodfellow, J. Shlens, and C. Szegedy (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
S. Gowal, K. Dvijotham, R. Stanforth, R. Bunel, C. Qin, J. Uesato, R. Arandjelovic, T. Mann, and P. Kohli (2018) On the effectiveness of interval bound propagation for training verifiably robust models. arXiv preprint arXiv:1810.12715.
C. Guo, J. Gardner, Y. You, A. G. Wilson, and K. Weinberger (2019) Simple black-box adversarial attacks. In International Conference on Machine Learning, pp. 2484–2493.
C. Guo, M. Rana, M. Cisse, and L.
van der Maaten (2018) Countering adversarial images using input transformations. In International Conference on Learning Representations.
X. Han, Q. Li, H. Cao, L. Han, B. Wang, X. Bao, Y. Han, and W. Wang (2024) BFS2Adv: black-box adversarial attack towards hard-to-attack short texts. Computers & Security 141, p. 103817.
K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
G. Hinton (2015) Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
Q. Huang, I. Katsman, H. He, Z. Gu, S. Belongie, and S. Lim (2019) Enhancing adversarial example transferability with an intermediate level attack. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4733–4742.
S. Kim and M. Pilanci (2024) Convex relaxations of ReLU neural networks approximate global optima in polynomial time. In International Conference on Machine Learning, pp. 24458–24485.
D. Korzh, E. Karimov, M. Pautov, O. Y. Rogov, and I. Oseledets (2025) Certification of speaker recognition models to additive perturbations. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 39, pp. 17947–17956.
A. Krizhevsky, G. Hinton, et al. (2009) Learning multiple layers of features from tiny images.
M. Lecuyer, V. Atlidakis, R. Geambasu, D. Hsu, and S. Jana (2019) Certified robustness to adversarial examples with differential privacy. In 2019 IEEE Symposium on Security and Privacy (SP), pp. 656–672.
Q. Li, Y. Guo, W. Zuo, and H. Chen (2023) Making substitute models more Bayesian can enhance transferability of adversarial examples. In The Eleventh International Conference on Learning Representations.
B. Liu and X. Wang (2025) Boosting the local invariance for better adversarial transferability. arXiv preprint arXiv:2503.06140.
J. Liu, W. Zhang, Y. Zhang, D. Hou, Y. Liu, H. Zha, and N. Yu (2019) Detection based defense against adversarial examples from the steganalysis point of view. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4825–4834.
Y. Liu, X. Chen, C. Liu, and D. Song (2017) Delving into transferable adversarial examples and black-box attacks. In International Conference on Learning Representations.
J. Ma, Y. Li, Z. Xiao, A. Cao, J. Zhang, C. Ye, and J. Zhao (2025) Jailbreaking prompt attack: a controllable adversarial attack against diffusion models. In Findings of the Association for Computational Linguistics: NAACL 2025, pp. 3141–3157.
A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu (2018) Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations.
R. Maheshwary, S. Maheshwary, and V. Pudi (2021) Generating natural language attacks in a hard label black box setting. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35, pp. 13525–13533.
T. Maho, T. Furon, and E. Le Merrer (2021) SurFree: a fast surrogate-free black-box attack. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10430–10439.
Y. Mao, M. N. Mueller, M. Fischer, and M. Vechev (2024) Understanding certified training with interval bound propagation. In The Twelfth International Conference on Learning Representations.
S. Moosavi-Dezfooli, A. Fawzi, and P. Frossard (2016) DeepFool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574–2582.
M. Naseer, K. Ranasinghe, S. Khan, F. Khan, and F. Porikli (2022) On improving adversarial transferability of vision transformers. In International Conference on Learning Representations.
F. Nesti, A. Biondi, and G. Buttazzo (2021) Detecting adversarial examples by input transformations, defense perturbations, and voting. IEEE Transactions on Neural Networks and Learning Systems 34 (3), pp. 1329–1341.
W. Nie, B. Guo, Y. Huang, C. Xiao, A. Vahdat, and A. Anandkumar (2022) Diffusion models for adversarial purification. In International Conference on Machine Learning, pp. 16805–16827.
N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami (2017) Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pp. 506–519.
J. Park, P. Miller, and N. McLaughlin (2024) Hard-label based small query black-box adversarial attack. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3986–3995.
M. Pautov, O. Kuznetsova, N. Tursynbek, A. Petiushko, and I. Oseledets (2022a) Smoothed embeddings for certified few-shot learning. Advances in Neural Information Processing Systems 35, pp. 24367–24379.
M. Pautov, N. Tursynbek, M. Munkhoeva, N. Muravev, A. Petiushko, and I. Oseledets (2022b) CC-Cert: a probabilistic approach to certify general robustness of neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 36, pp. 7975–7983.
G. Qi, Y. Chen, Y. Zhu, B. Hui, X. Li, X. Mao, R. Zhang, and H. Xue (2023) TransAudio: towards the transferable adversarial audio attack via learning contextualized perturbations. In ICASSP 2023 – 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1–5.
Z. Qin, Y. Fan, Y. Liu, L. Shen, Y. Zhang, J. Wang, and B.
Wu (2022) Boosting the transferability of adversarial attacks with reverse adversarial perturbation. Advances in Neural Information Processing Systems 35, pp. 29845–29858.
A. Rahmati, S. Moosavi-Dezfooli, P. Frossard, and H. Dai (2020) GeoDA: a geometric framework for black-box adversarial attacks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8446–8455.
A. Ross and F. Doshi-Velez (2018) Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 32.
A. Shafahi, M. Najibi, M. A. Ghiasi, Z. Xu, J. Dickerson, C. Studer, L. S. Davis, G. Taylor, and T. Goldstein (2019) Adversarial training for free! Advances in Neural Information Processing Systems 32.
Y. Shi, Y. Han, Y. Tan, and X. Kuang (2022) Decision-based black-box attack against vision transformers via patch-wise adversarial removal. Advances in Neural Information Processing Systems 35, pp. 12921–12933.
Z. Shi, H. Zhang, K. Chang, M. Huang, and C. Hsieh (2020) Robustness verification for transformers. In International Conference on Learning Representations.
D. Stutz, M. Hein, and B. Schiele (2020) Confidence-calibrated adversarial training: generalizing to unseen attacks. In International Conference on Machine Learning, pp. 9155–9166.
C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus (2014) Intriguing properties of neural networks. In International Conference on Learning Representations.
V. Tjeng, K. Y. Xiao, and R. Tedrake (2019) Evaluating robustness of neural networks with mixed integer programming. In International Conference on Learning Representations.
J. Uesato, B. O'Donoghue, P. Kohli, and A. Oord (2018) Adversarial risk and the dangers of evaluating against weak attacks. In International Conference on Machine Learning, pp. 5025–5034.
V. Voracek (2024) Treatment of statistical estimation problems in randomized smoothing for adversarial robustness. Advances in Neural Information Processing Systems 37, pp. 133464–133486.
S. Wang, H. Zhang, K. Xu, X. Lin, S. Jana, C. Hsieh, and J. Z. Kolter (2021a) Beta-CROWN: efficient bound propagation with per-neuron split constraints for neural network robustness verification. Advances in Neural Information Processing Systems 34, pp. 29909–29921.
X. Wang, Z. Zhang, K. Tong, D. Gong, K. He, Z. Li, and W. Liu (2022) Triangle attack: a query-efficient decision-based adversarial attack. In European Conference on Computer Vision, pp. 156–174.
Z. Wang, H. Guo, Z. Zhang, W. Liu, Z. Qin, and K. Ren (2021b) Feature importance-aware transferable adversarial attacks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 7639–7648.
Z. Wang, Z. Zhang, S. Liang, and X. Wang (2023) Diversifying the high-level features for better adversarial transferability. In Proceedings of the British Machine Vision Conference.
X. Wei, C. Kang, Y. Dong, Z. Wang, S. Ruan, Y. Chen, and H. Su (2025) Real-world adversarial defense against patch attacks based on diffusion model. IEEE Transactions on Pattern Analysis and Machine Intelligence.
L. Weng, P. Chen, L. Nguyen, M. Squillante, A. Boopathy, I. Oseledets, and L. Daniel (2019) PROVEN: verifying robustness of neural networks with a probabilistic approach. In International Conference on Machine Learning, pp. 6727–6736.
E. Wong, L. Rice, and J. Z. Kolter (2020) Fast is better than free: revisiting adversarial training. In International Conference on Learning Representations.
D. Wu, S. Xia, and Y. Wang (2020) Adversarial weight perturbation helps robust generalization. Advances in Neural Information Processing Systems 33, pp. 2958–2969.
C. Xie, Z. Zhang, Y. Zhou, S. Bai, J. Wang, Z. Ren, and A. L. Yuille (2019) Improving transferability of adversarial examples with input diversity. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2730–2739.
P. Xie, Y. Bie, J. Mao, Y. Song, Y. Wang, H. Chen, and K. Chen (2025) Chain of attack: on the robustness of vision-language models against transfer-based adversarial attacks. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 14679–14689.
J. Xu, L. Li, J. Zhang, X. Zheng, K. Chang, C. Hsieh, and X. Huang (2022) Weight perturbation as defense against adversarial word substitutions. In Findings of the Association for Computational Linguistics: EMNLP 2022, pp. 7054–7063.
G. Yang, T. Duan, J. E. Hu, H. Salman, I. Razenshteyn, and J. Li (2020) Randomized smoothing of all shapes and sizes. In International Conference on Machine Learning, pp. 10693–10705.
C. Yu, J. Chen, Y. Xue, Y. Liu, W. Wan, J. Bao, and H. Ma (2021) Defending against universal adversarial patches by clipping feature norms. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 16434–16442.
J. Zhang, W. Wu, J. Huang, Y. Huang, W. Wang, Y. Su, and M. R. Lyu (2022a) Improving adversarial transferability via neuron attribution-based attacks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14993–15002.
Y. Zhang, Y. Tan, T. Chen, X. Liu, Q. Zhang, and Y. Li (2022b) Enhancing the transferability of adversarial examples with random patch. In IJCAI, Vol. 8, p. 13.
Y. Zhang, X. Yuan, J. Li, J. Lou, L. Chen, and N. Tzeng (2021) Reverse attack: black-box attacks on collaborative recommendation.
In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, pp. 51–68.
J. Zhao, L. Xie, S. Gu, Z. Qin, Y. Zhang, Z. Wang, and Y. Hu (2025) Universal attention guided adversarial defense using feature pyramid and non-local mechanisms. Scientific Reports 15 (1), p. 5237.
C. Zhou, X. Shi, and Y. Wang (2025) Query-efficient hard-label black-box attack against vision transformers. Applied Soft Computing 183, p. 113686.
K. Zhu, J. Wang, J. Zhou, Z. Wang, H. Chen, Y. Wang, L. Yang, W. Ye, Y. Zhang, N. Gong, et al. (2023) PromptRobust: towards evaluating the robustness of large language models on adversarial prompts. In Proceedings of the 1st ACM Workshop on Large AI Systems and Models with Privacy and Safety Analysis, pp. 57–68.

Appendix A Proof of the Lemma

Proof. Recall that $K$ is the number of classes, and let $S_j$ be the instance of the surrogate model on the $j$-th alternation iteration. Let $\{z_j\}_{j=1}^{\infty}$ be the sequence of adversarial examples, where $z_j$ is an adversarial example for $S_j$ and $z_0 = x$. Note that $z_j \in U_\delta(x)$ for all $j \in \mathbb{Z}_+$. Since $S_j$ is differentiable within $U_\delta(x)$ for all $j \in \mathbb{Z}_+$, for any two points $a, b \in U_\delta(x)$ we may write

$$S_j(a) - S_j(b) = \nabla S_j(\tau)^\top (a - b), \qquad (11)$$

where $\tau \in U_\delta(x)$ lies on the line segment between $a$ and $b$. Specifically, for two subsequent adversarial examples $z_j$ and $z_{j-1}$, the expression becomes

$$S_j(z_j) - S_j(z_{j-1}) = \nabla S_j(\tau_j)^\top (z_j - z_{j-1}), \qquad (12)$$

where $\tau_j$ lies on the line segment between $z_j$ and $z_{j-1}$. Note that $z_j$ is an adversarial example for $S_j$, whereas $z_{j-1}$ was included into the distillation dataset $D(S)$ on the previous alternation iteration. Introducing $\rho_j = \|z_j - z_{j-1}\|_\infty$, one can see that

$$\rho_j \le t\rho_{j-1} \le t^2\rho_{j-2} \le \dots \le t^{j-1}\rho_1 = t^{j-1}\|z_1 - z_0\|_\infty = t^{j-1}\|z_1 - x\|_\infty \le t^{j-1}\delta. \qquad (13)$$

Note that when $\rho_j$ is less than $\varepsilon/\gamma$, the norm of the difference $S_j(z_j) - S_j(z_{j-1})$ is bounded from above. Specifically, let

$$\varphi : [0,1] \to [0,1]^K, \qquad \varphi(t) = S_j\big(z_j + t(z_{j-1} - z_j)\big), \qquad (14)$$

so that

$$\varphi'(t) = \nabla S_j\big(z_j + t(z_{j-1} - z_j)\big)(z_{j-1} - z_j). \qquad (15)$$

Thus,

$$S_j(z_{j-1}) - S_j(z_j) = \varphi(1) - \varphi(0) = \int_0^1 \varphi'(t)\,dt = \int_0^1 \nabla S_j\big(z_j + t(z_{j-1} - z_j)\big)(z_{j-1} - z_j)\,dt. \qquad (16)$$

Now, since

$$\|\nabla S_j\|_{op,\infty} = \sup_{\|x\| \neq 0} \frac{\|\nabla S_j x\|_\infty}{\|x\|_\infty} \;\Longrightarrow\; \|\nabla S_j x\|_\infty \le \|\nabla S_j\|_{op,\infty}\,\|x\|_\infty, \qquad (17)$$

we write

$$\begin{aligned}
\|S_j(z_{j-1}) - S_j(z_j)\|_\infty &= \Big\|\int_0^1 \nabla S_j\big(z_j + t(z_{j-1} - z_j)\big)(z_{j-1} - z_j)\,dt\Big\|_\infty \\
&\le \int_0^1 \big\|\nabla S_j\big(z_j + t(z_{j-1} - z_j)\big)(z_{j-1} - z_j)\big\|_\infty\,dt \\
&\le \int_0^1 \big\|\nabla S_j\big(z_j + t(z_{j-1} - z_j)\big)\big\|_{op,\infty}\,\|z_{j-1} - z_j\|_\infty\,dt \\
&\le \|z_j - z_{j-1}\|_\infty \int_0^1 \sup\Big(\big\|\nabla S_j\big(z_j + t(z_{j-1} - z_j)\big)\big\|_{op,\infty}\Big)\,dt \\
&\le \gamma\,\|z_j - z_{j-1}\|_\infty < \gamma \cdot \varepsilon/\gamma = \varepsilon. \qquad (18)
\end{aligned}$$

This yields

$$\arg\max_{i \in [1,\dots,K]} S_j(z_j)_i = \arg\max_{i \in [1,\dots,K]} S_j(z_{j-1})_i \qquad (19)$$

according to Eq. 9. Recall that $z_{j-1}$ is included into the distillation dataset $D(S)$ on iteration $j-1$, so

$$\arg\max_{i \in [1,\dots,K]} S_j(z_{j-1})_i = T(z_{j-1}) \qquad (20)$$

according to Eq. 4. Now observe that Algorithm 1 yielded an adversarial example, namely $z_j$, for the surrogate model on iteration $j$. At the same time, the predictions of the surrogate for $z_{j-1}$ and $z_j$ coincide (see Eq. 19). This means that the predicted class label for $z_j$, say $c_A = \arg\max_{i \in [1,\dots,K]} S_j(z_j)_i$, is the label assigned by $T$ to the previous sample $z_{j-1}$:

$$c_A = \arg\max_{i \in [1,\dots,K]} S_j(z_j)_i = T(z_{j-1}). \qquad (21)$$

Finally, for the values of $j$ satisfying

$$t^{j-1}\delta \le \varepsilon/\gamma \;\overset{t \in (0,1)}{\Longleftrightarrow}\; (j-1)\ln t \le \ln\varepsilon - \ln\delta - \ln\gamma, \qquad (22)$$

the value $\rho_j$ is less than $\varepsilon/\gamma$, which finalizes the proof. ∎

Appendix B Hyperparameters of Baseline Methods

In this section, we present the values of the hyperparameters used in the methods with which we compare our approach. In all experiments, the query budget was set to 500; the only exception is the ResNet-50 model on the CIFAR-10 dataset, where the query budget was limited to 300.

Table 9: Hyperparameters of baseline methods

| Method | Hyperparameters |
|---|---|
| HopSkipJump $l_2$ / $l_\infty$ | num_samples_for_init = 100, num_samples_for_grad_est = 100, max_iter = 100 |
| Sign-OPT | num_samples_for_init = 100, num_samples_for_grad_est = 100, max_iter = 100 |
| GeoDA | sub_dim = 150, db_search_steps = 200, bin_search_tol = 0.0001, λ = 0.6, σ = 0.0002 |
| PAR | initial_patch_size = 56, min_patch_size = 7 |
| AdvViT | num_samples_for_init = 100, init_attempts_extra = 100, patch_num = 14, dim_size = 4, α = 4.0, K_sign = 100 |
| SquareAttack $l_\infty$ | ε = 32/255, p_init = 0.05 |
| SparseRS | norm = $\ell_0$, ε = 2000, p_init = 0.3 |
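The iteration bound of Eq. 22 in Appendix A can be checked numerically: it asks for the smallest $j$ at which the contracted radius $t^{j-1}\delta$ falls below $\varepsilon/\gamma$. A small sketch follows; the parameter values are illustrative, not taken from the paper.

```python
# Smallest iteration index j at which t**(j-1) * delta <= eps / gamma,
# i.e. the bound of Eq. 22; the concrete parameter values below are ours.
import math

def min_iterations(t: float, delta: float, eps: float, gamma: float) -> int:
    """Smallest j >= 1 with t**(j - 1) * delta <= eps / gamma, for t in (0, 1)."""
    if delta <= eps / gamma:
        return 1
    # (j - 1) * ln t <= ln eps - ln delta - ln gamma; dividing by ln t < 0 flips it
    return 1 + math.ceil((math.log(eps) - math.log(delta) - math.log(gamma)) / math.log(t))

j = min_iterations(t=0.9, delta=8 / 255, eps=0.01, gamma=5.0)
assert 0.9 ** (j - 1) * (8 / 255) <= 0.01 / 5.0   # the bound holds at j ...
assert 0.9 ** (j - 2) * (8 / 255) > 0.01 / 5.0    # ... and fails at j - 1
```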