Paper deep dive
Optimal Path Planning in Hostile Environments
Andrzej Kaczmarczyk, Šimon Schierreich, Nicholas Axel Tanujaya, Haifeng Xu
Intelligence
Status: succeeded | Model: google/gemini-3.1-flash-lite-preview | Prompt: intel-v1 | Confidence: 93%
Last extracted: 3/22/2026, 5:03:08 AM
Summary
The paper introduces a multi-agent path planning problem in hostile environments: agents must navigate a graph containing traps with known reload times. The authors analyze its computational complexity, proving that the problem is in NP, that it is NP-hard even on trees, and that it is solvable in polynomial time on topologies consisting of vertex-disjoint start-target paths.
Entities (4)
Relation Signals (3)
Trap → eliminates → Asset
confidence 98% · If an asset enters a location protected by some active trap, then the asset is eliminated
Asset → navigates → Topology
confidence 95% · A group of identical agents begins at a common start location and must navigate a graph-based environment
Routing Plan → hascomplexity → NP-hard
confidence 90% · We then show that the problem is NP-hard even when the environment graph is a tree.
Cypher Suggestions (2)
Find all entities related to the Routing Plan problem · confidence 90% · unvalidated
MATCH (p:Problem {name: 'Routing Plan'})-[r]->(e) RETURN p, r, e
Identify components that interact with assets · confidence 85% · unvalidated
MATCH (a:Agent)-[:NAVIGATES]->(g:Graph)<-[:PROTECTS]-(t:Component) RETURN a, t, g
Abstract
Coordinating agents through hazardous environments, such as aid-delivering drones navigating conflict zones or field robots traversing deployment areas filled with obstacles, poses fundamental planning challenges. We introduce and analyze the computational complexity of a new multi-agent path planning problem that captures this setting. A group of identical agents begins at a common start location and must navigate a graph-based environment to reach a common target. The graph contains hazards that eliminate agents upon contact but then enter a known cooldown period before reactivating. In this discrete-time, fully-observable, deterministic setting, the planning task is to compute a movement schedule that maximizes the number of agents reaching the target. We first prove that, despite the exponentially large space of feasible plans, optimal plans require only polynomially-many steps, establishing membership in NP. We then show that the problem is NP-hard even when the environment graph is a tree. On the positive side, we present a polynomial-time algorithm for graphs consisting of vertex-disjoint paths from start to target. Our results establish a rich computational landscape for this problem, identifying both intractable and tractable fragments.
Tags
Links
- Source: https://arxiv.org/abs/2603.18958v1
- Canonical: https://arxiv.org/abs/2603.18958v1
Full Text
103,286 characters extracted from source content.
Optimal Path Planning in Hostile Environments

Andrzej Kaczmarczyk (1), Šimon Schierreich (2), Nicholas Axel Tanujaya (3), Haifeng Xu (4)
(1) Czech Technical University in Prague, (2) AGH University of Krakow, (3) Bina Nusantara University, (4) University of Chicago
andrzej.kaczmarczyk@cvut.cz, schiesim@fit.cvut.cz, nicholas.axel.135@gmail.com, haifengxu@uchicago.edu

Abstract

Coordinating agents through hazardous environments, such as aid-delivering drones navigating conflict zones or field robots traversing deployment areas filled with obstacles, poses fundamental planning challenges. We introduce and analyze the computational complexity of a new multi-agent path planning problem that captures this setting. A group of identical agents begins at a common start location and must navigate a graph-based environment to reach a common target. The graph contains hazards that eliminate agents upon contact but then enter a known cooldown period before reactivating. In this discrete-time, fully-observable, deterministic setting, the planning task is to compute a movement schedule that maximizes the number of agents reaching the target. We first prove that, despite the exponentially large space of feasible plans, optimal plans require only polynomially-many steps, establishing membership in NP. We then show that the problem is NP-hard even when the environment graph is a tree. On the positive side, we present a polynomial-time algorithm for graphs consisting of vertex-disjoint paths from start to target. Our results establish a rich computational landscape for this problem, identifying both intractable and tractable fragments.

1 Introduction

A humanitarian aid organization is planning to transport life-critical supplies through a conflict zone. The hostile actions of parties involved may severely constrain the planning task: these parties may attempt to divert or seize aid shipments. In particular, hostile forces monitor the locations under their control.
When a convoy crosses such a location, it is detected and captured. However, capturing a convoy occupies the hostile patrol for a predictable period during which subsequent convoys may pass safely. How should the organization schedule dispatches to maximize the number of convoys reaching their destination?

This scenario motivates a more general planning problem. A central planner manages a set of identical agents at a common starting location. They seek to navigate as many of these agents as possible through a hostile environment to a designated target, potentially sacrificing some agents to enable others to pass. Beyond humanitarian logistics (van Wassenhove, 2006; Rottkemper and Fischer, 2013), this model captures applications such as cargo transport through pirate-prone waters (Jakob et al., 2012), UAV-based delivery of resources (Wu et al., 2016a), and analysis of cybersecurity intrusion detection where probe packets cause a temporary detector failure (Ptacek and Newsham, 1998).

We formalize the environment as a graph whose vertices represent locations and whose edges represent feasible transitions. Two vertices are designated as the start and target. Hostility is modeled via traps placed at certain vertices. Each trap eliminates any agent that enters its vertex but then enters a reload period of known duration before reactivating. The planning task is to compute a movement schedule that maximizes the number of agents reaching the target.

This formulation yields a multi-agent path planning problem with a new temporal-strategic dimension. A rich literature spans related topics including constrained path finding (Phillips and Likhachev, 2011; Ahmadi et al., 2021), multi-agent path finding (Stern et al., 2019), token reconfiguration (Călinescu et al., 2008), pursuit-evasion games (Borie et al., 2011), and humanitarian logistics in conflict zones (Boehmer et al., 2024) (see Section 2 for further discussion).
However, to the best of our knowledge, the existing models do not address our specific problem. The key distinction lies in the nature of our traps, which are statically placed (limiting the applicability of game-theoretic frameworks), deterministic (making stochastic models unnecessarily general), and, crucially, which require a reload time after each capture. This reload mechanic is new and demands novel analytical techniques.

We briefly justify our modeling choices. First, the reloading of traps captures deceptive strategies in which early agents absorb hostile attention, allowing those who follow to pass. Second, we model a static rather than adversarial environment. Instead of analyzing repeated games, we focus on a single, short-horizon assignment within a known environment, an essential building block for longer-horizon decision-making. Third, while stochastic models may appear more realistic, they are notoriously difficult to solve in settings like ours. Furthermore, our deterministic approach isolates the fundamental computational structure of the problem and admits a natural interpretation as worst-case reasoning by a risk-averse planner. Finally, we assume that the reload durations are known. In practice, these can be estimated from historical data or knowledge of standard operational procedures.

Our Contribution. We introduce a novel multi-agent path-planning model for transportation in hostile environments and analyze it from the perspective of computational complexity. First, we identify tractable fragments of the problem. We show that if the underlying topology is a simple path, then an optimal strategy can be computed in polynomial time. Despite the simplicity of this topology, our run-wait-sacrifice strategy requires surprisingly non-trivial arguments about the structure of optimal plans.
We then extend these results to more general topologies whose condensation (see Definition 4) has a linear structure; these include, e.g., vertex-disjoint unions of paths. In contrast to these positive results, we establish computational hardness for modest generalizations. Specifically, we prove that the problem is NP-hard even when the underlying topology is a tree with maximum degree 3, or when each agent crosses at most 6 traps. For the sake of readability, we deferred some proofs to Appendix A; these are marked with a symbol linking to the corresponding proofs.

2 Related Work

Our problem is connected to various research streams in computer science, mathematics, operations research, artificial intelligence, and the planning literature.

Classical Path Finding. The foundational problem underlying our model is Reachability: given a graph G and vertices s and t, determine whether t can be reached from s. This can be decided in linear time via a depth-first search, and shortest paths can be computed in polynomial time using Dijkstra's algorithm (Dijkstra, 1959) or, with heuristic guidance, the A* algorithm (Hart et al., 1968). However, our setting differs fundamentally: the presence of traps with reload periods introduces temporal dynamics, and we must coordinate paths for multiple agents rather than a single one. Substantial work has also addressed reachability under uncertainty (Wagner and Choset, 2017; Shofer et al., 2023), in distributed settings (Čáp et al., 2015), and in faulty environments (Papadimitriou and Yannakakis, 1991; Fried et al., 2013; Fioravantes et al., 2025b); in contrast, we assume a fully-informed central planner and a deterministic, fault-free environment.

Dynamic and Temporal Environments. Recent research has explored path finding in graphs whose structure changes over time. In temporal graphs, edge availability is restricted to specific discrete time steps.
While single-agent reachability remains polynomial-time solvable in temporal graphs (Wu et al., 2016b), multi-agent variants become computationally hard even on simple paths (Klobas et al., 2023). In a different direction, Carmesin et al. (2023) and Dvořák et al. (2025) studied self-deleting graphs, where visiting a vertex removes certain incident edges. Our model differs from both: all edges remain available at all times, but agents can be eliminated by traps that subsequently enter a reload period.

Multi-Agent Path Finding. Our work is closely related to the extensively studied Multi-Agent Path Finding (MAPF) problem (Felner et al., 2017; Stern et al., 2019; Wang et al., 2025). In MAPF, each of k agents has a designated start and goal location, and the objective is to find collision-free paths minimizing makespan or total travel time. MAPF has been thoroughly analyzed from a computational complexity perspective (Surynek, 2010; Yu and LaValle, 2013; Nebel, 2024; Fioravantes et al., 2024, 2025a; Deligkas et al., 2025), and numerous algorithmic approaches have been developed (Sharon et al., 2015; Li et al., 2019; Surynek, 2022). Our problem differs in several key respects: (i) all agents share a common start and target, (ii) certain vertices contain traps that eliminate agents, and (iii) the objective is to maximize the number of surviving agents rather than minimizing travel time. These differences fundamentally change the problem structure.

Pursuit-Evasion Games. Pursuit-evasion games (Parsons, 1976; Borie et al., 2011) study scenarios where pursuers attempt to capture evaders in a graph. The complexity of these problems depends heavily on the graph class and movement rules; for instance, determining the minimum number of pursuers needed to clear a graph is related to treewidth and pathwidth (Seymour and Thomas, 1993). While superficially similar, our model differs substantially.
Our traps are statically placed rather than actively pursuing agents, they operate deterministically rather than strategically, and, crucially, they need to reload after each capture, a feature not considered in classical pursuit-evasion problems.

Token Reconfiguration. In token reconfiguration problems (Kornhauser et al., 1984; Hearn and Demaine, 2005; Călinescu et al., 2008; van den Heuvel, 2013), tokens placed on graph vertices must be moved from an initial configuration to a target configuration under various movement rules (sliding along edges or jumping to non-adjacent vertices). These problems are typically NP-complete and APX-hard on general graphs, though polynomial-time algorithms exist, e.g., for trees (Călinescu et al., 2008; Demaine et al., 2015). While our agents sometimes resemble moving tokens (and we use this analogy in some proofs), the key distinction is that our agents may be eliminated during transit, and the objective is to maximize the number of agents reaching the target.

Video Games and Motion Planning. A surprising connection exists between our work and certain video games. For example, in Lemmings (McCarthy, 1998), agents spawn from a door and must be guided to a target, navigating hazards that become temporarily passable after eliminating an agent. The key difference is that Lemmings agents move autonomously and are controlled only indirectly via skill assignments, whereas we directly control agent movements. Moreover, Lemmings solutions can require exponentially many steps (Forišek, 2010), and the decision problem is PSPACE-complete (Viglietta, 2015); in contrast, we show that optimal plans in our model have polynomial length.

Security Games. Finally, there are several game-theoretical models related to security applications, like Stackelberg (Tambe, 2012) or Blotto (Behnezhad et al., 2018; Kaźmierowski, 2025) games.
The closest to ours are the recently introduced escape sensing games (Boehmer et al., 2024), where one player tries to move as many of her assets as possible from the start to the target location, while an adversary uses sensors to detect some of the moving assets. In contrast, in our work the behavior of traps is implicit from their reload times.

Figure 1: An illustration of our problem showing timesteps 0 to 5. There are two assets, red and blue. The reload time of the single trap r is c(r)=1. If the trap is depicted in light gray, then it is inactive in this round.

3 The Model

We model the space of possible moves as an undirected graph G=(V,E) that we refer to as the topology. Herein, the vertices represent certain important locations (we use the terms location and vertex interchangeably) and the edges are shortest connections between them. We assume that traversing an edge takes one unit of time. There are two distinguished locations {s,t} ⊆ V, the initial and target location, respectively. We use n to denote the number of vertices of G. For some V′ ⊆ V, we use G[V′] to denote the subgraph of G induced by V′. For a positive integer x, [x] represents the set {1,2,…,x}. We assume a set A of m assets a_1,…,a_m (we prefer "assets" over "agents," as the former is more general, encompassing both autonomous and non-autonomous entities), which are placed at the initial location s at the beginning. Our goal is to move as many assets as possible to the target location t (we do not consider other goals, such as the total time of all assets reaching the target location). The assets move in discrete rounds using edges of the graph, and no pair of assets can occupy the same vertex (except for s and t) in the same round. Moreover, a subset of locations R ⊆ V∖{s,t} is protected by traps r_1,…,r_τ.
These traps are known in advance, and each trap r_i, i ∈ [τ], is associated with its reload time c(r_i) ∈ ℕ. Initially, each trap is active. If an asset enters a location protected by some active trap r_i, then the asset is eliminated, and r_i becomes inactive. If r_i is inactive, then assets can pass it without being eliminated. Trap r_i remains inactive for c(r_i) rounds after it was triggered, and then becomes active again.

Example 1. For an illustration of the problem, see the instance in Figure 1. There are two assets, red and blue, and a single trap with a reload time of one. Initially, both assets are placed on the initial vertex s. In the first round, the blue asset moves to the single neighbor of s and the red asset has to stay on s, as otherwise they would occupy the same vertex, which is forbidden. In the next round, both assets move one step toward t. The blue asset activates the trap, is eliminated, and the trap becomes inactive. Since the trap is inactive, the red asset can move to its location in the next round without being eliminated. However, in round 4, the red asset must leave this location, as the trap gets active again and would destroy the red asset. Finally, the red asset moves to the target location.

The movement of assets is represented by a movement plan, or simply 'plan.' Formally, introducing a special location † ∉ V for the eliminated assets, a plan ρ : A × ℕ_0 → V ∪ {†} assigns every asset a ∈ A and timestep j ∈ ℕ_0 the location v ∈ V ∪ {†} of a at time j. The plan ρ is valid if all the following conditions are satisfied:
1. ρ(a,0)=s for every a ∈ A and, for every j>0, if ρ(a,j)=s then ρ(a,j−1)=s; that is, every asset starts at the initial location s and never returns there after leaving once,
2. if ρ(a,j) ∈ {t,†} for some a ∈ A and j ∈ ℕ_0, then ρ(a,j+1)=ρ(a,j); that is, assets cannot leave the target and special location after they reach them,
3.
for every a ∈ A and every j ∈ ℕ it holds that if ρ(a,j) ≠ †, then ρ(a,j) is adjacent to or equal to ρ(a,j−1); that is, at each round assets move along the edges of the topology or stay put,
4. for two distinct assets a,a′ ∈ A and j ∈ ℕ_0, ρ(a,j)=ρ(a′,j) if and only if ρ(a,j) ∈ {s,t,†}; that is, no two assets can stand on the same vertex v, unless v is the initial, target, or special location †,
5. if ρ(a,j)=r ∈ R for some a ∈ A and j ∈ ℕ and there is no a′ ∈ A and ℓ ∈ [c(r)] such that ρ(a′, max{0, j−ℓ})=r, then ρ(a,j+1)=†; that is, if an asset steps on an active trap, it is eliminated,
6. for every j ∈ ℕ, if there is a ∈ A such that ρ(a,j) ∉ {t,†}, then for at least one a′ ∈ A we have ρ(a′,j) ≠ ρ(a′,j+1); that is, as long as some asset remains, at least one asset moves each round, and
7. there is some len(ρ) ∈ ℕ such that for every a ∈ A it holds that ρ(a,len(ρ)) ∈ {t,†}, and there is an asset a ∈ A with ρ(a,len(ρ)−1) ∉ {t,†}; that is, all assets either reach the target or are eliminated in finite time.

Unless stated explicitly, we assume only valid plans. For a valid plan ρ of length len(ρ), we say that the plan ends at len(ρ), and we say that a ∈ A survived if ρ(a,len(ρ))=t; otherwise, a failed.

Example 2. A plan capturing the movement of the assets as described in Example 1 is as follows (we assume that the internal vertices of G are, from left to right, v_2, v_3, and v_4).

  ρ    | 0 | 1   | 2   | 3   | 4   | 5
  blue | s | v_2 | v_3 | †   | †   | †
  red  | s | s   | v_2 | v_3 | v_4 | t

It is easy to verify that this plan is valid and has length len(ρ)=5. We study the above-defined model through the lens of computational complexity of the following decision problem.
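The validity conditions above can be checked mechanically. As a rough illustration, the following sketch (all names are ours, not the paper's) simulates a plan against the trap mechanics and counts survivors; edge adjacency and the start/termination conditions are omitted for brevity:

```python
# Minimal sketch (hypothetical names): simulate a movement plan against the
# trap mechanics of the model and count survivors. DEAD plays the role of
# the special location †. Only the collision rule (condition 4) and the
# trap rule (condition 5) are checked here.

DEAD = "dead"

def surviving_assets(plan, traps, s, t):
    """plan: one location sequence per asset, all of equal length.
    traps: dict mapping a trap vertex to its reload time c(r)."""
    rounds = len(plan[0])
    reactivation = {r: 0 for r in traps}  # first round trap r is active again
    state = [list(p) for p in plan]
    for j in range(1, rounds):
        for path in state:
            v = path[j]
            if v in traps and j >= reactivation[v]:
                # active trap: the asset is eliminated and the trap reloads,
                # staying inactive for c(r) rounds after being triggered
                reactivation[v] = j + traps[v] + 1
                for jj in range(j + 1, rounds):
                    path[jj] = DEAD
        # condition 4: no two assets share a vertex other than s, t, or †
        occupied = [p[j] for p in state if p[j] not in (s, t, DEAD)]
        assert len(occupied) == len(set(occupied)), f"collision at round {j}"
    return sum(1 for p in state if p[-1] == t)

# The plan of Example 2: blue is eliminated on the trap v3, red survives.
blue = ["s", "v2", "v3", DEAD, DEAD, DEAD]
red  = ["s", "s",  "v2", "v3", "v4", "t"]
print(surviving_assets([blue, red], {"v3": 1}, "s", "t"))  # prints 1
```

Run on the plan of Example 2, the red asset passes the trap during its one-round reload window, so exactly one asset survives.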
Routing Plan
Input: A set of assets A, a topology G=(V,E), distinct initial and target vertices s,t ∈ V, a set of traps R ⊆ V∖{s,t}, a reload time function c : R → ℕ, and a goal k ∈ [|A|].
Question: Is there a valid plan ρ yielding at least k surviving assets?

Remark 1. Unlike in (some variants of) Multi-Agent Path Finding, we do not explicitly forbid two assets from swapping positions across an edge. However, since our assets are indistinguishable, any swap of two assets a_i and a_j in a plan ρ can be eliminated by the following transformation: (i) at the time of the swap, let both assets wait at their respective pre-swap positions (i.e., before they would have crossed the edge); (ii) after that time, assign to a_i the remainder of a_j's plan, and to a_j the remainder of a_i's plan. By exhaustively applying this transformation to all swaps in ρ, one obtains a modified plan without any swaps, achieving the same number of surviving assets, with identical temporal performance. Thus, when the application requires a plan without swaps, we may enforce this restriction without loss of optimality.

4 The Existence of Short Plans

The properties required in the definition of a valid plan directly forbid some unreasonable solutions, such as infinite plans or keeping all assets at the initial location without moving any of them. However, the definition is not strong enough to forbid very lengthy plans, e.g., plans whose length is exponential in the input size. The following theorem shows that a plan ρ ensuring that k assets survive implies a short plan ρ′ with at least the same number of surviving assets.

Theorem 1. Let I=(G,s,t,R,c,A,k) be an instance of Routing Plan and let ρ be a plan of length ℓ with at least k surviving assets witnessing that I is a Yes-instance. Then, there is a plan ρ′ of length ℓ′ ∈ poly(n,m) with at least k surviving assets.

Proof.
Our first step is to transform the given network G into a vertex-colored graph that depends on ρ. Given the new colored graph, we interpret movement plans as graph-coloring reconfigurations and combine existing results with new observations to obtain our theorem.

Definition 1. A simple transform of a network G=(V,E) and a number k of surviving assets for some plan is a graph Ĝ=(V ∪ S ∪ T, E ∪ E′) such that the vertex set S consists of m−1 vertices arranged in a path attached at one of its ends to s ∈ V, the vertex set T consists of k−1 vertices arranged in a (different) path attached with one of its ends to t, and E′ contains the respective edges.

To fully exploit the simple transform and capture the dynamics of moving assets, we define a specific graph coloring.

Definition 2. For some timestep i ∈ ℕ, an i-snapshot of a network G=(V,E) and a plan ρ with length at least i is a tuple snap_i^{G,ρ}=(Ĝ,h), where Ĝ=(V ∪ S ∪ T, E ∪ E′) is the simple transform of G and ρ, and h is a binary coloring of the vertices of Ĝ such that:
1. for each v ∈ V, h(v)=1 if there is an asset a ∈ A that, according to ρ, occupies v at time i, and h(v)=0 otherwise;
2. given that x assets occupy the initial vertex s at time i, h(v)=1 for the x−1 vertices v ∈ S whose distance from s is at most x−1, and h(v)=0 for the remaining vertices in S; and
3. given that y assets occupy the terminal vertex t at time i, h(v)=1 for the y−1 vertices v ∈ T whose distance from t is at most y−1, and h(v)=0 for the remaining vertices in T.

For readability, we use snap_i whenever it is unambiguous. Conveniently, the number of timesteps needed to reach one snapshot from another is upper-bounded by a polynomial.

Lemma 1.
Given a network G=(V,E), an empty set R=∅ of traps, a plan ρ, and two snapshots snap_i^{G,ρ} and snap_j^{G,ρ} for i ≤ j ≤ len(ρ), there is a plan ρ′ of length len(ρ′) < (2m+n)² such that snap_0^{G,ρ′}=snap_i and snap_{len(ρ′)}^{G,ρ′}=snap_j.

Proof. By definition, snapshots snap_i^{G,ρ} and snap_j^{G,ρ} are in fact two different colorings h_i and h_j, both with two colors, of the same graph Ĝ=(V ∪ S ∪ T, E ∪ E′), which has n̂ := m−1+n+k−1 < 2m+n vertices, as k ≤ m. Given this interpretation, finding the claimed moving plan means obtaining the 2-coloring h_j from h_i via a series of certain steps. Specifically, in each step, we can swap the colors of a pair of adjacent vertices of mutually distinct colors. Indeed, such a step represents moving an asset from the vertex it occupies (colored 1 by definition of a snapshot) to some vertex that no asset occupies (colored 0 by the same definition). It is sufficient to upper-bound the number of steps needed for transforming the colorings. To do so, we directly apply the result of Yamanaka et al. (2018, Lemma 1) stating that the upper bound is (n̂ choose 2) < (2m+n)², as we claim. ◀

We introduce a restricted snapshot that allows us to analyze snapshots of a certain part of the topology.

Definition 3. Consider some timestep i ∈ ℕ, a network G=(V,E), a plan ρ, an i-snapshot snap_i^{G,ρ}=(Ĝ,h), where Ĝ=(V ∪ S ∪ T, E ∪ E′), and a subset V′ ⊆ V. Let V̂ be a copy of V′ augmented with S if s ∈ V′ and with T if t ∈ V′. Then, a V′-restricted i-snapshot snap_i^{G,ρ}|V′ is a tuple (Ĝ[V̂], h|V̂), where h|V̂ is the restriction of h to vertices in V̂.

It is immediate that Lemma 1 can be applied to a pair of snapshots restricted to a set of vertices that do not contain traps and form a connected component.

Corollary 1.
Given a network G=(V,E), a set R ⊂ V of traps, a plan ρ, a subset V′ ⊆ V such that V′ ∩ R = ∅ and G[V′] is a connected component, and two snapshots snap_i^{G,ρ}|V′ and snap_j^{G,ρ}|V′ for i ≤ j ≤ len(ρ) whose numbers of vertices colored 1 are the same, there is a plan ρ′ of length len(ρ′) < (2m+n)² such that snap_0^{G,ρ′}|V′ = snap_i^{G,ρ}|V′ and snap_{len(ρ′)}^{G,ρ′}|V′ = snap_j^{G,ρ}|V′.

We now show that contracting a part of a plan between two snapshots is sufficient to contract the whole plan.

Lemma 2. Given an instance (G,s,t,R,c,A,k) of Routing Plan, let ρ be a valid plan of length len(ρ)=x+δ+y in which at least k assets survive such that ρ does not activate any trap in timesteps x+1 to x+δ−1, inclusive. If there is a (non-valid) plan ρ′ of length len(ρ′)=x+δ′ such that
1. for each asset a ∈ A and all i ∈ [x] it holds that ρ(a,i)=ρ′(a,i);
2. snap_{x+δ′}^{ρ′} = snap_{x+δ}^{ρ};
3. plan ρ′ does not activate any trap in timesteps x+1 to x+δ′−1, inclusive; and
4. δ′ < δ;
then there exists a valid plan ρ″ of length len(ρ″)=x+δ′+y with at least k surviving assets.

Proof. We build plan ρ″ by copying ρ′ and then extending it with the last y steps of ρ, adapting them if needed. Consider running ρ″ according to ρ′ until timestep x+δ′. Since snap_{x+δ′}^{ρ′} = snap_{x+δ}^{ρ}, we now attempt to replicate in ρ″ every timestep j, in the natural order, from x+δ+1 to x+δ+y according to plan ρ. Note that we certainly succeed for the timestep x+δ+1 because snap_{x+δ}^{ρ} = snap_{x+δ′}^{ρ′}. For some j, the replication might become impossible. That is, there is an asset a such that at timestep j of ρ this asset should occupy a vertex v occupied by another asset a′.
Our intention is to leave a intact at vertex v at timestep j, but this might invalidate the move of another asset at the same time, namely the one that was supposed to take position at v instead of a. Hence, we collect those assets that we cannot move due to leaving a intact. On the graph, these assets form either a path or a cycle. In both cases, we leave all of these assets intact at timestep j. Since we are considering the first such j for which mimicking ρ failed, the described situation can happen only because of some asset a′ that was destroyed at time j. This means, in particular, that a′ moves at time j to the destroyed state † in plan ρ. However, we also let a′ stay intact, as it is now not destroyed; for, in the opposite case, a would have been able to take its place. It is not hard to verify that, because of these alignments, we obtain a snapshot snap_j^{ρ″} whose set O of vertices colored 1 is a superset (potentially a strict superset) of that of snap_j^{ρ}. Put in words, the former contains all assets at the same vertices as the latter, and possibly even more assets. For the next step, we relabel the assets in plan ρ such that they meet exactly the positions of vertices in our just-extended plan ρ″. There may be assets that cannot be matched, and these are deemed to stay put in ρ″, unless another conflict like the one described appears. As a result of running the whole "copying" procedure, we obtain a plan ρ″ that loses at most as many assets as the original plan ρ. Indeed, our copying procedure never decreases the number of non-destroyed assets. Potentially, the remaining assets might be directed to the final vertex, if that turns out to be possible. ◀

We combine the previous observations into the following crucial lemma, which upper-bounds the number of timesteps between two subsequent activations of traps.

Lemma 3.
Let k_1,…,k_q, with k_i ≤ k_{i+1} for each i ∈ [q−1], be the timesteps at which plan ρ activates traps. Then, if there is i ∈ [q−1] such that k_{i+1}−k_i > (2m+n)², then there is a plan ρ′ such that snap_0^{G,ρ′}=snap_0^{G,ρ} and snap_{len(ρ′)}^{G,ρ′}=snap_{len(ρ)}^{G,ρ} and such that in the trap activation series k′_1,…,k′_q of ρ′ it holds that k′_{i+1}−k′_i < (2m+n)².

Proof. Let us fix i such that k_{i+1}−k_i > (2m+n)², and let K ⊆ V be the set of vertices passed by at least one asset between timesteps k_i and k_{i+1}, inclusive. Now, let us consider the graph G′=G[(V∖R) ∪ K]. Note that the vertices of G′ are either non-trap vertices or vertices whose trap is deactivated between timesteps k_i and k_{i+1}, so they can be considered non-trap vertices during that period. Let C_1,C_2,…,C_x be the connected components of G′ (clearly, there must be at least one). According to Corollary 1, for each component C_j, j ∈ [x], there is a plan ρ_j of length less than (2m+n)² such that snap_0^{ρ_j}|C_j = snap_{k_i+1}^{ρ}|C_j and snap_{len(ρ_j)}^{ρ_j}|C_j = snap_{k_{i+1}−1}^{ρ}|C_j. Let L be the maximum length among these new plans ρ_1,ρ_2,…,ρ_x. We build a new plan ρ′ as follows. We start by copying the plan ρ for the first k_i timesteps for all assets. Then, we let each group A_j of assets that remained in the respective connected component C_j at timesteps k_i+1 to k_{i+1}−1 follow their plan ρ_j for len(ρ_j) timesteps. For each j with len(ρ_j) < L, we let the respective assets from group A_j stay put for the next L−len(ρ_j) timesteps. Finally, we again let all assets follow the original plan ρ, which completes the construction of ρ′.
Naturally, snap_y^{G,ρ′} = snap_y^{G,ρ} for each y ∈ [0,k_i], as the respective plans are identical at these timesteps. By applying Corollary 1 to plans ρ_1,…,ρ_x, copying these plans into ρ′, and enforcing the common length L by forced idling, it holds that snap_{k_i+L}^{G,ρ′} = snap_{k_{i+1}−1}^{G,ρ}. Note that since the vertices of G′ are either non-trap vertices or vertices whose trap is deactivated between timesteps k_i and k_{i+1}, applying the forced idling does not activate any trap. Finally, for the remaining parts of plans ρ and ρ′, we note that they meet the conditions of Lemma 2. So, we apply this lemma and get that in plan ρ′ there are at least as many surviving assets as in plan ρ. ◀

To get the result, we exhaustively apply Lemma 3. ∎

Due to the previous theorem, we can focus only on deciding whether there is a short solution to the Routing Plan problem. Hence, for the rest of the paper, we assume only short solutions without explicitly specifying it. Therefore, we obtain the following.

Theorem 2. Routing Plan is in NP.

Proof. By Theorem 1, if a given instance I is a Yes-instance, then there exists a certificate (a plan ρ) of polynomial length. To verify the certificate, we simulate it, check its validity at every step, and observe whether at least k assets have been successful. ∎

5 Efficient Algorithms

The containment in NP established in the above section is just the first step on the way to understanding the hardness of Routing Plan. In particular, it does not imply efficient solvability, a trait potentially of practical relevance. So, can we do better and solve Routing Plan in polynomial time? Chasing the answer to this question, let us first make several helpful assumptions on the input of Routing Plan. By this, we avoid distracting trivial cases and simplify the formal analysis of the computational complexity of our problem.
To this end, let us fix some instance I = (G, s, t, R, c, A, k) of Routing Plan. Then, we assume without loss of generality that:

1. G contains an s-t path, as otherwise I is trivially a No-instance;
2. G is connected; if not, using breadth-first search, we identify in polynomial time all components of G and narrow the topology down to the one containing s and t;
3. G[V ∖ R] does not contain an s-t path; otherwise, I is a Yes-instance because all assets survive by following the trap-free path in question (note that k ≤ |A|).

Having identified and dismissed the obvious cases, our first result delivers good news. It turns out that Routing Plan is efficiently solvable for path topologies by following an intuitive run-wait-sacrifice plan. Here, each asset always moves towards the target (runs) as long as the next location is unoccupied and not an active trap, waits before an active trap r until it is followed by at least c(r) assets, and then sacrifices itself by stepping on r, thus letting the c(r) followers pass the now-inactive r. In contrast to the general simplicity of the run-wait-sacrifice approach, the proof of its optimality involves a complex and careful analysis.

Theorem 3. If G is a path, Routing Plan is solvable in polynomial time.

Proof. Our algorithm is built on top of the auxiliary Claims 1, 2 and 3 below, which show what an optimal solution for path graphs looks like. Before focusing on the structure of a solution, we make several observations to dispose of trivial cases. Without loss of generality, we assume in the rest of the proof that (i) deg(s) = 1, as by property 1 of a valid plan, no asset can return to s once it leaves it, and (ii) deg(t) = 1, as by property 2 of a valid plan, no asset can leave t once it reaches it. Both properties heavily rely on the fact that the topology under consideration is a simple path. As a consequence, every solution plan follows the shortest s-t path.
For the rest of the proof, we assume that V(G) = {v_1, …, v_n}, E(G) = {{v_i, v_{i+1}} | i ∈ [n−1]}, s = v_1, and t = v_n. The previous properties show that the assets use solely the vertices of the shortest s-t path. However, they do not necessarily ensure that assets never "return" to previously left locations. In our first auxiliary claim, we prove that we can indeed restrict attention to plans where, in each step, every asset either waits at its current location or moves closer to the target location t.

Claim 1. If an instance I of Routing Plan, where G is a path, admits a solution plan ρ, then it also admits a solution plan ρ′ such that for every a ∈ A and every timestep j we have ρ′(a, j+1) = ρ′(a, j); ρ′(a, j) = v_i and ρ′(a, j+1) = v_{i+1} for some i ∈ [n−1]; or ρ′(a, j+1) = †.

Next, we show that if an asset can move to an empty vertex closer to the target, that is, a vertex that is not a trap and is not occupied by another asset, then this asset can always do so.

Claim 2. Let ρ be a solution plan and a ∈ A be an asset such that ρ(a, j) = v_i for some i ∈ [n−1], ρ(a, j+1) ≠ †, v_{i+1} is not an active trap in round j+1, and there is no a′ ∈ A ∖ {a} such that ρ(a′, j+1) = v_{i+1}. Then, there is a solution plan ρ′ such that ρ′(a, j+1) = v_{i+1}.

That is, according to Claim 2, assets do not wait at a vertex if they can move closer to the target location t. The same claim also implies that whenever an asset is at location v_{n−1}, it immediately moves to v_n = t in the next round (unless it is destroyed by a trap). However, a necessary condition for such a move is that the neighboring vertex is not an active trap. In that situation, on the other hand, we show that waiting is highly desirable. For this, we need more notation.
Let r_1, …, r_τ be an ordering of the traps such that dist(r_i, t) > dist(r_{i+1}, t) for every i ∈ [τ−1]. We call the sub-path between traps r_i and r_{i+1} a segment, and we denote it S_i.

Claim 3. Let ρ be a solution plan and a ∈ A be an asset such that ρ(a, j) = v_ℓ, ℓ ∈ [n], v_{ℓ+1} ∈ R, and let the trap r_{i+1} on v_{ℓ+1} be active in round j+1. Then, there is a solution plan ρ′ such that a waits on v_ℓ and moves to v_{ℓ+1} only after all of v_{ℓ−1}, …, v_{ℓ−ℓ′}, where ℓ′ = min{|S_i|, c(r_{i+1}), |{b ∈ A ∖ {a} | ρ(b, j) = v_{ℓ″} for some ℓ″ ≤ ℓ}|}, are occupied by some assets.

In other words, Claim 3 says that no trap should be activated if there are not enough assets to follow the sacrificed asset. This rule forces us to pass traps efficiently. Our algorithm exploits the structure of solutions described by Claims 1, 2 and 3. We arbitrarily fix an order of the assets and let them leave s in consecutive rounds. After the first asset passes the first trap, we apply the following strategy to all assets in order. Each asset moves to the next vertex v if (i) v is unoccupied and not an active trap, or (ii) v is an active trap and there are enough assets to follow (according to Claim 3). In all other cases, the asset in question stays put. Optimality and correctness of this approach follow from the previous claims. ∎

The run-wait-sacrifice strategy proves to be a crucial ingredient for extending the family of polynomial-time solvable instances of Routing Plan to the case of a collection of disjoint paths connecting the initial and target locations. Intuitively, this scenario models a collection of independent routes to a goal. After figuring out how many assets survive taking each of the available routes, the decision mostly boils down to how many assets to deploy on each route.
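Under the additional simplifying assumption that every segment is long enough to store the followers required by Claim 3 (so the |S_i| term never binds), each trap acts independently: one sacrifice lets the next c(r) assets through, so a trap with reload time c eliminates ⌈x/(c+1)⌉ of x arriving assets. The following sketch of this simplified count is our illustration, not the paper's algorithm:

```python
import math

def survivors_long_segments(reload_times, m):
    """Assets surviving run-wait-sacrifice on a path whose traps have
    the given reload times (ordered from s to t), ASSUMING every
    inter-trap segment can store the required followers.  Each trap
    with reload time c costs one sacrifice per c passing assets,
    i.e. it eliminates ceil(alive / (c + 1)) of the arriving assets."""
    alive = m
    for c in reload_times:
        alive -= math.ceil(alive / (c + 1))
    return alive
```

For instance, eight assets facing a single trap with reload time one lose four of them, and a second such trap leaves only two; instances with short segments, such as Example 3 below, are not covered by this simplification.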
The latter task resembles making change with a fixed set of coin denominations, a canonical example of dynamic programming (DP). Hence, we devise a DP-based algorithm similar to the one solving the coin change problem, using run-wait-sacrifice as a subroutine.

Proposition 1. If G ∖ {s, t} is a disjoint union of paths, Routing Plan can be solved in polynomial time.

Aiming to further exploit the run-wait-sacrifice technique, we show that it performs well for topology structures reaching far beyond those with an "obviously" linear structure (or a collection thereof). For intuition, consider a single s-t path. Run-wait-sacrifice achieves optimality by delaying the passing of a forthcoming trap until it "stores" enough assets (or as many of them as possible) on the non-trap locations between the previous and the forthcoming trap. However, because assets are indistinguishable, the structure of the non-trap locations used for storage is irrelevant for optimality (as long as it contains no edge connecting it to any other "storage"). In particular, run-wait-sacrifice remains optimal after attaching an arbitrary connected component of non-trap vertices to some non-trap vertex of the path. Before formalizing the intuition above, let us introduce the notion of a condensed topology. Intuitively, we obtain a condensed topology from some topology by contracting every connected component consisting solely of non-trap (and non-terminal) vertices into a "supernode" representing the contracted non-trap vertices; formally:

Definition 4. Let G = (V, E) be a topology, R ⊆ V be a collection of traps, and C = {C_1, C_2, …, C_k} be the collection of connected components of the graph G[V ∖ R ∖ {s, t}], where each C_i, i ∈ [k], has vertex set V_i^c.
Then, the condensed topology is an undirected graph G′ = (V′, E′), where V′ = {v′_i : C_i ∈ C} ∪ R ∪ {s, t}, and E′ contains an edge e = {u, w}, {u, w} ⊆ V′, if e ∈ E, and contains an edge {v′_i, w}, i ∈ [k], w ∈ R ∪ {s, t}, if there is a vertex u′ ∈ C_i such that {u′, w} ∈ E.

Now, a careful analysis of the run-wait-sacrifice strategy allows us to extend our catalog of polynomial-time solvable instances of Routing Plan further to topologies whose condensations have a linear structure.

Proposition 2. Let I = (G, s, t, R, c, A, k) be an instance of Routing Plan such that each r ∈ R has degree two in G and all s-t paths in the respective condensed topology are mutually disjoint. Then I is solvable in polynomial time.

As a natural next step beyond topologies with a linear structure (or a collection thereof), we explore the broader class of trees. We thus start with stars, structurally very simple trees. By our initial assumptions on the topology G, the only non-trivial case to consider is that the start and target vertices are separated by a single vertex carrying a trap. For this case, however, run-wait-sacrifice yields an optimal solution (more precisely, "run-sacrifice" works even better in this case, as the trap is a neighbor of the initial location, so no asset needs to wait). In passing, we mention that precisely this case is the reason why we cannot apply Proposition 2, which disallows non-trap vertices connected solely to trap vertices.

Proposition 3. If G is a star, then Routing Plan can be solved in O(1) time.

The crux of the positive results above lies mostly in exploiting topology structures that limit the number of sensible ways to get from the start location to the target. In particular, we never took a step onto a trap vertex that did not bring us closer to the target. As we demonstrate in the following example, this may no longer hold even for relatively small topologies.
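Definition 4 can be read as a contraction procedure: find the connected components of the non-trap, non-terminal part of G with a graph search and merge each into a supernode. A sketch of this construction (the data representation and names are our own, not the paper's):

```python
from collections import deque

def condense(adj, traps, s, t):
    """Build the condensed topology of Definition 4.  adj maps each
    vertex to a set of neighbours; traps is the set R.  Every
    connected component of G[V ∖ R ∖ {s, t}] is merged into a
    supernode ('C', i); the kept vertices are s, t, and the traps."""
    keep = set(traps) | {s, t}
    comp, next_id = {}, 0
    for v in adj:
        if v in keep or v in comp:
            continue
        comp[v] = next_id
        queue = deque([v])
        while queue:                       # BFS inside the non-trap part
            u = queue.popleft()
            for w in adj[u]:
                if w not in keep and w not in comp:
                    comp[w] = next_id
                    queue.append(w)
        next_id += 1
    label = lambda v: v if v in keep else ("C", comp[v])
    condensed = {}
    for u in adj:
        for w in adj[u]:
            lu, lw = label(u), label(w)
            if lu != lw:                   # drop edges inside a component
                condensed.setdefault(lu, set()).add(lw)
                condensed.setdefault(lw, set()).add(lu)
    return condensed
```

On a path s–a–b–r–c–t with a single trap r, the components {a, b} and {c} collapse, leaving the condensed path s–('C', 0)–r–('C', 1)–t.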
Sometimes, achieving optimality requires taking a strategic, non-obvious detour from an s-t path, on which we sacrifice assets in order to use a certain trap vertex as "storage."

Example 3. Consider the following instance with 8 assets. [Figure: two snapshots of the instance, at clock = 0 with all 8 assets at s and at clock = 7; the labels by the traps depict their reload times.] Ignoring the upper trap yields the case of a path, for which we know that run-wait-sacrifice is optimal. Simulating run-wait-sacrifice shows that no asset survives: to pass the two traps before t, we would need three consecutive assets, which is impossible due to the trap with reload time one. Yet, there is a plan yielding 1 successful asset, achievable by sending all assets one by one and using the upper trap. First, one asset activates the upper trap at time 5. This unlocks the situation depicted in the right picture at time 7, which can evolve in the next round into the required three consecutive assets (at the untrapped location and its two neighbors).

6 Limits of Tractability

Figure 2: Illustration of the construction from Theorem 4. The dark labels by the traps depict the respective reload times.

We now turn to exploring the limits of efficient computability by proving that Routing Plan is NP-hard. In general, our goal is to show hardness for instances that are as constrained as possible, thereby precisely identifying the borders of tractability.

Theorem 4. Routing Plan is NP-complete, even if each asset crosses at most six traps.

Proof. We reduce from the Restricted Exact Cover by 3-Sets (RX3C) problem, which is NP-complete (Gonzalez, 1985). Its input consists of a universe U = {u_1, …, u_{3N}} and a family of subsets S = (S_1, …, S_{3N}) such that each S_j, j ∈ [3N], is a 3-element subset of U and each element u ∈ U appears in exactly 3 subsets of S.
The goal is to decide whether there is a collection C ⊆ S of size N such that ⋃_{S∈C} S = U. Let I be an instance of the RX3C problem. We construct an equivalent instance J of Routing Plan using several gadgets. We first describe them one by one, and then explain how they work together; Figure 2 illustrates the whole construction.

The core ingredient of our construction is a batch gadget B(x), which ensures that, assuming an optimal solution in terms of the number of lost assets and enough assets on the input, x assets leave the gadget in consecutive rounds, i.e., in one batch. Formally, the batch gadget is a star with x+1 leaves v_0, …, v_x and center c. The leaf v_0 is called the input port and the leaf v_x the output port. The gadget contains two traps. The first is placed on the input port and has reload time one. The other is placed on the output port and has reload time equal to the parameter x. The crucial idea of the gadget is that we need to store x−1 assets on the leaves v_1, …, v_{x−1}, and only after all these leaves are occupied is the trap on the output port deactivated. Observe that, due to the trap at the input port, we lose at least one asset for every asset that ever visits the center c, and we lose at least one asset to the trap on the output port. Consequently, to pass x assets through the gadget, we lose at least x+2 assets.

Now, we formally describe the construction (see Figure 2 for an illustration). Besides the initial vertex s and the terminal vertex t, we have three batch gadgets. The first batch gadget B(8N+6) is connected through its input port to the initial vertex. The second batch gadget B(4N) is connected to the first batch gadget and is followed by 3N set vertices s_1, …, s_{3N}, each protected by a trap with reload time 3. Next, we have 3N element vertices u_1, …, u_{3N}, which are unprotected.
There is an edge {s_j, u_i} if and only if u_i ∈ S_j, and all element vertices are connected to a guard vertex g. Moreover, there is a trap protecting the guard vertex with reload time 3N. The third batch gadget B(1) is also connected to the first gadget and is followed by a slowdown path of length N⁵, which again ends in the guard vertex. The guard vertex is connected to the terminal vertex. Finally, we set A = {a_1, …, a_{16N+14}} and k = 3N.

The whole construction guarantees that the assets split between the set vertices and the slowdown path. In an optimal solution, at some point, all element vertices are occupied by assets that wait a couple of rounds for the asset using the slowdown path. This asset eventually deactivates the trap on the guard vertex and allows all the assets waiting on the element vertices to pass through g to the terminal vertex t. ∎

Furthermore, by reducing from the NP-hard 3-Partition problem (Garey and Johnson, 1975), we rule out polynomial-time computability for trees with maximum degree 3.

Theorem 5. Routing Plan is NP-complete, even if the topology G is a tree with maximum degree 3.

In light of the established hardness, it is natural to consider approximations. Let Routing Plan* be the natural optimization variant of our problem, in which we seek the maximum number of assets that survive. Importantly, the proof of Theorem 6 excludes a constant-factor approximation algorithm for Routing Plan* under standard assumptions (in fact, Routing Plan* is not only outside of APX but even APX-hard, as we show in Appendix B).

Theorem 6. There is no polynomial-time algorithm yielding a multiplicative constant-factor approximation to Routing Plan*, even if the topology G is a tree with maximum degree 3.

7 Conclusions

We introduced a new planning model for adversarial environments in which agents may be eliminated while traversing hostile terrain.
We established that the problem is in NP despite the exponentially large plan space, proved NP-hardness even on trees of constant degree, and identified several tractable cases. Our framework captures applications ranging from humanitarian aid transportation in conflict zones to coordinated drone swarm navigation. Our results open multiple promising avenues for future research. In this initial study, we assumed complete knowledge of the environment. Since such full observability is rare in real-world scenarios, a natural extension is to incorporate uncertainty (Nikolova et al., 2006; Wagner and Choset, 2017; Shofer et al., 2023) in trap locations, reload times, or graph topology. Additionally, our hardness results motivate the design and analysis of efficient heuristics and approximations. While constant-factor approximations may be limited to special cases due to our inapproximability result, heuristics, albeit without formal optimality guarantees, could yield high-quality solutions in many practical instances. Finally, it is interesting to consider multiple start or target locations to broaden the model’s applicability to scenarios such as distributed supply networks or multi-objective delivery missions. Acknowledgments Andrzej Kaczmarczyk and Haifeng Xu were partially supported by the ONR Award N00014-23-1-2802. Šimon Schierreich was partially funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 101002854). Additionally, Andrzej Kaczmarczyk and Šimon Schierreich were partially funded by the European Union under the project Robotics and advanced industrial production (reg. no. CZ.02.01.01/00/22_008/0004590). Most of this work was done while Šimon Schierreich was a Fulbright-Masaryk Fellow at Pennsylvania State University. References Ahmadi et al. [2021] S. Ahmadi, G. Tack, D. D. Harabor, and P. Kilby. A fast exact algorithm for the resource constrained shortest path problem. 
In Proceedings of AAAI-21, pages 12217–12224, 2021. Behnezhad et al. [2018] S. Behnezhad, A. Blum, M. Derakhshan, M. T. Hajiaghayi, M. Mahdian, C. H. Papadimitriou, R. L. Rivest, S. Seddighin, and P. B. Stark. From battlefields to elections: Winning strategies of blotto and auditing games. In Proceedings of SODA-18, pages 2291–2310, 2018. Boehmer et al. [2024] N. Boehmer, M. Han, H. Xu, and M. Tambe. Escape sensing games: Detection-vs-evasion in security applications. In Proceedings of ECAI-24, pages 3260–3267, 2024. Borie et al. [2011] R. B. Borie, C. A. Tovey, and S. Koenig. Algorithms and complexity results for graph-based pursuit evasion. Autonomous Robots, 31(4):317–332, 2011. Călinescu et al. [2008] G. Călinescu, A. Dumitrescu, and J. Pach. Reconfigurations in graphs and grids. SIAM Journal on Discrete Mathematics, 22(1):124–138, 2008. Čáp et al. [2015] M. Čáp, J. Vokřínek, and A. Kleiner. Complete decentralized method for on-line multi-robot trajectory planning in well-formed infrastructures. In Proceedings of ICAPS-15, pages 324–332, 2015. Carmesin et al. [2023] S. Carmesin, D. Woller, D. Parker, M. Kulich, and M. Mansouri. The hamiltonian cycle and travelling salesperson problems with traversal-dependent edge deletion. Journal of Computational Science, 74:102156, 2023. 10.1016/j.jocs.2023.102156. Deligkas et al. [2025] A. Deligkas, E. Eiben, R. Ganian, I. Kanj, and M. S. Ramanujan. Parameterized algorithms for multiagent pathfinding on trees. In Proceedings of AAMAS-25, pages 584–592, 2025. Demaine et al. [2015] E. D. Demaine, M. L. Demaine, E. Fox-Epstein, D. A. Hoang, T. Ito, H. Ono, Y. Otachi, R. Uehara, and T. Yamada. Linear-time algorithm for sliding tokens on trees. Theoretical Computer Science, 600:132–142, 2015. Dijkstra [1959] E. W. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1:269–271, 1959. 10.1007/BF01386390. Dvořák et al. [2025] M. Dvořák, D. Knop, M. Opler, J. Pokorný, O. Suchý, and K. Szilágyi. 
Pathfinding in self-deleting graphs. In Proceedings of ISAAC-25, pages 28:1–28:15, 2025. Felner et al. [2017] A. Felner, R. Stern, S. E. Shimony, E. Boyarski, M. Goldenberg, G. Sharon, N. R. Sturtevant, G. Wagner, and P. Surynek. Search-based optimal solvers for the multi-agent pathfinding problem: Summary and challenges. In Proceedings of SOCS-17, pages 29–37, 2017. Fioravantes et al. [2024] F. Fioravantes, D. Knop, J. M. Křišťan, N. Melissinos, and M. Opler. Exact algorithms and lowerbounds for multiagent path finding: Power of treelike topology. In Proceedings of AAAI-24, pages 17380–17388, 2024. Fioravantes et al. [2025a] F. Fioravantes, D. Knop, J. M. Křisťan, N. Melissinos, M. Opler, and T. A. Vu. Solving multiagent path finding on highly centralized networks. In Proceedings of AAAI-25, pages 23186–23193, 2025a. Fioravantes et al. [2025b] F. Fioravantes, D. Knop, N. Melissinos, and M. Opler. When agents break down in multiagent path finding. CoRR, abs/2508.03777, 2025b. Forišek [2010] M. Forišek. Computational complexity of two-dimensional platform games. In Proceedings of FUN-10, pages 214–227, 2010. 10.1007/978-3-642-13122-6_22. Fried et al. [2013] D. Fried, S. E. Shimony, A. Benbassat, and C. Wenner. Complexity of canadian traveler problem variants. Theoretical Computer Science, 487:1–16, 2013. Garey and Johnson [1975] M. R. Garey and D. S. Johnson. Complexity results for multiprocessor scheduling under resource constraints. SIAM Journal on Computing, 4(4):397–411, 1975. 10.1137/0204035. Garey and Johnson [1990] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman & Co., USA, 1990. Gonzalez [1985] T. F. Gonzalez. Clustering to minimize the maximum intercluster distance. Theoretical Computer Science, 38:293–306, 1985. 10.1016/0304-3975(85)90224-5. Hart et al. [1968] P. E. Hart, N. J. Nilsson, and B. Raphael. A formal basis for the heuristic determination of minimum cost paths. 
IEEE Transactions on Systems Science and Cybernetics, 4(2):100–107, 1968. Hearn and Demaine [2005] R. A. Hearn and E. D. Demaine. Pspace-completeness of sliding-block puzzles and other problems through the nondeterministic constraint logic model of computation. Theoretical Computer Science, 343(1-2):72–96, 2005. van den Heuvel [2013] J. van den Heuvel. The complexity of change. In Surveys in Combinatorics, volume 409 of London Mathematical Society Lecture Note Series, pages 127–160. Cambridge University Press, 2013. Jakob et al. [2012] M. Jakob, O. Vaněk, O. Hrstka, and M. Pěchouček. Agents vs. pirates: Multi-agent simulation and optimization to fight maritime piracy. In Proceedings of AAMAS-12, pages 37–44, 2012. Kaźmierowski [2025] S. Kaźmierowski. Equilibria of the colonel blotto games with costs. In Proceedings of AAAI-25, pages 13969–13976, 2025. Klobas et al. [2023] N. Klobas, G. B. Mertzios, H. Molter, R. Niedermeier, and P. Zschoche. Interference-free walks in time: Temporally disjoint paths. Autonomous Agents and Multi-Agent Systems, 37(1):1, 2023. 10.1007/s10458-022-09583-5. Kornhauser et al. [1984] D. Kornhauser, G. L. Miller, and P. G. Spirakis. Coordinating pebble motion on graphs, the diameter of permutation groups, and applications. In Proceedings of FOCS-84, pages 241–250, 1984. Li et al. [2019] J. Li, D. Harabor, P. J. Stuckey, H. Ma, and S. Koenig. Disjoint splitting for multi-agent path finding with conflict-based search. In Proceedings of ICAPS-19, pages 279–283, 2019. McCarthy [1998] J. McCarthy. Partial formalizations and the lemmings game. Technical report, Stanford University, Formal Reasoning Group, 1998. Nebel [2024] B. Nebel. The computational complexity of multi-agent pathfinding on directed graphs. Artificial Intelligence, 328:104063, 2024. Nikolova et al. [2006] E. Nikolova, M. Brand, and D. R. Karger. Optimal route planning under uncertainty. In Proceedings of ICAPS-06, pages 131–141, 2006. Papadimitriou and Yannakakis [1991] C. H. 
Papadimitriou and M. Yannakakis. Shortest paths without a map. Theoretical Computer Science, 84(1):127–150, 1991. 10.1016/0304-3975(91)90263-2. Parsons [1976] T. D. Parsons. Pursuit-evasion in a graph. In Theory and Applications of Graphs, volume 642 of Lecture Notes in Mathematics, pages 426–441, Berlin, Heidelberg, 1976. Springer. Phillips and Likhachev [2011] M. Phillips and M. Likhachev. SIPP: safe interval path planning for dynamic environments. In Proceedings of ICRA-11, pages 5628–5635, 2011. Ptacek and Newsham [1998] T. H. Ptacek and T. N. Newsham. Insertion, evasion, and denial of service: Eluding network intrusion detection. Technical report, Secure Networks, Inc., 1998. Rottkemper and Fischer [2013] B. Rottkemper and K. Fischer. Decision making in humanitarian logistics - A multi-objective optimization model for relocating relief goods during disaster recovery operations. In Proceedings of ISCRAM-13, 2013. Seymour and Thomas [1993] P. D. Seymour and R. Thomas. Graph searching and a min-max theorem for tree-width. Journal of Combinatorial Theory, Series B, 58(1):22–33, 1993. Sharon et al. [2015] G. Sharon, R. Stern, A. Felner, and N. R. Sturtevant. Conflict-based search for optimal multi-agent pathfinding. Artificial Intelligence, 219:40–66, 2015. Shofer et al. [2023] B. Shofer, G. Shani, and R. Stern. Multi agent path finding under obstacle uncertainty. In Proceedings of ICAPS-23, pages 402–410, 2023. Stern et al. [2019] R. Stern, N. R. Sturtevant, A. Felner, S. Koenig, H. Ma, T. T. Walker, J. Li, D. Atzmon, L. Cohen, T. K. S. Kumar, R. Barták, and E. Boyarski. Multi-agent pathfinding: Definitions, variants, and benchmarks. In Proceedings of SOCS-19, pages 151–158, 2019. Surynek [2010] P. Surynek. An optimization variant of multi-robot path planning is intractable. In Proceedings of AAAI-10, pages 1261–1263, 2010. Surynek [2022] P. Surynek. Problem compilation for multi-agent path finding: A survey. In Proceedings of IJCAI-22, pages 5615–5622, 2022. 
Tambe [2012] M. Tambe. Security and Game Theory: Algorithms, Deployed Systems, Lessons Learned. Cambridge University Press, 2012. Viglietta [2015] G. Viglietta. Lemmings is PSPACE-complete. Theoretical Computer Science, 586:120–134, 2015. 10.1016/j.tcs.2015.01.055. Wagner and Choset [2017] G. Wagner and H. Choset. Path planning for multiple agents under uncertainty. In Proceedings of ICAPS-17, pages 577–585, 2017. Wang et al. [2025] S. Wang, H. Xu, Y. Zhang, J. Lin, C. Lu, X. Wang, and W. Li. Where paths collide: A comprehensive survey of classic and learning-based multi-agent pathfinding. CoRR, abs/2505.19219, 2025. van Wassenhove [2006] L. N. van Wassenhove. Humanitarian aid logistics: Supply chain management in high gear. Journal of the Operational Research Society, 57(5):475–489, 2006. Wu et al. [2016a] F. Wu, S. D. Ramchurn, and X. Chen. Coordinating human-UAV teams in disaster response. In Proceedings of IJCAI-16, pages 524–530, 2016a. Wu et al. [2016b] H. Wu, J. Cheng, Y. Ke, S. Huang, Y. Huang, and H. Wu. Efficient algorithms for temporal path computation. IEEE Transactions on Knowledge and Data Engineering, 28(11):2927–2942, 2016b. 10.1109/tkde.2016.2594065. Yamanaka et al. [2018] K. Yamanaka, T. Horiyama, J. M. Keil, D. Kirkpatrick, Y. Otachi, T. Saitoh, R. Uehara, and Y. Uno. Swapping colored tokens on graphs. Theoretical Computer Science, 729:1–10, 2018. 10.1016/j.tcs.2018.03.016. Yu and LaValle [2013] J. Yu and S. M. LaValle. Structure and intractability of optimal multi-robot path planning on graphs. In Proceedings of AAAI-13, pages 1443–1449, 2013.

Appendix A Omitted Proofs

Proof of Claim 1. Let a be an asset such that there exists a timestep j with ρ(a, j) = v_i and ρ(a, j+1) = v_{i−1} for some i ∈ [n]; we call such an asset returning. Notice that if i = 1, then the plan ρ is invalid, as by property 1, assets cannot return to s. Hence, i ≥ 2.
Moreover, assume that a is a returning asset which returns in the smallest timestep. If there are multiple such assets, let a be the "leftmost" one, i.e., the one located on the location v_i closest to s. First, assume that there is ℓ such that ρ(a, j+ℓ) = ρ(a, j); that is, the asset a, after ℓ steps, returns to location v_i. If v_i ∉ R, then we set ρ(a, j′) = v_i for every j′ ∈ [j, j+ℓ]; that is, we make asset a wait at its location. Note that since the topology is a path, no asset can be newly blocked by a, as there is no way to circumvent it. Moreover, since v_i ∉ R, asset a can wait at its location without being destroyed. ◀

Proof of Claim 2. Observe that the move ρ′(a, j+1) = v_{i+1} does not change the timing of any trap, as v_{i+1} is either a regular vertex or an inactive trap (in round j+1). Therefore, as the plan for each other asset remains the same, this change does not affect whether they succeed. Moreover, asset a moves to a neighboring empty vertex closer to t and continues its movement in round j+ℓ, where j+ℓ is the round in which a leaves v_{i+1} in plan ρ. Hence, if a is successful in ρ, it is also successful in ρ′. ◀

Proof of Claim 3. We prove the claim by induction. Let S be the last segment and r_l and r_p be the left and right traps of this segment. By Claim 2, whenever assets can move to an empty vertex closer to t, they do so. In particular, all assets positioned on a vertex after r_p in round j move closer to t in round j+1, and ultimately reach t as soon as possible. This implies that no asset waits on the vertex after r_p (except when this vertex is t, but in that case, we can simply subdivide the edge {r_p, t} without changing the solution).
Moreover, let ρ be a plan in which a deactivates trap r_p sooner than when the whole segment is full (slightly abusing the terminology, by full we mean that there are ℓ′ = min{|S|, c(r_p), |{b ∈ A ∖ {a} | ρ(b, j) = v_{ℓ″} for some ℓ″ ≤ ℓ}|} assets following a). We let a wait until the segment saturates (which necessarily happens due to Claim 2). Only then does asset a deactivate the trap, and at least ℓ′ assets succeed. Observe that no asset waits on the first vertex of S (the one in N(r_l) ∩ S), so we can use the same procedure for all segments preceding S, finishing the proof. ◀

Proof of Proposition 1. Our algorithm is a dynamic programming algorithm over the disjoint paths obtained by deleting s and t from the topology G, which uses the algorithm for a path as a sub-procedure. We formalize our approach in Algorithm 1. Intuitively, the core of the algorithm is a dynamic programming table DP[i, B], which stores the maximum number of successful assets on the sub-graph G[⋃_{j=1}^i V(P_j) ∪ {s, t}] if we have exactly B assets available. Once the table is correctly computed, we simply check whether DP[ℓ, |A|] ≥ k and return the appropriate answer.

Algorithm 1: A dynamic programming algorithm finding an optimal solution for Routing Plan if G ∖ {s, t} is a disjoint union of paths.
Input: A set of assets A; a topology G with initial and target locations s and t such that G ∖ {s, t} is a disjoint union of paths P_1, …, P_ℓ; a set of traps R; a reload time function c.
Output: The maximum number of successful assets.
1: return SolveRec(ℓ, |A|)
2: function SolveRec(i, B)
3:   if i = 1 then
4:     return SolvePath(P_1, B)
5:   end if
6:   if DP[i, B] = undefined then
7:     r ← −∞
8:     for all b ∈ [B]_0 do
9:       x ← SolvePath(P_i, b)
10:      y ← SolveRec(i−1, B−b)
11:      r ← max{r, x+y}
12:    end for
13:    DP[i, B] ← r
14:  end if
15:  return DP[i, B]
16: end function

We show correctness of the computation by induction over i. First, let i = 1. Then G[V(P_i) ∪ {s, t}] is a simple path. Therefore, we can use the algorithm of Theorem 3 to find an optimal plan for this sub-graph, which is exactly what the base step of Algorithm 1 does; see line 4. Now, let i > 1 and assume that DP[j, β] is computed correctly for all j ∈ [i−1] and β ∈ [|A|]_0. Recall that in any valid plan, once an asset leaves s, it can never return to it. Hence, if we decide to send some assets through P_i, these assets never visit any vertex of the paths P_1, …, P_{i−1}, as G ∖ {s, t} is a collection of disjoint paths. Hence, if we decide to route b ∈ [B]_0 assets through P_i, the solution for P_i is independent of the solution on P_1, …, P_{i−1}. Moreover, on the graph G[V(P_i) ∪ {s, t}], an optimal solution follows the algorithm of Theorem 3, which is also what our algorithm does on line 9. By the induction hypothesis, combining this partial plan with the result returned on line 10 clearly yields an optimal solution for this b. Since we try all possible b ∈ [B]_0, the algorithm is indeed optimal.

The size of the dynamic programming table DP is O(|V(G)| · m), and each cell can be computed in O(m · T_path) time, where T_path is the running time of the algorithm returning the maximum number of successful assets if G is a path. By Theorem 3, T_path ∈ poly(|I|), so our approach runs in polynomial time. ∎

Proof of Proposition 2.
†margin: We transform the instance $\mathcal{I}$ into an equivalent instance $\mathcal{J}$ whose topology is a collection of disjoint paths connecting the start and target locations. Then, invoking Proposition 1, we solve $\mathcal{I}$. Let $G'=(V',E')$ be a condensed topology of $G=(V,E)$. We obtain a new instance $\mathcal{J}=(H,s,t,R,c,A,k)$ of Routing Plan, in which the graph $H$ is constructed as follows. We start by taking a copy of $G'$ as $H$. Then, for each vertex $v' \in V' \setminus (\{s,t\} \cup R)$, we substitute this vertex in $H$ with a path whose length equals the number of vertices of the connected component represented by $v'$, in such a way that the two neighbors of the substituted $v'$ are connected to the two ends of the path. The equivalence of $\mathcal{I}$ and $\mathcal{J}$ follows mainly from the fact that the ideas and proofs of Claims 2 and 3 apply to $\mathcal{I}$ almost without changes. The only difference is that for $\mathcal{I}$, we do not order vertices according to the path between the previous and the next trap, but according to the distance from the next trap (breaking possible ties arbitrarily). This, together with the fact that each trap has degree two in $G$, allows us to easily transform each solution of $\mathcal{I}$ into one of $\mathcal{J}$ and vice versa, which proves the proposition. ∎ See 3 Proof. †margin: Let $\mathcal{I}=(G,s,t,R,c,A,k)$ be a Routing Plan instance, where $G$ is a star. We give a plan that implies that Routing Plan is solvable on $G$ in $\mathcal{O}(1)$ time. The plan, call it $\rho$, moves the assets one after another with no delay from $s$ to $t$. We show that $\rho$ is optimal and that it is easy to compute the number $k'$ of assets that succeed. Before we start, note that the unique path from $s$ to $t$ must pass through a trap vertex, as per our assumptions on $G$ (from the beginning of the section on our algorithms). Let $r$ be the trap vertex with reload time $c \coloneqq c(r)$ (recall that $c(r)>0$) on the $s$-$t$ path, and let us decompose the number $m>0$ of assets as $m \coloneqq d(c+1)+q$ with $d \ge 0$ and $0 \le q < c+1$.
Under $\rho$, every eliminated asset allows at most $c$ assets to pass. So, for every $c+1$ assets, there are $c$ assets that pass. Thus, if $q=0$, we lose exactly $d = m/(c+1)$ assets and get $k' = m - m/(c+1) = cm/(c+1)$ successful ones. However, if $1 \le q < c+1$, then right after the above-mentioned $d \cdot (c+1)$ assets move past the trap (or are eliminated thereon), the trap becomes active. So, we lose one more asset so that the remaining $q-1$ (possibly $0$) assets can pass. Since $q < c+1$ implies $q-1 < c$, we conclude that $k' = m - \lfloor m/(c+1) \rfloor - 1 = m - \lceil m/(c+1) \rceil$ assets succeed. Hence, altogether we get $k' = m - \lceil m/(c+1) \rceil$ successful assets, regardless of whether $q=0$ or $q \neq 0$. To see the optimality of $\rho$, consider any $m$ consecutive rounds such that in the first round $r$ is active. By definition, there must be at least one elimination for every block of $c$ rounds during which $r$ is inactive. Since the sequence of rounds starts with $r$ being active, $r$ must be active during at least $\lceil m/(c+1) \rceil$ rounds of the whole sequence; hence, out of the $m$ rounds, $r$ is inactive during at most $k' = m - \lceil m/(c+1) \rceil$ rounds. As an asset can pass $r$ only in a round when $r$ is inactive, at most $k'$ assets succeed under any plan, so we conclude that $\rho$ is optimal. Finally, to solve instance $\mathcal{I}$ it is enough to verify in $\mathcal{O}(1)$ time whether $k \le m - \lceil m/(c+1) \rceil$, which indicates a Yes-instance. ∎ See 4 Proof of correctness. †margin: For correctness, assume that $\mathcal{I}$ is a Yes-instance and $X \subseteq \mathcal{S}$ is an exact cover of $U$. Starting with the first round, in consecutive rounds, the assets move from the initial vertex to the input port and eventually continue to some leaf of the first batch gadget. Every odd asset is destroyed by the trap protecting the input port of $B(8N+6)$. Once all leaves of $B(8N+6)$ are full, asset $a_{16N+12}$ waits for one round at the center of the first batch gadget.
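As an aside, the closed form $k' = m - \lceil m/(c+1) \rceil$ derived in the star-topology proof above can be cross-checked against a direct simulation of a single trap (our own sketch; following the proof's conventions, one asset arrives per round and the trap is initially active):

```python
import math

def successful_on_single_trap(m, c):
    """Closed form from the proof: out of m assets moving one-by-one through
    a single trap with reload time c (initially active), m - ceil(m/(c+1)) survive."""
    return m - math.ceil(m / (c + 1))

def simulate(m, c):
    """Direct simulation: an arriving asset is eliminated if the trap is active,
    which then deactivates the trap for the next c rounds."""
    survivors, cooldown = 0, 0   # cooldown = inactive rounds remaining
    for _ in range(m):           # one asset arrives per round
        if cooldown == 0:
            cooldown = c         # asset eliminated; trap reloads for c rounds
        else:
            survivors += 1       # trap inactive; asset passes
            cooldown -= 1
    return survivors

assert all(successful_on_single_trap(m, c) == simulate(m, c)
           for m in range(0, 50) for c in range(1, 8))
```

The simulation reproduces the pattern from the proof: in every window of $c+1$ consecutive assets, one is lost and $c$ pass.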
In the next round, asset $a_{16N+12}$ deactivates the trap on the output port and is replaced by asset $a_{16N+14}$ in the center of $B(8N+6)$. Now, all assets leave the first batch gadget in lexicographic order in consecutive rounds, which takes $8N+6$ rounds; therefore, none of them is destroyed by the trap on the output port of $B(8N+6)$. The first four assets leaving $B(8N+6)$, namely $a_{16N+14}$, $a_2$, $a_4$, and $a_6$, cross the third batch gadget $B(1)$ by the same strategy; consequently, only asset $a_6$ passes the output port of $B(1)$ without being destroyed and continues, without waiting, through the slowdown path towards the guard vertex. All the remaining assets use the path via the second batch gadget, again following the same strategy. $4N+1$ of them are destroyed on the input port of $B(4N)$, and one is destroyed at the output port. Hence, exactly $8N+2-(4N+1)-1=4N$ assets successfully pass $B(4N)$. Now, let $X=\{S_{i_1},\dots,S_{i_N}\}$. Then the first four assets leaving $B(4N)$ go through set vertex $s_{i_1}$, the second four through set vertex $s_{i_2}$, and so on. Obviously, the first asset in each group is destroyed, but the three remaining assets can pass through the set vertex to an empty element vertex. Note that each set vertex is adjacent to exactly three element vertices, and since $X$ is an exact cover, there is always an empty element vertex for each passing asset. Once all element vertices are occupied, these assets wait until asset $a_6$ deactivates the trap on the guard vertex, after which all of them reach the terminal vertex. As shown, exactly $3N$ assets succeed in this plan, so $\mathcal{J}$ is indeed a Yes-instance. In the opposite direction, let $\mathcal{J}$ be a Yes-instance and $\rho$ a plan such that at least $3N$ assets succeed. We split the proof into several claims. First, we show that the "upper path" is used at least once. Claim 4.
The trap on the output port of $B(4N)$ is deactivated at least once. Proof. For the sake of contradiction, assume that all assets go through the slowdown path. Then each of them needs to cross $B(1)$. By the properties of the batch gadget, to pass $1$ asset, at least $1+2=3$ assets are destroyed. As at most $8N+6$ assets can cross $B(8N+6)$, we obtain that at most $\lfloor (8N+6) \cdot \frac{1}{4} \rfloor = \lfloor 2N + \frac{3}{2} \rfloor = 2N+1$ assets enter the slowdown path, which contradicts the assumption that at least $3N$ assets succeed. ◀ Similarly, the "bottom part" is used at least once. In fact, we show a stronger property: it is used by exactly one asset. Claim 5. The trap on the output port of $B(1)$ is deactivated exactly once. Proof. First, assume that it is not deactivated at all; that is, for the sake of contradiction, assume that all assets go through the $B(4N)$ gadget. Then, as $8N+6$ assets leave $B(8N+6)$, at least two assets are destroyed on the output port of $B(4N)$, and to pass them over the input port of $B(4N)$, two more assets are destroyed. After the first deactivation of the trap on the output port, at most $4N$ assets pass through the output port, and the same number of assets is destroyed to open their path through the input port. Consequently, after the second deactivation of the output port, at most $(8N+6-4-4N-4N)/2 = 2/2 = 1$ asset can pass. That is, at most $4N+1$ assets survive past the output port of $B(4N)$. Each of them needs to pass through the traps on the set vertices. These traps have a reload time of $3$, meaning that at most $\lfloor 3 \cdot 4N/4 \rfloor = 3N$ can pass through them. Finally, at least one of them is destroyed on the guard vertex, meaning that at most $3N-1$ assets successfully reach $t$, a contradiction with $\rho$ being a solution. ◀ Consequently, we see that the trap on the output port of $B(4N)$ is also deactivated exactly once.
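The counting steps in Claims 4 and 5 are pure arithmetic; a small script (an illustrative check of the formulas, not part of the paper) confirms them over a range of $N$:

```python
# Verify the arithmetic identities used in Claims 4 and 5 for many N.
for N in range(1, 200):
    # Claim 4: at most floor((8N+6)/4) = 2N+1 assets can enter the slowdown path.
    assert (8*N + 6) // 4 == 2*N + 1
    # Claim 5: after the second deactivation of the output port of B(4N),
    # at most (8N+6 - 4 - 4N - 4N)/2 = 1 asset can still pass.
    assert (8*N + 6 - 4 - 4*N - 4*N) // 2 == 1
    # Set-vertex traps with reload time 3: at most floor(3*4N/4) = 3N pass.
    assert (3 * 4*N) // 4 == 3*N
print("claim arithmetic verified")
```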
Next, we prove that exactly $N$ (distinct) traps placed on the set vertices are deactivated. By Claim 5, exactly four assets enter the batch gadget at the beginning of the slowdown path. Hence, exactly $8N+2$ assets enter $B(4N)$, and exactly $4N$ of them leave this gadget in consecutive rounds by Claim 4. Since exactly one asset follows the slowdown path, we need to pass exactly $3N$ agents to the element vertices. The traps on the set vertices have reload time $3$, and therefore we lose at least $4N/4 = N$ agents deactivating them. That is, necessarily exactly $N$ of these traps are deactivated. Moreover, each set vertex is adjacent to exactly three element vertices, and there is no way for assets to move from one element vertex to another without deactivating a trap. Hence, if a trap on a set vertex is deactivated twice, then its neighboring element vertices must be empty. This can only happen if the guard trap is active. However, the asset $a$ using the slowdown part can reach it at the beginning of round $N^5$, while the last asset leaves $B(4N)$ at time at most $12N+3$, which means that $a$ reaches the guard trap once it is active again, and we have only $3N-1$ successful agents, in contradiction to the fact that $\rho$ is a solution. Hence, all assets must pass the guard trap in consecutive rounds, which means that exactly $N$ distinct set vertices were used. The set vertices used correspond exactly to a solution of $\mathcal{I}$. This finishes the proof. ∎ See 5 Proof.
†margin: [Figure 3: Construction of the reduction on trees — the element gadgets, the destruction gadgets, the limiting infrastructure, and the blocking infrastructure.] The problem is in NP by Theorem 2. To show it is NP-hard, we reduce from 3-Partition. 3-Partition [Garey and Johnson, 1990, see p. 224] Input: A set $Y$ of $3n$ elements, a bound $T \in \mathbb{Z}^+$, and a size $s(y) \in \mathbb{Z}^+$ for each $y \in Y$ such that $T/4 < s(y) < T/2$ and $\sum_{y \in Y} s(y) = nT$. Question: Can $Y$ be partitioned into $n$ disjoint sets $Y_1, Y_2, \dots, Y_n$ of size exactly $3$ such that $\sum_{y \in Y_i} s(y) = T$ for all $1 \le i \le n$? Let $P$ be an instance of 3-Partition such that $n \ge 3$ is odd and $n < T$. (This restricted problem remains NP-complete, since $\binom{3n}{3}$ is polynomial in $n$, and we can add a large number $x > n$ to every $s(y)$ and increase $T$ by $3x$.) We construct, in polynomial time, an instance $\mathcal{J} = (G,s,t,R,c,A,k)$ of Routing Plan, where $G$ is a tree of maximum degree $3$, and show that $P$ is a Yes-instance if and only if $\mathcal{J}$ is a Yes-instance. For clarity, we divide the reduction into three parts: "Construction", "Conception" (which gives the intuition), and "Correctness". Construction. First, we define some gadgets, which we illustrate in Figure 3: 1. A destruction gadget consists of $\ell$ traps arranged in order as a path $v_1, v_2, \dots, v_\ell$, with the reload times of the vertices being $c(v_1), c(v_1)-1, \dots, c(v_1)-\ell+1$, respectively. A destruction gadget is defined by a length $\ell$ and a starting reload time $c(v_1)$.
The purpose of each of these gadgets in our reduction is to destroy $\ell$ assets while allowing $c-\ell+1$ to pass through for every $c+1$ assets that move immediately one after another. 2. An element gadget represents an element of the original set. For each element $y \in Y$, the corresponding element gadget is a path starting with a trap with reload time $s(y)+1$, referred to as the guard vertex, followed by $s(y)+1$ empty vertices. The element is represented by storing $s(y)+1$ assets in the empty vertices, which, as we will show, must happen in order to obtain a Yes-instance. Now we formally construct the tree $G$; the construction is shown in Figure 3. 1. Define a destruction gadget with starting reload time $100n^5T^5+100nT+(3T+19+3n)$ and length $100n^5T^5+100nT$. We refer to this destruction gadget as the limiting infrastructure and call its nodes $v_i^L$, $i \in \{1,\dots,100n^5T^5+100nT\}$. We then connect $s$ only to $v_1^L$. 2. Following and connected to $v_{100n^5T^5+100nT}^L$ in the limiting infrastructure is a path of $3n+1$ traps, all with reload time $(n+3)T+3n+20+n(T+7)-(T+6)\lceil \frac{n(T+7)}{2T+13} \rceil$. Call the nodes of this path $v_i^M$, $i \in \{1,\dots,3n+1\}$. 3. For each node $v_i^M$, $i \in \{1,\dots,3n\}$, there is an element gadget connected to it such that: 1) for some $y \in Y$, we construct the corresponding element gadget and attach it by connecting $v_i^M$ to the guard vertex, and 2) no two element gadgets correspond to the same $y$. 4. There is also a subtree, called the final subtree, connected to $v_{3n+1}^M$. This subtree consists of $T+6$ traps arranged in a path, each with reload time $2T+12$; the start of the path is connected to $v_{3n+1}^M$. Following the last trap of this path is another path of $n(T+7)$ empty nodes.
This subtree is artificial: it is not used to represent any element, but it will be used in the proof. 5. Next, we have a destruction gadget with starting reload time $(n+3)T+19+n(T+7)-(T+6)\lceil \frac{n(T+7)}{2T+13} \rceil - 1$ and also length $(n+3)T+19+n(T+7)-(T+6)\lceil \frac{n(T+7)}{2T+13} \rceil - 1$. This gadget is connected by its starting trap to $v_{3n+1}^M$. Notice that it ends with a trap with reload time $1$. This construction will be referred to as the blocking infrastructure. We then connect the final reload-time-$1$ trap of the blocking infrastructure to $t$. 6. We set the number of assets to $m=(n+1)(100n^5T^5+100nT+(3T+20+3n))$ and $k=1$. 7. Finally, note that $G$ can be constructed in time polynomial in $T$ and $n$, as it has only polynomially many vertices. Conception. The idea is to construct $G$ so that it only allows batches of assets to pass through the limiting infrastructure sequentially, each followed by a long delay. We then force each batch to send $T+7$ assets into the final subtree; this deactivates the traps on the $v_i^M$. The assets can then choose which element gadgets to enter, which represents choosing elements for our subsets in 3-Partition. Finally, we make sure, via the blocking infrastructure, that there is a solution to 3-Partition if and only if all the subtrees are filled. When we say that some assets exit "during the final batch", we mean that the final batch and the assets in the subtrees form a single long sequence, while "before the final batch" refers to the case where this does not happen (i.e., there is a gap somewhere). It should also be noted that the final batch is artificial: it does not represent a subset, but is used to force the previous batches to enter the subtrees, since otherwise they could start trying to move through the destruction gadget without entering the subtrees.
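The behavior of a destruction gadget can be sanity-checked by a small simulation (our own sketch; the timing model — one asset entering per round, advancing one vertex per round, a trap eliminating an arriving asset and then staying inactive for its reload time — follows the paper's conventions). For reload times $c, c-1, \dots, c-\ell+1$, a stream of $c+1$ back-to-back assets loses $\ell$ of them while $c-\ell+1$ pass; in the special case $\ell = c$, exactly one asset gets through.

```python
def survivors(num_assets, c, ell):
    """Simulate a destruction gadget: ell traps in a row with reload times
    c, c-1, ..., c-ell+1. Assets enter one per round and advance one trap per
    round; an asset arriving at an active trap is eliminated, deactivating
    that trap for its reload time."""
    reload_of = [c - j for j in range(ell)]  # trap j has reload time c - j
    active_at = [0] * ell                    # first round at which trap j is active again
    passed = 0
    for a in range(num_assets):              # asset a enters at round a
        t, alive = a, True
        for j in range(ell):
            if t >= active_at[j]:            # trap j is active: asset eliminated
                active_at[j] = t + reload_of[j] + 1
                alive = False
                break
            t += 1                           # asset passes trap j this round
        if alive:
            passed += 1
    return passed

# c+1 back-to-back assets: ell destroyed, c-ell+1 pass.
assert survivors(8, 7, 3) == 7 - 3 + 1
# Special case ell = c: exactly one of c+1 assets gets through.
assert survivors(5, 4, 4) == 1
```

With fewer than $c+1$ assets in the special case $\ell = c$, none survive, matching the gadget's role of enforcing long, uninterrupted sequences.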
Note that a sequence of $x$ traps starting with reload time $x$ and ending at reload time $1$ can be passed by exactly one asset if and only if there are $x+1$ assets moving forward sequentially. This will be used to enforce the mentioned long delays and to reason more easily about the correctness of the construction. Correctness. We start by stating a useful observation. Observation 1. Given a sequence of traps with reload times $c(v), c(v)-1, \dots, 1$, an asset is able to pass this sequence if and only if there are $c(v)+1$ assets moving one-by-one sequentially for some $c(v)+1$ time steps. First, notice that $3T+20+3n$ assets pass the limiting infrastructure before a $100n^5T^5+100nT$ time delay. We call the $3T+20+3n$ assets that pass together a batch of assets; the last of these batches is called the final batch. Based on $m$, the limiting infrastructure passes $n+1$ batches, each consisting of $3T+20+3n$ assets. To prove correctness, we split the claim into lemmas. Lemma 4. If during the final batch not all non-trap nodes in the subtrees are filled, then $\mathcal{J}$ is a No-instance. Proof. The blocking infrastructure has length $(n+3)T+19+n(T+7)-(T+6)\lceil \frac{n(T+7)}{2T+13} \rceil - 1$. If even a single non-trap node in the subtrees is unfilled, then at most $(n+3)T+19+n(T+7)-(T+6)\lceil \frac{n(T+7)}{2T+13} \rceil - 1$ assets can arrive at the blocking infrastructure; this follows by summing the number of remaining assets, accounting for the loss of assets when exiting each subtree, and adding the number of assets in the final batch. By Observation 1, this means that there will be no successful assets. For the element gadgets this calculation is immediate: each element gadget loses one asset upon exiting. For the final subtree, let $e$ be the number of assets inside this subtree by the final batch, with $e < n(T+7)$, and recall that they are guarded by $T+6$ traps.
We need to show that for any such $e$, no plan lets an equal or greater number of assets exit compared to the case $e = n(T+7)$. The assets can be processed in groups of $2T+13$, since every $T+6$ eliminated assets allow $T+7$ assets to pass. It remains to show that $n(T+7) \bmod (2T+13) > T+6$; that is, losing even a single asset means that fewer than the desired number of assets can exit this subtree. Since $n$ is odd, write $n = 2p+1$, so we consider $(2p+1)(T+7) \bmod (2T+13)$. Expanding gives $2pT + T + 14p + 7 = p(2T+13) + (T+p+7)$, so the remainder is $T+p+7$. Now, since $n < T$ by our variant of 3-Partition, we have $T+6 < T+p+7 < 2T+13$ for $p \ge 1$. Hence losing even a single asset results in a loss in the number of assets that can exit. This means that having a single unfilled subtree results in a No-instance. ◀ Lemma 5. If $\mathcal{J}$ is a Yes-instance of Routing Plan, then with each batch, $T+7$ assets enter the final subtree. Proof. Consider the contrapositive: if for some batch the number of assets entering the final subtree is not $T+7$, then $\mathcal{J}$ is a No-instance of Routing Plan. For each batch, at most $T+7$ assets can enter the empty nodes of the final subtree: since there are $3T+20+3n$ assets per batch, the number of assets that reach the final subtree is $3T+20+3n-(3n+1)=3T+19$. Since there is a path of $T+6$ traps on this subtree, these must be deactivated before allowing $T+7$ assets into the empty nodes. But then the path of traps reactivates, one trap after another, needing another $T+6$ assets before allowing any further assets to enter the empty nodes. Now, if for some batch fewer than $T+7$ assets enter this subtree, then the subtree will not be full, resulting in a No-instance by Lemma 4. ◀ Lemma 6. In a Yes-instance in which assets have entered subtrees, no asset leaves a subtree before the final batch, and no guard vertex is deactivated twice before the final batch. Proof.
Assume that some asset leaves a subtree before the final batch, or that some guard vertex is deactivated more than once. Incorporating this into the lower bound, we find that we lose more assets than the $nT$ term allows within the required $(n+3)T+18+n(T+7)-(T+6)\lceil \frac{n(T+7)}{2T+13} \rceil$ assets that must pass. ◀ Lemma 7. Each batch that passes through the limiting infrastructure can lose at most $T+10+3n$ assets before exiting during the final batch. Proof. There are $n$ batches before the final batch and $3T+20+3n$ assets per batch. We lose $3n+1$ assets deactivating all $v_i^M$ and $T+6$ assets deactivating the final subtree. Another $T+7$ assets enter the final subtree. So we only have $T+6$ assets remaining, which must enter the element gadgets. We have $3n$ element gadgets. By the restriction $T/4 < s(y) < T/2$, exactly $3$ element gadgets must be entered; otherwise, for all the subtrees to be full, a trap would have to be deactivated more than once, which does not happen by Lemma 6. ◀ Lemma 8. If $\mathcal{J}$ is a Yes-instance, then all assets enter element gadgets before the next batch arrives. Proof. Since the path of the $v_i^M$ is much shorter than the delay between batches, the assets must enter subtrees to survive until the next batch; otherwise we lose more assets than allowed for some batch. ◀ Notice that the general idea is to set the construction up so that even an additional aggregate loss of one asset, or a single unfilled subtree, results in $0$ assets being able to pass. The above lemmas show that all the assets are only pushed through when the final batch of assets arrives. $(\rightarrow)$ If $\mathcal{J}$ is a Yes-instance, then $P$ is also a Yes-instance. If we have a Yes-instance, then all the non-trap nodes in the subtrees are filled, by the contrapositive of Lemma 4. For each batch before the final batch, $T+7$ assets enter the final subtree by Lemma 5. This deactivates all $v_i^M$. From this, we always choose $3$ subtrees, as discussed in Lemmas 7 and 8.
So, for each batch, the $3$ chosen subtrees correspond to a set $Y_i$ in $P$. Since these assets only leave during the final batch by Lemma 6, no element $y \in Y$ can be chosen more than once, and the union of the $Y_i$ is $Y$. Since this applies to each batch, we have a Yes-instance of $P$. $(\leftarrow)$ If $P$ is a Yes-instance of 3-Partition, then $\mathcal{J}$ is also a Yes-instance. For each batch, we enter the $3$ element gadgets corresponding to the $3$ chosen elements in 3-Partition. Since we have a Yes-instance of 3-Partition, by the final batch all the subtrees will be filled by this plan. Then, in total, $(n+3)T+18+n(T+7)-(T+6)\lceil \frac{n(T+7)}{2T+13} \rceil$ assets reach the blocking infrastructure, which gives us one successful asset. ∎ See 6 Proof. †margin: Consider any instance $P$ of 3-Partition and construct the instance $\mathcal{J}$ of Routing Plan* corresponding to $P$ as described in Theorem 5. Now assume that there is a constant-factor approximation algorithm that runs in polynomial time. Since $k=1$ and the assets are discrete, any constant-factor approximation algorithm would have to produce at least $1$ successful asset, including on $\mathcal{J}$. But this would solve 3-Partition in polynomial time, which is a contradiction unless P = NP. ∎ Appendix B: APX-hardness of Routing Plan* Applying a slightly different reduction, we show that Routing Plan* is in general APX-hard. However, our proof (inspired by an APX-hardness proof by Viglietta [2015]) drops the structural restriction that the topology is a tree of maximum degree $3$. Theorem 7. Routing Plan* is APX-hard. Proof. For the sake of clarity and completeness, let us start with a formal definition of Routing Plan*. Routing Plan* Input: An undirected graph $G=(V,E)$, an initial and a target vertex $s,t \in V$, a set of traps $R$, a reload time function $c: R \to \mathbb{N}$, and a set of assets $A$. Question: What is the maximum number of successful assets?
In our proof, we reduce from Max 3SAT-3, which is APX-hard (for a detailed and accessible treatment of Max 3SAT-3, see a draft of the book by Demaine et al. [2025]). Here, given a constrained SAT formula $\varphi$ in which every clause has at most $3$ literals and every variable occurs at most $3$ times, we ask for the maximum number of clauses that can be satisfied by an assignment. Let $P$ be an instance of Max 3SAT-3 and $\mathcal{J}=(G,s,t,R,c,A,k)$ be an instance of Routing Plan*. We show an L-reduction from $P$ to $\mathcal{J}$. Let the number of variables in $P$ be $V$ and the number of clauses be $C$. Note that we require $V \ge 10$ so that we can reason easily about the construction in terms of degree. [Figure 4: Variable gadget for the APX-hardness reduction.] [Figure 5: Clause gadget for the APX-hardness reduction.] [Figure 6: Construction for showing APX-hardness. The formula for this construction is $(X_1 \vee \neg X_3) \wedge (X_1 \vee X_3) \wedge (\neg X_1 \vee \neg X_3)$. The blue and red lines differentiate positive and negative literals, respectively.] Construction. We reuse the gadgets defined in Theorem 5. First, we define a few more gadgets and the infrastructure for the reduction. 1. The variable gadget $v_1^i$ represents the $i$-th variable.
It first consists of a destruction gadget whose starting trap has reload time $V^4+V^3$ and whose length is $V^4+V^3-23$; the last trap of this gadget has reload time $24$. After this last trap, the path splits into two further paths, each a destruction gadget starting with reload time $23$ and of length $12$, ending with reload time $12$. We call each of these final nodes end nodes. Attached to each end node are $3$ paths of length $2$, all of whose vertices have reload time $V^3$. The variable gadget is illustrated in Figure 4. 2. The clause gadget starts with $2$ traps with reload time $4C$ arranged in a path, followed by $3$ traps with reload time $1$, also arranged in a path. The second of these reload-time-$1$ traps is connected to a single empty node; the third will later be connected to the literals of the corresponding clause. This is illustrated in Figure 5. 3. The blocking infrastructure is a destruction gadget that starts with a trap with reload time $3V+C-1$ and has length $3V$. This destruction gadget ends with a trap with reload time $C$. 4. The slowdown path is a destruction gadget starting with reload time $V^3+V^2+3V$ and ending with reload time $3V$. Now we construct the full graph $G$; a complete sample construction is shown in Figure 6. 1. First, we connect to $s$ a destruction gadget with starting reload time $V^6+V^5+3V+4C$ and length $V^6+V^5-V^3-V^2+1$; note that it has ending reload time $V^3+V^2+3V+4C$. We call this destruction gadget $v^L$ and its last node $v_a^L$. 2. We also connect all the variable gadgets directly to $s$ via the starting trap of the first destruction gadget of each $v_1^i$. 3. We connect each of the $3$ length-$2$ paths in each variable gadget to the last reload-time-$1$ trap of a clause gadget. This connection is made for all the literals of the clause that the clause gadget represents in our original problem. 4.
We then connect to $v_a^L$ the $C$ clause gadgets (at their first reload-time-$4C$ trap) and the slowdown path (at its starting trap). 5. The last node of the slowdown path and the empty nodes of all the clause gadgets are connected to the starting trap of the blocking infrastructure. 6. Finally, we set $m = V^6+V^5+3V+4C+1+V(V^4+V^3+1) = V^6+2V^5+V^4+4V+4C+1$: each variable gadget requires $V^4+V^3+1$ assets, and $V^6+V^5+3V+4C+1$ assets go through $v^L$. 7. This construction takes time polynomial in $C$ and $V$, because the number of nodes is polynomial in these variables. Conception. Choosing the path to take in a variable gadget corresponds to choosing an assignment for the variable. Note that one never wants to split the assets so as to enter both paths at the same time, nor to reroute assets from one variable gadget to another. The variable gadgets then allow assets to reach the empty node in the clause gadgets. But because of the slowdown path, the assets have to wait there a long time, by which point all assets not on an empty node will have been destroyed and the deactivated traps will have reset. This limits each clause gadget to contributing only a single successful asset. Finally, the assets that pass through the slowdown path deactivate all the traps on the blocking infrastructure, allowing any assets previously waiting on the empty nodes to pass. Correctness. We first prove the following lemmas. Lemma 9. The slowdown path is deactivated. Proof. Assume the slowdown path is not used. We set the destruction gadget $v^L$ to have length $V^6+V^5-V^3-V^2-1 > V(V^4+V^3)+V^3+V^2+3V$, which holds for $V \ge 3$. This means we cannot reroute the assets to pass $v^L$ twice in $2$ batches. We cannot pass any variable gadgets either, as $V^4+V^3 > V^3+V^2+3V$. So we have an additional $V^3+V^2+3V$ assets which can pass to the clause gadgets.
By a counting argument, since passing one asset through a clause gadget requires $2$ assets from the variable gadgets, at most $3V$ assets can pass from the clause gadgets to the blocking infrastructure. We have $0$ assets coming from the slowdown path, since by assumption it is not used. Since at least $3V+1$ assets must arrive at the blocking infrastructure for any asset to reach $t$, we have $k=0$. ◀ Lemma 10. Some variable gadget and some clause gadget is used (deactivated). Proof. Assume no variable gadget is used. Then $0$ assets can pass through any clause gadget because of the $2$ reload-time-$1$ traps. No assets make it through the slowdown path either, as at most $3V$ assets arrive there at a time before a long delay, while there are $3V$ traps on the blocking infrastructure. The proof for the case of no clause gadget being deactivated is the same. ◀ Next, each variable gadget can choose only one of either a true or a false assignment. If we assume this is not the case, then after choosing the first assignment, the traps with reload time $V^3$ must be used before the next group of assets arrives for the second assignment. This means the traps on the clause gadgets must also hold assets before the reload time of the variable gadget resets. By the time the next group of assets arrives for the assignment of the variables, the traps of the clause gadget will already have reset, and no assets can pass again. Now, we show that each clause can contribute only one asset, which is stored on the final regular vertex of the clause gadget. Assume that we can contribute more than one asset from a clause gadget. Then the assets must pass through the $3$ reload-time-$1$ traps. But afterwards, because of the very long slowdown path, they must first wait until the assets on that path are sufficiently close. By then the trap will reactivate and destroy one of the assets, since there is no place to store them.
Neither can we wait at the start of the clause gadget, for the same reason: by the time the assets on the slowdown path reach its end, the trap will have reset. This reduction preserves the optimal value, since any plan with a maximum of $k$ successful assets can be converted into an assignment with exactly $k$ true clauses by setting the corresponding literals, and vice versa. Because of this one-to-one correspondence between Routing Plan* instances with a maximum of $k$ successful assets and Max 3SAT-3 instances with a maximum of $k$ true clauses, this reduction is an L-reduction, and this completes the proof. ∎
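For completeness, the two expressions given above for the asset count $m$ in the APX-hardness construction can be checked numerically (a quick algebra check of our own; $V$ and $C$ are treated simply as positive integers):

```python
# Verify m = V^6 + V^5 + 3V + 4C + 1 + V*(V^4 + V^3 + 1)
#          = V^6 + 2V^5 + V^4 + 4V + 4C + 1
# over a range of values; the identity is plain polynomial expansion.
for V in range(1, 30):
    for C in range(1, 30):
        lhs = V**6 + V**5 + 3*V + 4*C + 1 + V*(V**4 + V**3 + 1)
        rhs = V**6 + 2*V**5 + V**4 + 4*V + 4*C + 1
        assert lhs == rhs
print("m identity verified")
```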