
Paper deep dive

Language Model Teams as Distributed Systems

Elizabeth Mieczkowski, Katherine M. Collins, Ilia Sucholutsky, Natalia Vélez, Thomas L. Griffiths

Year: 2026 · Venue: arXiv preprint · Area: cs.MA · Type: Preprint · Embeddings: 64

Abstract

Large language models (LLMs) are growing increasingly capable, prompting recent interest in LLM teams. Yet, despite increased deployment of LLM teams at scale, we lack a principled framework for addressing key questions such as when a team is helpful, how many agents to use, how structure impacts performance, and whether a team is better than a single agent. Rather than designing and testing these possibilities through trial and error, we propose using distributed systems as a principled foundation for creating and evaluating LLM teams. We find that many of the fundamental advantages and challenges studied in distributed computing also arise in LLM teams, highlighting the rich practical insights that can come from the cross-talk of these two fields of study.

Tags

ai-safety (imported, 100%) · csma (suggested, 92%) · preprint (suggested, 88%)

Links

PDF not stored locally. Use the link above to view on the source site.

Intelligence

Status: succeeded | Model: google/gemini-3.1-flash-lite-preview | Prompt: intel-v1 | Confidence: 94%

Last extracted: 3/13/2026, 12:08:18 AM

Summary

The paper proposes a principled framework for designing and evaluating LLM teams by treating them as distributed systems. It identifies four core shared properties—independence, communication, concurrency, and fallibility—and demonstrates that classical distributed computing principles, such as Amdahl's Law, can predict the performance and scalability limits of multi-agent LLM systems.

Entities (4)

Elizabeth Mieczkowski · researcher · 99%
Distributed Systems · theoretical-framework · 98%
LLM teams · system-architecture · 98%
Amdahl's Law · mathematical-model · 95%

Relation Signals (2)

LLM teams share properties with Distributed Systems

confidence 95% · LLM teams share four core properties with distributed systems: independence, concurrency, communication, and fallibility.

Amdahl's Law predicts performance of LLM teams

confidence 92% · Figure 2 shows that Amdahl's Law provides a clear bound on the efficiency gains achievable by LLM teams.

Cypher Suggestions (2)

Map the relationship between distributed system properties and LLM team challenges · confidence 90% · unvalidated

MATCH (d:Entity {type: 'Distributed System Property'})-[r:CAUSES_CHALLENGE]->(c:Entity {type: 'LLM Challenge'}) RETURN d.name, type(r), c.name

Find all theoretical frameworks applied to LLM teams · confidence 85% · unvalidated

MATCH (t:Entity {name: 'LLM teams'})-[:APPLIES_FRAMEWORK]->(f:Entity) RETURN f.name

Full Text

63,428 characters extracted from source content.


LANGUAGE MODEL TEAMS AS DISTRIBUTED SYSTEMS

Elizabeth Mieczkowski 1*, Katherine M. Collins 1,2,3, Ilia Sucholutsky 4, Natalia Vélez 1, Thomas L. Griffiths 1

1 Princeton University, 2 Massachusetts Institute of Technology, 3 University of Cambridge, 4 New York University

ABSTRACT

Large language models (LLMs) are growing increasingly capable, prompting recent interest in LLM teams. Yet, despite increased deployment of LLM teams at scale, we lack a principled framework for addressing key questions such as when a team is helpful, how many agents to use, how structure impacts performance, and whether a team is better than a single agent. Rather than designing and testing these possibilities through trial and error, we propose using distributed systems as a principled foundation for creating and evaluating LLM teams. We find that many of the fundamental advantages and challenges studied in distributed computing also arise in LLM teams, highlighting the rich practical insights that can come from the cross-talk of these two fields of study.

Keywords: Large language models · collaboration · LLM teams · distributed computing

1 Introduction

"Adding manpower to a late software project makes it later." – Frederick P. Brooks, The Mythical Man-Month [1]

The quest for artificial intelligence (AI) has long captivated human minds [2], and the remarkable capabilities of large language models (LLMs) have sparked widespread excitement that we are approaching this goal. Yet single models remain fundamentally limited in memory, context, and ability. A natural response is to compose them into teams, or collections of interacting agents that can divide work, communicate, and pool their individual resources. The promise is compelling: just as people achieve far more together than alone [3,4], teams of LLM agents could extend capabilities beyond those of any single monolithic model.
However, achieving this promise requires addressing several questions: when do teams help, when do they hurt, and how should they be designed and deployed to maintain performance and efficiency? Failing to address these questions appropriately can have substantial implications. Individual LLM calls are already resource-intensive, demanding substantial compute and energy to query and racking up hefty monetary costs to users. These resource demands grow rapidly as agents exchange messages and iterate on shared work. Without careful orchestration to address the coordination challenges raised by teams [5], agents may overwrite one another, produce redundant outputs, conflict in decisions, propagate errors through chains of reasoning, and reinforce incorrect conclusions through mutually sycophantic exchanges.

The journey from single- to multi-agent LLM systems mirrors the historical evolution of computing itself [6,7]. Early computing systems relied on single processors, but as demands for scale grew, engineers turned to distributed architectures that combined many machines to achieve greater capacity and robustness (Figure 1A). However, this also brought about new challenges that have defined core research topics in distributed computing for decades, such as issues of coordination, consistency, and task assignment (Figure 1B). LLM teams appear to be following a similar path; as single models approach limits in context and reliability, there is increasing interest in assembling teams of interacting agents to overcome their individual constraints. However, the design and deployment of these teams have revealed many additional complexities, such as limited scaling depending on the task [8,9], reduced collective problem-solving abilities [10], and amplification of errors, false confidence, and confusion [11].
* Code available at https://github.com/emieczkowski/distributed-llm-teams

arXiv:2603.12229v1 [cs.MA] 12 Mar 2026

Viewing LLM teams through the lens of distributed systems (Figure 1C) both explains observed inefficiencies and generates actionable solutions, providing principled design guidelines, testable hypotheses, and normative benchmarks for evaluating team behavior across varying tasks and conditions. Rather than optimizing LLM teams through trial and error, this framework offers a rigorous foundation for understanding where coordination breaks down and how to improve it.

In this paper, we outline concrete guidelines from distributed computing for designing and evaluating efficient LLM teams, backed by initial empirical demonstrations. We begin by reviewing existing approaches to LLM team design and their limitations (Section 2), then establish the formal correspondence between LLM teams and distributed systems (Section 3). Section 4 provides an empirical test of this analogy. We first show that LLM teams collaborating in simplified settings with pre-assigned tasks exhibit efficiency gains that generally mirror classic scaling laws in distributed computing. We then extend this evaluation to self-coordinating decentralized teams, finding that LLM teams exhibit the same challenges, such as consistency conflicts, architectural tradeoffs, communication overhead, and stragglers, that distributed systems theory anticipates. Finally, Section 5 outlines future directions and limitations of this framework.

These connections position distributed computing as a principled foundation for designing and evaluating LLM teams. As these teams are deployed, failure to address these scalability and coordination concerns risks more than inefficient computation. Poorly designed systems may waste enormous computational resources while producing unreliable outputs.
Because these failures emerge from interactions among agents, they can be difficult to diagnose or correct without principled design frameworks. A formal foundation for LLM team architectures, and for deciding whether to implement a team in the first place, therefore offers a path toward deploying these systems efficiently, robustly, and responsibly at scale.

2 Background

2.1 The rise of LLM teams

LLM teams, or multi-agent systems in which multiple language model agents coordinate through task decomposition, role specialization, and inter-agent communication, are increasingly being deployed in both research and production. Experimental studies show that sufficiently capable LLM agents can successfully cooperate in simple economic settings like common-pool resource problems [12]. Researchers have argued that such teams may be particularly valuable for scientific discovery, where distributing subtasks across agents can increase effective capacity, enable parallel exploration, and reduce individual errors through cross-checking [13]. Several frontier labs are deploying agentic coding teams that allow users to coordinate multiple LLM instances working together on shared coding tasks [14,15]. Despite this enthusiasm, there is a growing need to evaluate these systems rigorously, both in terms of individual models [16] and their emergent collective behaviors [17].

2.2 Benefits, limitations, and risks of LLM teams

There is growing empirical evidence that the performance of multi-agent LLM teams relative to single agents is mixed and highly task-dependent. On one hand, teams can outperform individual agents by aggregating complementary reasoning paths, enabling cross-checking, and distributing long-horizon context across multiple agents. LLM teams have demonstrated impressive scientific discovery capabilities, including the design of novel nanobodies [18] and autonomous hypothesis-generation pipelines for chemistry [19].
Controlled studies further show that ensembles of interacting LLM agents can improve accuracy by enabling diversity and debate [20,21], and by decomposing contexts that would exceed a single agent's limits across multiple agents [22]. Early benchmarks reinforced this optimistic view: increasing the number of agents answering a question and asking them to vote on the correct output often increased accuracy [23]. However, subsequent work has identified important limitations and failure modes. Performance gains from adding agents are neither linear nor guaranteed; instead, they depend strongly on factors such as agent heterogeneity and coordination structure [10,8]. Multi-agent systems tend to benefit on parallelizable or decomposable tasks but can degrade on sequential or tightly coupled workflows due to communication overhead and coordination errors [9]. Moreover, LLM agents are often biased towards being helpful and agreeable rather than honest [24], which may suppress productive disagreement and reduce collective problem-solving quality [10]. Interacting agents can also amplify misinformation and adversarial behavior, uncontrollably consuming resources and reporting task completion despite failures [11]. Together, these findings suggest that the advantages of LLM teams are conditional rather than universal, motivating a deeper analysis of when and why teams outperform single agents.

Figure 1: LLM Teams as Distributed Systems. Distributed computing provides a principled framework for analyzing and designing LLM teams. A. Both LLM team and distributed systems research pursue similar goals: leveraging scalability to improve performance and achieving fault tolerance through mechanisms such as redundancy, replication, and consensus. B. At the same time, LLM teams inherit fundamental complexities familiar from distributed systems but absent in single-agent settings, including consistency conflicts, architectural trade-offs, communication overhead, stragglers, task scheduling, and increased compute, energy, and monetary costs. C. LLM teams share four core properties with distributed systems: independence (each agent or node operates on local context without automatic access to global state); concurrency (multiple agents or nodes execute tasks simultaneously); communication (information is exchanged through message passing); and fallibility (agents or nodes may produce errors or undergo faults).

2.3 Existing design approaches

A major line of work seeks to improve LLM teams by drawing inspiration from human collaboration and organizational design.
The challenges faced by LLM teams resemble many that we regularly face when collaborating with people (e.g., dividing tasks, coordinating actions, communicating). Frameworks such as MetaGPT, AgentVerse, and AutoGen explicitly structure agents into familiar roles (e.g., planner, engineer, reviewer) [25,26,27], and hierarchical coordination patterns resembling managerial workflows can improve multi-agent performance [28,29]. Beyond role-based hierarchies, other work explores alternative coordination structures. Some approaches emphasize deliberative collectives, in which multiple agents debate or critique one another to improve reasoning accuracy [20,21], though the gains from debate are sensitive to team design and communication [30]. Separate lines of work treat the structure of multi-agent communication as its own design variable, finding that topology substantially affects scaling and performance [31]. More recent work aims to automate topology selection by learning task-adaptive communication graphs [32].

2.4 The need for a formal framework

Despite rapid progress in building multi-agent LLM systems, there remains limited systematic understanding of when and why teams outperform individual models, or of how their structure should be designed for a given task. Existing design approaches largely propose coordination patterns heuristically, often by analogy to human organizations or through empirical benchmarking, rather than deriving them from formal properties of tasks or workflows. Evaluations also focus predominantly on accuracy, leaving questions of efficiency, cost, and robustness largely unaddressed. This gap makes it difficult to anticipate when LLM teams will improve performance versus introduce coordination overhead, redundancy, or failure modes. A formal framework transforms this problem.
Rather than discovering failure modes empirically after deployment, such a framework allows us to derive conditions under which teams are expected to succeed or struggle, turning design decisions that are currently made by intuition into ones that can be reasoned about, compared, and tested.

3 A framework for evaluating LLM teams as distributed systems

Single language model agents have limited capacity: context windows bound how much information they can access at once, memory limits what can be retained, and reasoning, execution, or tool use all require time. Additionally, single agents are prone to faults such as hallucinations, missing relevant context, or failing to respond, all of which disrupt the task they are responsible for. As a result, no single agent can do everything at once, and we might not want it to. These challenges are not unique to LLM agents. Individual machines suffer from similar constraints; they are bounded by memory, time, and unpredictable failures or crashes. These exact problems have driven decades of research in distributed systems, suggesting that its theories offer a natural and rigorous foundation for understanding LLM teams.

3.1 Shared properties

To begin, let us make this analogy precise. An LLM team shares four key properties of the core structure of a distributed system (Figure 1C).

1. Independence: LLM agents are independent, maintaining their own local contexts with only partial observability of the state of the broader task and team. Similarly, nodes in a distributed system operate with local memory and have no global clock or state. In both cases, agents or nodes might be heterogeneous, having different capabilities due to distinct prompts, base models, or processing times.

2. Communication: LLM teams coordinate through communication. Rather than sharing state directly, they exchange messages such as prompts to divide and integrate work.
This mirrors how nodes in a distributed system exchange packets to coordinate computation.

3. Concurrency: In an LLM team, multiple agents work on tasks simultaneously. This parallelism can increase speed and coverage, but also introduces major coordination problems: agents may act on stale information, produce conflicting outputs, or overwrite each other's progress. Distributed systems face the same problems when operating in parallel on shared data. Ensuring that all nodes maintain consistency requires synchronization protocols that determine how and when nodes exchange updates and commit results.

4. Fallibility: LLM agents can hallucinate, stall, or produce incorrect outputs that propagate through a team. Nodes in a distributed system can crash, fall out of sync, or return corrupted results. In both cases, the system must be designed to tolerate these faults gracefully.

These four properties establish a structural correspondence between LLM teams and distributed systems, offering a new analytical scaffold. This is not to say that LLM teams conform to every assumption made in distributed computing. For example, communication in LLM teams occurs in natural language rather than through fixed, formally specified protocols, making it ambiguous and shaped by pragmatic interpretation. Similarly, traditional distributed systems models often assume well-defined failure modes, whereas LLM failures can be semantic and probabilistic. We believe that these mismatches sharpen the usefulness of this analogy rather than undermine it. Where the analogy holds, we inherit decades of existing theory with which we can generate concrete predictions about LLM team behavior. Where it breaks down, the gap itself becomes informative, revealing where new theory is needed and providing a baseline against which deviations become visible and measurable.
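The four properties above can be made concrete in a toy sketch. Everything here (class and variable names, the thread-per-agent setup, the forced fault) is illustrative, not the paper's codebase:

```python
import queue
import threading

class Agent:
    """Toy model of the four shared properties: each agent keeps only
    local context (independence), runs in its own thread (concurrency),
    reports results via a message queue (communication), and may fail
    (fallibility). Hypothetical sketch, not the paper's implementation."""

    def __init__(self, name, outbox, faulty=False):
        self.name = name
        self.local_context = {}  # independence: no shared global state
        self.outbox = outbox     # communication: message passing only
        self.faulty = faulty     # fallibility: this agent will error out

    def run(self, subtask):
        if self.faulty:
            self.outbox.put((self.name, "error"))   # fault surfaces as a message
            return
        self.local_context["result"] = subtask * 2  # work happens on local state
        self.outbox.put((self.name, self.local_context["result"]))

outbox = queue.Queue()
agents = [Agent(f"agent{i}", outbox, faulty=(i == 2)) for i in range(3)]
# concurrency: all agents execute their subtasks simultaneously
threads = [threading.Thread(target=a.run, args=(i,)) for i, a in enumerate(agents)]
for t in threads:
    t.start()
for t in threads:
    t.join()
messages = sorted(outbox.queue)
print(messages)  # one result per healthy agent, one error from the faulty one
```

Note that the only way agents learn about each other's results, or failures, is by reading the queue: exactly the partial-observability and message-passing structure the analogy relies on.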
3.2 Predicting LLM team performance using this framework

Applying this framework to LLMs yields specific predictions about how behavior should change as a result of task and team structure. Here, we test two specific predictions: (1) how task structure affects the efficiency gains achievable through division of labor, and (2) how team architecture shapes coordination costs.

First, this framework yields specific predictions about how performance should change with team size depending on the task. Work in distributed systems has formally characterized how efficiency should scale depending on how much of a task can be executed simultaneously by multiple nodes [33,34,35]. We predict that the same efficiency limits should apply to LLMs, such that teams will exhibit stronger efficiency gains when tasks are highly parallel compared to ones with serial dependencies.

This framework also yields predictions about the tradeoffs between different team architectures. Centralized architectures, in which one node delegates tasks and integrates results, reduce overhead by routing communication through fewer channels. They also improve consistency by enforcing that only one agent updates shared resources or dependencies at a time. However, because task assignments are fixed, these systems are also vulnerable to variability within nodes; one slow agent (or "straggler") can delay the team as a whole. Decentralized architectures mitigate this risk by allowing tasks to be assigned dynamically, but at the cost of greater coordination overhead and an elevated risk of conflicts when multiple agents concurrently modify shared resources.
Together, these principles motivate the prediction that centralized and decentralized teams have distinct strengths and vulnerabilities: decentralized teams should exhibit greater coordination overhead (e.g., number of messages, idle rounds) and conflict rates (e.g., file conflicts, rewrites, intermediate test failures), while centralized teams should show greater vulnerability to stragglers.

We tested these correspondences by assigning teams of LLM agents to three collaborative coding tasks: implementing a math utilities library, analyzing simulated data, and rendering an SVG file (for more details, see Appendix A.1). Teams were composed of 1, 2, 3, 4, or 5 homogeneous agents drawn from a single base model (Claude Sonnet 4.6, Gemini 3-Flash, or GPT-5.2). In each of these tasks, agents were presented with twenty programming subtasks, each specifying a description, a required output, and a corresponding test file. There were three task structures per domain: a highly parallel task in which eighteen subtasks were mutually independent, a mixed task with ten subtasks forming a sequential dependency chain and the remaining ten independent, and a highly serial task with sixteen interdependent subtasks. We tested two assignment schemes. In Experiment 1, coordination challenges were minimized by pre-assigning tasks to different agents to isolate the effects of task structure on scalability. In Experiment 2, this was relaxed by prompting agents to both select which tasks they would perform and then complete them.

4 Results

4.1 Amdahl's Law predicts scalability in LLM teams

A central motivation for distributed systems is scalable performance: if large-scale computing tasks are decomposed across many nodes, increasing system size can improve efficiency in terms of completion times or throughput. However, decades of research demonstrate that scalability is often neither linear nor guaranteed [33,34,35].
Even before accounting for coordination overhead, classical scalability laws demonstrate that performance gains depend primarily on parallelizability, or the extent to which a task can be executed concurrently. The ability of multiple nodes to execute a task in parallel is constrained by the inherent structure of the task itself. Bottlenecks due to locking, sequential dependencies between subtasks, shared memory accesses, or resource contention force nodes to wait, limiting parallelism [33,36]. Amdahl's Law formalizes how these constraints limit the speedup S achievable with s available processors under a fixed workload:

\[ S_{\text{latency}}(s) = \frac{1}{(1 - p) + \frac{p}{s}}, \tag{1} \]

where p is the parallelizable fraction of the workload and 1 − p is inherently serial. For example, when 95% of a task is parallelizable, a speedup of 20× is achievable, and even thousands of nodes can contribute before efficiency plateaus. In contrast, if only 50% of a task can be parallelized, then even with infinitely many processors and no coordination overhead, the maximum speedup is only 2×, and that plateau is reached with far fewer nodes.

The same efficiency limits should apply to LLM teams. Distributing work across multiple agents can improve performance only to the extent that the underlying task is parallelizable: highly decomposable tasks with independent components should exhibit larger speedups than those with tight interdependencies or contention. Emerging evidence supports this prediction, showing that LLM teams perform better on decomposable than on sequential tasks [9]. This raises a natural question: can classical scalability laws such as Amdahl's Law mechanistically explain when scaling LLM teams yields genuine efficiency gains?

We tested this question directly in Experiment 1 using our collaborative coding platform. Parallelizability was manipulated by varying prespecified dependencies between subtasks, resulting in a set of highly parallel, mixed, and sequential tasks.
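Equation (1) is easy to explore numerically. A minimal sketch (the function name is ours, not the paper's):

```python
def amdahl_speedup(p: float, s: int) -> float:
    """Eq. (1): theoretical speedup with s processors when a fraction p
    of the workload is parallelizable and 1 - p is inherently serial."""
    return 1.0 / ((1.0 - p) + p / s)

# p = 0.95: the ceiling is 1 / (1 - p) = 20x, nearly reached by s = 4096
print(amdahl_speedup(0.95, 4096))
# p = 0.5: the ceiling is 2x no matter how many processors are added
print(amdahl_speedup(0.5, 4096))
```

As s grows, the p/s term vanishes and the speedup saturates at 1 / (1 − p), which is exactly the 20× and 2× ceilings quoted above.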
To simplify coordination, we preallocated tasks to agents to isolate the effect of task structure on scalability. We measured end-to-end wall-clock completion time as a function of team size, and calculated speedup as the ratio of single-agent to N-agent completion time, S(N) = T(1)/T(N), allowing direct comparison between observed speedups and the limits predicted by Amdahl's Law.

Figure 2 shows that Amdahl's Law provides a clear bound on the efficiency gains achievable by LLM teams. As predicted, speedup differed across parallelizability conditions (Kruskal-Wallis: H = 61.4, p < 0.001). Highly parallel tasks benefited most from distributing work across agents, where independent subtasks allowed clean partitioning and balanced workloads. Tasks with mixed dependencies generally exhibited less speedup as the number of agents increased, and tasks with highly serial dependencies gained essentially no improvement. Pairwise comparisons confirmed the predicted ordering (p = 0.9 > p = 0.5: Mann-Whitney U = 18493, p < 0.001; p = 0.5 > p = 0.2: U = 17847, p < 0.001). However, even in the highly parallel condition, speedup remained significantly below the Amdahl bound (Wilcoxon signed-rank, M = 2.19×, p < 0.001). GPT-5.2 and Gemini 3-Flash were the main drivers of this effect; Claude Sonnet 4.6 alone did not significantly plateau below the bound (p = 0.45).

Figure 2: Scalability. A comparison of LLM team scalability to Amdahl's Law, which predicts theoretical speedup based on the proportion of serial dependencies in a task. Teams of agents were given preassigned tasks of three types (coding a math utilities library, creating a data analysis pipeline, and SVG rendering) and three dependency structures (parallel, mixed, or serial). Each trial type was repeated five times to account for variance in API latency, and efficiency was measured using wall-clock time in seconds.
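The speedup measurement S(N) = T(1)/T(N) can be sketched as follows; the wall-clock times here are invented for illustration, not the paper's data:

```python
def amdahl_bound(p: float, n: int) -> float:
    # Theoretical ceiling from Eq. (1) with n agents
    return 1.0 / ((1.0 - p) + p / n)

def observed_speedup(times: dict) -> dict:
    """S(N) = T(1) / T(N), from wall-clock completion times keyed by team size."""
    t1 = times[1]
    return {n: t1 / t for n, t in sorted(times.items())}

# Illustrative (made-up) wall-clock times in seconds for one highly
# parallel task; the actual measurements appear in the paper's Figure 2.
times = {1: 300.0, 2: 170.0, 3: 125.0, 4: 105.0, 5: 95.0}
for n, s in observed_speedup(times).items():
    print(f"N={n}: observed {s:.2f}x vs. Amdahl bound {amdahl_bound(0.9, n):.2f}x")
```

Comparing the two columns for each N is precisely the bound-versus-observation comparison the analysis performs: observed speedups that sit below the p = 0.9 curve indicate coordination losses beyond the task's serial fraction.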
Speedup represents how much faster a team completed its task compared to the one-agent baseline. Highly parallel tasks generally benefited more from scaling team size than mixed or serial tasks, as predicted by Amdahl's Law, although results depended on model type.

4.2 Tradeoffs arise with architectural choices

Distributed systems also offer insight into the coordination challenges that arise when tasks and information are divided between nodes. Architectural choices introduce tradeoffs between efficiency, consistency, and robustness. System architectures range from centralized, where a single coordinator manages shared state and task assignment, to decentralized, where components coordinate through local decisions and communication [7]. Each approach has well-known trade-offs. Centralized coordination simplifies consistency challenges, but introduces communication bottlenecks and single points of failure. Decentralized coordination improves robustness and parallelism, but increases negotiation overhead, redundancy, and the risk of conflicting decisions. In LLM teams, similar decisions must be made about whether a central leader (whether human or LLM) should assign tasks to agents, or whether agents should autonomously decide how to allocate work among themselves.

In Experiment 2, we tested the performance of decentralized LLM teams on our suite of collaborative coding tasks. Rather than pre-assigning tasks to agents, agents were given the ability to self-claim them while communicating with their teammates. We again tested teams of 1, 2, 3, 4, or 5 agents to directly compare the performance of these self-coordinating teams to the centralized ones in Experiment 1. Figure 3A shows that decentralized coordination often reduces efficiency relative to the centralized teams in Figure 2. Across all runs, preassigned teams achieved significantly higher speedup than decentralized teams (Mann-Whitney U = 155523, p < 0.001), with median speedups of 1.36× versus 0.88× respectively.
This pattern held within each model individually (all p ≤ 0.01). In the following sections, we examine whether the predicted costs of decentralization, specifically consistency conflicts and communication overhead, contributed to these efficiency differences.

4.3 Coordination leads to consistency conflicts

One of the most fundamental challenges that arises from parallel coordination is maintaining consistency: the degree to which different nodes observe a coherent view of shared state despite concurrent updates and communication delays [7]. In order for multiple nodes to make simultaneous progress on a task, distributed systems commonly replicate a shared state (e.g., data, files, memory, or intermediate results) across them. However, once multiple replicas of the same information exist, they may be updated independently and at different times. As a result, nodes can temporarily hold different versions of what is intended to be the same underlying information, which must be reconciled to execute the task correctly [37,38]. When one node modifies information that another node is still using, their views of the task can diverge, producing conflicts and downstream errors.

Figure 3: Self-coordinating (decentralized) LLM teams. In Experiment 2, agents needed to not only complete tasks but also decide on assignments themselves. A. Scalability: Speedup is substantially lower for self-coordinating than preassigned teams due to consistency conflicts and communication overhead. This difference is especially stark for highly parallel tasks. B.
Consistency conflicts: In self-coordinating teams, agents exhibit conflicts like writing to the same file simultaneously (pink), rewriting a file that another agent previously wrote (yellow), and attempting to complete a function before its dependencies have been finished (brown). These problems do not arise when tasks are preassigned by a central coordinator. C. Test failures: Failed test cases per round reveal that decentralized teams exhibit higher rates of intermediate failure due to these conflicts.

LLM teams similarly operate over a shared state that must be constantly updated as agents progress, such as task plans, code repositories, analysis outputs, design documents, and evolving context representations. Because multiple agents may read, modify, and use these shared states concurrently, maintaining adequate consistency becomes a central challenge for LLM teams. Otherwise, agents may overwrite each other's work, make changes that break downstream tasks, or make progress with an outdated version of results.

Without clear protocols for assignment, we observed three types of consistency violations directly in the decentralized teams in Experiment 2 (Figure 3B). Decentralized teams exhibited a substantial number of concurrent writes, in which two or more agents edit the same file simultaneously and silently overwrite each other's work. We also observed a significant number of rewrites, in which an agent would completely overwrite a file written by a teammate in a previous round. Finally, we observed temporal consistency violations, in which an agent would attempt to implement a task out of order, before its predecessor had been implemented. Because of these consistency conflicts, decentralized teams produced substantially more failed tests (Figure 3C). Across all runs, decentralized teams generated significantly more test failures than preassigned teams (Mann-Whitney U = 287013, p < 0.001).
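The three violation types can be illustrated with a toy log classifier. The event format, function name, and example log are hypothetical reconstructions of the idea in Figure 3B, not the paper's instrumentation:

```python
def classify_conflicts(write_log, deps):
    """Classify consistency violations from a write log.

    write_log: list of (round, agent, filename) events
    deps: {filename: prerequisite_filename} dependency edges
    Returns (concurrent_writes, rewrites, temporal_violations).
    """
    concurrent, rewrites, temporal = [], [], []
    last_writer = {}  # filename -> (round, agent) of the last write
    written = set()
    for rnd, agent, fname in sorted(write_log):
        if fname in last_writer:
            prev_rnd, prev_agent = last_writer[fname]
            if prev_rnd == rnd and prev_agent != agent:
                concurrent.append((rnd, fname))   # simultaneous edits, same round
            elif prev_agent != agent:
                rewrites.append((rnd, fname))     # overwrote a teammate's earlier file
        prereq = deps.get(fname)
        if prereq and prereq not in written:
            temporal.append((rnd, fname))         # implemented before its dependency
        written.add(fname)
        last_writer[fname] = (rnd, agent)
    return concurrent, rewrites, temporal

log = [(1, "a1", "task_1.py"), (1, "a2", "task_1.py"),
       (2, "a3", "task_2.py"), (3, "a1", "task_2.py"),
       (1, "a3", "task_4.py")]
deps = {"task_4.py": "task_3.py"}
print(classify_conflicts(log, deps))
```

On this toy log, the classifier flags one concurrent write (task_1.py in round 1), one rewrite (task_2.py in round 3), and one temporal violation (task_4.py written before task_3.py exists).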
Median failures were 19 for decentralized teams compared to 4 for preassigned teams, and this pattern held within each model individually (all p < 0.001). These findings are consistent with the distributed systems prediction that decentralized teams will pose a greater risk of consistency conflicts.

Figure 4: Coordination overhead. Decentralized teams introduce greater coordination overhead, which worsens with more collaborators. A. Communication costs: Each line represents the difference in the number of messages sent when tasks are preassigned vs. decentralized. B. Idle costs: Each line represents the difference in agents remaining idle when tasks were preassigned versus decentralized. Importantly, these agents were still using tokens and sending messages; they just did not complete a task within an idle round.

4.4 Larger and decentralized teams incur compounding overhead

In order for interconnected nodes to coordinate effectively, they must send packets of information (or messages) back and forth to update each other. Sending these messages is not instantaneous, thus incurring substantial costs and overhead. In distributed systems, these communication costs typically depend on how nodes are structured [39, 7]. Decentralized architectures require agents to communicate directly with each other, increasing coordination overhead. In the worst case, n agents can induce O(n²) communication channels, resulting in both delays in processing and a quadratic increase in messages exchanged. In contrast, centralized architectures mediate interactions through a single coordinator, reducing coordination complexity to O(n), as each agent communicates with the central leader.
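The channel-count comparison can be sketched directly; this is an illustrative calculation, not code from the paper:

```python
def decentralized_channels(n: int) -> int:
    # Peer-to-peer topology: every pair of agents may open a direct
    # channel, so the count grows quadratically, O(n^2).
    return n * (n - 1) // 2

def centralized_channels(n: int) -> int:
    # Hub-and-spoke topology: each agent talks only to the coordinator,
    # so the count grows linearly, O(n).
    return n

for n in (2, 5, 10):
    print(n, decentralized_channels(n), centralized_channels(n))
# n=5:  10 peer channels vs. 5 coordinator channels
# n=10: 45 peer channels vs. 10 coordinator channels
```

The quadratic term dominates quickly: doubling the team roughly quadruples the number of peer-to-peer channels that must be maintained.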
Similarly, LLM teams must exchange messages with each other as tasks progress, an overhead that should theoretically worsen in decentralized teams. To test this correspondence directly, we measured coordination overhead in two complementary ways (Figure 4). First, decentralized teams exchanged substantially more messages than ones with preassigned tasks (Mann-Whitney U = 311551, p < 0.001), and these message counts increased with team size (r = 0.483, p < 0.001). Second, self-coordinating teams suffered from more idle rounds, or interaction steps in which agents communicated but did not complete any task progress (Mann-Whitney U = 289672, p < 0.001). These idle rounds reflected delays and contention introduced by peer-to-peer coordination, such as agents trying to work on the same tasks but being blocked or sending messages back and forth about what to do next. Thus, as predicted by prior findings in distributed computing, decentralized LLM teams accumulate substantially more communication and coordination overhead than preassigned teams, which often worsens with more agents.

4.5 Decentralized teams mitigate straggler delays

Another well-documented source of inefficiency in distributed systems is the presence of stragglers: processors that take an unusually long or unpredictable amount of time to complete their assigned task [40, 41]. Because many distributed computations synchronize after set phases, overall progress is frequently determined by the slowest workers. In LLM teams, agents must similarly synchronize their outputs (e.g., implementing subtasks before testing, merging results), so downstream progress depends on all agents completing their assigned work. Variability in reasoning time, tool latency, or context complexity can therefore produce LLM stragglers whose delays stall the entire team. Distributed systems mitigate this problem through replication [42].
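The replication idea can be sketched as speculative execution: launch the same task on several workers and keep whichever finishes first. A minimal illustration using asyncio with simulated worker delays (not the paper's implementation):

```python
import asyncio

async def run_task(worker_id: int, delay: float) -> str:
    # Simulated worker; larger delays model stragglers.
    await asyncio.sleep(delay)
    return f"result-from-worker-{worker_id}"

async def speculative(delays: list[float]) -> str:
    # Launch replicas of the same task on every worker, accept the
    # earliest completion, and cancel the stragglers (the MapReduce
    # "backup task" idea).
    tasks = [asyncio.create_task(run_task(i, d)) for i, d in enumerate(delays)]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()
    return done.pop().result()

result = asyncio.run(speculative([0.05, 0.01, 0.2]))
print(result)  # the fastest replica's result
```

The tradeoff is wasted work: every cancelled replica still consumed resources up to the cancellation point, which for LLM agents means duplicated token spend.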
Algorithms like MapReduce duplicate slow or late-stage tasks across multiple workers and accept the earliest completion [40, 41]. A similar advantage emerged in self-coordinating teams in Experiment 2 (Figure 5). When agents were not locked into fixed assignments, they could flexibly pick up unfinished tasks, allowing faster agents to compensate for slower ones.

Figure 5: Straggler analysis. When task assignments are fixed (preassign), performance is more susceptible to agent variability in the form of stragglers: agents that take substantially longer to complete their assigned tasks. This gap arises more often with models that exhibit greater variance in API latency, such as Claude Sonnet 4.6 and GPT-4.1 (see vertical axes), and worsens on mixed or serial tasks where workloads are naturally uneven. When task assignments are decentralized, work can be dynamically reallocated when one agent stalls. The straggler gap is quantified as the difference between the maximum and mean latency within each round, or how many extra seconds the average agent waited for the slowest teammate. Error bars represent standard deviation.

To measure straggler impact directly, we computed a straggler gap, or the difference between the slowest agent’s completion time and the mean completion time of the other agents. This gap was significantly larger in preassigned than decentralized teams (Mann-Whitney U = 8889359, p < 0.001), with a median delay of 2.64 seconds compared to 1.42 seconds. The effect was consistent across models (all p < 0.001), and increased with team size in both coordination modes (preassign: r = 0.23, p < 0.001; decentralized: r = 0.35, p < 0.001). Finally, this gap was significantly larger for mixed and serial tasks than for parallel ones within the preassign condition (KW H = 137.5, p < 0.001; mixed vs.
parallel: median 3.91s vs. 1.73s, MWU p < 0.001), consistent with the intuition that a single straggler on the sequential dependency chain forces all downstream work to wait. This finding provides further evidence that core tradeoffs in distributed systems also emerge in LLM teams. While fixed assignments and centralized coordination can reduce overhead and improve consistency, they are more vulnerable to variability across agents, a disadvantage that becomes particularly pronounced in tasks with greater dependencies.

Taken together, these results suggest that LLM teams are vulnerable to coordination challenges similar to those faced by distributed systems. In this way, distributed systems offer basic design principles that can be used to understand the tradeoffs faced by centralized and decentralized teams, and to select the team architecture best suited for a particular task.

4.6 Cost-efficiency tradeoffs

Finally, a long-recognized challenge in distributed systems is that splitting work across machines incurs additional computational costs. Scaling a system requires increased energy consumption and operational resources, resulting in an inevitable tradeoff between performance and energy or budget [43]. This tradeoff clearly arises in LLM teams. While certain tasks benefit substantially from LLM teams in terms of wall-clock time, distributing work across multiple agents introduces additional computational costs, including token usage, budget, and energy consumption. Because teams of agents communicate, synchronize, manage consistency, and occasionally cheer each other on (see Appendix A.2), token consumption often outpaces speedup depending on task structure and model (Table 1). We computed the per-run difference between token multiplier and speedup, where positive values indicate costs outpacing speedup. Preassigned teams showed a small but significant excess overall (Wilcoxon signed-rank p = 0.013, median = 0.02), driven primarily by the inefficiency of teams performing serial tasks (serial: mean token multiplier 5.83× vs. speedup 1.13×, p < 0.001).

Table 1: Efficiency–cost tradeoffs across task structures and models. Each cell reports speedup / token multiplier relative to a single agent (N = 1). Claude = Claude-Sonnet-4-6, Gemini = Gemini-3-Flash, GPT = GPT-5.2; Par. = parallel, Mix. = mixed, Ser. = serial.

Preassigned

| N | Claude Par. | Claude Mix. | Claude Ser. | Gemini Par. | Gemini Mix. | Gemini Ser. | GPT Par. | GPT Mix. | GPT Ser. |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 1.00x / 1.00x | 1.00x / 1.00x | 1.00x / 1.00x | 1.00x / 1.00x | 1.00x / 1.00x | 1.00x / 1.00x | 1.00x / 1.00x | 1.00x / 1.00x | 1.00x / 1.00x |
| 2 | 1.80x / 1.57x | 1.50x / 0.97x | 0.76x / 3.96x | 0.59x / 11.76x | 0.60x / 2.73x | 1.18x / 1.63x | 1.12x / 1.84x | 1.26x / 0.74x | 0.96x / 1.49x |
| 3 | 2.42x / 1.06x | 1.14x / 1.46x | 0.59x / 15.39x | 1.13x / 4.98x | 1.26x / 2.51x | 0.68x / 4.25x | 1.60x / 1.26x | 1.10x / 1.89x | 0.61x / 6.24x |
| 4 | 2.78x / 1.16x | 1.48x / 1.36x | 0.65x / 7.02x | 1.64x / 3.63x | 1.47x / 1.93x | 0.53x / 4.60x | 1.44x / 1.89x | 1.25x / 3.03x | 1.02x / 5.66x |
| 5 | 3.35x / 1.33x | 1.30x / 3.04x | 0.79x / 5.76x | 1.14x / 7.53x | 0.94x / 2.72x | 1.12x / 6.87x | 1.36x / 2.48x | 1.14x / 2.55x | 0.96x / 3.40x |

Decentralized

| N | Claude Par. | Claude Mix. | Claude Ser. | Gemini Par. | Gemini Mix. | Gemini Ser. | GPT Par. | GPT Mix. | GPT Ser. |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 1.00x / 1.00x | 1.00x / 1.00x | 1.00x / 1.00x | 1.00x / 1.00x | 1.00x / 1.00x | 1.00x / 1.00x | 1.00x / 1.00x | 1.00x / 1.00x | 1.00x / 1.00x |
| 2 | 1.50x / 1.04x | 0.80x / 1.71x | 0.80x / 1.48x | 0.85x / 2.02x | 0.68x / 1.99x | 0.76x / 1.84x | 1.02x / 0.95x | 1.27x / 1.29x | 0.86x / 1.58x |
| 3 | 0.94x / 3.66x | 0.73x / 2.34x | 0.99x / 1.45x | 0.68x / 3.43x | 0.79x / 2.45x | 0.62x / 2.85x | 1.08x / 1.06x | 1.19x / 1.61x | 0.81x / 2.32x |
| 4 | 0.56x / 3.91x | 0.68x / 3.75x | 0.70x / 3.35x | 0.80x / 3.01x | 0.71x / 3.55x | 0.56x / 4.04x | 1.30x / 1.44x | 1.14x / 2.21x | 0.74x / 3.30x |
| 5 | 0.83x / 4.65x | 0.74x / 3.00x | 0.65x / 6.24x | 0.84x / 4.20x | 0.69x / 3.86x | 0.56x / 4.94x | 1.09x / 2.80x | 0.98x / 2.39x | 0.74x / 3.33x |
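The per-run gap metric (token multiplier minus speedup) can be sketched as follows; the variable names are illustrative, not taken from the paper's code:

```python
def token_multiplier(tokens_team: int, tokens_solo: int) -> float:
    # How many times more tokens the team consumed than a single agent.
    return tokens_team / tokens_solo

def speedup(time_solo: float, time_team: float) -> float:
    # Wall-clock speedup relative to a single agent.
    return time_solo / time_team

def efficiency_gap(tokens_team, tokens_solo, time_solo, time_team):
    # Positive values mean token costs outpaced the speedup gained.
    return token_multiplier(tokens_team, tokens_solo) - speedup(time_solo, time_team)

# Hypothetical run: the team burns 4x the tokens but runs only 1.6x faster.
gap = efficiency_gap(tokens_team=400_000, tokens_solo=100_000,
                     time_solo=80.0, time_team=50.0)
print(round(gap, 2))  # 2.4
```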
Decentralized teams showed a consistently larger gap across all parallelizability conditions (Wilcoxon p < 0.001, median = 1.17), with token costs scaling strongly with team size (Spearman ρ = 0.40, p < 0.001) while speedup did not (ρ = −0.07, p = 0.15). These findings highlight that deciding to deploy LLM teams is not purely a performance question in terms of speed or accuracy, but an efficiency-cost optimization problem as well, one that can have substantial implications for energy consumption and budgets. Extensions of scaling laws have previously incorporated resource costs when evaluating parallel systems [44]. Analogous formulations may therefore provide a useful framework for evaluating LLM team architectures under token or compute budgets.

5 Discussion

LLM teams are being increasingly deployed in practice, yet their design remains largely ad-hoc. Teams are assembled, tested, and adjusted through trial and error, with little principled guidance for when coordination will succeed or fail. We argue that distributed systems theory can give us this foundation. The challenges that arise when agents collaborate, such as maintaining consistency, managing synchronization, reducing communication overhead, and absorbing failures, are not novel or unique to LLM teams. They are well-characterized problems that the distributed systems community has spent decades investigating. We propose that distributed systems provide a conceptual framework for designing LLM teams, anticipating their limits, and diagnosing how they fail. We demonstrate this framework’s utility across two domains. First, we show that the scalability limits predicted by Amdahl’s Law arise in LLM teams just as they do in distributed systems (Experiment 1). Second, we show that coordination challenges inherent to distributed systems emerge in LLM teams as well (Experiment 2).
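Amdahl's Law, referenced throughout, bounds the speedup of a task with parallel fraction p on n workers; the function below is the textbook formula, and the p values match the paper's three task conditions:

```python
def amdahl_speedup(p: float, n: int) -> float:
    # Amdahl's Law: the serial fraction (1 - p) caps achievable speedup
    # at 1 / (1 - p), no matter how many workers are added.
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.9, 0.5, 0.2):  # the paper's highly parallel / mixed / serial conditions
    print(p, [round(amdahl_speedup(p, n), 2) for n in (1, 2, 5)])
# 0.9 [1.0, 1.82, 3.57]
# 0.5 [1.0, 1.33, 1.67]
# 0.2 [1.0, 1.11, 1.19]
```

Even a perfectly coordinated five-agent team on the serial condition (p = 0.2) is bounded below a 1.2x speedup, before any coordination overhead is paid.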
Across both, we characterize a fundamental architectural tension between centralized and decentralized team structures, a tradeoff inherited from distributed computing that anyone designing or deploying LLM teams must navigate. Finally, we show that beyond raw speedup, LLM teams introduce meaningful computational overhead in token usage.

5.1 Limitations and future directions

Our experiments provide evidence that supports parallels between LLM teams and distributed systems, but there are many potential avenues for extending this framework in the future. Our tasks relied on prespecified dependency structures, which depart from real-world settings where dependencies must be inferred or discovered dynamically. Generalizing to tasks such as text analysis, research synthesis, or open-ended reasoning would test whether the theoretical framework extends beyond structured programming. Future work should also examine complementary scaling laws, including Gustafson’s Law, which models performance under scalable workloads, and Gunther’s Universal Scalability Law, which captures non-monotonic scaling due to coordination and contention overhead [34, 35]. Additionally, our teams consist entirely of homogeneous agents drawn from the same base model. Prior work suggests that heterogeneous teams, composed of agents with different strengths or base models, can yield accuracy benefits [8]. Formalizing how diversity interacts with coordination and scalability, and exploring whether these teams can be optimized with algorithms from heterogeneous load balancing, is a natural extension [45]. Beyond overcoming individual constraints on memory and computation, distributed systems provide an additional benefit: fault tolerance, or the ability to gracefully withstand failures that impact individual nodes [46, 47]. Systems are designed so that local failures like node crashes, unresponsiveness, or corrupted outputs do not halt overall progress.
LLM agents are similarly susceptible to unpredictable faults, including task abandonment, errors, and hallucinations that produce arbitrary or misleading outputs [48, 49]. Distributed systems address faults through redundancy, verification, and consensus mechanisms, which should be explored in future work to improve LLM team robustness. Finally, a related direction concerns the potential application of scheduling and load balancing protocols to improving task assignment [50, 51, 52, 53]. As LLM teams are deployed in large-scale settings, they will encounter not only well-defined “static” problems but also dynamic and partially observable workloads that emerge as tasks are being executed. Ad-hoc negotiation among agents could worsen the coordination overhead and consistency conflicts documented in this paper. Load balancing algorithms from distributed systems offer a principled alternative, potentially improving efficiency by reducing coordination overhead while maintaining utilization across available agents.

5.2 Conclusion

Distributed computing provides a rigorous and practical framework for understanding LLM team behavior, generating testable predictions, explaining observed inefficiencies, and pointing to concrete design principles. The stakes are high in developing such a principled framework for evaluating these multi-agent systems as they are deployed at scale. LLM teams that are poorly coordinated will not merely underperform: they will propagate errors, generate conflicting outputs, and incur substantial costs in terms of compute, energy, and tokens. As these systems scale, these inefficiencies will further compound. We hope that grounding the design of these teams in a formal framework will offer a path towards systems that are not only more capable, but more predictable, efficient, and responsible.

References

[1] Frederick P Brooks Jr. The Mythical Man-Month: Essays on Software Engineering. Pearson Education, 1995.
[2] Alan Turing.
Computing machinery and intelligence. Mind, 59(236):433, 1950.
[3] Joseph Henrich. The secret of our success: How culture is driving human evolution, domesticating our species, and making us smarter. Princeton University Press, 2015.
[4] Michael Tomasello, Malinda Carpenter, Josep Call, Tanya Behne, and Henrike Moll. Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences, 28(5):675–691, 2005.
[5] Umang Bhatt, Sanyam Kapoor, Mihir Upadhyay, Ilia Sucholutsky, Francesco Quinzan, Katherine M Collins, Adrian Weller, Andrew Gordon Wilson, and Muhammad Bilal Zafar. When should we orchestrate multiple agents? arXiv preprint arXiv:2503.13577, 2025.
[6] Cristian Ramon-Cortes, Pol Alvarez, Francesc Lordan, Javier Alvarez, Jorge Ejarque, and Rosa M Badia. A survey on the distributed computing stack. Computer Science Review, 42:100422, 2021.
[7] Maarten Van Steen and Andrew S Tanenbaum. Distributed Systems. distributed-systems.net, 2023.
[8] Yingxuan Yang, Chengrui Qu, Muning Wen, Laixi Shi, Ying Wen, Weinan Zhang, Adam Wierman, and Shangding Gu. Understanding Agent Scaling in LLM-Based Multi-Agent Systems via Diversity. arXiv preprint arXiv:2602.03794, 2026.
[9] Yubin Kim, Ken Gu, Chanwoo Park, Chunjong Park, Samuel Schmidgall, A Ali Heydari, Yao Yan, Zhihan Zhang, Yuchen Zhuang, Mark Malhotra, et al. Towards a science of scaling agent systems. arXiv preprint arXiv:2512.08296, 2025.
[10] Aneesh Pappu, Batu El, Hancheng Cao, Carmelo di Nolfo, Yanchao Sun, Meng Cao, and James Zou. Multi-agent teams hold experts back. arXiv preprint arXiv:2602.01011, 2026.
[11] Natalie Shapira, Chris Wendler, Avery Yen, Gabriele Sarti, Koyena Pal, Olivia Floody, Adam Belfki, Alex Loftus, Aditya Ratan Jannali, Nikhil Prakash, et al. Agents of chaos. arXiv preprint arXiv:2602.20021, 2026.
[12] Giorgio Piatti, Zhijing Jin, Max Kleiman-Weiner, Bernhard Schölkopf, Mrinmaya Sachan, and Rada Mihalcea.
Cooperate or collapse: Emergence of sustainable cooperation in a society of LLM agents. Advances in Neural Information Processing Systems, 37:111715–111759, 2024.
[13] Terry Jingchen Zhang, Wenyuan Jiang, Yongjin Yang, Yinya Huang, Sirui Lu, Bernhard Schölkopf, and Zhijing Jin. Position: Science is Collaborative—LLM for Science Should Be Too.
[14] Chen Qian, Wei Liu, Hongzhang Liu, Nuo Chen, Yufan Dang, Jiahao Li, Cheng Yang, Weize Chen, Yusheng Su, Xin Cong, et al. ChatDev: Communicative agents for software development. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, pages 15174–15186, 2024.
[15] Anthropic. How we built our multi-agent research system. https://www.anthropic.com/engineering/multi-agent-research-system, June 2025. Anthropic Engineering Blog.
[16] Iyad Rahwan, Manuel Cebrian, Nick Obradovich, Josh Bongard, Jean-François Bonnefon, Cynthia Breazeal, Jacob W Crandall, Nicholas A Christakis, Iain D Couzin, Matthew O Jackson, et al. Machine behaviour. Nature, 568(7753):477–486, 2019.
[17] Richard Willis, Jianing Zhao, Yali Du, and Joel Z Leibo. Evaluating collective behaviour of hundreds of LLM agents. arXiv preprint arXiv:2602.16662, 2026.
[18] Kyle Swanson, Wesley Wu, Nash L Bulaong, John E Pak, and James Zou. The Virtual Lab of AI agents designs new SARS-CoV-2 nanobodies. Nature, 646(8085):716–723, 2025.
[19] Daniil A Boiko, Robert MacKnight, Ben Kline, and Gabe Gomes. Autonomous chemical research with large language models. Nature, 624(7992):570–578, 2023.
[20] Aanisha Bhattacharyya, Abhilekh Borah, Yaman Kumar Singla, Rajiv Ratn Shah, Changyou Chen, and Balaji Krishnamurthy. Social Agents: Collective Intelligence Improves LLM Predictions. In The Fourteenth International Conference on Learning Representations.
[21] Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate.
In Forty-first International Conference on Machine Learning, 2024.
[22] Yusen Zhang, Ruoxi Sun, Yanfei Chen, Tomas Pfister, Rui Zhang, and Sercan Arik. Chain of agents: Large language models collaborating on long-context tasks. Advances in Neural Information Processing Systems, 37:132208–132237, 2024.
[23] Junyou Li, Qin Zhang, Yangbin Yu, Qiang Fu, and Deheng Ye. More agents is all you need. arXiv preprint arXiv:2402.05120, 2024.
[24] Ryan Liu, Theodore R Sumers, Ishita Dasgupta, and Thomas L Griffiths. How do large language models navigate conflicts between honesty and helpfulness? arXiv preprint arXiv:2402.07282, 2024.
[25] Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Jinlin Wang, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, et al. MetaGPT: Meta programming for a multi-agent collaborative framework. In The Twelfth International Conference on Learning Representations, 2023.
[26] Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, et al. AgentVerse: Facilitating multi-agent collaboration and exploring emergent behaviors. In The Twelfth International Conference on Learning Representations, 2023.
[27] Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, et al. AutoGen: Enabling next-gen LLM applications via multi-agent conversations. In First Conference on Language Modeling, 2024.
[28] Xudong Guo, Kaixuan Huang, Jiale Liu, Wenhui Fan, Natalia Vélez, Qingyun Wu, Huazheng Wang, Thomas L Griffiths, and Mengdi Wang. Embodied LLM agents learn to cooperate in organized teams. arXiv preprint arXiv:2403.12482, 2024.
[29] Mengkang Hu, Yuhang Zhou, Wendong Fan, Yuzhou Nie, Bowei Xia, Tao Sun, Ziyu Ye, Zhaoxuan Jin, Yingru Li, Qiguang Chen, et al. OWL: Optimized workforce learning for general multi-agent assistance in real-world task automation. arXiv preprint arXiv:2505.23885, 2025.
[30] Haolun Wu, Zhenkun Li, and Lingyao Li. Can LLM Agents Really Debate? A Controlled Study of Multi-Agent Debate in Logical Reasoning. arXiv preprint arXiv:2511.07784, 2025.
[31] Chen Qian, Zihao Xie, Yifei Wang, Wei Liu, Kunlun Zhu, Hanchen Xia, Yufan Dang, Zhuoyun Du, Weize Chen, Cheng Yang, et al. Scaling large language model-based multi-agent collaboration. arXiv preprint arXiv:2406.07155, 2024.
[32] Guibin Zhang, Yanwei Yue, Xiangguo Sun, Guancheng Wan, Miao Yu, Junfeng Fang, Kun Wang, Tianlong Chen, and Dawei Cheng. G-Designer: Architecting multi-agent communication topologies via graph neural networks. arXiv preprint arXiv:2410.11782, 2024.
[33] Gene M Amdahl. Validity of the single processor approach to achieving large scale computing capabilities. In Proceedings of the April 18-20, 1967, Spring Joint Computer Conference, pages 483–485, 1967.
[34] John L Gustafson. Reevaluating Amdahl’s law. Communications of the ACM, 31(5):532–533, 1988.
[35] Neil J Gunther. A general theory of computational scalability based on rational functions. arXiv preprint arXiv:0808.1431, 2008.
[36] Mark D Hill and Michael R Marty. Amdahl’s law in the multicore era. Computer, 41(7):33–38, 2008.
[37] Leslie Lamport. How to make a multiprocessor computer that correctly executes multiprocess programs. IEEE Transactions on Computers, 100(9):690–691, 1979.
[38] Maurice P Herlihy and Jeannette M Wing. Linearizability: A correctness condition for concurrent objects. ACM Transactions on Programming Languages and Systems (TOPLAS), 12(3):463–492, 1990.
[39] Nancy A Lynch. Distributed Algorithms. Elsevier, 1996.
[40] Jeffrey Dean and Sanjay Ghemawat. MapReduce: Simplified data processing on large clusters. Communications of the ACM, 51(1):107–113, 2008.
[41] Matei Zaharia, Andy Konwinski, Anthony D Joseph, Randy H Katz, and Ion Stoica. Improving MapReduce performance in heterogeneous environments. In OSDI, volume 8, page 7, 2008.
[42] Da Wang, Gauri Joshi, and Gregory Wornell.
Using straggler replication to reduce latency in large-scale parallel computing. ACM SIGMETRICS Performance Evaluation Review, 43(3):7–11, 2015.
[43] Ricardo Gonzalez and Mark Horowitz. Energy dissipation in general purpose microprocessors. IEEE Journal of Solid-State Circuits, 31(9):1277–1284, 1996.
[44] Andrew S Cassidy and Andreas G Andreou. Beyond Amdahl’s law: An objective function that links multiprocessor performance gains to delay and energy. IEEE Transactions on Computers, 61(8):1110–1126, 2011.
[45] Shoukat Ali, Tracy D Braun, Howard J Siegel, Anthony A Maciejewski, Noah Beck, Ladislau Bölöni, Muthucumaru Maheswaran, Albert I Reuther, James P Robertson, Mitchell D Theys, and Bin Yao. Characterizing resource allocation heuristics for heterogeneous computing systems. In Advances in Computers, volume 63, pages 91–128. Elsevier, 2005.
[46] Leslie Lamport. The implementation of reliable distributed multiprocess systems. Computer Networks (1976), 2(2):95–114, 1978.
[47] Jean-Claude Laprie. Dependable computing and fault-tolerance. Digest of Papers FTCS-15, 10(2):124, 1985.
[48] Ziwei Xu, Sanjay Jain, and Mohan Kankanhalli. Hallucination is inevitable: An innate limitation of large language models. arXiv preprint arXiv:2401.11817, 2024.
[49] Mert Cemri, Melissa Z Pan, Shuyi Yang, Lakshya A Agrawal, Bhavya Chopra, Rishabh Tiwari, Kurt Keutzer, Aditya Parameswaran, Dan Klein, Kannan Ramchandran, et al. Why do multi-agent LLM systems fail? arXiv preprint arXiv:2503.13657, 2025.
[50] Tracy D Braun, Howard J Siegel, Noah Beck, Ladislau L Bölöni, Muthucumaru Maheswaran, Albert I Reuther, James P Robertson, Mitchell D Theys, Bin Yao, Debra Hensgen, and Richard F Freund. A comparison of eleven static heuristics for mapping a class of independent tasks onto heterogeneous distributed computing systems. Journal of Parallel and Distributed Computing, 61(6):810–837, 2001.
[51] S Selvakumar and C Siva Ram Murthy.
Static task allocation of concurrent programs for distributed computing systems with processor and resource heterogeneity. Parallel Computing, 20(6):835–851, 1994.
[52] Sandeep Sharma, Sarabjit Singh, and Meenakshi Sharma. Performance analysis of load balancing algorithms. World Academy of Science, Engineering and Technology, 38(3):269–272, 2008.
[53] Remzi H Arpaci-Dusseau and Andrea C Arpaci-Dusseau. Operating Systems: Three Easy Pieces, volume 1. Arpaci-Dusseau Books, LLC, Madison, WI, USA, 2018.

A Appendix

A.1 Empirical setup

Agent architecture. Each experiment used a team of N ∈ {1, 2, 3, 4, 5} teammate agents, all backed by the same LLM (GPT-5.2, Gemini-3-Flash, or Claude-Sonnet-4-6), with temperature 0.7 and a maximum of 8,192 output tokens. Each agent maintains a persistent conversation history across all rounds of a run, accumulating task list updates and its own prior replies as context. Each round, agents receive a system-injected task list showing the current status and owner of all 20 subtasks, then generate a free-text response. The orchestrator parses XML action tags from this response and executes them sequentially. Available actions are <claim_task>, <edit_file>, <run_tests>, and <complete_task>. All teammate LLM calls within a round are issued concurrently via asyncio.gather. Agents write code to a shared repository. Concurrent writes are mediated by a file-based pessimistic lock (atomic file creation). If an agent attempts to edit a file already held by another agent, the lock is denied and the failure is returned as feedback. When an agent calls <run_tests />, the orchestrator runs the full pytest suite and returns the output to that agent as a system message within the same round.

Coordination schemes. In Experiment 1, agents were preassigned tasks. Tasks were distributed programmatically before the run via topological sort and round robin.
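The preassignment scheme can be sketched with Python's graphlib: order tasks topologically, then deal them out round-robin. This is a simplified illustration (the paper additionally keeps serial dependency chains on a single agent, which this sketch omits):

```python
from graphlib import TopologicalSorter

def preassign(dependencies: dict[str, set[str]], n_agents: int) -> dict[str, int]:
    # Topological sort guarantees every task appears after its dependencies;
    # round-robin then spreads the ordered tasks across agents.
    order = TopologicalSorter(dependencies).static_order()
    return {task: i % n_agents for i, task in enumerate(order)}

# Hypothetical five-task graph: t1 -> t2 -> t3 is a chain; t4, t5 are free.
deps = {"t2": {"t1"}, "t3": {"t2"}, "t4": set(), "t5": set()}
assignment = preassign(deps, 2)
print(assignment)
```

graphlib raises CycleError on circular dependencies, which doubles as a sanity check on the task graph before any agent starts work.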
Each agent receives an explicit system message listing exactly which task IDs are theirs before the run starts, and is instructed not to claim others’ tasks. Serial dependency chains were assigned to a single agent. All teammate agents completed their tasks fully in parallel each round. In Experiment 2, agents were asked to self-claim tasks autonomously. Every round before acting, each agent receives a system-injected task list showing the current status and ownership of all 20 tasks. Agents cannot claim a task until all its dependencies are marked done, enforced by the orchestrator. Concurrent writes to the shared repo are mediated by a file-based lock (atomic open("x")). A denied lock is returned as feedback to the agent.

Task set-up. Each benchmark consisted of twenty subtasks, each requiring the agent to implement a single Python function in its own file. Subtask specifications were fully pre-written, including precise function signatures, docstrings, and expected behavior; agents were instructed not to decompose tasks further. Each subtask file was validated against a pre-written pytest suite that was available to agents via a <run_tests /> action. We used three thematically distinct benchmarks:
• MathUtils20: A numerical utility library. The dependency chain builds a statistical analysis pipeline: add → safe_add → sum_list → mean → variance → ..., where each function imports and delegates to the previous one.
• DataAnalysis: A sales data pipeline over a shared hard-coded dataset. The sequential chain builds progressively: get_records → remove_invalid → add_revenue → total_revenue → .... Independent tasks implement standalone filtering, grouping, and sorting utilities, each with their own inline sample data.
• SVGRendering: An SVG rendering library. The dependency chain begins with low-level formatting primitives (fmt_num → fmt_coord) and builds up to composite elements.
Independent tasks implement self-contained shape and element generators (make_rect, make_circle, make_line, etc.). We manipulated task parallelizability by varying the dependency structure among the twenty subtasks. In the highly parallel condition (p = 0.9, C = 2), tasks 1–2 form a two-step chain and tasks 3–20 are mutually independent. In the mixed condition (p = 0.5, C = 10), tasks 1–10 form a strict sequential chain and tasks 11–20 are independent. In the highly serial condition (p = 0.2, C = 16), tasks 1–16 form a single sequential chain and tasks 17–20 are independent. The same dependency structure was applied across all three benchmarks. Agents were blocked from claiming a subtask until all its declared dependencies were marked complete by the system.

Figure 6: API latency comparison across team sizes. Panels show per-call API reply latency by team size (all models) for preassigned and decentralized runs.

Evaluation. A run was marked successful if all 20 subtasks reached done status within 60 rounds. Task completion was self-reported by agents via <complete_task>; the system did not independently verify test passage. Each condition (p ∈ {0.2, 0.5, 0.9}, N ∈ {1, 2, 3, 4, 5}) was repeated 5 times. We report success rate (fraction of runs completing all tasks), mean wall-clock time across successful runs, and observed speedup: mean wall-clock time at N = 1 divided by mean wall-clock time at N teammates. Wall-clock time, rounds to completion, tasks completed, lock conflicts, and total input/output tokens were also recorded per run. Importantly for analysis, there was not large variation in API latency by team size (Figure 6).

Exclusion of timed-out runs. Each experimental trial was subject to a maximum of 80 orchestration rounds.
Runs that did not complete all assigned tasks within this limit were excluded from analysis and repeated up to two additional times; only final failed runs are excluded. Models required the following number of retries due to timeouts (only in the decentralized condition): Claude-Sonnet-4-6: 26, Gemini-3-Flash: 5, GPT-5.2: 60.

A.2 Examples of inefficient runs

Agents tended to over-communicate when tasks were mixed or serial to prevent themselves from remaining idle, leading to substantial messaging overhead. For example, in one mixed-task run with Gemini-3-Flash, 45% of replies were wasted messages that yielded no productivity gains, such as:
• R15: I am currently idle and waiting for tasks. Let me know if you reassign any tasks to me.
• R25: I am still idle since there are no unassigned tasks available. I’m ready to help out if you’d like to reassign any of Dev1’s pending tasks to me!
• R48: Great job Dev1 on completing task-9! I am still idle as the remaining tasks are all assigned to Dev1. Let me know if you decide to unassign any of them so I can jump in!
• R52: Great job Dev1 on completing task-10!
• R56: I see that task-15 is now complete! Great job, Dev1.
• R57: Looks like Dev1 is working on the final task (task-20)! I’ll stand by and cheer them on.
A similar pattern emerged with a three-agent Gemini team on a mixed task:
• R5: I’ve completed all my assigned tasks. I’ll continue to stand by while Dev2 and Dev3 work on their remaining tasks. If I should take over any of their pending tasks to speed up the process, please let me know!
• R17: I see that Dev2 and Dev3 are currently working on the final two tasks. I will remain on standby until they finish and the project is complete. Great work, team!
Serial dependency chains introduced a different failure mode: when one agent became a straggler, others would stall and exhibit highly repetitive behavior.
In one run with Claude (team size of 4, decentralized), Dev3 sent the following message for 13 consecutive rounds: "Still blocked on task-5 (Dev3). All code for tasks 6–16 is written and tests pass. Standing by." Coordination failures also arose when agents concurrently claimed the same task and implemented it differently. In round one of a Gemini run (team size of 3), all three agents independently claimed the same task:
• Dev1: I have claimed task-1: Implement fmt_num. This function will format a number for SVG use, rounding to 2 decimal places...
• Dev2: I have claimed task-1: Implement fmt_num. This function will format a number for use in SVG attributes.
• Dev3: I have claimed task-1: Implement fmt_num. This function will format a number for SVG use, rounding to 3 decimal places...