
Paper deep dive

Intrinsic Barriers and Practical Pathways for Human-AI Alignment: An Agreement-Based Complexity Analysis

Aran Nayebi

Year: 2025 · Venue: arXiv preprint · Area: Formal/Theoretical · Type: Theoretical · Embeddings: 130

Abstract

We formalize AI alignment as a multi-objective optimization problem called $\langle M,N,\varepsilon,\delta\rangle$-agreement, in which a set of $N$ agents (including humans) must reach approximate ($\varepsilon$) agreement across $M$ candidate objectives, with probability at least $1-\delta$. Analyzing communication complexity, we prove an information-theoretic lower bound showing that once either $M$ or $N$ is large enough, no amount of computational power or rationality can avoid intrinsic alignment overheads. This establishes rigorous limits to alignment *itself*, not merely to particular methods, clarifying a "No-Free-Lunch" principle: encoding "all human values" is inherently intractable and must be managed through consensus-driven reduction or prioritization of objectives. Complementing this impossibility result, we construct explicit algorithms as achievability certificates for alignment under both unbounded and bounded rationality with noisy communication. Even in these best-case regimes, our bounded-agent and sampling analysis shows that with large task spaces ($D$) and finite samples, *reward hacking is globally inevitable*: rare high-loss states are systematically under-covered, implying scalable oversight must target safety-critical slices rather than uniform coverage. Together, these results identify fundamental complexity barriers -- tasks ($M$), agents ($N$), and state-space size ($D$) -- and offer principles for more scalable human-AI collaboration.

Tags

ai-safety (imported, 100%) · alignment-training (suggested, 80%) · formaltheoretical (suggested, 92%) · theoretical (suggested, 88%)

Links

PDF not stored locally. Use the link above to view on the source site.

Intelligence

Status: succeeded | Model: google/gemini-3.1-flash-lite-preview | Prompt: intel-v1 | Confidence: 93%

Last extracted: 3/12/2026, 5:45:32 PM

Summary

The paper formalizes AI alignment as a multi-objective optimization problem called ⟨M,N,ε,δ⟩-agreement. It proves information-theoretic lower bounds on communication complexity, demonstrating that alignment overheads are intrinsic and unavoidable as the number of tasks (M) or agents (N) increases. The authors provide achievability certificates (algorithms) and show that reward hacking is globally inevitable in large task spaces, suggesting that scalable oversight should prioritize safety-critical slices.

Entities (4)

⟨M,N,ε,δ⟩-agreement · framework · 100%

No-Free-Lunch Principle · principle · 95%

Reward Hacking · phenomenon · 95%

Communication Complexity · field-of-study · 90%

Relation Signals (3)

Communication Complexity establishes Alignment Overheads

confidence 95% · Analyzing communication complexity, we prove an information-theoretic lower bound showing that... no amount of computational power or rationality can avoid intrinsic alignment overheads.

⟨M,N,ε,δ⟩-agreement formalizes AI Alignment

confidence 95% · We formalize AI alignment as a multi-objective optimization problem called ⟨M,N,ε,δ⟩-agreement

Large task spaces cause Reward Hacking

confidence 90% · with large task spaces (D) and finite samples, reward hacking is globally inevitable

Cypher Suggestions (2)

Identify constraints or barriers associated with alignment frameworks. · confidence 90% · unvalidated

MATCH (f:Framework)-[:HAS_BARRIER]->(b:Constraint) RETURN f.name, b.description

Find all frameworks related to AI alignment and their properties. · confidence 85% · unvalidated

MATCH (f:Framework)-[:ADDRESSES]->(p:Problem {name: 'AI Alignment'}) RETURN f.name, f.properties

Full Text

129,277 characters extracted from source content.


Intrinsic Barriers and Practical Pathways for Human-AI Alignment: An Agreement-Based Complexity Analysis

Aran Nayebi

Abstract

We formalize AI alignment as a multi-objective optimization problem called ⟨M,N,ε,δ⟩-agreement, in which a set of N agents (including humans) must reach approximate (ε) agreement across M candidate objectives, with probability at least 1−δ. Analyzing communication complexity, we prove an information-theoretic lower bound showing that once either M or N is large enough, no amount of computational power or rationality can avoid intrinsic alignment overheads. This establishes rigorous limits to alignment itself, not merely to particular methods, clarifying a "No-Free-Lunch" principle: encoding "all human values" is inherently intractable and must be managed through consensus-driven reduction or prioritization of objectives. Complementing this impossibility result, we construct explicit algorithms as achievability certificates for alignment under both unbounded and bounded rationality with noisy communication. Even in these best-case regimes, our bounded-agent and sampling analysis shows that with large task spaces (D) and finite samples, reward hacking is globally inevitable: rare high-loss states are systematically under-covered, implying scalable oversight must target safety-critical slices rather than uniform coverage. Together, these results identify fundamental complexity barriers—tasks (M), agents (N), and state-space size (D)—and offer principles for more scalable human-AI collaboration.

Figure 1: Mapping our ⟨M,N,ε,δ⟩-agreement to current RLHF/DPO/Constitutional AI pipelines.

| Framework | No-CPA | Approx | Multi-M | Multi-N | Hist. | Bnd. | Asym. | Noise | Upper | Lower |
|---|---|---|---|---|---|---|---|---|---|---|
| Aumann (1976) | × | × | × | × | ✓ | × | × | × | × | × |
| Aaronson ⟨ε,δ⟩ (2005) | × | ✓ | × | ✓ | ✓ | ✓ | × | ✓ | ✓ | ✓ |
| Almost CP (Hellman and Samet 2012; Hellman 2013) | ✓ | × | × | ✓ | ✓ | × | × | × | × | × |
| CIRL (Hadfield-Menell et al. 2016) | × | ✓ | × | × | × | ✓ | × | ✓ | ✓ | × |
| Iterated Amplification (Christiano et al. 2018) | ✓ | ✓ | × | × | ✓ | ✓ | × | ✓ | ✓ | × |
| Debate (Irving et al. 2018; Cohen et al. 2023, 2025) | ✓ | × | × | × | ✓ | ✓ | × | ✓ | ✓ | × |
| Tractable Agreement (Collina et al. 2025) | ✓ | ✓ | × | ✓ | ✓ | ✓ | × | × | ✓ | × |
| ⟨M,N,ε,δ⟩-agreement (Ours) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |

Table 1: Positive capabilities (✓) across frameworks. No-CPA: no common-prior assumption (CPA); Approx: allows ε-approximate agreement; Multi-M / Multi-N: supports multiple tasks / many agents; Hist.: handles rich (non-Markovian) histories; Bnd.: works for computationally bounded agents; Asym.: tolerates asymmetric evaluation or interaction costs; Noise: robust to noisy messages or judgments; Upper: provides explicit upper bounds (algorithms)—these can be useful as achievability certificates rather than prescriptions; Lower: proves lower bounds. Our ⟨M,N,ε,δ⟩-agreement satisfies every criterion.

1 Introduction

Rapid progress in artificial intelligence (AI) technologies, increasingly deployed across critical economic and societal domains, underscores the importance of ensuring these systems align with human intentions and values—a challenge known as the value alignment problem (russell2015research; amodei2016concrete; soares2018value). Current alignment research frequently addresses immediate practical concerns, such as preventing jailbreaks in large language models (ji2023ai; guan2024deliberative; hubinger2024sleeper). While essential, these approaches largely focus on specific AI architectures and lack general, theoretically proven guarantees for alignment as systems approach human-level general capability. Existing theoretical frameworks, notably AI Safety via Debate (irving2018ai; brown2023scalable; brown2025avoiding) and Cooperative Inverse Reinforcement Learning (CIRL) (hadfield2016cooperative), have significantly advanced our understanding by providing formal guarantees of alignment in specific scenarios.
Debate effectively leverages interactive proofs to isolate misalignment through zero-sum debate games, though it relies critically on exact verification by a correct and unbiased human judge and computational tractability constraints. CIRL successfully formulates alignment as a cooperative partial-information game reducible to a POMDP, allowing an elegant characterization of optimal joint policies under shared uncertainty (hadfield2016cooperative). However, CIRL implicitly assumes common priors and employs a Markovian assumption, potentially limiting agents' ability to leverage richer historical contexts for alignment. While these methods represent important theoretical progress, their simplifying assumptions restrict broader applicability and leave open questions about alignment scenarios involving diverse knowledge states, richer agent interactions, or more complex objectives. This underscores a crucial theoretical gap: no unified framework currently addresses alignment under minimal assumptions while rigorously identifying intrinsic barriers independent of specific modeling choices.

We propose that prior alignment approaches implicitly rely on underlying conceptual foundations involving iterative reasoning, mutual updating, common knowledge, and convergence under shared frameworks. To bridge this gap, we explicitly formalize these elements within an assumption-light framework called ⟨M,N,ε,δ⟩-agreement (§3), which models alignment as a multi-objective optimization problem involving minimally capable agents and allows us to rigorously analyze alignment in highly general contexts. In ⟨M,N,ε,δ⟩-agreement, a group of agents (including humans) must achieve approximate consensus across multiple objectives with high probability. We show in Table 1 that our framework generalizes previous alignment approaches by relaxing their strong assumptions, thus enabling analysis under a broad set of conditions.
We then rigorously establish intrinsic, method-independent complexity-theoretic barriers to alignment, formalizing a fundamental "No-Free-Lunch" principle in Proposition 1: attempting to encode all human values inevitably incurs alignment overheads, regardless of agent computational power or rationality. Complementing this impossibility result, we also provide explicit algorithms in §5, not as prescriptions, but as achievability certificates for both computationally unbounded and bounded rational agents, alongside closely matching lower bounds in §4. Taken together, our results yield guidelines (§6) clarifying the overall landscape of alignment and providing practical pathways for more scalable human-AI collaboration.

2 Related Work

We summarize the key assumptions and features of previous alignment and agreement frameworks in Table 1, positioning our ⟨M,N,ε,δ⟩-agreement framework within the literature. Where earlier methods typically require common priors, exact agreement, single-objective settings, or Markovian dynamics, our framework drops these assumptions, scales to many tasks and agents, tolerates noisy non-Markovian exchanges, and supports bounded, cost-asymmetric participants. Because it operates at the scalar-reward level that dominates real-world AI-safety work, it can absorb any previous protocol—including two-agent, no-common-prior schemes such as collina2025tractable—and lift them to our more general multi-task, multi-agent, asymmetric, noisy setting, obtaining universal lower bounds and closely matching upper bounds for broad classes of natural protocols.

3 ⟨M,N,ε,δ⟩-Agreement Framework

Setup. For M tasks [M] := {1,…,M} and N agents (humans and AIs) [N] := {1,…,N}, each task j ∈ [M] has a finite state space S_j with |S_j| = D_j, a bounded¹ objective f_j : S_j → [0,1], and probability simplex Δ(S_j) ⊆ ℝ^{D_j}.

¹ Since S_j is finite, note that any f_j : S_j → ℝ can be rescaled to [0,1].
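The setup can be made concrete with a small numerical sketch. Everything below (the state-space sizes, the particular objectives, the priors) is invented for illustration; with a single prior per agent, an agent's round-0 expectation of the bounded objective f_j is a dot product over the D_j states, and two agents ε-agree on a task exactly when those expectations differ by at most ε_j.

```python
import numpy as np

# Toy instance: M = 2 tasks; task j has D_j states, a bounded objective
# f_j : S_j -> [0,1], and one prior per agent on the simplex Delta(S_j).
# All numbers are illustrative, not from the paper.
f = [np.array([0.1, 0.9, 0.4]),           # task 0: D_0 = 3 states
     np.array([0.2, 0.8, 0.5, 0.7])]      # task 1: D_1 = 4 states

prior_i = [np.array([0.5, 0.2, 0.3]), np.array([0.25, 0.25, 0.25, 0.25])]
prior_k = [np.array([0.4, 0.3, 0.3]), np.array([0.10, 0.40, 0.25, 0.25])]

def expectation(prior, f_j):
    """E_P[f_j]: expectation of the objective under one agent's prior."""
    return float(np.dot(prior, f_j))

# At round t = 0 (no messages exchanged, partitions unrefined) the agents'
# expectations on task 0 differ by |E_i - E_k|; they eps-agree on that task
# iff the gap is at most eps_j.
eps = 0.1
gap0 = abs(expectation(prior_i[0], f[0]) - expectation(prior_k[0], f[0]))
agree0 = gap0 <= eps
```

Message exchange and partition refinement then shrink such gaps; the framework's question is how many messages this takes in the worst case.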
Each agent i ∈ [N] begins with an individual prior ℙ^i_j over S_j; we do not assume a common prior (CPA). The prior distance, as introduced by hellman2013almost, is:

$$\nu_j \;=\; \min_{\langle x_1,x_2\rangle \,\in\, \mathbb{P}^i_j \times \mathbb{P}^k_j,\; p \,\in\, \Delta(S_j)} \|x_1 - p\|_1 + \|x_2 - p\|_1 \tag{1}$$

It measures disagreement between any pair of priors; ν_j = 0 iff a true common prior exists. Note by the triangle inequality, ν_j ≥ ‖ℙ^i_j − ℙ^k_j‖₁ if these priors are sets of prior distributions per agent, and this holds with equality in the typical setting of a single prior distribution per agent. This notion of prior distance captures the smallest change in beliefs needed for agents to share a common prior—a measure of how close the agents already are to agreement.

Information flow. At round t ≥ 0, each agent i holds a knowledge partition Π^{i,t}_j = {C^{i,t}_{j,k}}_k of S_j. Each cell C^{i,t}_{j,k} ⊆ S_j is a set of states, so Π^{i,t}_j(s_j) is the set of states in S_j that agent i finds possible at time t, given that the true state of the world is s_j ∈ S_j. Agents broadcast real-valued messages m^{i,t}_j ∈ [0,1] and refine their partitions, Π^{i,t+1}_j ⊆ Π^{i,t}_j, and update their posterior belief distributions τ^{i,t}_j, giving rise to the type profile across the N agents τ^t_j = (τ^{1,t}_j, …, τ^{N,t}_j). All knowledge partitions are common knowledge, ensuring the standard aumann1976agreeing; aumann1999interactive update semantics, but without assuming CPA. Please see Appendix §B for full details.

⟨M,N,ε,δ⟩-Agreement Criterion. Fix tolerances ε_j, δ_j ∈ (0,1). After T rounds, the agents ⟨M,N,ε,δ⟩-agree if:

$$\Pr\Bigl(\bigl|\,\mathbb{E}_{\mathbb{P}^i_j}[f_j \mid \Pi^{i,T}_j] - \mathbb{E}_{\mathbb{P}^k_j}[f_j \mid \Pi^{k,T}_j]\,\bigr| \le \varepsilon_j\Bigr) > 1-\delta_j, \qquad \forall\, i,k \in [N],\; \forall\, j \in [M]. \tag{2}$$

Exact agreement is the special case ε_j = δ_j = 0.

Why this "best-case" model matters.
⟨M,N,ε,δ⟩-agreement subsumes classical exact (and inexact) agreement results and all prior alignment formalisms that we examine in Table 1 (visualized in Figure 1). If alignment is hard even here—with fully rational and computationally unbounded agents, ideal message delivery, and no exogenous adversary—then practical settings with bounded rationality, noisy channels, or strategic misreporting can only be harder (and are therefore settings we want to avoid). §5.2 quantifies exactly how much harder.

Notation. Let D := max_{j∈[M]} D_j when used in the context of upper bounds (and D := min_{j∈[M]} D_j for lower bounds), let ε := min_{j∈[M]} ε_j, and write ℙ and E for probability and expectation. The power-set function is P(·). All omitted proofs and implementation details—e.g. explicit message formats, the spanning-tree protocol for refinement, and the LP procedure ConstructCommonPrior used in §5.2—are provided in the Appendix.

4 Lower Bounds

Below is the best lower bound we can prove for ⟨M,N,ε,δ⟩-agreement, across all possible communication protocols:

Proposition 1 (General Lower Bound). There exist functions f_j, input sets S_j, and prior distributions {ℙ^i_j}_{i∈[N]} for all j ∈ [M], such that any protocol among N agents needs to exchange Ω(M N² log(1/ε)) bits² to achieve ⟨M,N,ε,δ⟩-agreement on {f_j}_{j∈[M]}, for ε bounded below by min_{j∈[M]} ε_j.

² Note, unlike our upper bounds in Theorem 1 and Proposition 4, we use bits in the lower bound in order to apply to all possible protocols (continuous or discrete), regardless of how many bits are encoded per message. The upper bounds have to use messages (rounds) to describe either a continuous protocol (potentially infinitely many bits) as in Theorem 1, or a discrete protocol as in Proposition 4.
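The scaling of the Ω(M N² log(1/ε)) bound can be made tangible with a throwaway evaluation (the constant factor is taken as 1, and the specific M, N, ε values are arbitrary):

```python
import math

def lower_bound_bits(M, N, eps):
    """Illustrative evaluation of the Omega(M * N**2 * log(1/eps))
    communication lower bound, with the implicit constant taken as 1."""
    return M * N**2 * math.log2(1.0 / eps)

# Agents enter quadratically, tasks linearly: doubling N quadruples the
# bound, while doubling M merely doubles it.
base = lower_bound_bits(M=10, N=4, eps=0.01)
more_agents = lower_bound_bits(M=10, N=8, eps=0.01)   # 4x base
more_tasks = lower_bound_bits(M=20, N=4, eps=0.01)    # 2x base
```

This is why the text emphasizes consensus-driven reduction of the objective list M: it is the one factor that is linear and directly under our control.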
Thus, by Proposition 1, there does not exist any ⟨M,N,ε,δ⟩-agreement protocol (deterministic or randomized) that can exchange fewer than Ω(M N² log(1/ε)) bits for all f_j, S_j, and prior distributions {ℙ^i_j}_{i∈[N]}. For if it did, then we would reach a contradiction for the particular construction in Proposition 1. Note that the linear dependence on M can mean an exponential number of bits in the lower bound if we have that many distinct tasks (or agents), e.g. if M = Θ(D) for a large task state space D.

By considering the natural subclass of smooth protocols—where agents' posterior beliefs at ⟨M,N,ε,δ⟩-agreement time must not diverge more than their initial priors, measured in total variation distance—we obtain a strictly improved lower bound:

Proposition 2 ("Smooth" Protocol Lower Bound). Let the number of tasks M ≥ 2, and for each task j ∈ [M], let the task state space size D_j > 2, ε ≤ ε_j, δ_j < ν/2, and 0 < ν ≤ 1. Furthermore, assume the protocol is smooth in that the total variation distance of the agents' posteriors once ⟨M,N,ε,δ⟩-agreement is reached is ≤ cν for c < 1/2 − δ_j/ν. There exist functions f_j, input sets S_j, and prior distributions {ℙ^i_j}_{i∈[N]} with prior distance ν_j ≥ ν, such that any smooth protocol among N agents needs to exchange

$$\Omega\bigl(M\,N^2\,(\nu + \log(1/\varepsilon))\bigr)$$

bits to achieve ⟨M,N,ε,δ⟩-agreement on {f_j}_{j∈[M]}.

Both lower bounds in Propositions 1 and 2 demonstrate that gaining consensus on a small list of M values that we want AI systems to have will be essential for scalable alignment. Finally, we consider a related smoothness condition—namely, the broad class of bounded-Bayes-factor (BBF) protocols—in which each message bit alters message likelihoods by at most a constant multiplicative factor.
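The bounded-Bayes-factor idea can be sketched as a likelihood-ratio check; the message distributions below are invented for illustration, and the function name is hypothetical, not from the paper:

```python
def is_bbf(p_given_s, p_given_s_prime, beta, bits):
    """Sketch of the bounded-Bayes-factor condition for one b-bit message:
    every message m must satisfy beta**-b <= Pr[m|s] / Pr[m|s'] <= beta**b,
    i.e. a single message can only shift likelihoods by a bounded factor."""
    lo, hi = beta ** (-bits), beta ** bits
    return all(lo <= p / q <= hi for p, q in zip(p_given_s, p_given_s_prime))

# A 1-bit message whose likelihood shifts by at most a factor of beta = 2
# between two states passes; a near-deterministic message does not.
ok = is_bbf([0.6, 0.4], [0.4, 0.6], beta=2.0, bits=1)
too_sharp = is_bbf([0.99, 0.01], [0.5, 0.5], beta=2.0, bits=1)
```

Intuitively, `too_sharp` fails because a single message would let a receiver shift posterior mass by a factor of 50 on one state, the kind of abrupt belief jump BBF rules out.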
This assumption naturally captures realistic message-passing behavior, since rational and bounded agents typically update their beliefs incrementally rather than abruptly shifting posterior distributions after receiving a single message. Under this mild condition, we examine a natural setting: agents initially separated by prior distance ν first establish a common prior by satisfying the canonical equalities of hellman2012common (displayed in Algorithm 2 in Appendix §G.3), and subsequently condition on this shared prior to achieve ⟨M,N,ε,δ⟩-agreement. They showed that for tight and connected knowledge partitions (defined below), these canonical equalities are automatically preserved under standard Bayesian updating; hence, our construction needs no further behavioral constraints beyond standard Bayesian rationality. Under these reasonable conditions, our lower bound strengthens to include an extra multiplicative factor of D := min_j D_j, the smallest state-space size across the M tasks. Thus, this refined lower bound more closely matches the general upper-bound results from §5 (cf. Algorithm 1) for this protocol class, within an additive polynomial term in M, N, ε, and δ.

Proposition 3 (Canonical-Equality BBF Protocol Lower Bound). Let M ≥ 2 be the number of tasks and let each task j have a finite state space S_j with size D_j > 2. For every j, let the initial knowledge profiles of the N agents, (Π^{1,0}_j, …, Π^{N,0}_j), be

1. connected: the alternation graph on states is connected, i.e. ⋀_i Π^{i,0}_j = {S_j}, so every two states are linked by an alternating chain of states; and
2. tight: that graph becomes disconnected if any edge is removed (unique chain property).

Assume the message-passing protocol is BBF(β) for some β > 1: every b-bit message m^{i,t}_j satisfies

$$\beta^{-b} \;\le\; \Pr[m^{i,t}_j \mid s_j, \Pi^{i,t-1}_j(s_j)] \,/\, \Pr[m^{i,t}_j \mid s'_j, \Pi^{i,t-1}_j(s'_j)] \;\le\; \beta^{b}.$$
Then there exist payoff functions f_j : S_j → [0,1] and priors {ℙ^i_j}_{i∈[N]} with pairwise distance ν_j ≥ ν, 0 < ν ≤ 1, such that any BBF(β) protocol attaining ⟨M,N,ε,δ⟩-agreement via the canonical equalities of hellman2012common must exchange at least

$$\Omega\bigl(M\,N^2\,[\,D\nu + \log(1/\varepsilon)\,]\bigr), \qquad D := \min_{j\in[M]} D_j,$$

bits in the worst case (implicit constant 1/log β), where the accuracy parameter 0 < ε ≤ ε_j < 1.

5 Convergence of ⟨M,N,ε,δ⟩-Agreement

Given these lower bounds, a natural question is whether ⟨M,N,ε,δ⟩-agreement is achievable at all—especially since the agents begin without a common prior. In this section, we demonstrate that it is indeed achievable, providing explicit algorithms and upper bounds on convergence not only for idealized, unbounded agents but also under realistic constraints such as message discretization and computational boundedness. Here we prove the general upper bound:

Theorem 1. N rational agents will ⟨M,N,ε,δ⟩-agree with overall failure probability δ across M tasks, as defined in (2), after

$$T = O\left(M N^2 D + \frac{M^3 N^7}{\varepsilon^2 \delta^2}\right)$$

messages, where D := max_{j∈[M]} D_j and ε := min_{j∈[M]} ε_j. For an explicit algorithm, see Algorithm 1—we detail the reasoning behind this algorithm below.

Algorithm 1: ⟨M,N,ε,δ⟩-Agreement
Require: N agents with initial partitions {Π^{i,0}_j}_{i=1}^N for each task j ∈ [M]; protocol P; ConstructCommonPrior defined in Algorithm 2; ⟨ε,δ⟩-agreement protocol A
Ensure: Agents reach ⟨ε_j,δ_j⟩-agreement for all M tasks
1: for j ← 1 to M do
2:   t ← 0
3:   repeat
4:     t ← t + 1
5:     for all agents i ∈ [N] do
6:       send m^{i,t}_j via P
7:       Π^{i,t}_j ← RefinePartition(Π^{i,t−1}_j, m^{·,t}_j)
8:     end for
9:     CP_j ← ConstructCommonPrior({Π^{i,t}_j}_{i=1}^N,
10:                                 {τ^{i,t}_j}_{i=1}^N)
11:   until CP_j ≠ Infeasible
12:   Condition all agents on CP_j
13:   RunCPAgreement(A, P, CP_j, f_j, ε_j, δ_j)
14: end for

First, we need to figure out at most how many messages need to be exchanged to guarantee at least one proper refinement. To do so, we will have the N agents communicate using the "spanning-tree" protocol of aaronson2005complexity, which we generalize to the multi-task, no-common-prior setting below:

Lemma 1 (Proper Refinement Message Mapping Lemma). If N agents communicate via a spanning-tree protocol for task j, where g_j ∈ ℕ is the diameter of the chosen spanning trees, then as long as they have not yet reached agreement, it takes O(g_j) = O(N) messages before at least one agent's knowledge partition is properly refined.

Proof. Let G_j be a strongly connected directed graph with vertices v ∈ [N] (one per agent), enabling communication of expectations E^{i,t}_j along edges. (We need the strong connectivity requirement on G_j, since otherwise the agents may fail to reach agreement for trivial reasons if they cannot reach one another.) Without loss of generality, let SP^1_j and SP^2_j be minimum-diameter spanning trees of G_j, each rooted at agent 1, with SP^1_j pointing outward from agent 1 and SP^2_j inward toward agent 1, each of diameter at most g_j. Define orderings O^1_j (resp. O^2_j) of edges in SP^1_j (resp. SP^2_j) so each edge (i → k) appears only after edges (ℓ → i), except when i is the root (or leaf, in inward trees). Construct AgentOrdering_j by cycling through O^1_j, O^2_j, …, where in each round t the tail agent of AgentOrdering_j(t) sends its current expectation. Thus, every block of O(g_j) transmissions forwards each agent's updated message along both trees, reaching all others.
Consequently, disagreement between any agents i and k leads to at least one agent receiving a "surprising" message within these O(g_j) transmissions (the worst case occurs when i and k are on opposite ends of G_j), causing a partition refinement. Thus, without agreement, at least one refinement occurs every O(g_j) messages. Note g_j = O(N) if G_j is a worst-case ring topology; more favorable topologies yield g_j ≪ N, but we assume worst-case generality to subsume any specific cases. ∎

Next, we prove an important (for our purposes) lemma, which extends hellman2013almost's result on almost common priors to our M-function message setting:

Lemma 2 (Common Prior Lemma). If N agents have prior distance ν_j, as defined in (1), for a task j ∈ [M] with task state space S_j, then after O(N² D_j) messages, they will have a common prior CP_j with probability 1 over their type profiles.

Once the agents reach a common prior CP_j, they can then condition on it for the rest of their conversation to reach the desired ⟨ε_j, δ_j⟩-agreement threshold (cf. Step 12 of Algorithm 1). We assume this is O(1) to compute for now, as the agents are computationally unbounded, but we will remove this assumption in §5.2 and instead use Algorithm 2 (Appendix §G.3) for an efficient explicit construction via LP feasibility of posterior belief ratios. Therefore, for each task j, we have now reduced the problem to Aaronson's ⟨ε,δ⟩-agreement framework (aaronson2005complexity), and as he shows, the subsequent steps conditioning on a common prior become unbiased random walks with step size roughly ε_j. With some slight modifications, this allows us to give a worst-case bound on the number of remaining steps in our ⟨M,N,ε,δ⟩-agreement setting:

Lemma 3. For all f_j and CP_j, the N agents will globally ⟨ε_j,δ_j⟩-agree after O(N⁷/(δ_j ε_j)²) additional messages.

Proof.
By aaronson2005complexity, the N agents will pairwise ⟨ε_j,δ_j⟩-agree after O((N g_j²)/(δ_j ε_j)²) messages when they condition on CP_j, where g_j is the diameter of the spanning-tree protocol they use. Furthermore, we will need them to ⟨ε_j, δ_j/N²⟩-agree pairwise so that they globally ⟨ε_j,δ_j⟩-agree. Taking g_j = O(N) for the worst-case ring topology gives the above bound. ∎

By Lemmas 2 and 3, for each j ∈ [M], we need O(N² D_j + N⁷/(δ_j ε_j)²) messages for the N agents to reach ⟨M=1, N, ε_j, δ_j⟩-agreement. Next, select a uniform δ such that δ_j ≤ δ/M for all j ∈ [M]. Therefore, by a union bound, we get the full upper bound in Theorem 1 with total probability ≥ 1 − δ across all M tasks, by maximizing the bound above—taking D := max_{j∈[M]} D_j and ε := min_{j∈[M]} ε_j—and scaling by M.

5.1 Discretized Extension

A natural extension of Theorem 1 is the case where the agents do not communicate their full real-valued expectation (which may require infinitely many bits), but a discretized version of the current expectation, corresponding to whether it is above or below a given threshold (defined below), e.g. "High", "Medium", or "Low" (requiring only 2 bits). We prove convergence in this case, and show that the bound from Theorem 1 remains unchanged in this setting. Discretization is important to show convergence and complexity analysis for, since it most closely matches real-world constraints (e.g. LLM agents use discrete tokens), as opposed to infinite-bit real-valued messages.

Proposition 4 (Discretized Extension). If N agents only communicate their discretized expectations, then they will ⟨M,N,ε,δ⟩-agree with overall failure probability δ across M tasks as defined in (2), after

$$T = O\left(M N^2 D + \frac{M^3 N^7}{\varepsilon^2 \delta^2}\right)$$

messages, where D := max_{j∈[M]} D_j and ε := min_{j∈[M]} ε_j.
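A three-bucket discretizer of the kind described above takes only a couple of lines; the thresholds here are illustrative placeholders, not the ones used in the paper's protocol:

```python
def discretize(expectation, low=1/3, high=2/3):
    """Map a real-valued expectation in [0,1] to one of three buckets.
    Two bits suffice to transmit the bucket; the cut points low/high are
    illustrative, not the paper's actual thresholds."""
    if expectation < low:
        return "Low"
    if expectation < high:
        return "Medium"
    return "High"

# Agents broadcast only the bucket, never the (possibly infinite-precision)
# real-valued expectation itself.
msgs = [discretize(e) for e in (0.12, 0.50, 0.91)]   # ["Low", "Medium", "High"]
```

The point of Proposition 4 is that throwing away all but these two bits per message costs nothing asymptotically: the Theorem 1 bound survives unchanged.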
Our discretized three-bucket protocol itself is general and imposes no BBF constraint—in Appendix §F we show it can be made BBF(3)-compliant with small overhead. Thus, by the lower bound from Proposition 3, for the broad and natural class of canonical-equality BBF protocols, our upper bound in Proposition 4 is tight up to an additive polynomial term after converting from messages to bits.

5.2 Computationally Bounded Agents

Thus far, we analyzed computationally unbounded agents, implicitly assuming O(1) time for constructing and sending messages, finding common priors, and sampling distributions. Even under these idealized conditions, the linear scaling in Theorem 1 becomes significant if the task space D or number of tasks M is exponentially large. However, realistic agents, such as current LLMs, are computationally bounded, and message passing may be noisy, e.g., due to obfuscated intent (barnes2020debateobf). Thus, we now analyze the complexity of N computationally bounded rational agents. Moreover, since querying humans typically costs more than querying AI agents, we differentiate between q humans (each taking T_H time steps) and N − q AI agents (each taking T_AI time steps), encompassing recent multi-step reasoning models (openai_o1; gemini_2_0_flash). Without loss of generality, we assume uniform times within these two groups and analyze complexity based on two basic subroutines:

Requirement 1 (Basic Capabilities of Bounded Agents). We expect the agents to be able to perform:

1. Evaluation: each of the N agents can evaluate f_j(s_j) for any state s_j ∈ S_j, taking T_{eval,a} time steps for a ∈ {H, AI}.
2. Sampling: each of the N agents can sample from the unconditional distribution of any other agent, such as their prior ℙ^i_j, taking T_{sample,a} time steps for a ∈ {H, AI}.

We treat these subroutines as black boxes: agents lack explicit descriptions of f_j and the distributions, learning about them solely through these operations.
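The two black-box capabilities of Requirement 1 can be sketched as a minimal query interface; the class and cost constants below are hypothetical illustrations, not the paper's formalism:

```python
import random

class BoundedAgent:
    """Sketch of Requirement 1's black-box capabilities (hypothetical
    interface). An agent may only learn about f_j and other agents'
    distributions through these two calls, each of which costs time."""

    def __init__(self, t_eval, t_sample):
        self.t_eval, self.t_sample = t_eval, t_sample
        self.time_spent = 0   # accumulated time steps across all queries

    def evaluate(self, f_j, state):
        """Evaluation subroutine: return f_j(s_j), costing T_eval,a steps."""
        self.time_spent += self.t_eval
        return f_j[state]

    def sample(self, other_prior):
        """Sampling subroutine: draw a state from another agent's
        unconditional prior, costing T_sample,a steps."""
        self.time_spent += self.t_sample
        return random.choices(range(len(other_prior)), weights=other_prior)[0]

# Humans are modeled as far costlier to query than AI agents.
human = BoundedAgent(t_eval=50, t_sample=80)
ai = BoundedAgent(t_eval=1, t_sample=2)

f0 = [0.1, 0.9, 0.4]
val = human.evaluate(f0, 1)            # 0.9; human.time_spent is now 50
state = ai.sample([0.5, 0.5, 0.0])     # draws state 0 or 1, never 2
```

Tracking `time_spent` per agent type is exactly what lets the complexity results below be stated in terms of T_{eval,H}, T_{eval,AI}, T_{sample,H}, and T_{sample,AI}.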
Analogous to CIRL (hadfield2016cooperative), this setup captures realistic alignment scenarios where the correctness of a task outcome can be verified without specifying each intermediate step. Consequently, our complexity results are broadly applicable, expressed in terms of T_{eval,H}, T_{eval,AI}, T_{sample,H}, and T_{sample,AI}. These minimal subroutines enable agents to estimate each other's expectations, an essential capability for alignment. Importantly, exact computation is unnecessary; probabilistic evaluation in polynomial time suffices (as will become clear in the proof of Theorem 2, due to the exponential blow-up). The sampling subroutine further serves as a bounded version of the standard assumption that agents know each other's knowledge partitions through shared states (aumann1976agreeing; aumann1999interactive). This corresponds to agents possessing a bounded "theory of mind" (ho2022planning) about one another. Finally, as we can no longer assume O(1) time complexity for constructing a common prior (unlike in the unbounded-agent setting), we introduce an explicit randomized polynomial-time algorithm for doing so with high probability, Algorithm 2. We refer the reader to Appendix §G.3 for proofs related to Algorithm 2—specifically, Lemma 7 (correctness), Lemma 8 (runtime analysis), and Lemma 9 (the inexact-posterior-access setting).

In what follows, define

$$T_{N,q} := q\,T_{\mathrm{sample},H} + (N-q)\,T_{\mathrm{sample},AI} + q\,T_{\mathrm{eval},H} + (N-q)\,T_{\mathrm{eval},AI}.$$

The above considerations lead to the following theorem in the bounded-agent setting:

Theorem 2 (Bounded Agents Eventually Agree). Let there be N computationally bounded rational agents (consisting of 1 ≤ q < N humans and N − q ≥ 1 AI agents), with the capabilities in Requirement 1.
The agents pass messages according to the sampling tree protocol (detailed in Appendix §G.2) with branching factor B ≥ 1/α and added triangular noise of width ≤ 2α, where ε/50 ≤ α ≤ ε/40. Let δ^{find_CP} be the maximal failure probability of the agents to find a task-specific common prior across all M tasks, and let δ^{agree_CP} be the maximal failure probability of the agents to come to ⟨M,N,ε,δ⟩-agreement across all M tasks once they condition on a common prior, where δ^{find_CP} + δ^{agree_CP} < δ. For the N computationally bounded agents to ⟨M,N,ε,δ⟩-agree with total probability ≥ 1 − δ takes time

$$O\left(M\,T_{N,q}\left(B^{N^2 D}\,\frac{\ln\bigl(\delta^{\mathrm{find\_CP}}/(3MN^2D)\bigr)}{\ln(1/\alpha)} \;+\; B^{\frac{9M^2N^7}{(\delta^{\mathrm{agree\_CP}}\varepsilon)^2}}\right)\right).$$

In other words, the first term alone is exponential in the task space size D and number of agents N (and the second term is exponential in the number of tasks M). So if the task space size is in turn exponential in the input size, then this would already be doubly exponential in the input size! We now clarify why we let B be a parameter, and give a concrete example of how bad this exponential dependence can be. Intuitively, we can think of B as a "gauge" on how distinguishable the bounded agents are from "true" unbounded Bayesians, which will allow us to give an explicit desired value for B. Recognizing the issue of computational boundedness of agents in the real world, hanson2003bayesian introduced the notion of Bayesian wannabes: agents who estimate expectations as if they had sufficient computational resources. He showed that disagreement among Bayesian wannabes stems from computational limitations rather than differing information.
Extending this idea, aaronson2005complexity proposed a protocol ensuring that bounded agents appear statistically indistinguishable from true Bayesians to an external referee, effectively a "Bayesian Turing Test" (turing1950computing) for rationality. Thus, $B$ explicitly quantifies this notion of bounded Bayesian indistinguishability. We consider the $M$-function, $N$-agent generalization of this requirement (and without common priors (CPA)), which we call a "total Bayesian wannabe":

Definition 1 (Total Bayesian Wannabe). Let the $N$ agents have the capabilities in Requirement 1. For each task $j \in [M]$, let the transcript of $T$ messages exchanged between the $N$ agents be denoted $\Gamma_j := \langle m_j^1, \ldots, m_j^T \rangle$. Let their initial, task-specific priors be denoted by $\{\mathbb{P}_j^i\}_{i\in[N]}$. Let $\mathcal{B}(s_j)$ be the distribution over message transcripts if the $N$ agents are unbounded Bayesians and the current task state is $s_j \in S_j$. Analogously, let $\mathcal{W}(s_j)$ be the distribution over message transcripts if the $N$ agents are "total Bayesian wannabes" and the current task state is $s_j \in S_j$. Then we require, for all Boolean functions³ $\Phi(s_j, \Gamma_j)$,
$$\left\| \Pr_{\substack{\Gamma_j \sim \mathcal{W}(s_j) \\ s_j \in S_j}}\!\left[\Phi(s_j,\Gamma_j)=1\right] \;-\; \Pr_{\substack{\Gamma_j \sim \mathcal{B}(s_j) \\ s_j \in S_j}}\!\left[\Phi(s_j,\Gamma_j)=1\right] \right\|_1 \;\le\; \rho_j, \quad \forall j \in [M],$$
where $\mathcal{S}_j := \{\mathbb{P}_j^i\}_{i\in[N]}$. We can set $\rho_j \in \mathbb{R}$ as arbitrarily small as preferred, and it will be convenient to only consider a single $\rho := \min_{j\in[M]} \rho_j$ without loss of generality (corresponding to the most "stringent" task $j$).

³Without loss of generality, we assume that the current task state $s_j$ and message transcript $\Gamma_j$ are encoded as binary strings.
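As a toy illustration of Definition 1's condition: taking the supremum over Boolean tests $\Phi$ makes the left-hand side exactly a total variation distance between the two joint (state, transcript) distributions, which is easy to compute for small examples. The sketch below is only illustrative; both distributions and the tolerance are invented:

```python
from fractions import Fraction

# Toy check of Definition 1: the worst case over Boolean tests Phi of
# |Pr_W[Phi = 1] - Pr_B[Phi = 1]| is the total variation distance between
# the two (state, transcript) distributions.  Both distributions below
# are invented for illustration.
B = {('s0', '01'): Fraction(1, 2), ('s0', '10'): Fraction(1, 4),
     ('s1', '11'): Fraction(1, 4)}                      # unbounded Bayesians
W = {('s0', '01'): Fraction(3, 8), ('s0', '10'): Fraction(3, 8),
     ('s1', '11'): Fraction(1, 4)}                      # "total Bayesian wannabes"

def tv(p, q):
    """Total variation distance between two finite distributions."""
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0) - q.get(k, 0)) for k in keys) / 2

rho = Fraction(1, 4)          # tolerance rho_j, an invented illustrative value
assert tv(B, W) <= rho        # the wannabe condition holds for this pair
print(tv(B, W))               # -> 1/8
```

Exact rationals are used so the comparison against $\rho$ is not affected by floating-point rounding.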
We will show in Appendix §G.3 that matching this requirement amounts to picking a large enough value for $B$, giving rise to the following corollary to Theorem 2:

Corollary 1 (Total Bayesian Wannabes Agree). Let there be $N$ total Bayesian wannabes, according to Definition 1 (e.g. consisting of $1 \le q < N$ humans and $N-q \ge 1$ AI agents). Let the branching factor of the sampling tree protocol be the same as before, $B \ge 1/\alpha$, with added triangular noise of width $\le 2\alpha$, where $\varepsilon/50 \le \alpha \le \varepsilon/40$. Let $\delta^{\mathrm{find\_CP}}$ be the maximal failure probability of the agents to find a task-specific common prior across all $M$ tasks, and let $\delta^{\mathrm{agree\_CP}}$ be the maximal failure probability of the agents to come to $\langle M,N,\varepsilon,\delta\rangle$-agreement across all $M$ tasks once they condition on a common prior, where $\delta^{\mathrm{find\_CP}} + \delta^{\mathrm{agree\_CP}} < \delta$. For the $N$ "total Bayesian wannabes" to $\langle M,N,\varepsilon,\delta\rangle$-agree with total probability $\ge 1-\delta$ takes time
$$O\!\left(M\,T_{N,q}\left(B^{N^2D}\,\frac{\ln\!\left(\delta^{\mathrm{find\_CP}}/(3MN^2D)\right)}{\ln(1/\alpha)} \;+\; (11/\alpha)^{\frac{729M^6N^{21}}{(\delta^{\mathrm{agree\_CP}}\varepsilon)^6}}\,\rho^{-\frac{18M^2N^7}{(\delta^{\mathrm{agree\_CP}}\varepsilon)^2}}\right)\right).$$
In other words, this is exponential time in the task space size $D$, and by (18), with a large base in the second term if the "total Bayesian wannabe" threshold $\rho$ is made small. Sharing a common prior amounts to removing the first term, yielding upper bounds that are still exponential in $\varepsilon$ and $\delta$. The proofs of Theorem 2 and Corollary 1 are quite technical (spanning 7 pages), so we defer them to Appendix §G for clarity. The primary takeaway here is that computational boundedness can result in a severely exponential slowdown in the agreement time, especially so if one wants the bounded agents to be statistically indistinguishable, in their interactions with each other, from true unbounded Bayesians.
For example, even for $N=2$ agents with a common prior, a liberal agreement threshold of $\varepsilon = \delta = 1/2$, and a "total Bayesian wannabe" threshold of $\rho = 1/2$ on one task ($M=1$), so that $\alpha \ge 1/100$, the number of subroutine calls (not even total runtime) would be around
$$O\!\left((1100)^{\frac{1528823808}{(1/4)^6}}\,(1/2)^{-\frac{2304}{(1/4)^2}}\right) \approx O\!\left(10^{10^{13.27979}}\right),$$
which would already far exceed the estimated (Munafo_Notable_Numbers, pg. 19) number of atoms in the universe ($\sim 4.8 \times 10^{79}$)! This illustrates the power of the unbounded Bayesians we considered earlier in §4, and why the lower bounds there are worth paying attention to in practice. Finally, note that in general under a sampling tree protocol, this exponential blow-up in task state space size $D$ is unavoidable (e.g. for rare, potentially unsafe, events):

Proposition 5 (Needle-in-a-Haystack Sampling Tree Lower Bound). Let $T_{N,q,\mathrm{sample}} := q\,T_{\mathrm{sample},H} + (N-q)\,T_{\mathrm{sample},AI}$. For any sampling-tree protocol, a single task and a single pair of agents can be instantiated so that the two agents' priors differ by prior distance $\ge \nu$, yet the protocol must pre-compute at least $\Omega(\nu^{-1})$ unconditional samples before the first online message. Consequently, for a particular "needle" prior construction of $\nu = \Theta(e^{-D})$, we get lower bounds that are exponential in the task state space size $D$, needing $\Omega(M\,T_{N,q,\mathrm{sample}}\,e^D)$ wall-clock time.

6 Discussion

Why study a "Bayesian best-case" at all? One may object that real AI systems, and certainly humans, are not perfectly Bayesian reasoners, nor do they interact through ideal, lossless channels. That is precisely the point: our results constitute an ideal benchmark, before we build and deploy capable agents.
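As a sanity check, the doubly-exponential subroutine-call estimate in the worked example above can be reproduced by evaluating the logarithm of the second term of Corollary 1's bound directly. A minimal sketch, using the example's parameter values (with a common prior, $\delta^{\mathrm{agree\_CP}}$ is taken to be $\delta$ itself):

```python
import math

# Evaluate log10(log10(.)) of the second term of Corollary 1's bound for
# the worked example: N = 2, M = 1, eps = delta = rho = 1/2, alpha = 1/100.
M, N = 1, 2
eps = delta_agree = 0.5       # with a common prior, delta_agree can be delta
rho, alpha = 0.5, 1 / 100

base_exp = 729 * M**6 * N**21 / (delta_agree * eps) ** 6      # exponent of 11/alpha
log10_calls = base_exp * math.log10(11 / alpha)               # (11/alpha)^{...}
log10_calls += (18 * M**2 * N**7 / (delta_agree * eps) ** 2) \
               * math.log10(1 / rho)                          # rho^{-...} factor
print(round(math.log10(log10_calls), 5))                      # -> 13.27979
```

Note that the $\rho^{-\cdot}$ factor contributes only about $10^{11097}$ here, which is negligible next to the $(11/\alpha)$ term.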
If alignment is information- or communication-theoretically hard even for computationally unbounded, rational Bayesians exchanging noiseless messages, then relaxing rationality and unboundedness assumptions, adding noise, strategic behavior, or adversarial tampering can exacerbate the difficulty, as we showed in §5.2. Our takeaways for AI safety are:

1. Too many alignment values drives alignment cost. Our matched lower and upper bounds (tight up to polynomial terms in $M, N, \varepsilon, \delta$) demonstrate a "No-Free-Lunch" principle: encoding an exponentially large or high-entropy set of human values forces at least exponential communication, even for unbounded agents. As a result, progress on value alignment / preference modeling should prioritize objective compression, delegation, or progressive disclosure rather than attempting one-shot, full-coverage specification.

2. Reward hacking is globally inevitable. Proposition 5 shows that in large state spaces and with bounded agents, reward hacking arises unavoidably from finite sampling. By Proposition 3, this happens even for unbounded agents in large state spaces who communicate finite bits and update their expectations smoothly. Scalable oversight is therefore not about uniform alignment, but about focusing on the parts of the state space that matter most. The engineering task ahead of us is then the mechanism-design problem of benchmarks and interactive protocols that target these safety-critical slices, via adversarial sampling, objective compression, and per-slice $\langle\varepsilon,\delta\rangle$ budgets, to certify coverage where it counts.

3. Robustness depends on bounded rationality, memory, and theory of mind. Introducing bounded agents or even mild triangular noise can exponentially increase costs when protocols cannot exploit additional structure or restrict the task state space (ball2025don); yet these assumptions were necessary to prove any alignment guarantees at all.
Robust alignment must account for imperfect agents and noisy or obfuscated channels, but as we show in §5.2, real-world agents with these three properties can degrade gracefully rather than catastrophically.

4. Tight bounds inform governance thresholds. For broad and natural protocol classes, our lower bounds are closely matched (up to polynomial terms) by constructive algorithms, enabling principled risk thresholds.

Limitations and future work. Our results justify cautious optimism: alignment is tractable in principle, yet only when we restrain objectives and exploit task structure with care. "No-Free-Lunch" does not preclude lunch; it simply forces wise menu choices. At least three directions stand out:

1. Minimal value sets. Our lower bounds imply that having too many objectives is the surest route to inefficiency. A key open question is which small, consensus-worthy utility families guarantee high-probability safety. In concurrent follow-up work (nayebi2025coresafety), we identify such a small value set for corrigibility as defined by soares2015corrigibility, a problem that had been open for a decade.

2. Structure-exploiting interaction protocols. Design multi-turn agent interaction protocols (beyond single-shot RLHF) and evaluation benchmarks that stress-test the portions of state space most relevant for safety during deployment. This can also be done at the post-training stage, and can augment existing RLHF pipelines.

3. Beyond expectations under noise. (i) Can agreement on specific risk measures cut communication costs relative to full-expectation alignment? We note that agreement on full expectations is not always required; given a task-specific utility function $U_j$, our framework already covers agreement on optimal actions, $\arg\max_a \mathbb{E}[U_j(a)]$, by having $f_j$ be the optimal-action indicator. Our framework also models rare events (Appendix §C). (ii) We found that a bounded derivative in the noise model was crucial for convergence (e.g. uniform noise does not suffice).
Studying richer obfuscation (e.g. learned steganography) will be essential for informing other robust safety thresholds.

Acknowledgements

We thank the Burroughs Wellcome Fund (CASI award) for financial support. We also thank Scott Aaronson, Andreas Haupt, Richard Hu, J. Zico Kolter, and Max Simchowitz for helpful discussions on AI safety in the early stages of this work, and Nina Balcan for feedback on a draft of the manuscript.

Appendix A Notational Preliminaries

We will use asymptotic notation throughout that is standard in computer science, but may not be in other fields. The asymptotic notation is defined as follows:

• $F(n) = O(G(n))$: There exist positive constants $c_1 > 0$ and $c_2 > 0$ such that $F(n) \le c_1 + c_2\,G(n)$, for all $n \ge 0$.
• $F(n) = \tilde{O}(G(n))$: There exist positive constants $c_1, c_2$, and $k > 0$ such that $F(n) \le c_1 + c_2\,G(n)\log^k n$, for all $n \ge 0$.
• $F(n) = \Omega(G(n))$: Similarly, there exist positive constants $c_1$ and $c_2$ such that $F(n) \ge c_1 + c_2\,G(n)$, for all $n \ge 0$.
• $F(n) = \Theta(G(n))$: This indicates that $F(n) = O(G(n))$ and $F(n) = \Omega(G(n))$. In other words, $G(n)$ is a tight bound for $F(n)$.

For notational convenience, let
$$E^{i,t}_j(s_j) := \mathbb{E}_{\mathbb{P}^i_j}\!\left[f_j \mid \Pi_j^{i,t}(s_j)\right],$$
which is the expectation of $f_j$ held by agent $i$ at timestep $t$, conditioned on its knowledge partition by then, starting from its own prior $\mathbb{P}^i_j$. To simplify notation, we drop the argument $s_j \in S_j$.

Appendix B $\langle M,N,\varepsilon,\delta\rangle$-Agreement Setup and Dynamics

The framework we consider for alignment generalizes Aumann agreement (aumann1976agreeing) to probabilistic $\langle\varepsilon,\delta\rangle$-agreement (aaronson2005complexity) (rather than exact agreement), across $M$ agreement objectives and $N$ agents, without the Common Prior Assumption (CPA).
The CPA dates back to at least harsanyi1967 in his seminal work on games with incomplete information. It is a very powerful assumption and is at the heart of Aumann's agreement theorem, which states that two rational Bayesian agents must agree if they share a common prior (aumann1976agreeing). As a further illustration of how powerful the CPA is from a computational complexity standpoint, aaronson2005complexity relaxed the exact agreement requirement to $\langle\varepsilon,\delta\rangle$-agreement and showed that even in this setting, completely independently of how large the state space is, two agents with common priors need only exchange $O(1/(\delta\varepsilon^2))$ messages to agree within $\varepsilon$ with probability at least $1-\delta$ over their prior. However, the CPA is clearly a very strong assumption for human-AI alignment, as we cannot expect that our AIs will always start out with common priors with every human they engage with on every task. In fact, even between two humans this assumption is unlikely! For other aspects of agreement and how they relate more broadly to alignment, we defer to the Discussion (§6) for a more detailed treatment. In short, $\langle M,N,\varepsilon,\delta\rangle$-agreement represents a "best-case" scenario that is general enough to encompass prior approaches to alignment (cf. Table 1), such that if something is inefficient here, then it forms a prescription for what to avoid in practice, in far more suboptimal circumstances. As examples of suboptimality in practice, we will consider computational boundedness and noisy messages in §5.2, to exactly quantify how the bounds can significantly (e.g. exponentially) worsen. Dispensing with the CPA, we now make our $\langle M,N,\varepsilon,\delta\rangle$-agreement framework more precise. For illustration, we consider two agents ($N=2$), Alice (human) and "Rob" (robot), denoted by $A$ and $R$, respectively. Let $\{S_j\}_{j\in[M]}$ be the collection of (not necessarily disjoint) possible task states for each task $j\in[M]$ they are to perform.
We assume each $S_j$ is finite ($|S_j| = D_j \in \mathbb{N}$), as this is a standard assumption, and any physically realistic agent can only encounter a finite number of states anyhow. There are $M$ agreement objectives, $f_1, \ldots, f_M$, that Alice and Rob want to jointly estimate, one for each task:
$$f_j : S_j \to [0,1], \quad \forall j \in [M],$$
to encompass the possibility of changing needs and differing desired $\{\langle\varepsilon_j,\delta_j\rangle\}_{j\in[M]}$-agreement thresholds for those needs (which we will define shortly in (2)), rather than optimizing for a single monolithic task. Note that setting the output of $f_j$ to $[0,1]$ does not reduce generality: since $S_j$ is finite, any function $S_j \to \mathbb{R}$ has a bounded range, so one can always rescale appropriately to land inside $[0,1]$. Alice and Rob have priors $\mathbb{P}^A_j$ and $\mathbb{P}^R_j$, respectively, over task $j$'s state space $S_j$. Let $\nu_j \in [0,1]$ denote the prior distance (as introduced by hellman2013almost) between $\mathbb{P}^A_j$ and $\mathbb{P}^R_j$, defined as the minimal $L^1$ distance between any point $x_j \in X_j = \mathbb{P}^A_j \times \mathbb{P}^R_j$ and any point $p_j \in Y_j = \{(p_j, p_j) \mid p_j \in \Delta(S_j)\}$ (the diagonal), where $\Delta(S_j) \subset \mathbb{R}^{D_j}$ is the probability simplex over the states in $S_j$. Formally,
$$\nu_j = \min_{x_j \in X_j,\, p_j \in Y_j} \|x_j - p_j\|_1, \quad \forall j \in [M]. \qquad (3)$$
It is straightforward to see that there exists a common prior $\mathbb{CP}_j \in Y_j$ between Alice and Rob for task $j$ if and only if the task state space $S_j$ has prior distance $\nu_j = 0$. (Lemma 2 will in fact show that it is possible to find a common prior with high probability, regardless of the initial value $\nu_j$.) For every state $s_j \in S_j$, we identify the subset $E_j \subseteq S_j$ with the event that $s_j \in E_j$. For each task $j \in [M]$, Alice and Rob exchange messages⁴

⁴These messages could be as simple as communicating the agent's current expectation of $f_j$, given (conditioned on) its current knowledge partition.
For now, we assume the messages are not noisy, but we will remove this assumption in §5.2.

from the power set $\mathcal{P}(S_j)$ of the task state space $S_j$, as a sequence $m^1_j, \ldots, m^T_j : \mathcal{P}(S_j) \to [0,1]$. Let $\Pi_j^{i,t}(s_j)$ be the set of states that agent $i \in \{A, R\}$ considers possible in task $j$ after the $t$-th message has been sent, given that the true state of the world for task $j$ is $s_j$. Then by construction, $s_j \in \Pi_j^{i,t}(s_j) \subseteq S_j$, and the set $\{\Pi_j^{i,t}(s_j)\}_{s_j \in S_j}$ forms a partition of $S_j$ (known as a "knowledge partition"). As is standard (aumann1976agreeing; aumann1999interactive; aaronson2005complexity; hellman2013almost), we assume for each task $j$ that the agents know each others' initial knowledge partitions $\{\Pi_j^{i,0}(s_j)\}_{s_j \in S_j}$. The justification for this more broadly (aumann1976agreeing; aumann1999interactive) is that a given state of the world $s_j \in S_j$ includes the agents' knowledge. In our setting, it is quite natural to assume that task states for agents coordinating on a task will encode their knowledge. As a consequence, every agent's subsequent partition is known to every other agent, and every agent knows that this is the case, and so on⁵. This is because, with this assumption, since the agents receive messages from each other, $\Pi_j^{i,t}(s_j) \subseteq \Pi_j^{i,t-1}(s_j)$. In other words, subsequent knowledge partitions $\{\Pi_j^{i,t}(s_j)\}_{s_j \in S_j}$ refine earlier knowledge partitions $\{\Pi_j^{i,t-1}(s_j)\}_{s_j \in S_j}$. (Equivalently, we say that $\{\Pi_j^{i,t-1}(s_j)\}_{s_j \in S_j}$ coarsens $\{\Pi_j^{i,t}(s_j)\}_{s_j \in S_j}$.)

⁵This can be implemented via a "common knowledge" set, $\mathcal{C}(\{\Pi_j^{i,t}\}_{i\in[N]})$, which is the finest common coarsening of the agents' partitions (aumann1976agreeing).
Proper refinement occurs if, for at least one state $s_j \in S_j$, $\Pi_j^{i,t}(s_j) \subsetneq \Pi_j^{i,t-1}(s_j)$, representing a strict increase in knowledge. To illustrate this more concretely: first, Alice computes $m^1_j(\Pi_j^{A,0}(s_j))$ and sends it to Rob. Rob's knowledge partition then becomes refined to the set of states in his original knowledge partition that match Alice's message (since they are now both aware of it):
$$\Pi_j^{R,1}(s_j) = \left\{ s'_j \in \Pi_j^{R,0}(s_j) \;\middle|\; m^1_j\!\left(\Pi_j^{A,0}(s'_j)\right) = m^1_j\!\left(\Pi_j^{A,0}(s_j)\right) \right\},$$
from which Rob computes $m^2_j(\Pi_j^{R,1}(s_j))$ and sends it to Alice. Alice then updates her knowledge partition similarly, to become the set of states in her original partition that match Rob's message:
$$\Pi_j^{A,2}(s_j) = \left\{ s'_j \in \Pi_j^{A,0}(s_j) \;\middle|\; m^2_j\!\left(\Pi_j^{R,1}(s'_j)\right) = m^2_j\!\left(\Pi_j^{R,1}(s_j)\right) \right\},$$
and then she computes and sends the message $m^3_j(\Pi_j^{A,2}(s_j))$ to Rob, etc.

$\langle M,N,\varepsilon,\delta\rangle$-Agreement Criterion: We examine here the number of messages ($T$) required for Alice and Rob to $\langle\varepsilon_j,\delta_j\rangle$-agree across all tasks $j \in [M]$, defined as
$$\Pr\!\left( \left| \mathbb{E}_{\mathbb{P}^A_j}\!\left[f_j \mid \Pi_j^{A,T}(s_j)\right] - \mathbb{E}_{\mathbb{P}^R_j}\!\left[f_j \mid \Pi_j^{R,T}(s_j)\right] \right| \le \varepsilon_j \right) > 1 - \delta_j, \quad \forall j \in [M]. \qquad (4)$$
In other words, they agree within $\varepsilon_j$ with high probability ($> 1-\delta_j$) on the expected value of $f_j$ with respect to their own task-specific priors (not a common prior!), conditioned⁶ on each of their knowledge partitions by time $T$.

⁶For completeness, note that for any subset $E_j \subseteq S_j$ and distribution $\mathbb{P}$, $\mathbb{E}_{\mathbb{P}}[f_j \mid E_j] := \sum_{s_j \in E_j} f(s_j)\,\mathbb{P}[s_j \mid E_j] = \frac{\sum_{s_j \in E_j} f(s_j)\,\mathbb{P}[s_j]}{\sum_{s_j \in E_j} \mathbb{P}[s_j]}$.
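The two-agent dynamics above are easy to simulate on a toy instance. In the sketch below, the 8-state space, the uniform common prior, the initial partitions, and $f$ are all invented for illustration (and, unlike the general framework, a common prior is assumed for simplicity); each message is the sender's current conditional expectation, and each listener refines by keeping only the states consistent with the message received:

```python
from fractions import Fraction

# Toy run of the Alice/Rob expectation-exchange dynamics on 8 states.
S = range(8)
prior = {s: Fraction(1, 8) for s in S}        # common uniform prior (for simplicity)
f = {s: Fraction(s, 7) for s in S}            # f : S -> [0, 1]

def E(cell):
    """Expectation of f conditioned on a knowledge-partition cell."""
    return sum(f[s] * prior[s] for s in cell) / sum(prior[s] for s in cell)

# Initial knowledge partitions: Alice observes s // 4, Rob observes s % 2.
pi_A = {s: frozenset(t for t in S if t // 4 == s // 4) for s in S}
pi_R = {s: frozenset(t for t in S if t % 2 == s % 2) for s in S}

def refine(listener, speaker):
    # Keep only states whose speaker-message (current expectation) matches
    # the message the speaker actually sends at each state.
    return {s: frozenset(t for t in listener[s] if E(speaker[t]) == E(speaker[s]))
            for s in S}

s_true = 5
for _ in range(4):                            # alternate messages until stable
    pi_R = refine(pi_R, pi_A)                 # Alice -> Rob
    pi_A = refine(pi_A, pi_R)                 # Rob -> Alice

print(E(pi_A[s_true]), E(pi_R[s_true]))       # -> 6/7 6/7 (exact agreement)
```

After one round of messages both agents' cells at the true state collapse to $\{5, 7\}$, so their conditional expectations coincide exactly, matching the refinement picture above.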
Extending this framework to $N > 2$ agents (consisting of $1 \le q < N$ humans and $N-q \ge 1$ AI agents) is straightforward: we denote their initial, task-specific priors by $\{\mathbb{P}_j^i\}_{i\in[N]}$, and we have them $\langle\varepsilon_j, \delta_j/N^2\rangle$-agree pairwise so that they globally $\langle\varepsilon_j,\delta_j\rangle$-agree.

Appendix C Modeling Tail Risk

We note in this section that our $\langle M,N,\varepsilon,\delta\rangle$-agreement framework can also model tail risk/rare events. For exposition convenience, we use the "loss" convention (higher = worse), so the Expected Shortfall (ES)/Conditional Value at Risk (CVaR) at level $\tau \in (0,1]$ uses the upper quantile/tail. Specifically, for a catastrophe indicator $f_j := Z \in \{0,1\}$ with $\mathbb{E}[f_j] = \Pr[Z=1] = p$, the ES/CVaR at a given level $\tau$ is
$$ES^\tau(Z) = \frac{1}{\tau}\int_{1-\tau}^{1} q_u(Z)\,du,$$
where for $Z \sim \mathrm{Bernoulli}(p)$, the quantile is defined as
$$q_u(Z) = \begin{cases} 0, & u \le 1-p, \\ 1, & u > 1-p. \end{cases}$$
Therefore,
$$ES^\tau(Z) = \frac{1}{\tau}\int_{1-\tau}^{1} \mathbb{1}\{u > 1-p\}\,du = \frac{1}{\tau}\left|(1-\tau, 1] \cap (1-p, 1]\right| = \frac{1}{\tau}\min\{\tau, p\} = \min\left\{1, \frac{p}{\tau}\right\}.$$
Hence, if two models agree on $p$ within $\varepsilon$, their ES values differ by at most $\varepsilon/\tau$. For a general bounded loss $f_j := X \in [0,1]$, the rockafellar2000optimization representation
$$ES^\tau(X) = \inf_{c\in[0,1]}\left(c + \frac{1}{\tau}\,\mathbb{E}[(X-c)_+]\right)$$
shows that ES is the minimum over expectations of the bounded transforms $\psi_c(x) = c + \frac{1}{\tau}(x-c)_+$. Then
$$ES^\tau(X) = \inf_{c\in[0,1]} \mathbb{E}\!\left[\psi_c(X)\right].$$
Consequently, for two distributions $P, Q$ over $X \in [0,1]$,
$$\left|ES^\tau_P(X) - ES^\tau_Q(X)\right| \le \sup_{c\in[0,1]}\left|\mathbb{E}_P[\psi_c(X)] - \mathbb{E}_Q[\psi_c(X)]\right| \le \frac{1}{\tau}\sup_{c\in[0,1]}\left|\mathbb{E}_P[(X-c)_+] - \mathbb{E}_Q[(X-c)_+]\right|.$$
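The Bernoulli closed form $\min\{1, p/\tau\}$ and the $\varepsilon/\tau$ sensitivity derived above are easy to verify numerically. A minimal sketch; the values of `p`, `tau`, and `eps` are arbitrary illustrative choices, and the quantile integral is approximated by a Riemann sum:

```python
# Numerical check of the Bernoulli ES/CVaR facts above.
def es_bernoulli(p, tau, grid=200_000):
    # ES^tau(Z) = (1/tau) * integral over u in (1-tau, 1] of q_u(Z) du,
    # with q_u(Z) = 1 iff u > 1 - p (midpoint Riemann-sum approximation).
    du = tau / grid
    return sum(du for i in range(grid)
               if 1 - tau + (i + 0.5) * du > 1 - p) / tau

p, tau, eps = 0.002, 0.01, 0.0005
assert abs(es_bernoulli(p, tau) - min(1.0, p / tau)) < 1e-3   # ES = min{1, p/tau}
# Shifting the catastrophe probability p by eps moves ES by at most eps/tau:
assert abs(min(1.0, (p + eps) / tau) - min(1.0, p / tau)) <= eps / tau + 1e-9
print(min(1.0, p / tau))                                      # ~0.2
```

The second assertion is the Lipschitz fact used above: agreement on $p$ within $\varepsilon$ controls ES within $\varepsilon/\tau$.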
Thus, any $\langle M,N,\varepsilon,\delta\rangle$-agreement bounds controlling expectations of bounded functions directly yield corresponding bounds on ES (scaled by $1/\tau$).

Appendix D Proofs of Lower Bounds

D.1 Proof of Proposition 1

Proof. For each task $j \in [M]$, let the input tuple to the $N$ agents be
$$\langle x_{1,j},\, x_{2,j},\, \ldots,\, x_{N,j} \rangle \in S_j,$$
where $S_j$ is defined by
$$S_j := \left\{ \langle x_{1,j}, \ldots, x_{N,j} \rangle \;\middle|\; x_{i,j} \in \{(j-1)2^n + 1, \ldots, j\,2^n\},\; \forall i \in [N] \right\}.$$
Thus, each $x_{i,j}$ is an integer⁷ in an interval of size $2^n$ that starts at $(j-1)\cdot 2^n + 1$. We endow $S_j$ with the uniform common prior $\mathbb{CP}_j$ (which will be necessarily difficult by the counting argument below), and define
$$f_j(x_{1,j}, \ldots, x_{N,j}) = \frac{\sum_{i=1}^N x_{i,j}}{2^{n+1}}.$$
Observe that $\sum_{i=1}^N x_{i,j}$ is minimally $N((j-1)\,2^n + 1)$ and maximally $N j\,2^n$. Hence, the image of $f_j$ is contained within
$$\left[ \frac{N((j-1)2^n + 1)}{2^{n+1}},\; \frac{N j\,2^n}{2^{n+1}} \right] = \left[ \frac{N(j-1)}{2} + \frac{N}{2^{n+1}},\; \frac{N j}{2} \right].$$
Therefore, for $j \ge 1$, each instance $f_j$ is structurally the same "shifted" problem, but crucially non-overlapping for each $j \in \mathbb{N}$. So it suffices to show that for each $j$, each instance individually saturates the $\Omega(N^2 \log(1/\varepsilon_j))$ bit lower bound, which we will do now:

Two-Agent Subproblem for $N$ Agents. Because all agents must $\langle\varepsilon_j,\delta_j\rangle$-agree on the value of $f_j$, it follows in particular that every pair of agents (say $(i,k)$) must have expectations of $f_j$ that differ by at most $\varepsilon_j$ with probability at least $1-\delta_j$.

⁷One could encode them as binary strings of length at least $n + \lceil \log_2 j \rceil$, but in this proof we do not need the explicit binary representation: the integer range sizes themselves suffice to carry out the communication complexity lower bound.
But for any fixed pair $(i,k)$, we can treat $(x_{i,j}, x_{k,j})$ as a two-agent input in which all other coordinates $x_{\ell,j}$ for $\ell \ne i,k$ are "known" from the perspective of these two, or do not affect the difficulty except to shift the sum⁸. Hence, for each $j$ and each pair $(i,k)$, there is a two-agent subproblem. We claim that these two agents alone already face a lower bound of $\Omega(\log(1/\varepsilon_j))$ bits of communication to achieve $\langle\varepsilon_j,\delta_j\rangle$-agreement on $f_j$. Suppose agent $k$ sends only $t < \log_2\!\left(\frac{1-\delta_j}{2\varepsilon_j}\right)$ bits to agent $i$ about its input $x_{k,j}$. Label the $2^t$ possible message sequences by $m = 1, \ldots, 2^t$, with probability $p^m_j$ each. Since $x_{k,j}$ is uniform in an interval of size $2^n$, then conditioned on message $m$, there remain at least $2^n p^m_j$ possible values of $x_{k,j}$. Each unit change in $x_{k,j}$ shifts $f_j$ by $1/2^{n+1}$, so even if agent $i$'s estimate is optimal, the fraction of $x_{k,j}$ values producing $|E^{k,t}_j - E^{i,t}_j| \le \varepsilon_j$ is at most
$$\frac{2^{n+1}\,\varepsilon_j}{2^n\,p^m_j} = \frac{2\,\varepsilon_j}{p^m_j}.$$
Hence, the total probability of agreement (over all messages $m$) is bounded by
$$\sum_{m=1}^{2^t} p^m_j \cdot \frac{2\,\varepsilon_j}{p^m_j} = 2\,\varepsilon_j\,2^t.$$
If $2\,\varepsilon_j\,2^t < 1-\delta_j$, the agents fail to $\langle\varepsilon_j,\delta_j\rangle$-agree. Equivalently, agreement requires $t \ge \log_2\!\left(\frac{1-\delta_j}{2\,\varepsilon_j}\right)$. Since every pair $(i,k)$ needs $\Omega(\log(1/\varepsilon_j))$ bits for each of the $M$ tasks, and there are $\binom{N}{2} = \Theta(N^2)$ pairs, the total cost is
$$\Omega\!\left(M\,N^2\,\log(1/\varepsilon)\right),$$
where $\varepsilon := \min_{j\in[M]} \varepsilon_j$, corresponding to the most "stringent" task $j$. ∎

⁸Equivalently, imagine the other $N-2$ agents are "dummy" participants, and we fix their inputs from the perspective of the $(i,k)$ pair.

D.2 Proof of Proposition 2

Proof.
We divide the $M \ge 2$ tasks into two types of payoff functions $f_j$ (Type I and Type II), covering the first $\lfloor M/2 \rfloor$ tasks and the last $\lceil M/2 \rceil$ tasks, respectively:

Type I Tasks. For the first set of $\lfloor M/2 \rfloor$ tasks, we let the state space be $S_j := \{s_{(j-1)D}, s_{(j-1)D+1}, \ldots, s_{jD-1}\}$. Let the $k$-th element of $S_j$ be denoted $s_{k,j} := s_{(j-1)D+k}$. Next, choose a sign vector $b_j \in \{+1,-1\}^N$ with $N/2$ plus signs, and set each of its elements $b^i_j$ as follows to define the prior distributions, for some $0 < p \le \frac{1}{2} - \frac{\nu}{4}$:
$$\beta := \frac{p}{D-2}, \qquad \mathbb{P}_j^i(s_{0,j}) = \frac{1}{2} - \frac{b^i_j\,\nu}{4} - \frac{(D-2)\beta}{2}, \qquad \mathbb{P}_j^i(s_{1,j}) = \frac{1}{2} + \frac{b^i_j\,\nu}{4} - \frac{(D-2)\beta}{2}, \qquad \mathbb{P}_j^i(s_{k,j}) = \beta \;\; (k \ge 2).$$
If $b^i_j \ne b^k_j$, the agent pair $(i,k)$ has $L^1$ distance $\nu$, and therefore prior distance $\ge \nu$ by definition. Let $T_j(\omega_j)$ denote the number of bits exchanged on task $j$ when the initial world state is $\omega_j \in S_j$ and the agents follow some message-passing protocol. We consider the expectation $\mathbb{E}[T_j] := \mathbb{E}_{\omega_j}[T_j(\omega_j)]$ over all initial world states $\omega_j \in S_j$ with respect to the hard prior distributions specified above. Note that for the purpose of a lower bound, we only need to consider the mismatched agent pairs, since for non-mismatched agents, $T_j \ge 0$ trivially. Given a task index $j$ and a mismatched agent pair $(i,k)$, let $W^t_j$ denote the total variation distance between the agents' posterior distributions, $\tau^{i,t}_j$ and $\tau^{k,t}_j$, at time $t$:
$$W^t_j := \frac{1}{2}\left\|\tau^{i,t}_j - \tau^{k,t}_j\right\|_1, \qquad W^0_j = \nu/2.$$
Define the "good" event for task $j$ as
$$G_j := \left\{\left|E^{i,T}_j - E^{k,T}_j\right| \le \varepsilon\right\},$$
which holds with probability at least $1-\delta_j$ by the $\langle\varepsilon_j,\delta_j\rangle$-agreement condition.
Conditioned on $G_j$, our assumption implies $W^{T_j}_j \le c\nu$ for $c < \frac{1}{2} - \frac{\delta_j}{\nu}$; on $\bar{G_j}$ we only know the trivial upper bound on total variation, $W^{T_j}_j \le 1$. Hence,
$$\mathbb{E}\!\left[W^{T_j}_j\right] < (1-\delta_j)\cdot c\nu + \delta_j \cdot 1 < c\nu + \delta_j < \nu/2.$$
Since, by telescoping,
$$T_j = \sum_{t=0}^{T_j-1} 1 \;\ge\; \sum_{t=0}^{T_j-1}\left(W^t_j - W^{t+1}_j\right) = W^0_j - W^{T_j}_j,$$
we have $\mathbb{E}[T_j] \ge W^0_j - \mathbb{E}[W^{T_j}_j] = \Omega(\nu)$, since $\delta_j < \nu/2$. Hence, each mismatched pair pays $\Omega(\nu)$ bits on task $j$ in expectation. By a pigeonhole argument, for every initially mismatched agent pair, there exists an initial world state $\omega_j \in S_j$ that attains at least that transcript length $T_j$; this is the worst case for task $j$. With $\Theta(N^2)$ such agent pairs, the mismatch cost per task is $\Omega(N^2\nu)$, giving a total worst-case bit cost of $\Omega(M\,N^2\,\nu)$ across the first set of $\lfloor M/2 \rfloor$ tasks.

Type II Tasks. For the remaining set of $\lceil M/2 \rceil$ tasks, we use the hard instance $f_j$ (and its corresponding $N$-tuple state space) from Proposition 1 for each task, with a uniform common prior. Thus, any deterministic transcript that $\langle\varepsilon_j,\delta_j\rangle$-agrees on every task must concatenate $M$ independent sub-transcripts across the Type I and Type II tasks, giving the final
$$\Omega\!\left(M\,N^2\left(\nu + \log\frac{1}{\varepsilon}\right)\right)$$
bit lower bound. ∎

D.3 Proof of Proposition 3

Proof. We split the $M$ tasks exactly as in Proposition 2: Type I comprises the first $\lfloor M/2 \rfloor$ tasks, and Type II the remaining $\lceil M/2 \rceil$ tasks. Only Type I needs modification; Type II reuses Proposition 1 verbatim and costs $\Omega(N^2\log(1/\varepsilon))$ bits per task. We consider the following "uniform-slope" hard priors for the Type I tasks: we use the same state space as in Proposition 2's Type I tasks, namely $S_j := \{s_{(j-1)D}, s_{(j-1)D+1}, \ldots, s_{jD-1}\}$.
For notational convenience, fix such a task $j$ and relabel its $D := D_j$ states $S_j = \{s_0, \ldots, s_{D-1}\}$, and therefore drop the $j$ subscript. The priors are defined as follows for agents $i$ and $k$:
$$\mathbb{P}^i(s_m) = \frac{\lambda^m}{S}, \qquad \mathbb{P}^k(s_m) = \frac{\lambda^{-m}}{S'}, \qquad S = \sum_{q=0}^{D-1}\lambda^q, \qquad S' = \sum_{q=0}^{D-1}\lambda^{-q}, \qquad \lambda := \frac{1+\nu/2}{1-\nu/2} > 1.$$
We now show the prior distance for any agent pair $(i,k)$ is $\ge \nu$. By definition of prior distance, the triangle inequality shows that $\|\mathbb{P}^i - \mathbb{P}^k\|_1$ lower bounds the prior distance (for set-valued priors per agent; for single prior distributions per agent it holds with equality), so it suffices to show that $\|\mathbb{P}^i - \mathbb{P}^k\|_1 \ge \nu$. We have
$$\|\mathbb{P}^i - \mathbb{P}^k\|_1 := 2\!\!\sum_{m:\,\mathbb{P}^i(s_m) > \mathbb{P}^k(s_m)}\!\!\left(\mathbb{P}^i(s_m) - \mathbb{P}^k(s_m)\right) \ge 2\left|\mathbb{P}^i(s_0) - \mathbb{P}^k(s_0)\right| = 2\left|\frac{1}{\sum_{q=0}^{D-1}\lambda^q} - \frac{1}{\sum_{q=0}^{D-1}\lambda^{-q}}\right| = \frac{2(\lambda-1)(\lambda^{D-1}-1)}{\lambda^D - 1} \ge \frac{2(\lambda-1)}{1+\lambda} = \nu,$$
where the last inequality follows from the fact that $\lambda^{D-1} - \lambda \ge 0$ since $\lambda > 1$ and $D \ge 2$, and the last equality follows directly from the definition of $\lambda$.

Canonical chain gap at $t=0$. Connectedness of the initial profile implies that for any two states there exists at least one alternating chain of states (hellman2012common, Proposition 2). In particular, the pair $(s_0, s_{D-1})$ used in the hard prior is linked by a chain $c^\star = (s_0, s_1, \ldots, s_{D-1})$; tightness makes this chain unique and ensures it visits every state exactly once. We let $\tau^{i,t}$ denote agent $i$'s posterior distribution at time $t$, with $\tau^{i,0} \equiv \mathbb{P}^i$. Set
$$L_t := \left|\log\prod_{(s,s')\in c^\star}\frac{\tau^{i,t}(s')}{\tau^{i,t}(s)} \;-\; \log\prod_{(s,s')\in c^\star}\frac{\tau^{k,t}(s')}{\tau^{k,t}(s)}\right|.$$
At $t=0$,
$$\prod_{(s,s')\in c^\star}\frac{\tau^{i,0}(s')}{\tau^{i,0}(s)} = \lambda^{D-1}, \qquad \prod_{(s,s')\in c^\star}\frac{\tau^{k,0}(s')}{\tau^{k,0}(s)} = \lambda^{-(D-1)}.$$
Thus, we have
$$L_0 = 2(D-1)\left|\log\lambda\right| = 2(D-1)\left|\log\!\left(1+\frac{\nu}{2}\right) - \log\!\left(1-\frac{\nu}{2}\right)\right| = 2(D-1)\left|\nu + \frac{\nu^3}{12} + \cdots\right| = \Theta(D\nu), \qquad (5)$$
where the second-to-last equality follows from the standard Taylor expansions of $\log(1+x)$ and $\log(1-x)$, since $\nu/2 \le 1/2 < 1$.

Per-timestep increment. Let the message sent in round $t$ contain $b_t$ bits, and assume the protocol is BBF($\beta$), i.e. for every agent $a$ and all states $s, s'$, the message likelihoods are bounded as
$$\beta^{-b_t} \;\le\; \frac{\Pr[m^{a,t} \mid s, \Pi^{a,t-1}(s)]}{\Pr[m^{a,t} \mid s', \Pi^{a,t-1}(s')]} \;\le\; \beta^{b_t}.$$
Then we will show that the canonical gap satisfies
$$|L_t - L_{t-1}| \;\le\; 2\,b_t\log\beta. \qquad (6)$$
To see this, for convenience denote by $q^a_t(s) := \Pr[m^{a,t} \mid s, \Pi^{a,t-1}(s)]$ the message likelihood at time $t$. Bayes' rule then gives, for any states $s, s'$,
$$\frac{\tau^{a,t}(s')}{\tau^{a,t}(s)} = \frac{\tau^{a,t-1}(s')}{\tau^{a,t-1}(s)} \cdot \frac{q^a_t(s')}{q^a_t(s)}. \qquad (7)$$
Next, define
$$\Xi^a_t = \log\prod_{m=0}^{D-2}\frac{\tau^{a,t}(s_{m+1})}{\tau^{a,t}(s_m)} = \sum_{m=0}^{D-2}\log\frac{\tau^{a,t}(s_{m+1})}{\tau^{a,t}(s_m)}. \qquad (8)$$
We fix the canonical chain $c^\star$ once for convenience; any path would serve, since we will show that the bound in (6) depends only on the BBF($\beta$) likelihood-ratio condition and is independent of the particular chain selected. It follows from (7) and (8) that
$$\Xi^a_t = \sum_{m=0}^{D-2}\left[\log\frac{\tau^{a,t-1}(s_{m+1})}{\tau^{a,t-1}(s_m)} + \log\frac{q^a_t(s_{m+1})}{q^a_t(s_m)}\right] = \underbrace{\sum_{m=0}^{D-2}\log\frac{\tau^{a,t-1}(s_{m+1})}{\tau^{a,t-1}(s_m)}}_{=\,\Xi^a_{t-1}} + \underbrace{\sum_{m=0}^{D-2}\log\frac{q^a_t(s_{m+1})}{q^a_t(s_m)}}_{\text{telescopes}} = \Xi^a_{t-1} + \log\frac{q^a_t(s_{D-1})}{q^a_t(s_0)} = \Xi^a_{t-1} + \Delta^a_t.$$
$$
Because $L_t = |\Xi_t^i - \Xi_t^k|$, the triangle inequality gives
$$|L_t - L_{t-1}| = \bigl|\,|A+B| - |A|\,\bigr| \leq |B|, \qquad A := \Xi_{t-1}^i - \Xi_{t-1}^k, \quad B := \Delta_t^i - \Delta_t^k.$$
Hence $|L_t - L_{t-1}| \leq |\Delta_t^i - \Delta_t^k|$. By BBF($\beta$), each agent alone obeys
$$|\Delta_t^a| = \left|\log\frac{q_t^a(s_{D-1})}{q_t^a(s_0)}\right| \leq b_t \log\beta, \qquad a \in \{i,k\}.$$
Hence, by the triangle inequality,
$$|\Delta_t^i - \Delta_t^k| \leq |\Delta_t^i| + |\Delta_t^k| \leq 2\,b_t \log\beta,$$
giving the desired inequality (6).

Per-task cost. Let $B^{\mathrm{agree}}$ be the total number of bits exchanged by the time the agents agree, and let $B^{\mathrm{cp}}$ be the total bits exchanged when the common prior is reached. Clearly $B^{\mathrm{agree}} \geq B^{\mathrm{cp}}$, so it suffices to lower bound $B^{\mathrm{cp}}$. Set $B_T := \sum_{t=1}^T b_t$, the cumulative bit count up to round $T$. By hellman2012common, $L_T = 0$ once the common prior is attained. Telescoping and (6) then give
$$L_0 - L_T = \sum_{t=1}^T (L_t - L_{t-1}) \leq \sum_{t=1}^T |L_t - L_{t-1}| \leq 2\log\beta \sum_{t=1}^T b_t = 2\log\beta\, B_T.$$
Hence, by (5), any BBF($\beta$) protocol must transmit at least $\Omega(D\nu)$ bits before the priors coincide, and therefore at least that many bits before $\langle M,N,\varepsilon,\delta\rangle$-agreement.

Aggregating costs. There are $\Theta(N^2)$ mismatched pairs and $\lfloor M/2\rfloor$ Type I tasks, so the total Type I cost in bits is $\Omega(M N^2 D\nu)$. Type II contributes $\Omega(M N^2 \log(1/\varepsilon))$ bits, hence the overall lower bound in bits is
$$\Omega\bigl(M N^2 [D\nu + \log(1/\varepsilon)]\bigr),$$
with constant $1/\log\beta$, completing the proof. ∎

Appendix E Proof of Lemma 2

Proof. As before, let $\{\mathbb{P}_j^i\}_{i\in[N]}$ be the priors of the agents. The "type profile" $\tau_j^t$ is the set of the agents' posterior belief distributions over states $s_j \in S_j$ at time $t$.
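As a toy illustration of these definitions (the state space and partitions below are hypothetical, not from the paper), a type profile assigns each agent a posterior that is constant on each cell of its knowledge partition:

```python
# Toy illustration of knowledge partitions and a type profile over a
# task state space S_j = {0, 1, 2, 3}. The partitions are hypothetical.

S = [0, 1, 2, 3]

# Agent 1 cannot distinguish {0,1} or {2,3}; agent 2 distinguishes
# {0} from {1,2,3}.
partitions = {
    1: [{0, 1}, {2, 3}],
    2: [{0}, {1, 2, 3}],
}

def uniform_posterior(cells):
    """One posterior per cell, constant within the cell (states in a
    cell are indistinguishable to the agent); uniform for simplicity."""
    return {frozenset(c): {s: 1.0 / len(c) for s in c} for c in cells}

type_profile = {i: uniform_posterior(p) for i, p in partitions.items()}

# Total size of the type profile = total number of cells across agents.
total_cells = sum(len(p) for p in partitions.values())
assert total_cells == 4
```

The count `total_cells` is exactly the quantity $\sum_i |\Pi_j^{i,t}|$ that the refinement argument tracks.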
Thus, at time 0, $\tau_j^0$ corresponds to the prior distributions over the states in the knowledge partition $\Pi_j^{i,0}$. Since each agent's type profile distribution is constant across the states in its knowledge partition cell $\Pi_j^{i,t}(s_j)$ (they are indistinguishable to the agent, by definition), the total size of the type profile at time $t$ is
$$|\tau_j^t| = \sum_{i=1}^N |\Pi_j^{i,t}|. \qquad (9)$$
We make use of the following result of hellman2012common, restated for our particular setting. Let $C(\{\Pi_j^{i,t}\}_{i\in[N]})$ denote the common knowledge set (finest common coarsening) across the agents' knowledge partitions at time $t$. If the knowledge partitions reach a total size across the $N$ agents satisfying
$$\sum_{i=1}^N |\Pi_j^{i,t}| = (N-1)D_j + \bigl|C(\{\Pi_j^{i,t}\}_{i\in[N]})\bigr|, \qquad (10)$$
then any type profile $\tau_j^t$ over $\{\Pi_j^{i,t}\}_{i\in[N]}$ has a common prior $\mathbb{CP}_j$. Now, note that $|C(\{\Pi_j^{i,t}\}_{i\in[N]})| \leq D_j$, as it forms a partition over the task state space $S_j$, and the partition into singletons $\{s_j\}$, $s_j \in S_j$, has the most components, saturating the upper bound. Therefore, the desired size satisfies $\sum_{i=1}^N |\Pi_j^{i,t}| \leq N D_j$. Now, starting from an initial type profile of size $|\tau_j^0|$, the number of proper refinements needed to reach the desired size $\sum_{i=1}^N |\Pi_j^{i,t}|$ in (10) is at most
$$\sum_{i=1}^N |\Pi_j^{i,t}| - |\tau_j^0| + 1 = O(N D_j).$$
Thus, since trivially $|\tau_j^0| \geq 0$ (for example, $N$ agents that start with maximally unrefined knowledge partitions have $|\tau_j^0| = \sum_{i=1}^N |\Pi_j^{i,0}| = \sum_{i=1}^N 1 = N$), $O(N D_j)$ is the most proper refinements we need to ensure there is a common prior with probability 1, by (10). By Lemma 1, this amounts to $O(N^2 D_j)$ messages in the worst case. ∎

Appendix F Proof of Proposition 4

Proof. The $N$ agents will communicate with the spanning tree protocol (cf.
Lemma 1) for each task $j\in[M]$, but now with discrete, rather than continuous, messages. The discretized protocol is as follows. Let there be a node $F_j$ that is globally accessible to all $N$ agents. This intermediary is allowed its own prior and sees all messages between the agents (but not their inputs). Thus $\Pi_j^{F_j,0}(s_j) = S_j$ for all states $s_j \in S_j$, and $\{\Pi_j^{F_j,t}(s_j)\}_{s_j\in S_j}$ coarsens the knowledge partitions of the $N$ agents at time $t$, so all of the agents can compute $E_j^{F_j,t}$. When agent $i$ wants to send a message to its neighbor agent $k$, agent $i$ sends "High" if $E_j^{i,t} > E_j^{F_j,t} + \varepsilon_j/4$, "Low" if $E_j^{i,t} < E_j^{F_j,t} - \varepsilon_j/4$, and "Medium" otherwise. After agent $i$ sends its message to agent $k$, agent $k$ refines its knowledge partition (and $F_j$ also refines its partition), before agent $k$ sequentially sends its own message, relative to the current $E^{F_j,t+1}$, to the next agent down the spanning tree. This process of proper refinement continues until there is a common prior by Lemma 2, which the $N+1$ agents (including $F_j$) then condition on to reach $\langle M,N+1,\varepsilon,\delta\rangle$-agreement (hence the $N+1$ factor in the first term, ensuring there are enough proper refinements between the $N+1$ agents). This generalizes aaronson2005complexity's discretized protocol to $N>2$ agents (and $M>1$ tasks): between any pair of agents with a common prior, the number of messages needed for them to $\langle\varepsilon_j,\delta_j\rangle$-agree remains unchanged from the full protocol, where each pair leverages the intermediary agent $F_j$.
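The three-bucket rule can be sketched as follows (a minimal sketch; the function name and numeric inputs are illustrative assumptions, not from the paper's code):

```python
# Sketch of the discretized message rule from the proof of Proposition 4.
# e_i is agent i's expectation E_j^{i,t}; e_f is the intermediary F_j's
# expectation E_j^{F_j,t}; eps is the per-task tolerance epsilon_j.

def bucket_message(e_i: float, e_f: float, eps: float) -> str:
    """Map a real-valued expectation to one of three discrete messages,
    with thresholds +- eps/4 around the intermediary's expectation."""
    if e_i > e_f + eps / 4:
        return "High"
    if e_i < e_f - eps / 4:
        return "Low"
    return "Medium"

# Example: with eps = 0.2 the thresholds sit at +-0.05 around e_f.
assert bucket_message(0.60, 0.50, 0.2) == "High"
assert bucket_message(0.46, 0.50, 0.2) == "Medium"
assert bucket_message(0.40, 0.50, 0.2) == "Low"
```

Each message thus carries only two bits of information, which is what lets the discretized protocol match the message count of the continuous one.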
Therefore, by applying the spanning tree construction from Lemma 3, we get the same bound in the discretized case as before of $O\bigl(((N+1)g_j^2)/(\delta_j\varepsilon_j)^2\bigr)$ messages before the $N+1$ agents pairwise $\langle\varepsilon_j,\delta_j\rangle$-agree, and therefore $O\bigl(((N+1)^5 g_j^2)/(\delta_j\varepsilon_j)^2\bigr)$ messages until all $\binom{N+1}{2}$ pairs of agents globally $\langle\varepsilon_j,\delta_j\rangle$-agree, thereby ensuring that the original $N$ agents agree. Following the rest of the proof of our Theorem 1 yields the per-task upper bound of
$$O\left((N+1)^2 D_j + \frac{(N+1)^7}{\varepsilon_j^2\delta_j^2}\right),$$
where we took the worst-case value $g_j = O(N+1)$. Subsuming lower-order terms in the big-$O$ gives the stated upper bound.

BBF(3)-compliant extension. Note that this discretized protocol can be made BBF(3)-compliant via the following simple modification: pick any buffer parameter $0 < \theta \leq 1/3$ (setting $\theta = 1/4$ suffices) and, after the sender has deterministically selected the bucket $B \in \{\text{High},\text{Medium},\text{Low}\}$ according to the thresholds $\pm\varepsilon_j/4$ around $E_j^{F_j,t}$, let the noisy channel transmit the matching 2-bit codeword with probability $1-2\theta$ and each of the two non-matching codewords with probability $\theta$, i.e.
$$\Pr[m_j^{i,t} \mid s_j, \Pi_j^{i,t-1}(s_j)] = \theta + (1-3\theta)\,\mathbb{1}\{m_j^{i,t} = m_j^{i,t,\star}(B)\}.$$
Because every codeword now has probability either $1-2\theta$ or $\theta$ under every state, the message likelihood ratio always lies in $\bigl[\tfrac{\theta}{1-2\theta}, \tfrac{1-2\theta}{\theta}\bigr]$, so the channel is BBF($\beta$) with $\beta = (1-2\theta)/\theta \leq 3$ when $\theta \geq 1/5$. We have the agents communicate first, as usual, for $O((N+1)^2 D_j)$ messages per task until they reach a common prior and condition on it, by Lemma 2. The remaining analysis is how many more messages are needed to reach convergence to $\langle M,N+1,\varepsilon,\delta\rangle$-agreement.
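The randomized channel above can be checked numerically; a minimal sketch, assuming the buffer $\theta = 1/4$ (any $\theta \in [1/5, 1/3]$ behaves the same way):

```python
# Sketch of the BBF(3)-compliant noisy channel: the sender's true
# bucket is transmitted with probability 1 - 2*theta, and each of the
# two wrong codewords with probability theta. theta = 1/4 is one of
# the admissible values named in the text.

THETA = 0.25
BUCKETS = ("High", "Medium", "Low")

def codeword_probs(true_bucket: str) -> dict:
    """Message likelihoods Pr[m | bucket] for the randomized channel."""
    return {b: (1 - 2 * THETA) if b == true_bucket else THETA
            for b in BUCKETS}

# Every codeword has probability 1-2*theta or theta under every bucket,
# so likelihood ratios across buckets are bounded by (1-2*theta)/theta.
p_high = codeword_probs("High")
p_low = codeword_probs("Low")
ratios = [p_high[m] / p_low[m] for m in BUCKETS]
beta = (1 - 2 * THETA) / THETA
assert all(1 / beta <= r <= beta for r in ratios)
assert beta <= 3  # BBF(3)-compliant for theta in [1/5, 1/3]
```

With $\theta = 1/4$ the ratio bound is $\beta = 2$, comfortably inside the BBF(3) requirement.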
A round is called informative when the sender's bucket is an outer one (High or Low) and the channel outputs the matching codeword; this occurs with probability at least $(1-2\theta)\delta_j/2$, and in that event $E_j^{F_j,t}$ moves by at least $\varepsilon_j/4$, so the potential $\Psi_t := \|E^{F_j,t}\|_2^2$ increases by at least $(\varepsilon_j/4)^2$. Hence $\mathbb{E}[\Psi_{t+1} - \Psi_t] \geq (1-2\theta)\,\delta_j(\varepsilon_j/4)^2/2$, giving a per-round drift $\kappa_\theta = \Theta\bigl((1-2\theta)\,\varepsilon_j^2\delta_j\bigr)$. Define the centered process $Z_t := \Psi_t - \kappa_\theta t$; then $\{Z_t\}$ is a martingale with one-step differences bounded by 2, so the Azuma-Hoeffding inequality can be applied directly to $Z_t$. Since $\Psi_t \leq 1$, the additive-drift (optional-stopping) theorem implies $\mathbb{E}[T] \leq O(1/\kappa_\theta) = O\bigl((1-2\theta)^{-1}\varepsilon_j^{-2}\delta_j^{-1}\bigr)$ for one pair of agents. We write $\delta := \max_{j\in[M]}\delta_j$ and, for the high-probability bound, simply divide this budget evenly: each task gets $\delta/M$ and each of its $\binom{N+1}{2} = O(N^2)$ pairs gets $\eta := \delta/\bigl(M\binom{N+1}{2}\bigr)$. Azuma-Hoeffding with this $\eta$ yields $O\bigl(\ln(MN^2/\delta)/\bigl((1-2\theta)\varepsilon_j^2(\delta/M)\bigr)\bigr)$ rounds per pair, so a union bound over all $M\binom{N+1}{2}$ pairs leaves total failure probability $\leq \delta$. Plugging the same $\delta/M$ everywhere into the multi-task bookkeeping of Proposition 4 gives
$$T = O\!\left(M N^2 D + \frac{M^3 N^7 \ln(M N^2/\delta)}{(1-2\theta)\,\varepsilon^2\delta}\right),$$
valid with probability at least $1-\delta$, with $\varepsilon := \min_{j\in[M]}\varepsilon_j$. ∎

Appendix G Proofs of Theorem 2 and Corollary 1

Here we prove both Theorem 2 and Corollary 1.
We do this by generalizing aaronson2005complexity's computational-boundedness treatment from 2 agents to $N$ agents (specifically, $N-q$ AI agents and $q$ humans that have differing, rather than equal, query costs) and $M$ functions (rather than 1), using a message-passing protocol that combines his smoothed and spanning tree protocols, all without the Common Prior Assumption (CPA).

G.1 Message-Passing Protocol

This is the multi-task generalization of aaronson2005complexity's "smoothed standard protocol", additionally extended to the multi-agent setting via a spanning tree the agents use to communicate their messages. Let $b_j = \lceil\log_2(C/\varepsilon_j)\rceil$ be a positive integer, with a specific constant value $C>0$ that we will specify later, in (18). The $N$ computationally bounded agents follow the $\langle M,N,\varepsilon,\delta\rangle$-agreement algorithm (Algorithm 1), passing $O(b_j)$-bit messages according to the following protocol $P$ (for each task $j\in[M]$):

1. Current posterior expectation. The sending agent $i\in[N]$ has at timestep $t-1$ a real value $E_j^{i,t-1}(s_j)\in[0,1]$, which is its conditional expectation of $f_j\in[0,1]$ given its knowledge partition $\Pi_j^{i,t-1}(s_j)$ and the current task state $s_j\in S_j$. (Recall that $E_j^{i,t-1}(s_j) := \mathbb{E}_{\mathbb{P}_j^i}[f_j \mid \Pi_j^{i,t-1}(s_j)]$.) The knowledge partition $\Pi_j^{i,t-1}(s_j)$ is formed after updating this expectation using Bayes' rule, after having received the earlier message at time $t-2$.

2. Draw an integer $r_j$ via a triangular distribution. Agent $i$ picks an integer offset $r_j \in \{-L_j, -L_j+1, \dots, L_j\}$ according to a discretized triangular distribution $\Delta_{\mathrm{tri}}(\cdot\,;\alpha_j)$ that places its mass on the discrete set of values $\{-L_j,\dots,L_j\}$ and has effective width $2\alpha_j$. These discrete offsets $r_j$ ensure that the messages will be discrete as well.
Concretely,
$$\mathbb{P}[r_j = x] = \frac{L_j - |x|}{\sum_{z=-L_j}^{L_j}(L_j - |z|)} = \frac{L_j - |x|}{L_j^2}, \qquad x \in \{-L_j,\dots,L_j\},$$
where $L_j$ is chosen so that $2^{-b_j}L_j = \alpha_j$, to bound the messages as explained in the next step below. Note that the form above is chosen so that the "peak" of the discretized triangular distribution is at $r_j = 0$; in other words, the probability is maximized at offset $r_j = 0$, so with the highest probability no noise is added to the message.

3. Form the message with noise. The agent then sets
$$m_j^t\bigl(\Pi_j^{i,t-1}(s_j)\bigr) = \mathrm{round}\bigl(E_j^{i,t-1}(s_j)\bigr) + 2^{-b_j} r_j,$$
where $\mathrm{round}\bigl(E_j^{i,t-1}(s_j)\bigr)$ denotes rounding $E_j^{i,t-1}(s_j)$ to the nearest multiple of $2^{-b_j}$ (thereby keeping it in the $[0,1]$ interval). This ensures that the message $m_j^t$ is itself a multiple of $2^{-b_j}$ (thereby being encodable in $O(b_j)$ bits), and is offset by at most $\pm\alpha_j$ from $\mathrm{round}\bigl(E_j^{i,t-1}(s_j)\bigr)$, since $|2^{-b_j}r_j| \leq 2^{-b_j}L_j = \alpha_j$ by construction. Hence each $m_j^t \in [-\alpha_j, 1+\alpha_j]$.

4. Broadcast. The message $m_j^t\bigl(\Pi_j^{i,t-1}(s_j)\bigr)$ is then broadcast (either sequentially or in parallel) to the relevant agents via an edge of the two spanning trees $SP_j^1 \cup SP_j^2$, each of diameter $g_j$, just as in Lemma 1; the receiving agents update their knowledge partitions according to Step 1.

G.2 Sampling-Tree Construction and Simulation for Each Task $j\in[M]$

In our framework, each agent logically refines its knowledge partition $\{\Pi_j^{i,t}(s_j)\}_{s_j\in S_j}$ upon seeing a new message (Step 7 of Algorithm 1). However, given that the agents are computationally bounded, while refinement is allowed, the issue is with their belief updating.
By Requirement 1, they have no direct ability to sample from the conditioned posterior distributions $\tau_j^{i,t} = \mathbb{P}_j^i\bigl(\cdot \mid \Pi_j^{i,t}(s_j)\bigr)$ at run time, in order to compute the expectation in Step 1 of the protocol in §G.1. In other words, they cannot simply call "$\mathrm{Sample}\bigl(\mathbb{P}_j^i \mid \Pi_j^{i,t}(s_j)\bigr)$" in a black-box manner. Thus, before any messages are exchanged, each agent constructs a sampling tree $T_j^i$ offline from unconditional samples of the priors $\mathbb{P}_j^i$ (which they are able to do by Subroutine 2 in Requirement 1). The idea is to precompute enough unconditional draws that each new message can be simulated by "walking down" the path in the tree consistent with the current message history (including that new message), rather than enumerating or sampling from the newly refined partition directly. That is the intuition; we now explain in detail how each agent can use sampling trees to simulate this protocol in a computationally bounded manner. This follows aaronson2005complexity's approach of dividing the simulation into two phases, (I) Sampling-Tree Construction (no communication) and (II) Message-by-Message Simulation, but extended here to our multi-task and multi-agent setting.

(I) Sampling-Tree Construction (General $N$-Agent Version). For each task $j\in[M]$, and for each agent $i\in[N]$, agent $i$ builds a sampling tree $T_j^i$ of height $R_j$ and branching factor $B_j$. We fix an ordering of the $O(R_j)$ messages for task $j$ (so that we know which agent is "active" at each level). Formally, let $\mathrm{ActiveAgent}_j: \{0,\dots,R_j-1\}\to[N]$ denote the function specifying which agent sends the message at each round $\ell = 0,\dots,R_j-1$. For instance, since the spanning tree protocol cycles through the $N$ agents in a consistent order, $\mathrm{ActiveAgent}_j(0) = i_0$, $\mathrm{ActiveAgent}_j(1) = i_1$, and so on, up to $R_j$ total rounds.
• Let $\mathrm{root}_j^i$ be the root node of $T_j^i$. We say that the level of the root node is 0, its children are at level 1, grandchildren at level 2, etc., until depth $R_j$. Each node will be labeled by a sample from task $j$'s state space $S_j$, drawn from whichever agent's unconditional prior distribution $\mathbb{P}_j^k(\cdot)$ is active at that level.
• Concretely, for $\ell = 0,\dots,R_j-1$:
(a) Let $a_j := \mathrm{ActiveAgent}_j(\ell)$ be the agent whose distribution we want at level $\ell$ (i.e. the agent who sends the $\ell$-th message).
(b) Every node $v_j$ at level $\ell$ is labeled by some previously chosen sample (if $\ell = 0$ and $v_j$ is the root, we can label it trivially or treat $i$'s perspective via a do-nothing step).
(c) Each of the $B_j$ children $w \in \mathrm{Children}(v_j)$ at level $\ell+1$ is labeled with a fresh i.i.d. sample drawn from $\mathbb{P}_j^{a_j}(\cdot)$, i.e. from the unconditional prior of agent $a_j$.
(d) We continue until level $R_j$ is reached, yielding a total of
$$B_j + B_j^2 + \cdots + B_j^{R_j} = \frac{B_j^{R_j+1}-1}{B_j-1} - 1 = O\bigl(B_j^{R_j}\bigr) \qquad (11)$$
newly drawn states from the unconditional prior distributions $\{\mathbb{P}_j^k\}_{k\in[N]}$ at the appropriate levels.
Thus, node labels alternate among the $N$ agents' unconditional draws, depending on which agent is active at each level $\ell$ (timepoint in the message history) of the eventual message sequence for task $j$. All of this is done offline by each agent, requiring no communication among the agents. Once constructed, each agent $i$ holds $T_j^i$ for personal use.

(II) Message-by-Message Simulation. After building these sampling trees $\{T_j^i\}_{i\in[N]}$ (for each task $j$), the $N$ agents enact the protocol in §G.1 in real time, but whenever an agent $i$ needs to "update its posterior" after receiving a message, it does not sample from $\tau_j^{i,t} = \mathbb{P}_j^i(\cdot \mid \text{new messages})$. Instead, it uses the precomputed nodes in $T_j^i$ as follows:
• Initial estimate.
At time $t=0$, agent $i$ approximates $E_j^{i,0}(s_j)$ (its prior-based expectation of $f_j$) by an empirical average over the $B_j$ children of the root node $\mathrm{root}_j^i \in T_j^i$. This becomes agent $i$'s initial posterior in the offline sense.
• At each round $t = 1,\dots,R_j$:
(a) The agent who is "active" (i.e. about to send the $t$-th message) consults its sampling tree, summing over the relevant subtree corresponding to the newly received messages $m_j^1,\dots,m_j^{t-1}$, so as to approximate $E_j^{i,t-1}(s_j)$.
(b) It picks an integer offset $r_j \in \{-L_j,\dots,L_j\}$ via the discrete triangular distribution (defined in §G.1), and then sends the $t$-th message:
$$m_j^t = \mathrm{round}\bigl(\langle E_j^{i,t-1}(\cdot)\rangle_i\bigr) + 2^{-b_j} r_j.$$
(c) The other agents, upon receiving $m_j^t$ via the spanning tree protocol, update node weights in their own sampling trees (via the $\Delta$-update equations in (aaronson2005complexity, §4.2)). This effectively "follows" the branch in their sampling trees consistent with $m_j^t$, so they approximate $\Pi_j^{i,t}(\cdot)$ without enumerating or sampling from the conditioned distribution.
After all $R_j$ messages for task $j$, the agents will have approximated the ideal protocol in §G.1 with high probability (assuming $B_j$ was chosen large enough; we give an explicit value below in the large-deviation analysis in §G.3).

Lemma 4 (Sampling Tree Time Complexity). For each task $j$, the time complexity of the sampling tree for all $N$ agents is
$$O\bigl(B_j^{R_j} T_{N,q}\bigr). \qquad (12)$$

Proof. As described, we now assume there are $N$ agents, among which $q$ are humans and $N-q$ are AI agents, each potentially taking different times for evaluating $f_j(\cdot)$ or sampling from the priors (Subroutines 1 and 2 of Requirement 1, respectively): $T_{\mathrm{eval},H}$, $T_{\mathrm{eval},AI}$, $T_{\mathrm{sample},H}$, $T_{\mathrm{sample},AI}$.
Specifically, a human agent $i\in H$ uses time $T_{\mathrm{eval},H}$ to evaluate $f_j(s_j)$ for a state $s_j\in S_j$, and time $T_{\mathrm{sample},H}$ to sample unconditionally from another agent's prior distribution $\mathbb{P}_j^k(\cdot)$; an AI agent $i\in AI$ spends $T_{\mathrm{eval},AI}$ and $T_{\mathrm{sample},AI}$ on the same operations.

Sampling overhead. By (11), we make $O\bigl(B_j^{R_j}\bigr)$ unconditional draws in total per agent. Each draw is performed by the agent building the tree, but it might sample from another agent's distribution. Hence the time cost per sample is $T_{\mathrm{sample},H}$ if the sampling agent is human, and $T_{\mathrm{sample},AI}$ if the sampling agent is an AI. Separating the $O\bigl(q B_j^{R_j}\bigr)$ samples by humans from the $O\bigl((N-q) B_j^{R_j}\bigr)$ samples by the AI agents yields
$$O\bigl(q\,B_j^{R_j}\,T_{\mathrm{sample},H} + (N-q)\,B_j^{R_j}\,T_{\mathrm{sample},AI}\bigr).$$

Evaluation overhead. Each sampled state $s_j\in S_j$ may require computing $f_j(s_j)$. Again, if the labeling agent is a human, that cost is $T_{\mathrm{eval},H}$, whereas if an AI agent performs the labeling, it is $T_{\mathrm{eval},AI}$. Consequently,
$$O\bigl(q\,B_j^{R_j}\,T_{\mathrm{eval},H} + (N-q)\,B_j^{R_j}\,T_{\mathrm{eval},AI}\bigr)$$
suffices to bound all function evaluations across all task-$j$ sampling trees $\{T_j^i\}_{i\in[N]}$.

During the actual $R_j$ rounds. Once messages start flowing, each round requires partial sums or lookups in the prebuilt tree $T_j^i$. If agent $i$ is a human in that round, each node update in the subtree will cost either $T_{\mathrm{eval},H}$ or $T_{\mathrm{sample},H}$ (though typically we do not resample, so it may be just function evaluations or small indexing).
Since the total offline overhead still dominates, summing over the $R_j$ rounds still yields an overall $O\bigl(B_j^{R_j}\,(T_{\mathrm{sample},\cdot} + T_{\mathrm{eval},\cdot})\bigr)$ bound, substituting the index in $\{H, AI\}$ according to the category of the agent. Summing the contributions above gives the stated upper bound in (12). ∎

G.3 Runtime Analysis

Recall that the $N$ agents follow the algorithm for $\langle M,N,\varepsilon,\delta\rangle$-agreement (Algorithm 1), which is a "meta-procedure" that takes in any message protocol we have discussed thus far (specifically, the one above). We now want the agents to communicate for enough rounds $R_j$ that they can feasibly construct a common prior consistent with their current beliefs, with high probability (Step 9 of Algorithm 1); we must therefore bound $R_j$. Recall that in the sampling-tree scenario, agents no longer have exact access to each other's posterior distributions, but rather approximate them by offline sampling, so they cannot perform a direct, immediate conditional update (as they could in the unbounded case). Indeed, triangular noise does not strictly mask a surprising message; each agent can still receive an improbable message from its vantage. However, because these posteriors are only approximate, we must invoke large-deviation bounds to ensure that, with high probability, a message deemed $\gamma$-likely by the sender but $\leq\gamma/2$-likely by the neighboring receiver actually appears within some number of messages (in other words, they disagree), forcing a proper refinement of their knowledge partitions. We rely on a probabilistic argument that surprises still occur on a similar timescale as in the exact setting, with high probability, thereby prompting a partition refinement:

Lemma 5 (Number of Messages for One Refinement Under Sampling Trees). Suppose two neighboring agents $i$ and $k$ have not yet reached $\langle M,N,\varepsilon,\delta\rangle$-agreement on a given task $j\in[M]$.
Therefore, they disagree on some message $m_j^*$ as follows:
$$\Pr_{\substack{s_j\sim\mathbb{P}_j^i\\ r_j\sim\Delta_{\mathrm{tri}}(\alpha_j)}}\Bigl[\mathrm{round}\bigl(E_j^{i,t-1}(s_j)\bigr) + 2^{-b_j}r_j = m_j^*\Bigr] \geq \gamma, \qquad (13)$$
namely, agent $i$ regards $m_j^*$ as $\gamma$-likely, and
$$\Pr_{\substack{s_j'\sim\mathbb{P}_j^k\\ r_j'\sim\Delta_{\mathrm{tri}}(\alpha_j)}}\Bigl[\mathrm{round}\bigl(E_j^{k,t-1}(s_j')\bigr) + 2^{-b_j}r_j' = m_j^*\Bigr] \leq \frac{\gamma}{2}, \qquad (14)$$
namely, agent $k$ sees $m_j^*$ with probability at most $\gamma/2$. Then the probability of $m_j^*$ failing to be produced within $T = O\bigl(\ln(\mu_j)/\ln(1/\alpha_j)\bigr)$ consecutive rounds is at most $\mu_j$. In other words, the neighboring agents will disagree (thereby triggering at least one proper refinement, namely for agent $k$) with probability at least $1-\mu_j$ after $T = O\bigl(\ln(\mu_j)/\ln(1/\alpha_j)\bigr)$ consecutive rounds, for $\mu_j > 0$.

Proof. Note that (13) ensures $m_j^*$ has probability at least $\gamma$ from the sender's perspective, so we can bound how quickly $m_j^*$ is produced. Specifically, by aaronson2005complexity, we know that solely from the sender's vantage (and therefore not assuming a common prior), the probability of $m_j^*$ not appearing after $T$ consecutive rounds is at most
$$\lambda_j^{T/2}\,\max\Bigl\{\gamma, \frac{1}{B_j}\Bigr\}, \qquad \lambda_j := 4e\,\alpha_j\ln(B_j).$$
Therefore, it suffices for $T$ to be such that $\lambda_j^{T/2}\max\{\gamma, 1/B_j\} \leq \mu_j$. Isolating $T$ gives
$$\lambda_j^{T/2} \leq \frac{\mu_j}{\max\{\gamma,\,1/B_j\}} \implies \frac{T}{2}\ln\lambda_j \leq \ln\mu_j - \ln\bigl(\max\{\gamma,\,1/B_j\}\bigr).$$
Hence
$$T \leq \frac{2\bigl[\ln\mu_j - \ln\bigl(\max\{\gamma,\,1/B_j\}\bigr)\bigr]}{\ln\lambda_j} = \frac{2\bigl[\ln\mu_j + \ln\bigl(1/\max\{\gamma,\,1/B_j\}\bigr)\bigr]}{\ln(4e) + \ln(1/\alpha_j) + \ln\ln B_j}.$$
As $\gamma$ is a free parameter, we can obtain a cleaner (albeit looser) bound on $T$ by subsuming suitably large constants.
A natural choice for $\gamma$ is $\gamma = \alpha_j$, since the noise added to each message is at most $\alpha_j < 1/40$ (via Step 3 in §G.1) at each round; the maximum additive noise is therefore also the natural threshold for a "surprising" message from the receiver's (agent $k$'s) point of view. We will also choose the branching factor $B_j$ sufficiently large that $1/B_j \leq \alpha_j$. Thus $\max\{\gamma, 1/B_j\} = \alpha_j$. Hence, for $B_j \geq 1/\alpha_j$, trivially $\ln\ln(B_j) > 0$, and we have
$$T \leq \frac{2[\ln\mu_j + \ln(1/\alpha_j)]}{\ln(4e) + \ln(1/\alpha_j) + \ln\ln B_j} \leq \frac{2[\ln\mu_j + \ln(1/\alpha_j)]}{\ln(1/\alpha_j)} = O\!\left(\frac{\ln\mu_j}{\ln(1/\alpha_j)}\right). \qquad \blacksquare$$

We are now finally in a position to bound $R_j$.

Lemma 6 (Common Prior Lemma Under Sampling Trees). Suppose the $N$ agents have not yet reached $\langle M,N,\varepsilon,\delta\rangle$-agreement on a given task $j\in[M]$. They will reach a common prior $\mathbb{CP}_j$ with probability at least $1-\delta_j$ after
$$R_j = O\!\left(N^2 D_j \frac{\ln\bigl(\delta_j/(N^2 D_j)\bigr)}{\ln(1/\alpha_j)}\right)$$
rounds.

Proof. We recall from Lemma 1 that, if every agent had exact access to its posterior distribution, any block of $O(g_j)$ messages in the two-spanning-trees ordering would pass each agent's "current expectation" to every other agent precisely once, guaranteeing that if a large disagreement persists, the receiving agent sees an unlikely message and refines its partition. In the sampled (bounded) setting, each agent $i$ approximates its posterior by building an offline tree of $O\bigl(B_j^{R_j}\bigr)$ unconditional samples, labeling its nodes by unconditional draws from the respective priors $\mathbb{P}_j^k(\cdot)$. Then, when agent $i$ must send a message (nominally its exact posterior), it instead looks up that value via partial averages in its sampling tree. This is done in a manner consistent with message alternation.
Specifically, in each round, the $\mathrm{ActiveAgent}_j$ function from §G.2 ensures that every agent's approximate expectation is routed along $SP_j^1$ and $SP_j^2$ (Step 4 of §G.1) once in every block of $O(g_j)$ messages. Now, in Lemma 5 we assumed the agents were neighbors. In the more general case of $N$ agents, who communicate over spanning trees $SP_j^1 \cup SP_j^2$, each of diameter $g_j$, if there is a "large disagreement" between some agent pair $(i,k)$, then from $(i\to k)$'s or $(k\to i)$'s vantage, the other's message is improbable. In the worst case, the agents are $O(g_j)$ hops apart. Hence, once that message arrives within these $O\bigl(g_j\ln(\mu_j)/\ln(1/\alpha_j)\bigr) = O\bigl(N\ln(\mu_j)/\ln(1/\alpha_j)\bigr)$ transmissions, the probability that the receiving agent still sees it as a surprise and properly refines its partition is $\geq 1 - g_j\mu_j \geq 1 - N\mu_j$, by a union bound over hops. These $O\bigl(N\ln(\mu_j)/\ln(1/\alpha_j)\bigr)$ transmissions constitute one "block" of messages that results in at least one agent's refinement with high probability. Finally, we will need $O(N D_j)$ refinements by Lemma 2 to reach a common prior. Partition the total timeline into $X = O(N D_j)$ disjoint "blocks," each block of $O\bigl(g_j\ln(\mu_j)/\ln(1/\alpha_j)\bigr)$ messages. Let $E_i$ denote the event that at least one refinement occurs in block $i$. By the single-block argument, $\mathbb{P}[E_i] \geq 1 - N\mu_j$. We want $\mathbb{P}\bigl[\bigcap_{i=1}^X E_i\bigr]$, the probability that all $X$ blocks yield at least one refinement. Using a union bound on complements,
$$\Pr\left[\bigcap_{i=1}^X E_i\right] = 1 - \Pr\left[\bigcup_{i=1}^X E_i^c\right] \geq 1 - \sum_{i=1}^X \Pr[E_i^c] \geq 1 - X N \mu_j = 1 - N^2 D_j \mu_j.$$
Thus, after $X \times O\bigl(g_j\ln(\mu_j)/\ln(1/\alpha_j)\bigr) = O\bigl(N^2 D_j\ln(\mu_j)/\ln(1/\alpha_j)\bigr)$ rounds, we have converged to a common prior with probability at least $1 - N^2 D_j\mu_j$. Setting $\delta_j := N^2 D_j\mu_j$ gives the final result. ∎

Now that we have established that we can converge with high probability $\geq 1-\delta_j$ to a state where there is a consistent common prior after $R_j = O\bigl(N^2 D_j\ln\bigl(\delta_j/(N^2 D_j)\bigr)/\ln(1/\alpha_j)\bigr)$ rounds, we introduce an explicit algorithm for efficiently searching the belief states of the agents. This is given by Algorithm 2, which finds a common prior via linear programming (LP) feasibility of the Bayesian posterior consistency ratios across states.

Algorithm 2 ConstructCommonPrior$\bigl(\{\Pi_j^{i,t}, \tau_j^{i,t}\}_{i=1}^N\bigr)$
0: Input: finite state space $S_j$ of size $D_j$; for each agent $i\in[N]$: partition $\Pi_j^{i,t} = \{C_{j,k}^{i,t}\}_k$ and posterior $\tau_j^{i,t}(\cdot \mid C_{j,k}^{i,t})$.
0: Output: either a distribution $p_j$ on $S_j$ Bayes-consistent with all $\tau_j^{i,t}$, or Infeasible.
1: $\Pi_j^* \leftarrow \bigwedge_{i=1}^N \Pi_j^{i,t}$  /* cellwise intersections */
2: Let $\Pi_j^* = \{Z_1,\dots,Z_r\}$, each $Z_\alpha \subseteq S_j$.
3: for $\alpha \leftarrow 1$ to $r$ do
4: create LP variable $p_j(Z_\alpha) \geq 0$, where $p_j(Z_\alpha) := \sum_{s_j\in Z_\alpha} p_j(s_j)$  /* prior mass */
5: end for
6: for all agents $i\in[N]$ and cells $C_{j,k}^{i,t} \in \Pi_j^{i,t}$ do
7: for all $Z_\alpha \subseteq C_{j,k}^{i,t}$ do
8: for all $s_j, s_j' \in Z_\alpha$ do
9: add constraint $\dfrac{p_j(s_j)}{p_j(s_j')} = \dfrac{\tau_j^{i,t}(s_j \mid C_{j,k}^{i,t})}{\tau_j^{i,t}(s_j' \mid C_{j,k}^{i,t})}$
10: end for
11: end for
12: end for
13: add normalization constraint $\sum_{\alpha=1}^r p_j(Z_\alpha) = 1$.
14: solve the resulting LP for feasibility
15: if a non-negative solution $\{p_j(Z_\alpha)\}$ exists then
16: reconstruct $p_j$ on each $s_j \in Z_\alpha$ via the ratio constraints
17: return $p_j$
18: else
19: return Infeasible  /* no single prior fits all posteriors */
20: end if

We first give proofs of correctness (Lemma 7) and runtime (Lemma 8) in the exact setting, before generalizing to the inexact sampling-tree setting (Lemma 9).

Lemma 7 (Correctness of ConstructCommonPrior (Algorithm 2)). Let $S_j$ be a finite state space, and for each agent $i\in[N]$ let $\Pi_j^{i,t}$ be a partition of $S_j$ with corresponding posterior $\tau_j^{i,t}$. Then the algorithm ConstructCommonPrior returns a probability distribution $p_j$ on $S_j$ that is a Bayes-consistent common prior for all $\tau_j^{i,t}$ if and only if such a (possibly different) common prior $\mathbb{CP}_j$ exists in principle.

Proof. ($\Longrightarrow$) Suppose first that there is some common prior $\mathbb{CP}_j$ over $S_j$ satisfying
$$\mathbb{CP}_j\bigl(s_j \mid C_{j,k}^{i,t}\bigr) = \tau_j^{i,t}\bigl(s_j \mid C_{j,k}^{i,t}\bigr) \qquad \forall\,i\in[N],\; s_j\in C_{j,k}^{i,t};$$
that is, the common prior $\mathbb{CP}_j$ agrees with every agent's posterior on every state in each cell $C_{j,k}^{i,t}$. In particular, for $s_j, s_j'\in C_{j,k}^{i,t}$,
$$\frac{\mathbb{CP}_j(s_j)}{\mathbb{CP}_j(s_j')} = \frac{\tau_j^{i,t}\bigl(s_j \mid C_{j,k}^{i,t}\bigr)}{\tau_j^{i,t}\bigl(s_j' \mid C_{j,k}^{i,t}\bigr)}.$$
Since the meet partition $\Pi_j^*$ refines each $\Pi_j^{i,t}$, those ratio constraints must also hold on every meet-cell $Z_\alpha \subseteq C_{j,k}^{i,t}$. Hence, if we introduce variables for $p_j(Z_\alpha)$ and enforce these pairwise ratio constraints (Algorithm 2, Step 6), the distribution $\mathbb{CP}_j$ serves as a feasible solution to that linear system. Furthermore, the normalization (Step 13) is satisfied by $\mathbb{CP}_j$. Consequently, the algorithm will return some valid $p_j$.
($\Longleftarrow$) Conversely, if the algorithm's LP is feasible and yields $\{p_j(Z_\alpha)\}_{\alpha=1}^r$ with $\sum_{\alpha=1}^r p_j(Z_\alpha) = 1$, then for any agent $i$ and cell $C_{j,k}^{i,t}\in\Pi_j^{i,t}$, the meet-cells $Z_\alpha \subseteq C_{j,k}^{i,t}$ satisfy
$$\frac{p_j(s_j)}{p_j(s_j')} = \frac{\tau_j^{i,t}\bigl(s_j \mid C_{j,k}^{i,t}\bigr)}{\tau_j^{i,t}\bigl(s_j' \mid C_{j,k}^{i,t}\bigr)} \qquad \text{for all } s_j, s_j'\in Z_\alpha.$$
Define, for each state $s_j\in Z_\alpha$,
$$p_j(s_j) = p_j(Z_\alpha)\cdot\tau_j^{i,t}\bigl(s_j \mid C_{j,k}^{i,t}\bigr).$$
This $p_j$ is a proper distribution over $S_j$ (since the $p_j(Z_\alpha)$ sum to 1). Moreover,
$$p_j\bigl(s_j \mid C_{j,k}^{i,t}\bigr) = \tau_j^{i,t}\bigl(s_j \mid C_{j,k}^{i,t}\bigr),$$
so $p_j$ is indeed a common prior that matches each agent's posterior distribution.

Remark on the "true" prior vs. the algorithm's output. If there were a "true" common prior $\mathbb{CP}_j$, the distribution $p_j$ returned by the algorithm need not coincide with $\mathbb{CP}_j$ numerically; many distributions can satisfy the same ratio constraints in each cell. But from every agent's viewpoint, $p_j$ and $\mathbb{CP}_j$ induce the same posterior on all cells, and thus are equally valid as a Bayes-consistent common prior. Hence ConstructCommonPrior returns a valid common prior if and only if one exists. ∎

Lemma 8 (Runtime of Finding a Common Prior). Given posteriors for $N$ agents over a state space $S_j$ of size $D_j$, a consistent common prior can be found in $O\bigl(\mathrm{poly}(N D_j^2)\bigr)$ time.

Proof. Each agent's partition $\Pi_j^{i,t}$ divides $S_j$ into at most $D_j$ cells, so the meet partition $\Pi_j^* = \bigwedge_{i=1}^N \Pi_j^{i,t}$ has at most $D_j$ nonempty blocks. Labeling each of the $D_j$ states with its $N$-tuple of cell indices takes $O(N)$ time per state, and grouping (using hashing or sorting) can be done in $O(D_j)$ or $O(D_j\log D_j)$ time. Hence, constructing $\Pi_j^*$ takes $\tilde{O}(N D_j)$ time (the $\tilde{O}$ subsumes polylogarithmic factors).
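The tuple-labeling step of this construction can be sketched as follows (a minimal illustration with hypothetical partitions; `meet_partition` is an illustrative name, not from the paper's code):

```python
from collections import defaultdict

def meet_partition(states, partitions):
    """Compute the meet partition (cellwise intersections).

    Each state is labeled with the tuple of indices of the cells
    containing it, one index per agent; states sharing a label form
    one meet-cell. This is O(N * D) label operations plus grouping.
    """
    groups = defaultdict(set)
    for s in states:
        label = tuple(
            next(k for k, cell in enumerate(p) if s in cell)
            for p in partitions
        )
        groups[label].add(s)
    return list(groups.values())

# Two hypothetical agent partitions over S_j = {0, 1, 2, 3}.
p1 = [{0, 1}, {2, 3}]
p2 = [{0}, {1, 2, 3}]
cells = meet_partition([0, 1, 2, 3], [p1, p2])
# Meet-cells: {0}, {1}, {2, 3}.
assert sorted(map(sorted, cells)) == [[0], [1], [2, 3]]
```

The resulting meet-cells are exactly the LP variables $p_j(Z_\alpha)$ of Algorithm 2.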
Next, for each agent $i \in [N]$ and each cell $C_{j,k}^{i,t}$ in $\Pi_j^{i,t}$, we impose pairwise ratio constraints over states in the same meet-cell contained in $C_{j,k}^{i,t}$. In the worst case, an agent’s cell may contain all $D_j$ states, leading to $\binom{D_j}{2} = \Theta(D_j^2)$ pairwise constraints for that agent. Summing over $N$ agents gives a total of $O(N D_j^2)$ constraints. Standard LP solvers run in time polynomial in the number of variables (at most $D_j$) and constraints ($O(N D_j^2)$), plus the bit-size of the numerical data. Therefore, the overall runtime is $O(\mathrm{poly}(N D_j^2))$. ∎

Having proven the runtime and correctness in the exact case, we now turn to bounding the runtime in the inexact sampling-tree setting. From here on out, we define
$$T_{N,q,\mathrm{sample}} := q\, T_{\mathrm{sample},H} + (N-q)\, T_{\mathrm{sample},AI}. \tag{15}$$

Lemma 9 (Approximate Common Prior via Sampling Trees). Assume that each agent approximates its conditional posterior $\tau_j^{i,t}(\cdot \mid C_{j,k}^{i,t})$ using its offline sampling tree $\hat{T}_j^{\,i}$ (of height $R_j$ and branching factor $B_j$) with per-sample costs $T_{\mathrm{sample},H}$ (for the $q$ human agents) and $T_{\mathrm{sample},AI}$ (for the $N-q$ AI agents). Suppose that for each cell $C_{j,k}^{i,t} \subseteq S_j$ and any two states $s_j, s'_j \in C_{j,k}^{i,t}$, the ratio
$$\frac{\tau_j^{i,t}\big(s_j \mid C_{j,k}^{i,t}\big)}{\tau_j^{i,t}\big(s'_j \mid C_{j,k}^{i,t}\big)}$$
can be estimated via the sampling tree using $S = O\!\big(\tfrac{1}{\varepsilon_j^2} \ln \tfrac{1}{\delta_j}\big)$ samples per ratio, so that each is accurate to within error $\varepsilon_j$ with probability at least $1 - \delta_j$ (we assume $B_j$ is sufficiently large for this, specifically $B_j^{R_j} \gtrsim S$ for $R_j$ given by Lemma 6). Then the overall time complexity of ConstructCommonPrior in the sampling-tree setting is
$$O\!\left(\frac{1}{\varepsilon_j^2} \ln \frac{N D_j^2}{\delta_j} \cdot \mathrm{poly}\big(D_j^2\, T_{N,q,\mathrm{sample}}\big)\right).$$

Proof.
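The per-ratio sample count $S = O\big(\tfrac{1}{\varepsilon_j^2}\ln\tfrac{1}{\delta_j}\big)$ and the union-bound budget used later in the proof can be computed concretely. A small sketch (the constant 2 comes from the two-sided Hoeffding bound; names are illustrative):

```python
import math

def hoeffding_samples(eps, delta):
    """Smallest m with 2*exp(-2*m*eps^2) <= delta: the empirical mean of m
    [0,1]-valued i.i.d. samples is then within eps of its expectation with
    probability at least 1 - delta (two-sided Hoeffding bound)."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def per_ratio_budget(delta_j, N, D_j):
    """Union-bound allocation used in Lemma 9's proof: delta'_j = delta_j/(2 N D_j^2),
    so that all O(N D_j^2) ratio estimates succeed simultaneously."""
    return delta_j / (2.0 * N * D_j ** 2)

# e.g. per-ratio sample count at eps = 0.1 for a shared budget delta_j = 0.01:
m = hoeffding_samples(0.1, per_ratio_budget(0.01, N=4, D_j=10))
```

Note how the state-space size $D_j$ enters only logarithmically through the budget split, as reflected in the $\ln(N D_j^2/\delta_j)$ factor of the lemma’s bound.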
For each cell $C_{j,k}^{i,t}$ and each state $s_j \in C_{j,k}^{i,t}$, the agent uses its sampling tree $\hat{T}_j^{\,i}$ to compute an empirical estimate $\hat{\tau}_j^{i,t}(s_j \mid C_{j,k}^{i,t})$ of $\tau_j^{i,t}(s_j \mid C_{j,k}^{i,t})$ by averaging over the leaves of the appropriate subtree. (Recall that the tree is constructed offline by drawing $O(B_j^{R_j})$ i.i.d. samples from the unconditional prior of the active agent at each level, defined by the $\mathrm{ActiveAgent}_j$ function.) Thus, for a fixed agent $i$, cell $C_{j,k}^{i,t} \subseteq S_j$, and state $s_j \in C_{j,k}^{i,t}$, define the i.i.d. indicator random variables $X_1, \dots, X_m$ by
$$X_\ell = \begin{cases} 1, & \text{if the } \ell\text{-th sampled state equals } s_j, \\ 0, & \text{otherwise.} \end{cases}$$
(Each sample is drawn from the leaves of $\hat{T}_j^{\,i}$ restricted to the cell $C_{j,k}^{i,t}$.) Thus, each $X_\ell$ indicates whether the $\ell$-th sample drawn via the sampling tree lands on the state $s_j$ for agent $i$. The empirical average
$$\hat{\tau}_j^{i,t}\big(s_j \mid C_{j,k}^{i,t}\big) = \frac{1}{m} \sum_{\ell=1}^{m} X_\ell$$
serves as an estimate for the true probability $\tau_j^{i,t}(s_j \mid C_{j,k}^{i,t})$. By the “textbook” additive Chernoff–Hoeffding bound, if $m = O\!\big(\tfrac{1}{\varepsilon_j^2} \ln \tfrac{1}{\delta'_j}\big)$, then
$$\Pr\!\left[\, \big|\hat{\tau}_j^{i,t}(s_j \mid C_{j,k}^{i,t}) - \tau_j^{i,t}(s_j \mid C_{j,k}^{i,t})\big| \geq \varepsilon_j \right] \leq \delta'_j.$$
Similarly for $s'_j$; hence the ratio $\hat{\tau}_j^{i,t}(s_j \mid C_{j,k}^{i,t}) / \hat{\tau}_j^{i,t}(s'_j \mid C_{j,k}^{i,t})$ differs by at most $\pm \varepsilon_j$ with probability $\geq 1 - 2\delta'_j$, as it is computed from two such independent estimates of $\tau$. Taking $\delta'_j = \delta_j / (2 N D_j^2)$ and union-bounding over all $O(N D_j^2)$ ratios yields a net success probability $\geq 1 - \delta_j$.
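The indicator-averaging estimator $\hat{\tau}$ above can be sketched as plain Monte Carlo. Here `draw_state` stands in for a draw from the sampling tree’s leaves, and restriction to the cell is modeled by redrawing out-of-cell samples — a simplification of the paper’s subtree-restricted draws:

```python
import random

def empirical_conditional(draw_state, cell, target, m, rng):
    """Estimate tau(target | cell) as the average of m indicators
    X_l = 1[l-th in-cell sample == target]."""
    hits = 0
    for _ in range(m):
        s = draw_state(rng)
        while s not in cell:        # restrict to the conditioning cell
            s = draw_state(rng)
        hits += 1 if s == target else 0
    return hits / m

# Toy prior on states {0, 1, 2}; true tau(2 | {1, 2}) = 0.5 / 0.8 = 0.625.
rng = random.Random(0)
draw = lambda r: r.choices([0, 1, 2], weights=[0.2, 0.3, 0.5])[0]
est = empirical_conditional(draw, {1, 2}, 2, 20000, rng)
```

With $m = 20{,}000$ draws the estimate lands well within the Hoeffding tolerance of the true conditional $0.625$.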
Finally, each ratio uses $O\big((1/\varepsilon_j^2) \ln(1/\delta'_j)\big)$ subtree draws, each draw in the outer for loop in Step 6 of Algorithm 2 incurring $T_{\mathrm{sample},H}$ or $T_{\mathrm{sample},AI}$ cost depending on whether a human or an AI agent (of the $N$ total agents) built that portion of $\hat{T}_j^{\,i}$. Hence, we replace the $N$ in Lemma 8’s $O(\mathrm{poly}(N D_j^2))$ runtime with $q\, T_{\mathrm{sample},H} + (N-q)\, T_{\mathrm{sample},AI}$, and multiply by the per-ratio sampling overhead of $O\big((1/\varepsilon_j^2) \ln(1/\delta'_j)\big)$. This gives the stated runtime. Thus, ConstructCommonPrior still returns a valid common prior with probability at least $1 - \delta_j$. ∎

We have now reached a state where the $N$ agents have found a common prior $\mathbb{CP}_j$ with high probability. In what follows, note that it does not matter whether their common prior is approximate; what matters is that the $N$ agents can consistently find some common distribution to condition on, which is what Lemma 9 guarantees. By Subroutine 2 of Requirement 1, all $N$ agents can sample from the unconditional common prior $\mathbb{CP}_j$ once it is found (Step 12 of Algorithm 1), in total time $T_{N,q,\mathrm{sample}}$. They will do this and then build their new sampling trees of height $R'_j$ and branching factor $B'_j$, post common prior. We now want to bound $R'_j$:

Lemma 10. For all $f_j$ and $\mathbb{CP}_j$, the $N$ agents will globally $\langle \varepsilon_j, \delta_j \rangle$-agree on a given task $j \in [M]$ after $R'_j = O\big(N^7 / (\delta_j \varepsilon_j)^2\big)$ rounds under the message-passing protocol in §G.1.

Proof. By aaronson2005complexity, when any two agents use this protocol under a common prior, it suffices to take $R'_j = O\big(1/(\delta_j \varepsilon_j^2)\big)$ rounds to reach pairwise $\langle \varepsilon_j, \delta_j \rangle$-agreement, which is the same runtime as in the non-noisy two-agent case. We just need to generalize this to $N$ agents who share a common prior $\mathbb{CP}_j$.
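The way the sampling costs compose in Lemma 9 can be written as a small cost model. This is only a sketch of the bound’s shape: `lp_poly` and the constant 2 inside the logarithm are illustrative placeholders for the hidden polynomial and constants:

```python
import math

def t_sample_total(N, q, t_h, t_ai):
    """Eq. (15): T_{N,q,sample} = q*T_{sample,H} + (N-q)*T_{sample,AI}."""
    return q * t_h + (N - q) * t_ai

def construct_cp_cost(eps, delta, N, D, t_sample, lp_poly):
    """Shape of Lemma 9's bound: per-ratio sampling overhead
    (1/eps^2) * ln(2*N*D^2/delta) times an LP cost poly(D^2 * T_{N,q,sample})."""
    per_ratio = (1.0 / eps ** 2) * math.log(2.0 * N * D ** 2 / delta)
    return per_ratio * lp_poly(D ** 2 * t_sample)

# Example: 2 agents (1 human), tiny state space, identity stand-in for the LP polynomial.
cost = construct_cp_cost(0.5, 0.5, N=2, D=2,
                         t_sample=t_sample_total(2, 1, 2.0, 1.0),
                         lp_poly=lambda x: x)
```

The point is the structure, not the constants: the agent count $N$ from Lemma 8 is replaced by the mixed per-sample cost, and the whole expression is scaled by the per-ratio sampling overhead.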
We take the same approach as in our Proposition 4, by having an additional node $F_j$ that is globally accessible to all $N$ agents. The rest of the proof follows their Theorem 11 for any pair of agents that use the intermediary $F_j$; then, by our Lemma 3, under the spanning-tree protocol it will instead require $R'_j = O\big((N+1)^7 / (\delta_j \varepsilon_j)^2\big)$ rounds for all $\binom{N+1}{2}$ pairs of the $N+1$ agents (including $F_j$) to reach global $\langle \varepsilon_j, \delta_j \rangle$-agreement. We subsume the lower-order terms in the big-$O$ constant, giving rise to the above lemma. ∎

Thus, combining Lemma 10 with Lemma 4, reaching global $\langle \varepsilon_j, \delta_j \rangle$-agreement with sampling trees will take total time:
$$O\!\left(B_j'^{\,N^7/(\delta_j \varepsilon_j)^2}\, T_{N,q}\right). \tag{16}$$

Let $1 - \delta_j^{\mathrm{find\_CP}}$ be the probability of converging to a common prior by Lemma 6, let $1 - \delta_j^{\mathrm{construct\_CP}}$ be the probability of constructing a common prior once convergence is reached by Lemma 9, and let $1 - \delta_j^{\mathrm{agree\_CP}}$ be the probability of reaching global $\langle \varepsilon_j, \delta_j^{\mathrm{agree\_CP}} \rangle$-agreement conditioned on a constructed common prior. Then we have
$$\Pr[\varepsilon_j\text{-agreement}] \;\geq\; \big(1 - \delta_j^{\mathrm{find\_CP}}\big)\big(1 - \delta_j^{\mathrm{construct\_CP}}\big)\big(1 - \delta_j^{\mathrm{agree\_CP}}\big) \;\geq\; 1 - \big(\delta_j^{\mathrm{find\_CP}} + \delta_j^{\mathrm{construct\_CP}} + \delta_j^{\mathrm{agree\_CP}}\big). \tag{17}$$
Hereafter, we set $\delta_j := \delta_j^{\mathrm{find\_CP}} + \delta_j^{\mathrm{construct\_CP}} + \delta_j^{\mathrm{agree\_CP}}$. Thus, for a single task $j$, sufficiently large $B'_j$, and $B_j \geq \max\{S^{1/R_j}, 1/\alpha_j\}$, we have that with probability $\geq 1 - \delta_j$, the $N$ agents will reach $\varepsilon_j$-agreement in time:
$$\tilde{O}\!\left( B_j^{\, N^2 D_j \frac{\ln\!\big(\delta_j^{\mathrm{find\_CP}}/(N^2 D_j)\big)}{\ln(1/\alpha_j)}}\, T_{N,q} \;+\; N^2 D_j\, \mathrm{poly}\big(D_j^2\big)\, T_{N,q,\mathrm{sample}} \;+\; T_{N,q,\mathrm{sample}} \;+\; B_j'^{\,\frac{N^7}{(\delta_j^{\mathrm{agree\_CP}} \varepsilon_j)^2}}\, T_{N,q} \right) \;=\; O\!\left( T_{N,q} \left[ B_j^{\, N^2 D_j \frac{\ln\!\big(\delta_j^{\mathrm{find\_CP}}/(N^2 D_j)\big)}{\ln(1/\alpha_j)}} + B_j'^{\,\frac{N^7}{(\delta_j^{\mathrm{agree\_CP}} \varepsilon_j)^2}} \right] \right),$$
since the logarithmic terms (from Lemma 9) subsumed by the $\tilde{O}$ are on the non-exponential additive terms. This follows from summing the runtime of computing the sampling tree (Lemma 4), the total runtime of the common-prior procedure in Lemma 9 multiplied by the number of online rounds $R_j$ in the while loop of Algorithm 1 (Lines 4-16) given by Lemma 6, and the runtimes of conditioning and of the second set of sampling trees in (15) and (16), respectively.

We then take $D := \max_{j \in [M]} D_j$, $\delta^{\mathrm{find\_CP}} := \max_{j \in [M]} \delta_j^{\mathrm{find\_CP}}$, $\delta^{\mathrm{construct\_CP}} := \max_{j \in [M]} \delta_j^{\mathrm{construct\_CP}}$, $\delta^{\mathrm{agree\_CP}} := \max_{j \in [M]} \delta_j^{\mathrm{agree\_CP}}$, $\alpha := \min_{j \in [M]} \alpha_j$, and $\varepsilon := \min_{j \in [M]} \varepsilon_j$. To ensure an overall failure probability of at most $\delta$, we allocate the budget $\delta/M$ per task, splitting it equally among the three sub-routines: $\delta_j^{\mathrm{find\_CP}} = \delta_j^{\mathrm{construct\_CP}} = \delta_j^{\mathrm{agree\_CP}} = \delta/(3M)$. (A union bound over all $M$ tasks and the three sub-routines then bounds the total error by $\delta$.) Finally, let $B := \max_{j \in [M]} \max\{B_j, B'_j\}$ so that a single base subsumes all exponential terms. Multiplying the worst-case per-task time by $M$ tasks yields the global bound of Theorem 2.

To prove Corollary 1, it suffices to explicitly bound $B'_j$. This is because for each individual task $j$, a “total Bayesian wannabe”, after conditioning on a common prior $\mathbb{CP}_j$, as defined in Definition 1, corresponds to a single “Bayesian wannabe” in aaronson2005complexity’s sense.
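The probability bookkeeping in (17) and the $\delta/(3M)$ budget split reduce to two one-liners, which a quick numeric check confirms (function names illustrative):

```python
def success_lower_bound(deltas):
    """Eq. (17): the product of per-stage success probabilities,
    prod(1 - d_i), is bounded below by 1 - sum(d_i)."""
    prod = 1.0
    for d in deltas:
        prod *= 1.0 - d
    return prod

def per_subroutine_budget(delta, M):
    """Equal split of an overall failure budget delta across M tasks and the
    three sub-routines (find / construct / agree): delta / (3*M) each."""
    return delta / (3 * M)

# e.g. find/construct/agree budgets of 1%, 2%, 3%:
print(success_lower_bound([0.01, 0.02, 0.03]))  # >= 0.94
```

A union bound then recovers the overall budget: $M$ tasks $\times$ 3 sub-routines $\times$ $\delta/(3M)$ sums back to $\delta$.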
Thus, by their Theorem 20, for $\varepsilon_j/50 \leq \alpha_j \leq \varepsilon_j/40$, it suffices to set, on a per-task basis,
$$B'_j := O\big((11/\alpha_j)^{R_j'^2/\rho_j^2}\big), \qquad b_j := \left\lceil \log_2 \frac{R'_j}{\rho_j \alpha_j} \right\rceil + 2, \tag{18}$$
where $R'_j$ is the value for $N$ agents that we found in Lemma 10. Taking $\rho := \min_{j \in [M]} \rho_j$ maximizes the bound, and scaling by $M$ gives rise to what is stated in Corollary 1.

Appendix H  Proof of Proposition 5

Proof. Hard priors. For a state space $S = \{s_0, s_1, \dots, s_{D-1}\}$ ($D \geq 3$), define
$$\mathbb{P}^i(s_0) = 0, \qquad \mathbb{P}^k(s_0) = \nu/2, \qquad \mathbb{P}^i(s_m) = \frac{1}{D-1}, \qquad \mathbb{P}^k(s_m) = \frac{1 - \nu/2}{D-1} \quad (m \geq 1).$$
Then $\|\mathbb{P}^i - \mathbb{P}^k\|_1 = \nu$, which means the prior distance is $\geq \nu$ by the triangle inequality.

Payoff and targets. Let $f(s) = \mathbb{1}[s = s_0]$; thus $\mathbb{E}_i[f] = 0$ and $\mathbb{E}_k[f] = \nu/2$. Set $\varepsilon = \nu/8$, $\delta = 1/4$.

One agent, one tree. It suffices to consider only agent $k$. Let the sampling tree have $L$ leaves labeled $s_1, \dots, s_L$ and define the Bernoulli indicators $X_\ell = \mathbb{1}[s_\ell = s_0]$. The empirical expectation sent in the first message is
$$\hat{\mathbb{E}}[f] = \frac{1}{L} \sum_{\ell=1}^{L} X_\ell.$$
If none of the $L$ samples equals $s_0$, then $\hat{\mathbb{E}}[f] = 0$ and the absolute error is $|\hat{\mathbb{E}}[f] - \mathbb{E}_k[f]| = \nu/2 \geq 4\varepsilon$; the protocol necessarily fails. The probability of that event is
$$\Pr[\text{no } s_0] = (1 - \nu/2)^L \geq 1 - (\nu/2)L.$$
To make it $\leq \delta = 1/4$, we need $1 - (\nu/2)L \leq 1/4$, hence $L \geq 3/(2\nu)$. Because the sampling tree must have at least $L$ leaves, this completes this part of the proof.

Many tasks & agents. Embed an independent copy of the hard task in each of the $M$ task indices, assign $\mathbb{P}^i$ to $N/2$ of the agents and $\mathbb{P}^k$ to the other $N/2$, and sum the costs to get $\Omega\big(M \nu^{-1}\, T_{N,q,\mathrm{sample}}\big)$ time units once the per-sample cost of Requirement 1 is applied. ∎
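The two quantities driving the hard-prior lower bound — the miss probability $(1-\nu/2)^L$ and the necessary leaf count $L \geq 3/(2\nu)$ from the linearized bound — are easy to sanity-check numerically (function names illustrative):

```python
def miss_probability(nu, L):
    """Probability that L i.i.d. draws from P^k all miss s_0,
    which P^k gives mass nu/2: (1 - nu/2)^L."""
    return (1.0 - nu / 2.0) ** L

def leaves_needed(nu):
    """Necessary leaf count from the Bernoulli-linearized bound
    1 - (nu/2) L <= 1/4, i.e. L >= 3/(2 nu)."""
    return 3.0 / (2.0 * nu)

# e.g. nu = 0.1: at least 15 leaves are needed, and Bernoulli's inequality
# (1 - nu/2)^L >= 1 - (nu/2) L holds for every L.
print(leaves_needed(0.1), miss_probability(0.1, 15))
```

Note that $L \geq 3/(2\nu)$ is only a necessary condition extracted via Bernoulli’s inequality; the true miss probability at that $L$ can still exceed $1/4$, which only strengthens the lower bound.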