Paper deep dive
Maximum Entropy Relaxation of Multi-Way Cardinality Constraints for Synthetic Population Generation
François Pachet, Jean-Daniel Zucker
Intelligence
Status: succeeded | Model: google/gemini-3.1-flash-lite-preview | Prompt: intel-v1 | Confidence: 94%
Last extracted: 3/26/2026, 1:35:59 AM
Summary
The paper introduces a maximum-entropy relaxation method for generating synthetic populations under multi-way cardinality constraints. By recasting the problem as a convex optimization over Lagrange multipliers in an exponential-family model, the approach overcomes the scalability and feasibility issues of traditional constraint programming and iterative proportional fitting (IPF) when dealing with overlapping unary, binary, and ternary constraints.
Entities (5)
Relation Signals (3)
Maximum Entropy Relaxation → solves → Synthetic Population Generation
confidence 98% · we propose a maximum-entropy relaxation of this problem [synthetic population generation]
Maximum Entropy Relaxation → uses → Exponential Family
confidence 95% · yielding an exponential-family distribution over complete population assignments
Maximum Entropy Relaxation → outperforms → Generalized Raking
confidence 90% · The results show that MaxEnt becomes increasingly advantageous as the number of attributes and ternary interactions grows
Cypher Suggestions (2)
Find methods used for synthetic population generation · confidence 90% · unvalidated
Find methods used for synthetic population generation · confidence 90% · unvalidated
MATCH (m:Methodology)-[:SOLVES]->(t:Task {name: 'Synthetic Population Generation'}) RETURN m.name
Identify datasets used for benchmarking · confidence 85% · unvalidated
MATCH (d:Dataset)<-[:EVALUATED_ON]-(m:Methodology) RETURN d.name, m.name
Abstract
Generating synthetic populations from aggregate statistics is a core component of microsimulation, agent-based modeling, policy analysis, and privacy-preserving data release. Beyond classical census marginals, many applications require matching heterogeneous unary, binary, and ternary constraints derived from surveys, expert knowledge, or automatically extracted descriptions. Constructing populations that satisfy such multi-way constraints simultaneously poses a significant computational challenge. We consider populations where each individual is described by categorical attributes and the target is a collection of global frequency constraints over attribute combinations. Exact formulations scale poorly as the number and arity of constraints increase, especially when the constraints are numerous and overlapping. Grounded in methods from statistical physics, we propose a maximum-entropy relaxation of this problem. Multi-way cardinality constraints are matched in expectation rather than exactly, yielding an exponential-family distribution over complete population assignments and a convex optimization problem over Lagrange multipliers. We evaluate the approach on NPORS-derived scaling benchmarks with 4 to 40 attributes and compare it primarily against generalized raking. The results show that MaxEnt becomes increasingly advantageous as the number of attributes and ternary interactions grows, while raking remains competitive on smaller, lower-arity instances.
Tags
Links
- Source: https://arxiv.org/abs/2603.22558v1
- Canonical: https://arxiv.org/abs/2603.22558v1
Full Text
43,633 characters extracted from source content.
Maximum Entropy Relaxation of Multi-Way Cardinality Constraints for Synthetic Population Generation François Pachet1 Jean-Daniel Zucker1,2 1 ImagineAllThePeople, Paris, France 2 IRD, UMMISCO, Sorbonne Université, France pachet@gmail.com, jean-daniel.zucker@ird.fr Abstract Generating synthetic populations from aggregate statistics is a core component of microsimulation, agent-based modeling, policy analysis, and privacy-preserving data release. Beyond classical census marginals, many applications require matching heterogeneous unary, binary, and ternary constraints derived from surveys, expert knowledge, or automatically extracted descriptions. Constructing populations that satisfy such multi-way constraints simultaneously poses a significant computational challenge. We consider populations where each individual is described by categorical attributes and the target is a collection of global frequency constraints over attribute combinations. Exact formulations scale poorly as the number and arity of constraints increase, especially when the constraints are numerous and overlapping. Grounded in methods from statistical physics, we propose a maximum-entropy relaxation of this problem. Multi-way cardinality constraints are matched in expectation rather than exactly, yielding an exponential-family distribution over complete population assignments and a convex optimization problem over Lagrange multipliers. We evaluate the approach on NPORS-derived scaling benchmarks with 4 to 40 attributes and compare it primarily against generalized raking. The results show that MaxEnt becomes increasingly advantageous as the number of attributes and ternary interactions grows, while raking remains competitive on smaller, lower-arity instances. 
Keywords: synthetic population generation; maximum entropy; cardinality constraints; constraint programming; exponential family models 1 Introduction Synthetic population generation has become a foundational problem for contemporary AI systems in settings where individual-level microdata are unavailable, legally restricted, or prohibitively expensive to collect. At the same time, many downstream tasks—such as simulation, forecasting, and counterfactual analysis—require large numbers of representative agents rather than aggregate statistics alone. This need arises across a growing range of domains. In public policy and politics, synthetic populations are used to evaluate reforms, study voting behavior, and explore demographic scenarios without exposing sensitive personal data. In health and epidemiology, they support large-scale simulations of disease propagation, intervention strategies, and healthcare demand. In retail and marketing, they enable customer behavior modeling, demand forecasting, and what-if analyses in environments where individual data are fragmented or protected. More broadly, synthetic populations provide a bridge between aggregate statistics, expert knowledge, and agent-based models, allowing AI systems to reason about societies, markets, and institutions at scale while respecting privacy and regulatory constraints. Recent work on LLM-based synthetic personas further broadens this perspective beyond classical microsimulation: DeepMind’s Persona Generators expands compact contextual descriptions into diverse synthetic populations tailored to arbitrary settings, with an explicit emphasis on covering rare but consequential trait combinations (Paglieri et al., 2026). 
This renewed interest also aligns with a broader practitioner view that synthetic data can serve as a flexible design lever for shaping AI system behavior. For example, Karpathy describes using LLM-generated synthetic conversations to inject identity and behavior into a small language model (Karpathy, 2025). In these settings, individuals are described by a vector of categorical attributes (e.g., age bracket, gender, region, occupation), and the population must match a set of published or estimated aggregates. A convenient formalization is to specify a set of frequency constraints over patterns of attribute values. For example: "52% female" (unary), "18% female and age 35–44" (binary), or "4% female, age 35–44, and living in Île-de-France" (ternary). Such requirements are naturally expressed in constraint programming as cardinality constraints. Unary cardinalities are efficiently handled by the Global Cardinality Constraint (GCC) with strong filtering (Régin, 1996). However, multi-attribute constraints (arity ≥ 2) are typically modeled as Sum constraints over reified predicates or over contingency-table cell counts. These overlapping linear constraints often provide weak propagation and become difficult to satisfy at scale, especially when many higher-order constraints are specified or when targets are noisy or mutually inconsistent. Synthetic population generation has been extensively studied in microsimulation, transportation modeling, and survey methodology, using approaches ranging from combinatorial optimization and calibration to iterative reweighting and probabilistic modeling (Beckman et al., 1996; Voas and Williamson, 2000; Deming and Stephan, 1940; Fienberg, 1980). These methods have enabled large-scale applications, but typically rely on either unary marginals or fully specified contingency tables, and assume internal consistency of the target statistics.
In contrast, many contemporary applications involve selectively specified, heterogeneous, and potentially noisy multi-way constraints, for which existing approaches either do not scale or fail to provide robust solutions. This gap motivates alternative formulations that can accommodate partial and higher-order information without requiring exact feasibility. This paper proposes a probabilistic alternative: a maximum-entropy relaxation of multi-way cardinality-constrained population synthesis. Instead of searching directly for a discrete population, we infer a distribution over full attribute assignments whose expectations match the required frequencies. By the maximum-entropy principle (Jaynes, 1957), the solution belongs to an exponential family with one parameter per constraint. This transforms a difficult combinatorial problem into convex optimization over Lagrange multipliers, after which populations can be sampled efficiently. By construction, the maximum-entropy distribution is as unbiased as possible given the imposed constraints. 1.0.1 Contributions. We provide: (i) a formalization of population synthesis constraints as multi-way cardinality (Sum) CSPs over contingency tables; (ii) a maximum-entropy relaxation that recasts this problem as convex optimization over an exponential-family model; (iii) a practical fitting and sampling procedure in the exact-enumeration regime, together with a convex penalized extension for inconsistent targets; and (iv) an NPORS-derived benchmark family that reveals when MaxEnt overtakes generalized raking as dimensionality and constraint arity increase. 2 Related Work 2.1 Cardinality Constraints in CP The Global Cardinality Constraint (GCC) (Régin, 1996) enforces lower and upper bounds on the number of occurrences of atomic values across a set of variables sharing a domain, and supports strong propagation via flow-based filtering.
In population synthesis, unary marginals can be expressed directly as GCCs over per-individual attribute variables. However, GCCs do not naturally extend to multi-attribute patterns: binary and ternary marginals correspond to joint counts rather than counts of atomic values, and therefore cannot be captured by a single GCC over the original variables. Such constraints typically require reification and additional Sum constraints, or contingency-table formulations with explicit count variables, which substantially weakens propagation. Recent work by Petit and Pachot (2025) proposes a CP framework for exact synthetic population generation from aggregate statistics and structural relations, using a batch-based construction strategy rather than a single global solve. The paper provides a limited extension-preserving result for the marginal-distribution objective in isolation, but it does not prove global completeness or global optimality for the full model once horizontal constraints, diversity, joint features, time limits, and irrevocable batch commitments are included. Our work targets the same Sum-based regime, but replaces exact feasibility by a probabilistic relaxation and optimizes a single global model over all constraints at once; it should therefore be viewed as complementary to these encodings rather than as a replacement for classical GCC propagation. 2.2 Contingency Tables, IPF, and Log-Linear Models Fitting contingency tables from marginal distributions has a long tradition in statistics, notably through iterative proportional fitting (IPF) and log-linear models (Deming and Stephan, 1940; Bishop, 1967; Fienberg, 1980). When only first-order marginals are specified, IPF is known to converge to the unique maximum-entropy distribution consistent with these constraints, and can be viewed as a special case of maximum-likelihood estimation in log-linear models with main effects only (Csiszár, 1975; Darroch and Ratcliff, 1972). 
However, this equivalence fundamentally breaks down when constraints involve multi-attribute patterns. Enforcing binary or ternary marginals using IPF would require operating on full contingency tables whose dimensionality grows exponentially with the number of attributes (Fienberg, 1980). In realistic population synthesis settings, only a sparse and heterogeneous subset of multi-way marginals is available, which prevents the construction of such tables in practice (Voas and Williamson, 2000; Lomax and Norman, 2016). Several extensions of raking have been proposed to incorporate additional constraints, including generalized regression estimators and iterative proportional updating (IPU) (Deville and Särndal, 1992; Ye et al., 2009). These approaches rely on heuristic weight adjustments and auxiliary interaction variables, and do not provide general guarantees of feasibility or convergence when constraints are overlapping or selectively specified (Chambers and Skinner, 2003; Lomax and Norman, 2016). As a result, IPF and its generalizations cannot serve as principled baselines for population synthesis under arbitrary multi-way cardinality constraints. 2.3 Bayesian Networks and Probabilistic Graphical Models Bayesian networks represent joint distributions through local conditional dependencies structured as directed acyclic graphs (Pearl, 1988). While effective for probabilistic inference under assumed causal or dependency structures, they are not well suited for enforcing exact or near-exact global cardinality constraints over multi-variable configurations. Encoding such constraints within a Bayesian network would require auxiliary counting variables or high-order dependencies spanning the entire population, thereby eliminating the tractability advantages of Bayesian factorization and rendering inference computationally prohibitive (Cooper, 1990). 
Moreover, Bayesian networks do not guarantee the preservation of observed marginal distributions, whereas maximum-entropy models explicitly enforce aggregate constraints through convex optimization. The approach proposed here is therefore better understood as a constraint-driven distribution construction method rather than as a probabilistic inference problem amenable to standard Bayesian network formulations. 2.4 Positioning our work Our approach directly targets the regime where constraints are expressed as sums over arbitrary attribute patterns, including unary, binary, and ternary marginals. Rather than enumerating full contingency tables or relying on heuristic reweighting schemes, we formulate population synthesis as a maximum-entropy problem under linear expectation constraints. In this perspective, IPF appears as a degenerate special case of our framework restricted to unary marginals. Our contribution should therefore be viewed as extending classical population synthesis techniques to the Sum-based constraint regime that arises in realistic high-dimensional settings. 3 Problem Formulation 3.1 Attribute Space and Populations Let K be the number of categorical attributes, with finite domains D_1, …, D_K. A persona is a full assignment x = (a_1, …, a_K) ∈ X ≜ D_1 × ⋯ × D_K, and |X| = ∏_{k=1}^K |D_k|. A population of size N can be represented either as a multiset of individuals {x^(1), …, x^(N)} or, equivalently, as a contingency table of nonnegative integer counts C = (C_x)_{x∈X}, with C_x ∈ ℤ_{≥0} and ∑_{x∈X} C_x = N. 3.2 Multi-Way Cardinality Constraints (Sum-CSP) We consider constraints specified over patterns (subsets) S_j ⊆ X with target frequency α_j ∈ [0,1] (or target count M_j = α_j N). Each pattern induces an indicator feature f_j(x) ≜ 1[x ∈ S_j].
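As a concrete illustration of this formalism, the sketch below enumerates a small attribute space X and builds indicator features f_j for a unary, a binary, and a ternary pattern. The toy domains (gender, age, region) are assumptions chosen to mirror the paper's running examples, not the benchmark data.

```python
import itertools
import numpy as np

# Hypothetical toy attribute space: K = 3 categorical attributes.
domains = {"gender": ["F", "M"],
           "age": ["18-34", "35-44", "45+"],
           "region": ["IDF", "other"]}
attrs = list(domains)

# Enumerate X = D_1 x D_2 x D_3 as complete assignments (personas).
X = [dict(zip(attrs, combo)) for combo in itertools.product(*domains.values())]
assert len(X) == 2 * 3 * 2  # |X| = prod_k |D_k| = 12

def feature(pattern):
    """Indicator f_j(x) = 1[x in S_j] for a pattern fixing a subset of attributes."""
    return np.array([1.0 if all(x[a] == v for a, v in pattern.items()) else 0.0
                     for x in X])

# One pattern of each arity, as in the paper's examples.
f_unary = feature({"gender": "F"})
f_binary = feature({"gender": "F", "age": "35-44"})
f_ternary = feature({"gender": "F", "age": "35-44", "region": "IDF"})
```

Each feature vector has one entry per complete assignment; stacking the m feature vectors gives the constraint matrix used by the expectation constraints of Section 4.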
In a strict CSP over contingency-table variables, the constraints take the form ∑_{x∈S_j} C_x = M_j, j = 1, …, m, (1) together with nonnegativity and the total-size constraint. When S_j corresponds to fixing 1, 2, or 3 attributes, (1) encodes unary, binary, or ternary marginals respectively. As arity and overlap increase, (1) becomes difficult for CP/MIP solvers, and feasibility may fail under inconsistent targets. 3.3 Computational Hardness and Motivation for Relaxation Problem (1) defines a system of linear equalities over |X| integer variables, where |X| grows exponentially with the number of attributes K. Even for moderate K, explicitly representing all contingency-table cells is infeasible. Moreover, the constraints in (1) are highly overlapping: each variable C_x typically participates in many pattern constraints, and the resulting constraint matrix is dense and ill-conditioned. As a consequence, exact feasibility is fragile: small inconsistencies in the target marginals may render the system infeasible. From a computational perspective, solving (1) exactly amounts to finding a nonnegative integer solution to a large-scale multi-way marginal consistency problem, which is known to be NP-hard in general and intractable for standard CP or MIP solvers beyond small instances. Even when feasibility holds, the space of solutions is typically extremely large, making arbitrary solution selection undesirable. These limitations motivate the use of relaxed formulations that trade exact satisfaction of constraints for probabilistic or entropic consistency, while preserving the essential structure of the marginal information. 4 Maximum Entropy Relaxation Maximum entropy modeling under linear expectation constraints is equivalent to exponential-family modeling: the MaxEnt solution admits a canonical exponential form whose sufficient statistics correspond to the constrained features.
While exponential-family and energy-based models are typically learned from microdata via likelihood-based objectives (Wainwright and Jordan, 2008), our setting departs from this paradigm by inferring parameters directly from aggregate multi-way cardinality constraints. The resulting model is therefore used as an approximate solver and sampler for large-scale combinatorial population constraints, rather than as a density estimator over observed samples. The maximum entropy principle provides a principled way to construct probability distributions from partial information, originally introduced in statistical physics to characterize equilibrium distributions under macroscopic constraints (Jaynes, 1957). Its central idea is to select, among all distributions satisfying the known constraints, the one that makes the fewest additional assumptions. Beyond physics, maximum entropy models have been successfully applied in machine learning and structured data modeling to capture higher-order statistical regularities while avoiding overfitting (Sakellariou et al., 2017). In our context, this principle yields a global optimization-based relaxation of multi-way cardinality-constrained population synthesis and suggests a natural penalized extension for noisy or mutually inconsistent targets. 4.1 Expectation Constraints We replace the integer table C by a distribution p over X and enforce constraints in expectation: E_p[f_j] = ∑_{x∈X} p(x) f_j(x) = α_j, j = 1, …, m. (2) This is a convex relaxation of (1), replacing exact counts by expected frequencies. 4.2 Soft Constraints for Inconsistent Targets When the target vector α lies outside the moment polytope generated by the features {f_j}_{j=1}^m, the equalities in (2) cannot all be satisfied simultaneously.
A natural soft alternative is therefore to optimize directly over distributions: min_{p∈Δ(X)} −H(p) + (β/2) ∑_{j=1}^m w_j (E_p[f_j] − α_j)², (3) where β > 0 controls the trade-off between entropy and constraint fidelity and w_j ≥ 0 optionally rescales heterogeneous constraints. This objective is convex in p because negative entropy is convex and each penalty term is a square of a linear functional. When the hard constraints are feasible and β is large, the solution approaches the standard MaxEnt model. In the present paper, however, the empirical evaluation focuses on the hard-constraint regime induced by empirical marginals extracted from observed populations; the penalized formulation in (3) is included to make the treatment of noisy targets explicit. 4.3 MaxEnt Principle and Exponential Family Among distributions satisfying (2), we choose the one with maximum Shannon entropy: max_{p∈Δ(X)} H(p) s.t. E_p[f_j] = α_j ∀j, where H(p) = −∑_x p(x) log p(x). By classical duality results (Jaynes, 1957; Wainwright and Jordan, 2008), if the constraint set is feasible, the unique maximizer has the exponential-family form p_λ(x) = (1/Z(λ)) exp(∑_{j=1}^m λ_j f_j(x)), (4) where λ ∈ ℝ^m are Lagrange multipliers and Z(λ) = ∑_x exp(∑_j λ_j f_j(x)) is the partition function. 4.4 Dual Objective and Convexity The dual problem is min_{λ∈ℝ^m} Φ(λ) ≜ log Z(λ) − ∑_{j=1}^m λ_j α_j. (5) Its gradient is ∇Φ(λ) = E_{p_λ}[f] − α, and the Hessian equals the covariance matrix Cov_{p_λ}(f), implying convexity and, under mild conditions, strict convexity (Wainwright and Jordan, 2008). 5 Optimization and Sampling 5.1 Learning λ We learn λ by minimizing (5) using gradient-based optimization. When |X| is moderate (e.g., up to a few million), expectations can be computed exactly by enumerating x ∈ X.
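The dual (5) and its gradient are straightforward to evaluate in the exact-enumeration regime. The following sketch, on an assumed 4-cell toy instance (not one of the paper's benchmarks), minimizes Φ(λ) with SciPy's L-BFGS and then samples a population from the fitted p_λ:

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance (assumed for illustration): |X| = 4 complete assignments,
# two overlapping patterns. F[j, x] = f_j(x); alpha = target frequencies.
F = np.array([[1., 1., 0., 0.],
              [1., 0., 1., 0.]])
alpha = np.array([0.52, 0.30])

def dual(lam):
    """Phi(lam) = log Z(lam) - lam . alpha and its gradient E_p[f] - alpha,
    by exact enumeration of X (eq. (5)), with log-sum-exp stabilization."""
    scores = lam @ F
    logZ = scores.max() + np.log(np.exp(scores - scores.max()).sum())
    p = np.exp(scores - logZ)                    # p_lam(x), eq. (4)
    return logZ - lam @ alpha, F @ p - alpha

res = minimize(dual, np.zeros(len(alpha)), jac=True, method="L-BFGS-B")
lam = res.x

# Fitted distribution: expectations match the targets at the optimum ...
scores = lam @ F
p = np.exp(scores - scores.max())
p /= p.sum()

# ... and an i.i.d. sample of personas concentrates around them.
rng = np.random.default_rng(0)
sample = rng.choice(len(p), size=200_000, p=p)
emp = F[:, sample].mean(axis=1)                  # empirical frequencies
```

Because ∇Φ(λ) = E_{p_λ}[f] − α vanishes at the minimizer, the fitted expectations reproduce α exactly when the targets are feasible; the sampled frequencies then match up to O(1/√N) noise.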
For larger spaces, expectations can be estimated by sampling from p_λ (e.g., Gibbs or Metropolis–Hastings); in this draft we focus on the enumeration regime relevant to arity-1/2/3 benchmarks. All experiments reported below use exact enumeration of X and optimize λ with L-BFGS; no MCMC approximation is used in the present draft. The overall procedure is summarized in Algorithm 1.

Algorithm 1 MaxEnt fitting from multi-way cardinality targets
Input: attribute space X; patterns {S_j}_{j=1}^m; targets α ∈ [0,1]^m
1: Initialize λ ← 0
2: repeat
3: Compute p_λ(x) via (4) (exact or approximate normalization)
4: Compute α̂_j ← E_{p_λ}[f_j] = ∑_x p_λ(x) 1[x ∈ S_j]
5: Update λ ← λ − η(α̂ − α) (e.g., gradient descent; use L-BFGS in practice)
6: until convergence
7: return λ

5.2 Generating a Population Once λ is learned, we sample a population of size N i.i.d.: x^(1), …, x^(N) ∼ p_λ. Empirical frequencies concentrate around targets by standard concentration inequalities. If exact integer marginals are required, a lightweight repair step (e.g., local swaps) can be applied to match selected constraints while minimally perturbing others (left for future work). 6 Experimental Evaluation We evaluate the proposed Maximum Entropy relaxation on a family of NPORS-derived scaling benchmarks with ternary-or-less cardinality constraints. The goal is to isolate how relative performance changes with the number of attributes, the constraint arity, and the sampled population size. 6.1 Experimental Setup To obtain realistic yet controllable instances, we use the 2024 National Public Opinion Reference Survey (NPORS) as a source population. After preprocessing, the dataset contains N = 5,626 respondents and several hundred categorical or discretized variables with explicit missing-value codes.
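Extracting target frequencies from a source population, as done here with NPORS, can be sketched as follows. The random integer array is a hypothetical stand-in for preprocessed survey microdata, not the NPORS file:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
# Stand-in microdata: N respondents, K categorical attributes coded as ints.
N, K = 5626, 4
data = rng.integers(0, 3, size=(N, K))

def unary_targets(data):
    """One target frequency alpha_j per observed value of each attribute."""
    targets = {}
    for k in range(data.shape[1]):
        vals, counts = np.unique(data[:, k], return_counts=True)
        for v, c in zip(vals, counts):
            targets[(k, int(v))] = c / len(data)
    return targets

def binary_targets(data):
    """One target frequency per observed value combination of each pair;
    only observed (nonzero-support) cells yield constraints."""
    targets = {}
    for i, j in combinations(range(data.shape[1]), 2):
        pairs, counts = np.unique(data[:, [i, j]], axis=0, return_counts=True)
        for (vi, vj), c in zip(pairs, counts):
            targets[(i, j, int(vi), int(vj))] = c / len(data)
    return targets
```

Ternary targets follow the same pattern over attribute triples; note that, as in the benchmarks, the number of atomic constraints is the total observed support, not the number of selected pairs or triples.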
We construct five NPORS-derived problems by selecting subsets of K ∈ {4, 12, 20, 28, 40} categorical attributes with moderate domain sizes, clear semantic interpretation, and limited structural missingness. Each subset induces an empirical population in the formalism of Section 3, from which we extract global cardinality constraints over attribute patterns of increasing arity: • Unary constraints, corresponding to marginal distributions of single attributes; • Binary constraints, corresponding to joint marginals over pairs of attributes; • Ternary constraints, corresponding to joint marginals over triples of attributes. Unary marginals are always included in full. For K < 20, all candidate pairs and triples are retained. For K ≥ 20, binary and ternary interactions are ranked by informativeness and truncated to the top 50 pairs and top 50 triples, respectively. This keeps the benchmark family comparable across problem sizes while allowing the number of atomic constraints to reflect domain size and interaction complexity. 6.1.1 Inputs. The construction algorithm takes as input: • a population dataset C = (C_x)_{x∈X}; • a binary constraint budget n_2 ∈ ℕ⁺ or rate ρ_2 ∈ (0,1], controlling the number of variable pairs retained; • a ternary constraint budget n_3 ∈ ℕ⁺ or rate ρ_3 ∈ (0,1], controlling the number of variable triples retained. Unary constraints are always extracted in full (one constraint per attribute). 6.1.2 Variable selection by informativeness. Rather than selecting pairs and triples arbitrarily, we rank them by their statistical informativeness and retain the top-k most informative ones: • Binary pairs are ranked by Normalized Mutual Information (NMI) computed on the joint distribution of each pair in the population; • Ternary triples are ranked by KL divergence between the observed joint marginal and the MaxEnt-IPF reference distribution that matches all three pairwise marginals of the triple.
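The NMI ranking for binary pairs can be sketched as below. Normalizing mutual information by the geometric mean of the two marginal entropies is an assumption; the paper does not state which NMI variant it uses.

```python
import numpy as np

def nmi(joint_counts):
    """Normalized mutual information of an attribute pair from its joint
    contingency table (assumed normalization: geometric mean of entropies)."""
    P = joint_counts / joint_counts.sum()
    px, py = P.sum(axis=1), P.sum(axis=0)
    nz = P > 0
    mi = float((P[nz] * np.log(P[nz] / np.outer(px, py)[nz])).sum())
    hx = float(-(px[px > 0] * np.log(px[px > 0])).sum())
    hy = float(-(py[py > 0] * np.log(py[py > 0])).sum())
    return mi / np.sqrt(hx * hy) if hx > 0 and hy > 0 else 0.0

# Ranking: score every candidate pair, keep the top-n_2 (hypothetical tables).
tables = {("A1", "A2"): np.array([[30., 0.], [0., 30.]]),   # fully dependent
          ("A1", "A3"): np.array([[15., 15.], [15., 15.]])} # independent
ranked = sorted(tables, key=lambda pair: nmi(tables[pair]), reverse=True)
```

A fully dependent pair scores 1 and an independent pair scores 0, so informative interactions sort to the front of the ranking.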
Given a budget n_2 (resp. n_3), the top-n_2 pairs (resp. top-n_3 triples) are retained. When a rate ρ_2 (resp. ρ_3) is specified instead, k = ⌈ρ × |candidates|⌉ pairs (resp. triples) are selected. 6.1.3 Output. The algorithm outputs a constraint problem defined by a set C of global Sum constraints enforcing exact pattern counts for the selected marginals. The total number of atomic constraints equals the sum of the number of observed values across all selected attributes, pairs, and triples, which may be orders of magnitude larger than the number of selected variable pairs or triples (e.g., 50 triples yielding up to 17,883 atomic constraints for a 40-variable problem). 6.1.4 Constraint Extraction Algorithm. The construction procedure is summarized in Algorithm 2. Unary, binary, and ternary marginals are extracted from the empirical population and included according to the specified budgets or rates.

Algorithm 2 Extraction of Reference Constraints from a Population Dataset
Input: population C; binary budget n_2 ∈ ℕ⁺ or rate ρ_2 ∈ (0,1]; ternary budget n_3 ∈ ℕ⁺ or rate ρ_3 ∈ (0,1]
Output: constraint set C
1: C ← ∅
2: Phase 1: Unary constraints (all attributes, always included in full)
3: for each attribute A_i, i = 1, …, K do
4: compute unary marginal of A_i over C
5: add one Sum constraint per observed value of A_i to C
6: end for
7: Phase 2: Binary constraints (top-k_2 pairs by Normalized Mutual Information)
8: for each attribute pair (A_i, A_j) with i < j do
9: s_ij ← NMI(A_i, A_j)
10: end for
11: k_2 ← n_2 if n_2 given, else ⌈ρ_2 · C(K,2)⌉
12: for each of the k_2 top-ranked pairs (A_i, A_j) by s_ij (descending) do
13: compute binary marginal of (A_i, A_j) over C
14: add one Sum constraint per observed value combination to C
15: end for
16: Phase 3: Ternary constraints (top-k_3 triples by Kullback–Leibler divergence)
17: for each attribute triple (A_i, A_j, A_k) with i < j < k do
18: P̂_ijk ← IPF(P_ij, P_ik, P_jk) (iterative proportional fitting: the MaxEnt distribution matching all pairwise marginals)
19: s_ijk ← KL(P_ijk ∥ P̂_ijk)
20: end for
21: k_3 ← n_3 if n_3 given, else ⌈ρ_3 · C(K,3)⌉
22: for each of the k_3 top-ranked triples (A_i, A_j, A_k) by s_ijk (descending) do
23: compute ternary marginal of (A_i, A_j, A_k) over C
24: add one Sum constraint per observed value combination to C
25: end for
26: return C

Problem | Unary | Binary | Ternary | Total
4-Var | 12 | 54 | 92 | 158
12-Var | 35 | 117 | 1,082 | 1,234
20-Var | 67 | 510 | 2,249 | 2,826
28-Var | 107 | 703 | 3,824 | 4,634
40-Var | 197 | 1,329 | 17,883 | 19,409

Table 1: Number of nonzero atomic cells induced by the selected unary, binary, and ternary marginals in the NPORS-derived scaling benchmarks (N = 5,626 in the source population). Reported counts correspond to observed support sizes summed over retained marginals, not simply to the number of variables, pairs, or triples. For K ≥ 20, the numbers of retained binary and ternary marginals are capped at 50. This procedure yields the five NPORS-derived scaling benchmarks used throughout the paper. Table 1 reports the number of atomic constraints, defined as the observed nonzero cells contributed by the retained unary, binary, and ternary marginals, rather than the number of marginals themselves. The reported values are therefore obtained by summing the observed support sizes of all retained marginals within each arity. In this setting, the support size of a marginal is the number of category combinations that occur at least once in the source population. For example, the ternary marginal BORN-DEVICE1A-EMINUSE (BORN indicates whether the respondent identifies as a born-again or evangelical Christian, DEVICE1A indicates cell phone ownership, and EMINUSE indicates whether the respondent uses the internet or email)
involves three variables with three observed levels each, so its full Cartesian product contains 3 × 3 × 3 = 27 possible cells; however, only 22 of these combinations are observed in the NPORS source population, and its support size is therefore 22. The 4-Var benchmark makes this accounting explicit: the four variables contribute 4 × 3 = 12 unary constraints, the six retained pairs each have full 3 × 3 support and thus contribute 6 × 9 = 54 binary constraints, and the four retained triples have support sizes 22, 23, 25, and 22, which sum to 92 ternary constraints. The total for 4-Var is thus 12 + 54 + 92 = 158. The same principle explains the larger counts in higher-dimensional benchmarks: for instance, the 12-Var benchmark contains 12 unary marginals but 35 unary atomic constraints, 14 binary marginals but 117 binary atomic constraints, and 44 ternary marginals but 1,082 ternary atomic constraints. More generally, the ternary count grows rapidly with both the number of retained triples and the attribute domain sizes, reaching 17,883 atomic constraints in the 40-Var benchmark despite the cap of 50 retained triples. 6.1.5 Compared methods. The main empirical comparison is between MaxEnt and generalized raking. We focus on this comparison because both methods produce approximate synthetic populations under the same selected marginals and remain meaningful across all benchmark sizes. Exact CP encodings motivate the relaxation but are not used as the primary quantitative baseline in the present draft. 6.1.6 OR-Tools CP-SAT formulation. For completeness, we also implemented an exact reference model with OR-Tools CP-SAT. Let V denote the set of selected categorical variables and let n be the desired size of the synthetic population. For each variable v ∈ V, the domain D_v is the set of categories observed in the reference data, with missing-value codes retained as ordinary categories.
From the reference population, we compute all unary marginals and, up to a user-defined maximum arity k ∈ {1, 2, 3}, all pairwise and ternary marginals. For each scope S ⊆ V such that 1 ≤ |S| ≤ k, and for each observed assignment a ∈ ∏_{v∈S} D_v, we define a target count t_{S,a}. When the requested synthetic size differs from the source population size, these counts are proportionally rescaled so that they sum to n. The CP-SAT model then introduces integer decision variables x_{i,v} encoding the category assigned to variable v for synthetic individual i, together with auxiliary count variables y_{S,a} = ∑_{i=1}^n 1[x_{i,S} = a] for every stored marginal cell. Depending on the experimental setting, the model either enforces y_{S,a} = t_{S,a} exactly (hard mode) or minimizes the ℓ_1 discrepancy ∑_{S,a} |y_{S,a} − t_{S,a}|. Only marginal cells with nonzero empirical support are stored in the generated instance; consequently, unobserved cells are not encoded as structural zeros and remain unconstrained. In the present draft, this CP-SAT model is included as an exact reference formulation, while the systematic quantitative comparison focuses on MaxEnt and generalized raking. 6.1.7 Implementation details. All MaxEnt models are fitted in the exact-enumeration regime: expectations over X are computed explicitly and the parameters λ are optimized with L-BFGS. Generalized raking follows a standard iterative proportional fitting procedure: each individual carries a positive weight, initialized uniformly, and constraints are processed sequentially; for each constraint, the weights of matching individuals are multiplicatively rescaled so that the weighted sum matches the target value. The procedure is iterated 1,000 times. 6.1.8 Metric. For a sampled population, let α̂_j denote the empirical frequency associated with constraint j.
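A minimal sketch of the raking baseline and of the mean relative error metric follows. The symmetric two-sided IPF update (rescaling both matching and non-matching weights so the total mass stays 1) is an implementation assumption consistent with the description above, and the 4-cell instance is illustrative only:

```python
import numpy as np

def rake(F, alpha, iters=1000):
    """Sequential raking over unit weights: F[j, i] = 1 if unit i matches
    constraint j. Each update rescales matching weights by alpha_j / current
    and non-matching weights by (1 - alpha_j) / (1 - current), an IPF-style
    two-sided update (assumed; the paper describes rescaling matching weights)."""
    w = np.full(F.shape[1], 1.0 / F.shape[1])   # uniform initial weights
    for _ in range(iters):
        for j in range(F.shape[0]):
            cur = float(w @ F[j])                # current weighted frequency
            if 0.0 < cur < 1.0:
                w = np.where(F[j] > 0,
                             w * alpha[j] / cur,
                             w * (1 - alpha[j]) / (1 - cur))
    return w

def mre(alpha_hat, alpha):
    """Mean relative constraint error; well-defined since only cells with
    nonzero empirical support are retained, so every alpha_j > 0."""
    alpha_hat, alpha = np.asarray(alpha_hat), np.asarray(alpha)
    return float(np.mean(np.abs(alpha_hat - alpha) / alpha))

# Consistent toy targets over 4 cells (hypothetical): raking converges.
F = np.array([[1., 1., 0., 0.],
              [1., 0., 1., 0.]])
alpha = np.array([0.52, 0.30])
w = rake(F, alpha)
```

Because each one-constraint update can disturb previously matched overlapping marginals, raking's residual error grows with overlap, which is exactly the interference discussed in Section 7.1.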
We report the mean relative constraint error

MRE ≜ (1/m) ∑_{j=1}^{m} |α̂_j − α_j| / α_j,   (6)

together with runtime. Because only marginal cells with nonzero empirical support are retained, all target frequencies α_j are strictly positive.

6.2 Results on NPORS-Derived Scaling Benchmarks

All results below use the five NPORS-derived problems described above. We vary the sampled population size N to study how each method behaves under increasing sampling pressure while keeping the underlying constraint system fixed. As expected, the CP-SAT approach computes exact solutions but does not scale. In the case of 12 variables and 315 target cells, the solver proves optimality for populations up to n = 30 (in under 90 seconds). Beyond this threshold, the solver returns feasible but unproved solutions whose relative error degrades rapidly with population size. In the case of 24 variables and 3,621 target cells, the solver cannot prove optimality even for the smallest tested population (n = 10), and fails to find any feasible solution for n ≥ 200 within the allotted time. These results are summarized in Figure 1.

Figure 1: Scalability of the CP-SAT solver. Left: 12 variables (315 target cells); the solver proves optimality up to n = 30 (green dots). Right: 24 variables (3,621 target cells); the solver never proves optimality. Markers indicate solver status: optimal (proved best), feasible (not proved optimal), and timeout/infeasible. Error bars show the range over 3 random seeds. The time limit is 120 s per run.

Figures 2 and 3 summarize the relative performance of MaxEnt and Raking across problem sizes and constraint arities in 2D and 3D competence maps. Figures 4 and 5 show mean relative error as a function of population size for 28- and 40-variable instances, at arity 2 and arity 3.

Figure 2: Competence map of MaxEnt vs. Raking (2D view): relative advantage as a function of population size and number of variables, for arity 2 and arity 3.
Green = MaxEnt better; red = Raking better.

Figure 3: Competence map of MaxEnt vs. Raking (3D view): height encodes MaxEnt precision (1 − err); color encodes relative advantage. Green regions indicate configurations where MaxEnt wins.

Figure 4: Mean relative error vs. population size for 28-variable instances (left: arity 2; right: arity 3). MaxEnt consistently outperforms Raking for small populations and high arity.

Figure 5: Mean relative error vs. population size for 40-variable instances (left: arity 2; right: arity 3). The advantage of MaxEnt over Raking grows with the number of variables.

7 Discussion

7.1 MaxEnt as a Global Optimizer

From an algorithmic standpoint, the proposed approach can be interpreted as an approximate solver for Sum-based multi-way cardinality CSPs. Exact CP or MIP formulations search directly for integer-feasible contingency tables. In contrast, the maximum-entropy relaxation optimizes a single global objective over the multiplier vector λ and returns a distribution over complete assignments. The learned exponential-family model therefore plays the role of a relaxed solver: it identifies regions of the solution space that best satisfy the constraints in aggregate and supports principled sampling rather than arbitrary solution selection. Crucially, MaxEnt performs a global update of all Lagrange multipliers λ, finding one trade-off across all constraints at once. Raking, by contrast, applies sequential one-constraint-at-a-time rescalings, so improvements on one marginal can partially undo earlier corrections on overlapping marginals. On the NPORS-derived benchmarks, this interference becomes more pronounced as K and the constraint arity increase, which helps explain the larger residual errors observed for raking in the high-dimensional settings.

7.2 Empirical Comparison: MaxEnt vs.
Raking

On the NPORS-derived scaling benchmarks, the most salient factor governing relative performance is the number of variables K, not the sampled population size N alone. For K ≤ 12, raking remains a strong baseline and can win at arity 2 or at large N even under arity 3. Around K = 20, the picture becomes mixed: MaxEnt is more reliable at arity 3, whereas neither method dominates uniformly at lower arity. For K ≥ 28, MaxEnt outperforms raking in all tested configurations, with the largest gaps appearing on the 40-variable problems. A plausible structural explanation is that the number of overlapping constraints grows much faster than N as K increases, increasing interference between sequential raking updates while leaving the MaxEnt fit governed by a single global objective. We present this as an empirical interpretation of the observed results rather than as a general impossibility claim. Table 2 reports mean relative constraint error for representative configurations spanning five problem sizes and both arities.
Config | Pop. N | ME | Raking | Winner | Gap
12 variables
12-var, arity=2 | 1,000 | 0.036 | 0.030 | Raking | −17%
12-var, arity=3 | 100 | 0.129 | 0.176 | ME | +27%
12-var, arity=3 | 5,626 | 0.026 | 0.026 | equal | ≈0
12-var, arity=3 | 100,000 | 0.017 | 0.009 | Raking | −46%
20 variables
20-var, arity=2 | 200 | 0.114 | 0.186 | ME | +39%
20-var, arity=3 | 100 | 0.204 | 0.352 | ME | +42%
20-var, arity=3 | 10,000 | 0.027 | 0.028 | ME | +4%
20-var, arity=3 | 100,000 | 0.013 | 0.011 | Raking | −22%
28 variables — MaxEnt wins in all configurations
28-var, arity=2 | 1,000 | 0.078 | 0.140 | ME | +44%
28-var, arity=3 | 100 | 0.209 | 0.425 | ME | +51%
28-var, arity=3 | 10,000 | 0.037 | 0.092 | ME | +59%
28-var, arity=3 | 100,000 | 0.018 | 0.027 | ME | +33%
40 variables — largest observed gaps
40-var, arity=2 | 1,000 | 0.077 | 0.336 | ME | +77%
40-var, arity=3 | 100 | 0.380 | 0.916 | ME | +59%
40-var, arity=3 | 5,626 | 0.095 | 0.352 | ME | +73%
40-var, arity=3 | 100,000 | 0.061 | 0.214 | ME | +71%

Table 2: Mean relative constraint error (lower is better) on representative NPORS-derived scaling benchmarks. Gap = relative reduction over the weaker method. At K ≤ 12, the winner depends on arity and N. At K ≥ 28, MaxEnt wins in all configurations we tested.

7.3 Empirical Guidance

The empirical results above support a benchmark-specific rule of thumb, summarised in Table 3.

Situation | Guidance
K ≤ 12, arity = 2 | Raking is a strong baseline and can be preferred when speed is the main concern.
K ≤ 12, arity = 3, small or moderate N | MaxEnt often yields lower error and is worth trying first.
K ≈ 20 | Method choice becomes regime-dependent; MaxEnt is safer when ternary interactions matter.
K ≥ 28 | MaxEnt consistently outperformed raking on our benchmarks.
K = 40 | The largest empirical gaps favor MaxEnt by a wide margin.

Table 3: Empirical guidance on the NPORS-derived scaling benchmarks. These recommendations summarize observed trends and should not be read as universal guarantees.
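The global-update picture of §7.1 can be made concrete with a tiny exact-enumeration sketch. Plain dual gradient ascent stands in for the paper's L-BFGS optimizer, and the two-attribute toy problem, its features, and its targets are invented for illustration; the point is only that all multipliers λ_j move jointly at every step:

```python
import math
from itertools import product

def fit_maxent(domains, features, targets, lr=0.5, n_steps=2000):
    """Fit p(x) ∝ exp(sum_j λ_j f_j(x)) over the fully enumerated domain
    so that E_p[f_j] matches targets[j] in expectation. The dual gradient
    w.r.t. λ_j is target_j − E_λ[f_j]; every multiplier is updated
    jointly at each step (a stand-in for L-BFGS on the convex dual)."""
    xs = list(product(*domains))
    lam = [0.0] * len(features)
    for _ in range(n_steps):
        scores = [math.exp(sum(l * f(x) for l, f in zip(lam, features)))
                  for x in xs]
        Z = sum(scores)
        expect = [sum(s * f(x) for s, x in zip(scores, xs)) / Z
                  for f in features]
        lam = [l + lr * (t - e) for l, t, e in zip(lam, targets, expect)]
    scores = [math.exp(sum(l * f(x) for l, f in zip(lam, features)))
              for x in xs]
    Z = sum(scores)
    return xs, [s / Z for s in scores]

# Toy problem: two ternary attributes, one unary and one binary constraint.
domains = [range(3), range(3)]
features = [lambda x: float(x[0] == 0),                 # unary: P(a=0)
            lambda x: float(x[0] == 0 and x[1] == 1)]   # binary: P(a=0, b=1)
targets = [0.5, 0.3]
xs, p = fit_maxent(domains, features, targets)
```

Because every λ_j is adjusted against the current global expectations, no constraint is processed "last" and none is favored; raking's sequential per-constraint rescaling has no analogous joint step.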
8 Conclusion

We presented a maximum-entropy relaxation of multi-way cardinality-constrained population synthesis. By interpreting conjunction-defined cardinality requirements as expectation constraints, we obtain an exponential-family model whose parameters can be learned by convex optimization. On the NPORS-derived scaling benchmarks, MaxEnt is especially attractive when the number of attributes and the density of higher-order interactions increase, whereas generalized raking remains competitive on smaller, lower-arity problems. Two directions follow naturally from this study. First, the penalized soft-constraint formulation in (3) should be evaluated systematically on genuinely noisy or mutually inconsistent targets. Second, exact or near-exact repair procedures could be layered on top of MaxEnt samples when selected marginals must be matched exactly. Extending the same evaluation to higher arities and approximate expectation estimation is an important next step.

Acknowledgements

This work originates from research and development activities initiated by the first author in connection with ImagineAllThePeople. The company provided intellectual and conceptual support that significantly contributed to the direction of this work.

References

R. J. Beckman, K. A. Baggerly, and M. D. McKay (1996) Creating synthetic baseline populations. Transportation Research Part A 30 (6), p. 415–429.
Y. M. M. Bishop (1967) Multidimensional contingency tables: cell estimates. Journal of the American Statistical Association 62 (317), p. 567–576.
R. Chambers and C. Skinner (2003) Analysis of survey data. Wiley, Chichester, UK.
G. F. Cooper (1990) The computational complexity of probabilistic inference using Bayesian belief networks. Artificial Intelligence 42 (2–3), p. 393–405.
I. Csiszár (1975) I-divergence geometry of probability distributions and minimization problems. The Annals of Probability 3 (1), p. 146–158.
J. N. Darroch and D. Ratcliff (1972) Generalized iterative scaling for log-linear models. The Annals of Mathematical Statistics 43 (5), p. 1470–1480.
W. E. Deming and F. F. Stephan (1940) On a least squares adjustment of a sampled frequency table when the expected marginal totals are known. The Annals of Mathematical Statistics 11 (4), p. 427–444.
J. Deville and C. Särndal (1992) Calibration estimators in survey sampling. Journal of the American Statistical Association 87 (418), p. 376–382.
S. E. Fienberg (1980) The analysis of cross-classified categorical data. MIT Press, Cambridge, MA.
E. T. Jaynes (1957) Information theory and statistical mechanics. Physical Review 106 (4), p. 620–630.
A. Karpathy (2025) Guide: infusing identity to your nanochat. GitHub discussion, https://github.com/karpathy/nanochat/discussions/139, accessed 2026-03-18.
N. Lomax and P. Norman (2016) Estimating population attribute values in a table: "get me started in" iterative proportional fitting. Professional Geographer 68 (3), p. 451–461.
D. Paglieri, L. Cross, W. A. Cunningham, J. Z. Leibo, and A. S. Vezhnevets (2026) Persona generators: generating diverse synthetic personas at scale. arXiv:2602.03545.
J. Pearl (1988) Probabilistic reasoning in intelligent systems: networks of plausible inference. Morgan Kaufmann.
T. Petit and A. Pachot (2025) Exact synthetic populations for scalable societal and market modeling. arXiv:2512.07306.
J. Régin (1996) Generalized arc consistency for global cardinality constraint. Journal of Artificial Intelligence Research 2, p. 193–234.
J. Sakellariou, F. Tria, V. Loreto, and F. Pachet (2017) Maximum entropy models capture melodic styles.
Scientific Reports 7, p. 9172.
D. Voas and P. Williamson (2000) An evaluation of the combinatorial optimisation approach to the creation of synthetic microdata. International Journal of Population Geography 6 (5), p. 349–366.
M. J. Wainwright and M. I. Jordan (2008) Graphical models, exponential families, and variational inference. Now Publishers.
X. Ye, K. Konduri, R. M. Pendyala, B. Sana, and P. Waddell (2009) A methodology to match distributions of both household and person attributes in the generation of synthetic populations. Transportation Research Part C 17 (5), p. 533–545.