Paper deep dive
When both Grounding and not Grounding are Bad -- A Partially Grounded Encoding of Planning into SAT (Extended Version)
João Filipe, Gregor Behnke
Summary
The paper introduces three SAT-based planning encodings that utilize partial grounding of predicates while keeping actions lifted. This approach addresses the exponential blowup of full grounding and the quadratic scaling of previous lifted SAT encodings (like LiSAT), achieving linear scaling with plan length and improved performance on hard-to-ground domains.
Relation Signals (3)
LiSAT → uses → Unified Arguments
confidence 95% · Espasa et al. (2021) introduced a new way of handling action’s arguments which LiSAT also uses, i.e., a new way of operator splitting – Unified Arguments.
Partially Grounded Encoding → uses → Lifted Mutex Groups
confidence 95% · For the latter two, we use lifted mutex groups (Helmert 2009), allowing for a compact lifted encoding of some of the problem’s predicates.
Partially Grounded Encoding → outperforms → LiSAT
confidence 90% · our new partially grounded encodings outperform LiSAT in five out of nine benchmark domains.
Cypher Suggestions (2)
Map the relationship between encoding techniques and their underlying concepts · confidence 90% · unvalidated
MATCH (e:EncodingTechnique)-[:USES]->(c:Concept) RETURN e.name, c.name
Find all planning systems and their performance characteristics · confidence 85% · unvalidated
MATCH (p:PlanningSystem)-[r:OUTPERFORMS]->(other) RETURN p.name, r.relation, other.name
Abstract
Classical planning problems are typically defined using lifted first-order representations, which offer compactness and generality. While most planners ground these representations to simplify reasoning, this can cause an exponential blowup in size. Recent approaches instead operate directly on the lifted level to avoid full grounding. We explore a middle ground between fully lifted and fully grounded planning by introducing three SAT encodings that keep actions lifted while partially grounding predicates. Unlike previous SAT encodings, which scale quadratically with plan length, our approach scales linearly, enabling better performance on longer plans. Empirically, our best encoding outperforms the state of the art in length-optimal planning on hard-to-ground domains.
Links
- Source: https://arxiv.org/abs/2603.19429v1
- Canonical: https://arxiv.org/abs/2603.19429v1
Full Text
Introduction
Automated Planning aims to find a sequence of actions that transforms an initial state into one satisfying a given goal. While most research in the last decades focussed on solving planning problems that can be fully ground-instantiated, the task of lifted planning – that is, planning with actions described using a function-free first-order language – has made significant progress since the seminal work by Corrêa et al. (2020). Using lifted planning, significantly larger problems than before could be tackled successfully. Current methods for lifted planning mainly fall into two categories: ones based on search and ones based on translation to logic. In this paper, we focus on the latter. Shaik and van de Pol (2022) proposed a translation of lifted planning into Quantified Boolean Formulae (QBF), in which lifting is encoded using universal quantification by iterating over possible objects as arguments for predicates.
At the same time, Höller and Behnke (2022) proposed an encoding into Boolean Satisfiability (SAT) – called LiSAT – which is to date the state of the art for plan-length-optimal planning, which is also optimal planning for unit-cost problems. Similar to grounded SAT-based planners, it views the plan as a sequence of time steps and generates SAT formulae φ_ℓ that are satisfiable iff there is a plan of length up to ℓ. To generate an optimal plan, it iterates over ℓ until a plan is found. LiSAT expresses lifting using variables a^t and (v=o)^t, denoting that action a is executed at time t and that argument variable v of that action takes the object o, respectively. To encode that preconditions of actions are met and that effects are applied, LiSAT uses the concept of causal links (Penberthy and Weld 1992), linking actions having a fact as a precondition to actions that make that fact true. As a consequence, the size of φ_ℓ grows quadratically in ℓ – which is not desirable. We explore the middle ground between full grounding and full lifting for SAT-based planning. For this, we introduce three encodings that require only a partial grounding of the planning problem. In all three, actions are encoded fully lifted. For the state description, i.e., the problem's predicates, we investigate both a full grounding approach as well as two approaches that represent the current state using lifted representations that explicitly encode predicates and their argument objects. For the latter two, we use lifted mutex groups (Helmert 2009), allowing for a compact lifted encoding of some of the problem's predicates. We present two ways of encoding objects in this case: with a one-hot encoding or with a binary encoding. Although the full grounding approach is not competitive with other SAT-based approaches, our new partially grounded encodings outperform LiSAT in five out of nine benchmark domains.
Preliminaries
Lifted Planning
We formalize lifted STRIPS planning problems (Fikes and Nilsson 1971) as a tuple Π = ⟨O, P, A, I, G⟩. Here, O is a set of objects, P is a set of predicate symbols, A is a set of action symbols, I is the initial state, and G is the goal condition. Each object o ∈ O has a type t. The set of all objects of a type t is denoted O^t. Similarly, for a variable v we write O^v to denote all objects of v's type. Typically, types are defined via a type hierarchy, where one type t_1 is a subtype of another type t_2. We flatten this hierarchy by adding all objects of O^{t_1} to O^{t_2}. We assume that the type hierarchy is a tree (which it always is in benchmark instances). Each predicate symbol p^n ∈ P has an arity n and a tuple of typed variables p^n(v_1^{t_1}, …, v_n^{t_n}), where v_i^{t_i} indicates that variable v_i is of type t_i. We call a predicate together with concretely chosen arguments, which can be either variables or objects, p^n(a_1, …, a_n), an atom. When all variables are replaced by objects of the respective type, the predicate is considered grounded and is then called a fact. Similarly to predicate symbols, each action symbol a^n ∈ A has an arity n and a tuple of typed variables, and is considered grounded when all variables are replaced by objects of the respective type. To describe an action's semantics, there are three functions: prec, add, and del. They map actions to their preconditions, add effects, and delete effects, respectively, which are sets of atoms using the action's arguments as variables. Lastly, I and G are sets of facts, representing the initial state and the goal condition.
Definition 1 (Applicability). A ground action a(o_1, …, o_n) is applicable in state s iff prec(a(o_1, …, o_n)) ⊆ s.
Definition 2 (State Transition).
Given a state s and an applicable ground action a(o_1, …, o_n), the transition function τ(s, a(o_1, …, o_n)) = (s \ del(a(o_1, …, o_n))) ∪ add(a(o_1, …, o_n)) gives the resulting state.
Definition 3 (Solution). A solution is a sequence of ground actions π = (a_0(o_1^0, …, o_{n_0}^0), …, a_m(o_1^m, …, o_{n_m}^m)) such that each a_i is applicable in s_i, s_{i+1} = τ(s_i, a_i(o_1^i, …, o_{n_i}^i)), s_0 = I, and G ⊆ s_{m+1}.
Definition 4 (Optimal Length). A solution π = (a_1, …, a_n) is length-optimal if for any other solution π′ = (b_1, …, b_m) it holds that n ≤ m.
Lifted Mutex Groups
In planning problems, it is often the case that not all syntactically valid states can actually be reached from I. Instead, certain invariants hold in every state reachable via application of actions. One such type of invariant are mutex groups: they state that, out of a set of facts, at most one can be true in any reachable state. For example, a package cannot simultaneously be in two different locations. Thus, facts such as at(package1, lisbon) and at(package1, amsterdam) would form a mutex group. Defining, computing, and using mutex groups as sets of facts is problematic in practice, as there are often many mutex groups with complex structures. Instead, one can restrict attention to lifted mutex groups (LMGs; Helmert 2009). An LMG is described using variables so that every ground instantiation of the LMG is a mutex group. LMGs can be obtained without ground-instantiating all facts and actions (Helmert 2009; Fišer 2020), but these procedures usually do not find all LMGs, only a (practically sufficient) subset of them. For our experiments, we used the discovery method by Fišer (2020) based on lifted fact-alternating mutex groups, which can generate a substantial number of LMGs quickly.
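As a side note, the STRIPS semantics of Definitions 1–3 can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation; all names (GroundAction, is_applicable, apply_action, is_solution) are invented for this sketch.

```python
# Minimal sketch of Definitions 1-3: applicability, the transition
# function, and the solution check, with states as frozensets of facts.
from dataclasses import dataclass

@dataclass(frozen=True)
class GroundAction:
    name: str
    prec: frozenset   # preconditions: ground facts
    add: frozenset    # add effects
    dele: frozenset   # delete effects

def is_applicable(action, state):
    # Definition 1: prec(a) must be a subset of s
    return action.prec <= state

def apply_action(action, state):
    # Definition 2: tau(s, a) = (s \ del(a)) | add(a)
    assert is_applicable(action, state)
    return (state - action.dele) | action.add

def is_solution(plan, init, goal):
    # Definition 3: each action applicable in turn, goal holds at the end
    state = init
    for a in plan:
        if not is_applicable(a, state):
            return False
        state = apply_action(a, state)
    return goal <= state
```

For instance, a ground drop action from the transport domain takes a state containing at(v,l) and in(p,v) to one containing at(v,l) and at(p,l).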
We now define the syntax for LMGs by defining candidate structures that could be LMGs.
Definition 5. A lifted mutex group (LMG) candidate L is a triple ⟨fix, cnt, P⟩ where: (1) fix is a set of fixed variables, (2) cnt is a set of counted variables with fix ∩ cnt = ∅, and (3) P is a set of atoms whose variables are from fix ∪ cnt.
Conceptually, each instantiation of the fixed variables fix should yield a mutex group. The facts in that mutex group are obtained by computing all ground instantiations of the counted variables cnt. For example, we may have L = ⟨{?p}, {?l}, {at(?p,?l)}⟩. Each instantiation of the fixed variable ?p with an object x yields one mutex group, whose elements are the facts of the form at(x,y), where y is a possible object for the variable ?l. For the purposes of this paper, we often work with partially instantiated LMGs, which we call partially lifted mutex groups (PLMGs). In them, the fixed variables have already been instantiated.
Definition 6. A partially lifted mutex group (PLMG) candidate M is a pair ⟨cnt, P⟩ where cnt is a set of counted variables and P is a set of atoms with variables from cnt. We write C(M) to denote the counted variables of a PLMG and L(M) to denote its atoms.
In both LMGs and PLMGs, every variable v is associated with a type with domain O^v. Given an LMG and an instantiation of its fixed variables I: fix → O, we obtain a PLMG by replacing all occurrences of each fixed variable v by I(v). A PLMG implies a single mutex group, obtained by computing all ground instantiations of its atoms w.r.t. the counted variables cnt. We write this set of ground-instantiated facts as F(M).
Definition 7. A PLMG candidate M is a PLMG iff F(M) is a mutex group, that is: in every state reachable by the application of actions, at most one fact f ∈ F(M) is true. An LMG candidate is an LMG iff every instantiation of its fixed variables leads to a PLMG.
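The grounding of a PLMG into its implied mutex group F(M) (Definition 6) can be sketched as follows. The dict-based representation and the function name ground_plmg are assumptions made for this illustration.

```python
# Sketch of F(M): enumerate all instantiations of a PLMG's counted
# variables and ground each of its atoms accordingly.
from itertools import product

def ground_plmg(counted, atoms):
    """counted: {counted variable: list of objects of its type};
    atoms: list of (predicate, args), args mixing objects and counted vars.
    Returns the set of ground facts F(M) as (predicate, objects) tuples."""
    facts = set()
    vars_ = list(counted)
    for values in product(*(counted[v] for v in vars_)):
        sigma = dict(zip(vars_, values))  # one instantiation of cnt
        for pred, args in atoms:
            facts.add((pred, tuple(sigma.get(a, a) for a in args)))
    return facts
```

For the PLMG of a package p1 with atoms at(p1,?l) and in(p1,?v), this enumerates one at-fact per location plus one in-fact per vehicle; at most one of these may hold in any reachable state.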
A mutex group requires solely that at most one fact of the group is true in every reachable state. In many cases, it also holds that at least one fact in the group is true. We call such mutex groups, LMGs, and PLMGs exactly-one mutex groups, LMGs, and PLMGs. The LMG inference mechanism based on lifted fact-alternating mutex groups that we use (Fišer 2020) determines for every LMG whether it is an exactly-one or an at-most-one LMG. This naturally also applies to the generated PLMGs.
Example 1. Consider the transport domain. One of the LMGs of this domain is ⟨{?p}, {?l, ?v}, {at(?p,?l), in(?p,?v)}⟩. From it, we can generate as many PLMGs as there are packages. For each package p, we obtain a PLMG M with C(M) = {?l, ?v} and L(M) = {at(p,?l), in(p,?v)}.
Boolean Satisfiability
Boolean Satisfiability (SAT) is the task of determining whether a propositional formula in conjunctive normal form (CNF) has a truth assignment of its variables that makes the formula true. A CNF formula consists of a conjunction (∧) of clauses, where each clause is a disjunction (∨) of literals. A literal is a propositional variable, or variable for short, x, or its negation ¬x. SAT solvers, which are readily available, determine whether a formula is satisfiable and, if so, provide a satisfying truth assignment. For ease of notation, we will not present SAT formulae in CNF, but all formulae can be translated easily to CNF.
Related Work
SAT-based planning has been a prominent technique since the work of Kautz and Selman (1992). Most approaches follow a common idea: generate SAT formulae φ_ℓ that are satisfiable iff a plan of length ≤ ℓ exists. These are passed to a SAT solver either incrementally (for optimal planning) or via a schedule (e.g., Rintanen 2011, 2012, for satisficing planning). Once a satisfiable φ_i is found, a plan is extracted. Several SAT planning systems have since been developed, including SATPLAN04 (Kautz et al.
2006), Madagascar (Rintanen 2012), Aquaplanning (https://github.com/domschrei/aquaplanning), and SASE (Huang et al. 2012), all relying on fully grounded representations. While grounding simplifies reasoning, it poses scalability issues in complex and large domains. To mitigate this, alternative approaches emerged that avoid full grounding. Early approaches such as operator splitting were introduced by Kautz et al. (1996) and Ernst et al. (1997). Kautz et al.'s simple splitting assigned each lifted action its own set of argument variables, whereas Ernst et al. proposed an overloaded version in which all actions share the same argument variables. Robinson et al. (2009) later extended this idea to support action parallelism by grouping arguments. However, all of these approaches fully grounded predicates and states, as argued for by Kautz et al. (1996). LiSAT (Höller and Behnke 2022) avoids this limitation by using a fully lifted, stateless encoding inspired by plan-space planning. It does not track the state over time; instead, it ensures that each action's preconditions are achieved by prior actions (or the initial state) and not invalidated by intervening actions. This removes the need to ground facts. Problematically, LiSAT's encoding leads to quadratic growth in formula size with respect to the plan length ℓ, as every precondition must be linked to a potential achiever at any earlier step. This limits LiSAT's scalability in problems requiring long plans, despite strong performance in the length-optimal setting. LiSAT is also not able to exploit inherent problem structures, like LMGs – which can cause substantial overhead in reasoning.
Lifted Representation of Actions
Espasa et al. (2021) introduced a new way of handling actions' arguments, which LiSAT also uses, i.e., a new way of operator splitting – Unified Arguments. It is similar to Ernst et al.'s (1997) overloaded splitting in that it allows different actions to share argument variables.
It does not, however, perform this sharing simply by argument index. Instead, it considers the arguments' types and shares only arguments of the same type, irrespective of their positions in the actions' argument lists. This is especially useful in domains where the number of objects differs vastly between types. For each type, we count how many arguments of that type each action takes; the maximum of these counts over all actions gives the number of placeholders required for that type. This process eliminates variability in argument order across actions, as each placeholder is uniquely associated with a type. By doing so, the formula becomes independent of the specific argument ordering of individual actions.
Example 2. Consider the actions drive(?v - vehicle, ?l1 - location, ?l2 - location), drop(?v - vehicle, ?l - location, ?p - package), and pickup(?v - vehicle, ?l - location, ?p - package) from the transport domain. There are three distinct types involved in this example: vehicle (v), location (l), and package (p). Let num_type(action) denote the number of arguments of a specific type that an action takes. The occurrences of each type in each action are as follows:
• num_v(drive) = num_v(drop) = num_v(pickup) = 1
• num_l(drive) = 2, num_l(drop) = num_l(pickup) = 1
• num_p(drive) = 0, num_p(drop) = num_p(pickup) = 1
The resulting Unified Arguments consist of four placeholders: one for vehicles, two for locations, and one for packages. Table 1 illustrates the alignment of the arguments of all actions with the final UA.
Table 1: Unified Arguments
  drive    ?v        ?l        ?l        -
  drop     ?v        ?l        -         ?p
  pick-up  ?v        ?l        -         ?p
  UA       vehicle   location  location  package
As we will base our own encodings on the concepts of LiSAT, including its Unified Arguments, we briefly review the relevant parts of LiSAT's formula that we retain for our work. To encode actions and their arguments, Höller and Behnke (2022) introduced two types of variables:
• a^t: indicates whether action a is applied at time step t.
• (v=o)^t: indicates whether object o is selected for unified argument v at time step t.
We write A^t to denote the set of all a^t variables for time step t and UA_v^t to denote the set of all (v=o)^t variables for unified argument v at time t. We write U to denote the set of all unified arguments. Lastly, let args be a function mapping an action to the set of unified arguments it has as its arguments. We need to ensure that at most one action, with properly selected arguments, is executed at each time step t. For this we use the atMostOne macro, which, given a set of decision variables, generates clauses ensuring that at most one of them is true, yielding the constraints atMostOne(A^t), atMostOne(UA_v^t), and a^t ⟹ ⋁_{x ∈ UA_v^t} x. Depending on the number of variables, the encoding of the atMostOne constraint varies: for up to 256 variables, a pairwise quadratic encoding is used, whereas a binary counter encoding is applied otherwise (Sinz 2005). We retain all constraints pertaining to static preconditions described in Höller and Behnke (2022) and omit them for brevity. We further retain the constraint a^t ⟹ ⋁_{b ∈ A} b^{t−1}, which reduces symmetries in the formula by enforcing that the time steps with actions are compact at the start of the plan.
Partial Grounding in Propositional Logic
We present three SAT-based encodings which, through the use of partial grounding, eliminate the quadratic scaling present in LiSAT (Höller and Behnke 2022).
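Returning to Example 2, the Unified Arguments placeholder computation (maximum per-action count of each type) can be sketched as follows; the function name unified_arguments and the dict representation are assumptions for this illustration.

```python
# Sketch of the Unified Arguments construction (Espasa et al. 2021):
# the number of placeholders for a type is the maximum number of
# arguments of that type that any single action takes.
from collections import Counter

def unified_arguments(actions):
    """actions: {action name: list of argument types in declaration order}.
    Returns {type: number of placeholders}."""
    placeholders = Counter()
    for arg_types in actions.values():
        for typ, n in Counter(arg_types).items():
            placeholders[typ] = max(placeholders[typ], n)
    return dict(placeholders)
```

On the transport actions of Example 2 this yields one vehicle placeholder, two location placeholders, and one package placeholder, matching Table 1.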
The main cause of LiSAT's quadratic formula size was the need to encode causal links, which in turn was caused by not tracking the state explicitly. Instead, we encode the state explicitly, removing the need for causal links and thus ensuring linear growth of our formula. Our formulae take the overall form F = F_I ∧ F_G ∧ ⋀_{t=0}^{ℓ} τ(t, t+1), where τ(t, t+1) encodes the transition from time t to t+1. The encodings presented throughout the rest of the paper all specify τ(t, t+1). Since its size does not depend on the number of time steps, |τ(t, t+1)| is a constant with respect to ℓ and scales only in |Π|. Hence F scales linearly in ℓ. Our overall approach is to always leave actions fully lifted, while fully or partially (depending on the encoding) grounding the state. This distinction is reasonable, as the arity of predicates is typically bounded by the arity of the actions they appear in – which means that the predicates, and thus the state description, result in a smaller grounding than the actions. By retaining actions at the lifted level, we preserve LiSAT's compact representation of actions. Our three encodings differ in the degree to which they ground the state representation: one fully grounds predicates, while the other two partially ground them using PLMGs, differing in the way they encode them.
Encoding with Fully Grounded Facts
We start by giving a simplistic approach that acts as a baseline. In it, we fully ground all predicates and use them as the state representation. This encoding is in fact similar to the one proposed by Kautz et al. (1996) and Kautz and Selman (1996), as operationalised and automated by Ernst et al. (1997).
The main differences between their encoding and ours are: (1) we use Unified Arguments instead of simple or overloaded splitting, (2) we use LiSAT's (Höller and Behnke 2022) handling of typing and type constraints, and (3) we use LiSAT's dedicated encoding of static preconditions. In addition to the variables a^t and (v=o)^t encoding the selected actions, we introduce the following variables to describe the state:
• f^t: indicates whether fact f is true at time step t.
• c_{a,p(v_1,…,v_n)}^{f,t}: indicates that action a makes fact f true at time step t via its effect p(v_1,…,v_n), with f being a grounding of p(v_1,…,v_n).
For this encoding, we fully ground all predicates to obtain the set F of all facts. Further, assume we encode the plan for length L and that there is a function achievers which receives a fact as an argument and returns the pairs of an action and a predicate in its effects (either add or delete) corresponding to that fact. Firstly, we assert the initial and goal states at time steps 0 and L, respectively, via F_I = ⋀_{f ∈ I} f^0 ∧ ⋀_{f ∉ I} ¬f^0 and F_G = ⋀_{f ∈ G} f^L. Constraint (1) forces preconditions of selected actions to hold. Formulae with a parameter t are generated per time step. Note that p(o_1,…,o_n) in this case is a specific fact.
∀a ∈ A, ∀p(v_1,…,v_n) ∈ prec(a), ∀o_1 ∈ O^{v_1}, …, o_n ∈ O^{v_n}:
  a^t ∧ ⋀_{i=1}^{n} (v_i=o_i)^t ⟹ p(o_1,…,o_n)^t   (1)
Constraint (2) ensures that the effects of actions take effect at the next time step. The negation (¬) is present for delete effects but not for add effects. Constraint (3) introduces the cause variables that keep track of the reasons for a fact becoming true (no ¬) or false (¬) – depending on whether the effect adds or deletes the fact.
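Constraint (1), once translated to CNF, becomes one clause per instantiation of the precondition's variables: ¬a^t ∨ ¬(v_1=o_1)^t ∨ … ∨ p(o_1,…,o_n)^t. The following sketch emits such clauses; the string-based literal naming and the function name precondition_clauses are assumptions for this illustration.

```python
# Sketch of Constraint (1) as CNF clause generation: for each
# instantiation of a precondition's unified arguments, emit the clause
# not-a^t OR not-(v_i=o_i)^t ... OR p(o_1,...,o_n)^t.
from itertools import product

def precondition_clauses(action, prec, domains, t):
    """action: action name; prec: (predicate, list of unified argument vars);
    domains: {var: list of objects}; t: time step.
    Returns one clause (list of literal strings, '-' = negated) per instantiation."""
    pred, vs = prec
    clauses = []
    for objs in product(*(domains[v] for v in vs)):
        clause = [f"-{action}@{t}"]                       # not a^t
        clause += [f"-{v}={o}@{t}" for v, o in zip(vs, objs)]  # not (v=o)^t
        clause.append(f"{pred}({','.join(objs)})@{t}")    # fact literal
        clauses.append(clause)
    return clauses
```

For the drop action of Example 3 with singleton domains, this produces exactly one clause per precondition atom.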
∀a ∈ A, ∀(¬)p(v_1,…,v_n) ∈ eff(a), ∀o_1 ∈ O^{v_1}, …, o_n ∈ O^{v_n}:
  a^t ∧ ⋀_{i=1}^{n} (v_i=o_i)^t ⟹ (¬)p(o_1,…,o_n)^{t+1}   (2)
  c_{a,p(v_1,…,v_n)}^{(¬)p(o_1,…,o_n),t} ⟹ a^t ∧ ⋀_{i=1}^{n} (v_i=o_i)^t   (3)
Finally, Constraints (4) and (5) specify that a change in a fact's truth value must be caused by an action's effects.
∀f ∈ F: f^t ∧ ¬f^{t+1} ⟹ ⋁_{(a,p(v_1,…,v_n)) ∈ achievers(¬f)} c_{a,p(v_1,…,v_n)}^{¬f,t+1}   (4)
∀f ∈ F: ¬f^t ∧ f^{t+1} ⟹ ⋁_{(a,p(v_1,…,v_n)) ∈ achievers(f)} c_{a,p(v_1,…,v_n)}^{f,t+1}   (5)
Example 3. Consider a simple planning problem from the transport domain with only three objects: v (a vehicle), p (a package), and l (a location). In the initial state, v is at l and p is in v. In the goal state, p is at l. Our encoding of length 1 for this problem would contain the following clauses (the full encoding would have more clauses, but for brevity we only present the relevant ones):
• Precondition check:
  – drop^1 ∧ (v_1=v)^1 ∧ (v_2=l)^1 ⟹ at(v,l)^0
  – drop^1 ∧ (v_1=v)^1 ∧ (v_4=p)^1 ⟹ in(p,v)^0
• Effects:
  – drop^1 ∧ (v_1=v)^1 ∧ (v_4=p)^1 ⟹ ¬in(p,v)^1
  – drop^1 ∧ (v_2=l)^1 ∧ (v_4=p)^1 ⟹ at(p,l)^1
• Cause variables:
  – c_{drop,at(v_2,v_4)}^{at(p,l),1} ⟹ drop^1 ∧ (v_2=l)^1 ∧ (v_4=p)^1
  – c_{drop,in(v_1,v_4)}^{¬in(p,v),1} ⟹ drop^1 ∧ (v_1=v)^1 ∧ (v_4=p)^1
• Frame axioms:
  – in(p,v)^0 ∧ ¬in(p,v)^1 ⟹ c_{drop,in(v_1,v_4)}^{¬in(p,v),1}
  – ¬at(p,l)^0 ∧ at(p,l)^1 ⟹ c_{drop,at(v_2,v_4)}^{at(p,l),1}
Predicate Pruning
As an important optimisation to all our encodings, we remove facts f from the set of all facts F whose truth value does not matter for any plan. Intuitively, a fact can only influence a plan if it appears either in the goal or as a precondition. Facts that occur in neither place cannot affect the success or failure of any action execution or the satisfaction of the goal; they can therefore be safely discarded. Such irrelevant facts are commonly pruned in grounded planning (see, e.g., Muise et al. 2012, Def. 8, for grounded probabilistic planning). Due to lifting, we cannot analyze each ground fact individually. Instead, we remove entire predicates that do not appear in any precondition. This might over-delete facts that appear in no precondition but do appear in the goal. As an exception, such goal facts are added back as individual facts to be considered when encoding. This ensures that the goal conditions are checked correctly. We refer to this as predicate pruning.
Encoding with Partially Grounded Facts
In this section, we present the novel contributions of this work. We introduce a new family of SAT encodings that leverage partially lifted mutex groups (PLMGs) to partially ground the state description while keeping the action encoding fully lifted. This approach exploits the inherent structure of planning problems to improve the scalability of the encoding. Unlike previous encodings of lifted planning into SAT, whose formula size scales quadratically with plan length, our encodings' formula size scales linearly, addressing a key limitation of existing methods. In practice, the fully grounded encoding leads to prohibitively large formulae. This can be caused, e.g., by a high number of objects (more than 10^3) or by predicates of high arity (greater than 2) that generate many facts when grounded. One way to counteract this is to exploit the planning problem's inherent internal structure. For this, we leverage PLMGs – they allow us to only "partially" ground the predicates, enabling a smaller representation. Let M be a PLMG.
From it, we know that in every state at most one of the ground instances (i.e., regular facts) of its literals can be true. Instead of generating all these facts, we use the counted variables of the PLMG to implicitly denote which of the facts is currently true. To do so, we need to identify (1) from which atom in L(M) that fact was grounded and (2) what the values of the counted variables in it are. This allows for a more compact representation. In practice, the LMG inference by Fišer (2020) generates more LMGs than necessary, and notably overlapping LMGs. We use Helmert's (2009) greedy PLMG selection, which has been the standard for over 15 years; non-greedy alternatives typically yield negligible or even negative impact on performance. First, if the grounding of an LMG contains all facts of a predicate p, we select all PLMGs grounded from it and remove p from consideration. We repeat this process until no further such LMGs can be found. We then fully ground the remaining predicates (typically a small subset of all predicates, or none at all) and the remaining LMGs into PLMGs. We select PLMGs greedily, starting with the ones that cover the most still-uncovered facts, until no PLMGs that would cover new facts remain. As a result, we obtain a set P of PLMGs that we use for encoding. After this procedure, there might still be facts U not covered by any of the PLMGs. For simplicity, we assume that no PLMG contains two literals with the same predicate, i.e., that its literals can be identified by their predicate; this holds for all LMGs generated by Fišer (2020). For the encoding, we retain the variables f^t and the cause variables only for the uncovered facts in U. For them, we also retain the encoding previously described. We remove the variables pertaining to all other facts and encode those facts using PLMGs instead. For this, we introduce new variables:
• (c^M=o)^t: indicates whether the counted variable c of the PLMG M has the value o at time t.
• (p^M)^t: indicates that the literal of predicate p of PLMG M is true at time t.
• (v_j ∈ D(c^M))^t: indicates whether the object assigned to the unified argument v_j is of the same type as the counted variable c of PLMG M at time t.
• (c^M ≡ v_j)^t: indicates whether counted variable c of PLMG M has the same value as the unified argument v_j at time t.
• (c^M ← a_{p(v_1,…,v_n)})^t: indicates that action a has set the value of counted variable c^M via its effect p(v_1,…,v_n).
• (p^M ← a_{p(v_1,…,v_n)})^t: indicates that action a has set the predicate of PLMG M to p via its effect p(v_1,…,v_n).
• (c^M ≢ c^M)^t: indicates that the value of the counted variable c of PLMG M differs between times t−1 and t.
For notational purposes, we define the function O, which takes a literal of a PLMG as an argument and returns a set of pairs (o, j) for each object o at the j-th argument position of that literal. We similarly introduce V, which returns a set of pairs (v, j) for each variable v at the j-th argument position of that literal. First, we assert per PLMG that every counted variable has exactly one value and that exactly one of the literals is selected. This constraint is only correct for exactly-one PLMGs. For PLMGs that are not exactly-one, we add a new literal "none" to the PLMG that is true iff none of the other literals of the PLMG is true. With it, we can also force for these PLMGs that exactly one literal of the PLMG is true.
∀M ∈ P, ∀c ∈ C(M):
  atMostOne({(c^M=o)^t | o ∈ O^c}) ∧ ⋁_{o ∈ O^c} (c^M=o)^t   (6)
∀M ∈ P:
  atMostOne({(p^M)^t | p(x_1,…,x_m) ∈ L(M)})   (7)
  ⋁_{p(x_1,…,x_m) ∈ L(M)} (p^M)^t   (8)
As the second step, we need to correctly assert the initial state.
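The exactly-one structure of Constraints (6)–(8) combines an atMostOne with an at-least-one disjunction. A sketch using the pairwise quadratic atMostOne encoding (which the paper uses for up to 256 variables); the function names are assumptions for this illustration.

```python
# Sketch of the exactly-one constraints (6)-(8): pairwise at-most-one
# clauses plus a single at-least-one clause, over string literals.
from itertools import combinations

def at_most_one(xs):
    # one binary clause (not x OR not y) per unordered pair
    return [[f"-{x}", f"-{y}"] for x, y in combinations(xs, 2)]

def exactly_one(xs):
    # at-most-one clauses plus the disjunction of all variables
    return at_most_one(xs) + [list(xs)]
```

For a counted variable with a three-object domain, this yields three binary clauses and one ternary clause; the binary counter encoding (Sinz 2005) would be substituted for larger domains.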
For this, we iterate over all facts in the initial state and over each PLMG, and check whether the PLMG contains a matching literal; if so, we assert it. Matching here means that wherever both have objects at a given argument position, they are the same, and otherwise the object in the initial state's fact belongs to the type of the PLMG literal's variable. We also generate symmetric clauses for the goal, which we omit here for brevity.
∀p(o_1,…,o_m) ∈ I, ∀M ∈ P, ∀p(x_1,…,x_m) ∈ L(M):
  if ∀(o, j) ∈ O(p(x_1,…,x_m)): o = o_j and ∀(c, j) ∈ V(p(x_1,…,x_m)): o_j ∈ O^c then
  (p^M)^0 ∧ ⋀_{(c,j) ∈ V(p(x_1,…,x_m))} (c^M=o_j)^0   (9)
Thirdly, we have to assert that the preconditions of actions are actually met. This means that for every PLMG containing the precondition with its chosen arguments as a grounding, this grounding must be true in one of the PLMGs at that time step. Here, we do not access the values of the PLMG's counted variables directly, but instead assert the intermediate (c^M ≡ v_j)^t variables to be true, stating that a specific counted variable must equal a specific unified argument.
∀a ∈ A, ∀p(v_1,…,v_m) ∈ prec(a), ∀M ∈ P, ∀p(x_1,…,x_m) ∈ L(M):
  if ∀(o, j) ∈ O(p(x_1,…,x_m)): o ∈ O^{v_j} then
  a^t ∧ ⋀_{(o,j) ∈ O(p(x_1,…,x_m))} (v_j=o)^t ∧ ⋀_{(c,j) ∈ V(p(x_1,…,x_m))} (v_j ∈ D(c))^t
  ⟹ (p^M)^t ∧ ⋀_{(c,j) ∈ V(p(x_1,…,x_m))} (c^M ≡ v_j)^t   (10)
Similarly, we have to assert that the effects of actions are applied, setting the values of the counted variables of PLMGs to specific values. Note that an effect can be either positive (adding) or negative (deleting).
In the latter case, we need to negate the implication's right-hand side to ensure that the PLMG does not take the deleted value at the next time step.

$$\forall a \in A\ \ \forall (\neg)p(v_1,\dots,v_n) \in e\!f\!f(a)\ \ \forall \mathcal{M} \in P\ \ \forall p(x_1,\dots,x_m) \in L(\mathcal{M}):$$
$$\text{if } \forall (o,j) \in O(p(x_1,\dots,x_m)): o \in O^{v_j} \text{ then}$$
$$a^t \wedge \bigwedge_{(o,j) \in O(p(x_1,\dots,x_m))} (v_j = o)^t \wedge \bigwedge_{(c,j) \in V(p(x_1,\dots,x_m))} (v_j \in D(c))^t$$
$$\implies (\neg)\Big[(p^\mathcal{M})^{t+1} \wedge \bigwedge_{(c,j) \in V(p(x_1,\dots,x_m))} (c^\mathcal{M} \equiv v_j)^{t+1}\Big] \quad (11)$$

It is possible that for a PLMG an action with a delete effect on it but no add effect is executed. In this case, the PLMG becomes inactive, i.e., all of its ground facts are false. In our encoding, the literal "none" then becomes true, forced by Eq. 8. The frame axioms introduced next ensure that no other literal in the PLMG can be selected. For the LMGs generated via lifted fact-alternating mutex groups (Fišer 2020), it is further the case that once a PLMG has become inactive, it can never become active again, i.e., once the "none" literal is true, it will stay true (Fišer 2020, Thm. 6). We enforce this using the formula $(none^\mathcal{M})^t \implies (none^\mathcal{M})^{t+1}$. If one were to use LMGs generated by a different method, this constraint must not be added: the remaining encoding naturally allows "none" to become false again and sets the correct ground fact of the PLMG to true.

As the fourth step, we create the frame axioms, ensuring that the state does not change from one time step to the next, except if caused by an action's effects. For representing the state using PLMGs, this means that the value of a counted variable can only change if there is an action that has set it to a specific value (i.e., object).
There are two possible ways of encoding this: either we encode that for a variable to attain a value, an action must have set it to that value, or we encode that if the variable's value changed (to any value), an action must have set it. The first encoding is quite cumbersome and large, as it needs to create clauses for every possible value that the counted variable could have. Further, the encoding of effects already uses variables of the type $(c^\mathcal{M} \equiv v_j)^t$, which, if true, mean that the variable $c$ was set to a specific value, namely the value of the unified argument $v_j$. We first encode the causes that can change the variable's value, as expressed by the $(c^\mathcal{M} \leftarrow a_p(v_1,\dots,v_n))^t$ and $(p^\mathcal{M} \leftarrow a_p(v_1,\dots,v_n))^t$ variables, which we call cause variables.

$$\forall a \in A\ \ \forall p(v_1,\dots,v_n) \in e\!f\!f(a)\ \ \forall \mathcal{M} \in P\ \ \forall p(x_1,\dots,x_m) \in L(\mathcal{M}):$$
$$\text{if } \forall (o,j) \in O(p(x_1,\dots,x_m)): o \in O^{v_j} \text{ then}$$
$$a^t \wedge \bigwedge_{(o,j) \in O(p(x_1,\dots,x_m))} (v_j = o)^t \wedge \bigwedge_{(c,j) \in V(p(x_1,\dots,x_m))} (v_j \in D(c))^t$$
$$\implies (p^\mathcal{M} \leftarrow a_p(v_1,\dots,v_n))^{t+1} \wedge \bigwedge_{(c,j) \in V(p(x_1,\dots,x_m))} (c^\mathcal{M} \leftarrow a_p(v_1,\dots,v_n))^{t+1} \quad (12)$$

$$(p^\mathcal{M} \leftarrow a_p(v_1,\dots,v_n))^{t+1} \vee (c^\mathcal{M} \leftarrow a_p(v_1,\dots,v_n))^{t+1}$$
$$\implies a^t \wedge \bigwedge_{(o,j) \in O(p(x_1,\dots,x_m))} (v_j = o)^t \wedge \bigwedge_{(c,j) \in V(p(x_1,\dots,x_m))} (v_j \in D(c))^t \quad (13)$$

We have to connect the causes for a counted variable being set to the change of the value of that variable. Let $\mathbb{C}(c^\mathcal{M},t)$ be the set of all cause variables for the counted variable $c$. Any change in literal, except to "none", has to be caused.
$$\forall \mathcal{M} \in P\ \ \forall c \in \mathfrak{C}(\mathcal{M}):\ (c^\mathcal{M} \not\equiv c^\mathcal{M})^t \implies \bigvee_{x \in \mathbb{C}(c^\mathcal{M},t)} x \quad (14)$$

$$\forall p(x_1,\dots,x_m) \in L(\mathcal{M}):\ \neg(p^\mathcal{M})^t \wedge (p^\mathcal{M})^{t+1} \implies \bigvee_{x \in \mathbb{C}(p^\mathcal{M},t)} x \quad (15)$$

As the fifth and last step, we need to assert the semantics of the intermediate helper variables $(v_j \in D(c^\mathcal{M}))^t$, $(c^\mathcal{M} \equiv v_j)^t$, and $(c^\mathcal{M} \not\equiv c^\mathcal{M})^t$ that we used throughout the encoding, expressing that the value of a unified argument is within the type of a counted variable, that a counted variable has the same value as a unified argument, and that the value of a counted variable changed, respectively.

$$\forall \mathcal{M} \in P\ \ \forall c \in \mathfrak{C}(\mathcal{M})\ \ \forall v \in U:$$
$$\forall o \in O^v \setminus O^c:\ (v=o)^t \implies \neg(v \in D(c^\mathcal{M}))^t \quad (16)$$
$$\forall o \in O^v \cap O^c:\ (v=o)^t \implies (v \in D(c^\mathcal{M}))^t \quad (17)$$
$$\wedge\ (c^\mathcal{M}=o)^t \wedge (v=o)^t \implies (c^\mathcal{M} \equiv v)^t \quad (18)$$
$$\wedge\ (c^\mathcal{M} \equiv v)^t \implies \big[(c^\mathcal{M}=o)^t \Leftrightarrow (v=o)^t\big] \quad (19)$$
$$\forall o \in O^c \setminus O^v:\ (c^\mathcal{M}=o)^t \implies \neg(c^\mathcal{M} \equiv v)^t \quad (20)$$

$$\forall \mathcal{M} \in P\ \ \forall c \in \mathfrak{C}(\mathcal{M})\ \ \forall o \in O^c:$$
$$(c^\mathcal{M}=o)^{t-1} \wedge \neg(c^\mathcal{M}=o)^t \implies (c^\mathcal{M} \not\equiv c^\mathcal{M})^t \quad (21)$$

It is sufficient to encode only the "falling edge" in Eq. 21, as there is always exactly one $(c^\mathcal{M}=o)^t$ that is true.

Table 2: Coverage results for length-optimal (left block) and satisficing (right block) planning; maxima for each group are marked in bold in the original. Optimal columns: Fully (−, P), Partially (−, P), Binary (−, P), LiSAT, PWL A* (h¹, lmc), CPDDL A* (lmc), MpC (inv., −), FD A* lmc (−, AS). Satisficing columns: Binary (P), LiSAT, PWL (alt-bfws1), CPDDL gbfs (hff), MpC (inv., −), FD LAMA (−, AS).

Blocks (40): 21 21 15 15 40 40 40 0 2 34 4 4 12 12 | 40 40 22 3 4 0 12 12
Childsnack (144): 48 48 96 96 96 96 83 2 1 6 1 0 8 6 | 144 144 104 47 66 66 110 138
GED (312): 67 67 64 64 64 64 72 32 0 36 56 62 40 50 | 77 110 286 88 183 77 312 312
Logistics (40): 26 26 40 40 40 40 32 5 0 31 0 0 16 13 | 40 40 40 13 0 0 24 16
OS (56): 54 54 54 54 54 54 56 44 24 32 0 0 18 44 | 51 55 50 31 0 0 18 46
Pipesworld (50): 35 35 36 36 32 34 21 9 9 15 7 6 10 10 | 43 25 48 20 10 9 18 37
Rover (40): 5 5 16 16 12 12 4 2 0 3 0 0 5 5 | 12 4 40 40 0 0 12 12
Visitall (180): 29 34 57 110 56 115 107 73 30 62 18 28 71 68 | 178 176 147 121 18 56 90 90
Labyrinth (20): 8 8 8 8 8 8 12 1 0 1 0 2 1 8 | 8 12 5 0 3 1 15 3
Cov. (882): 293 298 386 439 402 463 427 168 66 220 86 102 181 216 | 593 606 742 363 284 209 611 666
Score (9): 4.07 4.10 5.05 5.34 5.49 5.86 5.32 1.72 0.83 3.12 0.53 0.68 1.97 2.72 | 6.71 6.51 7.11 3.63 1.59 1.25 4.90 5.17

Example 4. Consider the same planning problem as in the previous example. Assume there are the following PLMGs: $\mathcal{M}: \langle \{p\}, \{?v,?l\}, \{in(p,?v), at(p,?l)\}\rangle$ and $m: \langle \{v\}, \{?l\}, \{at(v,?l)\}\rangle$.

• Precondition check:
– $drop^1 \wedge (v_4=p)^1 \wedge (v_1 \in D(v\_cnt))^1 \implies (in^\mathcal{M})^0 \wedge (v\_cnt \equiv v_1)^0$
– $drop^1 \wedge (v_1=v)^1 \wedge (v_2 \in D(l\_cnt))^1 \implies (at^m)^0 \wedge (l\_cnt \equiv v_2)^0$
• Effects: Same logic as the preconditions, just a different time step on the right-hand side of the implication.
• Frame axioms:
– $drop^1 \wedge (v_4=p)^1 \wedge (v_2 \in D(l\_cnt))^1 \implies (at^\mathcal{M} \leftarrow drop)^1 \wedge (l\_cnt \leftarrow drop)^1$
– $(at^\mathcal{M} \leftarrow drop)^1 \vee (l\_cnt \leftarrow drop)^1 \implies drop^1 \wedge (v_4=p)^1 \wedge (v_2 \in D(l\_cnt))^1$
• Connecting cause variables:
– $(diff\_l\_cnt)^1 \implies (l\_cnt \leftarrow drop)^1$
– $\neg(at^\mathcal{M})^0 \wedge (at^\mathcal{M})^1 \implies (at^\mathcal{M} \leftarrow drop)^1$
• Asserting intermediate variables:
– $(v_2=l)^1 \implies (v_2 \in D(l\_cnt))^1$
– $(l\_cnt \equiv v_2)^1 \wedge (l\_cnt=l)^1 \implies (v_2=l)^1$
– $(l\_cnt \equiv v_2)^1 \wedge (v_2=l)^1 \implies (l\_cnt=l)^1$
– $(l\_cnt=l)^1 \wedge (v_2=l)^1 \implies (l\_cnt \equiv v_2)^1$
– $(l\_cnt=l)^0 \wedge \neg(l\_cnt=l)^1 \implies (diff\_l\_cnt)^1$

Although it cannot be demonstrated in a simple example such as this one, constraints (16) and (20) as well as the variables $(v_j \in D(c^\mathcal{M}))^t$ are fundamental for the encoding in cases where the type of a counted variable in a PLMG is a subtype of that of the corresponding unified argument, ensuring that impossible implications are not forced.

Binary Encoding of Objects

In the encoding with partially grounded facts, the information density of the variables $(c^\mathcal{M}=o)^t$ is fairly small, since at most one of the variables for the admissible objects $o$ is true. As such, this encoding requires linearly many variables, which can be a hindrance, especially in planning problems with many objects. Given a set of objects $O$, it is possible to encode the information contained in the $(c^\mathcal{M}=o)^t$ variables with only $\lceil \log_2(|O|) \rceil$ variables via a bit representation. This increases the information density of valuations of the SAT formula we generate and simultaneously decreases the overall formula size in terms of the number of clauses. This observation was, as far as we know, first made by Ernst et al. (1997), who applied it only to the set of grounded actions.
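As an illustration of this bit-representation idea, the following sketch (our own, with hypothetical DIMACS variable ids, not the paper's implementation) emits the clauses that tie a one-hot variable for an object to its binary bit variables:

```python
import math

def beta(i, b):
    """True iff the b-th bit (b = 0 is least significant) of i is 1."""
    return (i >> b) & 1 == 1

def channel_clauses(v_onehot, v_bits, num_objects):
    """For every object o_i and bit b, emit (v = o_i) -> (+/-)v_b, where
    the bit literal is negated when beta(i, b) is false.  v_onehot[i] is
    the id of (v = o_i)^t, v_bits[b] the id of v_b^t (both hypothetical)."""
    B = max(1, math.ceil(math.log2(num_objects)))
    clauses = []
    for i in range(num_objects):
        for b in range(B):
            bit = v_bits[b] if beta(i, b) else -v_bits[b]
            clauses.append([-v_onehot[i], bit])
    return clauses

# Four objects need B = 2 bits; object o_2 has bit pattern 10.
cls = channel_clauses([1, 2, 3, 4], [5, 6], 4)
assert [-3, -5] in cls and [-3, 6] in cls  # o_2: bit 0 is 0, bit 1 is 1
```

The same channeling shape reappears below as Eq. (22).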
To simplify formula generation, we initially assume that every variable can have any object as its value. Further, we assume that the set of objects $O$ is ordered and that the objects are numbered $o_0$ to $o_{|O|-1}$. To denote a specific object, $B = \lceil \log_2(|O|) \rceil$ variables are needed. We introduce two new types of variables replacing $(v=o)^t$ and $(c^\mathcal{M}=o)^t$:

• $v_b^t$: indicates that the $b$th bit of the index of the object assigned to unified argument $v$ at time $t$ is true.
• $(c^\mathcal{M})_b^t$: indicates that the $b$th bit of the index of the object assigned to variable $c^\mathcal{M}$ at time $t$ is true.

For notation, we define the function $\beta(x,b)$ to be true if the $b$th bit of $x$ is 1 and false otherwise. Similarly, we write $(\neg)^p$, where $p$ is a truth value, to indicate that a negation should be present if $p$ is true and absent if $p$ is false. We write $\mathcal{B}$ for the set $\{1,\dots,B\}$.

We start by connecting the values of the variables representing unified arguments $(v=o)^t$ to the $v_b^t$ variables.

$$\forall v\ \ \forall o_i \in O\ \ \forall b \in \mathcal{B}:\ (v=o_i)^t \implies (\neg)^{\neg\beta(i,b)}\, v_b^t \quad (22)$$

It would be possible to entirely remove the one-hot representation of the variables $(v_j=o)^t$ for the unified arguments and to remove the clauses of Eq. 22. The reduction from doing so is significantly less noticeable than from replacing the $(c^\mathcal{M}=o)^t$ variables, as the unified arguments only account for a fraction of the overall encoded variables. On the other hand, encoding constraints on the unified arguments at the action level (static preconditions, inequalities) becomes more complicated with a binary-only representation, which prompted us to retain the one-hot encoding here.

Secondly, we have to encode for each variable $c^\mathcal{M}$ that the object identified by the bitmask in the $(c^\mathcal{M})_b^t$ variables is of the type of that variable. Let this type be $t(c)$.
After preprocessing, we know that the objects of type $t(c)$ form a subsequence of the sequence of all objects, i.e., there are upper and lower bounds $u_{t(c)}$ and $\ell_{t(c)}$ such that the type $t(c)$ comprises exactly the objects $\{o_i \mid \ell_{t(c)} \le i \le u_{t(c)}\}$. To implement these restrictions, we add the following clauses. Note that bit 0 is the least significant bit of the representation.

$$\forall \mathcal{M} \in P\ \ \forall c \in \mathfrak{C}(\mathcal{M})\ \ \forall b \in \mathcal{B}:$$
$$\text{if } \beta(\ell_{t(c)},b):\ \bigwedge_{b' \in \{b+1,\dots,B\}} (\neg)^{\neg\beta(\ell_{t(c)},b')} (c^\mathcal{M})_{b'}^t \implies (c^\mathcal{M})_b^t \quad (23)$$
$$\text{if } \neg\beta(u_{t(c)},b):\ \bigwedge_{b' \in \{b+1,\dots,B\}} (\neg)^{\neg\beta(u_{t(c)},b')} (c^\mathcal{M})_{b'}^t \implies \neg(c^\mathcal{M})_b^t \quad (24)$$

Thirdly, we have to replace the clauses defining the semantics of $(c^\mathcal{M} \equiv v_j)^t$. We introduce a new type of variable to express bit-wise equality:

• $(c^\mathcal{M} \equiv v_j)_b^t$: indicates that the $b$th bit of the indices of the objects assigned to $c^\mathcal{M}$ and $v$ at time $t$ is the same.

If the $b$th bits of $c^\mathcal{M}$ and $v$ are equal, $(c^\mathcal{M} \equiv v_j)_b^t$ is true, and if all $(c^\mathcal{M} \equiv v_j)_b^t$ are true, $c^\mathcal{M}$ and $v$ are equal.

$$\forall \mathcal{M} \in P\ \ \forall c \in \mathfrak{C}(\mathcal{M})\ \ \forall v_j \in U\ \ \forall b \in \mathcal{B}:$$
$$v_b^t \wedge (c^\mathcal{M})_b^t \implies (c^\mathcal{M} \equiv v_j)_b^t \quad (25)$$
$$\neg v_b^t \wedge \neg(c^\mathcal{M})_b^t \implies (c^\mathcal{M} \equiv v_j)_b^t \quad (26)$$
$$\Big(\bigwedge_{b \in \mathcal{B}} (c^\mathcal{M} \equiv v_j)_b^t\Big) \implies (c^\mathcal{M} \equiv v_j)^t \quad (27)$$

Conversely, if $(c^\mathcal{M} \equiv v_j)^t$ is true, all bits of $c^\mathcal{M}$ and $v$ must be equal.

$$(c^\mathcal{M})_b^t \wedge (c^\mathcal{M} \equiv v_j)^t \implies v_b^t \quad (28)$$
$$v_b^t \wedge (c^\mathcal{M} \equiv v_j)^t \implies (c^\mathcal{M})_b^t \quad (29)$$

Lastly, we have to replace the clauses defining the semantics of the $(c^\mathcal{M} \not\equiv c^\mathcal{M})^t$ variables, encoding that the value of $c^\mathcal{M}$ changed from time $t$ to time $t+1$. We need to introduce an auxiliary variable for every bit:

• $(c^\mathcal{M} \not\equiv c^\mathcal{M})_b^t$: indicates that the $b$th bit of the value of variable $c^\mathcal{M}$ differs between times $t$ and $t+1$.
We then add the following clauses:

$$\forall \mathcal{M} \in P\ \ \forall c \in \mathfrak{C}(\mathcal{M}):$$
$$(c^\mathcal{M} \not\equiv c^\mathcal{M})^t \implies \bigvee_{b \in \mathcal{B}} (c^\mathcal{M} \not\equiv c^\mathcal{M})_b^t \quad (30)$$
$$\forall b \in \mathcal{B}:\ (c^\mathcal{M} \not\equiv c^\mathcal{M})_b^t \implies (c^\mathcal{M} \not\equiv c^\mathcal{M})^t \quad (31)$$
$$\forall b \in \mathcal{B}:\ (c^\mathcal{M})_b^t \wedge \neg(c^\mathcal{M})_b^{t+1} \implies (c^\mathcal{M} \not\equiv c^\mathcal{M})_b^t \quad (32)$$
$$\forall b \in \mathcal{B}:\ \neg(c^\mathcal{M})_b^t \wedge (c^\mathcal{M})_b^{t+1} \implies (c^\mathcal{M} \not\equiv c^\mathcal{M})_b^t \quad (33)$$

Experimental Evaluation

The experiments were conducted on an AMD EPYC 9654 with a memory limit of 8 GB and a time limit of 1800 seconds. A snapshot of the data and code used is archived at (Filipe and Behnke 2026). The most up-to-date version of the codebase is maintained in a public repository (https://github.com/galvusdamor/PGSAT). The results were obtained from the standard benchmark set for lifted planning (https://github.com/abcorrea/htg-domains) and validated with VAL (https://github.com/KCL-Planning/VAL). These benchmarks do not contain action costs, i.e., in our evaluation, length-optimal is the same as cost-optimal. Tab. 2 presents the results of running multiple planners in optimal (left side) and satisficing (right side) modes. Fig. 1(c) shows a cactus plot for the optimal mode. Our encodings were run with and without predicate pruning (P) and use the Kissat SAT solver (Biere et al. 2024), the same solver employed by LiSAT. For the satisficing mode, we used the iteration algorithm proposed by Höller and Behnke (2022): attempt to solve formulas for the bounds 10, 25, 50, 100, and 200 iteratively. Each bound is allocated a time budget of remaining_run_time / |remaining_bounds|, after which the next bound is attempted.
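The bound-iteration budgeting just described can be sketched as follows; `schedule_bounds` is our own illustrative helper (not the authors' code), and we assume the worst case, in which every budget is fully consumed:

```python
def schedule_bounds(total_time, bounds=(10, 25, 50, 100, 200)):
    """Per-bound time budgets: each bound receives an equal share of the
    time still remaining, i.e. remaining_run_time / |remaining_bounds|.
    A real run would call the SAT solver on the formula for each
    plan-length bound within its budget."""
    remaining = float(total_time)
    budgets = []
    for i, bound in enumerate(bounds):
        budget = remaining / (len(bounds) - i)
        budgets.append((bound, budget))
        remaining -= budget  # worst case: the full budget is consumed
    return budgets

# With 1800 s and five bounds, every bound gets 360 s in the worst case.
assert all(abs(b - 360.0) < 1e-9 for _, b in schedule_bounds(1800))
```

If a bound finishes early, the unused time automatically rolls over, since later shares are computed from the actual remaining time.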
Figure 1: Comparison of formula size and coverage: (a) number of variables (log-log), (b) number of clauses (log-log), (c) coverage.
Figure 2: Ratio comparisons of the number of variables (a) and clauses (b) with respect to a formula of length 1 (log-log).
Figure 3: Solve-time scatter plot.

We compare against the state-of-the-art lifted planners Powerlifted (PWL) and CPDDL, as well as the grounded planners Madagascar (MpC) and Fast Downward (FD). For PWL, we used A* search with the $h^{max}$ heuristic (optimal mode), and alt-bfws1 and $h^{ff}$ (satisficing mode). We also used the optimal configuration (PWL A* lmc) from Wichlacz et al. (2023), which presents a lifted version of the lm-cut (lmc) heuristic (Helmert and Domshlak 2009), but could not fully reproduce their results even with technical support from an author; nevertheless, we include them for reference. We use CPDDL with the best optimal and satisficing configurations reported by Horčík and Fišer (2023) (Gaifman graphs with 95% and 25% reduction, A* with $h^{lm\text{-}cut}$ and GBFS with $h^{ff}$, respectively). For reference, we included two grounded planners: Madagascar (the state-of-the-art grounded SAT-based planner, with and without invariant computation; no parallelism and algorithm S in optimal mode) and Fast Downward (Helmert 2006) (with and without action splitting (Elahi and Rintanen 2024)), using A* search with the lm-cut heuristic (Helmert and Domshlak 2009) in optimal mode and the LAMA configuration in satisficing mode. Coverage is the total number of instances solved across all domains, while the score is the sum of per-domain percentages of solved instances. We do not compare against the approach by Robinson et al. (2009), as it is not meant for optimal planning (it allows for action parallelism) and no implementation is publicly available, nor against the QBF-based planner by Shaik and van de Pol (2022), as it was outperformed by LiSAT.
Results – Optimal

In the optimal setting, search-based lifted methods are clearly outperformed by SAT-based methods: even the best such approach (CPDDL A* lmc) performs worse than our worst SAT-based approach (Fully, no P). The benefits of predicate pruning are most evident in Visitall. Although only one predicate (visited) is pruned, this alone eliminates approximately half of the grounded facts. Since this predicate is not part of any PLMG, all its facts require (expensive) explicit encoding, while all other non-static facts (of the at-robot predicate) form a single PLMG. Here, predicate pruning substantially reduces the number of facts that are not encoded compactly using PLMGs but explicitly (only the goal remains). In Rovers, three unary predicates (all of the type communicated_X_data) are pruned; the number of removed facts is not as significant, which explains the lack of improvement. For Pipesworld (Binary), the observed difference is not caused by predicate pruning: the discrepancy is due to one encoding failing to find a satisfying assignment within the time limit.

Three of our encodings achieve a higher score than LiSAT, and two of them also have higher coverage. Furthermore, they fall short of LiSAT's performance in only three domains (GED, OS, Labyrinth), yet they remain competitive in two of them (GED, OS), with the difference in solved instances being less than 5% of the total instances in the domain. In contrast, of the five domains where our encoding Binary P outperforms LiSAT, four exhibit a ≥9% increase in solved instances, with notable improvements in Logistics, Pipesworld, and Rover, where the difference is ≥20%.

Results – Satisficing

For brevity, we only report our best encoding (Binary, P) here, which is the second best in terms of score, outperformed only by PWL. Although we underperform in terms of coverage compared to FD, LiSAT, and PWL, this is primarily due to GED, whose instance count skews coverage.
Importantly, our planner (and LiSAT) outperforms the other lifted and grounded satisficing planners in the domains Blocksworld, Childsnack, Visitall, Labyrinth, and OS. This indicates that SAT-based planners (ours and LiSAT) provide complementary capabilities to current search-based lifted planners, which would, e.g., be useful as part of a portfolio planner.

Results – Formula Size

Figure 1 compares the number of clauses and variables between Binary P and LiSAT. Interestingly, Binary P always generates more clauses than LiSAT for the same plan length. Even in the best case (Childsnack), our encoding produces less than one order of magnitude more clauses than LiSAT, while in most cases the difference is one to two orders of magnitude. This is due to the increased size of the encoding needed to correctly encode the semantics of PLMGs. On the other hand, Binary P requires fewer variables in most instances, sometimes by up to two orders of magnitude. The trend that our encoding scales linearly while LiSAT's scales quadratically is, however, visible from LiSAT's more rapid growth in the number of variables and clauses. For the number of clauses, this is particularly evident in OS (light blue). We provide additional plots showing the growth of the formulas more clearly in the appendix.

Figure 2 presents log-log plots comparing, for the predicate-pruned Binary encoding and LiSAT, the growth of the number of variables and clauses of a formula of any length relative to the formula of length 1. For this, we have taken, for every planning problem, the number of clauses (variables, respectively) of the encoding with plan length one: $N_1$. Then, for any higher plan length, we divided the number of clauses (variables, resp.) by $N_1$. With this, we normalised the growth of the formula w.r.t. the formula for plan length 1. This enables us to more easily compare instances of different size (number of objects) and across domains (different number and arity of predicates and actions).
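The normalisation step amounts to a few lines; `normalised_growth` is a hypothetical helper, and the counts below are made-up illustrative numbers:

```python
def normalised_growth(counts_by_length):
    """Normalise formula sizes by the size at plan length 1, as used for
    the log-log growth plots: ratio[L] = N_L / N_1."""
    n1 = counts_by_length[1]
    return {L: n / n1 for L, n in counts_by_length.items()}

# Under linear growth the ratio doubles when the plan length doubles;
# under quadratic growth it roughly quadruples.
linear = {1: 1000, 2: 2000, 4: 4000}
assert normalised_growth(linear) == {1: 1.0, 2: 2.0, 4: 4.0}
```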
In terms of both the number of clauses and the number of variables, our encoding exhibits linear growth with the plan length while LiSAT exhibits quadratic growth, as expected. Additionally, in LiSAT's formulas the growth also depends on the instance within each domain, with harder instances exhibiting larger growth, while our encoding shows very little variance in scaling across domains.

Runtime

Figure 3 presents a scatter plot of the solve times of both planners. Interestingly, there is no clear dominance between LiSAT and Binary P, but a dependence on the domain (e.g., Visitall for Binary P and Blocksworld for LiSAT). This suggests that both approaches exhibit orthogonal capabilities.

Conclusion

We presented three encodings of partially lifted planning into SAT, keeping actions fully lifted while partially grounding the state description. Unlike previous encodings, whose formula size scaled quadratically with plan length, ours scales linearly. Empirically, we have shown that our partially grounded and binary encodings outperform the state of the art in length-optimal planning, LiSAT, and are competitive with it in satisficing planning. Further, our new encodings provide complementary capabilities to current state-of-the-art lifted satisficing planners, as they solve problems that are out of reach for search-based planners. While our encodings target sequential plans, future work could explore integrating action parallelism. This may improve scalability and broaden the applicability of partially grounded encodings, especially in satisficing settings. The encodings may also be extended to support negative preconditions and conditional effects.

Acknowledgements

This publication is part of the project "Exploiting Problem Structures in SAT-based Planning" with file number OCENW.M.22.050 of the research programme Open Competition, which is financed by the Dutch Research Council (NWO).

References

A. Biere, T. Faller, K. Fazekas, M. Fleury, N. Froleyks, and F. Pollitt (2024). CaDiCaL, Gimsatul, IsaSAT and Kissat entering the SAT Competition 2024. In Proc. of SAT Competition 2024 – Solver, Benchmark and Proof Checker Descriptions, pp. 8–10.
A. B. Corrêa, F. Pommerening, M. Helmert, and G. Francès (2020). Lifted successor generation using query optimization techniques. In Proc. ICAPS 2020, pp. 80–89.
M. Elahi and J. Rintanen (2024). Optimizing the optimization of planning domains by automatic action schema splitting. In Proc. AAAI 2024, pp. 20096–20103.
M. D. Ernst, T. D. Millstein, and D. S. Weld (1997). Automatic SAT-compilation of planning problems. In Proc. IJCAI 1997, pp. 1169–1176.
J. Espasa, J. Coll, I. Miguel, and M. Villaret (2021). Exploring lifted planning encodings in Essence Prime. In Proc. of the 23rd International Conference of the Catalan Association for Artificial Intelligence, pp. 66–75.
R. E. Fikes and N. J. Nilsson (1971). STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence 2, pp. 189–208.
J. Filipe and G. Behnke (2026). Data and code: When both Grounding and not Grounding are Bad – A Partially Grounded Encoding of Planning into SAT. https://doi.org/10.5281/zenodo.19067374
D. Fišer (2020). Lifted fact-alternating mutex groups and pruned grounding of classical planning problems. In Proc. AAAI 2020, pp. 9835–9842.
M. Helmert and C. Domshlak (2009). Landmarks, critical paths and abstractions: What's the difference anyway? In Proc. ICAPS 2009, pp. 162–169.
M. Helmert (2006). The Fast Downward planning system. Journal of Artificial Intelligence Research 26, pp. 191–246.
M. Helmert (2009). Concise finite-domain representations for PDDL planning tasks. Artificial Intelligence 173, pp. 503–535.
D. Höller and G. Behnke (2022). Encoding lifted classical planning in propositional logic. In Proc. ICAPS 2022, pp. 134–144.
R. Horčík and D. Fišer (2023). Gaifman graphs in lifted planning. In Proc. ECAI 2023, pp. 1052–1059.
R. Huang, Y. Chen, and W. Zhang (2012). SAS+ planning as satisfiability. Journal of Artificial Intelligence Research 43, pp. 293–328.
H. Kautz, D. McAllester, and B. Selman (1996). Encoding plans in propositional logic. In Proc. KR 1996, pp. 374–384.
H. Kautz, B. Selman, and J. Hoffmann (2006). SatPlan: Planning as satisfiability. In Fifth International Planning Competition (IPC-5): Planner Abstracts.
H. Kautz and B. Selman (1992). Planning as satisfiability. In Proc. ECAI 1992, pp. 359–363.
H. Kautz and B. Selman (1996). Pushing the envelope: Planning, propositional logic, and stochastic search. In Proc. AAAI 1996, pp. 1194–1201.
C. J. Muise, S. A. McIlraith, and J. C. Beck (2012). Improved non-deterministic planning by exploiting state relevance. In Proc. ICAPS 2012, pp. 172–180.
J. S. Penberthy and D. S. Weld (1992). UCPOP: A sound, complete, partial order planner for ADL. In Proc. KR 1992, pp. 103–114.
J. Rintanen (2011). Madagascar: Scalable planning with SAT. In IPC 2011 Planner Abstracts, pp. 61–64.
J. Rintanen (2012). Planning as satisfiability: Heuristics. Artificial Intelligence 193, pp. 45–86.
N. Robinson, C. Gretton, D. N. Pham, and A. Sattar (2009). SAT-based parallel planning using a split representation of actions. In Proc. ICAPS 2009, pp. 281–288.
I. Shaik and J. van de Pol (2022). Classical planning as QBF without grounding. In Proc. ICAPS 2022, pp. 329–337.
C. Sinz (2005). Towards an optimal CNF encoding of Boolean cardinality constraints. In Proc. CP 2005, LNCS 3709, pp. 827–831.
J. Wichlacz, D. Höller, D. Fišer, and J. Hoffmann (2023). A landmark-cut heuristic for lifted optimal planning. In Proc. ECAI 2023, pp. 2623–2630.