
Paper deep dive

Modal Logic for Distributed Trust

Niels Voorneveld, Peeter Laud

Year: 2026 | Venue: arXiv preprint | Area: cs.LO | Type: Preprint | Embeddings: 176

Intelligence

Status: succeeded | Model: anthropic/claude-sonnet-4.6 | Prompt: intel-v1 | Confidence: 93%

Last extracted: 3/24/2026, 1:36:17 AM

Summary

This paper proposes a modal logic framework for reasoning about trust in multi-agent distributed systems. It introduces belief modalities (B_a) and interaction modalities (I_{a←b}) to describe agent beliefs and communications, defines modal splitting axioms for information sharing, and establishes trust derivations including chains of trust. The framework is applied to public key infrastructures (PKI), certificate authorities, and decentralized identity systems. The logic is constructive (intuitionistic), supports a lambda calculus of proofs via Curry-Howard correspondence, and is designed to be decidable. Key contributions include axioms for indirect communication, trust levels for delegation chains, and accountability mechanisms for distributed protocols.

Entities (33)

Estonian Research Council · organization · 100%
Modal Logic for Distributed Trust · paper · 100%
Niels Voorneveld · person · 100%
PRG1780 · grant · 100%
Peeter Laud · person · 100%
Certificate Authority (CA) · concept · 98%
Public Key Infrastructure (PKI) · technology · 98%
Belief Modality · logicalconstruct · 97%
Interaction Modality · logicalconstruct · 97%
Intuitionistic Modal Logic · logicsystem · 97%
Axiom K · logicalaxiom · 95%
Curry-Howard Correspondence · concept · 95%
DKAL · system · 95%
Kripke Semantics · concept · 95%
Modal Splitting Axiom · logicalconstruct · 95%
Multi-Agent System · concept · 95%
Self-Sovereign Identity · concept · 95%
Authorization Logic · logicsystem · 93%
Constructive Logic · logicsystem · 93%
Decentralized Identity · concept · 93%
Lambda Calculus of Proofs · concept · 93%
Awareness Axiom · logicalconstruct · 92%
Cautious Trust · concept · 90%
DKAL 2 · system · 90%
Decidability · concept · 90%
Epistemic Logic · logicsystem · 88%
Forwarding · concept · 88%
Liau · person · 88%
Liberal Datalog · formalsystem · 88%
Trust Management System · concept · 88%
Trust Threshold · concept · 88%
Axiom D · logicalaxiom · 85%
S4 Modal Logic · logicsystem · 85%

Relation Signals (28)

Niels Voorneveld authored Modal Logic for Distributed Trust

confidence 100% · Niels Voorneveld Peeter Laud — listed as authors

Peeter Laud authored Modal Logic for Distributed Trust

confidence 100% · Niels Voorneveld Peeter Laud — listed as authors

Modal Logic for Distributed Trust fundedby Estonian Research Council

confidence 100% · This research has been supported by Estonian Research Council, grant No. PRG1780.

Modal Logic for Distributed Trust usesgrant PRG1780

confidence 100% · grant No. PRG1780

Modal Logic for Distributed Trust appliedto Public Key Infrastructure (PKI)

confidence 97% · We see how this can be applied to trust models in public key infrastructures.

Modal Logic for Distributed Trust introduces Belief Modality

confidence 97% · For each a∈A, the modality B_a signifies what a believes.

Modal Logic for Distributed Trust introduces Interaction Modality

confidence 97% · For each pair a,b∈A, the modality I_{a←b} signifies interaction between a and b.

Modal Logic for Distributed Trust proposes Intuitionistic Modal Logic

confidence 97% · We specifically look at intuitionistic modal logic, in order to keep proofs constructive and expressible by various lambda calculi via the Curry-Howard correspondence.

Modal Logic for Distributed Trust appliedto Certificate Authority (CA)

confidence 95% · We employ our logic as a tool for specifying the function and duties of entities in distributed networks, like CAs in public key infrastructures.

Intuitionistic Modal Logic hassemantics Kripke Semantics

confidence 95% · the Kripke semantics of our logic gives us greater confidence in our axioms, inference rules, and proof search methods.

Modal Logic for Distributed Trust introduces Modal Splitting Axiom

confidence 95% · A core building block of our logic is the consideration of modal splitting axioms.

Intuitionistic Modal Logic uses Curry-Howard Correspondence

confidence 95% · in order to keep proofs constructive and expressible by various lambda calculi via the Curry-Howard correspondence.

Modal Logic for Distributed Trust uses Constructive Logic

confidence 95% · Constructive logics have many useful applications in computer science and security information systems... we use a constructive modal logic.

Modal Logic for Distributed Trust appliedto Decentralized Identity

confidence 93% · One of the main motivations for this paper is its applications to decentralized identities.

Modal Logic for Distributed Trust comparedto Authorization Logic

confidence 93% · The work has a strong connection to authorization and access logics (AL).

Intuitionistic Modal Logic enables Lambda Calculus of Proofs

confidence 93% · We give specifications for the modal logic which can be readily adapted into a lambda calculus of proofs.

Modal Logic for Distributed Trust comparedto DKAL

confidence 92% · In terms of treatment of trust itself, DKAL is perhaps the closest to our logic.

Modal Logic for Distributed Trust achieves Decidability

confidence 90% · In order to facilitate decidability, we limit ourselves to a specified set of modal axioms, as well as avoiding the use of polymorphism and quantifiers.

Liau defined Cautious Trust

confidence 90% · We ground our work on the definition of cautious trust by Liau, given in terms of belief and communication.

Modal Logic for Distributed Trust defines Forwarding

confidence 90% · We define how information in the network can be shared via forwarding.

Modal Logic for Distributed Trust groundedin Cautious Trust

confidence 90% · We ground our work on the definition of cautious trust by Liau, given in terms of belief and communication.

Self-Sovereign Identity motivates Modal Logic for Distributed Trust

confidence 90% · the emergence of self-sovereign identity acutely necessitates the ability to assign truth values to complex statements about and including trust.

Belief Modality satisfies Axiom K

confidence 90% · we use modalities satisfying the axiom K and the necessity rule

Interaction Modality satisfies Axiom K

confidence 90% · DKAL mostly focuses on parametric modalities satisfying axiom K and Necessity

DKAL 2 extends DKAL

confidence 88% · Their follow-up DKAL 2 does have a Kripke model, but lacks some of the expressivity discussed above.

Modal Logic for Distributed Trust models Trust Threshold

confidence 88% · threshold trust can be employed to deal with issues of unresponsiveness or corruption of CAs.

Modal Logic for Distributed Trust specializes Trust Management System

confidence 88% · We consider our logic to be a specialization of former work on trust management systems in the direction of distributed systems with defined information access.

DKAL uses Liberal Datalog

confidence 88% · DKAL instead has a translation to Liberal Datalog for deriving truth.

Cypher Suggestions (8)

Find all papers funded by Estonian Research Council · confidence 95% · unvalidated

MATCH (paper:Entity {entity_type: 'Paper'})-[:FUNDED_BY]->(org:Entity {name: 'Estonian Research Council'}) RETURN paper.name

Find all papers authored by Niels Voorneveld or Peeter Laud · confidence 92% · unvalidated

MATCH (p:Entity {entity_type: 'Paper'})<-[:AUTHORED]-(a:Entity {entity_type: 'Person'}) WHERE a.name IN ['Niels Voorneveld', 'Peeter Laud'] RETURN p.name, a.name

Find all application domains of this paper · confidence 90% · unvalidated

MATCH (paper:Entity {name: 'Modal Logic for Distributed Trust'})-[:APPLIED_TO]->(domain:Entity) RETURN domain.name, domain.entity_type

Find all logical constructs introduced by this paper · confidence 90% · unvalidated

MATCH (paper:Entity {name: 'Modal Logic for Distributed Trust'})-[:INTRODUCES]->(lc:Entity) RETURN lc.name, lc.entity_type

Find all systems or logics that this paper compares itself to · confidence 90% · unvalidated

MATCH (paper:Entity {name: 'Modal Logic for Distributed Trust'})-[:COMPARED_TO]->(other:Entity) RETURN other.name, other.entity_type

Find all concepts that motivated this paper · confidence 88% · unvalidated

MATCH (concept:Entity)-[:MOTIVATES]->(paper:Entity {name: 'Modal Logic for Distributed Trust'}) RETURN concept.name, concept.entity_type

Find all axioms used or satisfied in this paper's logic · confidence 87% · unvalidated

MATCH (e:Entity)-[:SATISFIES]->(ax:Entity {entity_type: 'LogicalAxiom'}) RETURN e.name, ax.name

Find the full chain: author -> paper -> logic system -> semantics · confidence 85% · unvalidated

MATCH (a:Entity {entity_type: 'Person'})-[:AUTHORED]->(p:Entity {name: 'Modal Logic for Distributed Trust'})-[:PROPOSES]->(ls:Entity {entity_type: 'LogicSystem'})-[:HAS_SEMANTICS]->(sem:Entity) RETURN a.name, p.name, ls.name, sem.name
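The suggested queries above run against the graph backend, but what they compute can be illustrated directly from the relation signals listed on this page. The following Python sketch transcribes a few of those triples (subject, relation, object) and mirrors the `APPLIED_TO` pattern of the "application domains" suggestion; the `signals` list and `objects_of` helper are illustrative, not part of the page's actual data store.

```python
# A few relation signals transcribed from this page, as (subject, relation, object).
signals = [
    ("Modal Logic for Distributed Trust", "appliedto", "Public Key Infrastructure (PKI)"),
    ("Modal Logic for Distributed Trust", "appliedto", "Certificate Authority (CA)"),
    ("Modal Logic for Distributed Trust", "appliedto", "Decentralized Identity"),
    ("Modal Logic for Distributed Trust", "introduces", "Belief Modality"),
    ("Modal Logic for Distributed Trust", "introduces", "Interaction Modality"),
    ("Modal Logic for Distributed Trust", "introduces", "Modal Splitting Axiom"),
]

def objects_of(subject, relation):
    """Plain-Python mirror of the Cypher pattern (subject)-[:REL]->(object)."""
    return [o for s, r, o in signals if s == subject and r == relation]

domains = objects_of("Modal Logic for Distributed Trust", "appliedto")
print(domains)
```

Running this yields the same three application domains that the `APPLIED_TO` Cypher suggestion would return over a graph populated from these signals.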

Abstract

Abstract: We propose a method for reasoning about trust in multi-agent systems, specifying a language for describing communication protocols and making trust assumptions and derivations. This is given an interpretation in a modal logic for describing the beliefs and communications of agents in a network. We define how information in the network can be shared via forwarding, and how trust between agents can be generalized to trust across networks. We give specifications for the modal logic which can be readily adapted into a lambda calculus of proofs. We show that by nesting modalities, we can describe chains of communication between agents, and establish suitable notions of trust for such chains. We see how this can be applied to trust models in public key infrastructures, as well as other interaction protocols in distributed systems.

Tags

ai-safety (imported, 100%) · cslo (suggested, 92%) · preprint (suggested, 88%)

Links

Open PDF directly →

Full Text

175,814 characters extracted from source content.


Modal Logic for Distributed Trust
† This research has been supported by Estonian Research Council, grant No. PRG1780.
Niels Voorneveld, Peeter Laud

Abstract: We propose a method for reasoning about trust in multi-agent systems, specifying a language for describing communication protocols and making trust assumptions and derivations. This is given an interpretation in a modal logic for describing the beliefs and communications of agents in a network. We define how information in the network can be shared via forwarding, and how trust between agents can be generalized to trust across networks. We give specifications for the modal logic which can be readily adapted into a lambda calculus of proofs. We show that by nesting modalities, we can describe chains of communication between agents, and establish suitable notions of trust for such chains. We see how this can be applied to trust models in public key infrastructures, as well as other interaction protocols in distributed systems.

1 Introduction

When protocols are distributed across multiple parties, you must consider those parties in their proper context in order to make guarantees. Who has access to what information? Who is claiming that the information is correct? These are factors necessary for determining both whether something is correct, and whether someone has the necessary knowledge to learn about this correctness. Properties and guarantees we want to establish within distributed contexts involve claims regarding credentials, and correctness of distributed processes according to some specifications. Cooperation is needed to prove such guarantees, which fundamentally requires the sharing of statements and the consideration of other people's statements. There are two sides to cooperation. The first is to facilitate the building of strong claims by combining knowledge.
This may not necessitate full transparency, and entities can choose what and with whom they share their claims, in order to cut costs or keep confidential information hidden. Such entities would like to learn the effectiveness of sharing facts, and what others can do with them. They can prove that they can convince others, or that collaborators can use their facts effectively. Though the partial hiding of information is assumed, this paper is not about privacy. No guarantees are made that information cannot leak. It is simply specified in which contexts claims are officially shared, and proofs are built only on such specified assumptions. Still, some basic properties of privacy across communication channels can be asserted, and one can reason about consequences of leaked information. The flip side of cooperation is responsibility and accountability. Sharing information is based on mutual trust, and entities making false claims can break this trust. As such, entities should take care not to share false facts, and careful entities should stick to their expertise. Entities are not just accountable for the claims they make, but also for the logical consequences of their claims. Implied lies are still lies. As such, it is important to specify exactly what information an entity has access to. This is another reason why the destination of claims matters, as it tells us who has received the information. Entities should not be allowed to ignore information shared with them. They should consider consequences of received claims, and be accountable for them. That said, it is important to distinguish between the claim as a whole, and the content of the claim. It is acceptable to accept that a claim has been made, yet to deny that the content of the claim is true. Whether or not to accept the contents of a claim as true is a matter of trust. Trust is the cornerstone of this work, and has been studied from many points of view. If you trust a claim, you accept its content as true.
Trust is formulated in a conditional way: it does not matter whether the claim has actually been made; trust states that if a certain claim is made by a certain entity, then it is true. Making trust conditional facilitates cooperation. Entities may declare their trust in others. This can then be used by other entities, which could incorporate it in their model in order to make guarantees. For instance, a shop may declare trust in an authority's ability to authenticate credentials. A buyer then knows they can prove their credentials to the shop via this authority. Such interacting trust relations can then be built up in order to prove guarantees of larger authentication networks and protocols. Note however, that if an entity receives a claim which they have previously declared as trustworthy, they are now accountable for the consequences of the claim. Hence, declared trust, which can be interpreted as vouching for the expertise of someone, requires one to accept accountability for a trusted entity's statements regarding that area of expertise.

1.1 Constructive Modal Logic

Constructive logics have many useful applications in computer science and security information systems. Such logics force one to be more specific regarding how a property is satisfied. Proving a disjunction requires one to specify explicitly which of the two options is true (or to give a decision procedure for doing so, given the assumptions). Moreover, it disallows double negation elimination: "it cannot fail" is not a sufficient reason why something is guaranteed to work. In short, constructive logic requires one to build an explicit guarantee using a set of assumptions. This also increases accountability: the precise reason for the guarantee is embedded within the proof itself, allowing one to trace back which assumptions are used, and how a guarantee is satisfied. If the guarantee is later shown to be false, then it is easier to identify the entity that made a mistake.
To express both the claims being shared, and the internal perspectives of entities, we use modalities. Guided by our pursuit of accountable claims, we use modalities satisfying the axiom K and the necessity rule, which are considered standard axioms in epistemic (knowledge-based) logics. We moreover consider a collection of well-behaved axioms relating the many perspectives. Though many varieties can be considered, we start with a basic model of belief ℬ and interaction ℐ to mark internal and communicated knowledge. A core building block of our logic is the consideration of modal splitting axioms, which declare that ℳA ⇒ 𝒩ℛA holds for any statement A. These express how information A from modality ℳ can be adopted by modality 𝒩. This latter domain does not necessarily accept the statement as true. Instead, they consider it as flagged by modality ℛ, and can reason with this statement. A specific instance of such an axiom is an awareness axiom of the form ℳA ⇒ 𝒩ℳA for all A, where we say 𝒩 is aware of all statements A provable by ℳ. This would hold for instance if ℳ represents a communication channel which 𝒩 has access to. Another aspect to consider is the decidability of the logic: does an algorithm exist which decides whether a guarantee is true given certain assumptions? Decidability is useful in order to increase accountability: it enables entities to compute and hence be aware of certain consequences of their claims. Checking for particularly significant consequences of one's claims should be part of an entity's due diligence. In order to facilitate decidability, we limit ourselves to a specified set of modal axioms, as well as avoiding the use of polymorphism and quantifiers.

1.2 Distributed Trust and Decentralized Identity

One of the main motivations for this paper is its applications to decentralized identities.
We believe that the emergence of self-sovereign identity [46] acutely necessitates the ability to assign truth values to complex statements about and including trust. An entity’s sovereignty over their identity means the proliferation of identifiers that that entity intends to use in different contexts. These identifiers are presumably created by the entity themselves; some authority may or may not have bound them to identifiers in a more generally known namespace (e.g. the names of the residents of some country). The statements that the various authorities make about an entity will be bound to the various identifiers created by that entity (as expressed by e.g. the “Privacy and minimal disclosure” principle of [38]). While it should generally be the responsibility of that entity to make sure that the claims they want to present together are bound to the same identifier, we expect the reality to not be so simple. When a relying party (i.e. a party that makes decisions on the basis of the claims it receives about an entity) receives a number of statements pertaining to a number of different identifiers, as well as some statements that relate these identifiers to each other (as per the “Delegation” principle of [38]), it becomes highly non-trivial to find out which of these claims can be combined with each other. This paper provides a system for relying parties to make such inferences. The same question — what can be reasonably and confidently derived for which identifier? — is also something that the individual entities themselves want to find the answers to. Our logical approach provides them with tools to make these inferences from the point of view of some other party, allowing them to forecast the decisions made by the relying parties. Indeed, the decisions can be derived for various parties, considering what we know about their trust relationships; these decisions may be combined with each other and considered from the point of view of different parties. 
1.3 Misplaced Trust in Certificate Authorities

A motivating example we focus on is that of networks for public key validation. A certificate authority (CA) is an entity in such networks which makes publicly verifiable statements that bind public keys to entities. These statements are the certificates. A CA is trusted to be able to properly verify that an entity is in control of a public key. We consult CAs to find out an entity's public key. There are various mechanisms to find out whether an entity is a CA, e.g. a higher-level CA may issue a certificate to another entity stating that it is a (sub-)CA, too. We use such statements in hierarchical public-key infrastructures (PKI), where the knowledge of the public key of a root CA allows us to verify a certificate chain that ultimately binds a public key to an (end-)entity. Of course, when an entity is declared to be a CA, then the declarer has to perform due diligence on that entity's ability to be a CA. Sometimes these declarations are made by CAs that should not be declaring other entities to be a CA. This creates confusion when verifying certificate chains, because the last certificate in that chain may be issued by a CA that should not be seen as a CA. Several incidents (see [36], subsections A.1.11, A.1.13, A.1.14, A.1.17) have taken place due to such misplaced trust. Even more (see subsections A.1.10 and A.1.25 in [36]) have happened due to other usage restrictions that have been absent in the issued certificate. We employ our logic as a tool for specifying the function and duties of entities in distributed networks, like CAs in public key infrastructures. This way, they can be held accountable for mistakes made. Given a formal constructive proof of a property, if the property is later shown to be false, we can trace back the dependencies and find who is at fault, ensuring accountability. Moreover, definitions of trust can be fine-tuned to safeguard against certain errors and attacks.
For instance, threshold trust can be employed to deal with issues of unresponsiveness or corruption of CAs.

1.4 Related Work

There exist a number of calculi and logics for defining and computing trust in distributed networks in general and PKIs in particular. The paper [45] considers a modal logic with a belief modality, with the description of messages modeled by the underlying Kripke-style semantics instead of explicitly described in the logic as we do. Non-modal approaches include a predicate calculus [19] studying a generalization of the usual graph description of PKI networks, and [6] which focuses on modeling the concurrent properties of such networks. Especially interesting are [34, 49] which both study trust statements and inferences which can be made using them, similar to our treatment of trust for PKI, which we describe within our logic. The goals of these calculi are similar to our formalisms — turning statements made by trusted entities into beliefs of users. They do not, however, support higher-order reasoning, with messages referring to other messages, which our modal logic does consider. We use a modal logic to describe and give meaning to trust in distributed systems. Modal logics have had many applications in computer science due to their flexibility and versatility, which shows their usefulness as a verification tool. Examples include modeling programming features such as variable scoping [43], and reasoning about cryptographic processes [20]. Using a modal logic to describe trust has been investigated in several different contexts. We ground our work on the definition of cautious trust by Liau [41], given in terms of belief and communication. This has been further investigated theoretically in a plethora of other works [16, 18, 40]. It has been shown that such trust definitions could be extended to consider probability [30, 39] and time [5].
We specifically look at intuitionistic modal logic [53], in order to keep proofs constructive and expressible by various lambda calculi [9, 26, 4] via the Curry-Howard correspondence. We maintain that calculi such as these are one of the requirements for a trust logic, due to the support they can give for the derivation of trust in software libraries.

1.5 Comparison to Authorization and Access Logics

The work has a strong connection to authorization and access logics (AL) [13, 22, 7, 8]. Though the basic motivation and foundations of the formalisms are distinct, they are not wholly incompatible. ALs are about certain commands, and whether they were made by an authorized entity. This entails authenticating proofs of authority. In this paper, we do not focus on specific commands and control statements as such, but instead focus on the sharing and evaluating of information across distributed networks. Many works mainly consider two primitives, a modality S_a for what a principal a says, and a primitive formula stating that a speaks for b. This logical operation already existed in foundational work [13], which focused on information regarding who owns which keys, and used in particular descriptions of what information people see, believe and control. Soon after, in [1], a more thorough investigation of the says constructor was done, studying a lattice structure and defining control as the logical affirmation: saying a statement makes that statement true. Many different axiomatizations of S_a were considered, with order structure, consistency S_a ⊥ ⇒ ⊥ (axiom D) and idempotency S_a A ⇔ S_a S_a A occurring in [1] already. Consistently throughout further work [3, 21, 24] the axioms A ⇒ S_a A and S_a S_a A ⇒ S_a A were assumed, with some (mainly [3]) assuming even S_a S_b A ⇒ S_b S_a A.
It is perhaps this last axiom which highlights a fundamental feature of authorization logic different from our approach: in ALs they are concerned about who has authority, not about who said something. The axiom A ⇒ S_a A signifies that whoever is reasoning about statements from a may complement any of a's utterances with other things which are true. This comes from the perspective that a may want access or authority over something, and hence is fine with others completing their proof of access with additional details which may help them. This is in stark opposition to the aim in this paper, where a is accountable for their statements, and would not want others to put words into their mouth. Additionally, we would like a to be able to reason about consequences of their claims, and hold a precise model of how other people regard them. As such, they should only need to be accountable for knowledge they are aware of. The paper [2] considers a variation in which A ⇒ S_a A is replaced by S_b A ⇒ S_a S_b A, meaning one may adopt claims made by others. Utterances by b are considered as universally shared, which would make a aware of them. We would like to disallow this as well in general, as communications are not necessarily shared with everyone. Other works [31, 32, 33] allow S_a A ⇒ S_a S_a A as we do when considering belief, though still have S_a S_a A ⇒ S_a A. We drop axioms of the form ℳ𝒩A ⇒ ℛA, mostly in order to keep decidability in the absence of S4's unit axioms. We predominantly focus on how information is shared and adopted, which entails axioms of the form ℳA ⇒ 𝒩A and ℳA ⇒ 𝒩ℛA. These include ℬ_a A ⇒ ℬ_a ℬ_a A, as well as agents adopting information they have access to, such as ℐ_{a←b} A ⇒ ℬ_a ℐ_{a←b} A. We moreover do not consider axiom D stating that ℳ⊥ ⇒ ⊥ [35, 50], since we do not wish the inconsistency of one party's statements to necessarily imply the inconsistency of the whole.
Certain foundational works consider lattice structures on their agents [1, 3] which give an axiomatic order on S_a, as well as meet and join axioms. We adopt the order structure to define how belief is shared, which allows us to model broadcasting. Other works have this order as dynamic statements in the logic, with derivable speaks-for formulas. We do not consider those, as they are statements of authority, not necessarily trust. Some work formulating can-statements [7, 8] does bear resemblance to our treatment of trust. There, if a says that b can do A, and if b does it, then a says A happens. Similarly, we say that if a claims that b's opinion on A is valid (trust), and if b claims A, then a is responsible for claim A. The DKAL language [27, 29] considers a similar trust statement as primitive in a different context. In terms of treatment of trust itself, DKAL is perhaps the closest to our logic, with our derivations of validity (Sec. 4) having similarities with their operational semantics [29]. DKAL mostly focuses on parametric modalities satisfying axiom K and Necessity, and does not consider many further axioms, such as the connections between modalities we explore in this paper. Our treatment of multi-step information sharing (Sec. 3 and 5) is more fine-grained, with proper significance given to both the senders and the intended recipients of messages. We are also able to handle disjunction, which is useful to model certain protocols. But our greatest difference lies in semantics — the Kripke semantics of our logic gives us greater confidence in our axioms, inference rules, and proof search methods. DKAL instead has a translation to Liberal Datalog [11] for deriving truth. Their follow-up DKAL 2 [28] does have a Kripke model, but lacks some of the expressivity discussed above. We consider our logic to be a specialization of former work on trust management systems [34, 49, 5] in the direction of distributed systems with defined information access.
We focus in particular on how to specify protocols for sharing, forwarding, and trusting claims across networks. We do this by adopting and modifying the theory of belief and interaction from the past [41] and forming a decidable logic framework for specifying trust and interaction protocols.

1.6 Contributions and Paper Outline

Our first contribution is the formulation of a theory to reason about communications over distributed networks, enabling us to prove the existence of trust across chains of communications. We establish a set of axioms (Figure 1) for sharing and considering information, which have not been the focus of previous work, and discuss how to reason about indirect communication and trust in the logic. We also touch upon a calculus for building proofs in the logic. The language for sharing and considering information allows us to state what it means for one agent to trust another one with respect to the validity of a statement. Our calculus gives us some useful inference rules for validity in multi-agent systems (Figure 3). We extend the trust with levels, allowing us to model examples such as public key infrastructures (PKI) that may involve several steps of delegation. Our language will be rich enough to express these levels, and to derive certain properties of them (Theorem 1) that suffice to concisely describe the trust relations in networks of agents. We start in Section 2 by discussing how we describe the relevant concepts with modalities and axioms. In Section 3 we show different ways of formulating indirect communications. We discuss reasoning with trust in Section 4, and formulate a formalism for sharing information and trust in a network in Section 5. We give further examples in Sections 6 and 7. We discuss two calculi in Section 10, and a Kripke model in Section 11, and give final remarks in Section 12.
Related papers: Since the writing of this article, two papers have been published focusing on and expanding on particular parts of this article. One paper [51] develops the decidability of a schema of intuitionistic modal logic, which is partially covered here, and furthermore develops how to derive relevant consequences. Another paper [52] covers how one can reason about trust thresholds.

2 Modal Reasoning

We use a constructive modal logic to specify both the perspectives of different entities, as well as claims entities make to each other. We consider the general concept of domain, which subsumes specified entities, groups of entities, the public domain, and other entities identified by public keys. Let 𝒜 be a finite set of entities, domains or agents. Following the traditional literature on trust, we consider two modalities which we will give a specified meaning:
• For each a ∈ 𝒜, the modality ℬ_a signifies what a believes. In general the statement ℬ_a A means that we can prove that, given presumed assumptions of agent a, we can show that a should accept A as true. We can think of it as: a can verify A given their own knowledge and expertise, and given what has been communicated to and shared with them.
• For each pair a, b ∈ 𝒜, the modality ℐ_{a←b} signifies interaction between a and b. The statement ℐ_{a←b} A expresses that through interaction between a and b, b has claimed and communicated to a enough information (claims) to imply that statement A is true.
There is a significant difference between what one believes and what one claims to be true. Firstly, in general one would not assume that ℐ_{a←b} A ⇒ ℬ_b A, as b may be more liberal with the truth than what they themselves truly believe and act upon. On the other hand, ℬ_b A ⇒ ℐ_{a←b} A is not necessarily true, as b may not want to communicate all that they know to a.
Entities are accountable for the logical consequences of their claims, on top of the claims themselves. Similarly, we assume they can make logical inferences within their own model of belief. As such, we assume each modality M satisfies axiom K and the "necessity" inference rule.

• Axiom K states that M A ⇒ M(A ⇒ B) ⇒ M B for any formulas A and B, signifying that if we learn that both A and A ⇒ B hold in the belief or interaction expressed by M, then the consequence B should also hold there. This axiom allows us, as external observers, to reason about inferences that can be made within M.

• Necessity states that if formula A is undeniably true (a logical tautology), then so is M A. This means everyone should accept obviously true logical statements: those which can be proven without further assumptions. This is weaker than saying A ⇒ M A, which states that if A happens to be true (e.g. given as an optional assumption), then M A is true. The latter implies M knows all details of the current situation, whereas necessity only states that M knows about facts which hold in all possible situations.

On top of the general axioms, we assume some additional ones which signify who has access to what information. We predominantly consider splitting axioms of the form M X ⇒ N_1 … N_k X, where one modality unfolds into a sequence of modalities. Awareness axioms state that agents down the line can reflect on and consider domains they have knowledge about:

• For any A, B_a A ⇒ B_a B_a A.
• For any A, I_{a←b} A ⇒ B_a I_{a←b} A.

So an agent is aware of their own internal viewpoint, as well as of everything communicated to them. Lastly, we express the fact that claims are intended as expressions of belief. For instance, if an agent a says some property is satisfied, then it is intended to mean that a claims they believe the property is satisfied. One may argue the latter is a weaker statement, but at the very least there is an implication, which we express as a modal axiom.
• For any A, I_{a←b} A ⇒ I_{a←b} B_b A.

2.1 Optional hierarchy of agents

We consider one more type of axiom we may add in order to increase the power of the logic. These axioms are associated with the definition of shared domains, which combine several domains into one in order to evaluate consequences of their combined knowledge. We define a preorder ⊑ on agents, where a ⊑ b denotes that b inherits all knowledge of a:

• If a ⊑ b then B_a A ⇒ B_b A for any A.
• If a ⊑ c and b ⊑ d then I_{a←b} A ⇒ I_{c←d} A for any A.

Consequently, B_a A ⇒ B_b B_c A if a ⊑ b and a ⊑ c, and I_{a←c} A ⇒ B_b I_{a←c} A if a ⊑ b.

As an example, we can take our domains to be subsets of entities, 𝒜 = P(𝒜′), and let S ⊑ Z if S is a subset of Z. For S ⊆ 𝒜′ we have that B_S is the combined belief of all entities a ∈ S. We can for instance show that B_{a} A ⇒ B_{b}(A ⇒ B) ⇒ B_{a,b} B. Though we have B_{a} A ∨ B_{b} A ⇒ B_{a,b} A, the converse does not hold, as we cannot prove B_{a} A ⇒ B_{b}(A ⇒ B) ⇒ (B_{a} B ∨ B_{b} B).

We can also consider a special domain marking knowledge accessible to all. This is in line with the traditional necessity modality □, which marks properties that are certainly true. We can incorporate this in our scheme by considering a special agent Ω ∈ 𝒜 for which Ω ⊑ a for all other agents a, and assert the above axioms. This makes I_{Ω←a} into a kind of broadcast modality: what a makes public. On the other side of the spectrum, we could consider the theoretical gathering of all perspectives, which can be used to check whether there are any inconsistent opinions. This would be given by some agent Υ such that b ⊑ Υ for any other agent b. In terms of our 𝒜 = P(𝒜′) example, Ω = ∅ and Υ = 𝒜′. Last but not least, we could consider the □ modality itself as well, to easily mark assumptions which we want all domains to adopt.
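The powerset instantiation of the preorder can be checked mechanically. A small sketch, assuming nothing beyond the set model described above (helper names are ours):

```python
# Powerset model of the domain preorder from Section 2.1: domains are
# sets of entities, and S ⊑ Z holds when S is a subset of Z, so a larger
# group inherits the beliefs of each of its members.

ENTITIES = frozenset({"a", "b", "c"})

def below(s, z):
    """S ⊑ Z in the powerset model: S is a subset of Z."""
    return s <= z

omega = frozenset()   # Ω: below every domain (broadcast / public)
upsilon = ENTITIES    # Υ: above every domain (all perspectives combined)

doms = [frozenset(x) for x in ({"a"}, {"b"}, {"a", "b"}, set(), ENTITIES)]
assert all(below(omega, d) for d in doms)     # Ω ⊑ d for all domains d
assert all(below(d, upsilon) for d in doms)   # d ⊑ Υ for all domains d

# Belief inheritance: {a} ⊑ {a,b}, so B_{a} A implies B_{a,b} A; the two
# singletons are incomparable, matching the failure of the converse of
# B_{a} A ∨ B_{b} A ⇒ B_{a,b} A.
assert below(frozenset({"a"}), frozenset({"a", "b"}))
assert not below(frozenset({"a"}), frozenset({"b"}))
```

The subset order makes Ω = ∅ the least element and Υ = 𝒜′ the greatest, exactly as stated in the text.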
We assume that □A ⇒ □□A, □A ⇒ A, and □A ⇒ M A for any other modality M.

⊢ M A ⇒ M(A ⇒ B) ⇒ M B — Axiom K
If ⊢ A then ⊢ M A — Necessity
⊢ B_a A ⇒ B_a B_a A — Self awareness
⊢ I_{a←b} A ⇒ B_a I_{a←b} A — Recipient awareness
⊢ I_{a←b} A ⇒ I_{a←b} B_b A — Intent of claim
If a ⊑ b then ⊢ B_a A ⇒ B_b A — Belief inheritance
If a ⊑ b then ⊢ I_{a←c} A ⇒ I_{b←c} A — Receiver inheritance
If a ⊑ b then ⊢ I_{c←a} A ⇒ I_{c←b} A — Sender inheritance
⊢ □A ⇒ M □A — Public awareness
⊢ □A ⇒ A — Public verifiability

Figure 1: Axiom system. M ranges over modalities, a, b, c over agents, and A, B over formulas.

2.2 Logical Foundation

We define the set of formulas F inductively, over tokens t ∈ T, as follows:

A, B := ⊤ | ⊥ | t | M A | A ⇒ B | A ∧ B | A ∨ B

Note that by extension we can formulate conjunction and disjunction over finite sets of formulas as well. We consider a finite set of modalities 𝕄 satisfying axiom K and necessity, and further axioms given in the following form:

Definition 1. A modal unfolding axiom is a pair (M ⇛ l), where M ∈ 𝕄 is a modality and l ∈ 𝕄* is a list of modalities. For each l ∈ 𝕄* and formula A we define l(A) inductively by ε(A) = A and (M · l)(A) = M(l(A)). The unfolding axiom M ⇛ l denotes the axiom M X ⇒ l(X).

Given some set of unfolding axioms Ax, we define our logic to be the intuitionistic multimodal logic where each modality satisfies axiom K and necessity, and where furthermore the axioms from Ax are asserted. We denote this logic L_Ax. In the rest of the paper, we simply take as Ax the assertions discussed before, summarised in Figure 1.

There are many ways to set up such a logic. Figure 2 gives a sequent calculus for it, with its proof steps. A sequent is a pair Γ ⊢ A consisting of a set of formulas Γ, the assumptions, and a formula A, the consequent.
The fractions in the figure, called judgements, tell us how to construct new true sequents out of known true sequents. The calculus can be improved to better suit applications, and is mostly included here for reference. Later, in Figure 4, we formulate a Fitch-style lambda calculus, which forms a fragment of dependent modal type theory [26] and is more practical for optimising proofs (we shall consider the proofs-as-programs Curry-Howard correspondence there; what we mainly want from that variant is the substitutivity property, which allows us to simplify proofs).

• Assumption: Γ, A ⊢ A.
• ⊤-introduction: Γ ⊢ ⊤.
• ⊥-elimination: from Γ ⊢ ⊥, conclude Γ ⊢ A.
• ⇒-elimination: from Γ ⊢ A and Γ ⊢ A ⇒ B, conclude Γ ⊢ B.
• ⇒-introduction: from Γ, A ⊢ B, conclude Γ ⊢ A ⇒ B.
• ∧-introduction: from Γ ⊢ A and Γ ⊢ B, conclude Γ ⊢ A ∧ B.
• ∧-elimination: from Γ ⊢ A ∧ B, conclude Γ ⊢ A (and likewise Γ ⊢ B).
• ∨-introduction: from Γ ⊢ A (or from Γ ⊢ B), conclude Γ ⊢ A ∨ B.
• ∨-elimination: from Γ ⊢ A ∨ B, Γ, A ⊢ C and Γ, B ⊢ C, conclude Γ ⊢ C.
• Modal rule: from Γ ⊢ M A_1, …, Γ ⊢ M A_n and A_1, …, A_n ⊢ B, conclude Γ ⊢ M B.
• Unfolding: from Γ ⊢ M A and (M ⇛ l) ∈ Ax, conclude Γ ⊢ l(A).

Figure 2: Intuitionistic Multi-Modal K Logic.

The rest of this subsection considers decidability of the logic: do we have an algorithm to determine whether or not a sequent Γ ⊢ A is provable? Decidability is very useful in practice, as it allows us to check and verify the truth of properties given certain assumptions. This in turn allows us to hold entities accountable for their claims, as they should be aware of the consequences of their claims and can be blamed if these uncover a lie.

2.3 Decidability

The main result is that the logic of Figure 2, given the modal axioms of Figure 1, with ⊑ a preorder and 𝒜 finite, is decidable. The rest of this subsection explores which axiomatic systems Ax give rise to a decidable logic, and can be skipped. More details can be found in [51]. To get decidability, Ax needs to satisfy three additional properties.

• Ax is reflexive if M ⇛ M ∈ Ax for each modality M.
• Ax is transitive if for any M ⇛ l · N · l′ ∈ Ax and N ⇛ r ∈ Ax, we have M ⇛ l · r · l′ ∈ Ax.
• Ax is decomposable if for any M ⇛ N · N′ · l ∈ Âx there is an R ∈ 𝕄 such that M ⇛ N · R ∈ Ax, and for any M ⇛ N · r ∈ Âx we have R ⇛ r ∈ Âx.

The first two properties expose the categorical structure of the axioms, whereas the third asserts that any "unfolding" of a modality into three or more modalities can be factored into a composition of axioms of the form M ⇛ N · R, and that such decompositions are universal, allowing for easier proof search. We denote a choice of R as given in the third point by M ⊖ N, meaning that if M ⇛ N · N′ · l, then M ⊖ N exists, M ⇛ N · (M ⊖ N) ∈ Ax, and M ⊖ N ⇛ N′ · l. The reflexive and transitive closure of Ax is denoted Âx.

Lemma 1. For any Ax, the logics L_Ax and L_Âx are equivalent.

Proof. We prove this by induction on the reflexive transitive closure. If M ⇛ l ∈ Âx then either:

• M ⇛ l ∈ Ax, in which case we are done.
• l = M, in which case the axiom asserts M A ⇒ M A for any formula A, which is trivially true.
• It came from the composition of M ⇛ l_1 · N · l_2 and N ⇛ r, upon which we can apply the induction hypothesis. So for any formula A, we have that M A ⇒ l_1(N(l_2(A))) and N(l_2(A)) ⇒ r(l_2(A)) are provable. Applying necessity (and axiom K) to the latter, we get l_1(N(l_2(A))) ⇒ l_1(r(l_2(A))), and hence by intuitionistic reasoning (A ⇒ B, B ⇒ C ⊢ A ⇒ C) we get that M A ⇒ l_1(r(l_2(A))) is provable. ∎

Lemma 2. If Ax is the system of axioms defined in Figure 1, then Âx is decomposable.

Proof. For B_a, note that B_a ⇛ N · N′ · l ∈ Âx means N · N′ · l = B_{a_1} · B_{a_2} · … · B_{a_n} for some list of at least two agents such that a ⊑ a_i for all i ∈ {1, …, n} (without inheritance, all a_i equal a). We simply take B_a ⊖ B_{a_1} = B_a, for which B_a ⇛ B_{a_1} · B_a and B_a ⇛ B_{a_2} · … · B_{a_n}.
For I_{a←b}, an axiom I_{a←b} ⇛ N · N′ · l ∈ Âx means N · N′ · l = B_{a_1} · … · B_{a_n} · I_{a′←b′} · B_{b_1} · … · B_{b_m}, where n + 1 + m ≥ 2, a ⊑ a_i, a ⊑ a′, b ⊑ b′ and b ⊑ b_j. If n = 0 we simply take I_{a←b} ⊖ I_{a′←b′} = B_b, and if n > 0 we take I_{a←b} ⊖ B_{a_1} = I_{a←b}. ∎

3 Chains of Communications

Modalities let us mark different domains of knowledge, and are as such useful for describing distributed networks of trust. In such networks, entities make claims, which may be communicated throughout the network, passing through multiple entities. At each location, it can then be determined what is known and what can be derived. We consider a canonical example, which we shall describe in a multitude of ways. The scenario is that a certain claim goes through multiple entities before reaching the targeted recipient. How is this claim sent, and what are the preconditions for the recipient to trust it? We consider four entities in a chain of communications:

a → b → c → d

Here a functions as the origin of some claim A, and d as the intended recipient. However, suppose either a cannot send the claim directly to d, or d does not trust a directly. Whichever is the case, b and c are used as intermediaries.

Example 1 (A global view). The simplest model asserts that the intermediaries pass along the claim as their own. This can be stated by I_{b←a} A ⇒ I_{c←b} A and I_{c←b} A ⇒ I_{d←c} A. These say that if b receives claim A from a, it will send claim A to c, and similarly for c. The assertions compose into I_{b←a} A ⇒ I_{d←c} A; hence if a sends the claim, it will end up at d. The assertions in this example do require some trust by b and c, since they need to assert the truth of A.
Secondly, note that we need access to all such assertions in order to make the final derivation. We shall address both these issues in the next two examples.

Example 2 (Trust and Responsiveness). The statement I_{b←a} A ⇒ I_{c←b} A requires b to repeat a claim to another, which effectively means b must state that A is true. Hence the assertion requires trust of b in a. Moreover, the statement asserts a response by b: the sending of information to c. We can break it up into the following components:

• Trust: B_b(I_{b←a} A ⇒ A); if b receives the claim A from a, then it is believed. Note that with axiom I_{b←a} A ⇒ B_b I_{b←a} A, we may derive I_{b←a} A ⇒ B_b A.

• Responsiveness: B_b(I_{b←a} A ∧ A) ⇒ I_{c←b} A; if b receives the claim A and believes it, b will send it to c. We can split this up further into two statements:

– B_b(I_{b←a} A ∧ A ⇒ I_{c←b} A), where b believes in their own commitment to sending the claim along, and
– B_b I_{c←b} A ⇒ I_{c←b} A, saying we trust b to keep their commitment (one could argue this should be an axiom, though note it requires b to actively check and act on their commitments; hence it is reasonable to formulate it as a kind of trust in b instead of a separate axiom).

Suppose we want to ensure that d can derive that the message is passed along. The assertions defined above are not accessible to d, by which we mean we have no reason to think d knows or believes them. So if d does not have them as prior assumptions, the entities are required to communicate these statements of trust and responsiveness to d themselves. For brevity, we write I_{a_1←a_2←…←a_n} for I_{a_1←a_2} I_{a_2←a_3} … I_{a_{n−1}←a_n}.

Example 3 (Communicating Trust).
Instead of stating I_{b←a} A ⇒ I_{c←b} A, we let b tell c that they will do this. We state that as I_{c←b}(I_{b←a} A ⇒ A), meaning b tells c that if b receives claim A from a, then A is true. Agent b need not explicitly send claim A anymore; they can simply forward a received claim from a, thereby implying the truth of A. This consequence is derived by applying axiom K, getting I_{c←b} I_{b←a} A ⇒ I_{c←b} A. With everything being forwarded to d, we get the statements:

• I_{d←c←b}(I_{b←a} A ⇒ A): it is forwarded that b trusts a.
• I_{d←c}(I_{c←b} A ⇒ A): it is communicated that c trusts b.
• B_d(I_{d←c} A ⇒ A): d trusts c.

All composed together, we can derive B_d(I_{d←c←b←a} A ⇒ A): d trusts forwarded claims from a. The communicated statements of this example can be split up into different parts as done in Example 2, if more clarification is needed.

There is still one disadvantage to the above approach: each link in the chain must trust the previous link regarding statement A, even though that previous link may not be an expert on A. This leaves the system open to a kind of injected-claim attack: if any link starts claiming A, everyone down the line will start to accept this claim, even though the claim did not originate from a. Though this problem cannot be entirely prevented, as we need to assume at least some integrity from the links, there are three partial solutions we shall consider. The first solution is to fine-tune which claims are trusted. Instead of simply trusting A, we can accept only certain evidence of A.
Even though a link may not be an expert on A, it could at least be trusted regarding other things, like communicating claims from other experts and verifying that such claims came from trusted individuals. This shall be the topic of Section 4. The second solution is to use multiple sources to double-check claims, which we shall consider in Section 7 using threshold trust. The third solution has a different flavour, with none of the links in the communication chain needing to trust each other. Instead, they all have access to a single trusted source which can be consulted and used to double-check whether a claim is correct. This source can for instance be some kind of internet server, or a certification authority.

Example 4 (Trusted Authority). Let v be a new entity everyone has access to: as before, claims travel along the chain a → b → c → d, but now each of a, b, c, d is also directly connected to v. The behaviour of v can be specified as follows: it can send out claims (e.g. certifications) and will confirm to others that it indeed made a given claim. This is given by

⋀_{x,y ∈ 𝒜} (I_{v←x←y←v} A ⇒ B_v A ⇒ I_{x←v} A),

meaning that if someone notifies v that they heard from someone else that v has claimed something, then, if this is indeed true, v confirms the claim. This can then be used by b given two statements:

1. I_{b←a←v} A ⇒ I_{v←b←a←v} A ∨ I_{b←v} A, which says that if b receives from a a claim made by v, it will check this with v directly, unless it has already received this claim from v. Note that with the specification of v, this implies I_{b←a←v} A ⇒ B_v A ⇒ I_{b←v} A.

2. I_{b←v} A ⇒ I_{c←b←v} A.

Supposing b, c and d have the first property, and a, b and c have the second, we can derive B_v A ⇒ I_{a←v} A ⇒ I_{d←v} A.
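Example 4 can also be read operationally. The following toy simulation is our own illustrative sketch (not part of the paper's formalism): intermediaries never trust each other, only the authority v, and every hop checks a claim attributed to v back with v before accepting it, which blocks injected claims.

```python
# Toy simulation of Example 4: a claim attributed to authority v is
# relayed along a chain, and each hop confirms it with v directly,
# mirroring I_{b<-a<-v} A => B_v A => I_{b<-v} A.

class Authority:
    def __init__(self):
        self.issued = set()

    def issue(self, claim):
        """B_v A: v certifies the claim."""
        self.issued.add(claim)
        return claim

    def confirm(self, claim):
        """v confirms to a requester whether it really made this claim."""
        return claim in self.issued

def relay(v, chain, claim):
    """Pass a claim attributed to v along the chain; every hop accepts
    only after confirming with v, so no inter-hop trust is needed."""
    for hop in chain:
        if not v.confirm(claim):
            return False  # injected claim: rejected at the first hop
    return True

v = Authority()
good = v.issue("a owns key k")
assert relay(v, ["b", "c", "d"], good)          # certified claim reaches d
assert not relay(v, ["b", "c", "d"], "forged")  # uncertified claim is dropped
```

The simulation illustrates why the derivation needs B_v A as a premise: a forged claim never obtains v's confirmation, so it never propagates past the first checking hop.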
4 Trust

For cooperation, it is useful for entities to declare their trust in someone before that entity has made the relevant claim. This way, others can be assured that the entity can be convinced if needed, and how this can be done. We define trust in two stages. First we introduce some notation: for each list of modalities l = M_1, …, M_n and formula A, validity of l concerning A is written with a hat and given by

l̂ A = (M_1 … M_n)^ A := M_1 … M_n A ⇒ A.

For instance, validity Î_α A for some α = a_1←a_2←…←a_n is the assertion that if claim A is communicated through the chain of communications α, then A is true. Hence validity depends on four aspects: the sender, the receiver, the route the message took, and the content of the claim.

The concept of trust is simply a statement of validity put into some context. Validity itself is the expression that "we", meaning those considering the current set of assumptions, trust the claim made over some channel. We can also consider trust as either believed or claimed by a particular entity. E.g. B_b Î_α A means b trusts claim A if communicated over α, and I_β Î_α A is this trust being proclaimed over the chain of communications β. Usually, trust occurs in the recipient of the considered claim, as in B_b Î_{b←α} A or I_{γ←b} Î_{b←α} A. Given our axioms, cautious trust T_{a,b} A := B_a((I_{a←b} A ⇒ B_b A) ∧ (B_b A ⇒ A)) as formulated by Liau [41] can be proven from the following validities:

B_a Î_{a←b}(B_b A) ∧ B_a B̂_b A ⇒ T_{a,b} A.

Î_α satisfies two rules which help in reasoning about it:

• A ⇒ Î_α A: if a statement A is already true, then communications making this claim are trivially valid.
• Î_α A ∧ Î_α B ⇒ Î_α(A ∧ B): a combined statement's validity can be determined in parts. The converse does, however, not hold.

Lemma 3.
For any modality M and formulas A and B, M̂ A ∧ M̂(A ⇒ B) ⇒ M̂ B is not necessarily provable.

Proof. We show that there is a formula and a Kripke frame which does not satisfy this statement. Let W have two worlds, v and w, and let ≤ be the identity relation. Let t and r be two tokens such that S_t = ∅ and S_r = {w}. Lastly, let R_M relate v with w, but not v with v. Then neither v nor w models M t, hence both model M̂ t. Neither v nor w models t, so both model t ⇒ r, and hence M̂(t ⇒ r). Since v R_M v does not hold, and w ⊨ r, we have v ⊨ M r. However, since v does not model r, v does not model M̂ r. So v does not model M̂ t ∧ M̂(t ⇒ r) ⇒ M̂ r. ∎

Lemma 4. Î_{α←b} A ∧ I_{α←b}(Î_{b←γ} A) ⇒ Î_{α←b←γ} A.

Proof. I_{α←b}(Î_{b←γ} A) = I_{α←b}(I_{b←γ} A ⇒ A) implies I_{α←b} I_{b←γ} A ⇒ I_{α←b} A by axiom K, where I_{α←b} I_{b←γ} = I_{α←b←γ}. Hence, combined with Î_{α←b} A = (I_{α←b} A ⇒ A), we get I_{α←b←γ} A ⇒ A, which is Î_{α←b←γ} A. ∎

Here validity is determined in stages:

• b communicates over α that they would trust claim A if it came over channel γ.
• Communications of A over α are trusted.
• As such, communications over γ, through b and then over α, are trusted.

We gather derivable properties of the validities studied in this section in Figure 3 for reference.
⊢ M̂ A ∧ M A ⇒ A — Validity definition
⊢ A ⇒ M̂ A — Validity unit
⊢ M̂ A ∧ M̂ B ⇒ M̂(A ∧ B) — Merging validities
⊢ M̂(N A) ∧ N̂ A ⇒ (M N)^ A — Composition rule 1
⊢ M(N̂ A) ∧ M̂ A ⇒ (M N)^ A — Composition rule 2
⊢ M(N̂ A) ∧ M̂(N̂ A) ∧ M̂(N A) ⇒ (M N)^ A — Composition rule 3
⊬ M̂ A ∧ M̂(A ⇒ B) ⇒ M̂ B — No axiom K

Figure 3: Validity properties (derivable in the logic).

4.1 Indirect trust

Let us look at a basic chaining of trust. We shall reason from the perspective of an agent a, though we could as easily have considered a communicating these derivations to other agents. The basic derivation, obtained by applying axiom K to Lemma 4, is:

B_a Î_{a←b} A ∧ I_{a←b}(Î_{b←c} A) ⇒ B_a Î_{a←b←c} A.

Here a reasons that they trust b regarding property A. If b then communicates that they trust c regarding property A, a can infer trust in c, at least for claims that go through b. This is an effective way of chaining trust, which requires a to trust b regarding A without argument. Note, however, that we also have the following derivation:

B_a Î_{a←b} A ∧ I_{a←b} A ⟹ B_a A.

In certain situations, we may want to fine-tune and reduce the trust required of a. Let us instead suppose a accepts a certain proof from b, but does not want to simply accept the conclusion. The argument from b comes in two parts:

1. I (meaning b) trust c if they think A is true.
2. c has communicated to me that A is true.

The obvious conclusion of these two statements is that b believes A is true. Instead of simply accepting this conclusion, a can choose to accept only the arguments themselves. They can state

B_a Î_{a←b}(Î_{b←c} A) ∧ B_a Î_{a←b}(I_{b←c} A),

stating that they trust b regarding their trust in c, as well as trusting b not to falsely forward messages from c.
It should be noted that a can derive B_a Î_{a←b}(Î_{b←c} A ∧ I_{b←c} A), but cannot derive B_a Î_{a←b} A, as seen in Lemma 3. The arguments are accepted, but not the conclusion. Still, b's declaration of trust allows a to gain trust as well:

B_a Î_{a←b}(Î_{b←c} A ∧ I_{b←c} A) ∧ I_{a←b}(Î_{b←c} A) ⇒ B_a Î_{a←b←c} A.

The argument goes as follows: b has already communicated its trust to a. So if b then later mentions that c claimed A, the two claims by b can be combined into a single package I_{a←b}(Î_{b←c} A ∧ I_{b←c} A), which is trusted by a, and hence B_a(Î_{b←c} A ∧ I_{b←c} A). Having accepted the arguments then allows a to make the appropriate derivation, that B_a A. However, if in this case b had communicated A directly, a would not have accepted it: a trusts b regarding identifying experts on A, but a does not trust b as an expert on A itself.

In general, suppose b has derived a statement B using assumptions A_1, …, A_n. Agents like a can trust b's conclusion directly, as stated by B_a Î_{a←b} B. Or a can instead require that b provide their arguments, and state B_a Î_{a←b}(A_1 ∧ ⋯ ∧ A_n) or B_a(Î_{a←b} A_1 ∧ ⋯ ∧ Î_{a←b} A_n). In the latter situation, b can only convince a by claiming all the arguments.

There are still two things to be considered. First, note that the proof of B via A_1, …, A_n is never communicated. The proof is implied, and has to be done by a. Indeed, the burden of proof is naturally on the one who wants to trust, not on the entity providing the facts. One could imagine a asking b whether B is true, and b communicating A_1 ∧ ⋯ ∧ A_n instead of B itself.
Regardless of whether a trusts b regarding B, or regarding A_1 ∧ ⋯ ∧ A_n, a will be able to convince themselves of B. The fact that B is the conclusion is implied by the claims. Note that with decidability, a will be able to verify that the proof exists. Secondly, it may not make sense for a to trust b regarding just one specific entity c. In most instances, the particular c in question is not predetermined. We can state that a trusts b regarding any such entity, which can be formulated as:

B_a ⋀_{c ∈ 𝒜} Î_{a←b}(Î_{b←c} A ∧ I_{b←c} A).

We call this higher-order trust. We do not trust b directly regarding A, but we do trust b to identify experts on A. These orders of trust are an important tool for describing trust infrastructures.

5 Forwarding Networks

We use our logic to describe trust in networks of entities. These can be used to describe public key infrastructures, as well as other multiparty networks of authentication. The core idea is to establish what the role of an entity in the network is, and what type of trust is required for productive cooperation with this entity. Agents may establish trust directly, or use intermediaries like certification authorities to establish trust. We start by describing the language at a high level. We write trust_n(a←b)(P) to say that a trusts b regarding predicate P : 𝒜 → F, to a certain order n. Here, P is some predicate on agents, describing some fundamental claim one wants to convey to others, like whether an agent owns a certain public key, or whether an agent has a certain certificate or diploma (P need not depend on an agent, and can be some other primitive statement). Most likely, we take as our set of tokens T some statements that refer to agents; e.g. with hasKey(a) ∈ T for each a ∈ 𝒜, we can use hasKey as a predicate.
The order n is a natural number saying whether there is direct trust between two agents, or whether the trust is in agents as intermediaries in bigger networks.

• trust_0(a←b)(P) means that if a receives the claim P(b) from b, then a will accept P(b) as true.
• trust_{n+1}(a←b)(P) means that if a receives a claim from b regarding the n-th order trust of b in P, then a believes this trust is warranted.

We can look at examples of different orders of trust in the realm of public key infrastructures.

• Order 0 trust: This is the goal. Certification authorities record and communicate their order 0 trust regarding key ownership, and others can inherit this order 0 trust.
• Order 1 trust: This is the order of trust one has in a certification authority. With order 1 trust in the CA, one can inherit their order 0 trust.
• Order 2 trust: This is the trust one has in authorities which validate CAs, like domain trusted lists. With order 2 trust in a domain trusted list, one can inherit order 1 trust in a CA which is listed.
• Order 3 trust: This is trust in entities which claim to have order 2 trust; for instance, trust in a government backing some domain trusted list.

In order to establish trust across networks, claims will need to be forwarded and checked by intermediary agents. We have to specify the allowed paths of communication.

Definition 2. A forwarding network is a set of non-empty, non-repeating lists of agents S ⊆ 𝒜*, such that:

1. If (a, β, c) ∈ S then (β, c) ∈ S and (a, β) ∈ S (in other words, S is closed under taking contiguous sublists).
2. If (α, a, β, b, γ) ∈ S and (a, β′, b) ∈ S then (α, a, β′, b, γ) ∈ S.

If (a, α, b) ∈ S, we say that α is a defined path from b to a. Any segment of a path can be replaced by an alternative route, if specified by S. Note that α can be empty, in which case there is a direct path.
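Definition 2 is easy to experiment with concretely. The sketch below (helper names are ours) closes a path set under condition 1, and queries the relation N_S(a, b, c) defined next in the text: some stored path runs from c to a and passes through b, with paths written receiver-first as in the definition.

```python
# A forwarding network (Definition 2) as explicit path data.

def close_under_sublists(paths):
    """Closure condition 1: (a, β, c) ∈ S gives (β, c) ∈ S and (a, β) ∈ S,
    i.e. dropping either endpoint of a path stays in S."""
    S = set(paths)
    frontier = list(S)
    while frontier:
        p = frontier.pop()
        if len(p) >= 2:
            for q in (p[1:], p[:-1]):
                if q not in S:
                    S.add(q)
                    frontier.append(q)
    return S

def n_s(S, a, b, c):
    """N_S(a, b, c): a path (a, α, b, β, c) exists in S."""
    return any(p[0] == a and p[-1] == c and b in p[1:-1] for p in S)

# The four-agent chain of Section 3, receiver-first: (d, c, b, a) is a
# route from origin a to recipient d through the intermediaries c and b.
S = close_under_sublists({("d", "c", "b", "a")})
assert ("c", "b", "a") in S and ("d", "c", "b") in S   # sublist closure
assert n_s(S, "d", "c", "a") and n_s(S, "d", "b", "a") # routes via c, via b
assert not n_s(S, "a", "b", "d")                       # no route back to a
```

Closure condition 2 (segment substitution) is not needed for this small example, since the single chain admits no alternative routes; a fuller implementation would also saturate S under that rule.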
We write N_S(a, b, c) if there is a path (a, α, b, β, c) ∈ S.

Lemma 5. Given a forwarding network S ⊆ 𝒜*, N_S has the following properties:

• If N_S(a, b, d) and N_S(b, c, d), then N_S(a, b, c) and N_S(a, c, d).
• If N_S(a, b, c) and N_S(a, c, d), then N_S(a, b, d) and N_S(b, c, d).

5.1 Forwarding Modality

It is assumed that messages will be passed along every specified path. This is independent of whether the message itself is trusted. As we have seen before, we can establish commitments for forwarding messages which do not require additional trust from the network members, beyond trusting that they will pass the message along. Forwarding a message does not necessarily claim that its content is true. We define the collection of all forwardings with a composite modality. For a, b ∈ 𝒜, let

J_{a←b} A := ⋀ { I_{a←γ←b} A | γ ∈ 𝒜* such that (a, γ, b) ∈ S }.

Note that J_{a←b} A = ⊤ when a = b, by definition, since paths are non-repeating and the conjunction is then empty. The composite modality behaves like a modality itself:

Lemma 6. The following properties hold:

• J_{a←b} satisfies axiom K and necessity.
• J_{a←b} A ⟹ B_a J_{a←b} A.
• J_{a←b} A ⟹ J_{a←b} B_b A.

Last but certainly not least, we have an additional property which allows us to factor these modalities further.

Lemma 7. If N_S(a, b, c) then ⊢ J_{a←c} A ⟹ J_{a←b} J_{b←c} A.

Proof. Assume J_{a←c} A; hence for any (a, α, c) ∈ S we have I_{a←α←c} A. To prove J_{a←b} J_{b←c} A, we prove I_{a←β←b} J_{b←c} A for each (a, β, b) ∈ S.
I_{a←β←b} J_{b←c} A can in turn be shown by proving I_{a←β←b} I_{b←γ←c} A for each (b, γ, c) ∈ S and applying axiom K, using that there are only finitely many possible γ. So suppose (a, β, b) ∈ S and (b, γ, c) ∈ S, and suppose N_S(a, b, c), meaning (a, β′, b, γ′, c) ∈ S for some β′ and γ′. By applying property 2 of forwarding networks twice, (a, β, b, γ, c) ∈ S. Since J_{a←c} A implies the claim along any path from c to a, it implies I_{a←β←b←γ←c} A, which by definition is I_{a←β←b} I_{b←γ←c} A, which is what we need. ∎

Basically, since J_{a←c} attempts all paths of communication from c to a, these include all paths through agent b. We again use Ĵ_{a←b} to express validity of such communications:

Ĵ_{a←b} A := J_{a←b} A ⇒ A.

Note that this considers all possible paths, so such validity is obtained if validity holds across any one path:

• If (a, α, b) ∈ S, then Î_{a←α←b} A ⟹ Ĵ_{a←b} A.

Lastly, note that if no path from b to a exists, then Ĵ_{a←b} A ≡ (⊤ ⇒ A) ≡ A; hence trust across non-existent paths only happens when the claim in question is true.

5.2 Orders of trust

Direct trust of a in b can be defined as B_a Î_{a←b} P(b), meaning a trusts b regarding the claim P(b). Agent b is effectively claiming that predicate P holds for them, for instance saying: "I own public key k", or "I trust c". We consider the following two properties fundamental to the development.
If $N_S(a,b,c)$ then:
• $\mathcal{J}_{a\leftarrow b}\hat{\mathcal{J}}_{b\leftarrow c}A \wedge \hat{\mathcal{J}}_{a\leftarrow b}A \Longrightarrow \hat{\mathcal{J}}_{a\leftarrow c}A$
• $\mathcal{J}_{a\leftarrow b}\hat{\mathcal{J}}_{b\leftarrow c}A \wedge \hat{\mathcal{J}}_{a\leftarrow b}\hat{\mathcal{J}}_{b\leftarrow c}A \wedge \hat{\mathcal{J}}_{a\leftarrow b}\mathcal{J}_{b\leftarrow c}A \Longrightarrow \hat{\mathcal{J}}_{a\leftarrow c}A$

Note that the proof of the latter statement first deduces $\hat{\mathcal{J}}_{a\leftarrow b}A$ from the pieces of evidence $\hat{\mathcal{J}}_{a\leftarrow b}\hat{\mathcal{J}}_{b\leftarrow c}A$ and $\hat{\mathcal{J}}_{a\leftarrow b}\mathcal{J}_{b\leftarrow c}A$, and then applies the former statement.

In the above statements, $A$ can either be a basic statement $P(c)$, or itself a statement necessary for composing trust further. As such, we need sequences of communication and validity claims, whose length depends on the order of trust we want to express. Using $P$ as a basis, we define a set of higher-order statements $C^n_a(P)$ inductively as follows:
$$C^0_a(P) := \{P(a)\}$$
$$C^{n+1}_a(P) := \{\, \hat{\mathcal{J}}_{a\leftarrow b}A,\ \mathcal{J}_{a\leftarrow b}A \mid b \in \mathbb{A},\ A \in C^n_b(P) \,\}$$
Using the fact that each set is finite, we define higher-order trust as a formula using conjunctions.

Definition 3. We define $n$-th order validity of $a$ in $b$ as:
$$\mathrm{val}_n(a{\leftarrow}b)(P) := \bigwedge \{\, \hat{\mathcal{J}}_{a\leftarrow b}A \mid A \in C^n_b(P) \,\}$$
We define $n$-th order trust of $a$ in $b$ as:
$$\mathrm{trust}_n(a{\leftarrow}b)(P) := \mathcal{B}_a\, \mathrm{val}_n(a{\leftarrow}b)(P)$$

For example, $\mathrm{trust}_0(a{\leftarrow}b)(P) \equiv \mathcal{B}_a\hat{\mathcal{J}}_{a\leftarrow b}P(b)$ and $\mathrm{trust}_1(a{\leftarrow}b)(P) \equiv \bigwedge_c (\mathcal{B}_a\hat{\mathcal{J}}_{a\leftarrow b}\hat{\mathcal{J}}_{b\leftarrow c}P(c) \wedge \mathcal{B}_a\hat{\mathcal{J}}_{a\leftarrow b}\mathcal{J}_{b\leftarrow c}P(c))$.

Let us establish some needed lemmas provable in the logic.

Lemma 8. If $N_S(a,b,c)$ then $\vdash \mathrm{val}_{n+1}(a{\leftarrow}b)(P) \Rightarrow \hat{\mathcal{J}}_{a\leftarrow b}(\mathrm{val}_n(b{\leftarrow}c)(P))$.

Proof.
Note that $\hat{\mathcal{J}}_{a\leftarrow b}A \wedge \hat{\mathcal{J}}_{a\leftarrow b}B \Rightarrow \hat{\mathcal{J}}_{a\leftarrow b}(A \wedge B)$, since $\mathcal{J}_{a\leftarrow b}(A \wedge B) \Rightarrow \mathcal{J}_{a\leftarrow b}A \wedge \mathcal{J}_{a\leftarrow b}B$. These implications can be generalised to conjunctions over finite sets. As a consequence,
$$\bigwedge_{A \in C^n_c(P)} \hat{\mathcal{J}}_{a\leftarrow b}\hat{\mathcal{J}}_{b\leftarrow c}A \;=\; \bigwedge_{A \in C^n_c(P)} (\mathcal{J}_{a\leftarrow b}\hat{\mathcal{J}}_{b\leftarrow c}A \Rightarrow \hat{\mathcal{J}}_{b\leftarrow c}A) \;\Longrightarrow\; (\mathcal{J}_{a\leftarrow b}\,\mathrm{val}_n(b{\leftarrow}c)(P) \Rightarrow \mathrm{val}_n(b{\leftarrow}c)(P)) \;=\; \hat{\mathcal{J}}_{a\leftarrow b}(\mathrm{val}_n(b{\leftarrow}c)(P)) \quad \text{∎}$$

We can compose validity in the following way.

Lemma 9. If $N_S(a,b,c)$, the following properties hold:
1. $\vdash \mathrm{val}_{n+1}(a{\leftarrow}b)(P) \wedge \mathrm{val}_n(b{\leftarrow}c)(P) \Rightarrow \mathrm{val}_n(a{\leftarrow}c)(P)$.
2. $\vdash \mathrm{val}_{n+1}(a{\leftarrow}b)(P) \wedge \mathcal{J}_{a\leftarrow b}\,\mathrm{val}_n(b{\leftarrow}c)(P) \Rightarrow \mathrm{val}_n(a{\leftarrow}c)(P)$.
3. $\vdash \mathrm{val}_n(a{\leftarrow}b)(P) \wedge \mathcal{J}_{a\leftarrow b}\,\mathrm{val}_n(b{\leftarrow}c)(P) \Rightarrow \mathrm{val}_n(a{\leftarrow}c)(P)$.
4. $\vdash \mathrm{trust}_{n+1}(a{\leftarrow}b)(P) \wedge \mathcal{J}_{a\leftarrow b}\,\mathrm{val}_n(b{\leftarrow}c)(P) \Rightarrow \mathrm{trust}_n(a{\leftarrow}c)(P)$.
5. $\vdash \mathrm{trust}_n(a{\leftarrow}b)(P) \wedge \mathcal{J}_{a\leftarrow b}\,\mathrm{val}_n(b{\leftarrow}c)(P) \Rightarrow \mathrm{trust}_n(a{\leftarrow}c)(P)$.

Proof. Let $A \in C^n_c(P)$. In the first three cases we want to show $\hat{\mathcal{J}}_{a\leftarrow c}A$, which means $\mathcal{J}_{a\leftarrow c}A \Rightarrow A$. Since $\mathcal{J}_{a\leftarrow c}A \Rightarrow \mathcal{J}_{a\leftarrow b}\mathcal{J}_{b\leftarrow c}A$, it is sufficient to prove $A$ from $\mathcal{J}_{a\leftarrow b}\mathcal{J}_{b\leftarrow c}A$. So assume $\mathcal{J}_{a\leftarrow b}\mathcal{J}_{b\leftarrow c}A$.
1. $\mathrm{val}_{n+1}(a{\leftarrow}b)(P)$ implies $\hat{\mathcal{J}}_{a\leftarrow b}\mathcal{J}_{b\leftarrow c}A$, hence $\mathcal{J}_{b\leftarrow c}A$.
From $\mathrm{val}_n(b{\leftarrow}c)(P)$ we have $\hat{\mathcal{J}}_{b\leftarrow c}A$, hence $A$.
2. Like before we can derive $\mathcal{J}_{b\leftarrow c}A$. $\mathrm{val}_{n+1}(a{\leftarrow}b)(P)$ also implies $\hat{\mathcal{J}}_{a\leftarrow b}\hat{\mathcal{J}}_{b\leftarrow c}A$, and from $\mathcal{J}_{a\leftarrow b}(\mathrm{val}_n(b{\leftarrow}c)(P))$ we have $\mathcal{J}_{a\leftarrow b}\hat{\mathcal{J}}_{b\leftarrow c}A$, hence $\hat{\mathcal{J}}_{b\leftarrow c}A$, which together with $\mathcal{J}_{b\leftarrow c}A$ yields $A$.
3. From $\mathcal{J}_{a\leftarrow b}(\mathrm{val}_n(b{\leftarrow}c)(P))$ we have $\mathcal{J}_{a\leftarrow b}\hat{\mathcal{J}}_{b\leftarrow c}A$, hence with $\mathcal{J}_{a\leftarrow b}\mathcal{J}_{b\leftarrow c}A$ we get $\mathcal{J}_{a\leftarrow b}A$. From $\mathrm{val}_n(a{\leftarrow}b)(P)$ we get $\hat{\mathcal{J}}_{a\leftarrow b}A$, and hence $A$.
4. Apply axiom K and $\mathcal{J}_{a\leftarrow b}A \Rightarrow \mathcal{B}_a\mathcal{J}_{a\leftarrow b}A$ to property 2.
5. Apply axiom K and $\mathcal{J}_{a\leftarrow b}A \Rightarrow \mathcal{B}_a\mathcal{J}_{a\leftarrow b}A$ to property 3. ∎

Suppose we have some forwarding network $S$.

Definition 4. We say that a trust statement $\mathrm{trust}_n(a{\leftarrow}b)(P)$ is shared over $S$ if $\mathrm{trust}_n(a{\leftarrow}b)(P)$ and $\mathcal{J}_{c\leftarrow a}\,\mathrm{val}_n(a{\leftarrow}b)(P)$ hold for any $c \in \mathbb{A}$ such that $N_S(c,a,b)$.

Sharing trust over a network involves sending your claim of trust to all relevant parties. The forwarding network $S$ specifies everyone who can use your claims and may depend on them, and as such gives a specification of where to send your claims.

Theorem 1. Given a forwarding network $S$ and $N_S(a,b,c)$:
• If both $\mathrm{trust}_n(a{\leftarrow}b)(P)$ and $\mathrm{trust}_n(b{\leftarrow}c)(P)$ are shared over $S$, then $\mathrm{trust}_n(a{\leftarrow}c)(P)$ is shared over $S$.
• If both $\mathrm{trust}_{n+1}(a{\leftarrow}b)(P)$ and $\mathrm{trust}_n(b{\leftarrow}c)(P)$ are shared over $S$, then $\mathrm{trust}_n(a{\leftarrow}c)(P)$ is shared over $S$.

Proof.
Suppose $N_S(a,b,c)$, and both $\mathrm{trust}_n(a{\leftarrow}b)(P)$ and $\mathrm{trust}_n(b{\leftarrow}c)(P)$ are shared over $S$. Then it holds that $\mathrm{trust}_n(a{\leftarrow}b)(P)$ and $\mathcal{J}_{a\leftarrow b}\,\mathrm{val}_n(b{\leftarrow}c)(P)$, so by property 5 of Lemma 9, $\mathrm{trust}_n(a{\leftarrow}c)(P)$. Supposing moreover $N_S(d,a,c)$, then by Lemma 5, $N_S(d,b,c)$ and $N_S(d,a,b)$, so $\mathcal{J}_{d\leftarrow a}\,\mathrm{val}_n(a{\leftarrow}b)(P)$ and $\mathcal{J}_{d\leftarrow b}\,\mathrm{val}_n(b{\leftarrow}c)(P)$, the latter implying $\mathcal{J}_{d\leftarrow a}\mathcal{J}_{a\leftarrow b}\,\mathrm{val}_n(b{\leftarrow}c)(P)$. With axiom K extended to $\mathcal{J}$ by Lemma 6, and by property 3 of Lemma 9, we get $\mathcal{J}_{d\leftarrow a}\,\mathrm{val}_n(a{\leftarrow}c)(P)$. We conclude that $\mathrm{trust}_n(a{\leftarrow}c)(P)$ is shared over $S$. The second property is proven similarly, using properties 4 and 2 of Lemma 9 instead. ∎

6 Examples from Public Key Infrastructures

Suppose we have a finite directed graph $G$ expressing agents and communication channels between agents. We use this to formulate a forwarding network $S$ in two different ways:
• We define $S_G$ as the set of all shortest paths between vertices, with $(a,\beta,c) \in S_G$ a shortest path from $c$ to $a$, and $(a) \in S_G$ the shortest path from $a$ to $a$.
• Alternatively, if $G$ is acyclic, we can define $S'_G$ to be the set of all paths which do not repeat vertices.

We look at examples of public key infrastructures. We write $a \xrightarrow{n} b$ to say that $b \to a$ is an edge in $G$ and $\mathrm{trust}_n(a{\leftarrow}b)(P)$ is shared over $S_G$. We write $a \overset{n}{\dashrightarrow} b$ to say we can derive that $\mathrm{trust}_n(a{\leftarrow}b)(P)$ is shared over $S_G$ (though $b \to a$ is not an edge). We derive $a \overset{n}{\dashrightarrow} b$ statements using Theorem 1. The following variety of examples of public key infrastructures is taken from [14] and [47].
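To make the path bookkeeping concrete, the construction of $S_G$ and the relation $N_S$ can be sketched in a few lines of Python. This is a minimal illustration of the definitions above; the helper names `shortest_paths` and `N` are ours, not from the paper.

```python
def shortest_paths(edges, agents):
    # Forwarding network S_G: all shortest paths of the digraph,
    # including the trivial path (a,) for each agent a.
    paths = {(a,) for a in agents}
    for src in agents:
        dist = {src: 0}          # fewest edges needed to reach a vertex
        layer = [(src,)]
        while layer:
            nxt = []
            for p in layer:
                for (u, v) in edges:
                    if u != p[-1]:
                        continue
                    d = len(p)   # edge count of the extended path
                    if dist.get(v, d) < d:
                        continue # a strictly shorter route to v exists
                    dist[v] = d
                    q = p + (v,)
                    paths.add(q)
                    nxt.append(q)
            layer = nxt
    return paths

def N(S, a, b, c):
    # N_S(a, b, c): some path in S runs from a to c with b in between.
    return any(p[0] == a and p[-1] == c and b in p[1:-1] for p in S)

# A chain a - b - c - d exhibits both properties of Lemma 5:
S = shortest_paths({("a", "b"), ("b", "c"), ("c", "d")}, ["a", "b", "c", "d"])
assert N(S, "a", "b", "c") and N(S, "a", "c", "d")   # premises
assert N(S, "a", "b", "d") and N(S, "b", "c", "d")   # conclusions
```

For the acyclic variant $S'_G$, one would instead enumerate all paths that do not repeat vertices.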
In all the examples given below, there is at most one path between any two entities which does not revisit a vertex; it is therefore automatically the shortest path.

Dedicated domain PKI

The simplest model of indirect trust is the dedicated domain PKI, where a specific certification authority is used to validate keys.

[Diagram: $a \xrightarrow{1} \mathrm{CA} \xrightarrow{0} b$, from which $a \overset{0}{\dashrightarrow} b$ is derived.]

A variation of this is the shared domain PKI, where one CA validates keys from different users. Other systems may use more intermediate entities on the way towards validation. This can be done in two distinct ways, shown in the next two examples.

Bridge Certification Authority

In this system, everyone has their own CA which they trust and use to validate their keys. This CA is then linked to a bridge certification authority (BCA), and mutual trust exists between the CAs. When claims are made, each CA in the chain can in turn validate the claim given their trust in each other. Below is an example of a BCA system, with on the left the assumptions made, and on the right examples of what trust can be derived.

[Diagram: left, the assumptions — each of $a$ and $b$ trusts their own CA, each of $\mathrm{CA}_a$ and $\mathrm{CA}_b$ validates its user's key, and the CAs and the BCA trust each other; right, the trust derivable between the users across the chain.]

Domain trusted list

Another way of composing trust is by using a domain trusted list. This is different from a CA, as it validates CAs themselves, rather than ownership of keys directly. Hence, trust in a domain trusted list is of a higher order than trust in something like a bridge certification authority. In the diagram below, we see an example of such a system with two agents $b$ and $c$ we want to trust. On the left, we see the assumptions, and on the right some trust derivations.

[Diagram: left, the assumptions $a \xrightarrow{2} \mathrm{TL}$, $\mathrm{TL} \xrightarrow{1} \mathrm{CA}_b$, $\mathrm{TL} \xrightarrow{1} \mathrm{CA}_c$, $\mathrm{CA}_b \xrightarrow{0} b$ and $\mathrm{CA}_c \xrightarrow{0} c$; right, derived trust such as $a \overset{0}{\dashrightarrow} b$.]

Hierarchical PKI

In one last example, we consider CAs linked in a hierarchical structure. Like in BCA, the root CA (RCA) need only be trusted as a CA.
[Diagram: the RCA at the root, with $\mathrm{CA}_1$, $\mathrm{CA}_2$ and $\mathrm{CA}_3$ below it, each validating the key of its respective user $b_1$, $b_2$, $b_3$.]

Direct mutual trust

This is the goal of mutual trust.

[Diagram: $a \xrightarrow{0} b$ and $b \xrightarrow{0} a$.]

Personal CA

Here, everyone has their own CA to consult and check the trustworthiness of other agents.

[Diagram: $a \xrightarrow{1} \mathrm{CA}_a \xrightarrow{0} b$ and $b \xrightarrow{1} \mathrm{CA}_b \xrightarrow{0} a$.]

We can derive $a \overset{0}{\dashrightarrow} b$ and $b \overset{0}{\dashrightarrow} a$.

Mesh PKI

Here, everyone has their own CA they can use to advocate the validity of their claims. These CAs are connected in a mesh, where each CA trusts other CAs in their ability to authenticate keys. A user then only needs to trust their own CA in being able to authenticate others' claims.

[Diagram: users $a$, $b$, $c$ each trust their own CA at order 1; $\mathrm{CA}_a$, $\mathrm{CA}_b$ and $\mathrm{CA}_c$ are connected by order-1 trust and each validates its own user's key at order 0.]

Note that not all CAs need to be directly connected. As long as there is a chain of trust between the CAs, trust can be derived.

7 Threshold Trust

There are situations in which a CA is considered unreliable. This can happen for two reasons:
• CAs may be incorrect: they are lying or simply wrong about the validity of a claim.
• CAs may be unresponsive regarding trust: they have not verified or communicated whether they consider someone trustworthy. (Note that the CA may still be responsive regarding forwarding of claims, as that is dealt with separately and can be independently verified.)

In both cases, we can use multiple CAs to mitigate the issue. In the first situation, we can confirm with multiple CAs whether an entity is trusted before accepting the entity's claims. In the second situation, we attempt to consult multiple CAs, hoping at least some assert their trust in the entity.
Consider Lemma 9 again, and note that when $N_S(a,i,b)$:
$$\mathrm{trust}_1(a{\leftarrow}i)(P) \wedge \mathcal{I}_{a\leftarrow i}(\mathrm{val}_0(i{\leftarrow}b)(P)) \Longrightarrow \mathrm{trust}_0(a{\leftarrow}b)(P)$$
We see that trust composition requires two statements: the trust in the CA, given by $\mathrm{trust}_1(a{\leftarrow}i)(P)$, and the sharing of validation by the CA, given by $\mathcal{I}_{a\leftarrow i}(\mathrm{val}_0(i{\leftarrow}b)(P))$. If we think a CA is dishonest, we lack the former statement; if we think a CA is unresponsive, we lack the latter.

Consider the following situation. We have two users $a$ and $b$, and two CAs named $i$ and $j$. We assume that trust of $a$ in $b$ gets delegated to both $i$ and $j$, asserting $N_S(a,i,b)$ and $N_S(a,j,b)$. We get the following two results:
• $\mathrm{trust}_1(a{\leftarrow}i)(P) \vee \mathrm{trust}_1(a{\leftarrow}j)(P)$ and $\mathcal{I}_{a\leftarrow i}(\mathrm{val}_0(i{\leftarrow}b)(P)) \wedge \mathcal{I}_{a\leftarrow j}(\mathrm{val}_0(j{\leftarrow}b)(P))$ imply $\mathrm{trust}_0(a{\leftarrow}b)(P)$. If only one of the two CAs is trusted by $a$, then both need to assert trust in $b$ to guarantee that trust can be derived.
• $\mathrm{trust}_1(a{\leftarrow}i)(P) \wedge \mathrm{trust}_1(a{\leftarrow}j)(P)$ and $\mathcal{I}_{a\leftarrow i}(\mathrm{val}_0(i{\leftarrow}b)(P)) \vee \mathcal{I}_{a\leftarrow j}(\mathrm{val}_0(j{\leftarrow}b)(P))$ imply $\mathrm{trust}_0(a{\leftarrow}b)(P)$. If both CAs are trusted, only one needs to assert trust in $b$.

The above two examples resolve problems created by potential attacks on the network. Possible incorrectness of a CA may be due to an attacker impersonating the CA; in the first example, trust can still be derived if one CA gets corrupted this way. Possible unresponsiveness may be due to an attacker removing a CA's published certificates; in the second example, trust can still be derived if one CA gets disrupted.
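The two bullet points reduce to a simple propositional observation: trust in $b$ is derivable exactly when some CA is simultaneously trusted and responsive, since Lemma 9 composes trust per CA. The following Python check of that reading is our own simplification, not a lemma from the paper:

```python
from itertools import product

def derivable(trusted, responsive):
    # Lemma 9 composes per CA: CA i yields trust_0(a<-b)(P) when a both
    # trusts i (trust_1) and received i's validation (I_{a<-i} val_0).
    return any(t and r for t, r in zip(trusted, responsive))

for ti, tj, ri, rj in product([False, True], repeat=4):
    # Pattern 1: at least one CA trusted, both responsive.
    if (ti or tj) and (ri and rj):
        assert derivable((ti, tj), (ri, rj))
    # Pattern 2: both CAs trusted, at least one responsive.
    if (ti and tj) and (ri or rj):
        assert derivable((ti, tj), (ri, rj))
```

Both patterns guarantee that some CA is trusted and responsive at once, which is what the composition step needs.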
In cases where both incorrectness and unresponsiveness are possibilities, we would need to consult at least three CAs to validate a claim. We get threshold PKI, the simplest being a 2-out-of-3 threshold. Given formulas $A$, $B$, $C$, define:
$$\mathrm{2of3}(A,B,C) := (A \wedge B) \vee (A \wedge C) \vee (B \wedge C)$$
Note that $\mathrm{2of3}(A,B,C) \equiv (A \vee B) \wedge (A \vee C) \wedge (B \vee C)$. Consider again agents $a$ and $b$, together with three CAs $i$, $j$ and $k$, and let $S = \{aib, ajb, akb\}$; hence $N_S(a,i,b)$, $N_S(a,j,b)$ and $N_S(a,k,b)$. We get threshold trust:

Lemma 10. If $N_S(a,i,b)$, $N_S(a,j,b)$ and $N_S(a,k,b)$, then:
$$\mathrm{2of3}(\mathrm{trust}_1(a{\leftarrow}i)(P),\ \mathrm{trust}_1(a{\leftarrow}j)(P),\ \mathrm{trust}_1(a{\leftarrow}k)(P))\ \wedge\ \mathrm{2of3}(\mathcal{I}_{a\leftarrow i}\,\mathrm{val}_0(i{\leftarrow}b)(P),\ \mathcal{I}_{a\leftarrow j}\,\mathrm{val}_0(j{\leftarrow}b)(P),\ \mathcal{I}_{a\leftarrow k}\,\mathrm{val}_0(k{\leftarrow}b)(P))\ \Longrightarrow\ \mathrm{trust}_0(a{\leftarrow}b)(P)$$

In [52], a logic is considered in which trust thresholds are directly baked into the modalities themselves.

8 Wish Modality

We have seen how agents can request information. As discussed before, we have to be careful when using the communication modality, as any message is considered to be a claim made by the sender. As is, there is no direct way to formally communicate facts which are not yet believed without having to lie, except by forwarding other people's claims. This is quite inconvenient in networks based on trust. A solution is to use a wish modality $\mathcal{W}_a$, which tells us what an agent $a$ wants to be true. The modality fits in the framework of this paper, since it satisfies axiom K: if agent $a$ desires $A$ to be true, and desires $B$ to be true, then one should be able to reason that $a$ wants both of them to be true.
In cases where the agent wants either of the two to be true, but not necessarily both, we instead have to write $\mathcal{W}_a(A \vee B)$. Most commonly, a wish can be used to send a request to another agent: $\mathcal{I}_{a,b}\mathcal{W}_b A$ means $b$ tells $a$ it wishes $A$ to be true. There are several examples of useful wishes one can utter:
• With $\mathcal{I}_{a,b}\mathcal{W}_b t$, $b$ tells $a$ it would like some principal statement $t$ to be true, e.g. key ownership. If $a$ has the ability to make $t$ true, this is a direct request from $b$ to do so.
• With $\mathcal{I}_{a,b}\mathcal{W}_b\mathcal{I}_{c,a}A$, $b$ requests $a$ to send the claim $A$ to agent $c$. For instance, $a$ is some authority which is requested to send signed credentials to some retailer $c$.
• $\mathcal{I}_{a,b}\mathcal{W}_b\mathcal{B}_c A$ is a request from $b$ to $a$ to help convince $c$ of a certain fact.

As an example, suppose $t$ is the token expressing that $a$ owns a certain product or key. Suppose we have a store $c$ which can make $t$ true, as specified by $\mathcal{B}_c t \Rightarrow t$. A customer $a$ would like $t$ to be true and sends a request to $c$: $\mathcal{I}_{c,a}\mathcal{W}_a t$. Now, $c$ is willing to make $t$ true if $a$ has the proper credentials, expressed by some other token $u$; so $c$ sends $\mathcal{I}_{a,c}(u \Rightarrow t)$. Now $a$ needs to convince $c$ of the truth of $u$. They can do so in several ways. If $c$ does not trust $a$ directly (at least not regarding $u$), we need an intermediate authority $b$ which $c$ trusts. Either $a$ sends a request to $b$ to validate and send the credentials to $c$ using the above request protocol, or alternatively, $a$ can let $c$ know that $b$ can vouch for the truth of the credentials, and $c$ can request validation from $b$ independently.

One last example is using the wish modality to request information. Suppose agent $a$ wants to know whether $A$ is true or not. In this case, it would make the statement $\mathcal{W}_a((A \Rightarrow \mathcal{B}_a A) \wedge (\neg A \Rightarrow \mathcal{B}_a(\neg A)))$, and send this to someone who may know. Of course, without prearranged obligations or guarantees, requests could be ignored.
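The request patterns above are plain formula shapes, and can be represented directly as data. A minimal sketch of the store example, with hypothetical constructor names of our choosing:

```python
# Formulas as nested tuples; I(a, b, phi) reads "b tells a phi".
def I(rcv, snd, body): return ("I", rcv, snd, body)
def W(agent, body):    return ("W", agent, body)
def implies(p, q):     return ("=>", p, q)

# The store example: customer a asks store c for token t,
# and c answers that credentials u would yield t.
request     = I("c", "a", W("a", "t"))
requirement = I("a", "c", implies("u", "t"))

assert request == ("I", "c", "a", ("W", "a", "t"))
```

Such an encoding is the natural starting point for mechanising the proof terms of Section 10.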
9 Key Ownership and Proxy Trust

One of the main motivations of this paper is to describe networks for authenticating ownership of keys. We could consider such keys as entities in their own right, used as aliases to send further messages. Even if one does not know the owner of a key, one may still reason about the beliefs and interactions of the owner, as represented by the key. In authorization logics, one may use statements of ownership to translate authority from an entity to their owned keys. Here we can similarly use key ownership to translate trust in an entity to trust in their owned keys.

We take as set of tokens $T = \{C_{a,b} \mid a,b \in \mathbb{A}\}$, where $C_{a,b}$ states that $a$ is controlled by $b$. We consider this as expressing a reflexive and transitive relationship, with $\forall a.\, C_{a,a}$ and $\forall a,b,c.\, (C_{a,b} \Rightarrow C_{b,c} \Rightarrow C_{a,c})$ as additional public assumptions. Unlike the delegation network, ownership of keys is not public knowledge, and each entity would need to verify on their own whether a certain key is owned by a certain entity.

In public key infrastructures, entities make claims about ownership of keys. One can trust a person directly when they claim to control a key, or one may trust a CA to validate someone's claims of ownership of keys. This is modeled by instantiating the $P$ in trust statements $\mathrm{trust}_n(a{\leftarrow}b)(P)$ with $P(c) = C_{d,c}$ for different choices of $d \in \mathbb{A}$. This implements the claim: "I have control of agent $d$."

Besides being examples of claims we want to validate, ownership of keys can have additional effects on the modalities. We can for instance add the following axioms to $\Delta$:
• $\forall a,b.\, (C_{a,b} \Rightarrow \mathcal{B}_b C_{a,b})$.
• $\forall a,b.\, (C_{a,b} \Rightarrow (\mathcal{B}_a X \Leftrightarrow \mathcal{B}_b X))$.
• $\forall a,b.\, (C_{a,b} \Rightarrow \forall c.\, (\mathcal{I}_{a,c} X \Rightarrow \mathcal{I}_{b,c} X))$.
• $\forall a,b.\, (C_{a,b} \Rightarrow \forall c.\, (\mathcal{I}_{c,a} X \Rightarrow \mathcal{I}_{c,b} X))$.

These statements say the following: agents know which keys they control.
If agent $a$ is controlled by agent $b$, then $a$ and $b$ believe the same things, as they are identifiers (alter egos) of the same entity. Moreover, messages sent to and from $a$ can be considered as sent to and from $b$, respectively.

Using the axioms above, we can transfer trust: if you trust an agent, then you can trust any agent controlled by them. This result allows people to act using a digital identity they control, and still be trusted.

Lemma 11. $\vdash_\Delta \forall b,c.\, (C_{b,c} \Rightarrow \forall a.\, (\mathcal{V}_{a,c} X \Rightarrow \mathcal{V}_{a,b} X))$.

Proof. Suppose $C_{b,c}$ and $\mathcal{V}_{a,c}A$ hold for some keys $a,b,c$ and statement $A$. We prove $\mathcal{V}_{a,b}A$. If $\mathcal{I}_{a,b}A$, then since $C_{b,c}$, we know by the new axioms that the message was actually sent by $c$, so $\mathcal{I}_{a,c}A$. By validity $\mathcal{V}_{a,c}A$, we know that $c$ believes it: $\mathcal{B}_c A$. Again by $C_{b,c}$, we conclude that $b$ believes it too: $\mathcal{B}_b A$. If $\mathcal{B}_b A$, then by $C_{b,c}$ and the new axioms, $\mathcal{B}_c A$. Using reliability from $\mathcal{V}_{a,c}A$, we can derive $A$. ∎

Corollary 1. $\vdash_\Delta \forall a,b,c.\, \mathcal{B}_a C_{b,c} \Rightarrow (\mathcal{T}_{a,c} X \Rightarrow \mathcal{T}_{a,b} X)$.

Proof. Suppose $\mathcal{B}_a C_{b,c}$ and $\mathcal{T}_{a,c}A = \mathcal{B}_a\mathcal{V}_{a,c}A$ hold for some keys $a,b,c$ and statement $A$. Then $\mathcal{T}_{a,b}A = \mathcal{B}_a\mathcal{V}_{a,b}A$ is true by lifting Lemma 11 to the perspective $\mathcal{B}_a$ using axiom K. ∎

10 Logic Calculi

There are many different equivalent ways of formulating a calculus whose power matches our logic. We highlight three different calculi:
1. A basic intuitionistic sequent calculus for provability, as formulated in Figure 2.
2. A lambda calculus following Fitch-style proof systems, as used for instance in dependent modal logics [15].
3. A cut-free sequent calculus for decidable proof search.

We shall briefly describe a version of the second, which is suitable for expressing proofs as terms in a lambda calculus. Note that the logic used in this paper is less powerful than those considered in recent works, e.g. the dependent modal type theories of [10, 26].
This is in order to facilitate easier proof search, which is useful when imposing accountability of implied statements, and to avoid introducing unneeded complexity. The logic can of course be extended if needed, possibly sacrificing decidability.

We define modal contexts as follows:
$$\Gamma, \Delta ::= \varepsilon \mid \Gamma, x{:}A \mid \Gamma, \{\mathcal{M}\}$$
These may contain assumptions in the form of statements $A$ with an associated name $x$ for proofs. Additionally, we have modalities $\{\mathcal{M}\}$ expressing a modal shift: e.g. the context $x{:}A, \{\mathcal{B}_a\}, y{:}B$ can be read as: supposing $A$ is true, we consider the perspective of $a$, where we additionally suppose $a$ believes $B$. Let $[-]$ be the map from contexts to $M^*$ defined as: $[\varepsilon] = \varepsilon$, $[\Gamma, x{:}A] = [\Gamma]$ and $[\Gamma, \{\mathcal{M}\}] = [\Gamma], \mathcal{M}$.

Figure 4 presents the Fitch-style lambda calculus for our logic. Its rules are:
• (var) $\Gamma, x{:}A, \Delta \vdash \mathrm{var}(x) : A$, provided $[\Delta] = \varepsilon$.
• ($\top$) $\Gamma \vdash * : \top$.
• ($\bot$) If $\Gamma \vdash P : \bot$, then $\Gamma \vdash \mathrm{abs}_A(P) : A$.
• (lock) If $\Gamma, \{\mathcal{M}\} \vdash P : A$, then $\Gamma \vdash \mathrm{lock}_{\mathcal{M}}(P) : \mathcal{M}A$.
• (key) If $\Gamma \vdash P : \mathcal{M}A$, $\mathcal{M} \Rrightarrow l$ and $[\Delta] = l$, then $\Gamma, \Delta \vdash \mathrm{key}_{\mathcal{M} \Rrightarrow l}(P) : A$.
• (app) If $\Gamma \vdash P : A \Rightarrow B$ and $\Gamma \vdash Q : A$, then $\Gamma \vdash P \cdot Q : B$.
• ($\lambda$) If $\Gamma, x{:}A \vdash P : B$, then $\Gamma \vdash \lambda x{:}A.\, P : A \Rightarrow B$.
• ($\wedge$) If $\Gamma \vdash P : A_1$ and $\Gamma \vdash Q : A_2$, then $\Gamma \vdash (P,Q) : A_1 \wedge A_2$; if $\Gamma \vdash P : A_1 \wedge A_2$, then $\Gamma \vdash \pi_i(P) : A_i$.
• ($\vee$) If $\Gamma \vdash P : A_i$, then $\Gamma \vdash \overline{\pi}_i(P) : A_1 \vee A_2$; if $\Gamma, x{:}A \vdash P : C$, $\Gamma, y{:}B \vdash Q : C$ and $\Gamma \vdash R : A \vee B$, then $\Gamma \vdash \mathrm{case}_C(R)\{\overline{\pi}_1(x) \mapsto P,\ \overline{\pi}_2(y) \mapsto Q\} : C$.

Figure 4: Fitch-style lambda calculus for our logic.

Comparing this to the literature, here we rename mod to lock, and unmod to key. Furthermore, we do not apply our additional modal axioms on the contexts directly, instead folding them into the key operations. Hence, once one unlocks a term, the modal context gets fixed. Shifting modal contexts now becomes a meta operation on terms, similar to variable substitutions.
As an example, a possible lambda term for the sequent $x : \mathcal{B}_a(\mathcal{I}_{a\leftarrow b}A \Rightarrow A) \vdash \mathcal{I}_{a\leftarrow b}A \Rightarrow \mathcal{B}_a A$ is
$$\lambda y{:}\mathcal{I}_{a\leftarrow b}A.\ \mathrm{lock}_{\mathcal{B}_a}\big(\mathrm{key}_{\mathcal{B}_a \Rrightarrow \mathcal{B}_a}(\mathrm{var}(x)) \cdot \mathrm{lock}_{\mathcal{I}_{a\leftarrow b}}(\mathrm{key}_{\mathcal{I}_{a\leftarrow b} \Rrightarrow \mathcal{B}_a \cdot \mathcal{I}_{a\leftarrow b}}(\mathrm{var}(y)))\big)$$

10.1 Meta constructions

The intuitionistic modal logic forms a foundation for expressing a plethora of concepts; we can construct a variety of other operations using the above tools. For instance, with $\bot$ marking absurdity, we can define negation $\neg A$ as $\neg A = A \Rightarrow \bot$, following the standard intuitionistic tradition.

Another useful construction is conjunction and disjunction over finite sets of formulas. Given a finite subset $S \subseteq_{\mathrm{fin}} F$, we define the following:
• $\bigwedge S$ is the conjunction over $S$, which holds precisely when all $A \in S$ hold.
• $\bigvee S$ is the disjunction over $S$, which holds precisely if some $A \in S$ holds.

These are defined inductively, iterating binary conjunctions and disjunctions. In particular, $\bigwedge \emptyset \equiv \top$ and $\bigvee \emptyset \equiv \bot$.

When describing distributed networks, we consider the additional finite set of agents $\mathbb{A}$. Both modalities and tokens may refer to specific agents, marking their perspectives and primitive statements of interest. As such, formulas can refer to specific agents, and the referenced agent can be used as a parameter. Given such a parametrized formula $f : \mathbb{A} \to F$, and given a subset $S \subseteq \mathbb{A}$, we write:
• $\forall x \in S.\, f(x)$ for the universal quantification, equivalent to $\bigwedge \{f(a) \mid a \in S\}$.
• $\exists x \in S.\, f(x)$ for the existential quantification, equivalent to $\bigvee \{f(a) \mid a \in S\}$.

We write $\forall x.\, f(x)$ and $\exists x.\, f(x)$ in case $S = \mathbb{A}$, and $\forall_{P(x)} f(x)$ and $\exists_{P(x)} f(x)$ if $S = \{a \in \mathbb{A} \mid P(a)\}$ with $P$ some predicate on agents. These quantifiers could be expressed using formulas polymorphic over agent variables as well.
However, we would like to avoid such language generalisations at this point, focusing instead on the underlying theory of trust.

10.2 Notes on Decidable Proof Search

Let us briefly address how the decomposability property on axioms facilitates easier proof search. Given decomposability, we can reduce any proof to one only using $\mathrm{key}_{\mathcal{M} \Rrightarrow \mathcal{N}}$ and $\mathrm{key}_{\mathcal{M} \Rrightarrow \mathcal{N} \cdot \mathcal{M}^\ominus}$ operators. Supposing we want to prove $\mathcal{N}B$ with a context containing $\mathcal{M}A$, it is then sufficient to use the formula in context as $A$ and/or as $\mathcal{M}^\ominus A$, depending on which axioms are available.

More concretely, proof search may happen in a cut-free variant of the logic, extending the usual calculi and their cut elimination proofs [44]. These calculi use contexts excluding the $\{\mathcal{M}\}$ modal locks, instead defining for each modality $\mathcal{N}$ an explicit operation $\mathcal{N}^{-1}$ on contexts, where $\mathcal{N}^{-1}(\Gamma)$ contains $A$ for any $\mathcal{M}A \in \Gamma$ such that $\mathcal{M} \Rrightarrow \mathcal{N}$, and $\mathcal{M}^\ominus A$ for any $\mathcal{M}A \in \Gamma$ such that $\mathcal{M} \Rrightarrow \mathcal{N} \cdot \mathcal{M}^\ominus$. We then handle modalities with the single rule: from $\mathcal{M}^{-1}(\Gamma) \vdash A$ infer $\Gamma \vdash \mathcal{M}A$.

We then establish that the associated calculus has cut elimination, and note that we can associate a subformula order establishing that proof search must terminate. We get that if $M$ is finite and Ax is a reflexive, transitive and decomposable set of axioms, the associated calculus has decidable proof search. Decidability may moreover be derived by proving our axioms an instance of those systems specified and proven decidable in [23], though this has not been verified.

This need not mean that the proof search is practical. Moreover, this is sensitive to the chosen axioms; note the need for decomposability and the lack of axioms of the form $\mathcal{M}\mathcal{N}A \Rightarrow \mathcal{R}A$ in our formalism.

11 Kripke Model

The proof system introduced in the previous section connects our logic to a categorical model [26], describing explicitly the handling and manipulation of proofs.
We shall furthermore consider a Kripke model for establishing soundness, giving an interpretation of the formulas in terms of possible worlds which describe who knows and says what. For the Kripke semantics, we consider a traditional variant [12, 17] of intuitionistic modal logic, and also briefly consider Simpson's [48] version.

Definition 5. A modal Kripke frame on $(T, M)$ consists of a quadruple $(W, \leq, S_-, R_-)$, where $W$ is a set of worlds, $\leq$ is a preorder on $W$, $S$ associates to each token $t \in T$ a subset $S_t \subseteq W$, and $R$ associates to each modality $\mathcal{M} \in M$ a binary endorelation $R_\mathcal{M} \subseteq W^2$, such that:
1. If $v \leq w$ and $v \in S_t$, then $w \in S_t$.
2. If $v \leq w$ and $w R_\mathcal{M} w'$, then there is a $v' \in W$ such that $v R_\mathcal{M} v'$ and $v' \leq w'$.

At their core, Kripke frames consider possible worlds $W$, where each world not only describes which fundamental statements are true, via the subsets $S_t$, but also what is communicated and believed. The relation $R_{\mathcal{B}_a}$, for instance, describes for each $v$ all possible worlds $w$ agent $a$ thinks they might be in. So if $v R_{\mathcal{B}_a} w$ with $w \in S_t$ and $v R_{\mathcal{B}_a} k$ with $k \notin S_t$, then in world $v$ agent $a$ is unsure whether statement $t$ is true; hence $\mathcal{B}_a t$ is not true in $v$.

Last but not least, $\leq$ implements an interpretation of intuitionistic logic. In this situation, statements are not simply true or false: some statements become true or proven over time, while others remain false or unproven. As such, we can think of $\leq$ as a progression of time; the more agents communicate with each other, the more they learn. This progression may be nondeterministic, as multiple distinct worlds may spawn from one world.

Note that in some definitions of Kripke frames, the second condition is removed, and this property is instead baked into the interpretation of modal formulas. In our case, some conditions on $R$ cannot be avoided regardless, since they need to accommodate axioms like $\mathcal{I}_{a\leftarrow b}A \Rightarrow \mathcal{B}_a\mathcal{I}_{a\leftarrow b}A$.
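On finite data, the two frame conditions of Definition 5 can be checked by brute force. The following Python sketch (the function name is ours) makes the quantifier structure of the conditions explicit:

```python
def is_modal_kripke_frame(W, leq, S, R):
    # leq: set of pairs (v, w) meaning v <= w; S: token -> set of worlds;
    # R: modality -> set of pairs (v, w) meaning v R_M w.
    # Condition 1: truth of tokens persists along <=.
    for St in S.values():
        for (v, w) in leq:
            if v in St and w not in St:
                return False
    # Condition 2: if v <= w and w R_M w', then some v' has
    # v R_M v' and v' <= w'.
    for Rm in R.values():
        for (v, w) in leq:
            for (w1, w2) in Rm:
                if w1 != w:
                    continue
                if not any((v, v2) in Rm and (v2, w2) in leq
                           for v2 in W):
                    return False
    return True

# Two worlds 0 <= 1; token t becomes true in world 1; the belief
# relation of an agent relates each world to itself.
leq = {(0, 0), (1, 1), (0, 1)}
assert is_modal_kripke_frame({0, 1}, leq, {"t": {1}}, {"Ba": {(0, 0), (1, 1)}})
# Letting t be true in 0 but not in 1 violates condition 1:
assert not is_modal_kripke_frame({0, 1}, leq, {"t": {0}}, {"Ba": {(0, 0), (1, 1)}})
```

The same brute-force style extends directly to the satisfiability clauses and the conformance check of Definition 6 on finite frames.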
Let Ax be a set of unfolding modal axioms on $M$.

Definition 6. A modal Kripke frame $(W, \leq, S, R)$ conforms to Ax if $\mathcal{M} \Rrightarrow \mathcal{N}_1 \cdot \ldots \cdot \mathcal{N}_n \in \mathrm{Ax}$ implies $R_{\mathcal{N}_1}; \ldots; R_{\mathcal{N}_n} \subseteq R_\mathcal{M}$.

Given a Kripke frame, we define satisfiability of a formula $A$ in a world $w \in W$ inductively on formulas:
• $w \models \top$, $w \not\models \bot$, and $w \models t$ iff $w \in S_t$.
• $w \models \mathcal{M}A$ iff $\forall v.\, (w R_\mathcal{M} v \Longrightarrow v \models A)$.
• $w \models A \Rightarrow B$ iff $\forall v.\, ((w \leq v \wedge v \models A) \Longrightarrow v \models B)$.
• $w \models A \wedge B$ iff $w \models A$ and $w \models B$.
• $w \models A \vee B$ iff $w \models A$ or $w \models B$.

Some immediate properties:
• If $w \models A$ and $w \leq v$, then $v \models A$.
• If the frame conforms to Ax and $\mathcal{M} \Rrightarrow \mathcal{N}_1 \cdot \ldots \cdot \mathcal{N}_n \in \mathrm{Ax}$, then $w \models \mathcal{M}A \Rightarrow \mathcal{N}_1 \ldots \mathcal{N}_n A$ for each $A \in F$ and $w \in W$.

Theorem 2. If $\vdash A$ is provable, then $w \models A$ for any modal Kripke frame $(W, \leq, S, R)$ and world $w \in W$.

12 Conclusions

We end this paper with some final considerations.

12.1 Regarding Privacy

In this paper, we have not considered privacy. We have considered responsiveness based on consent, for instance with the wish modality, where an authority has promised to share some fact after being asked to do so. However, this does not prevent the same authority from sharing this fact without the consent of the owner of the information. Privacy specifically considers statements about entities not believing or receiving messages about some sensitive piece of data. Adding privacy may be an interesting next step, which would involve focusing more on negative statements, and would potentially necessitate adding further modalities.

12.2 Universal Knowledge

Both the modalities $\mathcal{B}_\Omega$ and $\square$ describe a kind of public knowledge. The main difference is that we have $\square X \Rightarrow X$, but not the analogue for $\mathcal{B}_\Omega$. For instance, if we can prove $\mathcal{B}_\Omega(A_1 \wedge \dots \wedge A_n \Rightarrow B)$, we know that everyone (except us) can prove $A_1, \dots, A_n \vdash B$. The $\square$ can be used to make universally held assumptions.
This particular use of the $\square$ modality goes back to early proof theory by Gentzen [25], and is inspired by the work on dual-context modal logic developed in [37]. In the latter, a second context is used to describe formulas which are marked by the $\square$ modality, and an equivalence is established between dual-context modal logic and modal logic with $\square$.

We could further generalize the logic of this paper to use $\square$ for marking public statements. In order for this new modality to describe public knowledge, we need to add the following extra assumptions to the logic:
• $\square A \Rightarrow A$.
• $\square A \Rightarrow \mathcal{M}\square A$ for any modality $\mathcal{M}$.

As a result, $\square$ satisfies the usual S4 axioms from modal logic. Using the $\square$ modality would allow one to reason about public statements internally: an agent may reason about the consequences of releasing public statements. However, it seems counterintuitive to think of public statements as anything but universally known, and hence we keep the fact of whether a statement is public completely external in this paper.

One could also generalise the public modality further. It might be useful to instead specify different scopes of knowledge and groups of entities, using a preorder on modalities: $\mathcal{M} \leq \mathcal{N}$ if $\mathcal{N}$ is aware of everything known by the perspective $\mathcal{M}$. This can be expressed with axioms:
• $\mathcal{M}A \Rightarrow \mathcal{N}A$.
• $\mathcal{M}A \Rightarrow \mathcal{N}\mathcal{M}A$.

An exploration of such systems is subject to future research.

12.3 Evaluating Risk of Trust

Trust may not always result in certainty. There is always a risk when building one's guarantees on statements from other entities. As such, it is worthwhile to determine some risk related to a proof. Though this may be done by generalizing to probabilistic modal logic, a lot could already be achieved by simply analyzing proofs within the current logic. The fact of the matter is that, assuming sufficient trust, certain guarantees can be made. The risk lies in the trust assumptions themselves.
As such, one can collect different proofs of the same guarantee and analyze the sets of trust assumptions necessary to make each proof. One could then associate a certain level of risk with each assumption, depending on who needs to be trusted to get the result. This could be a specific risk (e.g. 5 percent), or some unspecified constant risk $\varepsilon$. For instance, in the 2-3 threshold example, associating a 5 percent risk with trusting each of the three CAs gives a total failure risk of only 0.725 percent, which is a significantly reduced risk.

References

[1] M. Abadi, M. Burrows, B. Lampson, and G. Plotkin (1993). A calculus for access control in distributed systems. ACM Trans. Program. Lang. Syst. 15(4), pp. 706–734.
[2] M. Abadi (2003). Logic in access control. In Proceedings of the 18th Annual IEEE Symposium on Logic in Computer Science, LICS '03, USA, p. 228.
[3] M. Abadi (2006). Access control in a core calculus of dependency. SIGPLAN Not. 41(9), pp. 263–273.
[4] M. Acclavio, D. Catta, and F. Olimpieri (2023). Canonicity in modal lambda calculus. ArXiv abs/2304.05465.
[5] A. Aldini (2014). A calculus for trust and reputation systems. In Trust Management VIII, J. Zhou, N. Gal-Oz, J. Zhang, and E. Gudes (Eds.), Berlin, Heidelberg, pp. 173–188.
[6] B. Aziz and G. Hamilton (2007). Modelling and analysis of PKI-based systems using process calculi. International Journal of Foundations of Computer Science 18(03), pp. 593–618.
[7] M. Becker, C. Fournet, and A. Gordon (2007). Design and semantics of a decentralized authorization language. In 20th IEEE Computer Security Foundations Symposium (CSF'07), pp. 3–15.
[8] M. Y.
Becker (2012). Information flow in trust management systems. J. Comput. Secur. 20(6), pp. 677–708.
[9] G. Bellin, V. De Paiva, and E. Ritter (2001). Extended Curry-Howard correspondence for a basic constructive modal logic. In Proceedings of the 2nd Workshop on Methods for Modalities, Amsterdam, Netherlands.
[10] L. Birkedal, R. Clouston, B. Mannaa, R. Ejlers Møgelberg, A. M. Pitts, and B. Spitters (2020). Modal dependent type theory and dependent right adjoints. Mathematical Structures in Computer Science 30(2), pp. 118–138.
[11] A. Blass and Y. Gurevich (2008). Two forms of one useful logic: existential fixed point logic and liberal Datalog. Bull. EATCS 95, pp. 164–182.
[12] M. Božić and K. Došen (1984). Models for normal intuitionistic modal logics. Studia Logica 43(3), pp. 217–245.
[13] M. Burrows, M. Abadi, and R. Needham (1990). A logic of authentication. ACM Trans. Comput. Syst. 8(1), pp. 18–36.
[14] CEF eDelivery (2018). CEF eDelivery building block v1.2. Trust models guidance.
[15] R. Clouston (2018). Fitch-style modal lambda calculi. In Foundations of Software Science and Computation Structures, C. Baier and U. Dal Lago (Eds.), Cham, pp. 258–275.
[16] M. Dastani, A. Herzig, J. Hulstijn, and L. van der Torre (2005). Inferring trust. In Computational Logic in Multi-Agent Systems, J. Leite and P. Torroni (Eds.), Berlin, Heidelberg, pp. 144–160.
[17] K. Došen (1985). Models for stronger normal intuitionistic modal logics. Studia Logica 44, pp. 39–70.
[18] B. Dundua and L. Uridia (2010). Trust and belief, interrelation.
In Proceedings of the Third Workshop on Agreement Technologies, Bahía Blanca, Argentina, November 1, 2010.
[19] H. El Bakkali and B. I. Kaitouni (2001). A predicate calculus logic for the PKI trust model analysis. In Proceedings IEEE International Symposium on Network Computing and Applications (NCA 2001), pp. 368–371.
[20] U. Frendrup, H. Hüttel, and J. Nyholm Jensen (2002). Modal logics for cryptographic processes. Electronic Notes in Theoretical Computer Science 68(2), pp. 124–141. EXPRESS'02, 9th International Workshop on Expressiveness in Concurrency.
[21] D. Garg and M. Abadi (2008). A modal deconstruction of access control logics. In Foundations of Software Science and Computational Structures, R. Amadio (Ed.), Berlin, Heidelberg, pp. 216–230.
[22] D. Garg, L. Bauer, K. D. Bowers, F. Pfenning, and M. K. Reiter (2006). A linear logic of authorization and knowledge. In Proceedings of the 11th European Conference on Research in Computer Security, ESORICS'06, Berlin, Heidelberg, pp. 297–312.
[23] D. Garg, V. Genovese, and S. Negri (2012). Countermodels from sequent calculi in multi-modal logics. In 2012 27th Annual IEEE Symposium on Logic in Computer Science, pp. 315–324.
[24] D. Garg and F. Pfenning (2006). Non-interference in constructive authorization logic. In 19th IEEE Computer Security Foundations Workshop (CSFW'06).
[25] G. Gentzen (1935). Untersuchungen über das logische Schließen I. Mathematische Zeitschrift 39, pp. 176–210.
[26] D. Gratzer, G. A. Kavvos, A. Nuyts, and L.
Birkedal (2020). Multimodal dependent type theory. In Proceedings of the 35th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS '20, New York, NY, USA, pp. 492–506.
[27] Y. Gurevich and I. Neeman (2008). DKAL: distributed-knowledge authorization language. In 2008 21st IEEE Computer Security Foundations Symposium, pp. 149–162.
[28] Y. Gurevich and I. Neeman (2009). DKAL 2: a simplified and improved authorization language. Technical Report MSR-TR-2009-11, Microsoft Research.
[29] Y. Gurevich and A. Roy (2009). Operational semantics for DKAL: application and analysis. In Trust, Privacy and Security in Digital Business, S. Fischer-Hübner, C. Lambrinoudakis, and G. Pernul (Eds.), Berlin, Heidelberg, pp. 149–158.
[30] A. Herzig (2003). Modal probability, belief, and actions. Fundamenta Informaticae 57(2–4), pp. 323–344.
[31] A. K. Hirsch and M. R. Clarkson (2013). Belief semantics of authorization logic. In 2013 ACM SIGSAC Conference on Computer and Communications Security, CCS'13, Berlin, Germany, A. Sadeghi, V. D. Gligor, and M. Yung (Eds.), pp. 561–572.
[32] A. K. Hirsch, P. H. A. de Amorim, E. Cecchetti, R. Tate, and O. Arden (2020). First-order logic for flow-limited authorization. In 33rd IEEE Computer Security Foundations Symposium, CSF 2020, Boston, MA, USA, pp. 123–138.
[33] J. Hu, Y. Zhang, R. Li, and Z. Lu (2010). A logic for authorization provenance. In Proceedings of the 5th ACM Symposium on Information, Computer and Communications Security, ASIACCS '10, New York, NY, USA, pp. 238–249.
[34] J. Huang and D. Nicol (2009). A calculus of trust and its application to PKI and identity management. In Proceedings of the 8th Symposium on Identity and Trust on the Internet, IDtrust '09, New York, NY, USA, pp. 23–37.
[35] Z. Iranmanesh, M. Amini, and R. Jalili (2008). A logic for multi-domain authorization considering administrators. In 2008 IEEE Workshop on Policies for Distributed Systems and Networks, pp. 189–196.
[36] A. Kamenik, P. Laud, A. Pankova, T. Siil, and N. Snetkov (2022). SPOF2.3 - eID infrastruktuuri usaldusmudel (Trust Model for eID Infrastructure). Technical Report D-16-158, Cybernetica AS. https://w.ria.e/sites/default/files/documents/2022-11/SPOF-2.3-eid-infrastruktuuri-usaldusmudel-28.10.2022.pdf
[37] G. A. Kavvos (2017). Dual-context calculi for modal logic. In 2017 32nd Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), pp. 1–12.
[38] L. Kofoed and S. Mukopadhyay (2022). Principles of SSI V3. https://sovrin.org/principles-of-ssi/
[39] R. Kohlas, J. Jonczy, and R. Haenni (2008). A trust evaluation method based on logic and probability theory. In Trust Management I, Y. Karabulut, J. Mitchell, P. Herrmann, and C. D. Jensen (Eds.), Boston, MA, pp. 17–32.
[40] C. Leturc and G. Bonnet (2018). A normal modal logic for trust in the sincerity. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS '18, Richland, SC, pp. 175–183.
[41] C. Liau (2003). Belief, information acquisition, and trust in multi-agent systems: a modal logic formulation. Artificial Intelligence 149(1), pp. 31–60.
[42] J. López, P.
Samarati, and J. L. Ferrer (Eds.) (2007). Public Key Infrastructure, 4th European PKI Workshop: Theory and Practice, EuroPKI 2007, Palma de Mallorca, Spain, June 28–30, 2007, Proceedings. Lecture Notes in Computer Science, Vol. 4582, Springer.
[43] A. Nanevski, F. Pfenning, and B. Pientka (2008). Contextual modal type theory. ACM Trans. Comput. Logic 9(3).
[44] F. Pfenning (2000). Structural cut elimination: I. Intuitionistic and classical logic. Information and Computation 157(1), pp. 84–141.
[45] P. V. Rangan (1988). An axiomatic basis of trust in distributed systems. In Proceedings, 1988 IEEE Symposium on Security and Privacy, pp. 204–211.
[46] D. Reed, R. Joosten, and O. v. Deventer (2021). The basic building blocks of SSI. In Self-Sovereign Identity: Decentralized Digital Identity and Verifiable Credentials, A. Preukschat and D. Reed (Eds.), pp. 21–38.
[47] H. Rifà-Pous and J. Herrera-Joancomartí (2007). An interdomain PKI model based on trust lists. In EuroPKI 2007, López et al. [42], pp. 49–64.
[48] A. K. Simpson (1994). The Proof Theory and Semantics of Intuitionistic Modal Logic. Ph.D. thesis, University of Edinburgh, UK.
[49] M. P. Singh (2011). Trust as dependence: a logical approach. In Adaptive Agents and Multi-Agent Systems.
[50] E. G. Sirer, W. de Bruijn, P. Reynolds, A. Shieh, K. Walsh, D. Williams, and F. B. Schneider (2011). Logical attestation: an authorization architecture for trustworthy computing.
In Proceedings of the Twenty-Third ACM Symposium on Operating Systems Principles, SOSP '11, New York, NY, USA, pp. 249–264.
[51] N. Voorneveld (2026). Forward proof search for intuitionistic multimodal K logics. In Automated Reasoning with Analytic Tableaux and Related Methods, G. L. Pozzato and T. Uustalu (Eds.), Cham, pp. 335–353.
[52] N. Voorneveld (2026). Threshold trust logic. In Secure IT Systems, 30th Nordic Conference, NordSec 2025, Tartu, Estonia, November 12–13, 2025, Proceedings, R. Matulevičius, L. Kamm, and M. Iqbal (Eds.), Lecture Notes in Computer Science, Vol. 16325.
[53] F. Wolter and M. Zakharyaschev (1999). Intuitionistic modal logic. In Logic and Foundations of Mathematics: Selected Contributed Papers of the Tenth International Congress of Logic, Methodology and Philosophy of Science, Florence, August 1995, A. Cantini, E. Casari, and P. Minari (Eds.), pp. 227–238.

Appendix A Operational Semantics for Modal Lambda Calculus

In this appendix, we expand on the semantics of our lambda calculus. For $l \in \mathbb{M}^*$, we inductively define $\Gamma, \{l\}$ as:

• $\Gamma, \{\} = \Gamma$,
• $\Gamma, \{\mathcal{M}, l\} = \Gamma, \{\mathcal{M}\}, \{l\}$.

First of all, we have weakening in two directions:

• If $\Gamma, \Delta \vdash P : A$ then $\Gamma, x{:}B, \Delta \vdash P : A$.
• If $\Gamma \vdash P : A$ then $\{\mathcal{M}\}, \Gamma \vdash P : A$.

We write $l \Rrightarrow r_1 \mid \ldots \mid r_n$ if $l = \mathcal{M}_1 \cdot \ldots \cdot \mathcal{M}_n$ and $\mathcal{M}_i \Rrightarrow r_i$ for each $i$. Given $[\Gamma] \Rrightarrow l_1 \mid \ldots \mid l_n$, we define the substitution $\Gamma\langle l_1 \mid \ldots \mid l_n\rangle$ as: $\varepsilon\langle\rangle = \varepsilon$, $(\Gamma, x{:}A)\langle l_1 \mid \ldots \mid l_n\rangle = \Gamma\langle l_1 \mid \ldots \mid l_n\rangle, x{:}A$, and $(\Gamma, \{\mathcal{M}\})\langle l_1 \mid \ldots \mid l_n\rangle = \Gamma\langle l_1 \mid \ldots \mid l_{n-1}\rangle, \{l_n\}$. Then $[\Gamma\langle l_1 \mid \ldots \mid l_n\rangle] = l_1, \ldots, l_n$. This modal substitution can be extended to terms as well, allowing for context shifting.

Lemma 12. If $[\Gamma] \Rrightarrow l_1 \mid \ldots \mid l_n$ and $\Gamma \vdash P : A$, then $\Gamma\langle l_1 \mid \ldots \mid l_n\rangle \vdash P\langle l_1 \mid \ldots \mid l_n\rangle : A$ for some term $P\langle l_1 \mid \ldots \mid l_n\rangle$.

Proof.
We define $P\langle l_1 \mid \ldots \mid l_n\rangle$ by induction:

• $\mathrm{var}(x)\langle l_1 \mid \ldots \mid l_n\rangle = \mathrm{var}(x)$.
• $*\langle l_1 \mid \ldots \mid l_n\rangle = *$.
• $\mathrm{abs}_A(P)\langle l_1 \mid \ldots \mid l_n\rangle = \mathrm{abs}_A(P\langle l_1 \mid \ldots \mid l_n\rangle)$.
• $\mathrm{lock}_{\mathcal{M}}(P)\langle l_1 \mid \ldots \mid l_n\rangle = \mathrm{lock}_{\mathcal{M}}(P\langle l_1 \mid \ldots \mid l_n \mid \mathcal{M}\rangle)$.
• Suppose $\Gamma \vdash P : \mathcal{M}A$, $\mathcal{M} \Rrightarrow r$, $[\Delta] = r$ and $[\Gamma, \Delta] \Rrightarrow l_1 \mid \ldots \mid l_n$. We can divide the last statement into $[\Gamma] \Rrightarrow l_1 \mid \ldots \mid l_i$ and $[\Delta] = r \Rrightarrow l_{i+1} \mid \ldots \mid l_n$. By transitivity, $\mathcal{M} \Rrightarrow l_{i+1} \cdot \ldots \cdot l_n$, so we can define $\mathrm{key}_{\mathcal{M} \Rrightarrow r}(P)\langle l_1 \mid \ldots \mid l_n\rangle = \mathrm{key}_{\mathcal{M} \Rrightarrow l_{i+1} \cdot \ldots \cdot l_n}(P\langle l_1 \mid \ldots \mid l_i\rangle)$. ∎

We can also perform partial modal substitutions. For instance, if $[\Gamma] = \mathcal{M}_1, \ldots, \mathcal{M}_n, [\Delta]$ and $[\Delta] \Rrightarrow l_1 \mid \ldots \mid l_m$, then we write $(-)\langle l_1 \mid \ldots \mid l_m\rangle$, as applied to $\Gamma$ and terms in context $\Gamma$, as a shorthand for $(-)\langle \mathcal{M}_1 \mid \ldots \mid \mathcal{M}_n \mid l_1 \mid \ldots \mid l_m\rangle$.

Similarly, we can define variable substitution. Given $\Gamma, x{:}A, \Delta \vdash P : B$ and $\Gamma \vdash Q : A$, there is a term $P[Q/x]$ such that $\Gamma, \Delta \vdash P[Q/x] : B$. This implements a cut rule.

We get the following normalisation procedure, which tidies up the terms and hence the proofs:

• $\mathrm{key}_{\mathcal{M} \Rrightarrow l}(\mathrm{lock}_{\mathcal{M}}(P)) \rightsquigarrow P\langle l\rangle$.
• $(\lambda x{:}A.\,P) \cdot Q \rightsquigarrow P[Q/x]$.
• $\pi_i((P_1, P_2)) \rightsquigarrow P_i$.
• $\mathrm{case}_C(\mathrm{in}_i(R))\{\mathrm{in}_1(x_1) \mapsto P_1, \mathrm{in}_2(x_2) \mapsto P_2\} \rightsquigarrow P_i[R/x_i]$.
• $\mathrm{abs}_A(\mathrm{abs}_\bot(P)) \rightsquigarrow \mathrm{abs}_A(P)$.
• $\mathrm{abs}_{A \Rightarrow B}(P) \cdot Q \rightsquigarrow \mathrm{abs}_B(P)$.
• $\pi_i(\mathrm{abs}_{A_1 \wedge A_2}(P)) \rightsquigarrow \mathrm{abs}_{A_i}(P)$.
• $\mathrm{case}_C(\mathrm{abs}_{A \vee B}(R))\{\ldots\} \rightsquigarrow \mathrm{abs}_C(R)$.

Appendix B Proof of Decidability

Suppose we have a finite set of modalities $\mathbb{M}$ and a reflexive, transitive and decomposable set of axioms $\mathrm{Ax}$. We shall write $\mathcal{M} \Rrightarrow l$ to mean that $\mathcal{M} \Rrightarrow l \in \mathrm{Ax}$.
We take the partial map $\ominus : \mathbb{M}^2 \rightharpoonup \mathbb{M}$, where $\mathcal{M} \ominus \mathcal{N}$ is defined as a witness to the decomposability property. So $\mathcal{M} \Rrightarrow \mathcal{N} \cdot (\mathcal{M} \ominus \mathcal{N})$ if and only if $\mathcal{M} \ominus \mathcal{N}$ is defined, and if $\mathcal{M} \Rrightarrow \mathcal{N} \cdot l$ with $l$ having at least one element, then $\mathcal{M} \ominus \mathcal{N}$ is defined and $\mathcal{M} \ominus \mathcal{N} \Rrightarrow l$. We write $\mathcal{M} \ominus \mathcal{N}{\downarrow}$ if it is defined, and $\mathcal{M} \ominus \mathcal{N}{\uparrow}$ if it is not.

A flat context $\Gamma$ is given by a list of formulas. In other words, it is a context without any $\{\mathcal{M}\}$ in it. For any modality $\mathcal{N}$, we define a function $(\_) \ominus \mathcal{N}$ on flat contexts (written as $\mathcal{N}^{-1}(-)$ in the main body of the paper), where:

• $() \ominus \mathcal{N} = ()$.
• $(\Gamma, \mathcal{M}B) \ominus \mathcal{N} = \begin{cases} \Gamma \ominus \mathcal{N},\, B,\, (\mathcal{M} \ominus \mathcal{N})B & \text{if } \mathcal{M} \Rrightarrow \mathcal{N} \text{ and } \mathcal{M} \ominus \mathcal{N}{\downarrow} \\ \Gamma \ominus \mathcal{N},\, B & \text{if } \mathcal{M} \Rrightarrow \mathcal{N} \text{ and } \mathcal{M} \ominus \mathcal{N}{\uparrow} \\ \Gamma \ominus \mathcal{N},\, (\mathcal{M} \ominus \mathcal{N})B & \text{if } \neg(\mathcal{M} \Rrightarrow \mathcal{N}) \text{ and } \mathcal{M} \ominus \mathcal{N}{\downarrow} \\ \Gamma \ominus \mathcal{N} & \text{if } \neg(\mathcal{M} \Rrightarrow \mathcal{N}) \text{ and } \mathcal{M} \ominus \mathcal{N}{\uparrow} \end{cases}$
• $(\Gamma, B) \ominus \mathcal{N} = \Gamma \ominus \mathcal{N}$ for any other $B$.

We write $\Gamma \subseteq \Delta$ if every formula of $\Gamma$ is in $\Delta$. We write $\Gamma \sqsubseteq \Delta$ if every formula of $\Gamma$ is either in $\Delta$, or of the form $\mathcal{M}C$ with $\Delta$ containing $\mathcal{N}C$ for some $\mathcal{N}$ such that $\mathcal{N} \Rrightarrow \mathcal{M}$. This captures the fact that a formula $\mathcal{N}C$ is more general than $\mathcal{M}C$ if $\mathcal{N} \Rrightarrow \mathcal{M}$. Hence in $\Gamma \sqsubseteq \Delta$, $\Delta$ is a stronger set of assumptions than $\Gamma$. By transitivity of $\mathrm{Ax}$, $\sqsubseteq$ is transitive.

Lemma 13. If $\Gamma \sqsubseteq \Delta$, then $\Gamma \ominus \mathcal{M} \sqsubseteq \Delta \ominus \mathcal{M}$.

Proof. For $C \in (\Gamma \ominus \mathcal{M})$, either $C = C'$ with $\mathcal{N}C' \in \Gamma$ and $\mathcal{N} \Rrightarrow \mathcal{M}$, or $C = (\mathcal{N} \ominus \mathcal{M})C'$ with $\mathcal{N}C' \in \Gamma$. Regardless of the case, $\Delta$ has $\mathcal{R}C'$ such that $\mathcal{R} \Rrightarrow \mathcal{N}$ (note that $\mathcal{R}$ could be $\mathcal{N}$).

• If $C = C'$ and $\mathcal{N}C' \in \Gamma$ with $\mathcal{N} \Rrightarrow \mathcal{M}$: by transitivity, $\mathcal{R} \Rrightarrow \mathcal{M}$, hence $C = C' \in (\Delta \ominus \mathcal{M})$.
• If $C = (\mathcal{N} \ominus \mathcal{M})C'$ and $\mathcal{N}C' \in \Gamma$: then $\mathcal{R} \Rrightarrow \mathcal{M} \cdot (\mathcal{N} \ominus \mathcal{M})$, hence $\mathcal{R} \ominus \mathcal{M}$ is defined and $\mathcal{R} \ominus \mathcal{M} \Rrightarrow \mathcal{N} \ominus \mathcal{M}$ by definition. Hence $(\mathcal{R} \ominus \mathcal{M})C' \in (\Delta \ominus \mathcal{M})$, which covers for $C = (\mathcal{N} \ominus \mathcal{M})C'$. ∎

Lemma 14. If $\mathcal{M} \Rrightarrow \mathcal{N}$, then $(\Gamma \ominus \mathcal{M}) \sqsubseteq (\Gamma \ominus \mathcal{N})$.

Proof. For $C \in (\Gamma \ominus \mathcal{M})$, we have two cases:

• If $C = C'$ with $\mathcal{R}C' \in \Gamma$ and $\mathcal{R} \Rrightarrow \mathcal{M}$: by transitivity, $\mathcal{R} \Rrightarrow \mathcal{N}$, hence $C = C' \in (\Gamma \ominus \mathcal{N})$.
• If $C = (\mathcal{R} \ominus \mathcal{M})C'$ with $\mathcal{R}C' \in \Gamma$: by transitivity, $\mathcal{R} \Rrightarrow \mathcal{N} \cdot (\mathcal{R} \ominus \mathcal{M})$, hence $\mathcal{R} \ominus \mathcal{N}$ is defined and $(\mathcal{R} \ominus \mathcal{N}) \Rrightarrow (\mathcal{R} \ominus \mathcal{M})$. So $(\mathcal{R} \ominus \mathcal{N})C' \in (\Gamma \ominus \mathcal{N})$, which covers for $C = (\mathcal{R} \ominus \mathcal{M})C'$. ∎

Lemma 15.
If $\mathcal{M} \ominus \mathcal{N}$ is defined, then $(\Gamma \ominus \mathcal{M}) \sqsubseteq ((\Gamma \ominus \mathcal{N}) \ominus (\mathcal{M} \ominus \mathcal{N}))$.

Proof. For $C \in (\Gamma \ominus \mathcal{M})$, we have two cases:

• If $C = C'$ with $\mathcal{R}C' \in \Gamma$ and $\mathcal{R} \Rrightarrow \mathcal{M}$: by transitivity, $\mathcal{R} \Rrightarrow \mathcal{N} \cdot (\mathcal{M} \ominus \mathcal{N})$, hence $\mathcal{R} \ominus \mathcal{N}$ exists and $(\mathcal{R} \ominus \mathcal{N}) \Rrightarrow (\mathcal{M} \ominus \mathcal{N})$. So $(\mathcal{R} \ominus \mathcal{N})C' \in (\Gamma \ominus \mathcal{N})$, and $C' \in ((\Gamma \ominus \mathcal{N}) \ominus (\mathcal{M} \ominus \mathcal{N}))$.
• If $C = (\mathcal{R} \ominus \mathcal{M})C'$ with $\mathcal{R}C' \in \Gamma$: by transitivity, $\mathcal{R} \Rrightarrow \mathcal{N} \cdot (\mathcal{M} \ominus \mathcal{N}) \cdot (\mathcal{R} \ominus \mathcal{M})$. Hence $\mathcal{R} \ominus \mathcal{N}$ exists and $(\mathcal{R} \ominus \mathcal{N}) \Rrightarrow (\mathcal{M} \ominus \mathcal{N}) \cdot (\mathcal{R} \ominus \mathcal{M})$. Hence $(\mathcal{R} \ominus \mathcal{N}) \ominus (\mathcal{M} \ominus \mathcal{N})$ exists, and $(\mathcal{R} \ominus \mathcal{N}) \ominus (\mathcal{M} \ominus \mathcal{N}) \Rrightarrow \mathcal{R} \ominus \mathcal{M}$. So $(\mathcal{R} \ominus \mathcal{N})C' \in (\Gamma \ominus \mathcal{N})$ and $((\mathcal{R} \ominus \mathcal{N}) \ominus (\mathcal{M} \ominus \mathcal{N}))C' \in ((\Gamma \ominus \mathcal{N}) \ominus (\mathcal{M} \ominus \mathcal{N}))$, which covers for $C = (\mathcal{R} \ominus \mathcal{M})C'$. ∎

B.1 The sequent calculus

The rules of the sequent calculus are:

• $(Var)$: $\Gamma, t, \Delta \vdash t$ for any token $t$.
• $(ImpR)$: from $\Gamma, A \vdash B$ conclude $\Gamma \vdash A \Rightarrow B$.
• $(ImpL)$: from $\Gamma, A \Rightarrow B \vdash A$ and $\Gamma, A \Rightarrow B, B \vdash D$ conclude $\Gamma, A \Rightarrow B \vdash D$.
• $(ModR)$: from $\Gamma \ominus \mathcal{M} \vdash A$ conclude $\Gamma \vdash \mathcal{M}A$.
• $(TopR)$: $\Gamma \vdash \top$.
• $(BotL)$: $\Gamma, \bot, \Delta \vdash A$.
• $(AndL1)$: from $\Gamma, A_1 \wedge A_2, A_1 \vdash B$ conclude $\Gamma, A_1 \wedge A_2 \vdash B$.
• $(AndL2)$: from $\Gamma, A_1 \wedge A_2, A_2 \vdash B$ conclude $\Gamma, A_1 \wedge A_2 \vdash B$.
• $(AndR)$: from $\Gamma \vdash B_1$ and $\Gamma \vdash B_2$ conclude $\Gamma \vdash B_1 \wedge B_2$.
• $(OrL)$: from $\Gamma, A_1 \vee A_2, A_1 \vdash B$ and $\Gamma, A_1 \vee A_2, A_2 \vdash B$ conclude $\Gamma, A_1 \vee A_2 \vdash B$.
• $(OrR1)$: from $\Gamma \vdash B_1$ conclude $\Gamma \vdash B_1 \vee B_2$.
• $(OrR2)$: from $\Gamma \vdash B_2$ conclude $\Gamma \vdash B_1 \vee B_2$.

Figure 5: Decidable Sequent Calculus

Figure 5 gives the sequent calculus, showing the proof rules that determine which sequents of the form $\Gamma \vdash A$, with $\Gamma$ a flat context, are provable.

Proposition 1 (Structural weakening). Suppose $\mathcal{D}$ gives a proof of $\Gamma \vdash C$ and $\Gamma \sqsubseteq \Delta$. Then there is a proof $\mathcal{D}'$ of $\Delta \vdash C$, where $\mathcal{D}'$ has the same shape as $\mathcal{D}$.

Proof. By induction on $\mathcal{D}$. In the $(ModR)$ rule, we use Lemma 13 in order to make the inductive call. Note in particular that the $(Var)$ rule only targets tokens, and if $\Gamma$ has token $t$, then $\Delta$ has token $t$ as well. ∎

We shall freely apply the above lemma to modify contexts accordingly, including swapping and copying formulas.

Proposition 2 (Identity theorem). For any formula $A$, the sequent $\Gamma, A \vdash A$ is provable.

Proof.
We proceed by induction on $A$:

• If $A = t$, $A = \bot$ or $A = \top$, the sequent is directly provable by $(Var)$, $(BotL)$ and $(TopR)$ respectively.
• If $A = A_1 \Rightarrow A_2$: by the induction hypothesis, $\Gamma, A_1 \Rightarrow A_2, A_1 \vdash A_1$ and $\Gamma, A_1 \Rightarrow A_2, A_1, A_2 \vdash A_2$ are provable. Applying $(ImpL)$ gives $\Gamma, A_1 \Rightarrow A_2, A_1 \vdash A_2$, and $(ImpR)$ gives $\Gamma, A_1 \Rightarrow A_2 \vdash A_1 \Rightarrow A_2$.
• If $A = \mathcal{M}B$: by the induction hypothesis, $\Gamma \ominus \mathcal{M}, B, (\mathcal{M} \ominus \mathcal{M})B \vdash B$ is provable, and $(ModR)$ gives $\Gamma, \mathcal{M}B \vdash \mathcal{M}B$. Leave out $(\mathcal{M} \ominus \mathcal{M})B$ if $\mathcal{M} \ominus \mathcal{M}$ is undefined. Note that $\mathcal{M} \Rrightarrow \mathcal{M}$ holds by reflexivity, hence $(\Gamma, \mathcal{M}B) \ominus \mathcal{M}$ contains $B$.
• The $\wedge$ and $\vee$ cases are standard. ∎

Proposition 3 (Cut elimination). Given a proof $\mathcal{D}$ of $\Gamma \vdash A$ and a proof $\mathcal{E}$ of $\Gamma, A \vdash B$, we can construct a proof $\mathcal{F}$ of $\Gamma \vdash B$.

Proof. We use Pfenning's structural cut elimination proof [44] as a basis. We do induction on: the size of $A$, the size of $\mathcal{E}$, and the size of $\mathcal{D}$, in that order. We consider the sizes of $\mathcal{M}A$ and $\mathcal{N}A$ to be the same for any two modalities $\mathcal{M}$ and $\mathcal{N}$. We focus on the new, non-standard case our calculus includes: both $\mathcal{D}$ and $\mathcal{E}$ end with $(ModR)$.

Case. $\mathcal{D}$ ends with $(ModR)$ applied to a proof $\mathcal{D}_1$ of $\Gamma \ominus \mathcal{M} \vdash A$, concluding $\Gamma \vdash \mathcal{M}A$; and $\mathcal{E}$ ends with $(ModR)$ applied to a proof $\mathcal{E}_1$ of $\Gamma \ominus \mathcal{N}, A, (\mathcal{M} \ominus \mathcal{N})A \vdash B$, concluding $\Gamma, \mathcal{M}A \vdash \mathcal{N}B$.

Note that the $A$ and $(\mathcal{M} \ominus \mathcal{N})A$ in the context of $\mathcal{E}_1$ exist depending on whether $\mathcal{M} \Rrightarrow \mathcal{N}$ holds and whether $\mathcal{M} \ominus \mathcal{N}$ exists. We shall inductively cut these two formulas; if a formula to be cut does not exist, the respective cut can be left out.

To cut $(\mathcal{M} \ominus \mathcal{N})A$, we use Lemma 15 to note that $(\Gamma \ominus \mathcal{M}) \sqsubseteq ((\Gamma \ominus \mathcal{N}) \ominus (\mathcal{M} \ominus \mathcal{N}))$, and by Lemma 13, $((\Gamma \ominus \mathcal{N}) \ominus (\mathcal{M} \ominus \mathcal{N})) \sqsubseteq ((\Gamma \ominus \mathcal{N}, A) \ominus (\mathcal{M} \ominus \mathcal{N}))$ (regardless of whether this $A$ is there), hence by structural weakening $\mathcal{D}_1$ gives a proof of $(\Gamma \ominus \mathcal{N}, A) \ominus (\mathcal{M} \ominus \mathcal{N}) \vdash A$.

To cut $A$: since $A$ is there by the prerequisite that $\mathcal{M} \Rrightarrow \mathcal{N}$, we can use Lemma 14 to see that $(\Gamma \ominus \mathcal{M}) \sqsubseteq (\Gamma \ominus \mathcal{N})$. Hence by structural weakening, $\mathcal{D}_1$ gives a proof of $\Gamma \ominus \mathcal{N} \vdash A$.
We use these versions of $\mathcal{D}_1$ to perform the cuts in the following way. Write $\mathcal{D}_1'$ for the proof of $\Gamma \ominus \mathcal{N} \vdash A$ and $\mathcal{D}_1''$ for the proof of $(\Gamma \ominus \mathcal{N}, A) \ominus (\mathcal{M} \ominus \mathcal{N}) \vdash A$. Applying $(ModR)$ to $\mathcal{D}_1''$ gives $\Gamma \ominus \mathcal{N}, A \vdash (\mathcal{M} \ominus \mathcal{N})A$. Cutting this against $\mathcal{E}_1 : \Gamma \ominus \mathcal{N}, A, (\mathcal{M} \ominus \mathcal{N})A \vdash B$ (by the induction hypothesis on $\mathcal{E}$) yields $\Gamma \ominus \mathcal{N}, A \vdash B$; cutting $\mathcal{D}_1'$ against this (by the induction hypothesis on $A$) yields $\Gamma \ominus \mathcal{N} \vdash B$; and $(ModR)$ concludes $\Gamma \vdash \mathcal{N}B$. ∎

It should be noted that in the above proof, the fact that $\mathrm{Ax}$ is decomposable is used in a fundamental way. $\Gamma \ominus \mathcal{N}$ can, for each $\mathcal{M}A$, only spin up $A$ and $(\mathcal{M} \ominus \mathcal{N})A$ at most. We can freely cut the $A$, as a reduced formula ($A$ compared to $\mathcal{M}A$) can be used as a basis for the induction hypothesis, allowing us to liberally change the needed proofs, since the size of the formula is the dominant (first) inductive argument in the cut elimination proof. Cutting $(\mathcal{M} \ominus \mathcal{N})A$ is more difficult: for determining termination of the cut elimination proof, we consider the size of $(\mathcal{M} \ominus \mathcal{N})A$ to be the same as the size of $\mathcal{M}A$. This is fine, as they both have the same number of proper subformulas, and hence the measure for checking termination on the formula to be cut does not increase. Cutting $(\mathcal{M} \ominus \mathcal{N})A$ can be done using a reduced proof $\mathcal{E}$ and a proof $\mathcal{D}$ of the same size. Decomposability also ensures that we need only consider one such modal formula of the same size as $\mathcal{M}A$. Cutting multiple would be problematic, as it makes the termination argument more difficult (maybe impossible), so we let $(\mathcal{M} \ominus \mathcal{N})A$ be sufficient for implying all other modal formulas $\mathcal{R}A$ which may be needed. So if we have multiple $\mathcal{R}$ for which $\mathcal{M} \Rrightarrow \mathcal{N} \cdot \mathcal{R}$, the single formula $(\mathcal{M} \ominus \mathcal{N})A$ covers all of them.

B.2 Proving Decidability

The decidability proof follows the usual recipe. Consider again the subformula relation $\leq$ on formulas, which is extended to include $A \leq \mathcal{M}A$, and $\mathcal{M}A \leq \mathcal{N}B$ whenever $A \leq B$. Since we only have a finite number of modalities $\mathbb{M}$, we get that for any formula $A$ there are only finitely many $B$ such that $B \leq A$. We observe that for any $A \in (\Gamma \ominus \mathcal{M})$ there is a $B \in \Gamma$ such that $A \leq B$.
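The finiteness claim behind this bound is easy to make concrete: the set of formulas below a given formula under the extended subformula relation can be enumerated. A sketch, reusing a nested-tuple encoding of formulas (an illustrative assumption, not the paper's notation):

```python
def below(T, modalities):
    """All formulas C with C <= T under the extended subformula relation:
    ordinary subformulas, A <= M A, and M A <= N B whenever A <= B.
    With finitely many modalities the result is always finite."""
    out = {T}
    tag = T[0]
    if tag in ("imp", "and", "or"):
        out |= below(T[1], modalities) | below(T[2], modalities)
    elif tag == "mod":
        inner = below(T[2], modalities)        # everything below the body B
        out |= inner                           # covers B <= M B
        # covers M D <= N B for every D <= B and modality M:
        out |= {("mod", M, D) for M in modalities for D in inner}
    return out

# Example: B_a (t => s) with a single modality.
A = ("mod", "B_a", ("imp", ("token", "t"), ("token", "s")))
print(len(below(A, ["B_a"])))  # a small finite set: the bound m of the proof
```

The size of this set plays the role of $m$ in the bound $m \cdot 2^m$ on the sequents appearing in reduced proofs.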
As a consequence, we can show by induction on proofs that if $\mathcal{D}$ proves $\Gamma \vdash A$, and $\Delta \vdash B$ appears somewhere in $\mathcal{D}$, then for any $C \in \Delta, B$ there is a $D \in \Gamma, A$ such that $C \leq D$.

Given the structural weakening shown in Proposition 1, we see that whenever a $\Delta \vdash A$ appears above a $\Gamma \vdash A$ in a proof, where $\Delta \subseteq \Gamma$, we have a redundancy: the proof of $\Delta \vdash A$ could replace the proof of $\Gamma \vdash A$, reducing the proof tree. We say that $\mathcal{D}$ is reduced if there are no $\Delta \vdash A$ appearing above $\Gamma \vdash A$ in $\mathcal{D}$ such that $\Delta \subseteq \Gamma$. By the above remark, if there is a proof of a sequent, there is a reduced proof of that sequent.

For each sequent $\Gamma \vdash A$, there is a bound $n$ such that any reduced proof of $\Gamma \vdash A$ has height at most $n$. This is because, by the subformula property, any $\Delta \vdash B$ in a proof of $\Gamma \vdash A$ can only contain formulas drawn from the set of subformulas of $\Gamma, A$, of which there are finitely many (note there are only finitely many modalities): say there are $m$ such formulas. So if $S$ is a set of sequents $\Delta \vdash B$ possibly appearing in proofs of $\Gamma \vdash A$, such that no two sequents are each other's weakenings, then $S$ has at most $m \cdot 2^m$ elements ($m$ possible consequents, and $2^m$ possible sets of non-repeating assumptions). So any branch in a reduced proof has length at most $m \cdot 2^m$, giving a bound on the height of reduced trees. Since each proof step makes at most two branches, the bound on the height gives a bound on the size of reduced proofs. Hence, up to weakening, there are only a finite number of reduced proof trees we need to check to see whether $\Gamma \vdash A$ has a proof. For a theoretical argument of decidability, we simply check all possible reduced proofs to find whether one works. More practical algorithms involve more targeted proof searches.

B.3 Extra: Proving axioms

We can show that the sequent calculus correctly adapts $\mathcal{L}_{\mathrm{Ax}}$, as it has the same rules as the standard sequent calculus for intuitionistic logic on the non-modal side.
Let us prove the axioms:

• For axiom K, note that $(\mathcal{M}A, \mathcal{M}(A \Rightarrow B)) \ominus \mathcal{M}$, by reflexivity of $\mathrm{Ax}$, includes $A$ and $A \Rightarrow B$, which together prove $B$. So $(\mathcal{M}A, \mathcal{M}(A \Rightarrow B)) \ominus \mathcal{M} \vdash B$ is provable, and hence by the $(ModR)$ rule, so is $\mathcal{M}A, \mathcal{M}(A \Rightarrow B) \vdash \mathcal{M}B$. Axiom K is then constructed by using the $(ImpR)$ rule twice.
• For necessity, if we have a proof of $\vdash A$, then since the empty context satisfies $(\cdot) \ominus \mathcal{M} = (\cdot)$, this is also a proof of $(\cdot) \ominus \mathcal{M} \vdash A$. Hence by the $(ModR)$ rule, $\vdash \mathcal{M}A$ is provable.
• Suppose $\mathcal{M} \Rrightarrow \mathcal{N} \in \mathrm{Ax}$. Then $(\mathcal{M}A) \ominus \mathcal{N}$ has $A$. So $(\mathcal{M}A) \ominus \mathcal{N} \vdash A$ is provable by the identity theorem, hence $\mathcal{M}A \vdash \mathcal{N}A$ is provable by $(ModR)$, and $\vdash \mathcal{M}A \Rightarrow \mathcal{N}A$ by $(ImpR)$.
• Suppose $\mathcal{M} \Rrightarrow \mathcal{N} \cdot \mathcal{R} \in \mathrm{Ax}$. Then $(\mathcal{M}A) \ominus \mathcal{N}$ has $(\mathcal{M} \ominus \mathcal{N})A$, and $\mathcal{M} \ominus \mathcal{N} \Rrightarrow \mathcal{R} \in \mathrm{Ax}$ by decomposability. So $((\mathcal{M}A) \ominus \mathcal{N}) \ominus \mathcal{R}$ has $A$, and we can prove $((\mathcal{M}A) \ominus \mathcal{N}) \ominus \mathcal{R} \vdash A$ and apply $(ModR)$ twice to get $\mathcal{M}A \vdash \mathcal{N}\mathcal{R}A$, proving $\vdash \mathcal{M}A \Rightarrow \mathcal{N}\mathcal{R}A$ with $(ImpR)$.
• We generalize the previous case, showing by induction on the length of $l$ that if $\mathcal{M} \Rrightarrow l \in \mathrm{Ax}$ and $\mathcal{M}A \in \Gamma$, then $\Gamma \vdash l(A)$. The base case, for $l$ of length 1 (or even 2), has been covered above. Suppose $l = \mathcal{N}, l'$. Then $\mathcal{M} \ominus \mathcal{N}$ exists and $\mathcal{M} \ominus \mathcal{N} \Rrightarrow l' \in \mathrm{Ax}$. So supposing $\mathcal{M}A \in \Gamma$, then $(\mathcal{M} \ominus \mathcal{N})A \in (\Gamma \ominus \mathcal{N})$, and by the induction hypothesis, $(\Gamma \ominus \mathcal{N}) \vdash l'(A)$. Using $(ModR)$, we conclude that $\Gamma \vdash \mathcal{N}l'(A)$, where $\mathcal{N}l' = l$. This finishes the induction. We conclude that if $\mathcal{M} \Rrightarrow l \in \mathrm{Ax}$, then $\mathcal{M}A \vdash l(A)$, and hence $\vdash \mathcal{M}A \Rightarrow l(A)$ by $(ImpR)$.

Appendix C Equivalence between Calculi

Given two flat contexts $\Gamma$ and $\Delta$, we say that $\Gamma \vdash \Delta$ is provable if $\Gamma \vdash B$ is provable for every $B \in \Delta$.

Lemma 16. If $\Gamma \vdash \Delta$ and $\Delta \vdash \Phi$ are provable, then $\Gamma \vdash \Phi$ is provable.

Proof. Starting with $\Gamma, \Delta \vdash C$ for some $C \in \Phi$, use a sequence of sequents $\Gamma, B_1, \ldots, B_{i-1} \vdash B_i$ to cut away $\Delta$, until reaching $\Gamma \vdash C$. ∎

Consider a modal context $\Gamma$, which may include modalities $\{\mathcal{M}\}$. Hence $\Gamma$ is either flat (has no modalities), or of the form $\Gamma_0, \{\mathcal{M}\}, \Gamma_1$ with $\Gamma_0$ flat and $\Gamma_1$ a modal context.

Definition 7.
Given a modal context $\Gamma$ and a flat context $\Delta$, we define when $\Gamma \vdash \Delta$ is constructable by induction on $\Gamma$:

• If $\Gamma$ is flat, then $\Gamma \vdash \Delta$ is constructable if it is provable.
• If $\Gamma = \Gamma_0, \{\mathcal{M}\}, \Gamma_1$ with $\Gamma_0$ flat, then $\Gamma \vdash \Delta$ is constructable if there is a flat context $\Phi$ such that:
  – $\Gamma_0 \vdash \Phi$ is provable.
  – $\Phi \ominus \mathcal{M}, \Gamma_1 \vdash \Delta$ is constructable.

If $\Gamma \vdash \Phi$ is constructable, and $\Phi, \Psi \vdash \Delta$ is constructable, then $\Gamma, \Psi \vdash \Delta$ is constructable.

Lemma 17 (Weakening of Constructability). If $\Phi \subseteq \Psi$ and $\Gamma, \Phi, \Delta \vdash \Omega$ is constructable, then $\Gamma, \Psi, \Delta \vdash \Omega$ is constructable.

Lemma 18. If $\Gamma \vdash \Delta_0$ and $\Gamma \vdash \Delta_1$ are constructable, then $\Gamma \vdash \Delta_0, \Delta_1$ is constructable.

Proof. The proof is by induction on $\Gamma$. If $\Gamma$ is flat, then $\Gamma \vdash \Delta_0$ and $\Gamma \vdash \Delta_1$ are provable, hence $\Gamma \vdash \Delta_0, \Delta_1$ is provable. If $\Gamma = \Gamma_0, \{\mathcal{M}\}, \Gamma_1$ with $\Gamma_0$ flat, then there are $\Phi_0$ and $\Phi_1$ such that $\Gamma_0 \vdash \Phi_0$ and $\Gamma_0 \vdash \Phi_1$ are provable, hence $\Gamma_0 \vdash \Phi_0, \Phi_1$ is provable, and both $\Phi_0 \ominus \mathcal{M}, \Gamma_1 \vdash \Delta_0$ and $\Phi_1 \ominus \mathcal{M}, \Gamma_1 \vdash \Delta_1$ are constructable. Note that $(\Phi_0, \Phi_1) \ominus \mathcal{M} = (\Phi_0 \ominus \mathcal{M}), (\Phi_1 \ominus \mathcal{M})$, hence by weakening $(\Phi_0, \Phi_1) \ominus \mathcal{M}, \Gamma_1 \vdash \Delta_0$ and $(\Phi_0, \Phi_1) \ominus \mathcal{M}, \Gamma_1 \vdash \Delta_1$ are constructable. By the induction hypothesis, $(\Phi_0, \Phi_1) \ominus \mathcal{M}, \Gamma_1 \vdash \Delta_0, \Delta_1$ is constructable. ∎

Theorem 3. If there is a term $\Gamma \vdash P : A$, then $\Gamma \vdash A$ is constructable.

Proof. By induction on the term:

• $t \cdot r$ with $t : A \Rightarrow B$ and $r : A$: by the induction hypothesis, $\Gamma \vdash A \Rightarrow B$ and $\Gamma \vdash A$ are constructable, hence by Lemma 18, $\Gamma \vdash A \Rightarrow B, A$ is constructable. We show that $A \Rightarrow B, A \vdash B$ is provable, to construct $\Gamma \vdash B$: the sequents $A \Rightarrow B, A \vdash A$ and $A \Rightarrow B, A, B \vdash B$ hold by the identity theorem, and $(ImpL)$ concludes $A \Rightarrow B, A \vdash B$.
• $\lambda x{:}A.\,t$ with $t : B$: by the induction hypothesis, $\Gamma, A \vdash B$ is constructable. Take the tail-end provable sequent of the construction and apply $(ImpR)$ to construct $\Gamma \vdash A \Rightarrow B$.
• $\mathrm{key}_{\mathcal{M} \Rrightarrow \alpha}(t)$ with $\Gamma \vdash t : \mathcal{M}A$: by the induction hypothesis, $\Gamma \vdash \mathcal{M}A$ is constructable. We do a case distinction on $\alpha$:
  – If $\alpha = \mathcal{N}$ then $\mathcal{M} \Rrightarrow \mathcal{N}$. The sequent $\mathcal{M}A, \{\mathcal{N}\} \vdash A$ is constructable, since $\mathcal{M}A \vdash \mathcal{M}A$ and $(\mathcal{M}A) \ominus \mathcal{N} \vdash A$ are provable by the identity theorem.
Composing $\Gamma \vdash \mathcal{M}A$ with $\mathcal{M}A, \{\mathcal{N}\} \vdash A$, we construct $\Gamma, \{\mathcal{N}\} \vdash A$.
  – If $\alpha = \mathcal{N}_1, \ldots, \mathcal{N}_n$ with $n > 1$, we know $\mathcal{M} \Rrightarrow \mathcal{N}_1 \cdot \ldots \cdot \mathcal{N}_n$, and hence can define $\mathcal{R}_1 = \mathcal{M}$ and $\mathcal{R}_{i+1} = \mathcal{R}_i \ominus \mathcal{N}_i$ up to $\mathcal{R}_n$, for which we have $\mathcal{R}_n \Rrightarrow \mathcal{N}_n$. Then $\mathcal{M}A, \{\mathcal{N}_1\} \vdash \mathcal{R}_2 A$, each $\mathcal{R}_i A, \{\mathcal{N}_i\} \vdash \mathcal{R}_{i+1}A$, and finally $\mathcal{R}_n A, \{\mathcal{N}_n\} \vdash A$ are all constructable, composing into $\Gamma, \{\mathcal{N}_1\}, \ldots, \{\mathcal{N}_n\} \vdash A$.
• $\mathrm{lock}_{\mathcal{M}}(t)$ with $\Gamma, \{\mathcal{M}\} \vdash t : A$: by the induction hypothesis, $\Gamma, \{\mathcal{M}\} \vdash A$ is constructable, ending with a proof of $\Phi \ominus \mathcal{M} \vdash A$ for some flat $\Phi$ with $\Gamma \vdash \Phi$ constructable. This gives a proof of $\Phi \vdash \mathcal{M}A$ using $(ModR)$. Composing $\Gamma \vdash \Phi$ and $\Phi \vdash \mathcal{M}A$, we construct $\Gamma \vdash \mathcal{M}A$.
• The other cases are simpler. ∎

Lemma 19. For any term $(\Gamma \ominus \mathcal{M}), \Delta \vdash t : A$, there is a term $\Gamma, \{\mathcal{M}\}, \Delta \vdash r : A$.

Proof. $(\Gamma \ominus \mathcal{M}), \Delta \vdash t : A$ can be weakened to $\Gamma, \{\mathcal{M}\}, (\Gamma \ominus \mathcal{M}), \Delta \vdash t : A$. We reason that for any formula $B$ from $(\Gamma \ominus \mathcal{M})$ there is a lambda term $\Gamma, \{\mathcal{M}\} \vdash r_B : B$, and hence we can use substitutions to get a term $\Gamma, \{\mathcal{M}\}, \Delta \vdash r : A$. Suppose $B \in (\Gamma \ominus \mathcal{M})$; then either:

• $x : \mathcal{N}B \in \Gamma$ for some $x$, with $\mathcal{N} \Rrightarrow \mathcal{M}$. Then $\Gamma, \{\mathcal{M}\} \vdash \mathrm{key}_{\mathcal{N} \Rrightarrow \mathcal{M}}(\mathrm{var}(x)) : B$.
• $B = (\mathcal{N} \ominus \mathcal{M})B'$ and $x : \mathcal{N}B' \in \Gamma$. Then $\Gamma, \{\mathcal{M}\} \vdash \mathrm{lock}_{\mathcal{N} \ominus \mathcal{M}}(\mathrm{key}_{\mathcal{N} \Rrightarrow \mathcal{M} \cdot (\mathcal{N} \ominus \mathcal{M})}(\mathrm{var}(x))) : (\mathcal{N} \ominus \mathcal{M})B'$. ∎

Theorem 4. If $\Gamma \vdash A$ is provable, there is a lambda term $t$ such that $\Gamma \vdash t : A$.

Proof. By induction on proofs. Most cases are standard; we consider just the modal case. $(ModR)$ proves $\Gamma \vdash \mathcal{M}A$ using a proof of $\Gamma \ominus \mathcal{M} \vdash A$. By the induction hypothesis, we have a term $\Gamma \ominus \mathcal{M} \vdash t : A$, and we can apply Lemma 19 to get $\Gamma, \{\mathcal{M}\} \vdash r : A$, and hence $\Gamma \vdash \mathrm{lock}_{\mathcal{M}}(r) : \mathcal{M}A$. ∎

Theorem 5. If $\Gamma \vdash A$ is constructable, there is a lambda term $t$ such that $\Gamma \vdash t : A$.

Proof. By induction on $\Gamma$. If $\Gamma$ is flat, then $\Gamma \vdash A$ is provable, and we use the previous theorem. Otherwise $\Gamma = \Gamma_0, \{\mathcal{M}\}, \Gamma_1$ with $\Gamma_0$ flat; by constructability there is a flat $\Delta$ such that $\Gamma_0 \vdash \Delta$ is provable and $\Delta \ominus \mathcal{M}, \Gamma_1 \vdash A$ is constructable. We use the previous theorem and the induction hypothesis to find terms $\Gamma_0 \vdash t_D : D$ for each $D \in \Delta$, and $\Delta \ominus \mathcal{M}, \Gamma_1 \vdash r : A$. Applying Lemma 19 to the latter, we get $\Delta, \{\mathcal{M}\}, \Gamma_1 \vdash s : A$.
Substituting each $t_D$ into $s$, we get the desired term $t'$ with $\Gamma \vdash t' : A$. ∎

We can conclude that $\Gamma \vdash A$ is constructable if and only if there is a lambda term $t$ such that $\Gamma \vdash t : A$. If in particular $\Gamma$ is flat, then $\Gamma \vdash A$ is provable in the sequent calculus if and only if there is a lambda term $t$ such that $\Gamma \vdash t : A$. Hence by decidability, there is an algorithm which determines for each sequent $\Gamma \vdash A$, with $\Gamma$ a flat context, whether a lambda term exists.

This can be extended to any modal context as follows. We define the map sending pairs $(\Gamma \vdash A)$ of modal context and formula to a formula $\langle \Gamma \vdash A \rangle$ as follows:

• $\langle {} \vdash A \rangle := A$,
• $\langle (\Gamma, x{:}B) \vdash A \rangle := \langle \Gamma \vdash B \Rightarrow A \rangle$,
• $\langle (\Gamma, \{\mathcal{M}\}) \vdash A \rangle := \langle \Gamma \vdash \mathcal{M}A \rangle$.

By induction, $\langle (x{:}B, \Gamma) \vdash A \rangle = B \Rightarrow \langle \Gamma \vdash A \rangle$ and $\langle (\{\mathcal{M}\}, \Gamma) \vdash A \rangle = \mathcal{M}\langle \Gamma \vdash A \rangle$.

Lemma 20. For each $\Gamma$, $\Delta$ and $A$, the sequent $\Gamma, \Delta \vdash A$ has a lambda term if and only if $\Gamma \vdash \langle \Delta \vdash A \rangle$ has a lambda term.

Proof. We prove both directions of the implication by induction on the length of $\Delta$. The base case is simple, since $\langle {} \vdash A \rangle = A$. First from left to right:

• If $\Delta = \Delta', x{:}B$ and $\Gamma, \Delta', x{:}B \vdash t : A$, then $\Gamma, \Delta' \vdash \lambda x{:}B.\,t : B \Rightarrow A$, which by the induction hypothesis gives us a term $\Gamma \vdash t' : \langle \Delta' \vdash B \Rightarrow A \rangle$, where $\langle \Delta' \vdash B \Rightarrow A \rangle = \langle \Delta', x{:}B \vdash A \rangle$.
• If $\Delta = \Delta', \{\mathcal{M}\}$ and $\Gamma, \Delta', \{\mathcal{M}\} \vdash t : A$, then $\Gamma, \Delta' \vdash \mathrm{lock}_{\mathcal{M}}(t) : \mathcal{M}A$, which by the induction hypothesis gives us a term $\Gamma \vdash t' : \langle \Delta' \vdash \mathcal{M}A \rangle$, where $\langle \Delta' \vdash \mathcal{M}A \rangle = \langle \Delta', \{\mathcal{M}\} \vdash A \rangle$.

The converse:

• If $\Delta = x{:}B, \Delta'$ and $\Gamma \vdash t : \langle x{:}B, \Delta' \vdash A \rangle$, then since $\langle x{:}B, \Delta' \vdash A \rangle = B \Rightarrow \langle \Delta' \vdash A \rangle$, we have $\Gamma, x{:}B \vdash t \cdot \mathrm{var}(x) : \langle \Delta' \vdash A \rangle$. This gives us by the induction hypothesis a term $\Gamma, x{:}B, \Delta' \vdash t' : A$, as desired.
• If $\Delta = \{\mathcal{M}\}, \Delta'$ and $\Gamma \vdash t : \langle \{\mathcal{M}\}, \Delta' \vdash A \rangle$, then since $\langle \{\mathcal{M}\}, \Delta' \vdash A \rangle = \mathcal{M}\langle \Delta' \vdash A \rangle$, we have $\Gamma, \{\mathcal{M}\} \vdash \mathrm{key}_{\mathcal{M} \Rrightarrow \mathcal{M}}(t) : \langle \Delta' \vdash A \rangle$. This gives us by the induction hypothesis a term $\Gamma, \{\mathcal{M}\}, \Delta' \vdash t' : A$, as desired.
We conclude that Γ ⊢ A has a lambda term if and only if ⊢ ⟨Γ ⊢ A⟩ has a lambda term, if and only if it has a proof in the decidable sequent calculus. Hence it is decidable whether Γ ⊢ A has a lambda term. Appendix D Additional Proofs Proof of Lemma 5. If N_S(a,b,d), then (a, α, b, β, d) ∈ S for some possibly empty agent sequences α, β. If N_S(b,c,d), then (b, γ, c, δ, d) ∈ S for some agent sequences γ, δ. By property 2 of a forwarding network, we can replace β in (a, α, b, β, d) with (γ, c, δ), creating (a, α, b, γ, c, δ, d) ∈ S. This shows N_S(a,c,d), as desired. Moreover, using property 1 of forwarding networks, we can show that (b, γ, c, δ, d) ∈ S as well, and hence N_S(b,c,d). The second property has a similar proof. ∎ Proof of Lemma 6. First note that if ℳ and 𝒩 satisfy axiom K and necessity, then ℳ𝒩 satisfies axiom K and necessity: firstly, ⊢ A implies ⊢ 𝒩A, which implies ⊢ ℳ𝒩A. Secondly, ⊢ 𝒩A ∧ 𝒩(A ⇒ B) ⇒ 𝒩B, so ℳ𝒩A ∧ ℳ𝒩(A ⇒ B) ⇒ ℳ𝒩B by necessity and axiom K for ℳ. We conclude that ℐ_α satisfies axiom K and necessity for any appropriate α. Proving axiom K: suppose 𝒥_{a←b}A and 𝒥_{a←b}(A ⇒ B); we want to show 𝒥_{a←b}B. Let γ be an agent sequence such that (a, γ, b) ∈ S; then ℐ_{a←γ←b}A and ℐ_{a←γ←b}(A ⇒ B), hence by axiom K on ℐ_{a←γ←b} we get ℐ_{a←γ←b}B. This holds for all relevant γ, hence 𝒥_{a←b}B. Proving necessity: if ⊢ A, then ⊢ ℐ_{a←γ←b}A for all γ such that (a, γ, b) ∈ S, hence ⊢ 𝒥_{a←b}A.
For 𝒥_{a←b}A ⇒ ℬ_a 𝒥_{a←b}A and 𝒥_{a←b}A ⇒ 𝒥_{a←b}ℬ_b A, simply note: 1) ℐ_{a←γ←b} starts with some ℐ_{a←a′} for which ℐ_{a←a′}B ⇒ ℬ_a ℐ_{a←a′}B for any B; hence ℐ_{a←γ←b}A ⇒ ℬ_a ℐ_{a←γ←b}A. 2) ℐ_{a←γ←b} ends with some ℐ_{b′←b} for which ℐ_{b′←b}A ⇒ ℐ_{b′←b}ℬ_b A. Using axiom K on the rest of ℐ_{a←γ←b} (which may be empty), we get ℐ_{a←γ←b}A ⇒ ℐ_{a←γ←b}ℬ_b A. ∎ Lemma 21. The set S_G of shortest paths on a finite graph G is a forwarding network. Proof. Note first that a shortest path between two vertices is given by a non-empty list of distinct vertices. The shortest path from a to a is simply given by (a). We prove the three additional properties: 1. Suppose given a shortest path (a, b₁, …, bₙ, c) from c to a. Then (a, b₁, …, bₙ) has to be a shortest path from bₙ to a; if not, there would be a shorter path from c to a. Hence (a, b₁, …, bₙ) ∈ S_G, and by a similar argument, (b₁, …, bₙ, c) ∈ S_G. 2. Suppose (a₁, …, aₙ, b, c₁, …, cₘ, d, e₁, …, eₖ) is a shortest path; we leave ambiguous which elements are the source and target, since n and/or k could be zero. This is a path of length n + m + k + 1. We know that (b, c₁, …, cₘ, d) is a shortest path from d to b. Taking some other shortest path (b, c′₁, …, c′ₚ, d) from d to b, we note that it has to have the same length, hence p = m. Hence (a₁, …, aₙ, b, c′₁, …, c′ₚ, d, e₁, …, eₖ) is also a path of length n + m + k + 1.
Note in particular that (a₁, …, aₙ, b, c′₁, …, c′ₚ, d, e₁, …, eₖ) cannot have repeats, since otherwise we could cut out the cycle and create a shorter path, contradicting that the shortest path is of length n + m + k + 1. We conclude that (a₁, …, aₙ, b, c′₁, …, c′ₚ, d, e₁, …, eₖ) is also a shortest path, and hence in S_G. ∎ Lemma 22. If G is acyclic, S′_G is a forwarding network. Proof. This works since paths are closed under subpaths (property 1), and the result of replacing a subpath by an alternative subpath (property 2) cannot contain repeats, as a repeated vertex would yield a cycle in G. ∎
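The closure properties used in the proofs of Lemmas 5, 21 and 22 can be checked mechanically on a small graph. A sketch, assuming an unweighted undirected graph given as an adjacency dict; the helper name `all_shortest_paths` is ours:

```python
from collections import deque

def all_shortest_paths(adj):
    """Enumerate S_G: all shortest paths (as vertex tuples) between all pairs."""
    paths = set()
    for src in adj:
        # BFS recording every predecessor lying on some shortest path from src.
        dist, preds, q = {src: 0}, {src: []}, deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v], preds[v] = dist[u] + 1, [u]
                    q.append(v)
                elif dist[v] == dist[u] + 1:
                    preds[v].append(u)
        def unwind(v):
            if v == src:
                return [(src,)]
            return [p + (v,) for u in preds[v] for p in unwind(u)]
        for tgt in dist:
            paths.update(unwind(tgt))
    return paths

# A diamond 1-2-4 / 1-3-4 with a tail 4-5.
adj = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
S = all_shortest_paths(adj)

# Property 1 (Lemma 21, item 1): prefixes and suffixes of shortest paths
# are again shortest paths.
assert all(p[:k] in S and p[k:] in S for p in S for k in range(1, len(p)))

# Property 2 (Lemma 21, item 2): splicing the alternative shortest segment
# (1, 3, 4) for (1, 2, 4) inside (1, 2, 4, 5) again gives a shortest path.
assert (1, 2, 4) in S and (1, 3, 4) in S
assert (1, 2, 4, 5) in S and (1, 3, 4, 5) in S
```

For an acyclic G the same splicing check applies to the set of all paths (Lemma 22), since there a spliced path can never revisit a vertex.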