# Sequent calculus


This article presents the language and sequent calculus of second-order linear logic and the basic properties of this sequent calculus. The core of the article uses the two-sided system with negation as a proper connective; the one-sided system, often used as the definition of linear logic, is presented at the end of the page.

## Formulas

Atomic formulas, written α,β,γ, are predicates of the form $p(t_1,\ldots,t_n)$, where the $t_i$ are terms from some first-order language. The predicate symbol p may be either a predicate constant or a second-order variable. By convention we will write first-order variables as x,y,z, second-order variables as X,Y,Z, and ξ for a variable of arbitrary order (see Notations).

Formulas, represented by capital letters A, B, C, are built using the following connectives:

| positive | negative | class |
|---|---|---|
| α atom | $A\orth$ negation | |
| $A \tens B$ tensor | $A \parr B$ par | multiplicatives |
| $\one$ one | $\bot$ bottom | multiplicative units |
| $A \plus B$ plus | $A \with B$ with | additives |
| $\zero$ zero | $\top$ top | additive units |
| $\oc A$ of course | $\wn A$ why not | exponentials |
| $\exists \xi.A$ there exists | $\forall \xi.A$ for all | quantifiers |

Each line (except the first one) corresponds to a particular class of connectives, and each class consists of a pair of connectives. Those in the left column are called positive and those in the right column are called negative. The tensor and with connectives are conjunctions, while par and plus are disjunctions. The exponential connectives are called modalities, traditionally read "of course A" for $\oc A$ and "why not A" for $\wn A$. Quantifiers may apply to first- or second-order variables.

There is no connective for implication in the syntax of standard linear logic. Instead, a linear implication is defined similarly to the decomposition $A\imp B=\neg A\vee B$ in classical logic, as $A\limp B:=A\orth\parr B$.

Free and bound variables and first-order substitution are defined in the standard way. Formulas are always considered up to renaming of bound variables. If A is a formula, X is a second-order variable and $B[x_1,\ldots,x_n]$ is a formula with variables $x_1,\ldots,x_n$, then the formula A[B / X] is A in which every atom $X(t_1,\ldots,t_n)$ is replaced by $B[t_1,\ldots,t_n]$.
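
For example, with $A = X(x) \tens \forall y.X(y)$ and $B[x_1] = p(x_1) \parr \one$ (names chosen here purely for illustration), second-order substitution replaces each atom $X(t)$ by $B[t]$:

$A[B/X] \;=\; (p(x) \parr \one) \tens \forall y.(p(y) \parr \one)$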

## Sequents and proofs

A sequent is an expression $\Gamma\vdash\Delta$ where Γ and Δ are finite multisets of formulas. For a multiset $\Gamma=A_1,\ldots,A_n$, the notation $\wn\Gamma$ represents the multiset $\wn A_1,\ldots,\wn A_n$. Proofs are labelled trees of sequents, built using the following inference rules:

• Identity group: $\LabelRule{\rulename{axiom}} \NulRule{ A \vdash A } \DisplayProof$$\AxRule{ \Gamma \vdash A, \Delta } \AxRule{ \Gamma', A \vdash \Delta' } \LabelRule{\rulename{cut}} \BinRule{ \Gamma, \Gamma' \vdash \Delta, \Delta' } \DisplayProof$
• Negation: $\AxRule{ \Gamma \vdash A, \Delta } \UnaRule{ \Gamma, A\orth \vdash \Delta } \LabelRule{n_L} \DisplayProof$$\AxRule{ \Gamma, A \vdash \Delta } \UnaRule{ \Gamma \vdash A\orth, \Delta } \LabelRule{n_R} \DisplayProof$
• Multiplicative group:
• tensor: $\AxRule{ \Gamma, A, B \vdash \Delta } \LabelRule{ \tens_L } \UnaRule{ \Gamma, A \tens B \vdash \Delta } \DisplayProof$$\AxRule{ \Gamma \vdash A, \Delta } \AxRule{ \Gamma' \vdash B, \Delta' } \LabelRule{ \tens_R } \BinRule{ \Gamma, \Gamma' \vdash A \tens B, \Delta, \Delta' } \DisplayProof$
• par: $\AxRule{ \Gamma, A \vdash \Delta } \AxRule{ \Gamma', B \vdash \Delta' } \LabelRule{ \parr_L } \BinRule{ \Gamma, \Gamma', A \parr B \vdash \Delta, \Delta' } \DisplayProof$$\AxRule{ \Gamma \vdash A, B, \Delta } \LabelRule{ \parr_R } \UnaRule{ \Gamma \vdash A \parr B, \Delta } \DisplayProof$
• one: $\AxRule{ \Gamma \vdash \Delta } \LabelRule{ \one_L } \UnaRule{ \Gamma, \one \vdash \Delta } \DisplayProof$$\LabelRule{ \one_R } \NulRule{ \vdash \one } \DisplayProof$
• bottom: $\LabelRule{ \bot_L } \NulRule{ \bot \vdash } \DisplayProof$$\AxRule{ \Gamma \vdash \Delta } \LabelRule{ \bot_R } \UnaRule{ \Gamma \vdash \bot, \Delta } \DisplayProof$
• plus: $\AxRule{ \Gamma, A \vdash \Delta } \AxRule{ \Gamma, B \vdash \Delta } \LabelRule{ \plus_L } \BinRule{ \Gamma, A \plus B \vdash \Delta } \DisplayProof$$\AxRule{ \Gamma \vdash A, \Delta } \LabelRule{ \plus_{R1} } \UnaRule{ \Gamma \vdash A \plus B, \Delta } \DisplayProof$$\AxRule{ \Gamma \vdash B, \Delta } \LabelRule{ \plus_{R2} } \UnaRule{ \Gamma \vdash A \plus B, \Delta } \DisplayProof$
• with: $\AxRule{ \Gamma, A \vdash \Delta } \LabelRule{ \with_{L1} } \UnaRule{ \Gamma, A \with B \vdash \Delta } \DisplayProof$$\AxRule{ \Gamma, B \vdash \Delta } \LabelRule{ \with_{L2} } \UnaRule{ \Gamma, A \with B \vdash \Delta } \DisplayProof$$\AxRule{ \Gamma \vdash A, \Delta } \AxRule{ \Gamma \vdash B, \Delta } \LabelRule{ \with_R } \BinRule{ \Gamma \vdash A \with B, \Delta } \DisplayProof$
• zero: $\LabelRule{ \zero_L } \NulRule{ \Gamma, \zero \vdash \Delta } \DisplayProof$
• top: $\LabelRule{ \top_R } \NulRule{ \Gamma \vdash \top, \Delta } \DisplayProof$
• Exponential group:
• of course: $\AxRule{ \Gamma, A \vdash \Delta } \LabelRule{ d_L } \UnaRule{ \Gamma, \oc A \vdash \Delta } \DisplayProof$$\AxRule{ \Gamma \vdash \Delta } \LabelRule{ w_L } \UnaRule{ \Gamma, \oc A \vdash \Delta } \DisplayProof$$\AxRule{ \Gamma, \oc A, \oc A \vdash \Delta } \LabelRule{ c_L } \UnaRule{ \Gamma, \oc A \vdash \Delta } \DisplayProof$$\AxRule{ \oc A_1, \ldots, \oc A_n \vdash B ,\wn B_1, \ldots, \wn B_m } \LabelRule{ \oc_R } \UnaRule{ \oc A_1, \ldots, \oc A_n \vdash \oc B ,\wn B_1, \ldots, \wn B_m } \DisplayProof$
• why not: $\AxRule{ \Gamma \vdash A, \Delta } \LabelRule{ d_R } \UnaRule{ \Gamma \vdash \wn A, \Delta } \DisplayProof$$\AxRule{ \Gamma \vdash \Delta } \LabelRule{ w_R } \UnaRule{ \Gamma \vdash \wn A, \Delta } \DisplayProof$$\AxRule{ \Gamma \vdash \wn A, \wn A, \Delta } \LabelRule{ c_R } \UnaRule{ \Gamma \vdash \wn A, \Delta } \DisplayProof$$\AxRule{ \oc A_1, \ldots, \oc A_n, A \vdash \wn B_1, \ldots, \wn B_m } \LabelRule{ \wn_L } \UnaRule{ \oc A_1, \ldots, \oc A_n, \wn A \vdash \wn B_1, \ldots, \wn B_m } \DisplayProof$
• Quantifier group (in the $\exists_L$ and $\forall_R$ rules, ξ must not occur free in Γ or Δ):
• there exists: $\AxRule{ \Gamma , A \vdash \Delta } \LabelRule{ \exists_L } \UnaRule{ \Gamma, \exists\xi.A \vdash \Delta } \DisplayProof$$\AxRule{ \Gamma \vdash \Delta, A[t/x] } \LabelRule{ \exists^1_R } \UnaRule{ \Gamma \vdash \Delta, \exists x.A } \DisplayProof$$\AxRule{ \Gamma \vdash \Delta, A[B/X] } \LabelRule{ \exists^2_R } \UnaRule{ \Gamma \vdash \Delta, \exists X.A } \DisplayProof$
• for all: $\AxRule{ \Gamma, A[t/x] \vdash \Delta } \LabelRule{ \forall^1_L } \UnaRule{ \Gamma, \forall x.A \vdash \Delta } \DisplayProof$$\AxRule{ \Gamma, A[B/X] \vdash \Delta } \LabelRule{ \forall^2_L } \UnaRule{ \Gamma, \forall X.A \vdash \Delta } \DisplayProof$$\AxRule{ \Gamma \vdash \Delta, A } \LabelRule{ \forall_R } \UnaRule{ \Gamma \vdash \Delta, \forall\xi.A } \DisplayProof$

The left rules for of course and the right rules for why not are called dereliction, weakening and contraction rules. The right rule for of course and the left rule for why not are called promotion rules. Note the fundamental fact that there are no contraction and weakening rules for arbitrary formulas: on the right-hand side they apply only to formulas starting with the $\wn$ modality, and on the left-hand side only to formulas starting with the $\oc$ modality. This is what distinguishes linear logic from classical logic: if weakening and contraction were allowed for arbitrary formulas, then $\tens$ and $\with$ would be identified, as well as $\plus$ and $\parr$, $\one$ and $\top$, $\zero$ and $\bot$. Identified means here that replacing a $\tens$ with a $\with$ (or vice versa) would preserve provability.
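
For instance, if contraction were allowed on arbitrary formulas on the left, then $A \with B$ would entail $A \tens B$ (the converse entailment would similarly use unrestricted weakening), as this derivation sketch shows:

$\AxRule{ A \vdash A } \LabelRule{ \with_{L1} } \UnaRule{ A \with B \vdash A } \AxRule{ B \vdash B } \LabelRule{ \with_{L2} } \UnaRule{ A \with B \vdash B } \LabelRule{ \tens_R } \BinRule{ A \with B, A \with B \vdash A \tens B } \LabelRule{ \rulename{contraction} } \UnaRule{ A \with B \vdash A \tens B } \DisplayProof$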

Sequents are considered as multisets, in other words as sequences up to permutation. An alternative presentation would be to define a sequent as a finite sequence of formulas and to add the exchange rules:

$\AxRule{ \Gamma_1, A, B, \Gamma_2 \vdash \Delta } \LabelRule{\rulename{exchange}_L} \UnaRule{ \Gamma_1, B, A, \Gamma_2 \vdash \Delta } \DisplayProof$$\AxRule{ \Gamma \vdash \Delta_1, A, B, \Delta_2 } \LabelRule{\rulename{exchange}_R} \UnaRule{ \Gamma \vdash \Delta_1, B, A, \Delta_2 } \DisplayProof$
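
With these rules, the defined implication $A \limp B = A\orth \parr B$ behaves as expected; for instance, the usual right introduction rule for implication is derivable:

$\AxRule{ \Gamma, A \vdash B, \Delta } \LabelRule{ n_R } \UnaRule{ \Gamma \vdash A\orth, B, \Delta } \LabelRule{ \parr_R } \UnaRule{ \Gamma \vdash A \limp B, \Delta } \DisplayProof$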

## Equivalences

Two formulas A and B are (linearly) equivalent, written $A\linequiv B$, if both implications $A\limp B$ and $B\limp A$ are provable. Equivalently, $A\linequiv B$ if both $A\vdash B$ and $B\vdash A$ are provable. Another formulation of $A\linequiv B$ is that, for all Γ and Δ, $\Gamma\vdash\Delta,A$ is provable if and only if $\Gamma\vdash\Delta,B$ is provable.

Two related notions are isomorphism (stronger than equivalence) and equiprovability (weaker than equivalence).

### De Morgan laws

Negation is involutive:

$A\linequiv A\biorth$
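
Both directions are obtained with the negation rules; for instance, for $A \vdash A\biorth$:

$\LabelRule{\rulename{axiom}} \NulRule{ A \vdash A } \LabelRule{ n_L } \UnaRule{ A, A\orth \vdash } \LabelRule{ n_R } \UnaRule{ A \vdash A\biorth } \DisplayProof$

The converse $A\biorth \vdash A$ is symmetric, applying $n_R$ to the axiom and then $n_L$.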

Duality between connectives:

$( A \tens B )\orth \linequiv A\orth \parr B\orth$$( A \parr B )\orth \linequiv A\orth \tens B\orth$
$\one\orth \linequiv \bot$$\bot\orth \linequiv \one$
$( A \plus B )\orth \linequiv A\orth \with B\orth$$( A \with B )\orth \linequiv A\orth \plus B\orth$
$\zero\orth \linequiv \top$$\top\orth \linequiv \zero$
$( \oc A )\orth \linequiv \wn ( A\orth )$$( \wn A )\orth \linequiv \oc ( A\orth )$
$( \exists \xi.A )\orth \linequiv \forall \xi.( A\orth )$$( \forall \xi.A )\orth \linequiv \exists \xi.( A\orth )$

### Fundamental equivalences

• Associativity, commutativity, neutrality:
$A \tens (B \tens C) \linequiv (A \tens B) \tens C$$A \tens B \linequiv B \tens A$$A \tens \one \linequiv A$
$A \parr (B \parr C) \linequiv (A \parr B) \parr C$$A \parr B \linequiv B \parr A$$A \parr \bot \linequiv A$
$A \plus (B \plus C) \linequiv (A \plus B) \plus C$$A \plus B \linequiv B \plus A$$A \plus \zero \linequiv A$
$A \with (B \with C) \linequiv (A \with B) \with C$$A \with B \linequiv B \with A$$A \with \top \linequiv A$
$A \plus A \linequiv A$$A \with A \linequiv A$
• Distributivity of multiplicatives over additives:
$A \tens (B \plus C) \linequiv (A \tens B) \plus (A \tens C)$$A \tens \zero \linequiv \zero$
$A \parr (B \with C) \linequiv (A \parr B) \with (A \parr C)$$A \parr \top \linequiv \top$
• Defining property of exponentials:
$\oc(A \with B) \linequiv \oc A \tens \oc B$$\oc\top \linequiv \one$
$\wn(A \plus B) \linequiv \wn A \parr \wn B$$\wn\zero \linequiv \bot$
• Monoidal structure of exponentials:
$\oc A \tens \oc A \linequiv \oc A$$\oc \one \linequiv \one$
$\wn A \parr \wn A \linequiv \wn A$$\wn \bot \linequiv \bot$
• Digging:
$\oc\oc A \linequiv \oc A$$\wn\wn A \linequiv \wn A$
• Other properties of exponentials:
$\oc\wn\oc\wn A \linequiv \oc\wn A$$\oc\wn \one \linequiv \one$
$\wn\oc\wn\oc A \linequiv \wn\oc A$$\wn\oc \bot \linequiv \bot$

These properties of exponentials lead to the lattice of exponential modalities.

• Commutation of quantifiers (ζ does not occur in A):
$\exists \xi. \exists \psi. A \linequiv \exists \psi. \exists \xi. A$$\exists \xi.(A \plus B) \linequiv \exists \xi.A \plus \exists \xi.B$$\exists \zeta.(A\tens B) \linequiv A\tens\exists \zeta.B$$\exists \zeta.A \linequiv A$
$\forall \xi. \forall \psi. A \linequiv \forall \psi. \forall \xi. A$$\forall \xi.(A \with B) \linequiv \forall \xi.A \with \forall \xi.B$$\forall \zeta.(A\parr B) \linequiv A\parr\forall \zeta.B$$\forall \zeta.A \linequiv A$
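
As an example, here is a derivation of the left-to-right direction of the defining property $\oc(A \with B) \linequiv \oc A \tens \oc B$, combining dereliction, promotion and contraction:

$\AxRule{ A \vdash A } \LabelRule{ \with_{L1} } \UnaRule{ A \with B \vdash A } \LabelRule{ d_L } \UnaRule{ \oc(A \with B) \vdash A } \LabelRule{ \oc_R } \UnaRule{ \oc(A \with B) \vdash \oc A } \AxRule{ B \vdash B } \LabelRule{ \with_{L2} } \UnaRule{ A \with B \vdash B } \LabelRule{ d_L } \UnaRule{ \oc(A \with B) \vdash B } \LabelRule{ \oc_R } \UnaRule{ \oc(A \with B) \vdash \oc B } \LabelRule{ \tens_R } \BinRule{ \oc(A \with B), \oc(A \with B) \vdash \oc A \tens \oc B } \LabelRule{ c_L } \UnaRule{ \oc(A \with B) \vdash \oc A \tens \oc B } \DisplayProof$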

### Definability

The units and the additive connectives can be defined using second-order quantification and exponentials; indeed, the following equivalences hold:

• $\zero \linequiv \forall X.X$
• $\one \linequiv \forall X.(X \limp X)$
• $A \plus B \linequiv \forall X.(\oc(A \limp X) \limp \oc(B \limp X) \limp X)$

The constants $\top$ and $\bot$ and the connective $\with$ can be defined by duality.
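
Concretely, unfolding the De Morgan laws on the definitions above gives, for the constants:

$\top \linequiv \exists X.X \qquad \bot \linequiv \exists X.(X \tens X\orth)$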

Any pair of connectives that has the same rules as $\tens/\parr$ is equivalent to it; the same holds for the additives, but not for the exponentials.

Other basic equivalences exist.

## Properties of proofs

### Cut elimination and consequences

Theorem (cut elimination)

For every sequent $\Gamma\vdash\Delta$, there is a proof of $\Gamma\vdash\Delta$ if and only if there is a proof of $\Gamma\vdash\Delta$ that does not use the cut rule.

This property is proved using a set of rewriting rules on proofs, together with appropriate termination arguments (see the specific articles on cut elimination for detailed proofs). It is the core of the proof/program correspondence.

It has several important consequences:

Definition (subformula)

The subformulas of a formula A are A and, inductively, the subformulas of its immediate subformulas:

• the immediate subformulas of $A\tens B$, $A\parr B$, $A\plus B$, $A\with B$ are A and B,
• the only immediate subformula of $\oc A$ and $\wn A$ is A,
• $\one$, $\bot$, $\zero$, $\top$ and atomic formulas have no immediate subformula,
• the immediate subformulas of $\exists x.A$ and $\forall x.A$ are all the A[t / x] for all first-order terms t,
• the immediate subformulas of $\exists X.A$ and $\forall X.A$ are all the A[B / X] for all formulas B (with the appropriate number of parameters).

Theorem (subformula property)

A sequent $\Gamma\vdash\Delta$ is provable if and only if it is the conclusion of a proof in which each intermediate conclusion is made of subformulas of the formulas of Γ and Δ.

Proof. By the cut elimination theorem, if a sequent is provable, then it is provable by a cut-free proof. In each rule except the cut rule, every formula of the premisses is either a formula of the conclusion or an immediate subformula of a formula of the conclusion; therefore cut-free proofs have the subformula property.

The subformula property means essentially nothing in the second-order system, since any formula is a subformula of a quantified formula where the quantified variable occurs. However, the property is very meaningful if the formulas of Γ and Δ do not use second-order quantification, as it puts a strong restriction on the set of potential proofs of a given sequent. In particular, it implies that the quantifier-free fragment without exponentials is decidable.

Theorem (consistency)

The empty sequent $\vdash$ is not provable. Consequently, it is impossible to prove both a formula A and its negation $A\orth$; in particular, it is impossible to prove $\zero$ or $\bot$.

Proof. If a sequent is provable, then it is the conclusion of a cut-free proof. In every rule except the cut rule, there is at least one formula in the conclusion. Therefore $\vdash$ cannot be the conclusion of a proof. The other properties are immediate consequences: if $\vdash A\orth$ and $\vdash A$ were both provable, then by the left negation rule $A\orth\vdash$ would be provable, and the cut rule would yield the empty sequent, which is impossible. As particular cases, since $\one$ and $\top$ are provable, $\bot$ and $\zero$ are not, since they are equivalent to $\one\orth$ and $\top\orth$ respectively.

### Expansion of identities

Let us write $\pi:\Gamma\vdash\Delta$ to signify that π is a proof with conclusion $\Gamma\vdash\Delta$.

Proposition (η-expansion)

For every proof π, there is a proof π' with the same conclusion as π in which the axiom rule is only used with atomic formulas. If π is cut-free, then there is a cut-free π'.

Proof. It suffices to prove that for every formula A, the sequent $A\vdash A$ has a cut-free proof in which the axiom rule is used only for atomic formulas. We prove this by induction on A.

• If A is atomic, then $A\vdash A$ is an instance of the atomic axiom rule.
• If $A=A_1\tens A_2$ then we have
$\AxRule{ \pi_1 : A_1 \vdash A_1 } \AxRule{ \pi_2 : A_2 \vdash A_2 } \LabelRule{ \tens_R } \BinRule{ A_1, A_2 \vdash A_1 \tens A_2 } \LabelRule{ \tens_L } \UnaRule{ A_1 \tens A_2 \vdash A_1 \tens A_2 } \DisplayProof$
where π1 and π2 exist by induction hypothesis.
• If $A=A_1\parr A_2$ then we have
$\AxRule{ \pi_1 : A_1 \vdash A_1 } \AxRule{ \pi_2 : A_2 \vdash A_2 } \LabelRule{ \parr_L } \BinRule{ A_1 \parr A_2 \vdash A_1, A_2 } \LabelRule{ \parr_R } \UnaRule{ A_1 \parr A_2 \vdash A_1 \parr A_2 } \DisplayProof$
where π1 and π2 exist by induction hypothesis.
• All other connectives follow the same pattern.
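
For instance, in the exponential case $A = \oc A_1$, dereliction followed by promotion does the job:

$\AxRule{ \pi_1 : A_1 \vdash A_1 } \LabelRule{ d_L } \UnaRule{ \oc A_1 \vdash A_1 } \LabelRule{ \oc_R } \UnaRule{ \oc A_1 \vdash \oc A_1 } \DisplayProof$

where π1 exists by induction hypothesis, and the promotion rule applies since the left-hand side consists of $\oc$-formulas only.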

Thanks to η-expansion, we can always assume that each connective is explicitly introduced by its associated rule (except for formulas introduced as part of the context of a $\zero_L$ or $\top_R$ rule).

### Reversibility

Definition (reversibility)

A connective c is called reversible if

• for every proof $\pi:\Gamma\vdash\Delta,c(A_1,\ldots,A_n)$, there is a proof π' with the same conclusion in which $c(A_1,\ldots,A_n)$ is introduced by the last rule,
• if π is cut-free then there is a cut-free π'.

Proposition

The connectives $\parr$, $\bot$, $\with$, $\top$ and $\forall$ are reversible.

Proof. Using the η-expansion property, we assume that the axiom rule is only applied to atomic formulas. Then each top-level connective is introduced either by its associated (left or right) rule or in an instance of the $\zero_L$ or $\top_R$ rule.

For $\parr$, consider a proof $\pi:\Gamma\vdash\Delta,A\parr B$. If $A\parr B$ is introduced by a $\parr_R$ rule (not necessarily the last rule in π), then removing this rule yields a proof of $\Gamma\vdash\Delta,A,B$ (this can be proved by a straightforward induction on π). If it is introduced in the context of a $\zero_L$ or $\top_R$ rule, then this rule can be changed so that $A\parr B$ is replaced by A,B. In either case, we can apply a final $\parr_R$ rule to get the expected proof.

For $\bot$, the same technique applies: if it is introduced by a $\bot_R$ rule, then remove this rule to get a proof of $\Gamma\vdash\Delta$; if it is introduced by a $\zero_L$ or $\top_R$ rule, remove the $\bot$ from this rule; then apply the $\bot_R$ rule at the end of the new proof.

For $\with$, consider a proof $\pi:\Gamma\vdash\Delta,A\with B$. If the connective is introduced by a $\with$ rule then this rule is applied in a context like

$\AxRule{ \pi_1 : \Gamma' \vdash \Delta', A } \AxRule{ \pi_2 : \Gamma' \vdash \Delta', B } \LabelRule{ \with_R } \BinRule{ \Gamma' \vdash \Delta', A \with B } \DisplayProof$

Since the formula $A\with B$ is not involved in other rules (except as context), if we replace this step by π1 in π we get a proof $\pi'_1:\Gamma\vdash\Delta,A$. If we replace this step by π2 we get a proof $\pi'_2:\Gamma\vdash\Delta,B$. Combining $\pi'_1$ and $\pi'_2$ with a final $\with_R$ rule, we finally get the expected proof. The case when the $\with$ was introduced in a $\zero_L$ or $\top_R$ rule is handled as before.

For $\top$ the result is trivial: just choose π' as an instance of the $\top$ rule with the appropriate conclusion.

For $\forall$, consider a proof $\pi:\Gamma\vdash\Delta,\forall\xi.A$. Up to renaming, we can assume that ξ occurs free only above the rule that introduces the quantifier. If the quantifier is introduced by a $\forall$ rule, then if we remove this rule, we can check that we get a proof of $\Gamma\vdash\Delta,A$ on which we can finally apply the $\forall$ rule. The case when the $\forall$ was introduced in a $\top$ rule is solved as before.

Note that, in each case, if the proof we start from is cut-free, our transformations do not introduce a cut rule. However, if the original proof has cuts, then the final proof may have more cuts, since in the case of $\with$ we duplicated a part of the original proof.

A corresponding property for positive connectives is focalization, which states that clusters of positive formulas can be treated in one step, under certain circumstances.

## One-sided sequent calculus

The sequent calculus presented above is very symmetric: for every left introduction rule, there is a right introduction rule for the dual connective that has the exact same structure. Moreover, because of the involutivity of negation, a sequent $\Gamma,A\vdash\Delta$ is provable if and only if the sequent $\Gamma\vdash A\orth,\Delta$ is provable. From these remarks, we can define an equivalent one-sided sequent calculus:

• Formulas are considered up to De Morgan duality. Equivalently, one can consider that negation is not a connective but a syntactically defined operation on formulas. In this case, negated atoms $\alpha\orth$ must be considered as another kind of atomic formulas.
• Sequents have the form $\vdash\Gamma$.

The inference rules are essentially the same except that the left hand side of sequents is kept empty:

• Identity group:
$\LabelRule{\rulename{axiom}} \NulRule{ \vdash A\orth, A } \DisplayProof$$\AxRule{ \vdash \Gamma, A } \AxRule{ \vdash \Delta, A\orth } \LabelRule{\rulename{cut}} \BinRule{ \vdash \Gamma, \Delta } \DisplayProof$
• Multiplicative group:
$\AxRule{ \vdash \Gamma, A } \AxRule{ \vdash \Delta, B } \LabelRule{ \tens } \BinRule{ \vdash \Gamma, \Delta, A \tens B } \DisplayProof$$\AxRule{ \vdash \Gamma, A, B } \LabelRule{ \parr } \UnaRule{ \vdash \Gamma, A \parr B } \DisplayProof$$\LabelRule{ \one } \NulRule{ \vdash \one } \DisplayProof$$\AxRule{ \vdash \Gamma } \LabelRule{ \bot } \UnaRule{ \vdash \Gamma, \bot } \DisplayProof$
$\AxRule{ \vdash \Gamma, A } \LabelRule{ \plus_1 } \UnaRule{ \vdash \Gamma, A \plus B } \DisplayProof$$\AxRule{ \vdash \Gamma, B } \LabelRule{ \plus_2 } \UnaRule{ \vdash \Gamma, A \plus B } \DisplayProof$$\AxRule{ \vdash \Gamma, A } \AxRule{ \vdash \Gamma, B } \LabelRule{ \with } \BinRule{ \vdash \Gamma, A \with B } \DisplayProof$$\LabelRule{ \top } \NulRule{ \vdash \Gamma, \top } \DisplayProof$
• Exponential group:
$\AxRule{ \vdash \Gamma, A } \LabelRule{ d } \UnaRule{ \vdash \Gamma, \wn A } \DisplayProof$$\AxRule{ \vdash \Gamma } \LabelRule{ w } \UnaRule{ \vdash \Gamma, \wn A } \DisplayProof$$\AxRule{ \vdash \Gamma, \wn A, \wn A } \LabelRule{ c } \UnaRule{ \vdash \Gamma, \wn A } \DisplayProof$$\AxRule{ \vdash \wn\Gamma, B } \LabelRule{ \oc } \UnaRule{ \vdash \wn\Gamma, \oc B } \DisplayProof$
• Quantifier group (in the $\forall$ rule, ξ must not occur free in Γ):
$\AxRule{ \vdash \Gamma, A[t/x] } \LabelRule{ \exists^1 } \UnaRule{ \vdash \Gamma, \exists x.A } \DisplayProof$$\AxRule{ \vdash \Gamma, A[B/X] } \LabelRule{ \exists^2 } \UnaRule{ \vdash \Gamma, \exists X.A } \DisplayProof$$\AxRule{ \vdash \Gamma, A } \LabelRule{ \forall } \UnaRule{ \vdash \Gamma, \forall \xi.A } \DisplayProof$

Theorem

A two-sided sequent $\Gamma\vdash\Delta$ is provable if and only if the sequent $\vdash\Gamma\orth,\Delta$ is provable in the one-sided system.

The one-sided system enjoys the same properties as the two-sided one, including cut elimination, the subformula property, etc. This formulation is often used when studying proofs because it is much lighter than the two-sided form while keeping the same expressiveness. In particular, proof-nets can be seen as a quotient of one-sided sequent calculus proofs under commutation of rules.
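
For example, the two-sided sequent $A, A \limp B \vdash B$ (modus ponens) becomes $\vdash A\orth, A \tens B\orth, B$ in the one-sided system, since $(A \limp B)\orth \linequiv A \tens B\orth$. A one-sided proof is:

$\LabelRule{\rulename{axiom}} \NulRule{ \vdash A\orth, A } \LabelRule{\rulename{axiom}} \NulRule{ \vdash B\orth, B } \LabelRule{ \tens } \BinRule{ \vdash A\orth, A \tens B\orth, B } \DisplayProof$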

## Variations

### Exponential rules

• The promotion rule, on the right-hand side for example,

$\AxRule{ \oc A_1, \ldots, \oc A_n \vdash B, \wn B_1, \ldots, \wn B_m } \LabelRule{ \oc_R } \UnaRule{ \oc A_1, \ldots, \oc A_n \vdash \oc B, \wn B_1, \ldots, \wn B_m } \DisplayProof$ can be replaced by a multi-functorial promotion rule $\AxRule{ A_1, \ldots, A_n \vdash B, B_1, \ldots, B_m } \LabelRule{ \oc_R \rulename{mf}} \UnaRule{ \oc A_1, \ldots, \oc A_n \vdash \oc B, \wn B_1, \ldots, \wn B_m } \DisplayProof$ and a digging rule $\AxRule{ \Gamma \vdash \wn\wn A, \Delta } \LabelRule{ \wn\wn} \UnaRule{ \Gamma \vdash \wn A, \Delta } \DisplayProof$, without modifying the provability.

Note that digging violates the subformula property.

• In presence of the digging rule $\AxRule{ \Gamma \vdash \wn\wn A, \Delta } \LabelRule{ \wn\wn} \UnaRule{ \Gamma \vdash \wn A, \Delta } \DisplayProof$, the multiplexing rule $\AxRule{\Gamma\vdash A^{(n)},\Delta} \LabelRule{\rulename{mplex}} \UnaRule{\Gamma\vdash \wn A,\Delta} \DisplayProof$ (where A(n) stands for n occurrences of formula A) is equivalent (for provability) to the triple of rules: contraction, weakening, dereliction.
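
One direction of this equivalence is immediate: for $n \geq 1$, multiplexing is derivable from dereliction and contraction (and from weakening alone when n = 0), writing "× n" for n successive applications of a rule:

$\AxRule{ \Gamma \vdash A^{(n)}, \Delta } \LabelRule{ d_R \times n } \UnaRule{ \Gamma \vdash (\wn A)^{(n)}, \Delta } \LabelRule{ c_R \times (n-1) } \UnaRule{ \Gamma \vdash \wn A, \Delta } \DisplayProof$

The converse direction, recovering dereliction, weakening and contraction from multiplexing, is where the digging rule is needed.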

### Non-symmetric sequents

The same remarks that lead to the definition of the one-sided calculus can lead to the definition of other simplified systems:

• A one-sided variant with sequents of the form $\Gamma\vdash$ could be defined.
• When considering formulas up to De Morgan duality, an equivalent system is obtained by considering only the left and right rules for positive connectives (or the ones for negative connectives only, obviously).
• Intuitionistic linear logic is the two-sided system where the right-hand side is constrained to always contain exactly one formula (with a few associated restrictions).
• Similar restrictions are used in various semantics and proof search formalisms.

### Mix rules

It is quite common to consider mix rules: $\LabelRule{\rulename{Mix}_0} \NulRule{\vdash} \DisplayProof \qquad \AxRule{\Gamma \vdash \Delta} \AxRule{\Gamma' \vdash \Delta'} \LabelRule{\rulename{Mix}_2} \BinRule{\Gamma,\Gamma' \vdash \Delta,\Delta'} \DisplayProof$
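
These rules are not derivable in the basic system. For instance, $\rulename{Mix}_2$ makes $A \tens B \limp A \parr B$ provable:

$\AxRule{ A \vdash A } \AxRule{ B \vdash B } \LabelRule{\rulename{Mix}_2} \BinRule{ A, B \vdash A, B } \LabelRule{ \tens_L } \UnaRule{ A \tens B \vdash A, B } \LabelRule{ \parr_R } \UnaRule{ A \tens B \vdash A \parr B } \DisplayProof$

Similarly, $\rulename{Mix}_0$ makes $\bot$ provable (apply $\bot_R$ to the empty sequent); in the presence of both rules, $\one$ and $\bot$ become equivalent.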