# Coherent semantics

Coherent semantics was introduced by Girard in the paper The System F of Variable Types, Fifteen Years Later [1] with the objective of building a denotational interpretation of second order intuitionistic logic (aka polymorphic lambda-calculus).

Coherent semantics is based on the notion of stable function, initially proposed by Gérard Berry. Stability is a condition on Scott continuous functions that expresses the determinism of the relation between the output and the input: the typical Scott continuous but non-stable function is the parallel or, because when the two inputs are both set to true, only one of them is the reason why the result is true, but there is no way to determine which one.

A further achievement of coherent semantics was that it allowed one to endow the set of stable functions from X to Y with a structure of domain, thus making the category of coherent spaces and stable functions cartesian closed. However the most interesting point was the discovery of a special class of stable functions, the linear functions, which was the first step leading to linear logic.

## The cartesian closed structure of coherent semantics

There are three equivalent definitions of coherent spaces: the first one, coherent spaces as domains, is interesting from a historical point of view as it emphasizes the fact that coherent spaces are particular cases of Scott domains. The second one, coherent spaces as graphs, is the most commonly used and will be our "official" definition in the sequel. The last one, cliqued spaces, is a particular instance of a more general scheme that one could call "symmetric reducibility"; this scheme underlies many constructions in linear logic, such as phase semantics or the proof of strong normalisation for proof-nets.

### Coherent spaces

A coherent space X is a collection of subsets of a set $\web X$ satisfying some conditions that will be detailed shortly. The elements of X are called the cliques of X (for reasons that will be made clear in a few lines). The set $\web X$ is called the web of X and its elements are called the points of X; thus a clique is a set of points. Note that the terminology is a bit ambiguous as the points of X are the elements of the web of X, not the elements of X.

The definitions below give three equivalent conditions that have to be satisfied by the cliques of a coherent space.

#### As domains

The cliques of X have to satisfy:

• subset closure: if $x\subset y\in X$ then $x\in X$;
• singletons: $\{a\}\in X$ for $a\in\web X$;
• binary compatibility: if A is a family of pairwise compatible cliques of X, that is if $x\cup y\in X$ for any $x,y\in A$, then $\bigcup A\in X$.

A coherent space is thus ordered by inclusion; one easily checks that it is a domain. In particular finite cliques of X correspond to compact elements.

#### As graphs

There is a reflexive and symmetric relation $\coh_X$ on $\web X$ (the coherence relation) such that a subset x of $\web X$ is a clique of X iff $\forall a,b\in x,\, a\coh_X b$. In other words X is the set of complete subgraphs of the simple unoriented graph of the $\coh_X$ relation; this is the reason why the elements of X are called cliques.

The strict coherence relation $\scoh_X$ on $\web X$ is defined by: $a\scoh_X b$ iff $a\neq b$ and $a\coh_X b$.

A coherent space in the domain sense is seen to be a coherent space in the graph sense by setting $a\coh_X b$ iff $\{a,b\}\in X$; conversely one can check that the cliques in the graph sense satisfy subset closure, the singleton condition and binary compatibility.

A coherent space is completely determined by its web and its coherence relation, or equivalently by its web and its strict coherence.
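The graph definition can be made concrete in a small executable sketch (ours, not part of the original text; the encoding of points as Python strings and the name `is_clique` are assumptions of this sketch): a space is given by its coherence relation, and its cliques are exactly the pairwise coherent subsets of the web.

```python
def is_clique(points, coh):
    """A subset of the web is a clique iff its points are pairwise coherent."""
    pts = list(points)
    return all(coh(a, b) for a in pts for b in pts)

# The flat space B of booleans: each point is coherent only with itself.
coh_B = lambda a, b: a == b

print(is_clique(set(), coh_B))         # True: the empty set is a clique
print(is_clique({"tt"}, coh_B))        # True: singletons are always cliques
print(is_clique({"tt", "ff"}, coh_B))  # False: tt and ff are incoherent
```

Reflexivity of the coherence relation is what makes every singleton a clique, matching the singleton condition of the domain definition.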

#### As cliqued spaces

Definition (Duality)

Let $x, y\subseteq \web{X}$ be two sets. We say that they are dual, written $x\perp y$, if their intersection contains at most one element: $\mathrm{Card}(x\cap y)\leq 1$. As usual, this defines an orthogonality relation over $\powerset{\web{X}}$.

The last way to express the conditions on the cliques of a coherent space X is simply to say that we must have $X\biorth = X$.
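On a finite web the duality and the biorthogonal can be computed by brute force; the following sketch (our own encoding, not from the original text) checks the condition $X\biorth = X$ on a tiny example.

```python
from itertools import combinations

def dual_pair(x, y):
    """x ⊥ y iff Card(x ∩ y) ≤ 1."""
    return len(set(x) & set(y)) <= 1

def subsets(web):
    """All subsets of a finite web, as frozensets."""
    return [frozenset(c) for r in range(len(web) + 1)
            for c in combinations(web, r)]

def orth(X, web):
    """X⊥ = the subsets of the web dual to every element of X."""
    return {y for y in subsets(web) if all(dual_pair(x, y) for x in X)}

# Web {a, b} with a, b incoherent: cliques are ∅, {a}, {b}.
web = {"a", "b"}
X = {frozenset(), frozenset({"a"}), frozenset({"b"})}
print(orth(orth(X, web), web) == X)   # True: X is a cliqued space
```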

#### Equivalence of definitions

Let X be a cliqued space and define a relation on $\web X$ by setting $a\coh_X b$ iff there is $x\in X$ such that $a, b\in x$. This relation is obviously symmetric; it is also reflexive because all singletons belong to X: if $a\in \web X$ then $\{a\}$ is dual to any element of $X\orth$ (actually $\{a\}$ is dual to any subset of $\web X$), thus $\{a\}$ is in $X\biorth$, thus in X.

Let $a\coh_X b$. Then $\{a,b\}\in X$: indeed there is an $x\in X$ such that $a, b\in x$. This x is dual to any $y\in X\orth$, that is, it meets any $y\in X\orth$ in at most one point. Since $\{a,b\}\subset x$ this is also true of $\{a,b\}$, so that $\{a,b\}$ is in $X\biorth$, thus in X.

Now let x be a clique for $\coh_X$ and y be an element of $X\orth$. Suppose $a, b\in x\cap y$; then since a and b are coherent (by hypothesis on x) we have $\{a,b\}\in X$, and since $y\in X\orth$ the sets $\{a,b\}$ and y meet in at most one point. Thus $a = b$ and we have shown that x and y are dual. Since y was arbitrary this means that x is in $X\biorth$, thus in X. Finally any set of pairwise coherent points of X is in X. Conversely, the points of any $x\in X$ are obviously pairwise coherent, so that X is a coherent space in the graph sense.

Conversely, given a coherent space X in the graph sense, one can check that it is a cliqued space. Call anticlique a set $y\subset \web X$ of pairwise incoherent points: for all a, b in y, if $a\coh_X b$ then $a = b$. Any anticlique intersects any clique in at most one point: let x be a clique and y be an anticlique; if $a,b\in x\cap y$, then since $a, b\in x$ we have $a\coh_X b$, and since y is an anticlique we get $a = b$. Thus $y\in X\orth$. Conversely, given any $y\in X\orth$ and $a, b\in y$, suppose $a\coh_X b$. Then $\{a,b\}\in X$, thus $\{a,b\}\perp y$; since $a, b\in y$ this entails that $\{a,b\}$ has at most one point, so that $a = b$: we have shown that any two elements of y are incoherent.

Thus the collection of anticliques of X is the dual $X\orth$ of X. Note that the incoherence relation defined above is reflexive and symmetric, so that $X\orth$ is a coherent space in the graph sense. Thus we can do for $X\orth$ exactly what we've just done for X and consider the anti-anticliques, that is the anticliques for the incoherence relation, which are the cliques for the in-incoherence relation. It is not difficult to see that this in-incoherence relation is just the coherence relation we started with; we thus obtain $X\biorth = X$, so that X is a cliqued space.

### Stable functions

Definition (Stable function)

Let X and Y be two coherent spaces. A function $F:X\longrightarrow Y$ is stable if it satisfies:

• it is non-decreasing: for any $x,y\in X$, if $x\subset y$ then $F(x)\subset F(y)$;
• it is continuous (in the Scott sense): if A is a directed family of cliques of X, that is if for any $x,y\in A$ there is a $z\in A$ such that $x\cup y\subset z$, then $\bigcup_{x\in A}F(x) = F(\bigcup A)$;
• it satisfies the stability condition: if $x,y\in X$ are compatible, that is if $x\cup y\in X$, then $F(x\cap y) = F(x)\cap F(y)$.

This definition is admittedly not very tractable. An equivalent and more useful characterisation of stable functions is given by the following theorem.

Theorem

Let $F:X\longrightarrow Y$ be a non-decreasing function from the coherent space X to the coherent space Y. The function F is stable iff it satisfies: for any $x\in X$, $b\in\web Y$, if $b\in F(x)$ then there is a finite clique $x_0\subset x$ such that:

• $b\in F(x_0)$,
• for any $y\subset x$, if $b\in F(y)$ then $x_0\subset y$ ($x_0$ is the minimum sub-clique of x such that $b\in F(x_0)$).

Note that the stability condition doesn't depend on the coherent space structure and can be expressed more generally for continuous functions on domains. However, as mentioned in the introduction, the restriction to coherent spaces allows one to endow the set of stable functions from X to Y with a structure of coherent space.

Definition (The space of stable functions)

Let X and Y be coherent spaces. We denote by $X_{\mathrm{fin}}$ the set of finite cliques of X. The function space $X\imp Y$ is defined by:

• $\web{X\imp Y} = X_{\mathrm{fin}}\times \web Y$,
• $(x_0, a)\coh_{X\imp Y}(y_0, b)$ iff $\begin{cases}\text{if } x_0\cup y_0\in X\text{ then } a\coh_Y b,\\ \text{if } x_0\cup y_0\in X\text{ and } a = b\text{ then } x_0 = y_0\end{cases}$.

One could equivalently define the strict coherence relation on $X\imp Y$ by: $(x_0,a)\scoh_{X\imp Y}(y_0, b)$ iff when $x_0\cup y_0\in X$ then $a\scoh_Y b$ (equivalently $x_0\cup y_0\not\in X$ or $a\scoh_Y b$).

Definition (Trace of a stable function)

Let $F:X\longrightarrow Y$ be a function. The trace of F is the set:

 $\mathrm{Tr}(F) = \{(x_0, b), x_0\text{ minimal such that } b\in F(x_0)\}$.


Theorem

F is stable iff Tr(F) is a clique of the function space $X\imp Y$.

In particular the continuity of F entails that if $x_0$ is minimal such that $b\in F(x_0)$, then $x_0$ is finite.

Definition (The evaluation function)

Let f be a clique in $X\imp Y$. We define a function $\mathrm{Fun}\,f:X\longrightarrow Y$ by: $\mathrm{Fun}\,f(x) = \{b\in \web Y,\text{ there is }x_0\subset x\text{ such that }(x_0, b)\in f\}$.

Theorem (Closure)

If f is a clique of the function space $X\imp Y$ then we have $\mathrm{Tr}(\mathrm{Fun}\,f) = f$. Conversely if $F:X\longrightarrow Y$ is a stable function then we have $F = \mathrm{Fun}\,\mathrm{Tr}(F)$.
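The trace/Fun correspondence is easy to experiment with on a finite space. The sketch below is our own modelling (names `fun`, `trace_of` and the restriction to an explicit list of cliques are assumptions of the sketch, not part of the text): a stable function is represented by its trace, a set of pairs $(x_0, b)$ with $x_0$ a finite clique, and `fun` recovers the function.

```python
def fun(trace):
    """Fun f(x) = { b | some (x0, b) in f with x0 ⊆ x }."""
    def F(x):
        return {b for (x0, b) in trace if x0 <= frozenset(x)}
    return F

def trace_of(F, cliques):
    """Tr(F) = { (x0, b) | x0 minimal among the given cliques with b ∈ F(x0) }."""
    return {(x0, b)
            for x0 in cliques for b in F(x0)
            if all(not (y < x0 and b in F(y)) for y in cliques)}

# The identity on the flat booleans, given by its trace.
f = {(frozenset({"tt"}), "tt"), (frozenset({"ff"}), "ff")}
F = fun(f)
print(F({"tt"}))   # {'tt'}
print(F(set()))    # set()

# Closure: recomputing the trace over the three cliques of B gives f back.
cliques_B = [frozenset(), frozenset({"tt"}), frozenset({"ff"})]
print(trace_of(F, cliques_B) == f)   # True: Tr(Fun f) = f
```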

### Cartesian product

Definition (Cartesian product)

Let X1 and X2 be two coherent spaces. We define the coherent space $X_1\with X_2$ (read X1 with X2):

• the web is the disjoint union of the webs: $\web{X_1\with X_2} = \{1\}\times\web{X_1}\cup \{2\}\times\web{X_2}$;
• the coherence relation is the series composition of the relations on X1 and X2: $(i, a)\coh_{X_1\with X_2}(j, b)$ iff either $i\neq j$, or $i = j$ and $a\coh_{X_i} b$.

This definition is just the way to put a coherent space structure on the cartesian product. Indeed one easily shows the following theorem.

Theorem

Given cliques x1 and x2 in X1 and X2, we define the subset $\langle x_1, x_2\rangle$ of $\web{X_1\with X_2}$ by: $\langle x_1, x_2\rangle = \{1\}\times x_1\cup \{2\}\times x_2$. Then $\langle x_1, x_2\rangle$ is a clique in $X_1\with X_2$.

Conversely, given a clique $x\in X_1\with X_2$, for $i = 1,2$ we define $\pi_i(x) = \{a\in \web{X_i},\, (i, a)\in x\}$. Then $\pi_i(x)$ is a clique in $X_i$ and the function $\pi_i:X_1\with X_2\longrightarrow X_i$ is stable.

Furthermore these two operations are inverse of each other: $\pi_i(\langle x_1, x_2\rangle) = x_i$ and $\langle\pi_1(x), \pi_2(x)\rangle = x$. In particular any clique in $X_1\with X_2$ is of the form $\langle x_1, x_2\rangle$.
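The pairing and the projections can be coded directly from their definitions (a sketch with our own naming; points are tagged with 1 or 2 to realise the disjoint union):

```python
def pair(x1, x2):
    """⟨x1, x2⟩ = {1}×x1 ∪ {2}×x2."""
    return {(1, a) for a in x1} | {(2, a) for a in x2}

def proj(i, x):
    """πi(x) = { a | (i, a) ∈ x }."""
    return {a for (j, a) in x if j == i}

x = pair({"tt"}, {"ff"})
print(proj(1, x))                           # {'tt'}
print(pair(proj(1, x), proj(2, x)) == x)    # True: the operations are inverse
```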

Altogether the results above (and a few more that we shall leave to the reader) allow one to obtain:

Theorem

The category of coherent spaces and stable functions is cartesian closed.

In particular this means that if we define $\mathrm{Eval}:(X\imp Y)\with X\longrightarrow Y$ by: $\mathrm{Eval}(\langle f, x\rangle) = \mathrm{Fun}\,f(x)$ then Eval is stable.

## The monoidal structure of coherent semantics

### Linear functions

Definition (Linear function)

A function $F:X\longrightarrow Y$ is linear if it is stable and furthermore satisfies: for any family A of pairwise compatible cliques of X, that is such that for any $x, y\in A$, $x\cup y\in X$, we have $\bigcup_{x\in A}F(x) = F(\bigcup A)$.

In particular if we take A to be the empty family, then we have $F(\emptyset) = \emptyset$.

The condition for linearity is quite similar to the condition for Scott continuity, except that we dropped the constraint that A is directed. Linearity is therefore much stronger than stability: most stable functions are not linear.

However most of the functions seen so far are linear. Typically the function $\pi_i:X_1\with X_2\longrightarrow X_i$ is linear, from which one may deduce that the with construction is also a cartesian product in the category of coherent spaces and linear functions.

As with stable functions we have an equivalent and much more tractable characterisation of linear functions:

Theorem

Let $F:X\longrightarrow Y$ be a continuous function. Then F is linear iff it satisfies: for any clique $x\in X$ and any $b\in F(x)$ there is a unique $a\in x$ such that $b\in F(\{a\})$.

Just as the characterisation theorem for stable functions allowed us to build the coherent space of stable functions, this theorem will help us to endow the set of linear maps with a structure of coherent space.

Definition (The linear functions space)

Let X and Y be coherent spaces. The linear function space $X\limp Y$ is defined by:

• $\web{X\limp Y} = \web X\times \web Y$,
• $(a,b)\coh_{X\limp Y}(a', b')$ iff $\begin{cases}\text{if }a\coh_X a'\text{ then } b\coh_Y b'\\ \text{if }a\coh_X a' \text{ and }b=b'\text{ then }a=a'\end{cases}$

Equivalently one could define the strict coherence to be: $(a,b)\scoh_{X\limp Y}(a',b')$ iff $a\scoh_X a'$ entails $b\scoh_Y b'$.

Definition (Linear trace)

Let $F:X\longrightarrow Y$ be a function. The linear trace of F, denoted LinTr(F), is the set:

 $\mathrm{LinTr}(F) = \{(a, b)\in\web X\times\web Y$ such that $b\in F(\{a\})\}$.


Theorem

If F is linear then LinTr(F) is a clique of $X\limp Y$.

Definition (Evaluation of linear function)

Let f be a clique of $X\limp Y$. We define the function $\mathrm{LinFun}\,f:X\longrightarrow Y$ by: $\mathrm{LinFun}\,f(x) = \{b\in\web Y$ such that there is an $a\in x$ satisfying $(a,b)\in f\}$.

Theorem (Linear closure)

Let f be a clique in $X\limp Y$. Then we have $\mathrm{LinTr}(\mathrm{LinFun}\, f) = f$. Conversely if $F:X\longrightarrow Y$ is linear then we have $F = \mathrm{LinFun}\,\mathrm{LinTr}(F)$.
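Because a linear function is determined by its values on singletons, its trace needs no finite-clique bookkeeping. A sketch (our modelling; the example map `neg` is our own choice of a linear function on the flat booleans):

```python
def lin_fun(f):
    """LinFun f(x) = { b | some a ∈ x with (a, b) ∈ f }."""
    def F(x):
        return {b for (a, b) in f if a in x}
    return F

def lin_trace(F, web):
    """LinTr(F) = { (a, b) | b ∈ F({a}) }."""
    return {(a, b) for a in web for b in F({a})}

# Boolean negation as a clique of B ⊸ B.
neg = {("tt", "ff"), ("ff", "tt")}
N = lin_fun(neg)
print(N({"tt"}))                          # {'ff'}
print(N(set()))                           # set(): linearity forces F(∅) = ∅
print(lin_trace(N, {"tt", "ff"}) == neg)  # True: LinTr(LinFun f) = f
```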

It remains to define a tensor product and we will get that the category of coherent spaces with linear functions is symmetric monoidal (it is actually *-autonomous).

### Tensor product

Definition (Tensor product)

Let X and Y be coherent spaces. Their tensor product $X\tens Y$ is defined by: $\web{X\tens Y} = \web X\times\web Y$ and $(a,b)\coh_{X\tens Y}(a',b')$ iff $a\coh_X a'$ and $b\coh_Y b'$.
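The tensor coherence is pointwise; a one-line sketch (our encoding) makes this explicit:

```python
def coh_tensor(coh_X, coh_Y):
    """(a,b) coh (a',b') in X ⊗ Y iff a coh a' in X and b coh b' in Y."""
    def coh(p, q):
        (a, b), (a2, b2) = p, q
        return coh_X(a, a2) and coh_Y(b, b2)
    return coh

coh_B = lambda a, b: a == b
coh_BB = coh_tensor(coh_B, coh_B)
print(coh_BB(("tt", "ff"), ("tt", "ff")))   # True
print(coh_BB(("tt", "ff"), ("tt", "tt")))   # False: second components differ
```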

Theorem

The category of coherent spaces with linear maps and tensor product is symmetric monoidal closed.

The closedness is a consequence of the existence of the linear isomorphism:

 $\varphi:X\tens Y\limp Z\ \stackrel{\sim}{\longrightarrow}\ X\limp(Y\limp Z)$


that is defined by its linear trace: $\mathrm{LinTr}(\varphi) = \{(((a, b), c), (a, (b, c))),\, a\in\web X,\, b\in \web Y,\, c\in\web Z\}$.

### Linear negation

Definition (Linear negation)

Let X be a coherent space. We define the incoherence relation on $\web X$ by: $a\incoh_X b$ iff $a\coh_X b$ entails $a = b$. The incoherence relation is reflexive and symmetric; we call dual or linear negation of X the associated coherent space, denoted $X\orth$, thus defined by: $\web{X\orth} = \web X$ and $a\coh_{X\orth} b$ iff $a\incoh_X b$.

The cliques of $X\orth$ are called the anticliques of X. As seen in the section on cliqued spaces we have $X\biorth=X$.
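In code (a sketch with our own names), negation just flips the coherence relation off the diagonal, so cliques of $X\orth$ are the anticliques of X:

```python
def incoh(coh):
    """a incoh b iff (a coh b entails a = b)."""
    return lambda a, b: (not coh(a, b)) or a == b

def is_clique(points, coh):
    pts = list(points)
    return all(coh(a, b) for a in pts for b in pts)

# On the flat booleans tt and ff are incoherent,
# so {tt, ff} is an anticlique, i.e. a clique of B⊥.
coh_B = lambda a, b: a == b
print(is_clique({"tt", "ff"}, incoh(coh_B)))   # True
```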

Theorem

The category of coherent spaces with linear maps, tensor product and linear negation is *-autonomous.

This is in particular a consequence of the existence of the isomorphism:

 $\varphi:X\limp Y\ \stackrel{\sim}{\longrightarrow}\ Y\orth\limp X\orth$


defined by its linear trace: $\mathrm{LinTr}(\varphi) = \{((a, b), (b, a)),\, a\in\web X,\, b\in\web Y\}$.

## Exponentials

In linear algebra, bilinear maps may be factorized through the tensor product. Similarly there is a coherent space $\oc X$ that allows one to factorize stable functions through linear functions.

Definition (Of course)

Let X be a coherent space; recall that $X_{\mathrm{fin}}$ denotes the set of finite cliques of X. We define the space $\oc X$ (read of course X) by: $\web{\oc X} = X_{\mathrm{fin}}$ and $x_0\coh_{\oc X}y_0$ iff $x_0\cup y_0$ is a clique of X.

Thus a clique of $\oc X$ is a set of finite cliques of X the union of which is a clique of X.
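The coherence of $\oc X$ can be written down directly (a sketch, our encoding of points of $\oc X$ as frozensets):

```python
def coh_oc(coh_X):
    """x0 coh y0 in !X iff x0 ∪ y0 is a clique of X."""
    def coh(x0, y0):
        u = list(x0 | y0)
        return all(coh_X(a, b) for a in u for b in u)
    return coh

coh_B = lambda a, b: a == b
c = coh_oc(coh_B)
print(c(frozenset({"tt"}), frozenset({"tt"})))   # True
print(c(frozenset({"tt"}), frozenset({"ff"})))   # False: the union is no clique
```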

Theorem

Let X be a coherent space. Denote by $\beta:X\longrightarrow \oc X$ the stable function whose trace is: $\mathrm{Tr}(\beta) = \{(x_0, x_0),\, x_0\in X_{\mathrm{fin}}\}$. Then for any coherent space Y and any stable function $F: X\longrightarrow Y$ there is a unique linear function $\bar F:\oc X\longrightarrow Y$ such that $F = \bar F\circ \beta$.

Furthermore we have $X\imp Y = \oc X\limp Y$.

Theorem (The exponential isomorphism)

Let X and Y be two coherent spaces. Then there is a linear isomorphism:

 $\varphi:\oc(X\with Y)\quad\stackrel{\sim}{\longrightarrow}\quad \oc X\tens\oc Y$.


The iso $\varphi$ is defined by its trace: $\mathrm{Tr}(\varphi) = \{(x_0, (\pi_1(x_0), \pi_2(x_0))),\, x_0\text{ finite clique of } X\with Y\}$.

This isomorphism, which sends an additive structure (the web of a with is obtained by disjoint union) onto a multiplicative one (the web of a tensor is obtained by cartesian product), is the reason why the of course is called an exponential.

## Dual connectives and neutrals

By linear negation all the constructions defined so far ($\with, \tens, \oc$) have a dual.

### The direct sum

The dual of $\with$ is $\plus$ defined by: $X\plus Y = (X\orth\with Y\orth)\orth$. An equivalent definition is given by: $\web{X\plus Y} = \web{X\with Y} = \{1\}\times \web X \cup \{2\}\times\web Y$ and $(i, a)\coh_{X\plus Y} (j, b)\text{ iff } i = j = 1 \text{ and } a\coh_X b,\text{ or }i = j = 2\text{ and } a\coh_Y b$.

Theorem

Let x' be a clique of $X\plus Y$; then x' is of the form $\{i\}\times x$ where $i = 1\text{ and }x\in X$, or $i = 2\text{ and }x\in Y$.

Denote by $\mathrm{inl}:X\longrightarrow X\plus Y$ the function defined by $\mathrm{inl}(x) = \{1\}\times x$ and by $\mathrm{inr}:Y\longrightarrow X\plus Y$ the function defined by $\mathrm{inr}(y) = \{2\}\times y$. Then inl and inr are linear.

If $F:X\longrightarrow Z$ and $G:Y\longrightarrow Z$ are linear functions then the function $H:X\plus Y \longrightarrow Z$ defined by H(inl(x)) = F(x) and H(inr(y)) = G(y) is linear.
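The injections and the copairing H can be sketched as follows (our own encoding; the negation map used on the right branch is a hypothetical example of a linear function on the flat booleans):

```python
def inl(x):
    """inl(x) = {1}×x."""
    return {(1, a) for a in x}

def inr(y):
    """inr(y) = {2}×y."""
    return {(2, a) for a in y}

def case(F, G):
    """H with H(inl x) = F(x) and H(inr y) = G(y), on cliques of X ⊕ Y."""
    def H(z):
        left = {a for (i, a) in z if i == 1}
        right = {a for (i, a) in z if i == 2}
        return F(left) if left else G(right)
    return H

neg = lambda x: {"ff"} if "tt" in x else ({"tt"} if "ff" in x else set())
H = case(lambda x: x, neg)
print(H(inl({"tt"})))   # {'tt'}: the left branch is the identity
print(H(inr({"tt"})))   # {'ff'}: the right branch negates
```

Since a clique of $X\plus Y$ lives entirely on one side, H is well defined; on the empty clique both branches agree because linear functions send $\emptyset$ to $\emptyset$.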

In other words $X\plus Y$ is the direct sum of X and Y. Note that in the theorem all functions are linear. Things don't work so smoothly for stable functions; historically, it was after noting this defect of coherent semantics w.r.t. the intuitionistic implication that Girard was led to discover linear functions.

### The par and the why not

We now come to the most mysterious constructions of coherent semantics: the duals of the tensor and the of course.

The par is the dual of the tensor, thus defined by: $X\parr Y = (X\orth\tens Y\orth)\orth$. From this one can deduce the definition in graph terms: $\web{X\parr Y} = \web{X\tens Y} = \web X\times \web Y$ and $(a,b)\scoh_{X\parr Y} (a',b')$ iff $a\scoh_X a'$ or $b\scoh_Y b'$. With this definition one sees that we have:

 $X\limp Y = X\orth\parr Y$


for any coherent spaces X and Y. This equation can be seen as an alternative definition of the par: $X\parr Y = X\orth\limp Y$.

Similarly the dual of the of course is called why not and is defined by: $\wn X = (\oc X\orth)\orth$. From this we deduce the definition in the graph sense, which is a bit tricky: $\web{\wn X}$ is the set of finite anticliques of X, and given two finite anticliques x and y of X we have $x\scoh_{\wn X} y$ iff there is an $a\in x$ and a $b\in y$ such that $a\scoh_X b$.

Note that both for the par and the why not it is much more convenient to define the strict coherence than the coherence.

With these two last constructions, the equation between the stable function space, the of course and the linear function space may be written:

 $X\imp Y = \wn X\orth\parr Y$.


### One and bottom

Depending on the context we denote by $\one$ or $\bot$ the coherent space whose web is a singleton and whose coherence relation is the trivial reflexive relation.

Theorem

$\one$ is neutral for tensor, that is, there is a linear isomorphism $\varphi:X\tens\one\ \stackrel{\sim}{\longrightarrow}\ X$.

Similarly $\bot$ is neutral for par.

### Zero and top

Depending on the context we denote by $\zero$ or $\top$ the coherent space with empty web.

Theorem

$\zero$ is neutral for the direct sum $\plus$, $\top$ is neutral for the cartesian product $\with$.

Remark: it is one of the main defects of coherent semantics w.r.t. linear logic that it identifies the neutrals: in coherent semantics $\zero = \top$ and $\one = \bot$. However there is no known semantics of LL that solves this problem in a satisfactory way.

## After coherent semantics

Coherent semantics was an important milestone in the modern theory of the logic of programs, in particular because it led to the invention of linear logic, and more generally because it established a strong link between logic and linear algebra; this link is nowadays acknowledged by the customary use of monoidal categories in logic. In some sense coherent semantics is a precursor of many later works exploring the linear nature of logic, for example geometry of interaction, which interprets proofs by operators, or finiteness semantics, which interprets formulas as vector spaces and resulted in differential linear logic.

Much of this work has been motivated by the fact that coherent semantics is not complete as a semantics of programs (technically one says that it is not fully abstract). In order to see this, let us first come back to the origin of the central concept of stability, which as pointed out above originated in the study of sequentiality in programs.

### Sequentiality

Sequentiality is a property that we will not define here (it would deserve its own article). We rely on the intuition that a function of n arguments is sequential if one can determine which of these arguments is examined first during the computation. Obviously any function implemented in a functional language is sequential; for example the function or defined à la CAML by:

 or = fun (x, y) -> if x then true else y


examines its argument x first. Note that this may be expressed more abstractly by the property $\mathrm{or}(\bot, x) = \bot$ for any boolean x: the function or needs its first argument in order to compute anything. On the other hand we have $\mathrm{or}(\mathrm{true}, \bot) = \mathrm{true}$: in some cases (when the first argument is true), the function doesn't need its second argument at all.

The typical non-sequential function is the parallel or (which one cannot define in a CAML-like language).

For a while one may have believed that the stability condition on which coherent semantics is built was enough to capture the notion of sequentiality of programs. A hint was the already mentioned fact that the parallel or is not stable. This deserves a bit of explanation.

#### The parallel or is not stable

Let B be the coherent space of booleans, also known as the flat domain of booleans: $\web B = \{tt, ff\}$ where tt and ff are two arbitrary distinct objects (for example one may take tt = 0 and ff = 1) and for any $b_1, b_2\in \web B$, define $b_1\coh_B b_2$ iff $b_1 = b_2$. Then B has exactly three cliques: the empty clique that we shall denote $\bot$, the singleton $\{tt\}$ that we shall denote T and the singleton $\{ff\}$ that we shall denote F. These three cliques are ordered by inclusion: $\bot \leq T, F$ (we use $\leq$ for $\subset$ to enforce the idea that coherent spaces are domains).

Recall the definition of the with, and in particular that any clique of $B\with B$ has the form $\langle x, y\rangle$ where x and y are cliques of B. Thus $B\with B$ has 9 cliques: $\langle\bot,\bot\rangle,\ \langle\bot, T\rangle,\ \langle\bot, F\rangle,\ \langle T,\bot\rangle,\ \dots$ that are ordered by the product order: $\langle x,y\rangle\leq \langle x',y'\rangle$ iff $x\leq x'$ and $y\leq y'$.

With these notations in mind one may define the parallel or by:

$\begin{array}{rcl} \mathrm{Por} : B\with B &\longrightarrow& B\\ \langle T,\bot\rangle &\longrightarrow& T\\ \langle \bot,T\rangle &\longrightarrow& T\\ \langle F, F\rangle &\longrightarrow& F \end{array}$

The function is completely determined if we add the assumption that it is non-decreasing; for example one must have $\mathrm{Por}\langle\bot,\bot\rangle = \bot$ because the lhs has to be less than both T and F (since $\langle\bot,\bot\rangle \leq \langle T,\bot\rangle$ and $\langle\bot,\bot\rangle \leq \langle F,F\rangle$).

The function is not stable because $\langle T,\bot\rangle \cap \langle \bot, T\rangle = \langle\bot, \bot\rangle$, thus $\mathrm{Por}(\langle T,\bot\rangle \cap \langle \bot, T\rangle) = \bot$ whereas $\mathrm{Por}\langle T,\bot\rangle \cap \mathrm{Por}\langle \bot, T\rangle = T\cap T = T$.
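This computation can be replayed mechanically (a sketch; the encoding of cliques of $B\with B$ as sets of tagged points is our own): Por is monotone but fails the stability equation $F(x\cap y) = F(x)\cap F(y)$ on the compatible pair $\langle T,\bot\rangle$, $\langle\bot, T\rangle$.

```python
def por(x):
    """Parallel or on cliques of B & B; points are tagged (1,_) and (2,_)."""
    if (1, "tt") in x or (2, "tt") in x:
        return {"tt"}
    if (1, "ff") in x and (2, "ff") in x:
        return {"ff"}
    return set()

x = {(1, "tt")}   # ⟨T, ⊥⟩
y = {(2, "tt")}   # ⟨⊥, T⟩
print(por(x & y))        # set(): Por⟨⊥,⊥⟩ = ⊥
print(por(x) & por(y))   # {'tt'}: stability fails
```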

Another way to see this: suppose x and y are two cliques of B such that $tt\in \mathrm{Por}\langle x, y\rangle$, which means that $\mathrm{Por}\langle x, y\rangle = T$; according to the characterisation theorem of stable functions, if Por were stable then there would be a unique minimum clique included in $\langle x, y\rangle$ on which the value is T. This is not the case because both $\langle T,\bot\rangle$ and $\langle \bot,T\rangle$ are minimal such that their value is T.

In other words, knowing that $\mathrm{Por}\langle x, y\rangle = T$ doesn't tell us which of x or y is responsible for that, although we know by the definition of Por that only one of them is. Indeed the Por function is not representable in sequential programming languages such as (typed) lambda-calculus.

So the first genuine idea would be that stability characterises sequentiality; but...

#### The Gustave function is stable

The Gustave function, so called after an old joke, was found by Gérard Berry as an example of a function that is stable but not sequential. It is defined by:

$\begin{array}{rcl} B\with B\with B &\longrightarrow& B\\ \langle T, F, \bot\rangle &\longrightarrow& T\\ \langle \bot, T, F\rangle &\longrightarrow& T\\ \langle F, \bot, T\rangle &\longrightarrow& T\\ \langle x, y, z\rangle &\longrightarrow& F \end{array}$

The last clause applies to all cliques x, y and z such that $\langle x, y, z\rangle$ is incompatible with the three cliques $\langle T, F, \bot\rangle$, $\langle \bot, T, F\rangle$ and $\langle F, \bot, T\rangle$, that is, such that its union with any of these three cliques is not a clique in $B\with B\with B$. We shall denote these three cliques by $x_1$, $x_2$ and $x_3$.

We furthermore assume that the Gustave function is non-decreasing, so that we get $G\langle\bot,\bot,\bot\rangle = \bot$.

We note that $x_1$, $x_2$ and $x_3$ are pairwise incompatible. From this we can deduce that the Gustave function is stable: typically if $G\langle x,y,z\rangle = T$ then exactly one of the $x_i$s is contained in $\langle x, y, z\rangle$.
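Stability can also be verified by brute force over the 27 cliques of $B\with B\with B$. The sketch below (our own encoding; `None` stands for $\bot$ in a component) checks the stability equation $G(x\cap y) = G(x)\cap G(y)$ on every compatible pair.

```python
from itertools import product

def triple(b1, b2, b3):
    """⟨b1,b2,b3⟩ with bi in {'tt','ff',None}; None encodes ⊥."""
    return frozenset((i, b) for i, b in enumerate((b1, b2, b3)) if b)

X1, X2, X3 = (triple("tt", "ff", None), triple(None, "tt", "ff"),
              triple("ff", None, "tt"))

def compatible(x, y):
    """x ∪ y is a clique iff no slot holds two different booleans."""
    return all(b == b2 for (i, b) in x | y for (j, b2) in x | y if i == j)

def G(x):
    if any(xi <= x for xi in (X1, X2, X3)):      # contains some xi: value T
        return {"tt"}
    if all(not compatible(x, xi) for xi in (X1, X2, X3)):   # last clause: F
        return {"ff"}
    return set()                                 # otherwise ⊥

cliques = [triple(*bs) for bs in product(("tt", "ff", None), repeat=3)]
print(all(G(x & y) == G(x) & G(y)
          for x in cliques for y in cliques if compatible(x, y)))  # True
```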

However it is not sequential because there is no way to determine which of its three arguments is examined first: it is not the first one, otherwise we would have $G\langle\bot, T, F\rangle = \bot$, and similarly it is not the second one nor the third one.

In other words there is no way to implement the Gustave function by a lambda-term (or in any sequential programming language). Thus coherent semantics is not complete w.r.t. the lambda-calculus.

The search for the right model of sequentiality motivated a lot of work, e.g., the sequential algorithms of Gérard Berry and Pierre-Louis Curien in the early eighties, which were more recently reformulated as a kind of game model, and the theory of hypercoherent spaces by Antonio Bucciarelli and Thomas Ehrhard.

### Multiplicative neutrals and the mix rule

Coherent semantics is slightly degenerate w.r.t. linear logic because it identifies multiplicative neutrals (it also identifies additive neutrals, but that's yet another problem): the coherent spaces $\one$ and $\bot$ are equal.

The first consequence of the identity $\one = \bot$ is that the formula $\one\limp\bot$ becomes provable, and so does the formula $\bot$. Note that this doesn't entail (as it would in classical logic or intuitionistic logic) that linear logic is inconsistent, because the principle $\bot\limp A$ for an arbitrary formula A is still not provable.

The equality $\one = \bot$ also has as a consequence the fact that $\bot\limp\one$ (or equivalently the formula $\one\parr\one$) is provable. This principle is also known as the mix rule

$\AxRule{\vdash \Gamma} \AxRule{\vdash \Delta} \LabelRule{\rulename{mix}} \BinRule{\vdash \Gamma,\Delta} \DisplayProof$

since the provability of $\one\parr\one$ can be used to show that this rule is admissible:

$\AxRule{\vdash\Gamma} \LabelRule{\bot_R} \UnaRule{\vdash\Gamma, \bot} \AxRule{\vdash\Delta} \LabelRule{\bot_R} \UnaRule{\vdash\Delta, \bot} \BinRule{\vdash \Gamma, \Delta, \bot\tens\bot} \NulRule{\vdash \one\parr\one} \LabelRule{\rulename{cut}} \BinRule{\vdash\Gamma,\Delta} \DisplayProof$

Neither of the two principles $\one\limp\bot$ and $\bot\limp\one$ is valid in linear logic. To correct this mismatch one could extend the syntax of linear logic by adding the mix rule. This is not very satisfactory as the mix rule violates some principles of polarized linear logic, typically the fact that a sequent of the form $\vdash P_1, P_2$, where $P_1$ and $P_2$ are positive, is never provable.

On the other hand, since the mix rule is valid in coherent semantics, one could instead try to find some other model that invalidates it. For example Girard's coherent Banach spaces were an attempt to address this issue.

## References

1. Girard, Jean-Yves. The System F of Variable Types, Fifteen Years Later. Theoretical Computer Science. Volume 45, Issue 2, pp. 159-192, doi:10.1016/0304-3975(86)90044-7, 1986.