LLWiki — Coherent semantics (revision of 2011-10-15 by Laurent Regnier)
<hr />
<div>''Coherent semantics'' was invented by Girard in the paper ''The system F, 15 years later''<ref>{{BibEntry|bibtype=journal|author=Girard, Jean-Yves|title=The System F of Variable Types, Fifteen Years Later|journal=Theoretical Computer Science|volume=45|issue=2|pages=159-192|doi=10.1016/0304-3975(86)90044-7|year=1986}}</ref> with the objective of building a denotational interpretation of second order intuitionistic logic (aka polymorphic lambda-calculus).<br />
<br />
Coherent semantics is based on the notion of ''stable functions'' that was initially proposed by Gérard Berry. Stability is a condition on Scott continuous functions that expresses the determinism of the relation between the output and the input: the typical Scott continuous but non-stable function is the ''parallel or'', because when its two inputs are both set to '''true''', only one of them is the reason why the result is '''true''', but there is no way to determine which one.<br />
<br />
A further achievement of coherent semantics was that it made it possible to endow the set of stable functions from <math>X</math> to <math>Y</math> with a structure of domain, thus making the category of coherent spaces and stable functions cartesian closed. However the most interesting point was the discovery of a special class of stable functions, the ''linear functions'', which was the first step leading to Linear Logic.<br />
<br />
== The cartesian closed structure of coherent semantics ==<br />
<br />
There are three equivalent definitions of coherent spaces: the first one, ''coherent spaces as domains'', is interesting from a historical point of view as it emphasizes the fact that coherent spaces are particular cases of Scott domains. The second one, ''coherent spaces as graphs'', is the most commonly used and will be our "official" definition in the sequel. The last one, ''cliqued spaces'', is a particular example of a more general scheme that one could call "symmetric reducibility"; this scheme underlies many constructions in linear logic such as [[phase semantics]] or the proof of strong normalisation for proof-nets.<br />
<br />
=== Coherent spaces ===<br />
<br />
A coherent space <math>X</math> is a collection of subsets of a set <math>\web X</math> satisfying some conditions that will be detailed shortly. The elements of <math>X</math> are called the ''cliques'' of <math>X</math> (for reasons that will be made clear in a few lines). The set <math>\web X</math> is called the ''web'' of <math>X</math> and its elements are called the ''points'' of <math>X</math>; thus a clique is a set of points. Note that the terminology is a bit ambiguous as the points of <math>X</math> are the elements of the web of <math>X</math>, not the elements of <math>X</math>.<br />
<br />
The definitions below give three equivalent conditions that have to be satisfied by the cliques of a coherent space.<br />
<br />
==== As domains ====<br />
<br />
The cliques of <math>X</math> have to satisfy:<br />
* subset closure: if <math>x\subset y\in X</math> then <math>x\in X</math>;<br />
* singletons: <math>\{a\}\in X</math> for <math>a\in\web X</math>;<br />
* binary compatibility: if <math>A</math> is a family of pairwise compatible cliques of <math>X</math>, that is if <math>x\cup y\in X</math> for any <math>x,y\in A</math>, then <math>\bigcup A\in X</math>.<br />
<br />
A coherent space is thus ordered by inclusion; one easily checks that it is a domain. In particular finite cliques of <math>X</math> correspond to compact elements.<br />
<br />
==== As graphs ====<br />
<br />
There is a reflexive and symmetric relation <math>\coh_X</math> on <math>\web X</math> (the ''coherence relation'') such that any subset <math>x</math> of <math>\web X</math> is a clique of <math>X</math> iff <math>\forall a,b\in x,\, a\coh_X b</math>. In other terms <math>X</math> is the set of complete subgraphs of the simple unoriented graph of the <math>\coh_X</math> relation; this is the reason why the elements of <math>X</math> are called ''cliques''.<br />
<br />
The ''strict coherence relation'' <math>\scoh_X</math> on <math>X</math> is defined by: <math>a\scoh_X b</math> iff <math>a\neq b</math> and <math>a\coh_X b</math>.<br />
<br />
A coherent space in the domain sense is seen to be a coherent space in the graph sense by setting <math>a\coh_X b</math> iff <math>\{a,b\}\in X</math>; conversely one can check that cliques in the graph sense are subset closed and satisfy the binary compatibility condition.<br />
<br />
A coherent space is completely determined by its web and its coherence relation, or equivalently by its web and its strict coherence.<br />
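To make the graph definition concrete, here is a small sketch in Python (the encoding of points as integers and the helper name <code>is_clique</code> are ours, not part of the article): a coherent space is given by its coherence relation, and a set of points is a clique iff its points are pairwise coherent.<br />

```python
from itertools import combinations

def is_clique(coh, points):
    """A set of points is a clique iff its points are pairwise coherent.

    coh is the coherence relation, a reflexive symmetric boolean function."""
    return all(coh(a, b) for a, b in combinations(points, 2))

# Example: the flat space of booleans, where two points are coherent iff equal.
coh_flat = lambda a, b: a == b
```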
<br />
==== As cliqued spaces ====<br />
<br />
{{Definition|title=Duality|<br />
Let <math>x, y\subseteq \web{X}</math> be two sets. We will say that they are dual, written <math>x\perp y</math> if their intersection contains at most one element: <math>\mathrm{Card}(x\cap y)\leq 1</math>. As usual, it defines an [[orthogonality relation]] over <math>\powerset{\web{X}}</math>.}}<br />
<br />
The last way to express the conditions on the cliques of a coherent space <math>X</math> is simply to say that we must have <math>X\biorth = X</math>.<br />
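On a tiny web the duality condition and the biorthogonal closure <math>X\biorth = X</math> can be checked by brute force; this sketch (the helpers <code>dual</code> and <code>orthogonal</code> are our own encoding) enumerates the powerset of the web.<br />

```python
from itertools import chain, combinations

def dual(x, y):
    """Two sets of points are dual iff their intersection has at most one element."""
    return len(set(x) & set(y)) <= 1

def orthogonal(web, family):
    """All subsets of the web that are dual to every member of the family.

    Webs here are tiny, so we simply enumerate the whole powerset."""
    subsets = chain.from_iterable(combinations(sorted(web), r)
                                  for r in range(len(web) + 1))
    return {frozenset(s) for s in subsets if all(dual(s, x) for x in family)}

# The cliques of the flat space of booleans, on the web {0, 1}.
flat_cliques = {frozenset(), frozenset({0}), frozenset({1})}
```

On this example one checks that <code>orthogonal</code> applied twice to <code>flat_cliques</code> gives back <code>flat_cliques</code>, as the cliqued-space condition requires.<br />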
<br />
==== Equivalence of definitions ====<br />
<br />
Let <math>X</math> be a cliqued space and define a relation on <math>\web X</math> by setting <math>a\coh_X b</math> iff there is <math>x\in X</math> such that <math>a, b\in x</math>. This relation is obviously symmetric; it is also reflexive because all singletons belong to <math>X</math>: if <math>a\in \web X</math> then <math>\{a\}</math> is dual to any element of <math>X\orth</math> (actually <math>\{a\}</math> is dual to any subset of <math>\web X</math>), thus <math>\{a\}</math> is in <math>X\biorth</math>, thus in <math>X</math>.<br />
<br />
Let <math>a\coh_X b</math>. Then <math>\{a,b\}\in X</math>; indeed there is an <math>x\in X</math> such that <math>a, b\in x</math>. This <math>x</math> is dual to any <math>y\in X\orth</math>, that is, it meets any <math>y\in X\orth</math> in at most one point. Since <math>\{a,b\}\subset x</math> this is also true of <math>\{a,b\}</math>, so that <math>\{a,b\}</math> is in <math>X\biorth</math>, thus in <math>X</math>.<br />
<br />
Now let <math>x</math> be a clique for <math>\coh_X</math> and <math>y</math> be an element of <math>X\orth</math>. Suppose <math>a, b\in x\cap y</math>; then since <math>a</math> and <math>b</math> are coherent (by hypothesis on <math>x</math>) we have <math>\{a,b\}\in X</math>, and since <math>y\in X\orth</math> we must have that <math>\{a,b\}</math> and <math>y</math> meet in at most one point. Thus <math>a = b</math> and we have shown that <math>x</math> and <math>y</math> are dual. Since <math>y</math> was arbitrary this means that <math>x</math> is in <math>X\biorth</math>, thus in <math>X</math>. Finally we get that any set of pairwise coherent points of <math>X</math> is in <math>X</math>. Conversely, given <math>x\in X</math> its points are obviously pairwise coherent, so that <math>X</math> is a coherent space in the graph sense.<br />
<br />
Conversely given a coherent space <math>X</math> in the graph sense, one can check that it is a cliqued space. Call ''anticlique'' a set <math>y\subset \web X</math> of pairwise incoherent points: for all <math>a, b</math> in <math>y</math>, if <math>a\coh_X b</math> then <math>a=b</math>. Any anticlique intersects any clique in at most one point: let <math>x</math> be a clique and <math>y</math> be an anticlique, then if <math>a,b\in x\cap y</math>, since <math>a, b\in x</math> we have <math>a\coh_X b</math> and since <math>y</math> is an anticlique we have <math>a = b</math>. Thus <math>y\in X\orth</math>. Conversely given any <math>y\in X\orth</math> and <math>a, b\in y</math>, suppose <math>a\coh_X b</math>. Then <math>\{a,b\}\in X</math>, thus <math>\{a,b\}\perp y</math> which entails that <math>\{a, b\}</math> has at most one point so that <math>a = b</math>: we have shown that any two elements of <math>y</math> are incoherent.<br />
<br />
Thus the collection of anticliques of <math>X</math> is the dual <math>X\orth</math> of <math>X</math>. Note that the incoherence relation defined above is reflexive and symmetric, so that <math>X\orth</math> is a coherent space in the graph sense. Thus we can do for <math>X\orth</math> exactly what we've just done for <math>X</math> and consider the anti-anticliques, that is the anticliques for the incoherence relation, which are the cliques for the double incoherence relation. It is not difficult to see that this double incoherence relation is just the coherence relation we started with; we thus obtain <math>X\biorth = X</math>, so that <math>X</math> is a cliqued space.<br />
<br />
=== Stable functions ===<br />
<br />
{{Definition|title=Stable function|<br />
Let <math>X</math> and <math>Y</math> be two coherent spaces. A function <math>F:X\longrightarrow Y</math> is ''stable'' if it satisfies:<br />
* it is non-decreasing: for any <math>x,y\in X</math> if <math>x\subset y</math> then <math>F(x)\subset F(y)</math>;<br />
* it is continuous (in the Scott sense): if <math>A</math> is a directed family of cliques of <math>X</math>, that is if for any <math>x,y\in A</math> there is a <math>z\in A</math> such that <math>x\cup y\subset z</math>, then <math>\bigcup_{x\in A}F(x) = F(\bigcup A)</math>;<br />
* it satisfies the stability condition: if <math>x,y\in X</math> are compatible, that is if <math>x\cup y\in X</math>, then <math>F(x\cap y) = F(x)\cap F(y)</math>.<br />
}}<br />
<br />
This definition is admittedly not very tractable. An equivalent and much more useful characterisation of stable functions is given by the following theorem.<br />
<br />
{{Theorem|<br />
Let <math>F:X\longrightarrow Y</math> be a non-decreasing function from the coherent space <math>X</math> to the coherent space <math>Y</math>. The function <math>F</math> is stable iff it satisfies: for any <math>x\in X</math>, <math>b\in\web Y</math>, if <math>b\in F(x)</math> then there is a finite clique <math>x_0\subset x</math> such that:<br />
* <math>b\in F(x_0)</math>,<br />
* for any <math>y\subset x</math> if <math>b\in F(y)</math> then <math>x_0\subset y</math> (<math>x_0</math> is ''the'' minimum sub-clique of <math>x</math> such that <math>b\in F(x_0)</math>). <br />
}}<br />
<br />
Note that the stability condition doesn't depend on the coherent space structure and can be expressed more generally for continuous functions on domains. However, as mentioned in the introduction, the restriction to coherent spaces makes it possible to endow the set of stable functions from <math>X</math> to <math>Y</math> with a structure of coherent space.<br />
<br />
{{Definition|title=The space of stable functions|<br />
Let <math>X</math> and <math>Y</math> be coherent spaces. We denote by <math>X_{\mathrm{fin}}</math> the set of ''finite'' cliques of <math>X</math>. The function space <math>X\imp Y</math> is defined by:<br />
* <math>\web{X\imp Y} = X_{\mathrm{fin}}\times \web Y</math>,<br />
* <math>(x_0, a)\coh_{X\imp Y}(y_0, b)</math> iff <math>\begin{cases}\text{if } x_0\cup y_0\in X\text{ then } a\coh_Y b,\\<br />
\text{if } x_0\cup y_0\in X\text{ and } a = b\text{ then } x_0 = y_0\end{cases}</math>.<br />
}}<br />
<br />
One could equivalently define the strict coherence relation on <math>X\imp Y</math> by: <math>(x_0,a)\scoh_{X\imp Y}(y_0, b)</math> iff when <math>x_0\cup y_0\in X</math> then <math>a\scoh_Y b</math> (equivalently <math>x_0\cup y_0\not\in X</math> or <math>a\scoh_Y b</math>).<br />
<br />
{{Definition|title=Trace of a stable function|<br />
Let <math>F:X\longrightarrow Y</math> be a function. The ''trace'' of <math>F</math> is the set:<br />
<br />
<math>\mathrm{Tr}(F) = \{(x_0, b), x_0\text{ minimal such that } b\in F(x_0)\}</math>.<br />
}}<br />
<br />
{{theorem|<br />
<math>F</math> is stable iff <math>\mathrm{Tr}(F)</math> is a clique of the function space <math>X\imp Y</math>.<br />
}}<br />
<br />
In particular the continuity of <math>F</math> entails that if <math>x_0</math> is minimal such that <math>b\in F(x_0)</math>, then <math>x_0</math> is finite.<br />
<br />
{{Definition|title=The evaluation function|<br />
Let <math>f</math> be a clique in <math>X\imp Y</math>. We define a function <math>\mathrm{Fun}\,f:X\longrightarrow Y</math> by: <math>\mathrm{Fun}\,f(x) = \{b\in \web Y,\text{ there is }x_0\subset x\text{ such that }(x_0, b)\in f\}</math>.<br />
}}<br />
<br />
{{Theorem|title=Closure|<br />
If <math>f</math> is a clique of the function space <math>X\imp Y</math> then we have <math>\mathrm{Tr}(\mathrm{Fun}\,f) = f</math>. Conversely if <math>F:X\longrightarrow Y</math> is a stable function then we have <math>F = \mathrm{Fun}\,\mathrm{Tr}(F)</math>.<br />
}}<br />
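Continuing the toy Python encoding (ours, not the article's): a clique of <math>X\imp Y</math> is a set of pairs (finite clique, point), and evaluation collects the points whose minimal clique is contained in the argument.<br />

```python
def fun_of_trace(f, x):
    """Evaluate a clique f of X => Y on a clique x of X.

    f is a set of pairs (x0, b) where x0 is a finite clique (a frozenset)
    and b a point of Y; Fun f(x) = { b | some (x0, b) in f with x0 <= x }."""
    x = frozenset(x)
    return {b for (x0, b) in f if x0 <= x}

# Example: the trace of the identity function on the flat space of booleans.
id_trace = {(frozenset({0}), 0), (frozenset({1}), 1)}
```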
<br />
=== Cartesian product ===<br />
<br />
{{Definition|title=Cartesian product|<br />
Let <math>X_1</math> and <math>X_2</math> be two coherent spaces. We define the coherent space <math>X_1\with X_2</math> (read <math>X_1</math> ''with'' <math>X_2</math>):<br />
* the web is the disjoint union of the webs: <math>\web{X_1\with X_2} = \{1\}\times\web{X_1}\cup \{2\}\times\web{X_2}</math>;<br />
* the coherence relation is the series composition of the relations on <math>X_1</math> and <math>X_2</math>: <math>(i, a)\coh_{X_1\with X_2}(j, b)</math> iff either <math>i\neq j</math> or <math>i=j</math> and <math>a\coh_{X_i} b</math>.<br />
}}<br />
<br />
This definition is just the way to put a coherent space structure on the cartesian product. Indeed one easily shows the following:<br />
<br />
{{Theorem|<br />
Given cliques <math>x_1</math> and <math>x_2</math> in <math>X_1</math> and <math>X_2</math>, we define the subset <math>\langle x_1, x_2\rangle</math> of <math>\web{X_1\with X_2}</math> by: <math>\langle x_1, x_2\rangle = \{1\}\times x_1\cup \{2\}\times x_2</math>. Then <math>\langle x_1, x_2\rangle</math> is a clique in <math>X_1\with X_2</math>.<br />
<br />
Conversely, given a clique <math>x\in X_1\with X_2</math>, for <math>i=1,2</math> we define <math>\pi_i(x) = \{a\in \web{X_i}, (i, a)\in x\}</math>. Then <math>\pi_i(x)</math> is a clique in <math>X_i</math> and the function <math>\pi_i:X_1\with X_2\longrightarrow X_i</math> is stable.<br />
<br />
Furthermore these two operations are inverse of each other: <math>\pi_i(\langle x_1, x_2\rangle) = x_i</math> and <math>\langle\pi_1(x), \pi_2(x)\rangle = x</math>. In particular any clique in <math>X_1\with X_2</math> is of the form <math>\langle x_1, x_2\rangle</math>.<br />
}}<br />
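The pairing and projection maps of the theorem are easy to model in the same toy encoding (the names <code>pair</code> and <code>proj</code> are ours): points of the with are tagged with the component they come from.<br />

```python
def pair(x1, x2):
    """<x1, x2>: inject each clique into the disjoint union, tagging the points."""
    return {(1, a) for a in x1} | {(2, a) for a in x2}

def proj(i, x):
    """pi_i: keep the points tagged with i, dropping the tag."""
    return {a for (j, a) in x if j == i}
```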
<br />
Altogether the results above (and a few more that we shall leave to the reader) allow one to obtain:<br />
<br />
{{Theorem|<br />
The category of coherent spaces and stable functions is cartesian closed.<br />
}}<br />
<br />
In particular this means that if we define <math>\mathrm{Eval}:(X\imp Y)\with X\longrightarrow Y</math> by: <math>\mathrm{Eval}(\langle f, x\rangle) = \mathrm{Fun}\,f(x)</math> then <math>\mathrm{Eval}</math> is stable.<br />
<br />
== The monoidal structure of coherent semantics ==<br />
<br />
=== Linear functions ===<br />
<br />
{{Definition|title=Linear function|<br />
A function <math>F:X\longrightarrow Y</math> is ''linear'' if it is stable and furthermore satisfies: for any family <math>A</math> of pairwise compatible cliques of <math>X</math>, that is such that for any <math>x, y\in A</math>, <math>x\cup y\in X</math>, we have <math>\bigcup_{x\in A}F(x) = F(\bigcup A)</math>.<br />
}}<br />
<br />
In particular if we take <math>A</math> to be the empty family, then we have <math>F(\emptyset) = \emptyset</math>.<br />
<br />
The condition for linearity is quite similar to the condition for Scott continuity, except that we dropped the constraint that <math>A</math> is ''directed''. Linearity is therefore much stronger than stability: most stable functions are not linear.<br />
<br />
However most of the functions seen so far are linear. Typically the projection <math>\pi_i:X_1\with X_2\longrightarrow X_i</math> is linear, from which one may deduce that the ''with'' construction is also a cartesian product in the category of coherent spaces and linear functions.<br />
<br />
As with stable functions we have an equivalent and much more tractable characterisation of linear functions:<br />
<br />
{{Theorem|<br />
Let <math>F:X\longrightarrow Y</math> be a continuous function. Then <math>F</math> is linear iff it satisfies: for any clique <math>x\in X</math> and any <math>b\in F(x)</math> there is a unique <math>a\in x</math> such that <math>b\in F(\{a\})</math>.<br />
}}<br />
<br />
Just as the characterisation theorem for stable functions allowed us to build the coherent space of stable functions, this theorem will help us endow the set of linear functions with a structure of coherent space.<br />
<br />
{{Definition|title=The linear functions space|<br />
Let <math>X</math> and <math>Y</math> be coherent spaces. The ''linear function space'' <math>X\limp Y</math> is defined by:<br />
* <math>\web{X\limp Y} = \web X\times \web Y</math>,<br />
* <math>(a,b)\coh_{X\limp Y}(a', b')</math> iff <math>\begin{cases}\text{if }a\coh_X a'\text{ then } b\coh_Y b'\\<br />
\text{if }a\coh_X a' \text{ and }b=b'\text{ then }a=a'\end{cases}</math><br />
}}<br />
<br />
Equivalently one could define the strict coherence to be: <math>(a,b)\scoh_{X\limp Y}(a',b')</math> iff <math>a\scoh_X a'</math> entails <math>b\scoh_Y b'</math>.<br />
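The two clauses of the definition translate directly into the toy Python encoding (the helper <code>coh_limp</code> is ours): given the coherence relations of <math>X</math> and <math>Y</math>, it decides coherence of two points of the web of <math>X\limp Y</math>.<br />

```python
def coh_limp(coh_x, coh_y):
    """Coherence on the web of X -o Y, following the two-clause definition."""
    def coh(p, q):
        (a, b), (a2, b2) = p, q
        if not coh_x(a, a2):
            return True   # incoherent arguments: the pairs are always coherent
        if not coh_y(b, b2):
            return False  # coherent arguments must yield coherent results
        return b != b2 or a == a2  # equal results force equal arguments
    return coh

coh_flat = lambda a, b: a == b  # the flat space of booleans again
```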
<br />
{{Definition|title=Linear trace|<br />
Let <math>F:X\longrightarrow Y</math> be a function. The ''linear trace'' of <math>F</math> denoted as <math>\mathrm{LinTr}(F)</math> is the set:<br />
<math>\mathrm{LinTr}(F) = \{(a, b)\in\web X\times\web Y</math> such that <math>b\in F(\{a\})\}</math>.<br />
}}<br />
<br />
{{Theorem|<br />
If <math>F</math> is linear then <math>\mathrm{LinTr}(F)</math> is a clique of <math>X\limp Y</math>.<br />
}}<br />
<br />
{{Definition|title=Evaluation of linear function|<br />
Let <math>f</math> be a clique of <math>X\limp Y</math>. We define the function <math>\mathrm{LinFun}\,f:X\longrightarrow Y</math> by: <math>\mathrm{LinFun}\,f(x) = \{b\in\web Y</math> such that there is an <math>a\in x</math> satisfying <math>(a,b)\in f\}</math>.<br />
}}<br />
<br />
{{Theorem|title=Linear closure|<br />
Let <math>f</math> be a clique in <math>X\limp Y</math>. Then we have <math>\mathrm{LinTr}(\mathrm{LinFun}\, f) = f</math>. Conversely if <math>F:X\longrightarrow Y</math> is linear then we have <math>F = \mathrm{LinFun}\,\mathrm{LinTr}(F)</math>.<br />
}}<br />
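Evaluation of a linear trace is even simpler than the stable case, since it looks for single points rather than finite sub-cliques; a sketch in the same toy encoding (the names are ours):<br />

```python
def linfun(f, x):
    """Evaluate a clique f of X -o Y: f is a set of pairs (a, b) of points,
    and LinFun f(x) = { b | some a in x has (a, b) in f }."""
    return {b for (a, b) in f if a in x}

# Example: the linear trace of boolean negation on the flat booleans.
neg_trace = {(0, 1), (1, 0)}
```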
<br />
It remains to define a tensor product and we will get that the category of coherent spaces with linear functions is symmetric monoidal (it is actually *-autonomous).<br />
<br />
=== Tensor product ===<br />
<br />
{{Definition|title=Tensor product|<br />
Let <math>X</math> and <math>Y</math> be coherent spaces. Their tensor product <math>X\tens Y</math> is defined by: <math>\web{X\tens Y} = \web X\times\web Y</math> and <math>(a,b)\coh_{X\tens Y}(a',b')</math> iff <math>a\coh_X a'</math> and <math>b\coh_Y b'</math>.<br />
}}<br />
<br />
{{Theorem|<br />
The category of coherent spaces with linear maps and tensor product is [[Categorical semantics#Modeling IMLL|symmetric monoidal closed]].<br />
}}<br />
<br />
The closedness is a consequence of the existence of the linear isomorphism:<br />
<math>\varphi:X\tens Y\limp Z\ \stackrel{\sim}{\longrightarrow}\ X\limp(Y\limp Z)</math><br />
<br />
that is defined by its linear trace: <math>\mathrm{LinTr}(\varphi) = \{(((a, b), c), (a, (b, c))),\, a\in\web X,\, b\in \web Y,\, c\in\web Z\}</math>.<br />
<br />
=== Linear negation ===<br />
<br />
{{Definition|title=Linear negation|<br />
Let <math>X</math> be a coherent space. We define the ''incoherence relation'' on <math>\web X</math> by: <math>a\incoh_X b</math> iff <math>a\coh_X b</math> entails <math>a=b</math>. The incoherence relation is reflexive and symmetric; we call ''dual'' or ''linear negation'' of <math>X</math> the associated coherent space denoted <math>X\orth</math>, thus defined by: <math>\web{X\orth} = \web X</math> and <math>a\coh_{X\orth} b</math> iff <math>a\incoh_X b</math>.<br />
}}<br />
<br />
The cliques of <math>X\orth</math> are called the ''anticliques'' of <math>X</math>. As seen in the section on cliqued spaces we have <math>X\biorth=X</math>.<br />
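In the toy encoding the dual is a one-liner (the helper <code>coh_dual</code> is ours), and the involution <math>X\biorth=X</math> can be observed on examples by applying it twice.<br />

```python
def coh_dual(coh):
    """Coherence of the dual X^orth: a and b are coherent in the dual iff
    their coherence in X forces a = b (i.e. they are incoherent in X)."""
    return lambda a, b: (not coh(a, b)) or a == b

coh_flat = lambda a, b: a == b  # the flat space of booleans
```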
<br />
{{Theorem|<br />
The category of coherent spaces with linear maps, tensor product and linear negation is *-autonomous.<br />
}}<br />
<br />
This is in particular consequence of the existence of the isomorphism:<br />
<math>\varphi:X\limp Y\ \stackrel{\sim}{\longrightarrow}\ Y\orth\limp X\orth</math><br />
<br />
defined by its linear trace: <math>\mathrm{LinTr}(\varphi) = \{((a, b), (b, a)),\, a\in\web X,\, b\in\web Y\}</math>.<br />
<br />
== Exponentials ==<br />
<br />
In linear algebra, bilinear maps may be factorized through the tensor product. Similarly there is a coherent space <math>\oc X</math> that makes it possible to factorize stable functions through linear functions.<br />
<br />
{{Definition|title=Of course|<br />
Let <math>X</math> be a coherent space; recall that <math>X_{\mathrm{fin}}</math> denotes the set of finite cliques of <math>X</math>. We define the space <math>\oc X</math> (read ''of course <math>X</math>'') by: <math>\web{\oc X} = X_{\mathrm{fin}}</math> and <math>x_0\coh_{\oc X}y_0</math> iff <math>x_0\cup y_0</math> is a clique of <math>X</math>.<br />
}}<br />
<br />
Thus a clique of <math>\oc X</math> is a set of finite cliques of <math>X</math> the union of which is a clique of <math>X</math>.<br />
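The coherence relation of <math>\oc X</math> can be sketched in the toy encoding (the helper <code>coh_oc</code> is ours): two finite cliques, represented as frozensets, are coherent iff their union is pairwise coherent.<br />

```python
def coh_oc(coh):
    """Coherence on !X: two finite cliques are coherent iff their union
    is itself a clique of X."""
    def c(x0, y0):
        union = x0 | y0
        # coh is reflexive, so checking all pairs (including a = b) is harmless
        return all(coh(a, b) for a in union for b in union)
    return c

coh_flat = lambda a, b: a == b  # the flat space of booleans
```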
<br />
{{Theorem|<br />
Let <math>X</math> be a coherent space. Denote by <math>\beta:X\longrightarrow \oc X</math> the stable function whose trace is: <math>\mathrm{Tr}(\beta) = \{(x_0, x_0),\, x_0\in X_{\mathrm{fin}}\}</math>. Then for any coherent space <math>Y</math> and any stable function <math>F: X\longrightarrow Y</math> there is a unique ''linear'' function <math>\bar F:\oc X\longrightarrow Y</math> such that <math>F = \bar F\circ \beta</math>.<br />
<br />
Furthermore we have <math>X\imp Y = \oc X\limp Y</math>.<br />
}}<br />
<br />
{{Theorem|title=The exponential isomorphism|<br />
Let <math>X</math> and <math>Y</math> be two coherent spaces. Then there is a linear isomorphism:<br />
<math>\varphi:\oc(X\with Y)\quad\stackrel{\sim}{\longrightarrow}\quad \oc X\tens\oc Y</math>.<br />
}}<br />
<br />
The iso <math>\varphi</math> is defined by its trace: <math>\mathrm{Tr}(\varphi) = \{(x_0, (\pi_1(x_0), \pi_2(x_0))),\, x_0\text{ finite clique of } X\with Y\}</math>.<br />
<br />
This isomorphism, which sends an additive structure (the web of a with is obtained by disjoint union) onto a multiplicative one (the web of a tensor is obtained by cartesian product), is the reason why the of course is called an ''exponential''.<br />
<br />
== Dual connectives and neutrals ==<br />
<br />
By linear negation all the constructions defined so far (<math>\with, \tens, \oc</math>) have a dual.<br />
<br />
=== The direct sum ===<br />
<br />
The dual of <math>\with</math> is <math>\plus</math> defined by: <math>X\plus Y = (X\orth\with Y\orth)\orth</math>. An equivalent definition is given by: <math>\web{X\plus Y} = \web{X\with Y} = \{1\}\times \web X \cup \{2\}\times\web Y</math> and <math>(i, a)\coh_{X\plus Y} (j, b)\text{ iff } i = j = 1 \text{ and } a\coh_X b,\text{ or }i = j = 2\text{ and } a\coh_Y b</math>.<br />
<br />
{{Theorem|<br />
Let <math>x'</math> be a clique of <math>X\plus Y</math>; then <math>x'</math> is of the form <math>\{i\}\times x</math> where <math>i = 1\text{ and }x\in X</math>, or <math>i = 2\text{ and }x\in Y</math>.<br />
<br />
Denote by <math>\mathrm{inl}:X\longrightarrow X\plus Y</math> the function defined by <math>\mathrm{inl}(x) = \{1\}\times x</math> and by <math>\mathrm{inr}:Y\longrightarrow X\plus Y</math> the function defined by <math>\mathrm{inr}(y) = \{2\}\times y</math>. Then <math>\mathrm{inl}</math> and <math>\mathrm{inr}</math> are linear.<br />
<br />
If <math>F:X\longrightarrow Z</math> and <math>G:Y\longrightarrow Z</math> are ''linear'' functions then the function <math>H:X\plus Y \longrightarrow Z</math> defined by <math>H(\mathrm{inl}(x)) = F(x)</math> and <math>H(\mathrm{inr}(y)) = G(y)</math> is linear.<br />
}}<br />
<br />
In other terms <math>X\plus Y</math> is the direct sum of <math>X</math> and <math>Y</math>. Note that in the theorem all functions are ''linear''. Things do not work so smoothly with stable functions. Historically it was after noting this defect of coherent semantics w.r.t. the intuitionistic implication that Girard was led to discover linear functions.<br />
<br />
=== The par and the why not ===<br />
<br />
We now come to the most mysterious constructions of coherent semantics: the duals of the tensor and the of course.<br />
<br />
The ''par'' is the dual of the tensor, thus defined by: <math>X\parr Y = (X\orth\tens Y\orth)\orth</math>. From this one can deduce the definition in graph terms: <math>\web{X\parr Y} = \web{X\tens Y} = \web X\times \web Y</math> and <math>(a,b)\scoh_{X\parr Y} (a',b')</math> iff <math>a\scoh_X a'</math> or <math>b\scoh_Y b'</math>. With this definition one sees that we have:<br />
<br />
<math>X\limp Y = X\orth\parr Y</math><br />
<br />
for any coherent spaces <math>X</math> and <math>Y</math>. This equation can be seen as an alternative definition of the par: <math>X\parr Y = X\orth\limp Y</math>.<br />
<br />
Similarly the dual of the of course is called ''why not'' defined by: <math>\wn X = (\oc X\orth)\orth</math>. From this we deduce the definition in the graph sense which is a bit tricky: <math>\web{\wn X}</math> is the set of finite anticliques of <math>X</math>, and given two finite anticliques <math>x</math> and <math>y</math> of <math>X</math> we have <math>x\scoh_{\wn X} y</math> iff there is <math>a\in x</math> and <math>b\in y</math> such that <math>a\scoh_X b</math>.<br />
<br />
Note that both for the par and the why not it is much more convenient to define the strict coherence than the coherence.<br />
<br />
With these two last constructions, the equation between the stable function space, the of course and the linear function space may be written:<br />
<br />
<math>X\imp Y = \wn X\orth\parr Y</math>.<br />
<br />
=== One and bottom ===<br />
<br />
Depending on the context we denote by <math>\one</math> or <math>\bot</math> the coherent space whose web is a singleton and whose coherence relation is the trivial reflexive relation.<br />
<br />
{{Theorem|<br />
<math>\one</math> is neutral for tensor, that is, there is a linear isomorphism <math>\varphi:X\tens\one\ \stackrel{\sim}{\longrightarrow}\ X</math>.<br />
<br />
Similarly <math>\bot</math> is neutral for par.<br />
}}<br />
<br />
=== Zero and top ===<br />
<br />
Depending on the context we denote by <math>\zero</math> or <math>\top</math> the coherent space with empty web.<br />
<br />
{{Theorem|<br />
<math>\zero</math> is neutral for the direct sum <math>\plus</math>, <math>\top</math> is neutral for the cartesian product <math>\with</math>.<br />
}}<br />
<br />
{{Remark|<br />
It is one of the main defects of coherent semantics w.r.t. linear logic that it identifies the neutrals: in coherent semantics <math>\zero = \top</math> and <math>\one = \bot</math>. However there is no known semantics of LL that solves this problem in a satisfactory way.}}<br />
<br />
== After coherent semantics ==<br />
<br />
Coherent semantics was an important milestone in the modern theory of the logic of programs, in particular because it led to the invention of linear logic, and more generally because it established a strong link between logic and linear algebra; this link is nowadays acknowledged by the customary use of [[Categorical semantics|monoidal categories]] in logic. In some sense coherent semantics is a precursor of many later works that explore the linear nature of logic, for example [[geometry of interaction]] which interprets proofs by operators, or [[finiteness semantics]] which interprets formulas as vector spaces and resulted in [[differential linear logic]].<br />
<br />
Much of this work has been motivated by the fact that coherent semantics is not complete as a semantics of programs (technically one says that it is not ''fully abstract''). In order to see this, let us first come back to the origin of the central concept of ''stability'' which, as pointed out above, originated in the study of sequentiality in programs.<br />
<br />
=== Sequentiality ===<br />
<br />
Sequentiality is a property that we will not define here (it would deserve its own article). We rely on the intuition that a function of <math>n</math> arguments is sequential if one can determine which of these arguments is examined first during the computation. Obviously any function implemented in a functional language is sequential; for example the function ''or'' defined à la CAML by:<br />
<br />
<code>or = fun (x, y) -> if x then true else y</code><br />
<br />
examines its argument x first. Note that this may be expressed more abstractly by the property <math>\mathrm{or}(\bot, x) = \bot</math> for any boolean <math>x</math>: the function ''or'' needs its first argument in order to compute anything. On the other hand we have <math>\mathrm{or}(\mathrm{true}, \bot) = \mathrm{true}</math>: in some cases (when the first argument is true) the function doesn't need its second argument at all.<br />
<br />
The typical non sequential function is the ''parallel or'' (that one cannot define in a CAML like language).<br />
<br />
For a while one may have believed that the stability condition on which coherent semantics is built was enough to capture the notion of ''sequentiality'' of programs. A hint was the already mentioned fact that the ''parallel or'' is not stable. This deserves a bit of explanation.<br />
<br />
==== The parallel or is not stable ====<br />
<br />
Let <math>B</math> be the coherent space of booleans, also known as the flat domain of booleans: <math>\web B = \{tt, ff\}</math> where <math>tt</math> and <math>ff</math> are two arbitrary distinct objects (for example one may take <math>tt = 0</math> and <math>ff = 1</math>) and for any <math>b_1, b_2\in \web B</math>, define <math>b_1\coh_B b_2</math> iff <math>b_1 = b_2</math>. Then <math>B</math> has exactly three cliques: the empty clique that we shall denote <math>\bot</math>, the singleton <math>\{tt\}</math> that we shall denote <math>T</math> and the singleton <math>\{ff\}</math> that we shall denote <math>F</math>. These three cliques are ordered by inclusion: <math>\bot \leq T, F</math> (we use <math>\leq</math> for <math>\subset</math> to enforce the idea that coherent spaces are domains).<br />
<br />
Recall the [[#Cartesian product|definition of the with]], and in particular that any clique of <math>B\with B</math> has the form <math>\langle x, y\rangle</math> where <math>x</math> and <math>y</math> are cliques of <math>B</math>. Thus <math>B\with B</math> has 9 cliques: <math>\langle\bot,\bot\rangle,\ \langle\bot, T\rangle,\ \langle\bot, F\rangle,\ \langle T,\bot\rangle,\ \dots</math> that are ordered by the product order: <math>\langle x,y\rangle\leq \langle x',y'\rangle</math> iff <math>x\leq x'</math> and <math>y\leq y'</math>.<br />
<br />
With these notations in mind one may define the parallel or by:<br />
<br />
<math><br />
\begin{array}{rcl}<br />
\mathrm{Por} : B\with B &\longrightarrow& B\\<br />
\langle T,\bot\rangle &\longrightarrow& T\\<br />
\langle \bot,T\rangle &\longrightarrow& T\\<br />
\langle F, F\rangle &\longrightarrow& F<br />
\end{array}<br />
</math><br />
<br />
The function is completely determined if we add the assumption that it is non-decreasing; for example one must have <math>\mathrm{Por}\langle\bot,\bot\rangle = \bot</math> because this value has to be less than both <math>T</math> and <math>F</math> (since <math>\langle\bot,\bot\rangle \leq \langle T,\bot\rangle</math> and <math>\langle\bot,\bot\rangle \leq \langle F,F\rangle</math>), and <math>\bot</math> is the only clique below both <math>T</math> and <math>F</math>.<br />
<br />
The function is not stable because <math>\langle T,\bot\rangle \cap \langle \bot, T\rangle = \langle\bot, \bot\rangle</math>, thus <math>\mathrm{Por}(\langle T,\bot\rangle \cap \langle \bot, T\rangle) = \bot</math> whereas <math>\mathrm{Por}\langle T,\bot\rangle \cap \mathrm{Por}\langle \bot, T\rangle = T\cap T = T</math>.<br />
<br />
Another way to see this is: suppose <math>x</math> and <math>y</math> are two cliques of <math>B</math> such that <math>tt\in \mathrm{Por}\langle x, y\rangle</math>, which means that <math>\mathrm{Por}\langle x, y\rangle = T</math>; according to the [[#Stable functions|characterisation theorem of stable functions]], if <math>\mathrm{Por}</math> were stable then there would be a unique minimum <math>x_0</math> included in <math>x</math>, and a unique minimum <math>y_0</math> included in <math>y</math> such that <math>\mathrm{Por}\langle x_0, y_0\rangle = T</math>. This is not the case because both <math>\langle T,\bot\rangle</math> and <math>\langle \bot,T\rangle</math> are minimal such that their value is <math>T</math>.<br />
<br />
In other terms, knowing that <math>\mathrm{Por}\langle x, y\rangle = T</math> doesn't tell which of <math>x</math> or <math>y</math> is responsible for that, although we know by the definition of <math>\mathrm{Por}</math> that only one of them is. Indeed the <math>\mathrm{Por}</math> function is not representable in sequential programming languages such as (typed) lambda-calculus.<br />
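To make the counter-example concrete, here is a small Python sketch (ours, not part of the original article; the names <code>por</code>, <code>BOT</code>, <code>T</code>, <code>F</code> are assumptions of the sketch). Cliques of <math>B</math> are modelled as frozensets and <math>\mathrm{Por}</math> as the monotone extension of the three clauses above:

```python
# A small model of the flat booleans: the cliques of B are the
# frozensets with at most one of the two points 'tt', 'ff'.
BOT, T, F = frozenset(), frozenset({'tt'}), frozenset({'ff'})

def por(x, y):
    """Monotone extension of Por: <T,_> -> T, <_,T> -> T, <F,F> -> F."""
    if 'tt' in x or 'tt' in y:
        return T
    if 'ff' in x and 'ff' in y:
        return F
    return BOT  # not enough information yet

# The two minimal inputs whose value is T:
a, b = (T, BOT), (BOT, T)
meet = (a[0] & b[0], a[1] & b[1])   # componentwise intersection: <bot, bot>
print(por(*meet))                   # frozenset(), i.e. bot
print(por(*a) & por(*b))            # frozenset({'tt'}), i.e. T
```

The last two lines exhibit the failure of the stability equation <math>F(x\cap y) = F(x)\cap F(y)</math> on the compatible pair <math>\langle T,\bot\rangle</math>, <math>\langle\bot,T\rangle</math>.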
<br />
So a first natural idea would be that stability characterises sequentiality; but...<br />
<br />
==== The Gustave function is stable ====<br />
<br />
The Gustave function, so-called after an old joke, was found by Gérard Berry as an example of a function that is stable but non sequential. It is defined by:<br />
<br />
<math><br />
\begin{array}{rcl}<br />
G : B\with B\with B &\longrightarrow& B\\<br />
\langle T, F, \bot\rangle &\longrightarrow& T\\<br />
\langle \bot, T, F\rangle &\longrightarrow& T\\<br />
\langle F, \bot, T\rangle &\longrightarrow& T\\<br />
\langle x, y, z\rangle &\longrightarrow& F<br />
\end{array}<br />
</math><br />
<br />
The last clause applies to all cliques <math>x</math>, <math>y</math> and <math>z</math> such that <math>\langle x, y ,z\rangle</math> is incompatible with each of the three cliques <math>\langle T, F, \bot\rangle</math>, <math>\langle \bot, T, F\rangle</math> and <math>\langle F, \bot, T\rangle</math>, that is, such that its union with any of these three cliques is not a clique in <math>B\with B\with B</math>. We shall denote these three cliques by <math>x_1</math>, <math>x_2</math> and <math>x_3</math>.<br />
<br />
We furthermore assume that the Gustave function is non-decreasing, so that we get <math>G\langle\bot,\bot,\bot\rangle = \bot</math>.<br />
<br />
We note that <math>x_1</math>, <math>x_2</math> and <math>x_3</math> are pairwise incompatible. From this we can deduce that the Gustave function is stable: typically if <math>G\langle x,y,z\rangle = T</math> then exactly one of the <math>x_i</math>s is contained in <math>\langle x, y, z\rangle</math>.<br />
<br />
However it is not sequential because there is no way to determine which of its three arguments is examined first: it is not the first one otherwise we would have <math>G\langle\bot, T, F\rangle = \bot</math> and similarly it is not the second one nor the third one.<br />
<br />
In other terms there is no way to implement the Gustave function by a lambda-term (or in any sequential programming language). Thus coherent semantics is not complete w.r.t. lambda-calculus.<br />
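The Gustave function can also be run as a small Python sketch (again ours, not from the original article; the names <code>gustave</code> and <code>incompatible</code> are assumptions). It implements the three minimal clauses and the incompatibility clause literally:

```python
# Cliques of the flat booleans B, as frozensets of points.
BOT, T, F = frozenset(), frozenset({'tt'}), frozenset({'ff'})

def incompatible(v, w):
    """Two cliques of B are incompatible when their union is not a clique."""
    return len(v | w) > 1

# The three minimal triples x1, x2, x3 of the definition.
X1, X2, X3 = (T, F, BOT), (BOT, T, F), (F, BOT, T)

def gustave(x, y, z):
    for (a, b, c) in (X1, X2, X3):
        if a <= x and b <= y and c <= z:   # some minimal triple is reached
            return T
    if all(incompatible(x, a) or incompatible(y, b) or incompatible(z, c)
           for (a, b, c) in (X1, X2, X3)):
        return F                            # incompatible with all three
    return BOT                              # monotone extension: undefined

# Stable: the three triples are pairwise incompatible, so each T-output
# has a unique minimal witness.  Non-sequential: in each T-case a
# different coordinate may stay bottom.
print(gustave(T, F, BOT), gustave(BOT, T, F), gustave(F, BOT, T))
```

Each of the three printed calls returns <math>T</math> while leaving a different argument at <math>\bot</math>, which is exactly why no argument can be "the one examined first".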
<br />
The search for a model of sequentiality motivated a lot of work, ''e.g.'', the ''sequential algorithms'' of Gérard Berry and Pierre-Louis Curien in the early eighties, which were more recently reformulated as a kind of [[Game semantics|game model]], and the theory of ''hypercoherent spaces'' by Antonio Bucciarelli and Thomas Ehrhard.<br />
<br />
=== Multiplicative neutrals and the mix rule ===<br />
<br />
Coherent semantics is slightly degenerate w.r.t. linear logic because it identifies the multiplicative neutrals (it also identifies the additive neutrals, but that is yet another problem): the coherent spaces <math>\one</math> and <math>\bot</math> are equal.<br />
<br />
The first consequence of the identity <math>\one = \bot</math> is that the formula <math>\one\limp\bot</math> becomes provable, and so does the formula <math>\bot</math>. Note that this doesn't entail (as it would in classical or intuitionistic logic) that linear logic is inconsistent, because the principle <math>\bot\limp A</math> for an arbitrary formula <math>A</math> is still not provable.<br />
<br />
The equality <math>\one = \bot</math> also has as a consequence the fact that <math>\bot\limp\one</math> (or equivalently the formula <math>\one\parr\one</math>) is provable. This principle is also known as the [[Mix|mix rule]]:<br />
<br />
<math><br />
\AxRule{\vdash \Gamma}<br />
\AxRule{\vdash \Delta}<br />
\LabelRule{\rulename{mix}}<br />
\BinRule{\vdash \Gamma,\Delta}<br />
\DisplayProof<br />
</math><br />
<br />
as it can be used to show that this rule is admissible:<br />
<br />
<math><br />
\AxRule{\vdash\Gamma}<br />
\LabelRule{\bot_R}<br />
\UnaRule{\vdash\Gamma, \bot}<br />
\AxRule{\vdash\Delta}<br />
\LabelRule{\bot_R}<br />
\UnaRule{\vdash\Delta, \bot}<br />
\BinRule{\vdash \Gamma, \Delta, \bot\tens\bot}<br />
\NulRule{\vdash \one\parr\one}<br />
\LabelRule{\rulename{cut}}<br />
\BinRule{\vdash\Gamma,\Delta}<br />
\DisplayProof<br />
</math><br />
<br />
Neither of the two principles <math>\one\limp\bot</math> and <math>\bot\limp\one</math> is valid in linear logic. To correct this one could extend the syntax of linear logic by adding the mix rule. This is not very satisfactory, as the mix rule violates some principles of [[Polarized linear logic]], typically the fact that a sequent of the form <math>\vdash P_1, P_2</math>, where <math>P_1</math> and <math>P_2</math> are positive, is never provable.<br />
<br />
On the other hand the mix rule is valid in coherent semantics, so one could try to find some other model that invalidates it. For example Girard's coherent Banach spaces were an attempt to address this issue.<br />
<br />
== References ==<br />
<references /></div>
<div>
<br />
=== Coherent spaces ===<br />
<br />
A coherent space <math>X</math> is a collection of subsets of a set <math>\web X</math> satisfying some conditions that will be detailed shortly. The elements of <math>X</math> are called the ''cliques'' of <math>X</math> (for reasons that will be made clear in a few lines). The set <math>\web X</math> is called the ''web'' of <math>X</math> and its elements are called the ''points'' of <math>X</math>; thus a clique is a set of points. Note that the terminology is a bit ambiguous as the points of <math>X</math> are the elements of the web of <math>X</math>, not the elements of <math>X</math>.<br />
<br />
The definitions below give three equivalent conditions that have to be satisfied by the cliques of a coherent space.<br />
<br />
==== As domains ====<br />
<br />
The cliques of <math>X</math> have to satisfy:<br />
* subset closure: if <math>x\subset y\in X</math> then <math>x\in X</math>,<br />
* singletons: <math>\{a\}\in X</math> for <math>a\in\web X</math>.<br />
* binary compatibility: if <math>A</math> is a family of pairwise compatible cliques of <math>X</math>, that is if <math>x\cup y\in X</math> for any <math>x,y\in A</math>, then <math>\bigcup A\in X</math>.<br />
<br />
A coherent space is thus ordered by inclusion; one easily checks that it is a domain. In particular finite cliques of <math>X</math> correspond to compact elements.<br />
<br />
==== As graphs ====<br />
<br />
There is a reflexive and symmetric relation <math>\coh_X</math> on <math>\web X</math> (the ''coherence relation'') such that any subset <math>x</math> of <math>\web X</math> is a clique of <math>X</math> iff <math>\forall a,b\in x,\, a\coh_X b</math>. In other terms <math>X</math> is the set of complete subgraphs of the simple undirected graph of the <math>\coh_X</math> relation; this is the reason why elements of <math>X</math> are called ''cliques''.<br />
<br />
The ''strict coherence relation'' <math>\scoh_X</math> on <math>\web X</math> is defined by: <math>a\scoh_X b</math> iff <math>a\neq b</math> and <math>a\coh_X b</math>.<br />
<br />
A coherent space in the domain sense is seen to be a coherent space in the graph sense by setting <math>a\coh_X b</math> iff <math>\{a,b\}\in X</math>; conversely one can check that cliques in the graph sense are subset closed and satisfy the binary compatibility condition.<br />
<br />
A coherent space is completely determined by its web and its coherence relation, or equivalently by its web and its strict coherence.<br />
<br />
==== As cliqued spaces ====<br />
<br />
{{Definition|title=Duality|<br />
Let <math>x, y\subseteq \web{X}</math> be two sets. We will say that they are dual, written <math>x\perp y</math> if their intersection contains at most one element: <math>\mathrm{Card}(x\cap y)\leq 1</math>. As usual, it defines an [[orthogonality relation]] over <math>\powerset{\web{X}}</math>.}}<br />
<br />
The last way to express the conditions on the cliques of a coherent space <math>X</math> is simply to say that we must have <math>X\biorth = X</math>.<br />
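The condition <math>X\biorth = X</math> can be checked mechanically on small examples. The following Python sketch (ours; the names <code>dual</code>, <code>subsets</code> and the example space are assumptions) computes the dual of a family of sets under the duality <math>\mathrm{Card}(x\cap y)\leq 1</math>:

```python
from itertools import combinations

web = [0, 1, 2]

def subsets(s):
    """All subsets of the web, as frozensets."""
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def dual(family):
    """y is dual to every x in the family when |x & y| <= 1."""
    return {y for y in subsets(web)
            if all(len(x & y) <= 1 for x in family)}

# Cliques of the graph with a single coherence edge 0 ~ 1:
X = {frozenset(), frozenset({0}), frozenset({1}), frozenset({2}),
     frozenset({0, 1})}
print(dual(dual(X)) == X)   # True: X is a cliqued space
```

Here <code>dual(X)</code> is exactly the set of anticliques of the example graph, and the bidual recovers <code>X</code>, as the equivalence proof below explains in general.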
<br />
==== Equivalence of definitions ====<br />
<br />
Let <math>X</math> be a cliqued space and define a relation on <math>\web X</math> by setting <math>a\coh_X b</math> iff there is <math>x\in X</math> such that <math>a, b\in x</math>. This relation is obviously symmetric; it is also reflexive because all singletons belong to <math>X</math>: if <math>a\in \web X</math> then <math>\{a\}</math> is dual to any element of <math>X\orth</math> (actually <math>\{a\}</math> is dual to any subset of <math>\web X</math>), thus <math>\{a\}</math> is in <math>X\biorth</math>, thus in <math>X</math>.<br />
<br />
Let <math>a\coh_X b</math>. Then <math>\{a,b\}\in X</math>; indeed there is an <math>x\in X</math> such that <math>a, b\in x</math>. This <math>x</math> is dual to any <math>y\in X\orth</math>, that is, it meets any <math>y\in X\orth</math> in at most one point. Since <math>\{a,b\}\subset x</math> this is also true of <math>\{a,b\}</math>, so that <math>\{a,b\}</math> is in <math>X\biorth</math>, thus in <math>X</math>.<br />
<br />
Now let <math>x</math> be a clique for <math>\coh_X</math> and <math>y</math> be an element of <math>X\orth</math>. Suppose <math>a, b\in x\cap y</math>, then since <math>a</math> and <math>b</math> are coherent (by hypothesis on <math>x</math>) we have <math>\{a,b\}\in X</math> and since <math>y\in X\orth</math> we must have that <math>\{a,b\}</math> and <math>y</math> meet in at most one point. Thus <math>a = b</math> and we have shown that <math>x</math> and <math>y</math> are dual. Since <math>y</math> was arbitrary this means that <math>x</math> is in <math>X\biorth</math>, thus in <math>X</math>. Finally we get that any set of pairwise coherent points of <math>X</math> is in <math>X</math>. Conversely given <math>x\in X</math> its points are obviously pairwise coherent so eventually we get that <math>X</math> is a coherent space in the graph sense.<br />
<br />
Conversely given a coherent space <math>X</math> in the graph sense, one can check that it is a cliqued space. Call ''anticlique'' a set <math>y\subset \web X</math> of pairwise incoherent points: for all <math>a, b</math> in <math>y</math>, if <math>a\coh_X b</math> then <math>a=b</math>. Any anticlique intersects any clique in at most one point: let <math>x</math> be a clique and <math>y</math> be an anticlique, then if <math>a,b\in x\cap y</math>, since <math>a, b\in x</math> we have <math>a\coh_X b</math> and since <math>y</math> is an anticlique we have <math>a = b</math>. Thus <math>y\in X\orth</math>. Conversely given any <math>y\in X\orth</math> and <math>a, b\in y</math>, suppose <math>a\coh_X b</math>. Then <math>\{a,b\}\in X</math>, thus <math>\{a,b\}\perp y</math> which entails that <math>\{a, b\}</math> has at most one point so that <math>a = b</math>: we have shown that any two elements of <math>y</math> are incoherent.<br />
<br />
Thus the collection of anticliques of <math>X</math> is the dual <math>X\orth</math> of <math>X</math>. Note that the incoherence relation defined above is reflexive and symmetric, so that <math>X\orth</math> is a coherent space in the graph sense. Thus we can do for <math>X\orth</math> exactly what we've just done for <math>X</math> and consider the anti-anticliques, that is the anticliques for the incoherence relation, which are the cliques for the in-incoherence relation. It is not difficult to see that this in-incoherence relation is just the coherence relation we started with; we thus obtain that <math>X\biorth = X</math>, so that <math>X</math> is a cliqued space.<br />
<br />
=== Stable functions ===<br />
<br />
{{Definition|title=Stable function|<br />
Let <math>X</math> and <math>Y</math> be two coherent spaces. A function <math>F:X\longrightarrow Y</math> is ''stable'' if it satisfies:<br />
* it is non decreasing: for any <math>x,y\in X</math> if <math>x\subset y</math> then <math>F(x)\subset F(y)</math>;<br />
* it is continuous (in the Scott sense): if <math>A</math> is a directed family of cliques of <math>X</math>, that is if for any <math>x,y\in A</math> there is a <math>z\in A</math> such that <math>x\cup y\subset z</math>, then <math>\bigcup_{x\in A}F(x) = F(\bigcup A)</math>;<br />
* it satisfies the stability condition: if <math>x,y\in X</math> are compatible, that is if <math>x\cup y\in X</math>, then <math>F(x\cap y) = F(x)\cap F(y)</math>.<br />
}}<br />
<br />
This definition is admittedly not very tractable. An equivalent and more useful characterisation of stable functions is given by the following theorem.<br />
<br />
{{Theorem|<br />
Let <math>F:X\longrightarrow Y</math> be a non-decreasing function from the coherent space <math>X</math> to the coherent space <math>Y</math>. The function <math>F</math> is stable iff it satisfies: for any <math>x\in X</math>, <math>b\in\web Y</math>, if <math>b\in F(x)</math> then there is a finite clique <math>x_0\subset x</math> such that:<br />
* <math>b\in F(x_0)</math>,<br />
* for any <math>y\subset x</math> if <math>b\in F(y)</math> then <math>x_0\subset y</math> (<math>x_0</math> is ''the'' minimum sub-clique of <math>x</math> such that <math>b\in F(x_0)</math>). <br />
}}<br />
<br />
Note that the stability condition doesn't depend on the coherent space structure and can be expressed more generally for continuous functions on domains. However, as mentioned in the introduction, the restriction to coherent spaces allows one to endow the set of stable functions from <math>X</math> to <math>Y</math> with a structure of coherent space.<br />
<br />
{{Definition|title=The space of stable functions|<br />
Let <math>X</math> and <math>Y</math> be coherent spaces. We denote by <math>X_{\mathrm{fin}}</math> the set of ''finite'' cliques of <math>X</math>. The function space <math>X\imp Y</math> is defined by:<br />
* <math>\web{X\imp Y} = X_{\mathrm{fin}}\times \web Y</math>,<br />
* <math>(x_0, a)\coh_{X\imp Y}(y_0, b)</math> iff <math>\begin{cases}\text{if } x_0\cup y_0\in X\text{ then } a\coh_Y b,\\<br />
\text{if } x_0\cup y_0\in X\text{ and } a = b\text{ then } x_0 = y_0\end{cases}</math>.<br />
}}<br />
<br />
One could equivalently define the strict coherence relation on <math>X\imp Y</math> by: <math>(x_0,a)\scoh_{X\imp Y}(y_0, b)</math> iff when <math>x_0\cup y_0\in X</math> then <math>a\scoh_Y b</math> (equivalently <math>x_0\cup y_0\not\in X</math> or <math>a\scoh_Y b</math>).<br />
<br />
{{Definition|title=Trace of a stable function|<br />
Let <math>F:X\longrightarrow Y</math> be a function. The ''trace'' of <math>F</math> is the set:<br />
<br />
<math>\mathrm{Tr}(F) = \{(x_0, b), x_0\text{ minimal such that } b\in F(x_0)\}</math>.<br />
}}<br />
<br />
{{theorem|<br />
<math>F</math> is stable iff <math>\mathrm{Tr}(F)</math> is a clique of the function space <math>X\imp Y</math><br />
}}<br />
<br />
In particular the continuity of <math>F</math> entails that if <math>x_0</math> is minimal such that <math>b\in F(x_0)</math>, then <math>x_0</math> is finite.<br />
<br />
{{Definition|title=The evaluation function|<br />
Let <math>f</math> be a clique in <math>X\imp Y</math>. We define a function <math>\mathrm{Fun}\,f:X\longrightarrow Y</math> by: <math>\mathrm{Fun}\,f(x) = \{b\in \web Y,\text{ there is }x_0\subset x\text{ such that }(x_0, b)\in f\}</math>.<br />
}}<br />
<br />
{{Theorem|title=Closure|<br />
If <math>f</math> is a clique of the function space <math>X\imp Y</math> then we have <math>\mathrm{Tr}(\mathrm{Fun}\,f) = f</math>. Conversely if <math>F:X\longrightarrow Y</math> is a stable function then we have <math>F = \mathrm{Fun}\,\mathrm{Tr}(F)</math>.<br />
}}<br />
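The closure theorem can be observed on a concrete stable function. The following Python sketch (ours; the function <code>andL</code>, a left-sequential conjunction on <math>B\with B</math>, and the helper names are assumptions) computes the trace by searching for minimal sub-cliques, then rebuilds the function with <math>\mathrm{Fun}</math>:

```python
from itertools import combinations

# Web of B & B; a clique holds at most one point per component.
WEB = [(1, 'tt'), (1, 'ff'), (2, 'tt'), (2, 'ff')]

def cliques():
    return [frozenset(c) for r in range(len(WEB) + 1)
            for c in combinations(WEB, r)
            if all(a[1] == b[1] for a in c for b in c if a[0] == b[0])]

def andL(x):
    """Left-sequential conjunction B & B -> B, a stable function."""
    out = set()
    if (1, 'tt') in x and (2, 'tt') in x:
        out.add('tt')
    if (1, 'ff') in x or ((1, 'tt') in x and (2, 'ff') in x):
        out.add('ff')
    return frozenset(out)

def trace(F):
    """Pairs (x0, b) with x0 minimal such that b is in F(x0)."""
    return {(x0, b) for x0 in cliques() for b in F(x0)
            if all(not (y < x0 and b in F(y)) for y in cliques())}

def fun(f):
    return lambda x: frozenset(b for (x0, b) in f if x0 <= x)

F2 = fun(trace(andL))
print(all(F2(x) == andL(x) for x in cliques()))   # True: F = Fun(Tr(F))
```

Note that the parallel variant, with <code>(1, 'ff') in x or (2, 'ff') in x</code> as the second clause, would not be stable, for the same reason as the parallel or: the output <math>ff</math> would have two minimal witnesses.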
<br />
=== Cartesian product ===<br />
<br />
{{Definition|title=Cartesian product|<br />
Let <math>X_1</math> and <math>X_2</math> be two coherent spaces. We define the coherent space <math>X_1\with X_2</math> (read <math>X_1</math> ''with'' <math>X_2</math>):<br />
* the web is the disjoint union of the webs: <math>\web{X_1\with X_2} = \{1\}\times\web{X_1}\cup \{2\}\times\web{X_2}</math>;<br />
* the coherence relation is the series composition of the relations on <math>X_1</math> and <math>X_2</math>: <math>(i, a)\coh_{X_1\with X_2}(j, b)</math> iff either <math>i\neq j</math> or <math>i=j</math> and <math>a\coh_{X_i} b</math>.<br />
}}<br />
<br />
This definition is just the way to put a coherent space structure on the cartesian product. Indeed one easily shows the<br />
<br />
{{Theorem|<br />
Given cliques <math>x_1</math> and <math>x_2</math> in <math>X_1</math> and <math>X_2</math>, we define the subset <math>\langle x_1, x_2\rangle</math> of <math>\web{X_1\with X_2}</math> by: <math>\langle x_1, x_2\rangle = \{1\}\times x_1\cup \{2\}\times x_2</math>. Then <math>\langle x_1, x_2\rangle</math> is a clique in <math>X_1\with X_2</math>.<br />
<br />
Conversely, given a clique <math>x\in X_1\with X_2</math>, for <math>i=1,2</math> we define <math>\pi_i(x) = \{a\in \web{X_i}, (i, a)\in x\}</math>. Then <math>\pi_i(x)</math> is a clique in <math>X_i</math> and the function <math>\pi_i:X_1\with X_2\longrightarrow X_i</math> is stable.<br />
<br />
Furthermore these two operations are inverse of each other: <math>\pi_i(\langle x_1, x_2\rangle) = x_i</math> and <math>\langle\pi_1(x), \pi_2(x)\rangle = x</math>. In particular any clique in <math>X_1\with X_2</math> is of the form <math>\langle x_1, x_2\rangle</math>.<br />
}}<br />
<br />
Altogether the results above (and a few more that we shall leave to the reader) allow one to obtain:<br />
<br />
{{Theorem|<br />
The category of coherent spaces and stable functions is cartesian closed.<br />
}}<br />
<br />
In particular this means that if we define <math>\mathrm{Eval}:(X\imp Y)\with X\longrightarrow Y</math> by: <math>\mathrm{Eval}(\langle f, x\rangle) = \mathrm{Fun}\,f(x)</math> then <math>\mathrm{Eval}</math> is stable.<br />
<br />
== The monoidal structure of coherent semantics ==<br />
<br />
=== Linear functions ===<br />
<br />
{{Definition|title=Linear function|<br />
A function <math>F:X\longrightarrow Y</math> is ''linear'' if it is stable and furthermore satisfies: for any family <math>A</math> of pairwise compatible cliques of <math>X</math>, that is, such that for any <math>x, y\in A</math>, <math>x\cup y\in X</math>, we have <math>\bigcup_{x\in A}F(x) = F(\bigcup A)</math>.<br />
}}<br />
<br />
In particular if we take <math>A</math> to be the empty family, then we have <math>F(\emptyset) = \emptyset</math>.<br />
<br />
The condition for linearity is quite similar to the condition for Scott continuity, except that we dropped the constraint that <math>A</math> is ''directed''. Linearity is therefore much stronger than stability: most stable functions are not linear.<br />
<br />
However most of the functions seen so far are linear. Typically the function <math>\pi_i:X_1\with X_2\longrightarrow X_i</math> is linear, from which one may deduce that the ''with'' construction is also a cartesian product in the category of coherent spaces and linear functions.<br />
<br />
As with stable functions we have an equivalent and much more tractable characterisation of linear functions:<br />
<br />
{{Theorem|<br />
Let <math>F:X\longrightarrow Y</math> be a continuous function. Then <math>F</math> is linear iff it satisfies: for any clique <math>x\in X</math> and any <math>b\in F(x)</math> there is a unique <math>a\in x</math> such that <math>b\in F(\{a\})</math>.<br />
}}<br />
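This characterisation is directly executable. The Python sketch below (ours; <code>proj1</code>, <code>andL</code> and <code>is_linear</code> are assumed names) tests, for a function on the cliques of <math>B\with B</math>, whether every output point has exactly one input point as witness:

```python
from itertools import combinations

WEB = [(1, 'tt'), (1, 'ff'), (2, 'tt'), (2, 'ff')]

def cliques():
    return [frozenset(c) for r in range(len(WEB) + 1)
            for c in combinations(WEB, r)
            if all(a[1] == b[1] for a in c for b in c if a[0] == b[0])]

def proj1(x):                      # the projection pi_1: linear
    return frozenset(b for (i, b) in x if i == 1)

def andL(x):                       # left-sequential and: stable, not linear
    out = set()
    if (1, 'tt') in x and (2, 'tt') in x:
        out.add('tt')
    if (1, 'ff') in x or ((1, 'tt') in x and (2, 'ff') in x):
        out.add('ff')
    return frozenset(out)

def is_linear(F):
    """Each b in F(x) must come from exactly one point a of x."""
    return all(sum(b in F(frozenset({a})) for a in x) == 1
               for x in cliques() for b in F(x))

print(is_linear(proj1), is_linear(andL))   # True False
```

The conjunction fails the test on the input <math>\langle T, T\rangle</math>: the output point <math>tt</math> needs both input points at once, so no single point of the input witnesses it.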
<br />
Just as the characterisation theorem for stable functions allowed us to build the coherent space of stable functions, this theorem will help us to endow the set of linear maps with a structure of coherent space.<br />
<br />
{{Definition|title=The linear functions space|<br />
Let <math>X</math> and <math>Y</math> be coherent spaces. The ''linear function space'' <math>X\limp Y</math> is defined by:<br />
* <math>\web{X\limp Y} = \web X\times \web Y</math>,<br />
* <math>(a,b)\coh_{X\limp Y}(a', b')</math> iff <math>\begin{cases}\text{if }a\coh_X a'\text{ then } b\coh_Y b'\\<br />
\text{if }a\coh_X a' \text{ and }b=b'\text{ then }a=a'\end{cases}</math><br />
}}<br />
<br />
Equivalently one could define the strict coherence to be: <math>(a,b)\scoh_{X\limp Y}(a',b')</math> iff <math>a\scoh_X a'</math> entails <math>b\scoh_Y b'</math>.<br />
<br />
{{Definition|title=Linear trace|<br />
Let <math>F:X\longrightarrow Y</math> be a function. The ''linear trace'' of <math>F</math> denoted as <math>\mathrm{LinTr}(F)</math> is the set:<br />
<math>\mathrm{LinTr}(F) = \{(a, b)\in\web X\times\web Y</math> such that <math>b\in F(\{a\})\}</math>.<br />
}}<br />
<br />
{{Theorem|<br />
If <math>F</math> is linear then <math>\mathrm{LinTr}(F)</math> is a clique of <math>X\limp Y</math>.<br />
}}<br />
<br />
{{Definition|title=Evaluation of linear function|<br />
Let <math>f</math> be a clique of <math>X\limp Y</math>. We define the function <math>\mathrm{LinFun}\,f:X\longrightarrow Y</math> by: <math>\mathrm{LinFun}\,f(x) = \{b\in\web Y</math> such that there is an <math>a\in x</math> satisfying <math>(a,b)\in f\}</math>.<br />
}}<br />
<br />
{{Theorem|title=Linear closure|<br />
Let <math>f</math> be a clique in <math>X\limp Y</math>. Then we have <math>\mathrm{LinTr}(\mathrm{LinFun}\, f) = f</math>. Conversely if <math>F:X\longrightarrow Y</math> is linear then we have <math>F = \mathrm{LinFun}\,\mathrm{LinTr}(F)</math>.<br />
}}<br />
<br />
It remains to define a tensor product and we will get that the category of coherent spaces with linear functions is symmetric monoidal (it is actually *-autonomous).<br />
<br />
=== Tensor product ===<br />
<br />
{{Definition|title=Tensor product|<br />
Let <math>X</math> and <math>Y</math> be coherent spaces. Their tensor product <math>X\tens Y</math> is defined by: <math>\web{X\tens Y} = \web X\times\web Y</math> and <math>(a,b)\coh_{X\tens Y}(a',b')</math> iff <math>a\coh_X a'</math> and <math>b\coh_Y b'</math>.<br />
}}<br />
<br />
{{Theorem|<br />
The category of coherent spaces with linear maps and tensor product is [[Categorical semantics#Modeling IMLL|symmetric monoidal closed]].<br />
}}<br />
<br />
The closedness is a consequence of the existence of the linear isomorphism:<br />
<math>\varphi:X\tens Y\limp Z\ \stackrel{\sim}{\longrightarrow}\ X\limp(Y\limp Z)</math><br />
<br />
that is defined by its linear trace: <math>\mathrm{LinTr}(\varphi) = \{(((a, b), c), (a, (b, c))),\, a\in\web X,\, b\in \web Y,\, c\in\web Z\}</math>.<br />
<br />
=== Linear negation ===<br />
<br />
{{Definition|title=Linear negation|<br />
Let <math>X</math> be a coherent space. We define the ''incoherence relation'' on <math>\web X</math> by: <math>a\incoh_X b</math> iff <math>a\coh_X b</math> entails <math>a=b</math>. The incoherence relation is reflexive and symmetric; we call ''dual'' or ''linear negation'' of <math>X</math> the associated coherent space denoted <math>X\orth</math>, thus defined by: <math>\web{X\orth} = \web X</math> and <math>a\coh_{X\orth} b</math> iff <math>a\incoh_X b</math>.<br />
}}<br />
<br />
The cliques of <math>X\orth</math> are called the ''anticliques'' of <math>X</math>. As seen in the section on cliqued spaces we have <math>X\biorth=X</math>.<br />
<br />
{{Theorem|<br />
The category of coherent spaces with linear maps, tensor product and linear negation is *-autonomous.<br />
}}<br />
<br />
This is in particular consequence of the existence of the isomorphism:<br />
<math>\varphi:X\limp Y\ \stackrel{\sim}{\longrightarrow}\ Y\orth\limp X\orth</math><br />
<br />
defined by its linear trace: <math>\mathrm{LinTr}(\varphi) = \{((a, b), (b, a)),\, a\in\web X,\, b\in\web Y\}</math>.<br />
<br />
== Exponentials ==<br />
<br />
In linear algebra, bilinear maps may be factorized through the tensor product. Similarly there is a coherent space <math>\oc X</math> that allows one to factorize stable functions through linear functions.<br />
<br />
{{Definition|title=Of course|<br />
Let <math>X</math> be a coherent space; recall that <math>X_{\mathrm{fin}}</math> denotes the set of finite cliques of <math>X</math>. We define the space <math>\oc X</math> (read ''of course <math>X</math>'') by: <math>\web{\oc X} = X_{\mathrm{fin}}</math> and <math>x_0\coh_{\oc X}y_0</math> iff <math>x_0\cup y_0</math> is a clique of <math>X</math>.<br />
}}<br />
<br />
Thus a clique of <math>\oc X</math> is a set of finite cliques of <math>X</math> the union of which is a clique of <math>X</math>.<br />
<br />
{{Theorem|<br />
Let <math>X</math> be a coherent space. Denote by <math>\beta:X\longrightarrow \oc X</math> the stable function whose trace is: <math>\mathrm{Tr}(\beta) = \{(x_0, x_0),\, x_0\in X_{\mathrm{fin}}\}</math>. Then for any coherent space <math>Y</math> and any stable function <math>F: X\longrightarrow Y</math> there is a unique ''linear'' function <math>\bar F:\oc X\longrightarrow Y</math> such that <math>F = \bar F\circ \beta</math>.<br />
<br />
Furthermore we have <math>X\imp Y = \oc X\limp Y</math>.<br />
}}<br />
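The factorisation can be replayed on a finite example. The Python sketch below (ours; <code>andL</code>, <code>beta</code> and <code>F_bar</code> are assumed names) takes the stable, non-linear conjunction on <math>B\with B</math>, sends each input through <math>\beta</math> (a clique <math>x</math> goes to the set of its finite sub-cliques, a clique of <math>\oc X</math>), and applies the linear map whose linear trace is the trace of the stable function:

```python
from itertools import combinations

WEB = [(1, 'tt'), (1, 'ff'), (2, 'tt'), (2, 'ff')]

def cliques():
    return [frozenset(c) for r in range(len(WEB) + 1)
            for c in combinations(WEB, r)
            if all(a[1] == b[1] for a in c for b in c if a[0] == b[0])]

def andL(x):                                   # stable, not linear
    out = set()
    if (1, 'tt') in x and (2, 'tt') in x:
        out.add('tt')
    if (1, 'ff') in x or ((1, 'tt') in x and (2, 'ff') in x):
        out.add('ff')
    return frozenset(out)

# Tr(F): pairs (x0, b) with x0 minimal such that b is in F(x0).
TR = {(x0, b) for x0 in cliques() for b in andL(x0)
      if all(not (y < x0 and b in andL(y)) for y in cliques())}

def beta(x):                                   # the stable map X -> !X
    return frozenset(x0 for x0 in cliques() if x0 <= x)

def F_bar(xi):                                 # linear map !X -> Y
    return frozenset(b for x0 in xi for (y0, b) in TR if y0 == x0)

print(all(F_bar(beta(x)) == andL(x) for x in cliques()))   # True
```

<code>F_bar</code> is linear on <math>\oc X</math>: each output point is witnessed by the single point <math>x_0</math> of the input clique of <math>\oc X</math> that carries it, even though <code>andL</code> itself is not linear.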
<br />
{{Theorem|title=The exponential isomorphism|<br />
Let <math>X</math> and <math>Y</math> be two coherent spaces. Then there is a linear isomorphism:<br />
<math>\varphi:\oc(X\with Y)\quad\stackrel{\sim}{\longrightarrow}\quad \oc X\tens\oc Y</math>.<br />
}}<br />
<br />
The iso <math>\varphi</math> is defined by its linear trace: <math>\mathrm{LinTr}(\varphi) = \{(x_0, (\pi_1(x_0), \pi_2(x_0))),\, x_0\text{ a finite clique of } X\with Y\}</math>. <br />
<br />
This isomorphism, which sends an additive structure (the web of a with is obtained by disjoint union) onto a multiplicative one (the web of a tensor is obtained by cartesian product), is the reason why the of course is called an ''exponential''.<br />
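At the level of webs the isomorphism is just a bijection between finite cliques of <math>X\with Y</math> and pairs of finite cliques. The Python sketch below (ours; the names are assumptions) checks this for <math>X = Y = B</math>, where both webs have <math>3\times 3 = 9</math> points:

```python
from itertools import combinations

# Points of !B: the finite cliques of the flat booleans.
def b_cliques():
    return [frozenset(), frozenset({'tt'}), frozenset({'ff'})]

# Points of !(B & B): the finite cliques of B & B.
WEB = [(1, 'tt'), (1, 'ff'), (2, 'tt'), (2, 'ff')]
with_cliques = [frozenset(c) for r in range(len(WEB) + 1)
                for c in combinations(WEB, r)
                if all(a[1] == b[1] for a in c for b in c if a[0] == b[0])]

def pi(i, x0):
    """Projection of a clique of B & B onto its i-th component."""
    return frozenset(a for (j, a) in x0 if j == i)

# phi sends a finite clique of B & B to a pair of finite cliques of B:
images = {(pi(1, x0), pi(2, x0)) for x0 in with_cliques}
pairs = {(x, y) for x in b_cliques() for y in b_cliques()}
print(images == pairs, len(with_cliques) == len(pairs))   # True True
```

The bijection of webs is where "additive becomes multiplicative": the 3 + 3 points of <math>\web{B\with B}</math> (well, its cliques) are sent to the 3 × 3 points of <math>\web{\oc B\tens\oc B}</math>.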
<br />
== Dual connectives and neutrals ==<br />
<br />
By linear negation all the constructions defined so far (<math>\with, \tens, \oc</math>) have a dual.<br />
<br />
=== The direct sum ===<br />
<br />
The dual of <math>\with</math> is <math>\plus</math> defined by: <math>X\plus Y = (X\orth\with Y\orth)\orth</math>. An equivalent definition is given by: <math>\web{X\plus Y} = \web{X\with Y} = \{1\}\times \web X \cup \{2\}\times\web Y</math> and <math>(i, a)\coh_{X\plus Y} (j, b)\text{ iff } i = j = 1 \text{ and } a\coh_X b,\text{ or }i = j = 2\text{ and } a\coh_Y b</math>.<br />
<br />
{{Theorem|<br />
Let <math>x'</math> be a clique of <math>X\plus Y</math>; then <math>x'</math> is of the form <math>\{i\}\times x</math> where <math>i = 1\text{ and }x\in X</math>, or <math>i = 2\text{ and }x\in Y</math>.<br />
<br />
Denote <math>\mathrm{inl}:X\longrightarrow X\plus Y</math> the function defined by <math>\mathrm{inl}(x) = \{1\}\times x</math> and by <math>\mathrm{inr}:Y\longrightarrow X\plus Y</math> the function defined by <math>\mathrm{inr}(x) = \{2\}\times x</math>. Then <math>\mathrm{inl}</math> and <math>\mathrm{inr}</math> are linear.<br />
<br />
If <math>F:X\longrightarrow Z</math> and <math>G:Y\longrightarrow Z</math> are ''linear'' functions then the function <math>H:X\plus Y \longrightarrow Z</math> defined by <math>H(\mathrm{inl}(x)) = F(x)</math> and <math>H(\mathrm{inr}(y)) = G(y)</math> is linear.<br />
}}<br />
<br />
In other terms <math>X\plus Y</math> is the direct sum of <math>X</math> and <math>Y</math>. Note that in the theorem all functions are ''linear''. Things don't work so smoothly for stable functions. Historically, it was after noting this defect of coherent semantics w.r.t. the intuitionistic implication that Girard was led to discover linear functions.<br />
<br />
=== The par and the why not ===<br />
<br />
We now come to the most mysterious constructions of coherent semantics: the duals of the tensor and the of course.<br />
<br />
The ''par'' is the dual of the tensor, thus defined by: <math>X\parr Y = (X\orth\tens Y\orth)\orth</math>. From this one can deduce the definition in graph terms: <math>\web{X\parr Y} = \web{X\tens Y} = \web X\times \web Y</math> and <math>(a,b)\scoh_{X\parr Y} (a',b')</math> iff <math>a\scoh_X a'</math> or <math>b\scoh_Y b'</math>. With this definition one sees that we have:<br />
<br />
<math>X\limp Y = X\orth\parr Y</math><br />
<br />
for any coherent spaces <math>X</math> and <math>Y</math>. This equation can be seen as an alternative definition of the par: <math>X\parr Y = X\orth\limp Y</math>.<br />
<br />
Similarly the dual of the of course is called ''why not'' and defined by: <math>\wn X = (\oc X\orth)\orth</math>. From this we deduce the definition in the graph sense, which is a bit tricky: <math>\web{\wn X}</math> is the set of finite anticliques of <math>X</math>, and given two finite anticliques <math>x</math> and <math>y</math> of <math>X</math> we have <math>x\scoh_{\wn X} y</math> iff there are <math>a\in x</math> and <math>b\in y</math> such that <math>a\scoh_X b</math>.<br />
<br />
Note that both for the par and the why not it is much more convenient to define the strict coherence than the coherence.<br />
<br />
With these two last constructions, the equation between the stable function space, the of course and the linear function space may be written:<br />
<br />
<math>X\imp Y = \oc X\limp Y = (\oc X)\orth\parr Y = \wn X\orth\parr Y</math>.<br />
<br />
=== One and bottom ===<br />
<br />
Depending on the context we denote by <math>\one</math> or <math>\bot</math> the coherent space whose web is a singleton and whose coherence relation is the trivial reflexive relation.<br />
<br />
{{Theorem|<br />
<math>\one</math> is neutral for tensor, that is, there is a linear isomorphism <math>\varphi:X\tens\one\ \stackrel{\sim}{\longrightarrow}\ X</math>.<br />
<br />
Similarly <math>\bot</math> is neutral for par.<br />
}}<br />
<br />
=== Zero and top ===<br />
<br />
Depending on the context we denote by <math>\zero</math> or <math>\top</math> the coherent space with empty web.<br />
<br />
{{Theorem|<br />
<math>\zero</math> is neutral for the direct sum <math>\plus</math>, <math>\top</math> is neutral for the cartesian product <math>\with</math>.<br />
}}<br />
<br />
{{Remark|<br />
It is one of the main defects of coherent semantics w.r.t. linear logic that it identifies the neutrals: in coherent semantics <math>\zero = \top</math> and <math>\one = \bot</math>. However there is no known semantics of LL that solves this problem in a satisfactory way.}}<br />
<br />
== After coherent semantics ==<br />
<br />
Coherent semantics was an important milestone in the modern theory of logic of programs, in particular because it led to the invention of Linear Logic, and more generally because it established a strong link between logic and linear algebra; this link is nowadays acknowledged by the customary use of ''monoidal categories'' in logic. In some sense coherent semantics is a precursor of many later works that explore the linear nature of logic, such as [[geometry of interaction]], which interprets proofs by operators, or [[finiteness semantics]], which interprets formulas as vector spaces and resulted in [[differential linear logic]].<br />
<br />
Much of this work has been motivated by the fact that coherent semantics is not complete as a semantics of programs (technically, one says that it is not ''fully abstract''). In order to see this, let us first come back to the origin of the central concept of ''stability'', which as pointed out above originated in the study of sequentiality in programs.<br />
<br />
=== Sequentiality ===<br />
<br />
Sequentiality is a property that we will not define here (it would deserve its own article). We rely on the intuition that a function of <math>n</math> arguments is sequential if one can determine which of these arguments is examined first during the computation. Obviously any function implemented in a functional language is sequential; for example the function ''or'' defined à la CAML by:<br />
<br />
<code>let or_ (x, y) = if x then true else y</code><br />
<br />
examines its argument <math>x</math> first. Note that this may be expressed more abstractly by the property <math>\mathrm{or}(\bot, x) = \bot</math> for any boolean <math>x</math>: the function ''or'' needs its first argument in order to compute anything. On the other hand we have <math>\mathrm{or}(\mathrm{true}, \bot) = \mathrm{true}</math>: in some cases (when the first argument is true), the function doesn't need its second argument at all.<br />
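This behaviour can be sketched in a few lines (our own illustration, modelling <math>\bot</math> as a computation that raises instead of returning a value):<br />

```python
# Hypothetical sketch: the flat boolean domain with bottom modelled as a
# thunk that never yields a value.  All names are ours.
class Bottom(Exception):
    """Signals the undefined value _|_ (a computation with no result)."""

def bot():
    raise Bottom()

def seq_or(x, y):
    # sequential 'or': always examines its first argument first
    return True if x() else y()

# or(true, _|_) = true : the second argument is never needed
result = seq_or(lambda: True, bot)
assert result is True

# or(_|_, x) = _|_ : the function needs its first argument
try:
    seq_or(bot, lambda: True)
    diverges = False
except Bottom:
    diverges = True
assert diverges is True
```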
<br />
The typical non sequential function is the ''parallel or'' (that one cannot define in a CAML like language).<br />
<br />
For a while one may have believed that the stability condition on which coherent semantics is built was enough to capture the notion of ''sequentiality'' of programs. A hint was the already mentioned fact that the ''parallel or'' is not stable. This deserves a bit of explanation.<br />
<br />
==== The parallel or is not stable ====<br />
<br />
Let <math>B</math> be the coherent space of booleans, also known as the flat domain of booleans: <math>\web B = \{tt, ff\}</math> where <math>tt</math> and <math>ff</math> are two arbitrary distinct objects (for example one may take <math>tt = 0</math> and <math>ff = 1</math>) and for any <math>b_1, b_2\in \web B</math>, define <math>b_1\coh_B b_2</math> iff <math>b_1 = b_2</math>. Then <math>B</math> has exactly three cliques: the empty clique that we shall denote <math>\bot</math>, the singleton <math>\{tt\}</math> that we shall denote <math>T</math> and the singleton <math>\{ff\}</math> that we shall denote <math>F</math>. These three cliques are ordered by inclusion: <math>\bot \leq T, F</math> (we use <math>\leq</math> for <math>\subset</math> to enforce the idea that coherent spaces are domains).<br />
<br />
Recall the [[#Cartesian product|definition of the with]], and in particular that any clique of <math>B\with B</math> has the form <math>\langle x, y\rangle</math> where <math>x</math> and <math>y</math> are cliques of <math>B</math>. Thus <math>B\with B</math> has 9 cliques: <math>\langle\bot,\bot\rangle,\ \langle\bot, T\rangle,\ \langle\bot, F\rangle,\ \langle T,\bot\rangle,\ \dots</math> that are ordered by the product order: <math>\langle x,y\rangle\leq \langle x',y'\rangle</math> iff <math>x\leq x'</math> and <math>y\leq y'</math>.<br />
<br />
With these notations in mind one may define the parallel or by:<br />
<br />
<math><br />
\begin{array}{rcl}<br />
\mathrm{Por} : B\with B &\longrightarrow& B\\<br />
\langle T,\bot\rangle &\longrightarrow& T\\<br />
\langle \bot,T\rangle &\longrightarrow& T\\<br />
\langle F, F\rangle &\longrightarrow& F<br />
\end{array}<br />
</math><br />
<br />
The function is completely determined if we add the assumption that it is non decreasing; for example one must have <math>\mathrm{Por}\langle\bot,\bot\rangle = \bot</math> because the lhs has to be less than both <math>T</math> and <math>F</math> (because <math>\langle\bot,\bot\rangle \leq \langle T,\bot\rangle</math> and <math>\langle\bot,\bot\rangle \leq \langle F,F\rangle</math>).<br />
<br />
The function is not stable because <math>\langle T,\bot\rangle \cap \langle \bot, T\rangle = \langle\bot, \bot\rangle</math>, thus <math>\mathrm{Por}(\langle T,\bot\rangle \cap \langle \bot, T\rangle) = \bot</math> whereas <math>\mathrm{Por}\langle T,\bot\rangle \cap \mathrm{Por}\langle \bot, T\rangle = T\cap T = T</math>.<br />
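The computation above can be replayed mechanically; the following sketch (ours, with cliques of <math>B</math> represented as finite sets) checks that <math>\mathrm{Por}</math> violates the stability equation <math>\mathrm{Por}(x\cap x') = \mathrm{Por}(x)\cap\mathrm{Por}(x')</math> on the compatible pair <math>\langle T,\bot\rangle</math>, <math>\langle\bot,T\rangle</math>:<br />

```python
# Sketch (ours): cliques of B as finite sets; Por as the monotone function
# determined by the three clauses above.
BOT, T, F = frozenset(), frozenset({"tt"}), frozenset({"ff"})

def por(x, y):
    if "tt" in x or "tt" in y:          # above <T,_|_> or <_|_,T>
        return T
    if x == F and y == F:               # the clause <F,F> -> F
        return F
    return BOT                          # everywhere else, bottom

# <T,_|_> and <_|_,T> are compatible (their union <T,T> is a clique), yet:
left = por(T & BOT, BOT & T)            # Por(<T,_|_> /\ <_|_,T>) = Por<_|_,_|_>
right = por(T, BOT) & por(BOT, T)       # Por<T,_|_> /\ Por<_|_,T>
assert left == BOT and right == T       # stability fails: left != right
```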
<br />
Another way to see this is: suppose <math>x</math> and <math>y</math> are two cliques of <math>B</math> such that <math>tt\in \mathrm{Por}\langle x, y\rangle</math>, which means that <math>\mathrm{Por}\langle x, y\rangle = T</math>; according to the [[#Stable functions|characterisation theorem of stable functions]], if <math>\mathrm{Por}</math> were stable then there would be a unique minimum <math>x_0</math> included in <math>x</math>, and a unique minimum <math>y_0</math> included in <math>y</math> such that <math>\mathrm{Por}\langle x_0, y_0\rangle = T</math>. This is not the case because both <math>\langle T,\bot\rangle</math> and <math>\langle \bot,T\rangle</math> are minimal such that their value is <math>T</math>.<br />
<br />
In other terms, knowing that <math>\mathrm{Por}\langle x, y\rangle = T</math> doesn't tell which of <math>x</math> or <math>y</math> is responsible for that, although we know by the definition of <math>\mathrm{Por}</math> that only one of them is. Indeed the <math>\mathrm{Por}</math> function is not representable in sequential programming languages such as (typed) lambda-calculus.<br />
<br />
So the first natural idea would be that stability characterises sequentiality; but...<br />
<br />
==== The Gustave function is stable ====<br />
<br />
The Gustave function, so-called after an old joke, was found by Gérard Berry as an example of a function that is stable but non sequential. It is defined by:<br />
<br />
<math><br />
\begin{array}{rcl}<br />
B\with B\with B &\longrightarrow& B\\<br />
\langle T, F, \bot\rangle &\longrightarrow& T\\<br />
\langle \bot, T, F\rangle &\longrightarrow& T\\<br />
\langle F, \bot, T\rangle &\longrightarrow& T\\<br />
\langle x, y, z\rangle &\longrightarrow& F<br />
\end{array}<br />
</math><br />
<br />
The last clause is for all cliques <math>x</math>, <math>y</math> and <math>z</math> such that <math>\langle x, y, z\rangle</math> is incompatible with the three cliques <math>\langle T, F, \bot\rangle</math>, <math>\langle \bot, T, F\rangle</math> and <math>\langle F, \bot, T\rangle</math>, that is, such that its union with any of these three cliques is not a clique in <math>B\with B\with B</math>. We shall denote these three cliques by <math>x_1</math>, <math>x_2</math> and <math>x_3</math>.<br />
<br />
We furthermore assume that the Gustave function is non decreasing, so that we get <math>G\langle\bot,\bot,\bot\rangle = \bot</math>.<br />
<br />
We note that <math>x_1</math>, <math>x_2</math> and <math>x_3</math> are pairwise incompatible. From this we can deduce that the Gustave function is stable: typically if <math>G\langle x,y,z\rangle = T</math> then exactly one of the <math>x_i</math>s is contained in <math>\langle x, y, z\rangle</math>.<br />
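These claims can be verified by brute force over the 27 cliques of <math>B\with B\with B</math>; the following sketch (our own code, not the article's) checks pairwise incompatibility of <math>x_1</math>, <math>x_2</math>, <math>x_3</math> and the stability equation <math>G(c\cap d) = G(c)\cap G(d)</math> on every compatible pair:<br />

```python
# Brute-force check (ours): cliques of B as finite sets, cliques of
# B & B & B as triples, G as the monotone function defined above.
from itertools import product

BOT, T, F = frozenset(), frozenset({"tt"}), frozenset({"ff"})
CLIQUES_B = [BOT, T, F]
SPECIALS = [(T, F, BOT), (BOT, T, F), (F, BOT, T)]

def compatible(c, d):
    # the componentwise union must again be a clique of B & B & B
    return all(len(x | y) <= 1 for x, y in zip(c, d))

def above(c, d):               # c >= d in the product order
    return all(x >= y for x, y in zip(c, d))

def gustave(c):
    if any(above(c, s) for s in SPECIALS):
        return T
    if all(not compatible(c, s) for s in SPECIALS):
        return F
    return BOT

triples = list(product(CLIQUES_B, repeat=3))

# the three defining cliques are pairwise incompatible
assert all(not compatible(s, t)
           for s in SPECIALS for t in SPECIALS if s != t)

# stability: G(c /\ d) = G(c) /\ G(d) for every compatible pair
for c, d in product(triples, repeat=2):
    if compatible(c, d):
        meet = tuple(x & y for x, y in zip(c, d))
        assert gustave(meet) == gustave(c) & gustave(d)
```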
<br />
However it is not sequential because there is no way to determine which of its three arguments is examined first: it is not the first one otherwise we would have <math>G\langle\bot, T, F\rangle = \bot</math> and similarly it is not the second one nor the third one.<br />
<br />
In other terms there is no way to implement the Gustave function by a lambda-term (or in any sequential programming language). Thus coherent semantics is not complete w.r.t. lambda-calculus.<br />
<br />
The search for a right model of sequentiality motivated a lot of work, ''e.g.'', the ''sequential algorithms'' of Gérard Berry and Pierre-Louis Curien in the early eighties, which were more recently reformulated as a kind of [[Game semantics|game model]], and the theory of ''hypercoherent spaces'' of Antonio Bucciarelli and Thomas Ehrhard.<br />
<br />
=== Multiplicative neutrals and the mix rule ===<br />
<br />
Coherent semantics is slightly degenerate w.r.t. linear logic because it identifies the multiplicative neutrals (it also identifies the additive neutrals, but that's yet another problem): the coherent spaces <math>\one</math> and <math>\bot</math> are equal.<br />
<br />
The first consequence of the identity <math>\one = \bot</math> is that the formula <math>\one\limp\bot</math> becomes provable, and so does the formula <math>\bot</math>. Note that this doesn't entail (as it would in classical logic or intuitionistic logic) that linear logic is incoherent, because the principle <math>\bot\limp A</math> for any formula <math>A</math> is still not provable.<br />
<br />
The equality <math>\one = \bot</math> has also as consequence the fact that <math>\bot\limp\one</math> (or equivalently the formula <math>\one\parr\one</math>) is provable. This principle is also known as the [[Mix|mix rule]]<br />
<br />
<math><br />
\AxRule{\vdash \Gamma}<br />
\AxRule{\vdash \Delta}<br />
\LabelRule{\rulename{mix}}<br />
\BinRule{\vdash \Gamma,\Delta}<br />
\DisplayProof<br />
</math><br />
<br />
since the principle <math>\one\parr\one</math> can be used to show that this rule is admissible:<br />
<br />
<math><br />
\AxRule{\vdash\Gamma}<br />
\LabelRule{\bot_R}<br />
\UnaRule{\vdash\Gamma, \bot}<br />
\AxRule{\vdash\Delta}<br />
\LabelRule{\bot_R}<br />
\UnaRule{\vdash\Delta, \bot}<br />
\BinRule{\vdash \Gamma, \Delta, \bot\tens\bot}<br />
\NulRule{\vdash \one\parr\one}<br />
\LabelRule{\rulename{cut}}<br />
\BinRule{\vdash\Gamma,\Delta}<br />
\DisplayProof<br />
</math><br />
<br />
Neither of the two principles <math>\one\limp\bot</math> and <math>\bot\limp\one</math> is valid in linear logic. To correct this one could extend the syntax of linear logic by adding the mix rule. This is not very satisfactory as the mix rule violates some principles of [[Polarized linear logic]], typically the fact that a sequent of the form <math>\vdash P_1, P_2</math>, where <math>P_1</math> and <math>P_2</math> are positive, is never provable.<br />
<br />
On the other hand the mix-rule is valid in coherent semantics so one could try to find some other model that invalidates the mix-rule. For example Girard's Coherent Banach spaces were an attempt to address this issue.<br />
<br />
== References ==<br />
<references /></div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/GoI_for_MELL:_exponentialsGoI for MELL: exponentials2010-11-17T11:29:15Z<p>Laurent Regnier: definition of type !A</p>
<hr />
<div>= The tensor product of Hilbert spaces =<br />
<br />
Recall that we work in the Hilbert space <math>H=\ell^2(\mathbb{N})</math> endowed with its canonical hilbertian basis denoted by <math>(e_k)_{k\in\mathbb{N}}</math>.<br />
<br />
The space <math>H\tens H</math> is the collection of sequences <math>(x_{np})_{n,p\in\mathbb{N}}</math> of complex numbers such that <math>\sum_{n,p}|x_{np}|^2</math> converges. The scalar product is defined just as before:<br />
: <math>\langle (x_{np}), (y_{np})\rangle = \sum_{n,p} x_{np}\bar y_{np}</math>.<br />
<br />
If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_p)_{p\in\mathbb{N}}</math> are vectors in <math>H</math> then their tensor is the sequence:<br />
: <math>x\tens y = (x_ny_p)_{n,p\in\mathbb{N}}</math>.<br />
<br />
We define: <math>e_{np} = e_n\tens e_p</math> so that <math>e_{np}</math> is the sequence <math>(e_{npij})_{i,j\in\mathbb{N}}</math> of complex numbers given by <math>e_{npij} = \delta_{ni}\delta_{pj}</math>. By bilinearity of tensor we have:<br />
: <math>x\tens y = \left(\sum_n x_ne_n\right)\tens\left(\sum_p y_pe_p\right) = <br />
\sum_{n,p} x_ny_p\, e_n\tens e_p = \sum_{n,p} x_ny_p\,e_{np}</math><br />
<br />
Furthermore the system of vectors <math>(e_{np})</math> is a hilbertian basis of <math>H\tens H</math>: the sequence <math>x=(x_{np})_{n,p\in\mathbb{N}}</math> may be written:<br />
: <math>x = \sum_{n,p\in\mathbb{N}}x_{np}\,e_{np}<br />
= \sum_{n,p\in\mathbb{N}}x_{np}\,e_n\tens e_p</math>.<br />
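As a finite-dimensional sanity check (our own sketch, with sequences truncated to two coordinates), the coefficients of <math>x\tens y</math> are the products <math>x_ny_p</math>, and the tensor is bilinear:<br />

```python
# Hypothetical sketch (ours): tensor of finitely supported sequences,
# (x (x) y)_{np} = x_n * y_p, checked together with bilinearity.
x = [1.0, 2.0]
y = [3.0, 4.0]
xp = [5.0, 6.0]

def tens(a, b):
    return [[an * bp for bp in b] for an in a]

tensor = tens(x, y)
assert tensor == [[3.0, 4.0], [6.0, 8.0]]

# bilinearity in the first argument: (x + x') (x) y = x (x) y + x' (x) y
lhs = tens([a + b for a, b in zip(x, xp)], y)
rhs = [[a + b for a, b in zip(r1, r2)]
       for r1, r2 in zip(tens(x, y), tens(xp, y))]
assert lhs == rhs
```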
<br />
== An algebra isomorphism ==<br />
<br />
Being both separable Hilbert spaces, <math>H</math> and <math>H\tens H</math> are isomorphic. We will now explicitly define an isomorphism based on partial permutations.<br />
<br />
We fix, once and for all, a bijection from couples of natural numbers to natural<br />
numbers that we will denote by <math>(n,p)\mapsto\langle n,p\rangle</math>. For<br />
example set <math>\langle n,p\rangle = 2^n(2p+1) - 1</math>. Conversely, given<br />
<math>n\in\mathbb{N}</math> we denote by <math>n_{(1)}</math> and<br />
<math>n_{(2)}</math> the unique integers such that <math>\langle n_{(1)},<br />
n_{(2)}\rangle = n</math>.<br />
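With the choice <math>\langle n,p\rangle = 2^n(2p+1) - 1</math>, the inverse is computed from the dyadic valuation of <math>n+1</math>; the following sketch (function names ours) checks both round trips on an initial segment:<br />

```python
# The pairing <n,p> = 2^n(2p+1) - 1 from the text and its inverse
# n |-> (n_(1), n_(2)).  Function names are ours.
def pair(n, p):
    return 2 ** n * (2 * p + 1) - 1

def unpair(m):
    m += 1
    n = 0
    while m % 2 == 0:           # n_(1) = dyadic valuation of m + 1
        m //= 2
        n += 1
    return n, (m - 1) // 2      # the odd part of m + 1 is 2 * n_(2) + 1

# a bijection N^2 -> N: both round trips hold on an initial segment
assert all(unpair(pair(n, p)) == (n, p) for n in range(20) for p in range(20))
assert all(pair(*unpair(m)) == m for m in range(500))
```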
<br />
{{Remark|<br />
Just as it was convenient, but actually not necessary, to choose <math>p</math> and <math>q</math> so that <math>pp^* + qq^* = 1</math>, it is actually not necessary to have a ''bijection'': a one-to-one mapping from <math>\mathbb{N}^2</math> ''into'' <math>\mathbb{N}</math> would be sufficient for our purpose.<br />
}}<br />
<br />
This bijection can be extended into a Hilbert space isomorphism <math>\Phi:H\tens H\rightarrow H</math> by defining:<br />
: <math>e_n\tens e_p = e_{np} \mapsto e_{\langle n,p\rangle}</math>.<br />
<br />
Now given an operator <math>u</math> on <math>H</math> we define the operator <math>!u</math> on <math>H</math> by:<br />
: <math>!u(e_{\langle n,p\rangle}) = \Phi(e_n\tens u(e_p))</math>.<br />
<br />
{{Remark|<br />
The operator <math>!u</math> is defined by:<br />
: <math>!u = \Phi\circ (1\tens u)\circ \Phi^{-1}</math><br />
where <math>1\tens u</math> denotes the operator on <math>H\tens H</math> defined by <math>(1\tens u)(x\tens y) = x\tens u(y)</math> for any <math>x,y</math> in <math>H</math>. However this notation must not be confused with the [[GoI for MELL: the *-autonomous structure#The tensor rule|tensor of operators]] that was defined in the previous section in order to interpret the tensor rule of linear logic; we therefore will not use it.<br />
}}<br />
<br />
One can check that given two operators <math>u</math> and <math>v</math> we have:<br />
* <math>!u!v = {!(uv)}</math>;<br />
* <math>!(u^*) = (!u)^*</math>.<br />
<br />
Due to the fact that <math>\Phi</math> is an isomorphism ''onto'' we also have <math>!1=1</math>; this however will not be used.<br />
<br />
We therefore have that <math>!</math> is a morphism on <math>\mathcal{B}(H)</math>; it is easily seen to be injective, hence an isomorphism onto its image (though not onto <math>\mathcal{B}(H)</math>). As this is the crucial ingredient for interpreting the structural rules of linear logic, we will call it the ''copying iso''.<br />
<br />
== Interpretation of exponentials ==<br />
<br />
If we suppose that <math>u = u_\varphi</math> is a <math>p</math>-isometry generated by the partial permutation <math>\varphi</math> then we have:<br />
: <math>!u(e_{\langle n,p\rangle}) = \Phi(e_n\tens u(e_p)) = \Phi(e_n\tens e_{\varphi(p)}) = e_{\langle n,\varphi(p)\rangle}</math>.<br />
Thus <math>!u_\varphi</math> is itself a <math>p</math>-isometry generated by the<br />
partial permutation <math>!\varphi:n\mapsto \langle n_{(1)}, \varphi(n_{(2)})\rangle</math>, which shows that the proof space is stable under the copying iso.<br />
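The action of the copying iso on partial permutations can be sketched as follows (our own code, modelling a partial permutation as a finite dictionary); it also checks the functoriality <math>{!\varphi}\circ{!\psi} = {!(\varphi\circ\psi)}</math> on a sample:<br />

```python
# Hypothetical sketch: lifting a partial permutation phi to !phi along the
# pairing <n,p> = 2^n(2p+1) - 1.  All helper names are ours.
def pair(n, p):
    return 2 ** n * (2 * p + 1) - 1

def unpair(m):
    m += 1
    n = 0
    while m % 2 == 0:          # n_(1) is the number of factors 2 of m + 1
        m //= 2
        n += 1
    return n, (m - 1) // 2     # the odd part of m + 1 is 2 * n_(2) + 1

def bang(phi):
    """!phi : <n1,n2> |-> <n1, phi(n2)>, undefined where phi is undefined."""
    def lifted(m):
        n1, n2 = unpair(m)
        return pair(n1, phi[n2]) if n2 in phi else None
    return lifted

phi = {0: 1, 1: 0}             # the swap on {0, 1}
psi = {1: 2, 2: 1}             # the swap on {1, 2}
comp = {k: phi[psi[k]] for k in psi if psi[k] in phi}   # phi o psi

# functoriality !phi . !psi = !(phi o psi), checked on an initial segment
for m in range(100):
    step = bang(psi)(m)
    lhs = bang(phi)(step) if step is not None else None
    assert lhs == bang(comp)(m)
```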
<br />
Given a type <math>A</math> we define the type <math>!A</math> by:<br />
: <math>!A = \{!u, u\in A\}\biorth</math></div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interactionGeometry of interaction2010-05-25T07:53:08Z<p>Laurent Regnier: Exponentials</p>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operators algebra: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math> and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'' that bears some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (the multiplicative and exponential fragment), which is enough to encode lambda-calculus. Since then Girard has proposed several improvements: firstly the extension to the additive connectives, known as ''Geometry of Interaction 3'', and more recently a complete reformulation using von Neumann algebras that allows to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines; the execution formula appears as the composition of two automata interacting through a common interface. Also the execution formula has rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentioned in the section on [[coherent semantics]] under the name ''symmetric reducibility'', and that was first put to use in [[phase semantics]]. First set a general space <math>P</math> called the ''proof space'', because this is where the interpretations of proofs will live. Make sure that <math>P</math> is a (not necessarily commutative) monoid. In the case of GoI, the proof space is a subset of the space of bounded operators on <math>\ell^2</math>.<br />
<br />
Second define a particular subset of <math>P</math> that will be denoted by <math>\bot</math>; then derive a duality on <math>P</math>: for <math>u,v\in P</math>, <math>u</math> and <math>v</math> are dual<ref>In modern terms one says that <math>u</math> and <math>v</math> are ''polar''.</ref> iff <math>uv\in\bot</math>.<br />
<br />
For the GoI, two dualities have proved to work; we will consider the first one: nilpotency, ''i.e.'', <math>\bot</math> is the set of nilpotent operators in <math>P</math>. Let us spell this out: two operators <math>u</math> and <math>v</math> are dual iff there is a nonnegative integer <math>n</math> such that <math>(uv)^n = 0</math>. This duality is symmetric: if <math>uv</math> is nilpotent then <math>vu</math> is also nilpotent.<br />
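The nilpotency duality can be tried out concretely. The following sketch is our own toy illustration, not part of the article's construction: operators generated by partial permutations of the basis <math>(e_n)</math> are modelled as Python dicts mapping a basis index to its image, the product <math>uv</math> is composition, and nilpotency means some power is the zero map.

```python
# Toy model (our assumption): a partial permutation of the basis (e_n) is a
# dict n -> u(n), meaning u(e_n) = e_{u(n)}, and u(e_n) = 0 outside the dict.

def compose(u, v):
    """Operator product uv: apply v first, then u."""
    return {n: u[v[n]] for n in v if v[n] in u}

def nilpotent(u, bound=100):
    """True iff some power of u is the zero operator."""
    w = dict(u)
    for _ in range(bound):
        if not w:
            return True
        w = compose(u, w)
    return False

u = {0: 1}          # u maps e_0 to e_1
v = {1: 2}          # v maps e_1 to e_2

# uv = 0 already, while vu is nonzero but (vu)^2 = 0: the duality is symmetric.
assert nilpotent(compose(u, v)) and nilpotent(compose(v, u))
```

Note that an operator with a cycle, such as the one induced by <math>e_0\mapsto e_0</math>, is nilpotent with no power, so it is dual to nothing but operators annihilating its support.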
<br />
When <math>X</math> is a subset of <math>P</math> define <math>X\orth</math> as the set of elements of <math>P</math> that are dual to all elements of <math>X</math>:<br />
: <math>X\orth = \{u\in P, \forall v\in X, uv\in\bot\}</math>.<br />
<br />
This construction has a few properties that we will use without mention in the sequel. Given two subsets <math>X</math> and <math>Y</math> of <math>P</math> we have:<br />
* if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;<br />
* <math>X\subset X\biorth</math>;<br />
* <math>X\triorth = X\orth</math>.<br />
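On a finite toy proof space these three properties can be checked exhaustively. The sketch below is our own illustration (not the article's proof space): it takes <math>P</math> to be all partial injections on a two-point basis and computes orthogonals by brute force.

```python
from itertools import combinations, permutations

def compose(u, v):
    return {n: u[v[n]] for n in v if v[n] in u}

def nilpotent(u, bound=10):
    w = dict(u)
    for _ in range(bound):
        if not w:
            return True
        w = compose(u, w)
    return False

# Toy proof space: all partial injections on the basis {e_0, e_1}.
points = (0, 1)
P = [dict(zip(dom, cod))
     for k in range(len(points) + 1)
     for dom in combinations(points, k)
     for cod in permutations(points, k)]

def orth(X):
    """X^orth: the elements of P dual to every element of X."""
    return [u for u in P if all(nilpotent(compose(u, v)) for v in X)]

X = [{0: 1}]
assert all(x in orth(orth(X)) for x in X)   # X is contained in its bidual
assert orth(orth(orth(X))) == orth(X)       # the triple dual collapses
assert all(u in orth(P) or True for u in P) # (orth is always a subset of P)
```

The monotonicity property is visible too: since <math>X\subset P</math>, every element of <math>P\orth</math> is in <math>X\orth</math>.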
<br />
Last, define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for every operator <math>v\in T\orth</math>, that is every <math>v</math> such that <math>u'v\in\bot</math> for all <math>u'\in T</math>, we have <math>uv\in\bot</math>.<br />
<br />
The real work<ref>The difficulty is to find the right duality that will make logical operations interpretable. General conditions that allow one to achieve this have been formulated by Hyland and Schalk thanks to their theory of ''double gluing''.</ref> is now to interpret logical operations, that is to associate a type to each formula and an object to each proof, and to show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== [[GoI for MELL: partial isometries|Partial isometries]] ==<br />
<br />
The first step is to build the proof space. This is constructed as a special set of partial isometries on a separable Hilbert space <math>H</math> which turns out to be generated by partial permutations on the canonical basis of <math>H</math>.<br />
<br />
These so-called ''<math>p</math>-isometries'' enjoy some nice properties, the most important one being that a <math>p</math>-isometry is a sum of <math>p</math>-isometries iff all the terms of the sum have disjoint domains and disjoint codomains. As a consequence we get that a sum of <math>p</math>-isometries is null iff each term of the sum is null.<br />
<br />
A second important property is that operators on <math>H</math> can be ''externalized'' using <math>p</math>-isometries into operators acting on <math>H\oplus H</math>, and conversely operators on <math>H\oplus H</math> may be ''internalized'' into operators on <math>H</math>. This is widely used in the sequel.<br />
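Internalization can be pictured concretely under a hypothetical identification of <math>H\oplus H</math> with <math>H</math> that sends the two copies of the basis to the even and odd indices; this encoding (and the dict model) is our own sketch, not the article's.

```python
# Sketch (our own encoding): identify H + H with H by sending the first copy
# of the basis to even indices and the second copy to odd indices.

def compose(u, v):
    return {n: u[v[n]] for n in v if v[n] in u}

def adj(u):
    """Adjoint of a partial injection: its inverse."""
    return {m: n for n, m in u.items()}

N = 16
p = {n: 2 * n for n in range(N)}        # isometry onto the even indices
q = {n: 2 * n + 1 for n in range(N)}    # isometry onto the odd indices

def internalize(u11, u12, u21, u22):
    """U = p u11 p* + p u12 q* + q u21 p* + q u22 q*, acting on H."""
    U = {}
    for r, u, s in ((p, u11, p), (p, u12, q), (q, u21, p), (q, u22, q)):
        U.update(compose(compose(r, u), adj(s)))
    return U

def externalize(U, i, j):
    """Component U_ij = r_i* U r_j, with r_1 = p and r_2 = q."""
    r = {1: p, 2: q}
    return compose(compose(adj(r[i]), U), r[j])

U = internalize({0: 1}, {}, {}, {2: 0})
assert externalize(U, 1, 1) == {0: 1}   # the round trip recovers the components
assert externalize(U, 2, 2) == {2: 0}
assert externalize(U, 1, 2) == {}
```

Because <math>p^*p = q^*q = 1</math> and <math>p^*q = q^*p = 0</math> in this encoding, externalizing an internalized matrix recovers each component exactly.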
<br />
== [[GoI for MELL: the *-autonomous structure|The *-autonomous structure]] ==<br />
<br />
The second step is to interpret the linear logic multiplicative operations, most importantly the cut rule.<br />
<br />
Internalization/externalization is the key for this: typically the type <math>A\tens B</math> is interpreted by a set of <math>p</math>-isometries which are internalizations of operators acting on <math>H\oplus H</math>.<br />
<br />
The (interpretation of the) cut rule is defined in two steps: firstly we use nilpotency to define an operation corresponding to lambda-calculus application which, given two <math>p</math>-isometries in <math>A\limp B</math> and <math>A</math> respectively, produces an operator in <math>B</math>. From this we deduce the composition and finally obtain a structure of *-autonomous category, that is a model of multiplicative linear logic.<br />
<br />
== [[GoI for MELL: exponentials|The exponentials]] ==<br />
<br />
Finally we turn to the exponentials, that is the connectives managing duplication. To do this we introduce an isomorphism (induced by a <math>p</math>-isometry) between <math>H</math> and <math>H\tens H</math>: the first component of the tensor is intended to hold the address of the copy whereas the second component contains the content of the copy.<br />
<br />
We eventually get a quasi-model of full MELL; quasi in the sense that, while we can construct <math>p</math>-isometries for the usual structural operations of MELL (contraction, dereliction, digging), the interpretation of linear logic proofs is not invariant under cut elimination in general. It is however invariant in some good cases, which are enough to obtain a soundness theorem for the interpretation.<br />
<br />
= The Geometry of Interaction as an abstract machine =<br />
<br />
= Notes and references =<br />
<br />
<references/></div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/GoI_for_MELL:_the_*-autonomous_structureGoI for MELL: the *-autonomous structure2010-05-15T11:04:03Z<p>Laurent Regnier: New page: Recall that when <math>u</math> and <math>v</math> are <math>p</math>-isometries we say they are dual when <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpo...</p>
<hr />
<div>Recall that when <math>u</math> and <math>v</math> are <math>p</math>-isometries we say they are dual when <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators. A ''type'' is a subset of <math>\mathcal{P}</math> that is equal to its bidual. In particular <math>X\orth</math> is a type for any <math>X\subset\mathcal{P}</math>. We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
<br />
= The tensor and the linear application =<br />
<br />
If <math>u</math> and <math>v</math> are two <math>p</math>-isometries, summing them does not in general produce a <math>p</math>-isometry. However, as <math>pup^*</math> and <math>qvq^*</math> have disjoint domains and disjoint codomains, <math>pup^* + qvq^*</math> always is a <math>p</math>-isometry. Given two types <math>A</math> and <math>B</math>, we thus define their ''tensor'' by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by bidual to make sure that we obtain a type.<br />
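As a quick sanity check in a toy model (partial injections on basis indices with the hypothetical encoding <math>p(n)=2n</math>, <math>q(n)=2n+1</math>, ours and not the article's), <math>pup^*</math> and <math>qvq^*</math> are supported on even and odd indices respectively, so their sum is again a partial injection:

```python
def compose(u, v):
    return {n: u[v[n]] for n in v if v[n] in u}

def adj(u):
    return {m: n for n, m in u.items()}

N = 16
p = {n: 2 * n for n in range(N)}        # first summand: even indices
q = {n: 2 * n + 1 for n in range(N)}    # second summand: odd indices

u = {0: 0, 1: 2}                        # an arbitrary partial injection
v = {0: 1}

a = compose(compose(p, u), adj(p))      # p u p*, supported on even indices
b = compose(compose(q, v), adj(q))      # q v q*, supported on odd indices

# Disjoint domains and codomains: a + b is again a partial injection.
assert not (set(a) & set(b))
assert not (set(a.values()) & set(b.values()))
```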
<br />
From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
{{Remark|<br />
This so-called tensor resembles a sum rather than a product. We will stick to this terminology though because it defines the interpretation of the tensor connective of linear logic.<br />
}}<br />
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we get:<br />
: <math>A\limp B = \{u\in\mathcal{P}\text{ s.t. } \forall v\in A, \forall w\in B\orth,\, u.(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
= The identity =<br />
<br />
Given a type <math>A</math> we are to find an operator <math>\iota</math> in type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. This is the case since <math>u</math> is in <math>A</math> and <math>v</math> in <math>A\orth</math>.<br />
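In the toy partial-injection encoding used earlier (even/odd indices for <math>H\oplus H</math>; the encoding is our own assumption), the computation <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math> and its nilpotency can be checked on an example:

```python
def compose(u, v):
    return {n: u[v[n]] for n in v if v[n] in u}

def adj(u):
    return {m: n for n, m in u.items()}

def nilpotent(u, bound=100):
    w = dict(u)
    for _ in range(bound):
        if not w:
            return True
        w = compose(u, w)
    return False

N = 16
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}

def sandwich(r, u, s):
    """r u s*."""
    return compose(compose(r, u), adj(s))

iota = {**sandwich(p, {n: n for n in range(N)}, q),
        **sandwich(q, {n: n for n in range(N)}, p)}   # pq* + qp*

u = {0: 1}            # u in A
v = {2: 0}            # v in A^orth: uv is nilpotent
assert nilpotent(compose(u, v))

lhs = compose(iota, {**sandwich(p, u, p), **sandwich(q, v, q)})
rhs = {**sandwich(q, u, p), **sandwich(p, v, q)}
assert lhs == rhs     # iota(pup* + qvq*) = qup* + pvq*
assert nilpotent(lhs) # so iota is indeed in A -o A for this pair
```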
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
<br />
= The execution formula, version 1: application =<br />
<br />
{{Definition|<br />
Let <math>u</math> and <math>v</math> be two operators; as above denote by <math>u_{ij}</math> the external components of <math>u</math>. If <math>u_{11}v</math> is nilpotent we define the ''application of <math>u</math> to <math>v</math>'' by:<br />
: <math>\mathrm{App}(u,v) = u_{22} + u_{21}v\sum_k(u_{11}v)^ku_{12}</math>.<br />
}}<br />
<br />
Note that the hypothesis that <math>u_{11}v</math> is nilpotent entails that the sum <math>\sum_k(u_{11}v)^k</math> is actually finite. It would be enough to assume that this sum converges. For simplicity we stick to the nilpotency condition, but we should mention that weak nilpotency would do as well.<br />
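Continuing the toy partial-injection model (our own sketch with the hypothetical even/odd encoding, not the article's operators), the application formula can be computed by iterating until the nilpotent part dies out; applying the identity <math>\iota = pq^* + qp^*</math> defined above recovers <math>v</math>:

```python
def compose(u, v):
    return {n: u[v[n]] for n in v if v[n] in u}

def adj(u):
    return {m: n for n, m in u.items()}

N = 16
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}

def ext(u, i, j):
    """External component u_ij = r_i* u r_j, with r_1 = p and r_2 = q."""
    r = {1: p, 2: q}
    return compose(compose(adj(r[i]), u), r[j])

def app(u, v, bound=100):
    """App(u, v) = u22 + u21 v (sum over k of (u11 v)^k) u12."""
    u11, u12 = ext(u, 1, 1), ext(u, 1, 2)
    u21, u22 = ext(u, 2, 1), ext(u, 2, 2)
    result = dict(u22)
    a = compose(v, u12)           # a_k = v (u11 v)^k u12
    vu11 = compose(v, u11)
    for _ in range(bound):
        if not a:
            return result
        result.update(compose(u21, a))  # terms are disjoint for p-isometries
        a = compose(vu11, a)
    raise ValueError("u11 v is not nilpotent")

iota = {**compose(p, adj(q)), **compose(q, adj(p))}   # the identity pq* + qp*
v = {0: 1, 2: 3}
assert app(iota, v) == v          # App(iota, v) = v, as computed in the text
```

The loop terminates precisely because <math>u_{11}v</math> is nilpotent, matching the finiteness of the sum noted above.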
<br />
{{Theorem|<br />
If <math>u</math> and <math>v</math> are <math>p</math>-isometries such that <math>u_{11}v</math> is nilpotent, then <math>\mathrm{App}(u,v)</math> is also a <math>p</math>-isometry.<br />
}}<br />
<br />
{{Proof|<br />
Let us write <math>E_k = u_{21}v(u_{11}v)^ku_{12}</math>. Recall that, being external components of the <math>p</math>-isometry <math>u</math>, the operators <math>u_{22}</math> and <math>u_{12}</math> have disjoint domains. Thus so do <math>u_{22}</math> and <math>E_k</math>. Similarly <math>u_{22}</math> and <math>E_k</math> have disjoint codomains because <math>u_{22}</math> and <math>u_{21}</math> have disjoint codomains.<br />
<br />
Let now <math>k</math> and <math>l</math> be two integers such that <math>k>l</math> and let us compute for example the intersection of the codomains of <math>E_k</math> and <math>E_l</math>:<br />
: <math><br />
E_kE^*_kE_lE^*_l = (u_{21}v(u_{11}v)^ku_{12})(u^*_{12}(v^*u^*_{11})^kv^*u^*_{21})(u_{21}v(u_{11}v)^lu_{12})(u^*_{12}(v^*u^*_{11})^lv^*u_{21}^*)<br />
</math><br />
As <math>k>l</math> we may write <math>(v^*u_{11}^*)^l = (v^*u^*_{11})^{k-l-1}v^*u^*_{11}(v^*u^*_{11})^l</math>. Let us note <math>E = u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}</math> so that <math>E_kE^*_kE_lE^*_l = u_{21}v(u_{11}v)^ku_{12}u^*_{12}(v^*u^*_{11})^{k-l-1}v^*Eu^*_{12}(v^*u^*_{11})^lv^*u_{21}^*</math>. We have:<br />
: <math>\begin{align}<br />
E &= u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}\\<br />
&= (u^*_{11}u_{11}u^*_{11})(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}\\<br />
&= u^*_{11}(u_{11}u^*_{11})\bigl((v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^l\bigr)u_{12}\\<br />
&= u^*_{11}\bigl((v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^l\bigr)(u_{11}u^*_{11})u_{12}\\<br />
&= u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{11}u^*_{11}u_{12}\\<br />
&= 0<br />
\end{align}</math><br />
because <math>u_{11}</math> and <math>u_{12}</math> have disjoint codomains, thus <math>u^*_{11}u_{12} = 0</math>. <br />
<br />
Similarly we can show that <math>E_k</math> and <math>E_l</math> have disjoint domains. Therefore we have proved that all terms of the sum <math>\mathrm{App}(u,v)</math> have disjoint domains and disjoint codomains. Consequently <math>\mathrm{App}(u,v)</math> is a <math>p</math>-isometry.<br />
}}<br />
<br />
{{Theorem|<br />
Let <math>A</math> and <math>B</math> be two types and <math>u</math> a <math>p</math>-isometry. Then the two following conditions are equivalent:<br />
# <math>u\in A\limp B</math>;<br />
# for any <math>v\in A</math> we have:<br />
#* <math>u_{11}v</math> is nilpotent and<br />
#* <math>\mathrm{App}(u, v)\in B</math>.<br />
}}<br />
<br />
{{Proof|<br />
Let <math>v</math> and <math>w</math> be two <math>p</math>-isometries. If we compute<br />
: <math>(u.(pvp^* + qwq^*))^n = \bigl((pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*)(pvp^* + qwq^*)\bigr)^n</math><br />
we get a finite sum of monomial operators of the form:<br />
# <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math><br />
# <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{12}w(u_{22}w)^{i_m}q^*</math>,<br />
# <math>q(u_{22}w)^{i_0}u_{21}v(u_{11}v)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math> or<br />
# <math>q(u_{22}w)^{i_0}u_{21}v(u_{11}v)^{i_1}\dots u_{12}w(u_{22}w)^{i_m}q^*</math>,<br />
for all tuples of (nonnegative) integers <math>(i_0,\dots, i_m)</math> such that <math>i_0+\cdots+i_m+m = n</math>.<br />
<br />
Each of these monomials is a <math>p</math>-isometry. Furthermore they have disjoint domains and disjoint codomains because their sum is the <math>p</math>-isometry <math>(u.(pvp^* + qwq^*))^n</math>. This entails that <math>(u.(pvp^* + qwq^*))^n = 0</math> iff all these monomials are null.<br />
<br />
Suppose <math>u_{11}v</math> is nilpotent and consider:<br />
: <math>\bigl(\mathrm{App}(u,v)w\bigr)^n = \biggl(\bigl(u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12}\bigr)w\biggr)^n</math>.<br />
Developing we get a finite sum of monomials of the form:<br />
: 5. <math>(u_{22}w)^{l_0}u_{21}v(u_{11}v)^{k_1}u_{12}w(u_{22}w)^{l_1}\dots u_{21}v(u_{11}v)^{k_m}u_{12}w(u_{22}w)^{l_m}</math><br />
for all tuples <math>(l_0, k_1, l_1,\dots, k_m, l_m)</math> such that <math>l_0+\cdots+l_m + m = n</math> and <math>k_i</math> is less than the degree of nilpotency of <math>u_{11}v</math> for all <math>i</math>.<br />
<br />
Again, as these monomials are <math>p</math>-isometries and their sum is the <math>p</math>-isometry <math>(\mathrm{App}(u,v)w)^n</math>, they have pairwise disjoint domains and pairwise disjoint codomains. Note that each of these monomials is equal to <math>q^*Mq</math> where <math>M</math> is a monomial of type 4 above.<br />
<br />
As before we thus have that <math>\bigl(\mathrm{App}(u,v)w\bigr)^n = 0</math> iff all monomials of type 5 are null.<br />
<br />
Suppose now that <math>u\in A\limp B</math> and <math>v\in A</math>. Then, since <math>0\in B\orth</math> (<math>0</math> belongs to any type), <math>u.(pvp^*) = pu_{11}vp^* + qu_{21}vp^*</math> is nilpotent; as <math>(u.(pvp^*))^{n+1} = p(u_{11}v)^{n+1}p^* + qu_{21}v(u_{11}v)^np^*</math>, this entails that <math>u_{11}v</math> is nilpotent.<br />
<br />
Suppose further that <math>w\in B\orth</math>. Then <math>u.(pvp^*+qwq^*)</math> is nilpotent, thus there is an <math>N</math> such that <math>(u.(pvp^* + qwq^*))^n=0</math> for any <math>n\geq N</math>. This entails that all monomials of type 1 to 4 are null. Therefore all monomials appearing in the development of <math>(\mathrm{App}(u,v)w)^N</math> are null, which proves that <math>\mathrm{App}(u,v)w</math> is nilpotent. Thus <math>\mathrm{App}(u,v)\in B</math>.<br />
<br />
Conversely suppose for any <math>v\in A</math> and <math>w\in B\orth</math>, <math>u_{11}v</math> and <math>\mathrm{App}(u,v)w</math> are nilpotent. Let <math>P</math> and <math>N</math> be their respective degrees of nilpotency and put <math>n=N(P+1)+N</math>. Then we claim that all monomials of type 1 to 4 appearing in the development of <math>(u.(pvp^*+qwq^*))^n</math> are null.<br />
<br />
Consider for example a monomial of type 1:<br />
: <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math><br />
with <math>i_0+\cdots+i_m + m = n</math>. Note that <math>m</math> must be even.<br />
<br />
If <math>i_{2k}\geq P</math> for some <math>0\leq k\leq m/2</math> then <math>(u_{11}v)^{i_{2k}}=0</math> thus our monomial is null. Otherwise if <math>i_{2k}<P</math> for all <math>k</math> we have:<br />
: <math>i_1+i_3+\cdots +i_{m-1} + m/2 = n - m/2 - (i_0+i_2+\cdots +i_m)</math><br />
thus:<br />
: <math>i_1+i_3+\cdots +i_{m-1} + m/2\geq n - m/2 - (1+m/2)P</math>.<br />
Now if <math>m/2\geq N</math> then <math>i_1+\cdots+i_{m-1}+m/2 \geq N</math>. Otherwise <math>1+m/2\leq N</math> thus<br />
: <math>i_1+i_3+\cdots +i_{m-1} + m/2\geq n - N - NP = N</math>.<br />
Since <math>N</math> is the degree of nilpotency of <math>\mathrm{App}(u,v)w</math> we have that the monomial:<br />
: <math>(u_{22}w)^{i_1}u_{21}v(u_{11}v)^{i_2}u_{12}w\dots(u_{11}v)^{i_{m-2}}u_{12}w(u_{22}w)^{i_{m-1}}</math><br />
is null, thus also the monomial of type 1 we started with.<br />
}}<br />
<br />
{{Corollary|<br />
If <math>A</math> and <math>B</math> are types then we have:<br />
: <math>A\limp B = \{u\in\mathcal{P} \text{ such that }\forall v\in A: u_{11}v\in\bot\text{ and } \mathrm{App}(u, v)\in B\}</math>.<br />
}}<br />
<br />
As an example if we compute the application of the interpretation of the identity <math>\iota</math> in type <math>A\limp A</math> to the operator <math>v\in A</math> then we have:<br />
: <math>\mathrm{App}(\iota, v) = \iota_{22} + \iota_{21}v\sum_k(\iota_{11}v)^k\iota_{12}</math>.<br />
Now recall that <math>\iota = pq^* + qp^*</math> so that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math> and we thus get:<br />
: <math>\mathrm{App}(\iota, v) = v</math><br />
as expected.<br />
<br />
= The tensor rule =<br />
<br />
Let now <math>A, A', B</math> and <math>B'</math> be types and consider two operators <math>u</math> and <math>u'</math> respectively in <math>A\limp B</math> and <math>A'\limp B'</math>. We define an operator <math>u\tens u'</math> by:<br />
: <math>\begin{align}<br />
u\tens u' &= ppp^*upp^*p^* + qpq^*upp^*p^* + ppp^*uqp^*q^* + qpq^*uqp^*q^*\\<br />
&+ pqp^*u'pq^*p^* + qqq^*u'pq^*p^* + pqp^*u'qq^*q^* + qqq^*u'qq^*q^*<br />
\end{align}</math><br />
<br />
Once again the notation is motivated by linear logic syntax and clashes with linear algebra practice, since what we denote by <math>u\tens u'</math> is actually the internalization of the direct sum <math>u\oplus u'</math>.<br />
<br />
Indeed if we think of <math>u</math> and <math>u'</math> as the internalizations of the matrices:<br />
: <math><br />
\begin{pmatrix}u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}<br />
</math> and <math><br />
\begin{pmatrix}u'_{11} & u'_{12}\\<br />
u'_{21} & u'_{22}<br />
\end{pmatrix}</math><br />
then we may write:<br />
: <math>\begin{align}<br />
u\tens u' &= ppu_{11}p^*p^* + qpu_{21}p^*p^* + ppu_{12}p^*q^* + qpu_{22}p^*q^*\\<br />
&+ pqu'_{11}q^*p^* + qqu'_{21}q^*p^* + pqu'_{12}q^*q^* + qqu'_{22}q^*q^*<br />
\end{align}</math><br />
<br />
Thus the components of <math>u\tens u'</math> are given by:<br />
: <math>(u\tens u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math>.<br />
and we see that <math>u\tens u'</math> is actually the internalization of the matrix:<br />
: <math><br />
\begin{pmatrix}<br />
u_{11} & 0 & u_{12} & 0 \\<br />
0 & u'_{11} & 0 & u'_{12} \\<br />
u_{21} & 0 & u_{22} & 0 \\<br />
0 & u'_{21} & 0 & u'_{22} \\<br />
\end{pmatrix}<br />
</math><br />
<br />
We now show that if <math>u</math> and <math>u'</math> are in types <math>A\limp B</math> and <math>A'\limp B'</math>, then <math>u\tens u'</math> is in <math>A\tens A'\limp B\tens B'</math>. For this we consider <math>v</math> and <math>v'</math> respectively in <math>A</math> and <math>A'</math>, so that <math>pvp^* + qv'q^*</math> is in <math>A\tens A'</math>, and we show that <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)\in B\tens B'</math>.<br />
<br />
Since <math>u</math> and <math>u'</math> are in <math>A\limp B</math> and <math>A'\limp B'</math> we have that <math>u_{11}v</math> and <math>u'_{11}v'</math> are nilpotent and that <math>\mathrm{App}(u, v)</math> and <math>\mathrm{App}(u', v')</math> are respectively in <math>B</math> and <math>B'</math>, thus:<br />
: <math>p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^* \in B\tens B'</math>.<br />
<br />
But we have:<br />
: <math>\begin{align}<br />
\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^n<br />
&= \bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^n\\<br />
&= (pu_{11}vp^* + qu'_{11}v'q^*)^n\\<br />
&= p(u_{11}v)^np^* + q(u'_{11}v')^nq^*<br />
\end{align}</math><br />
<br />
Therefore <math>(u\tens u')_{11}(pvp^* + qv'q^*)</math> is nilpotent. So we can compute <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)</math>:<br />
: <math>\begin{align}<br />
&\mathrm{App}(u\tens u', pvp^* + qv'q^*)\\<br />
&= (u\tens u')_{22} + (u\tens u')_{21}(pvp^* + qv'q^*)\sum\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^k(u\tens u')_{12}\\<br />
&= pu_{22}p^* + qu'_{22}q^* + (pu_{21}p^* + qu'_{21}q^*)(pvp^* + qv'q^*)\sum\bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^k(pu_{12}p^* + qu'_{12}q^*)\\<br />
&= p\bigl(u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}\bigr)p^* + q\bigl(u'_{22} + u'_{21}v'\sum(u'_{11}v')^ku'_{12}\bigr)q^*\\<br />
&= p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^*<br />
\end{align}</math><br />
and thus lives in <math>B\tens B'</math>.<br />
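This adequacy computation can be replayed on the dictionary encoding of partial permutations: we build <math>u\tens u'</math> from its components <math>(u\tens u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math> and check the final identity on sample operators (the encoding, the antidiagonal choice of <math>u</math> and <math>u'</math>, and all helper names are ours):<br />

```python
def comp(f, g): return {n: f[g[n]] for n in g if g[n] in f}
def inv(f): return {v: k for k, v in f.items()}

N = 1 << 12
p = {n: 2*n for n in range(N)}
q = {n: 2*n + 1 for n in range(N)}
P, Q = inv(p), inv(q)

def ext(u):  # external components (u11, u12, u21, u22)
    return (comp(P, comp(u, p)), comp(P, comp(u, q)),
            comp(Q, comp(u, p)), comp(Q, comp(u, q)))

def internal(m11, m12, m21, m22):  # p m11 p* + p m12 q* + q m21 p* + q m22 q*
    return {**comp(p, comp(m11, P)), **comp(p, comp(m12, Q)),
            **comp(q, comp(m21, P)), **comp(q, comp(m22, Q))}

def app(u, v):  # execution formula
    u11, u12, u21, u22 = ext(u)
    total, term = dict(u22), u12
    while term:                       # finite because u11∘v is nilpotent
        total.update(comp(comp(u21, v), term))
        term = comp(comp(u11, v), term)
    return total

def tensor(u, u2):  # components (u⊗u')_ij = p u_ij p* + q u'_ij q*
    blocks = [{**comp(p, comp(a, P)), **comp(q, comp(b, Q))}
              for a, b in zip(ext(u), ext(u2))]
    return internal(*blocks)

a, b   = {0: 1, 1: 0}, {0: 0, 1: 1}
a2, b2 = {0: 0, 1: 1}, {0: 1, 1: 0}
u  = internal({}, a, b, {})    # antidiagonal operator: App(u, v)  = b ∘ v ∘ a
u2 = internal({}, a2, b2, {})  # App(u2, v') = b2 ∘ v' ∘ a2
v, v2 = {0: 1, 1: 0}, {0: 0, 1: 1}

w   = {**comp(p, comp(v, P)), **comp(q, comp(v2, Q))}   # p v p* + q v' q*
lhs = app(tensor(u, u2), w)
rhs = {**comp(p, comp(app(u, v), P)), **comp(q, comp(app(u2, v2), Q))}
assert lhs == rhs   # App(u⊗u', pvp*+qv'q*) = p App(u,v) p* + q App(u',v') q*
```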
<br />
= Other monoidal constructions =<br />
<br />
== Contraposition ==<br />
<br />
Let <math>A</math> and <math>B</math> be some types; we have:<br />
: <math>A\limp B = A\orth\limpinv B\orth</math><br />
<br />
Indeed, <math>u\in A\limp B</math> means that for any <math>v</math> and <math>w</math> in respectively <math>A</math> and <math>B\orth</math> we have <math>u.(pvp^* + qwq^*)\in\bot</math> which is exactly the definition of <math>A\orth\limpinv B\orth</math>.<br />
<br />
We will denote <math>u\orth</math> the operator:<br />
: <math>u\orth = pu_{22}p^* + pu_{21}q^* + qu_{12}p^* + qu_{11}q^*</math><br />
where <math>u_{ij}</math> is given by externalization. Therefore the externalization of <math>u\orth</math> is:<br />
: <math>(u\orth)_{ij} = u_{\bar i\,\bar j}</math> where <math>\bar .</math> is defined by <math>\bar1 = 2, \bar2 = 1</math>.<br />
From this we deduce that <math>u\orth\in B\orth\limp A\orth</math> and that <math>(u\orth)\orth = u</math>.<br />
<br />
== Commutativity ==<br />
Let <math>\sigma</math> be the operator:<br />
: <math>\sigma = ppq^*q^* +pqp^*q^* + qpq^*p^* + qqp^*p^*</math>.<br />
One can check that <math>\sigma</math> is the internalization of the operator <math>S</math> on <math>H\oplus H\oplus H\oplus H</math> defined by: <math>S(x_1\oplus x_2\oplus x_3\oplus x_4) = x_4\oplus x_3\oplus x_2\oplus x_1</math>. In particular the components of <math>\sigma</math> are:<br />
: <math>\sigma_{11} = \sigma_{22} = 0</math>;<br />
: <math>\sigma_{12} = \sigma_{21} = pq^* + qp^*</math>.<br />
<br />
Let <math>A</math> and <math>B</math> be types and <math>u</math> and <math>v</math> be operators in <math>A</math> and <math>B</math>. Then <math>pup^* + qvq^*</math> is in <math>A\tens B</math> and as <math>\sigma_{11}.(pup^* + qvq^*) = 0</math> we may compute:<br />
: <math>\begin{align}<br />
\mathrm{App}(\sigma, pup^* + qvq^*) <br />
&= \sigma_{22} + \sigma_{21}(pup^* + qvq^*)\sum(\sigma_{11}(pup^* + qvq^*))^k\sigma_{12}\\<br />
&= (pq^* + qp^*)(pup^* + qvq^*)(pq^* + qp^*)\\<br />
&= pvp^* + quq^*<br />
\end{align}</math><br />
But <math>pvp^* + quq^*\in B\tens A</math>, thus we have shown that:<br />
: <math>\sigma\in (A\tens B) \limp (B\tens A)</math>.<br />
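This computation can also be checked on the dictionary encoding of partial permutations: we build <math>\sigma</math> directly from its defining monomials and verify that applying it swaps the two components (encoding and helper names ours):<br />

```python
def comp(f, g): return {n: f[g[n]] for n in g if g[n] in f}
def inv(f): return {v: k for k, v in f.items()}

N = 1 << 12
p = {n: 2*n for n in range(N)}
q = {n: 2*n + 1 for n in range(N)}
P, Q = inv(p), inv(q)

def app(u, v):  # execution formula App(u,v) = u22 + u21 v Σ_k (u11 v)^k u12
    u11, u12 = comp(P, comp(u, p)), comp(P, comp(u, q))
    u21, u22 = comp(Q, comp(u, p)), comp(Q, comp(u, q))
    total, term = dict(u22), u12
    while term:
        total.update(comp(comp(u21, v), term))
        term = comp(comp(u11, v), term)
    return total

# σ = pp(qq)* + pq(qp)* + qp(pq)* + qq(pp)*
pp, pq_, qp_, qq = comp(p, p), comp(p, q), comp(q, p), comp(q, q)
sigma = {**comp(pp, inv(qq)), **comp(pq_, inv(qp_)),
         **comp(qp_, inv(pq_)), **comp(qq, inv(pp))}

u, v = {0: 1, 1: 0}, {0: 0, 1: 1}
w   = {**comp(p, comp(u, P)), **comp(q, comp(v, Q))}   # p u p* + q v q*  in A⊗B
out = {**comp(p, comp(v, P)), **comp(q, comp(u, Q))}   # p v p* + q u q*  in B⊗A
assert app(sigma, w) == out          # σ swaps the two tensor components
assert app(sigma, app(sigma, w)) == w  # and swapping twice is the identity
```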
<br />
== Distributivity ==<br />
We get distributivity by considering the operator:<br />
: <math>\delta = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math><br />
that is similarly shown to be in type <math>A\tens(B\tens C)\limp(A\tens B)\tens C</math> for any types <math>A</math>, <math>B</math> and <math>C</math>.<br />
<br />
<br />
== Weak distributivity ==<br />
Similarly we get weak distributivity thanks to the operators:<br />
: <math>\delta_1 = pppp^*q^* + ppqp^*q^*q^* + pqq^*q^*q^* + qpp^*p^*p^* + qqp q^*p^*p^* + qqq q^*p^*</math> and<br />
: <math>\delta_2 = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math>.<br />
<br />
Given three types <math>A</math>, <math>B</math> and <math>C</math> then one can show that:<br />
: <math>\delta_1</math> has type <math>((A\limp B)\tens C)\limp A\limp (B\tens C)</math> and<br />
: <math>\delta_2</math> has type <math>(A\tens(B\limp C))\limp (A\limp B)\limp C</math>.<br />
<br />
= Execution formula, version 2: composition =<br />
<br />
Let <math>A</math>, <math>B</math> and <math>C</math> be types and <math>u</math> and <math>v</math> be operators respectively in types <math>A\limp B</math> and <math>B\limp C</math>.<br />
<br />
As usual we will denote <math>u_{ij}</math> and <math>v_{ij}</math> the operators obtained by externalization of <math>u</math> and <math>v</math>, eg, <math>u_{11} = p^*up</math>, ...<br />
<br />
As <math>u</math> is in <math>A\limp B</math> we have that <math>\mathrm{App}(u, 0)=u_{22}\in B</math>; similarly as <math>v\in B\limp C</math>, thus <math>v\orth\in C\orth\limp B\orth</math>, we have <math>\mathrm{App}(v\orth, 0) = v_{11}\in B\orth</math>. Thus <math>u_{22}v_{11}</math> is nilpotent.<br />
<br />
We define the operator <math>\mathrm{Comp}(u, v)</math> by:<br />
: <math>\begin{align}<br />
\mathrm{Comp}(u, v) &= p(u_{11} + u_{12}\sum(v_{11}u_{22})^k\,v_{11}u_{21})p^*\\<br />
&+ p(u_{12}\sum(v_{11}u_{22})^k\,v_{12})q^*\\<br />
&+ q(v_{21}\sum(u_{22}v_{11})^k\,u_{21})p^*\\<br />
&+ q(v_{22} + v_{21}\sum(u_{22}v_{11})^k\,u_{22}v_{12})q^*<br />
\end{align}</math><br />
<br />
This is well defined since <math>u_{22}v_{11}</math> is nilpotent. As an example let us compute the composition of <math>u</math> and <math>\iota</math> in type <math>B\limp B</math>; recall that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math>, so we get:<br />
: <math><br />
\mathrm{Comp}(u, \iota) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^* = u<br />
</math><br />
A similar computation shows that <math>\mathrm{Comp}(\iota, v) = v</math> (here we use <math>pp^* + qq^* = 1</math>).<br />
<br />
Coming back to the general case we claim that <math>\mathrm{Comp}(u, v)</math> is in <math>A\limp C</math>: let <math>a</math> be an operator in <math>A</math>. By computation we can check that:<br />
: <math>\mathrm{App}(\mathrm{Comp}(u, v), a) = \mathrm{App}(v, \mathrm{App}(u, a))</math>.<br />
Now since <math>u</math> is in <math>A\limp B</math>, <math>\mathrm{App}(u, a)</math> is in <math>B</math> and since <math>v</math> is in <math>B\limp C</math>, <math>\mathrm{App}(v, \mathrm{App}(u, a))</math> is in <math>C</math>.<br />
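Both identities, <math>\mathrm{App}(\mathrm{Comp}(u, v), a) = \mathrm{App}(v, \mathrm{App}(u, a))</math> and <math>\mathrm{Comp}(u, \iota) = u</math>, can be tested on the dictionary encoding of partial permutations; below, <math>\mathtt{chain}(l, m, r)</math> computes the nilpotent sums <math>l\sum_k m^k\, r</math> appearing in the four blocks of the definition (the encoding and all helper names are ours):<br />

```python
def comp(f, g): return {n: f[g[n]] for n in g if g[n] in f}
def inv(f): return {v: k for k, v in f.items()}

N = 1 << 12
p = {n: 2*n for n in range(N)}
q = {n: 2*n + 1 for n in range(N)}
P, Q = inv(p), inv(q)

def ext(u):
    return (comp(P, comp(u, p)), comp(P, comp(u, q)),
            comp(Q, comp(u, p)), comp(Q, comp(u, q)))

def internal(m11, m12, m21, m22):
    return {**comp(p, comp(m11, P)), **comp(p, comp(m12, Q)),
            **comp(q, comp(m21, P)), **comp(q, comp(m22, Q))}

def chain(left, mid, right):
    """left ∘ Σ_k mid^k ∘ right, for nilpotent mid."""
    total, term = {}, right
    while term:
        total.update(comp(left, term))
        term = comp(mid, term)
    return total

def app(u, v):
    u11, u12, u21, u22 = ext(u)
    return {**u22, **chain(comp(u21, v), comp(u11, v), u12)}

def comp_exec(u, v):
    """Comp(u, v), following the four blocks of the definition."""
    u11, u12, u21, u22 = ext(u)
    v11, v12, v21, v22 = ext(v)
    m1 = comp(v11, u22)   # v11 u22, nilpotent by assumption
    m2 = comp(u22, v11)   # u22 v11
    return internal({**u11, **chain(u12, m1, comp(v11, u21))},
                    chain(u12, m1, v12),
                    chain(v21, m2, u21),
                    {**v22, **chain(v21, m2, comp(u22, v12))})

iota = {**comp(p, inv(q)), **comp(q, inv(p))}
a, b = {0: 1, 1: 0}, {0: 2, 2: 0, 1: 1}
c, d = {0: 0, 1: 1, 2: 2}, {0: 1, 1: 2, 2: 0}
u = internal({}, a, b, {})   # App(u, x) = b ∘ x ∘ a
v = internal({}, c, d, {})   # App(v, y) = d ∘ y ∘ c
x = {0: 0, 1: 2, 2: 1}
assert app(comp_exec(u, v), x) == app(v, app(u, x))
assert comp_exec(u, iota) == u
```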
<br />
If we now consider a type <math>D</math> and an operator <math>w</math> in <math>C\limp D</math> then we have:<br />
: <math>\mathrm{Comp}(\mathrm{Comp}(u, v), w) = \mathrm{Comp}(u,<br />
\mathrm{Comp}(v, w))</math>.<br />
<br />
Putting together the results of this section we finally have:<br />
<br />
{{Theorem|<br />
Let GoI(H) be defined by:<br />
* objects are types, ''ie'' sets <math>A</math> of <math>p</math>-isometries satisfying <math>A\biorth = A</math>;<br />
* morphisms from <math>A</math> to <math>B</math> are <math>p</math>-isometries in type <math>A\limp B</math>;<br />
* composition is given by the formula above.<br />
<br />
Then GoI(H) is a star-autonomous category.<br />
}}</div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/GoI_for_MELL:_partial_isometriesGoI for MELL: partial isometries2010-05-15T11:03:54Z<p>Laurent Regnier: New page: = Operators, partial isometries = We will denote by <math>H</math> the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers su...</p>
<hr />
<div>= Operators, partial isometries =<br />
<br />
We will denote by <math>H</math> the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> their ''scalar product'' is:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''orthogonal'' if their scalar product is null. We will say that two subspaces are ''disjoint'' when any two vectors taken in each subspace are orthogonal. Note that this notion is different from the set-theoretic one; in particular two disjoint subspaces always have exactly one vector in common: <math>0</math>.<br />
<br />
The ''norm'' of a vector is the square root of the scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical ''hilbertian basis'' of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kronecker symbol: <math>\delta_{kn}=1</math> if <math>k=n</math>, <math>0</math> otherwise. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
An ''operator'' on <math>H</math> is a ''continuous'' linear map from <math>H</math> to <math>H</math>.<ref>Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.</ref>The set of (bounded) operators is denoted by <math>\mathcal{B}(H)</math>.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are annihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel, ''ie'', the maximal subspace disjoint with the kernel:<br />
<br />
* <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
* <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
* <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>. Adjointness is well behaved w.r.t. composition of operators:<br />
: <math>(uv)^* = v^*u^*</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>0</math> (the projector<br />
on the null subspace) or <math>1</math>, that is an operator <math>p</math><br />
such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is auto-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^* u =<br />
u</math>; this condition entails that we also have <math>u^*uu^* =<br />
u^*</math>. As a consequence <math>u^*u</math> and <math>uu^*</math> are both projectors, called respectively the ''initial'' and the ''final'' projector of <math>u</math> because their (co)domains are respectively the domain and the codomain of <math>u</math>:<br />
* <math>\mathrm{Dom}(u^*u) = \mathrm{Codom}(u^*u) = \mathrm{Dom}(u)</math>;<br />
* <math>\mathrm{Dom}(uu^*) = \mathrm{Codom}(uu^*) = \mathrm{Codom}(u)</math>.<br />
<br />
The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math> that is if <math>u^* u = 1</math> we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries then we have:<br />
* <math>uv^* = 0</math> iff <math>u^*uv^*v = 0</math> iff the domains of <math>u</math> and <math>v</math> are disjoint;<br />
* <math>u^*v = 0</math> iff <math>uu^*vv^* = 0</math> iff the codomains of <math>u</math> and <math>v</math> are disjoint;<br />
* <math>uu^* + vv^* = 1</math> iff the codomains of <math>u</math> and <math>v</math> are disjoint and their direct sum is <math>H</math>.<br />
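A concrete finite-dimensional illustration: on a 3-dimensional truncation, a partial isometry together with its initial projector <math>u^*u</math> and final projector <math>uu^*</math> (numpy; the entries are real so the adjoint is the transpose, and the example matrix is our choice):<br />

```python
import numpy as np

# A partial isometry on a 3-dimensional truncation of H:
# u e_0 = e_1, u e_2 = e_0, and u annihilates e_1 (columns index the inputs).
u = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 0, 0]])
assert (u @ u.T @ u == u).all()   # u u* u = u: u is a partial isometry

init = u.T @ u   # initial projector u*u, projects onto Dom(u)   = span{e0, e2}
fin  = u @ u.T   # final projector  u u*, projects onto Codom(u) = span{e0, e1}
assert (init == np.diag([1, 0, 1])).all()
assert (fin  == np.diag([1, 1, 0])).all()
assert (init @ init == init).all() and (fin @ fin == fin).all()  # idempotent
```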
<br />
= Partial permutations =<br />
<br />
We will now define our proof space which turns out to be the set of partial isometries acting as permutations on the canonical basis <math>(e_n)_{n\in\mathbb{N}}</math>.<br />
<br />
More precisely a ''partial permutation'' <math>\varphi</math> on <math>\mathbb{N}</math> is a one-to-one map defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
* <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
* if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
* the codomain of <math>\varphi\circ\psi</math> is the image of the domain: <math>C_{\varphi\circ\psi} = \{\varphi(\psi(n)), n\in D_{\varphi\circ\psi}\}</math>.<br />
<br />
Partial permutations are well known to form an ''inverse monoid'', a structure that we now detail.<br />
<br />
Given a subset <math>D</math> of <math>\mathbb{N}</math>, the ''partial identity'' on <math>D</math> is the partial permutation <math>\varphi</math> defined by:<br />
* <math>D_\varphi = D</math>;<br />
* <math>\varphi(n) = n</math> for any <math>n\in D_\varphi</math>.<br />
Thus the codomain of <math>\varphi</math> is <math>D</math>.<br />
<br />
The partial identity on <math>D</math> will be denoted by <math>1_D</math>. Partial identities are idempotent for composition.<br />
<br />
Among partial identities one finds the identity on the empty subset, that is the empty map, which we will denote by <math>0</math>, and the identity on <math>\mathbb{N}</math>, which we will denote by <math>1</math>. The latter is the neutral element for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and which satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
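These operations are easy to experiment with: a partial permutation can be represented as a Python dictionary mapping each element of its domain to its image (this encoding and the helper names are ours, for illustration only). A minimal sketch:<br />

```python
def comp(f, g):
    """Composition f∘g: defined where g is defined and g's value lies in f's domain."""
    return {n: f[g[n]] for n in g if g[n] in f}

def inv(f):
    """Inverse partial permutation: swap keys and values."""
    return {v: k for k, v in f.items()}

phi = {0: 3, 3: 1, 5: 5}   # D_phi = {0, 3, 5}, C_phi = {3, 1, 5}
psi = {1: 0, 2: 3, 3: 2}

# Composition keeps exactly the points that survive both maps:
assert comp(phi, psi) == {1: 3, 2: 1}   # 1→0→3, 2→3→1; 3→2 dies since 2 ∉ D_phi

# Inverse monoid laws: φ⁻¹∘φ and φ∘φ⁻¹ are the partial identities on D_φ and C_φ.
assert comp(inv(phi), phi) == {n: n for n in phi}            # 1_{D_φ}
assert comp(phi, inv(phi)) == {n: n for n in phi.values()}   # 1_{C_φ}
```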
<br />
= The proof space =<br />
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other terms if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined so that the definition of <math>u_\varphi</math> reads:<br />
: <math>u_\varphi(e_n) = e_{\varphi(n)}</math>.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. In particular if <math>\varphi</math> is <math>1_D</math> then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.<br />
<br />
{{Definition|<br />
We call ''<math>p</math>-isometry'' a partial isometry of the form <math>u_\varphi</math> where <math>\varphi</math> is a partial permutation on <math>\mathbb{N}</math>. The ''proof space'' <math>\mathcal{P}</math> is the set of all <math>p</math>-isometries.<br />
}}<br />
<br />
{{Proposition|<br />
Let <math>\varphi</math> and <math>\psi</math> be two partial permutations. We have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
The adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the initial projector of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>.<br />
<br />
and the final projector of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u^*_\varphi = u_{1_{C_\varphi}}</math>.<br />
<br />
If <math>p</math> is a projector in <math>\mathcal{P}</math> then there is a partial identity <math>1_D</math> such that <math>p= u_{1_D}</math>.<br />
<br />
Projectors commute, in particular we have:<br />
: <math>u_\varphi u_\varphi^*u_\psi u_\psi^* = u_\psi u_\psi^*u_\varphi u_\varphi^*</math>.<br />
}}<br />
<br />
Note that this entails all the other commutations of projectors: <math>u^*_\varphi u_\varphi u_\psi u^*_\psi = u_\psi u^*_\psi u^*_\varphi u_\varphi</math> and <math>u^*_\varphi u_\varphi u^*_\psi u_\psi = u^*_\psi u_\psi u^*_\varphi u_\varphi</math>.<br />
<br />
In particular note that <math>0</math> is a <math>p</math>-isometry. The set <math>\mathcal{P}</math> is a submonoid of <math>\mathcal{B}(H)</math> but it is not a subalgebra.<ref><math>\mathcal{P}</math> is the normalizing groupoid of the maximal commutative subalgebra of <math>\mathcal{B}(H)</math> consisting of all operators ''diagonalizable'' in the canonical basis.</ref>In general given <math>u,v\in\mathcal{P}</math> we don't necessarily have <math>u+v\in\mathcal{P}</math>. However we have:<br />
<br />
{{Proposition|<br />
Let <math>u, v\in\mathcal{P}</math>. Then <math>u+v\in\mathcal{P}</math> iff <math>u</math> and <math>v</math> have disjoint domains and disjoint codomains, that is:<br />
: <math>u+v\in\mathcal{P}</math> iff <math>uu^*vv^* = u^*uv^*v = 0</math>.<br />
}}<br />
<br />
{{Proof|<br />
Suppose for contradiction that <math>e_n</math> is in the domains of both <math>u</math> and <math>v</math>. There are integers <math>p</math> and <math>q</math> such that <math>u(e_n) = e_p</math> and <math>v(e_n) = e_q</math>, thus <math>(u+v)(e_n) = e_p + e_q</math>, which is not of the form <math>e_k</math>; therefore <math>u+v</math> is not a <math>p</math>-isometry. The argument for codomains is symmetric, using <math>(u+v)^* = u^* + v^*</math>; the converse implication is immediate.<br />
}}<br />
<br />
As a corollary note that if <math>u+v=0</math> then <math>u=v=0</math>.<br />
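The proposition <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math> and its companions can be checked on a finite truncation of <math>H</math>: representing <math>u_\varphi</math> by the 0/1 matrix with entry <math>1</math> at position <math>(\varphi(n), n)</math>, composition of partial permutations matches matrix product and the adjoint matches transposition (real entries, so the adjoint is the transpose). A sketch with numpy; the encoding and names are ours:<br />

```python
import numpy as np

def comp(f, g): return {n: f[g[n]] for n in g if g[n] in f}
def inv(f): return {v: k for k, v in f.items()}

def mat(phi, dim):
    """Matrix of u_φ on span{e_0, …, e_{dim-1}}: u_φ e_n = e_{φ(n)}."""
    m = np.zeros((dim, dim), dtype=int)
    for n, k in phi.items():
        m[k, n] = 1
    return m

phi = {0: 2, 1: 3}          # D_φ = {0, 1}, C_φ = {2, 3}
psi = {2: 0, 3: 1, 4: 4}
d = 5

assert (mat(phi, d) @ mat(psi, d) == mat(comp(phi, psi), d)).all()  # u_φ u_ψ = u_{φ∘ψ}
assert (mat(phi, d).T == mat(inv(phi), d)).all()                    # u_φ* = u_{φ⁻¹}
# Initial and final projectors are the partial identities on D_φ and C_φ:
assert (mat(phi, d).T @ mat(phi, d) == mat({n: n for n in phi}, d)).all()
assert (mat(phi, d) @ mat(phi, d).T == mat({n: n for n in phi.values()}, d)).all()
```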
<br />
= From operators to matrices: internalization/externalization =<br />
<br />
It will be convenient to view operators on <math>H</math> as acting on <math>H\oplus H</math>, and conversely. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\rightsquigarrow p(x)+q(y)</math> where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is<br />
satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are<br />
disjoint, thus we have <math>p^*q = q^*p = 0</math>. As the sum of their<br />
codomains is the full space <math>H</math> we also have <math>pp^* + qq^* = 1</math>.<br />
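On a finite truncation these relations can be verified directly with the dictionary encoding of partial permutations (our encoding, for illustration):<br />

```python
def comp(f, g): return {n: f[g[n]] for n in g if g[n] in f}
def inv(f): return {v: k for k, v in f.items()}

N = 100
p = {n: 2*n for n in range(N)}      # e_n ↦ e_2n
q = {n: 2*n + 1 for n in range(N)}  # e_n ↦ e_2n+1

one = {n: n for n in range(N)}
assert comp(inv(p), p) == one and comp(inv(q), q) == one  # p*p = q*q = 1: full domains
assert comp(inv(p), q) == {} and comp(inv(q), p) == {}    # p*q = q*p = 0: disjoint codomains
# pp* + qq* = 1: the two final projectors cover the whole (truncated) basis
assert {**comp(p, inv(p)), **comp(q, inv(q))} == {n: n for n in range(2*N)}
```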
<br />
Note that we have chosen <math>p</math> and <math>q</math> in <math>\mathcal{P}</math>. However the choice is arbitrary: any two <math>p</math>-isometries with full domain and disjoint codomains would do the job.<br />
<br />
Given an operator <math>u</math> on <math>H</math> we may ''externalize'' it obtaining an operator <math>U</math> on <math>H\oplus H</math> defined by the matrix:<br />
: <math>U = \begin{pmatrix}<br />
u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}</math><br />
where the <math>u_{ij}</math>'s are given by:<br />
: <math>u_{11} = p^*up</math>;<br />
: <math>u_{12} = p^*uq</math>;<br />
: <math>u_{21} = q^*up</math>;<br />
: <math>u_{22} = q^*uq</math>.<br />
<br />
The <math>u_{ij}</math>'s are called the ''external components'' of <math>u</math>. The externalization is functorial in the sense that if <math>v</math> is another operator externalized as:<br />
: <math>V = \begin{pmatrix}<br />
v_{11} & v_{12}\\<br />
v_{21} & v_{22}<br />
\end{pmatrix} <br />
= \begin{pmatrix}<br />
p^*vp & p^*vq\\<br />
q^*vp & q^*vq<br />
\end{pmatrix}<br />
</math><br />
then the externalization of <math>uv</math> is the matrix product <math>UV</math>.<br />
<br />
As <math>pp^* + qq^* = 1</math> we have:<br />
: <math>u = (pp^*+qq^*)u(pp^*+qq^*) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math><br />
which entails that externalization is reversible, its converse being called ''internalization''.<br />
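The externalization/internalization round trip can be checked on the dictionary encoding of partial permutations (the encoding is ours; adjoints are dictionary inverses):<br />

```python
def comp(f, g): return {n: f[g[n]] for n in g if g[n] in f}
def inv(f): return {v: k for k, v in f.items()}

N = 100
p = {n: 2*n for n in range(N)}
q = {n: 2*n + 1 for n in range(N)}
P, Q = inv(p), inv(q)  # p*, q*

u = {0: 3, 3: 5, 5: 0, 1: 1, 2: 4, 4: 2}  # a sample p-isometry (permutation of {0..5})

# Externalization: the four components u_ij = p*up, p*uq, q*up, q*uq.
u11, u12 = comp(P, comp(u, p)), comp(P, comp(u, q))
u21, u22 = comp(Q, comp(u, p)), comp(Q, comp(u, q))

# Internalization: u = p u11 p* + p u12 q* + q u21 p* + q u22 q*.
back = {**comp(p, comp(u11, P)), **comp(p, comp(u12, Q)),
        **comp(q, comp(u21, P)), **comp(q, comp(u22, Q))}
assert back == u   # the round trip recovers u exactly
```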
<br />
If we suppose that <math>u</math> is a <math>p</math>-isometry then so are the components <math>u_{ij}</math>'s. Thus the formula above entails that the four terms of the sum have pairwise disjoint domains and pairwise disjoint codomains from which we deduce:<br />
<br />
{{Proposition|<br />
If <math>u</math> is a <math>p</math>-isometry and <math>u_{ij}</math> are its external components then:<br />
* <math>u_{1j}</math> and <math>u_{2j}</math> have disjoint domains, that is <math>u_{1j}^*u_{1j}u_{2j}^*u_{2j} = 0</math> for <math>j=1,2</math>;<br />
* <math>u_{i1}</math> and <math>u_{i2}</math> have disjoint codomains, that is <math>u_{i1}u_{i1}^*u_{i2}u_{i2}^* = 0</math> for <math>i=1,2</math>.<br />
}}<br />
<br />
As an example of computation in <math>\mathcal{P}</math> let us check that the product of the final projectors of <math>pu_{11}p^*</math> and <math>pu_{12}q^*</math> is null:<br />
: <math>\begin{align}<br />
(pu_{11}p^*)(pu^*_{11}p^*)(pu_{12}q^*)(qu_{12}^*p^*)<br />
&= pu_{11}u_{11}^*u_{12}u_{12}^*p^*\\<br />
&= pp^*upp^*u^*pp^*uqq^*u^*pp^*\\<br />
&= pp^*u(pp^*)(u^*pp^*u)qq^*u^*pp^*\\<br />
&= pp^*u(u^*pp^*u)(pp^*)qq^*u^*pp^*\\<br />
&= pp^*uu^*pp^*u(pp^*)(qq^*)u^*pp^*\\<br />
&= 0<br />
\end{align}</math><br />
where we used the fact that all projectors in <math>\mathcal{P}</math> commute, which is in particular the case of <math>pp^*</math> and <math>u^*pp^*u</math>.<br />
<br />
= Notes and references =<br />
<br />
<references/></div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interactionGeometry of interaction2010-05-15T10:13:27Z<p>Laurent Regnier: /* Execution formula, version 2: composition */ correction</p>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operators algebra: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math> and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'' that bears some formal analogy with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (Multiplicative and Exponential fragment) which is enough to encode lambda-calculus. Since then Girard proposed several improvements: firstly the extension to the additive connectives known as ''Geometry of Interaction 3'' and more recently a complete reformulation using Von Neumann algebras that allows one to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines; the execution formula appears as the composition of two automata interacting through a common interface. Also the execution formula has rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentioned in the section on [[coherent semantics]] under the name ''symmetric reducibility'', and that was first put to use in [[phase semantics]]. First set a general space <math>P</math> called the ''proof space'' because this is where the interpretations of proofs will live. Make sure that <math>P</math> is a (not necessarily commutative) monoid. In the case of GoI, the proof space is a subset of the space of bounded operators on <math>\ell^2</math>.<br />
<br />
Second define a particular subset of <math>P</math> that will be denoted by <math>\bot</math>; then derive a duality on <math>P</math>: for <math>u,v\in P</math>, <math>u</math> and <math>v</math> are dual<ref>In modern terms one says that <math>u</math> and <math>v</math> are ''polar''.</ref>iff <math>uv\in\bot</math>.<br />
<br />
For the GoI, two dualities have proved to work; we will consider the first one: nilpotency, ''ie'', <math>\bot</math> is the set of nilpotent operators in <math>P</math>. Let us spell this out: two operators <math>u</math> and <math>v</math> are dual if there is a nonnegative integer <math>n</math> such that <math>(uv)^n = 0</math>. This duality is symmetric: if <math>uv</math> is nilpotent then <math>vu</math> is nilpotent also.<br />
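The symmetry follows from the algebraic identity <math>(vu)^{n+1} = v(uv)^n u</math>. A toy check with <math>2\times 2</math> matrices (numpy; the matrices are our choice of example):<br />

```python
import numpy as np
from numpy.linalg import matrix_power

u = np.array([[0, 1], [0, 0]])
v = np.array([[0, 0], [0, 1]])

n = 2
assert (matrix_power(u @ v, n) == 0).all()       # uv is nilpotent: (uv)^2 = 0
# (vu)^(n+1) = v (uv)^n u, so vu is nilpotent as well:
assert (matrix_power(v @ u, n + 1) == 0).all()
```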
<br />
When <math>X</math> is a subset of <math>P</math> define <math>X\orth</math> as the set of elements of <math>P</math> that are dual to all elements of <math>X</math>:<br />
: <math>X\orth = \{u\in P, \forall v\in X, uv\in\bot\}</math>.<br />
<br />
This construction has a few properties that we will use without mention in the sequel. Given two subsets <math>X</math> and <math>Y</math> of <math>P</math> we have:<br />
* if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;<br />
* <math>X\subset X\biorth</math>;<br />
* <math>X\triorth = X\orth</math>.<br />
<br />
Last define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for all operator <math>v\in T\orth</math>, that is such that <math>u'v\in\bot</math> for all <math>u'\in T</math>, we have <math>uv\in\bot</math>.<br />
<br />
The real work<ref>The difficulty is to find the right duality that will make logical operations interpretable. General conditions that allow one to achieve this have been formulated by Hyland and Schalk thanks to their theory of ''double gluing''.</ref>is now to interpret logical operations, that is to associate a type to each formula, an object to each proof, and show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
=== Operators, partial isometries ===<br />
<br />
We will denote by <math>H</math> the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> their ''scalar product'' is:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''orthogonal'' if their scalar product is null. We will say that two subspaces are ''disjoint'' when any two vectors taken in each subspace are orthogonal. Note that this notion is different from the set-theoretic one; in particular two disjoint subspaces always have exactly one vector in common: <math>0</math>.<br />
<br />
The ''norm'' of a vector is the square root of the scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical ''hilbertian basis'' of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kronecker symbol: <math>\delta_{kn}=1</math> if <math>k=n</math>, <math>0</math> otherwise. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
An ''operator'' on <math>H</math> is a ''continuous'' linear map from <math>H</math> to <math>H</math>.<ref>Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.</ref> The set of (bounded) operators is denoted by <math>\mathcal{B}(H)</math>.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are annihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel, ''i.e.'', the maximal subspace disjoint from the kernel:<br />
<br />
* <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
* <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
* <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>. Adjointness is well behaved w.r.t. composition of operators:<br />
: <math>(uv)^* = v^*u^*</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>0</math> (the projector on the null subspace) or <math>1</math>, that is, an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is auto-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^*u = u</math>; this condition entails that we also have <math>u^*uu^* = u^*</math>. As a consequence <math>u^*u</math> and <math>uu^*</math> are both projectors, called respectively the ''initial'' and the ''final'' projector of <math>u</math> because their (co)domains are respectively the domain and the codomain of <math>u</math>:<br />
* <math>\mathrm{Dom}(u^*u) = \mathrm{Codom}(u^*u) = \mathrm{Dom}(u)</math>;<br />
* <math>\mathrm{Dom}(uu^*) = \mathrm{Codom}(uu^*) = \mathrm{Codom}(u)</math>.<br />
<br />
The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math>, that is if <math>u^* u = 1</math>, we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries then we have:<br />
* <math>uv^* = 0</math> iff <math>u^*uv^*v = 0</math> iff the domains of <math>u</math> and <math>v</math> are disjoint;<br />
* <math>u^*v = 0</math> iff <math>uu^*vv^* = 0</math> iff the codomains of <math>u</math> and <math>v</math> are disjoint;<br />
* <math>uu^* + vv^* = 1</math> iff the codomains of <math>u</math> and <math>v</math> are disjoint and their direct sum is <math>H</math>.<br />
<br />
=== Partial permutations ===<br />
<br />
We will now define our proof space, which turns out to be the set of partial isometries acting as partial permutations on the canonical basis <math>(e_n)_{n\in\mathbb{N}}</math>.<br />
<br />
More precisely a ''partial permutation'' <math>\varphi</math> on <math>\mathbb{N}</math> is a one-to-one map defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
* <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
* if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
* the codomain of <math>\varphi\circ\psi</math> is the image of the domain: <math>C_{\varphi\circ\psi} = \{\varphi(\psi(n)), n\in D_{\varphi\circ\psi}\}</math>.<br />
<br />
Partial permutations are well known to form an ''inverse monoid'' structure, which we now detail.<br />
<br />
Given a subset <math>D</math> of <math>\mathbb{N}</math>, the ''partial identity'' on <math>D</math> is the partial permutation <math>\varphi</math> defined by:<br />
* <math>D_\varphi = D</math>;<br />
* <math>\varphi(n) = n</math> for any <math>n\in D_\varphi</math>.<br />
Thus the codomain of <math>\varphi</math> is <math>D</math>.<br />
<br />
The partial identity on <math>D</math> will be denoted by <math>1_D</math>. Partial identities are idempotent for composition.<br />
<br />
Among partial identities one finds the identity on the empty subset, that is the empty map, which we will denote by <math>0</math>, and the identity on <math>\mathbb{N}</math>, which we will denote by <math>1</math>. This latter is the neutral element for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and which satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
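The inverse-monoid identities above can be checked on the same dictionary encoding of partial permutations; the helper names `inverse` and `partial_identity` are illustrative, not part of the article.<br />

```python
def compose(phi, psi):
    """phi o psi on the dictionary encoding of partial permutations."""
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    """phi^{-1}: swap each pair (n, phi(n)); injectivity makes this well defined."""
    return {m: n for n, m in phi.items()}

def partial_identity(D):
    """1_D: the identity restricted to the subset D."""
    return {n: n for n in D}

phi = {0: 3, 1: 0, 4: 2}
# phi^{-1} o phi = 1_{D_phi} and phi o phi^{-1} = 1_{C_phi}:
left = compose(inverse(phi), phi)
right = compose(phi, inverse(phi))
```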
<br />
=== The proof space ===<br />
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other terms if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined so that the definition of <math>u_\varphi</math> reads:<br />
: <math>u_\varphi(e_n) = e_{\varphi(n)}</math>.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. In particular if <math>\varphi</math> is <math>1_D</math> then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.<br />
<br />
{{Definition|<br />
We call ''<math>p</math>-isometry'' a partial isometry of the form <math>u_\varphi</math> where <math>\varphi</math> is a partial permutation on <math>\mathbb{N}</math>. The ''proof space'' <math>\mathcal{P}</math> is the set of all <math>p</math>-isometries.<br />
}}<br />
<br />
{{Proposition|<br />
Let <math>\varphi</math> and <math>\psi</math> be two partial permutations. We have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
The adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the initial projector of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>.<br />
<br />
and the final projector of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u^*_\varphi = u_{1_{C_\varphi}}</math>.<br />
<br />
If <math>p</math> is a projector in <math>\mathcal{P}</math> then there is a partial identity <math>1_D</math> such that <math>p= u_{1_D}</math>.<br />
<br />
Projectors commute, in particular we have:<br />
: <math>u_\varphi u_\varphi^*u_\psi u_\psi^* = u_\psi u_\psi^*u_\varphi u_\varphi^*</math>.<br />
}}<br />
<br />
Note that this entails all the other commutations of projectors: <math>u^*_\varphi u_\varphi u_\psi u^*_\psi = u_\psi u^*_\psi u^*_\varphi u_\varphi</math> and <math>u^*_\varphi u_\varphi u^*_\psi u_\psi = u^*_\psi u_\psi u^*_\varphi u_\varphi</math>.<br />
<br />
In particular note that <math>0</math> is a <math>p</math>-isometry. The set <math>\mathcal{P}</math> is a submonoid of <math>\mathcal{B}(H)</math> but it is not a subalgebra.<ref><math>\mathcal{P}</math> is the normalizing groupoid of the maximal commutative subalgebra of <math>\mathcal{B}(H)</math> consisting of all operators ''diagonalizable'' in the canonical basis.</ref> In general given <math>u,v\in\mathcal{P}</math> we don't necessarily have <math>u+v\in\mathcal{P}</math>. However we have:<br />
<br />
{{Proposition|<br />
Let <math>u, v\in\mathcal{P}</math>. Then <math>u+v\in\mathcal{P}</math> iff <math>u</math> and <math>v</math> have disjoint domains and disjoint codomains, that is:<br />
: <math>u+v\in\mathcal{P}</math> iff <math>uu^*vv^* = u^*uv^*v = 0</math>.<br />
}}<br />
<br />
{{Proof|<br />
Suppose for contradiction that <math>e_n</math> is in the domains of both <math>u</math> and <math>v</math>. There are integers <math>p</math> and <math>q</math> such that <math>u(e_n) = e_p</math> and <math>v(e_n) = e_q</math>, thus <math>(u+v)(e_n) = e_p + e_q</math> which is not a basis vector; therefore <math>u+v</math> is not a <math>p</math>-isometry. The argument for codomains is symmetric. Conversely if the domains and the codomains are disjoint then <math>u+v</math> is the <math>p</math>-isometry associated to the union of the graphs of the two partial permutations.<br />
}}<br />
<br />
As a corollary note that if <math>u+v=0</math> then <math>u=v=0</math>.<br />
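In the dictionary encoding of partial permutations the proposition reads: the union of two graphs is again a partial permutation exactly when neither the domains nor the codomains overlap. A minimal sketch (the helper name `add` is hypothetical):<br />

```python
def add(phi, psi):
    """u_phi + u_psi is a p-isometry iff the domains and the codomains of
    phi and psi are disjoint; in that case it is the union of the two graphs."""
    if set(phi) & set(psi):
        return None                            # overlapping domains
    if set(phi.values()) & set(psi.values()):
        return None                            # overlapping codomains
    return {**phi, **psi}

# 0 is in both domains: the sum is not a p-isometry.
clash_dom = add({0: 1}, {0: 2})
# 1 is in both codomains: not a p-isometry either.
clash_cod = add({0: 1}, {2: 1})
# disjoint domains and codomains: the sum is the p-isometry of the union.
ok = add({0: 1}, {2: 3})
```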
<br />
=== From operators to matrices: internalization/externalization ===<br />
<br />
It will be convenient to view operators on <math>H</math> as acting on <math>H\oplus H</math>, and conversely. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\rightsquigarrow p(x)+q(y)</math> where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is<br />
satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are<br />
disjoint, thus we have <math>p^*q = q^*p = 0</math>. As the sum of their<br />
codomains is the full space <math>H</math> we also have <math>pp^* + qq^* = 1</math>.<br />
<br />
Note that we have chosen <math>p</math> and <math>q</math> in <math>\mathcal{P}</math>. However the choice is arbitrary: any two <math>p</math>-isometries with full domain and disjoint codomains would do the job.<br />
<br />
Given an operator <math>u</math> on <math>H</math> we may ''externalize'' it obtaining an operator <math>U</math> on <math>H\oplus H</math> defined by the matrix:<br />
: <math>U = \begin{pmatrix}<br />
u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}</math><br />
where the <math>u_{ij}</math>'s are given by:<br />
: <math>u_{11} = p^*up</math>;<br />
: <math>u_{12} = p^*uq</math>;<br />
: <math>u_{21} = q^*up</math>;<br />
: <math>u_{22} = q^*uq</math>.<br />
<br />
The <math>u_{ij}</math>'s are called the ''external components'' of <math>u</math>. The externalization is functorial in the sense that if <math>v</math> is another operator externalized as:<br />
: <math>V = \begin{pmatrix}<br />
v_{11} & v_{12}\\<br />
v_{21} & v_{22}<br />
\end{pmatrix} <br />
= \begin{pmatrix}<br />
p^*vp & p^*vq\\<br />
q^*vp & q^*vq<br />
\end{pmatrix}<br />
</math><br />
then the externalization of <math>uv</math> is the matrix product <math>UV</math>.<br />
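Externalization can be sketched at the level of basis indices: with <math>p: e_n\mapsto e_{2n}</math> and <math>q: e_n\mapsto e_{2n+1}</math>, the component <math>u_{ij}</math> of a <math>p</math>-isometry is read off from the parities of input and output indices. The functoriality claim can then be tested on an example; the code below is an illustrative encoding, not part of the construction.<br />

```python
def compose(phi, psi):
    """phi o psi on the dictionary encoding of partial permutations."""
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def externalize(u):
    """Split the partial permutation u into its four components u_ij:
    column j is the parity of the input index (1 = even, the p side),
    row i is the parity of the output index."""
    comp = {(i, j): {} for i in (1, 2) for j in (1, 2)}
    for n, m in u.items():
        i = 1 if m % 2 == 0 else 2
        j = 1 if n % 2 == 0 else 2
        comp[(i, j)][n // 2] = m // 2
    return comp

v = {0: 2, 1: 3, 2: 1}
u = {2: 0, 3: 5, 1: 4}
U, V, UV = externalize(u), externalize(v), externalize(compose(u, v))

# Functoriality: the (1,1) entry of the matrix product, u11 v11 + u12 v21,
# equals (uv)11 (the two summands automatically have disjoint domains).
entry11 = {**compose(U[(1, 1)], V[(1, 1)]), **compose(U[(1, 2)], V[(2, 1)])}
```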
<br />
As <math>pp^* + qq^* = 1</math> we have:<br />
: <math>u = (pp^*+qq^*)u(pp^*+qq^*) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math><br />
which entails that externalization is reversible, its inverse being called ''internalization''.<br />
<br />
If we suppose that <math>u</math> is a <math>p</math>-isometry then so are the components <math>u_{ij}</math>'s. Thus the formula above entails that the four terms of the sum have pairwise disjoint domains and pairwise disjoint codomains from which we deduce:<br />
<br />
{{Proposition|<br />
If <math>u</math> is a <math>p</math>-isometry and <math>u_{ij}</math> are its external components then:<br />
* <math>u_{1j}</math> and <math>u_{2j}</math> have disjoint domains, that is <math>u_{1j}^*u_{1j}u_{2j}^*u_{2j} = 0</math> for <math>j=1,2</math>;<br />
* <math>u_{i1}</math> and <math>u_{i2}</math> have disjoint codomains, that is <math>u_{i1}u_{i1}^*u_{i2}u_{i2}^* = 0</math> for <math>i=1,2</math>.<br />
}}<br />
<br />
As an example of computation in <math>\mathcal{P}</math> let us check that the product of the final projectors of <math>pu_{11}p^*</math> and <math>pu_{12}q^*</math> is null:<br />
: <math>\begin{align}<br />
(pu_{11}p^*)(pu^*_{11}p^*)(pu_{12}q^*)(qu_{12}^*p^*)<br />
&= pu_{11}u_{11}^*u_{12}u_{12}^*p^*\\<br />
&= pp^*upp^*u^*pp^*uqq^*u^*pp^*\\<br />
&= pp^*u(pp^*)(u^*pp^*u)qq^*u^*pp^*\\<br />
&= pp^*u(u^*pp^*u)(pp^*)qq^*u^*pp^*\\<br />
&= pp^*uu^*pp^*u(pp^*)(qq^*)u^*pp^*\\<br />
&= 0<br />
\end{align}</math><br />
where we used the fact that all projectors in <math>\mathcal{P}</math> commute, which is in particular the case of <math>pp^*</math> and <math>u^*pp^*u</math>.<br />
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are <math>p</math>-isometries we say they are dual when <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators. A ''type'' is a subset of <math>\mathcal{P}</math> that is equal to its bidual. In particular <math>X\orth</math> is a type for any <math>X\subset\mathcal{P}</math>. We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
<br />
=== The tensor and the linear implication ===<br />
<br />
If <math>u</math> and <math>v</math> are two <math>p</math>-isometries, summing them doesn't in general produce a <math>p</math>-isometry. However as <math>pup^*</math> and <math>qvq^*</math> have disjoint domains and disjoint codomains, it is true that <math>pup^* + qvq^*</math> is a <math>p</math>-isometry. Given two types <math>A</math> and <math>B</math>, we thus define their ''tensor'' by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by bidual to make sure that we obtain a type.<br />
<br />
From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
{{Remark|<br />
This so-called tensor resembles a sum rather than a product. We will stick to this terminology though because it defines the interpretation of the tensor connective of linear logic.<br />
}}<br />
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we get:<br />
: <math>A\limp B = \{u\in\mathcal{P}\text{ s.t. } \forall v\in A, \forall w\in B\orth,\, u.(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
=== The identity ===<br />
<br />
Given a type <math>A</math> we are to find an operator <math>\iota</math> in type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. This is the case since <math>u</math> is in <math>A</math> and <math>v</math> in <math>A\orth</math>.<br />
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
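At the level of basis indices, <math>\iota = pq^* + qp^*</math> is simply the permutation exchanging <math>e_{2n}</math> and <math>e_{2n+1}</math>, and the antidiagonal matrix can be checked on the dictionary encoding (truncated to finitely many indices for the sake of the test; an illustrative sketch only):<br />

```python
def externalize(u):
    """External components of a partial permutation, with p: n -> 2n, q: n -> 2n+1."""
    comp = {(i, j): {} for i in (1, 2) for j in (1, 2)}
    for n, m in u.items():
        comp[(1 if m % 2 == 0 else 2, 1 if n % 2 == 0 else 2)][n // 2] = m // 2
    return comp

N = 8
iota = {}
for n in range(N):
    iota[2 * n] = 2 * n + 1     # the q p* part
    iota[2 * n + 1] = 2 * n     # the p q* part

I = externalize(iota)
# iota_11 = iota_22 = 0 and iota_12 = iota_21 = 1: the antidiagonal matrix.
```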
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
<br />
=== The execution formula, version 1: application ===<br />
<br />
{{Definition|<br />
Let <math>u</math> and <math>v</math> be two operators; as above denote by <math>u_{ij}</math> the external components of <math>u</math>. If <math>u_{11}v</math> is nilpotent we define the ''application of <math>u</math> to <math>v</math>'' by:<br />
: <math>\mathrm{App}(u,v) = u_{22} + u_{21}v\sum_k(u_{11}v)^ku_{12}</math>.<br />
}}<br />
<br />
Note that the hypothesis that <math>u_{11}v</math> is nilpotent entails that the sum <math>\sum_k(u_{11}v)^k</math> is actually finite. It would be enough to assume that this sum converges. For simplicity we stick to the nilpotency condition, but we should mention that weak nilpotency would do as well.<br />
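On basis indices the execution formula is a token-passing loop: an index enters through <math>u_{12}</math>, bounces through <math>v</math> and <math>u_{11}</math> as long as possible, and exits through <math>u_{21}</math>; nilpotency of <math>u_{11}v</math> guarantees termination. A sketch over the dictionary encoding of partial permutations (the component dictionary keyed by <math>(i,j)</math> is an assumption of this encoding):<br />

```python
def app(uc, v, max_steps=1000):
    """App(u, v) = u22 + u21 v sum_k (u11 v)^k u12, computed path by path.
    uc maps (i, j) to the partial permutation u_ij; v is a partial permutation."""
    result = dict(uc[(2, 2)])                  # the direct u22 paths
    for n, y in uc[(1, 2)].items():            # enter through u12
        for _ in range(max_steps):
            if y not in v:
                break                          # the path dies: e_n is sent to 0
            z = v[y]
            if z in uc[(1, 1)]:
                y = uc[(1, 1)][z]              # one more turn through u11 v
            else:
                if z in uc[(2, 1)]:
                    result[n] = uc[(2, 1)][z]  # exit through u21
                break
        else:
            raise ValueError("u11 v is not nilpotent")
    return result

uc = {(1, 1): {3: 7}, (1, 2): {0: 2}, (2, 1): {4: 9}, (2, 2): {1: 1}}
v = {2: 3, 7: 4}
# index 0: u12 -> 2, v -> 3, u11 -> 7, v -> 4, u21 -> 9; index 1: u22 -> 1.
out = app(uc, v)
```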
<br />
{{Theorem|<br />
If <math>u</math> and <math>v</math> are <math>p</math>-isometries such that <math>u_{11}v</math> is nilpotent, then <math>\mathrm{App}(u,v)</math> is also a <math>p</math>-isometry.<br />
}}<br />
<br />
{{Proof|<br />
Let us write <math>E_k = u_{21}v(u_{11}v)^ku_{12}</math>. Recall that, being external components of the <math>p</math>-isometry <math>u</math>, the operators <math>u_{22}</math> and <math>u_{12}</math> have disjoint domains. Thus it is also the case of <math>u_{22}</math> and <math>E_k</math>. Similarly <math>u_{22}</math> and <math>E_k</math> have disjoint codomains because <math>u_{22}</math> and <math>u_{21}</math> have disjoint codomains.<br />
<br />
Let now <math>k</math> and <math>l</math> be two integers such that <math>k>l</math> and let us compute for example the intersection of the codomains of <math>E_k</math> and <math>E_l</math>:<br />
: <math><br />
E_kE^*_kE_lE^*_l = (u_{21}v(u_{11}v)^ku_{12})(u^*_{12}(v^*u^*_{11})^kv^*u^*_{21})(u_{21}v(u_{11}v)^lu_{12})(u^*_{12}(v^*u^*_{11})^lv^*u_{21}^*)<br />
</math><br />
As <math>k>l</math> we may write <math>(v^*u_{11}^*)^k = (v^*u^*_{11})^{k-l-1}v^*u^*_{11}(v^*u^*_{11})^l</math>. Let us write <math>E = u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}</math> so that <math>E_kE^*_kE_lE^*_l = u_{21}v(u_{11}v)^ku_{12}u^*_{12}(v^*u^*_{11})^{k-l-1}v^*Eu^*_{12}(v^*u^*_{11})^lv^*u_{21}^*</math>. We have:<br />
: <math>\begin{align}<br />
E &= u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}\\<br />
&= (u^*_{11}u_{11}u^*_{11})(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}\\<br />
&= u^*_{11}(u_{11}u^*_{11})\bigl((v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^l\bigr)u_{12}\\<br />
&= u^*_{11}\bigl((v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^l\bigr)(u_{11}u^*_{11})u_{12}\\<br />
&= u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{11}u^*_{11}u_{12}\\<br />
&= 0<br />
\end{align}</math><br />
because <math>u_{11}</math> and <math>u_{12}</math> have disjoint codomains, thus <math>u^*_{11}u_{12} = 0</math>. <br />
<br />
Similarly we can show that <math>E_k</math> and <math>E_l</math> have disjoint domains. Therefore we have proved that all terms of the sum <math>\mathrm{App}(u,v)</math> have disjoint domains and disjoint codomains. Consequently <math>\mathrm{App}(u,v)</math> is a <math>p</math>-isometry.<br />
}}<br />
<br />
{{Theorem|<br />
Let <math>A</math> and <math>B</math> be two types and <math>u</math> a <math>p</math>-isometry. Then the two following conditions are equivalent:<br />
# <math>u\in A\limp B</math>;<br />
# for any <math>v\in A</math> we have:<br />
#* <math>u_{11}v</math> is nilpotent and<br />
#* <math>\mathrm{App}(u, v)\in B</math>.<br />
}}<br />
<br />
{{Proof|<br />
Let <math>v</math> and <math>w</math> be two <math>p</math>-isometries. If we compute<br />
: <math>(u.(pvp^* + qwq^*))^n = \bigl((pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*)(pvp^* + qwq^*)\bigr)^n</math><br />
we get a finite sum of monomial operators of the form:<br />
# <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math><br />
# <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{12}w(u_{22}w)^{i_m}q^*</math>,<br />
# <math>q(u_{22}w)^{i_0}u_{21}v(u_{11}v)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math> or<br />
# <math>q(u_{22}w)^{i_0}u_{21}v(u_{11}v)^{i_1}\dots u_{12}w(u_{22}w)^{i_m}q^*</math>,<br />
for all tuples of (nonnegative) integers <math>(i_0,\dots, i_m)</math> such that <math>i_0+\cdots+i_m+m = n</math>.<br />
<br />
Each of these monomials is a <math>p</math>-isometry. Furthermore they have disjoint domains and disjoint codomains because their sum is the <math>p</math>-isometry <math>(u.(pvp^* + qwq^*))^n</math>. This entails that <math>(u.(pvp^* + qwq^*))^n = 0</math> iff all these monomials are null.<br />
<br />
Suppose <math>u_{11}v</math> is nilpotent and consider:<br />
: <math>\bigl(\mathrm{App}(u,v)w\bigr)^n = \biggl(\bigl(u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12}\bigr)w\biggr)^n</math>.<br />
Developing we get a finite sum of monomials of the form:<br />
: 5. <math>(u_{22}w)^{l_0}u_{21}v(u_{11}v)^{k_1}u_{12}w(u_{22}w)^{l_1}\dots u_{21}v(u_{11}v)^{k_m}u_{12}w(u_{22}w)^{l_m}</math><br />
for all tuples <math>(l_0, k_1, l_1,\dots, k_m, l_m)</math> such that <math>l_0+\cdots+l_m + m = n</math> and <math>k_i</math> is less than the degree of nilpotency of <math>u_{11}v</math> for all <math>i</math>.<br />
<br />
Again as these monomials are <math>p</math>-isometries and their sum is the <math>p</math>-isometry <math>(\mathrm{App}(u,v)w)^n</math>, they have pairwise disjoint domains and pairwise disjoint codomains. Note that each of these monomials is equal to <math>q^*Mq</math> where <math>M</math> is a monomial of type 4 above.<br />
<br />
As before we thus have that <math>\bigl(\mathrm{App}(u,v)w\bigr)^n = 0</math> iff all monomials of type 5 are null.<br />
<br />
Suppose now that <math>u\in A\limp B</math> and <math>v\in A</math>. Then, since <math>0\in B\orth</math> (<math>0</math> belongs to any type), <math>u.(pvp^*)</math> is nilpotent. As <math>(u.(pvp^*))^n = p(u_{11}v)^np^* + qu_{21}v(u_{11}v)^{n-1}p^*</math> for <math>n\geq 1</math>, this entails that <math>u_{11}v</math> is nilpotent.<br />
<br />
Suppose further that <math>w\in B\orth</math>. Then <math>u.(pvp^*+qwq^*)</math> is nilpotent, thus there is an <math>N</math> such that <math>(u.(pvp^* + qwq^*))^n=0</math> for any <math>n\geq N</math>. This entails that all monomials of type 1 to 4 are null. Therefore all monomials appearing in the development of <math>(\mathrm{App}(u,v)w)^N</math> are null, which proves that <math>\mathrm{App}(u,v)w</math> is nilpotent. Thus <math>\mathrm{App}(u,v)\in B</math>.<br />
<br />
Conversely suppose that for any <math>v\in A</math> and <math>w\in B\orth</math>, <math>u_{11}v</math> and <math>\mathrm{App}(u,v)w</math> are nilpotent. Let <math>P</math> and <math>N</math> be their respective degrees of nilpotency and put <math>n=N(P+1)+N</math>. Then we claim that all monomials of type 1 to 4 appearing in the development of <math>(u.(pvp^*+qwq^*))^n</math> are null.<br />
<br />
Consider for example a monomial of type 1:<br />
: <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math><br />
with <math>i_0+\cdots+i_m + m = n</math>. Note that <math>m</math> must be even.<br />
<br />
If <math>i_{2k}\geq P</math> for some <math>0\leq k\leq m/2</math> then <math>(u_{11}v)^{i_{2k}}=0</math> thus our monomial is null. Otherwise if <math>i_{2k}<P</math> for all <math>k</math> we have:<br />
: <math>i_1+i_3+\cdots +i_{m-1} + m/2 = n - m/2 - (i_0+i_2+\cdots +i_m)</math><br />
thus:<br />
: <math>i_1+i_3+\cdots +i_{m-1} + m/2\geq n - m/2 - (1+m/2)P</math>.<br />
Now if <math>m/2\geq N</math> then <math>i_1+\cdots+i_{m-1}+m/2 \geq N</math>. Otherwise <math>1+m/2\leq N</math> thus<br />
: <math>i_1+i_3+\cdots +i_{m-1} + m/2\geq n - N - NP = N</math>.<br />
Since <math>N</math> is the degree of nilpotency of <math>\mathrm{App}(u,v)w</math> we have that the monomial:<br />
: <math>(u_{22}w)^{i_1}u_{21}v(u_{11}v)^{i_2}u_{12}w\dots(u_{11}v)^{i_{m-2}}u_{12}w(u_{22}w)^{i_{m-1}}</math><br />
is null, thus also the monomial of type 1 we started with.<br />
}}<br />
<br />
{{Corollary|<br />
If <math>A</math> and <math>B</math> are types then we have:<br />
: <math>A\limp B = \{u\in\mathcal{P} \text{ such that }\forall v\in A: u_{11}v\in\bot\text{ and } \mathrm{App}(u, v)\in B\}</math>.<br />
}}<br />
<br />
As an example if we compute the application of the interpretation of the identity <math>\iota</math> in type <math>A\limp A</math> to the operator <math>v\in A</math> then we have:<br />
: <math>\mathrm{App}(\iota, v) = \iota_{22} + \iota_{21}v\sum(\iota_{11}v)^k\iota_{12}</math>.<br />
Now recall that <math>\iota = pq^* + qp^*</math> so that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math> and we thus get:<br />
: <math>\mathrm{App}(\iota, v) = v</math><br />
as expected.<br />
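The identity <math>\mathrm{App}(\iota, v) = v</math> can be replayed concretely on basis indices, a sketch under the dictionary encoding of partial permutations (truncated to finitely many indices):<br />

```python
def app(uc, v, max_steps=1000):
    """App(u, v) = u22 + u21 v sum_k (u11 v)^k u12 on partial permutations."""
    result = dict(uc[(2, 2)])
    for n, y in uc[(1, 2)].items():
        for _ in range(max_steps):
            if y not in v:
                break                          # the path dies
            z = v[y]
            if z in uc[(1, 1)]:
                y = uc[(1, 1)][z]
            else:
                if z in uc[(2, 1)]:
                    result[n] = uc[(2, 1)][z]
                break
        else:
            raise ValueError("u11 v is not nilpotent")
    return result

N = 6
ident = {n: n for n in range(N)}
# iota_11 = iota_22 = 0 and iota_12 = iota_21 = 1:
iota_c = {(1, 1): {}, (2, 2): {}, (1, 2): ident, (2, 1): ident}

v = {0: 2, 2: 5, 4: 4}   # any partial permutation on indices < N
```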
<br />
=== The tensor rule ===<br />
<br />
Now let <math>A, A', B</math> and <math>B'</math> be types and consider two operators <math>u</math> and <math>u'</math> respectively in <math>A\limp B</math> and <math>A'\limp B'</math>. We define an operator <math>u\tens u'</math> by:<br />
: <math>\begin{align}<br />
u\tens u' &= ppp^*upp^*p^* + qpq^*upp^*p^* + ppp^*uqp^*q^* + qpq^*uqp^*q^*\\<br />
&+ pqp^*u'pq^*p^* + qqq^*u'pq^*p^* + pqp^*u'qq^*q^* + qqq^*u'qq^*q^*<br />
\end{align}</math><br />
<br />
Once again the notation is motivated by linear logic syntax and conflicts with linear algebra practice, since what we denote by <math>u\tens u'</math> is actually the internalization of the direct sum <math>u\oplus u'</math>.<br />
<br />
Indeed if we think of <math>u</math> and <math>u'</math> as the internalizations of the matrices:<br />
: <math><br />
\begin{pmatrix}u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}<br />
</math> and <math><br />
\begin{pmatrix}u'_{11} & u'_{12}\\<br />
u'_{21} & u'_{22}<br />
\end{pmatrix}</math><br />
then we may write:<br />
: <math>\begin{align}<br />
u\tens u' &= ppu_{11}p^*p^* + qpu_{21}p^*p^* + ppu_{12}p^*q^* + qpu_{22}p^*q^*\\<br />
&+ pqu'_{11}q^*p^* + qqu'_{21}q^*p^* + pqu'_{12}q^*q^* + qqu'_{22}q^*q^*<br />
\end{align}</math><br />
<br />
Thus the components of <math>u\tens u'</math> are given by:<br />
: <math>(u\tens u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math><br />
and we see that <math>u\tens u'</math> is actually the internalization of the matrix:<br />
: <math><br />
\begin{pmatrix}<br />
u_{11} & 0 & u_{12} & 0 \\<br />
0 & u'_{11} & 0 & u'_{12} \\<br />
u_{21} & 0 & u_{22} & 0 \\<br />
0 & u'_{21} & 0 & u'_{22} \\<br />
\end{pmatrix}<br />
</math><br />
<br />
We now show that if <math>u</math> and <math>u'</math> are in types <math>A\limp B</math> and <math>A'\limp B'</math>, then <math>u\tens u'</math> is in <math>A\tens A'\limp B\tens B'</math>. For this we consider <math>v</math> and <math>v'</math> respectively in <math>A</math> and <math>A'</math>, so that <math>pvp^* + qv'q^*</math> is in <math>A\tens A'</math>, and we show that <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)\in B\tens B'</math>.<br />
<br />
Since <math>u</math> and <math>u'</math> are in <math>A\limp B</math> and <math>A'\limp B'</math> we have that <math>u_{11}v</math> and <math>u'_{11}v'</math> are nilpotent and that <math>\mathrm{App}(u, v)</math> and <math>\mathrm{App}(u', v')</math> are respectively in <math>B</math> and <math>B'</math>, thus:<br />
: <math>p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^* \in B\tens B'</math>.<br />
<br />
But we have:<br />
: <math>\begin{align}<br />
\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^n<br />
&= \bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^n\\<br />
&= (pu_{11}vp^* + qu'_{11}v'q^*)^n\\<br />
&= p(u_{11}v)^np^* + q(u'_{11}v')^nq^*<br />
\end{align}</math><br />
<br />
Therefore <math>(u\tens u')_{11}(pvp^* + qv'q^*)</math> is nilpotent. So we can compute <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)</math>:<br />
: <math>\begin{align}<br />
&\mathrm{App}(u\tens u', pvp^* + qv'q^*)\\<br />
&= (u\tens u')_{22} + (u\tens u')_{21}(pvp^* + qv'q^*)\sum\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^k(u\tens u')_{12}\\<br />
&= pu_{22}p^* + qu'_{22}q^* + (pu_{21}p^* + qu'_{21}q^*)(pvp^* + qv'q^*)\sum\bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^k(pu_{12}p^* + qu'_{12}q^*)\\<br />
&= p\bigl(u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}\bigr)p^* + q\bigl(u'_{22} + u'_{21}v'\sum(u'_{11}v')^ku'_{12}\bigr)q^*\\<br />
&= p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^*<br />
\end{align}</math><br />
thus lives in <math>B\tens B'</math>.<br />
<br />
=== Other monoidal constructions ===<br />
<br />
==== Contraposition ====<br />
<br />
Let <math>A</math> and <math>B</math> be some types; we have:<br />
: <math>A\limp B = A\orth\limpinv B\orth</math><br />
<br />
Indeed, <math>u\in A\limp B</math> means that for any <math>v</math> and <math>w</math> in respectively <math>A</math> and <math>B\orth</math> we have <math>u.(pvp^* + qwq^*)\in\bot</math> which is exactly the definition of <math>A\orth\limpinv B\orth</math>.<br />
<br />
We will denote by <math>u\orth</math> the operator:<br />
: <math>u\orth = pu_{22}p^* + pu_{21}q^* + qu_{12}p^* + qu_{11}q^*</math><br />
where <math>u_{ij}</math> is given by externalization. Therefore the externalization of <math>u\orth</math> is:<br />
: <math>(u\orth)_{ij} = u_{\bar i\,\bar j}</math> where <math>\bar .</math> is defined by <math>\bar1 = 2, \bar2 = 1</math>.<br />
From this we deduce that <math>u\orth\in B\orth\limp A\orth</math> and that <math>(u\orth)\orth = u</math>.<br />
<br />
==== Commutativity ====<br />
Let <math>\sigma</math> be the operator:<br />
: <math>\sigma = ppq^*q^* +pqp^*q^* + qpq^*p^* + qqp^*p^*</math>.<br />
One can check that <math>\sigma</math> is the internalization of the operator <math>S</math> on <math>H\oplus H\oplus H\oplus H</math> defined by: <math>S(x_1\oplus x_2\oplus x_3\oplus x_4) = x_4\oplus x_3\oplus x_2\oplus x_1</math>. In particular the components of <math>\sigma</math> are:<br />
: <math>\sigma_{11} = \sigma_{22} = 0</math>;<br />
: <math>\sigma_{12} = \sigma_{21} = pq^* + qp^*</math>.<br />
<br />
Let <math>A</math> and <math>B</math> be types and <math>u</math> and <math>v</math> be operators in <math>A</math> and <math>B</math>. Then <math>pup^* + qvq^*</math> is in <math>A\tens B</math> and as <math>\sigma_{11}.(pup^* + qvq^*) = 0</math> we may compute:<br />
: <math>\begin{align}<br />
\mathrm{App}(\sigma, pup^* + qvq^*) <br />
&= \sigma_{22} + \sigma_{21}(pup^* + qvq^*)\sum(\sigma_{11}(pup^* + qvq^*))^k\sigma_{12}\\<br />
&= (pq^* + qp^*)(pup^* + qvq^*)(pq^* + qp^*)\\<br />
&= pvp^* + quq^*<br />
\end{align}</math><br />
But <math>pvp^* + quq^*\in B\tens A</math>, thus we have shown that:<br />
: <math>\sigma\in (A\tens B) \limp (B\tens A)</math>.<br />
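On basis indices, with <math>p:n\mapsto 2n</math> and <math>q:n\mapsto 2n+1</math>, the four slots <math>pp, qp, pq, qq</math> reach the residues <math>0, 1, 2, 3</math> modulo <math>4</math>, and <math>\sigma</math> is the permutation exchanging <math>4n \leftrightarrow 4n+3</math> and <math>4n+1 \leftrightarrow 4n+2</math>. A sketch checking the components given above (illustrative encoding, truncated for testing):<br />

```python
def externalize(u):
    """External components of a partial permutation, with p: n -> 2n, q: n -> 2n+1."""
    comp = {(i, j): {} for i in (1, 2) for j in (1, 2)}
    for n, m in u.items():
        comp[(1 if m % 2 == 0 else 2, 1 if n % 2 == 0 else 2)][n // 2] = m // 2
    return comp

N = 4
sigma = {}
for n in range(N):
    sigma[4 * n] = 4 * n + 3       # the qq p* p* part
    sigma[4 * n + 3] = 4 * n       # the pp q* q* part
    sigma[4 * n + 1] = 4 * n + 2   # the pq p* q* part
    sigma[4 * n + 2] = 4 * n + 1   # the qp q* p* part

S = externalize(sigma)
swap = {}
for n in range(2 * N):
    swap[n] = n + 1 if n % 2 == 0 else n - 1   # pq* + qp* on indices < 2N
# sigma_11 = sigma_22 = 0 and sigma_12 = sigma_21 = pq* + qp*.
```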
<br />
==== Distributivity ====<br />
We get distributivity by considering the operator:<br />
: <math>\delta = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math><br />
that is similarly shown to be in type <math>A\tens(B\tens C)\limp(A\tens B)\tens C</math> for any types <math>A</math>, <math>B</math> and <math>C</math>.<br />
<br />
<br />
==== Weak distributivity ====<br />
Similarly we get weak distributivity thanks to the operators:<br />
: <math>\delta_1 = pppp^*q^* + ppqp^*q^*q^* + pqq^*q^*q^* + qpp^*p^*p^* + qqp q^*p^*p^* + qqq q^*p^*</math> and<br />
: <math>\delta_2 = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math>.<br />
<br />
Given three types <math>A</math>, <math>B</math> and <math>C</math> then one can show that:<br />
: <math>\delta_1</math> has type <math>((A\limp B)\tens C)\limp A\limp (B\tens C)</math> and<br />
: <math>\delta_2</math> has type <math>(A\tens(B\limp C))\limp (A\limp B)\limp C</math>.<br />
<br />
=== Execution formula, version 2: composition ===<br />
<br />
Let <math>A</math>, <math>B</math> and <math>C</math> be types and <math>u</math> and <math>v</math> be operators respectively in types <math>A\limp B</math> and <math>B\limp C</math>.<br />
<br />
As usual we will denote by <math>u_{ij}</math> and <math>v_{ij}</math> the operators obtained by externalization of <math>u</math> and <math>v</math>, ''eg'', <math>u_{11} = p^*up</math>, ...<br />
<br />
As <math>u</math> is in <math>A\limp B</math> we have <math>\mathrm{App}(u, 0)=u_{22}\in B</math>; similarly, as <math>v\in B\limp C</math> entails <math>v\orth\in C\orth\limp B\orth</math>, we have <math>\mathrm{App}(v\orth, 0) = v_{11}\in B\orth</math>. Thus <math>u_{22}v_{11}</math> is nilpotent.<br />
<br />
We define the operator <math>\mathrm{Comp}(u, v)</math> by:<br />
: <math>\begin{align}<br />
\mathrm{Comp}(u, v) &= p(u_{11} + u_{12}\sum(v_{11}u_{22})^k\,v_{11}u_{21})p^*\\<br />
&+ p(u_{12}\sum(v_{11}u_{22})^k\,v_{12})q^*\\<br />
&+ q(v_{21}\sum(u_{22}v_{11})^k\,u_{21})p^*\\<br />
&+ q(v_{22} + v_{21}\sum(u_{22}v_{11})^k\,u_{22}v_{12})q^*<br />
\end{align}</math><br />
<br />
This is well defined since <math>u_{22}v_{11}</math> is nilpotent. As an example let us compute the composition of <math>u</math> and <math>\iota</math> in type <math>B\limp B</math>; recall that <math>\iota = pq^* + qp^*</math>, so that <math>\iota_{12} = \iota_{21} = 1</math> and <math>\iota_{11} = \iota_{22} = 0</math>, and we get:<br />
: <math><br />
\mathrm{Comp}(u, \iota) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^* = u<br />
</math><br />
A similar computation shows that <math>\mathrm{Comp}(\iota, v) = v</math> (we use <math>pp^* + qq^* = 1</math> here).<br />
<br />
Coming back to the general case we claim that <math>\mathrm{Comp}(u, v)</math> is in <math>A\limp C</math>: let <math>a</math> be an operator in <math>A</math>. By computation we can check that:<br />
: <math>\mathrm{App}(\mathrm{Comp}(u, v), a) = \mathrm{App}(v, \mathrm{App}(u, a))</math>.<br />
Now since <math>u</math> is in <math>A\limp B</math>, <math>\mathrm{App}(u, a)</math> is in <math>B</math> and since <math>v</math> is in <math>B\limp C</math>, <math>\mathrm{App}(v, \mathrm{App}(u, a))</math> is in <math>C</math>.<br />
<br />
If we now consider a type <math>D</math> and an operator <math>w</math> in <math>C\limp D</math> then we have:<br />
: <math>\mathrm{Comp}(\mathrm{Comp}(u, v), w) = \mathrm{Comp}(u,<br />
\mathrm{Comp}(v, w))</math>.<br />
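The identities of this section can be tested on the partial-permutation model of <math>p</math>-isometries. The following is a self-contained Python sketch (an illustration, not part of the article); the names <code>star</code>, <code>App</code> and <code>Comp</code> are ours, implementing the sums <math>\sum_k(ab)^k x</math>, the execution formula and the composition formula above on dicts over a truncated basis.

```python
def compose(f, g): return {n: f[g[n]] for n in g if g[n] in f}
def inv(f): return {m: n for n, m in f.items()}

N = 128
p = {n: 2 * n for n in range(N)}        # p(e_n) = e_{2n}
q = {n: 2 * n + 1 for n in range(N)}    # q(e_n) = e_{2n+1}
one = {n: n for n in range(N)}

def conj(a, f, b): return compose(a, compose(f, inv(b)))   # a f b*
def ext(u, a, b): return compose(inv(a), compose(u, b))    # a* u b

def star(ab, x):
    """sum_k (ab)^k x  -- a finite sum when ab is nilpotent."""
    out, term = {}, dict(x)
    while term:
        out.update(term)     # the summands have disjoint (co)domains
        term = compose(ab, term)
    return out

def App(u, v):
    u11, u12, u21, u22 = (ext(u, a, b) for a in (p, q) for b in (p, q))
    return {**u22, **compose(u21, star(compose(v, u11), compose(v, u12)))}

def Comp(u, v):
    """Internalization of the matrix (c_ij) defined by the formula above."""
    u11, u12, u21, u22 = (ext(u, a, b) for a in (p, q) for b in (p, q))
    v11, v12, v21, v22 = (ext(v, a, b) for a in (p, q) for b in (p, q))
    s1, s2 = compose(v11, u22), compose(u22, v11)
    c11 = {**u11, **compose(u12, star(s1, compose(v11, u21)))}
    c12 = compose(u12, star(s1, v12))
    c21 = compose(v21, star(s2, u21))
    c22 = {**v22, **compose(v21, star(s2, compose(u22, v12)))}
    return {**conj(p, c11, p), **conj(p, c12, q),
            **conj(q, c21, p), **conj(q, c22, q)}

swap = {**conj(p, one, q), **conj(q, one, p)}     # iota = pq* + qp*
sigma = {**conj(p, swap, q), **conj(q, swap, p)}  # the commutativity operator

assert Comp(sigma, swap) == sigma                 # Comp(u, iota) = u
assert Comp(sigma, sigma) == swap                 # swapping twice = identity

u0, v0 = {0: 1, 1: 0}, {0: 2, 2: 0}
t = {**conj(p, u0, p), **conj(q, v0, q)}          # p u0 p* + q v0 q*
assert App(Comp(sigma, sigma), t) == App(sigma, App(sigma, t))
```

The last assertion instantiates <math>\mathrm{App}(\mathrm{Comp}(u, v), a) = \mathrm{App}(v, \mathrm{App}(u, a))</math> on a small tensor.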
<br />
Putting together the results of this section we finally have:<br />
<br />
{{Theorem|<br />
Let GoI(H) be defined by:<br />
* objects are types, ''ie'' sets <math>A</math> of <math>p</math>-isometries satisfying: <math>A\biorth = A</math>;<br />
* morphisms from <math>A</math> to <math>B</math> are <math>p</math>-isometries in type <math>A\limp B</math>;<br />
* composition is given by the formula above.<br />
<br />
Then GoI(H) is a star-autonomous category.<br />
}}<br />
<br />
= The Geometry of Interaction as an abstract machine =<br />
<br />
= Notes and references =<br />
<br />
<references/></div>
Laurent Regnier
http://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interaction Geometry of interaction 2010-05-15T10:07:24Z<p>Laurent Regnier: /* Weak distributivity */</p>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operators algebra: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math> and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'' that bears some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (multiplicative and exponential fragment) which is enough to encode lambda-calculus. Since then Girard has proposed several improvements: firstly the extension to the additive connectives, known as ''Geometry of Interaction 3'', and more recently a complete reformulation using von Neumann algebras that allows one to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines; the execution formula appears as the composition of two automata interacting through a common interface. Also the execution formula has rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentioned in the section on [[coherent semantics]] under the name ''symmetric reducibility'', and that was first put to use in [[phase semantics]]. First set a general space <math>P</math> called the ''proof space'' because this is where the interpretations of proofs will live. Make sure that <math>P</math> is a (not necessarily commutative) monoid. In the case of GoI, the proof space is a subset of the space of bounded operators on <math>\ell^2</math>.<br />
<br />
Second define a particular subset of <math>P</math> that will be denoted by <math>\bot</math>; then derive a duality on <math>P</math>: for <math>u,v\in P</math>, <math>u</math> and <math>v</math> are dual<ref>In modern terms one says that <math>u</math> and <math>v</math> are ''polar''.</ref> iff <math>uv\in\bot</math>.<br />
<br />
For the GoI, two dualities have proved to work; we will consider the first one: nilpotency, ''ie'', <math>\bot</math> is the set of nilpotent operators in <math>P</math>. Let us spell this out: two operators <math>u</math> and <math>v</math> are dual if there is a nonnegative integer <math>n</math> such that <math>(uv)^n = 0</math>. This duality is symmetric: if <math>uv</math> is nilpotent then <math>vu</math> is also nilpotent.<br />
<br />
When <math>X</math> is a subset of <math>P</math> define <math>X\orth</math> as the set of elements of <math>P</math> that are dual to all elements of <math>X</math>:<br />
: <math>X\orth = \{u\in P, \forall v\in X, uv\in\bot\}</math>.<br />
<br />
This construction has a few properties that we will use without mention in the sequel. Given two subsets <math>X</math> and <math>Y</math> of <math>P</math> we have:<br />
* if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;<br />
* <math>X\subset X\biorth</math>;<br />
* <math>X\triorth = X\orth</math>.<br />
<br />
Last define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for every operator <math>v\in T\orth</math> (that is, every <math>v</math> such that <math>u'v\in\bot</math> for all <math>u'\in T</math>) we have <math>uv\in\bot</math>.<br />
<br />
The real work<ref>The difficulty is to find the right duality that will make logical operations interpretable. General conditions that allow one to achieve this have been formulated by Hyland and Schalk thanks to their theory of ''double gluing''.</ref> is now to interpret logical operations, that is to associate a type to each formula, an object to each proof and show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
=== Operators, partial isometries ===<br />
<br />
We will denote by <math>H</math> the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> their ''scalar product'' is:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''orthogonal'' if their scalar product is null. We will say that two subspaces are ''disjoint'' when any two vectors taken in the two subspaces are orthogonal. Note that this notion is different from the set-theoretic one; in particular two disjoint subspaces always have exactly one vector in common: <math>0</math>.<br />
<br />
The ''norm'' of a vector is the square root of the scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical ''Hilbert basis'' of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kronecker symbol: <math>\delta_{kn}=1</math> if <math>k=n</math>, <math>0</math> otherwise. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
An ''operator'' on <math>H</math> is a ''continuous'' linear map from <math>H</math> to <math>H</math>.<ref>Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.</ref> The set of (bounded) operators is denoted by <math>\mathcal{B}(H)</math>.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are annihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel, ''ie'', the maximal subspace disjoint with the kernel:<br />
<br />
* <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
* <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
* <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>. Adjointness is well behaved w.r.t. composition of operators:<br />
: <math>(uv)^* = v^*u^*</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>0</math> (the projector on the null subspace) or <math>1</math>, that is an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is self-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^*u = u</math>; this condition entails that we also have <math>u^*uu^* = u^*</math>. As a consequence <math>u^*u</math> and <math>uu^*</math> are both projectors, called respectively the ''initial'' and the ''final'' projector of <math>u</math> because their (co)domains are respectively the domain and the codomain of <math>u</math>:<br />
* <math>\mathrm{Dom}(u^*u) = \mathrm{Codom}(u^*u) = \mathrm{Dom}(u)</math>;<br />
* <math>\mathrm{Dom}(uu^*) = \mathrm{Codom}(uu^*) = \mathrm{Codom}(u)</math>.<br />
<br />
The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math> that is if <math>u^* u = 1</math> we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries then we have:<br />
* <math>uv^* = 0</math> iff <math>u^*uv^*v = 0</math> iff the domains of <math>u</math> and <math>v</math> are disjoint;<br />
* <math>u^*v = 0</math> iff <math>uu^*vv^* = 0</math> iff the codomains of <math>u</math> and <math>v</math> are disjoint;<br />
* <math>uu^* + vv^* = 1</math> iff the codomains of <math>u</math> and <math>v</math> are disjoint and their direct sum is <math>H</math>.<br />
<br />
=== Partial permutations ===<br />
<br />
We will now define our proof space which turns out to be the set of partial isometries acting as permutations on the canonical basis <math>(e_n)_{n\in\mathbb{N}}</math>.<br />
<br />
More precisely a ''partial permutation'' <math>\varphi</math> on <math>\mathbb{N}</math> is a one-to-one map defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
* <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
* if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
* the codomain of <math>\varphi\circ\psi</math> is the image of the domain: <math>C_{\varphi\circ\psi} = \{\varphi(\psi(n)), n\in D_{\varphi\circ\psi}\}</math>.<br />
<br />
Partial permutations are well known to form a structure of ''inverse monoid'' that we detail now.<br />
<br />
Given a subset <math>D</math> of <math>\mathbb{N}</math>, the ''partial identity'' on <math>D</math> is the partial permutation <math>\varphi</math> defined by:<br />
* <math>D_\varphi = D</math>;<br />
* <math>\varphi(n) = n</math> for any <math>n\in D_\varphi</math>.<br />
Thus the codomain of <math>\varphi</math> is <math>D</math>.<br />
<br />
The partial identity on <math>D</math> will be denoted by <math>1_D</math>. Partial identities are idempotent for composition.<br />
<br />
Among partial identities one finds the identity on the empty subset, that is the empty map, which we will denote by <math>0</math>, and the identity on <math>\mathbb{N}</math>, which we will denote by <math>1</math>. This latter partial permutation is the neutral element for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and which satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
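This inverse-monoid structure is easy to experiment with. Here is a minimal Python sketch (an illustration, not part of the article) encoding a partial permutation as a dict from its domain to its codomain; the names <code>compose</code>, <code>inverse</code> and <code>identity_on</code> are ours.

```python
def compose(f, g):
    """f o g: defined on n exactly when g(n) is defined and lies in the domain of f."""
    return {n: f[g[n]] for n in g if g[n] in f}

def inverse(f):
    """The inverse partial permutation: swap domain and codomain."""
    return {m: n for n, m in f.items()}

def identity_on(D):
    """The partial identity 1_D."""
    return {n: n for n in D}

phi = {0: 3, 1: 4}   # domain {0, 1}, codomain {3, 4}
psi = {3: 1, 5: 0}

assert compose(phi, psi) == {3: 4, 5: 3}
assert compose(inverse(phi), phi) == identity_on({0, 1})   # phi^-1 o phi = 1_{D_phi}
assert compose(phi, inverse(phi)) == identity_on({3, 4})   # phi o phi^-1 = 1_{C_phi}
```

The two final assertions are exactly the equations displayed above.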
<br />
=== The proof space ===<br />
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other terms if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined so that the definition of <math>u_\varphi</math> reads:<br />
: <math>u_\varphi(e_n) = e_{\varphi(n)}</math>.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. In particular if <math>\varphi</math> is <math>1_D</math> then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.<br />
<br />
{{Definition|<br />
We call ''<math>p</math>-isometry'' a partial isometry of the form <math>u_\varphi</math> where <math>\varphi</math> is a partial permutation on <math>\mathbb{N}</math>. The ''proof space'' <math>\mathcal{P}</math> is the set of all <math>p</math>-isometries.<br />
}}<br />
<br />
{{Proposition|<br />
Let <math>\varphi</math> and <math>\psi</math> be two partial permutations. We have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
The adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the initial projector of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>.<br />
<br />
and the final projector of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u^*_\varphi = u_{1_{C_\varphi}}</math>.<br />
<br />
If <math>p</math> is a projector in <math>\mathcal{P}</math> then there is a partial identity <math>1_D</math> such that <math>p= u_{1_D}</math>.<br />
<br />
Projectors commute, in particular we have:<br />
: <math>u_\varphi u_\varphi^*u_\psi u_\psi^* = u_\psi u_\psi^*u_\varphi u_\varphi^*</math>.<br />
}}<br />
<br />
Note that this entails all the other commutations of projectors: <math>u^*_\varphi u_\varphi u_\psi u^*_\psi = u_\psi u^*_\psi u^*_\varphi u_\varphi</math> and <math>u^*_\varphi u_\varphi u^*_\psi u_\psi = u^*_\psi u_\psi u^*_\varphi u_\varphi</math>.<br />
<br />
In particular note that <math>0</math> is a <math>p</math>-isometry. The set <math>\mathcal{P}</math> is a submonoid of <math>\mathcal{B}(H)</math> but it is not a subalgebra.<ref><math>\mathcal{P}</math> is the normalizing groupoid of the maximal commutative subalgebra of <math>\mathcal{B}(H)</math> consisting of all operators ''diagonalizable'' in the canonical basis.</ref> In general given <math>u,v\in\mathcal{P}</math> we don't necessarily have <math>u+v\in\mathcal{P}</math>. However we have:<br />
<br />
{{Proposition|<br />
Let <math>u, v\in\mathcal{P}</math>. Then <math>u+v\in\mathcal{P}</math> iff <math>u</math> and <math>v</math> have disjoint domains and disjoint codomains, that is:<br />
: <math>u+v\in\mathcal{P}</math> iff <math>uu^*vv^* = u^*uv^*v = 0</math>.<br />
}}<br />
<br />
{{Proof|<br />
Suppose for contradiction that <math>e_n</math> is in the domains of both <math>u</math> and <math>v</math>. There are integers <math>k</math> and <math>l</math> such that <math>u(e_n) = e_k</math> and <math>v(e_n) = e_l</math>, thus <math>(u+v)(e_n) = e_k + e_l</math> which is neither a basis vector nor <math>0</math>; therefore <math>u+v</math> is not a <math>p</math>-isometry. The case of codomains is symmetric. Conversely, if the domains and codomains are disjoint then <math>u+v</math> is the <math>p</math>-isometry associated to the union of the two partial permutations.<br />
}}<br />
<br />
As a corollary note that if <math>u+v=0</math> then <math>u=v=0</math>.<br />
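In the dict encoding of partial permutations, this disjointness condition can be enforced by a hypothetical <code>padd</code> helper (ours, not from the article):

```python
def padd(u, v):
    """Sum of two p-isometries, defined only when domains and codomains are disjoint."""
    if set(u) & set(v) or set(u.values()) & set(v.values()):
        raise ValueError("u + v is not a p-isometry")
    return {**u, **v}

assert padd({0: 1}, {2: 3}) == {0: 1, 2: 3}   # disjoint: the sum is a p-isometry
# padd({0: 1}, {0: 2}) raises: e_0 lies in both domains
```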
<br />
=== From operators to matrices: internalization/externalization ===<br />
<br />
It will be convenient to view operators on <math>H</math> as acting on <math>H\oplus H</math>, and conversely. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\rightsquigarrow p(x)+q(y)</math> where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is<br />
satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are<br />
disjoint, thus we have <math>p^*q = q^*p = 0</math>. As the sum of their<br />
codomains is the full space <math>H</math> we also have <math>pp^* + qq^* = 1</math>.<br />
<br />
Note that we have chosen <math>p</math> and <math>q</math> in <math>\mathcal{P}</math>. However the choice is arbitrary: any two <math>p</math>-isometries with full domain and disjoint codomains would do the job.<br />
<br />
Given an operator <math>u</math> on <math>H</math> we may ''externalize'' it obtaining an operator <math>U</math> on <math>H\oplus H</math> defined by the matrix:<br />
: <math>U = \begin{pmatrix}<br />
u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}</math><br />
where the <math>u_{ij}</math>'s are given by:<br />
: <math>u_{11} = p^*up</math>;<br />
: <math>u_{12} = p^*uq</math>;<br />
: <math>u_{21} = q^*up</math>;<br />
: <math>u_{22} = q^*uq</math>.<br />
<br />
The <math>u_{ij}</math>'s are called the ''external components'' of <math>u</math>. The externalization is functorial in the sense that if <math>v</math> is another operator externalized as:<br />
: <math>V = \begin{pmatrix}<br />
v_{11} & v_{12}\\<br />
v_{21} & v_{22}<br />
\end{pmatrix} <br />
= \begin{pmatrix}<br />
p^*vp & p^*vq\\<br />
q^*vp & q^*vq<br />
\end{pmatrix}<br />
</math><br />
then the externalization of <math>uv</math> is the matrix product <math>UV</math>.<br />
<br />
As <math>pp^* + qq^* = 1</math> we have:<br />
: <math>u = (pp^*+qq^*)u(pp^*+qq^*) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math><br />
which entails that externalization is reversible, its converse being called ''internalization''.<br />
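The round trip externalization/internalization can be checked mechanically. A self-contained Python sketch (an illustration, not part of the article) on the dict encoding of <math>p</math>-isometries, with <math>p</math> and <math>q</math> truncated to a finite window of the basis:

```python
def compose(f, g): return {n: f[g[n]] for n in g if g[n] in f}
def inv(f): return {m: n for n, m in f.items()}

N = 64  # finite truncation of the basis
p = {n: 2 * n for n in range(N)}       # p(e_n) = e_{2n}
q = {n: 2 * n + 1 for n in range(N)}   # q(e_n) = e_{2n+1}

def externalize(u):
    """The 2x2 matrix of components a* u b for a, b in {p, q}."""
    return [[compose(inv(a), compose(u, b)) for b in (p, q)] for a in (p, q)]

def internalize(m):
    """u = p u11 p* + p u12 q* + q u21 p* + q u22 q*."""
    out = {}
    for a, row in zip((p, q), m):
        for b, u_ab in zip((p, q), row):
            out.update(compose(a, compose(u_ab, inv(b))))
    return out

u = {0: 3, 3: 0, 1: 2, 2: 5, 5: 1}       # a small p-isometry
assert internalize(externalize(u)) == u  # externalization is reversible
```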
<br />
If we suppose that <math>u</math> is a <math>p</math>-isometry then so are the components <math>u_{ij}</math>'s. Thus the formula above entails that the four terms of the sum have pairwise disjoint domains and pairwise disjoint codomains from which we deduce:<br />
<br />
{{Proposition|<br />
If <math>u</math> is a <math>p</math>-isometry and <math>u_{ij}</math> are its external components then:<br />
* <math>u_{1j}</math> and <math>u_{2j}</math> have disjoint domains, that is <math>u_{1j}^*u_{1j}u_{2j}^*u_{2j} = 0</math> for <math>j=1,2</math>;<br />
* <math>u_{i1}</math> and <math>u_{i2}</math> have disjoint codomains, that is <math>u_{i1}u_{i1}^*u_{i2}u_{i2}^* = 0</math> for <math>i=1,2</math>.<br />
}}<br />
<br />
As an example of computation in <math>\mathcal{P}</math> let us check that the product of the final projectors of <math>pu_{11}p^*</math> and <math>pu_{12}q^*</math> is null:<br />
: <math>\begin{align}<br />
(pu_{11}p^*)(pu^*_{11}p^*)(pu_{12}q^*)(qu_{12}^*p^*)<br />
&= pu_{11}u_{11}^*u_{12}u_{12}^*p^*\\<br />
&= pp^*upp^*u^*pp^*uqq^*u^*pp^*\\<br />
&= pp^*u(pp^*)(u^*pp^*u)qq^*u^*pp^*\\<br />
&= pp^*u(u^*pp^*u)(pp^*)qq^*u^*pp^*\\<br />
&= pp^*uu^*pp^*u(pp^*)(qq^*)u^*pp^*\\<br />
&= 0<br />
\end{align}</math><br />
where we used the fact that all projectors in <math>\mathcal{P}</math> commute, which is in particular the case of <math>pp^*</math> and <math>u^*pp^*u</math>.<br />
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are <math>p</math>-isometries we say they are dual when <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators. A ''type'' is a subset of <math>\mathcal{P}</math> that is equal to its bidual. In particular <math>X\orth</math> is a type for any <math>X\subset\mathcal{P}</math>. We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
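On finite partial permutations this duality is decidable: a finite partial injection is nilpotent iff its graph contains no cycle, so iterating it must empty the domain within <math>d+1</math> steps where <math>d</math> is the size of the domain. A small sketch (ours, for illustration):

```python
def compose(f, g): return {n: f[g[n]] for n in g if g[n] in f}

def nilpotent(w):
    """w^n = 0 for some n? True iff the graph of w has no cycle;
    for a domain of size d, nilpotency shows up within d+1 iterations."""
    power = dict(w)
    for _ in range(len(w) + 1):
        if not power:
            return True
        power = compose(w, power)
    return False

u = {0: 1, 1: 2}   # chain e_0 -> e_1 -> e_2: u^3 = 0
c = {0: 1, 1: 0}   # 2-cycle: (c)^n is never 0
v = {2: 0}
assert nilpotent(u) and not nilpotent(c)
# u and v are dual: uv and vu are both nilpotent
assert nilpotent(compose(u, v)) and nilpotent(compose(v, u))
```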
<br />
=== The tensor and the linear application ===<br />
<br />
If <math>u</math> and <math>v</math> are two <math>p</math>-isometries, summing them doesn't in general produce a <math>p</math>-isometry. However as <math>pup^*</math> and <math>qvq^*</math> have disjoint domains and disjoint codomains, <math>pup^* + qvq^*</math> is a <math>p</math>-isometry. Given two types <math>A</math> and <math>B</math>, we thus define their ''tensor'' by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by bidual to make sure that we obtain a type.<br />
<br />
From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
{{Remark|<br />
This so-called tensor resembles a sum rather than a product. We will stick to this terminology though because it defines the interpretation of the tensor connective of linear logic.<br />
}}<br />
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we get:<br />
: <math>A\limp B = \{u\in\mathcal{P}\text{ s.t. } \forall v\in A, \forall w\in B\orth,\, u.(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
=== The identity ===<br />
<br />
Given a type <math>A</math> we are to find an operator <math>\iota</math> in type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. It is the case since <math>u</math> is in <math>A</math> and <math>v</math> in <math>A\orth</math>.<br />
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
<br />
=== The execution formula, version 1: application ===<br />
<br />
{{Definition|<br />
Let <math>u</math> and <math>v</math> be two operators; as above denote by <math>u_{ij}</math> the external components of <math>u</math>. If <math>u_{11}v</math> is nilpotent we define the ''application of <math>u</math> to <math>v</math>'' by:<br />
: <math>\mathrm{App}(u,v) = u_{22} + u_{21}v\sum_k(u_{11}v)^ku_{12}</math>.<br />
}}<br />
<br />
Note that the hypothesis that <math>u_{11}v</math> is nilpotent entails that the sum <math>\sum_k(u_{11}v)^k</math> is actually finite. It would be enough to assume that this sum converges. For simplicity we stick to the nilpotency condition, but we should mention that weak nilpotency would do as well.<br />
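The finiteness of the sum makes the formula directly executable on the dict encoding of <math>p</math>-isometries. A self-contained Python sketch (an illustration, not part of the article), which also checks the expected behaviour of the identity <math>\iota = pq^* + qp^*</math>:

```python
def compose(f, g): return {n: f[g[n]] for n in g if g[n] in f}
def inv(f): return {m: n for n, m in f.items()}

N = 64
p = {n: 2 * n for n in range(N)}       # p(e_n) = e_{2n}
q = {n: 2 * n + 1 for n in range(N)}   # q(e_n) = e_{2n+1}

def App(u, v):
    """App(u, v) = u22 + u21 v sum_k (u11 v)^k u12.
    The loop terminates exactly when u11.v is nilpotent."""
    u11, u12, u21, u22 = (compose(inv(a), compose(u, b))
                          for a in (p, q) for b in (p, q))
    out, term = dict(u22), compose(v, u12)   # term = v (u11 v)^k u12, k = 0
    while term:
        out.update(compose(u21, term))       # summands have disjoint (co)domains
        term = compose(compose(v, u11), term)
    return out

# iota = pq* + qp* acts as the identity: App(iota, v) = v
iota = {**compose(p, inv(q)), **compose(q, inv(p))}
v = {0: 2, 2: 0, 1: 1}
assert App(iota, v) == v
```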
<br />
{{Theorem|<br />
If <math>u</math> and <math>v</math> are <math>p</math>-isometries such that <math>u_{11}v</math> is nilpotent, then <math>\mathrm{App}(u,v)</math> is also a <math>p</math>-isometry.<br />
}}<br />
<br />
{{Proof|<br />
Let us write <math>E_k = u_{21}v(u_{11}v)^ku_{12}</math>. Recall that <math>u_{22}</math> and <math>u_{12}</math>, being external components of the <math>p</math>-isometry <math>u</math>, have disjoint domains. Thus it is also the case of <math>u_{22}</math> and <math>E_k</math>. Similarly <math>u_{22}</math> and <math>E_k</math> have disjoint codomains because <math>u_{22}</math> and <math>u_{21}</math> have disjoint codomains.<br />
<br />
Let now <math>k</math> and <math>l</math> be two integers such that <math>k>l</math> and let us compute for example the intersection of the codomains of <math>E_k</math> and <math>E_l</math>:<br />
: <math><br />
E_kE^*_kE_lE^*_l = (u_{21}v(u_{11}v)^ku_{12})(u^*_{12}(v^*u^*_{11})^kv^*u^*_{21})(u_{21}v(u_{11}v)^lu_{12})(u^*_{12}(v^*u^*_{11})^lv^*u_{21}^*)<br />
</math><br />
As <math>k>l</math> we may write <math>(v^*u_{11}^*)^k = (v^*u^*_{11})^{k-l-1}v^*u^*_{11}(v^*u^*_{11})^l</math>. Let us write <math>E = u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}</math> so that <math>E_kE^*_kE_lE^*_l = u_{21}v(u_{11}v)^ku_{12}u^*_{12}(v^*u^*_{11})^{k-l-1}v^*Eu^*_{12}(v^*u^*_{11})^lv^*u_{21}^*</math>. We have:<br />
: <math>\begin{align}<br />
E &= u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}\\<br />
&= (u^*_{11}u_{11}u^*_{11})(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}\\<br />
&= u^*_{11}(u_{11}u^*_{11})\bigl((v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^l\bigr)u_{12}\\<br />
&= u^*_{11}\bigl((v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^l\bigr)(u_{11}u^*_{11})u_{12}\\<br />
&= u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{11}u^*_{11}u_{12}\\<br />
&= 0<br />
\end{align}</math><br />
because <math>u_{11}</math> and <math>u_{12}</math> have disjoint codomains, thus <math>u^*_{11}u_{12} = 0</math>. <br />
<br />
Similarly we can show that <math>E_k</math> and <math>E_l</math> have disjoint domains. Therefore we have proved that all terms of the sum <math>\mathrm{App}(u,v)</math> have disjoint domains and disjoint codomains. Consequently <math>\mathrm{App}(u,v)</math> is a <math>p</math>-isometry.<br />
}}<br />
<br />
{{Theorem|<br />
Let <math>A</math> and <math>B</math> be two types and <math>u</math> a <math>p</math>-isometry. Then the two following conditions are equivalent:<br />
# <math>u\in A\limp B</math>;<br />
# for any <math>v\in A</math> we have:<br />
#* <math>u_{11}v</math> is nilpotent and<br />
#* <math>\mathrm{App}(u, v)\in B</math>.<br />
}}<br />
<br />
{{Proof|<br />
Let <math>v</math> and <math>w</math> be two <math>p</math>-isometries. If we compute<br />
: <math>(u.(pvp^* + qwq^*))^n = \bigl((pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*)(pvp^* + qwq^*)\bigr)^n</math><br />
we get a finite sum of monomial operators of the form:<br />
# <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math><br />
# <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{12}w(u_{22}w)^{i_m}q^*</math>,<br />
# <math>q(u_{22}w)^{i_0}u_{21}v(u_{11}v)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math> or<br />
# <math>q(u_{22}w)^{i_0}u_{21}v(u_{11}v)^{i_1}\dots u_{12}w(u_{22}w)^{i_m}q^*</math>,<br />
for all tuples of (nonnegative) integers <math>(i_0,\dots, i_m)</math> such that <math>i_0+\cdots+i_m+m = n</math>.<br />
<br />
Each of these monomials is a <math>p</math>-isometry. Furthermore they have disjoint domains and disjoint codomains because their sum is the <math>p</math>-isometry <math>(u.(pvp^* + qwq^*))^n</math>. This entails that <math>(u.(pvp^* + qwq^*))^n = 0</math> iff all these monomials are null.<br />
<br />
Suppose <math>u_{11}v</math> is nilpotent and consider:<br />
: <math>\bigl(\mathrm{App}(u,v)w\bigr)^n = \biggl(\bigl(u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12}\bigr)w\biggr)^n</math>.<br />
Developing we get a finite sum of monomials of the form:<br />
: 5. <math>(u_{22}w)^{l_0}u_{21}v(u_{11}v)^{k_1}u_{12}w(u_{22}w)^{l_1}\dots u_{21}v(u_{11}v)^{k_m}u_{12}w(u_{22}w)^{l_m}</math><br />
for all tuples <math>(l_0, k_1, l_1,\dots, k_m, l_m)</math> such that <math>l_0+\cdots+l_m + m = n</math> and <math>k_i</math> is less than the degree of nilpotency of <math>u_{11}v</math> for all <math>i</math>.<br />
<br />
Again as these monomials are <math>p</math>-isometries and their sum is the <math>p</math>-isometry <math>(\mathrm{App}(u,v)w)^n</math>, they have pairwise disjoint domains and pairwise disjoint codomains. Note that each of these monomials is equal to <math>q^*Mq</math> where <math>M</math> is a monomial of type 4 above.<br />
<br />
As before we thus have that <math>\bigl(\mathrm{App}(u,v)w\bigr)^n = 0</math> iff all monomials of type 5 are null.<br />
<br />
Suppose now that <math>u\in A\limp B</math> and <math>v\in A</math>. Then, since <math>0\in B\orth</math> (<math>0</math> belongs to any type), <math>u.(pvp^*)</math> is nilpotent. As <math>(u.(pvp^*))^n = p(u_{11}v)^np^* + qu_{21}v(u_{11}v)^{n-1}p^*</math>, this entails that <math>u_{11}v</math> is nilpotent.<br />
<br />
Suppose further that <math>w\in B\orth</math>. Then <math>u.(pvp^*+qwq^*)</math> is nilpotent, thus there is an <math>N</math> such that <math>(u.(pvp^* + qwq^*))^n=0</math> for any <math>n\geq N</math>. This entails that all monomials of type 1 to 4 are null. Therefore all monomials appearing in the development of <math>(\mathrm{App}(u,v)w)^N</math> are null, which proves that <math>\mathrm{App}(u,v)w</math> is nilpotent. Thus <math>\mathrm{App}(u,v)\in B</math>.<br />
<br />
Conversely suppose for any <math>v\in A</math> and <math>w\in B\orth</math>, <math>u_{11}v</math> and <math>\mathrm{App}(u,v)w</math> are nilpotent. Let <math>P</math> and <math>N</math> be their respective degrees of nilpotency and put <math>n=N(P+1)+N</math>. Then we claim that all monomials of type 1 to 4 appearing in the development of <math>(u.(pvp^*+qwq^*))^n</math> are null.<br />
<br />
Consider for example a monomial of type 1:<br />
: <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math><br />
with <math>i_0+\cdots+i_m + m = n</math>. Note that <math>m</math> must be even.<br />
<br />
If <math>i_{2k}\geq P</math> for some <math>0\leq k\leq m/2</math> then <math>(u_{11}v)^{i_{2k}}=0</math> thus our monomial is null. Otherwise if <math>i_{2k}<P</math> for all <math>k</math> we have:<br />
: <math>i_1+i_3+\cdots +i_{m-1} + m/2 = n - m/2 - (i_0+i_2+\cdots +i_m)</math><br />
thus:<br />
: <math>i_1+i_3+\cdots +i_{m-1} + m/2\geq n - m/2 - (1+m/2)P</math>.<br />
Now if <math>m/2\geq N</math> then <math>i_1+\cdots+i_{m-1}+m/2 \geq N</math>. Otherwise <math>1+m/2\leq N</math> thus<br />
: <math>i_1+i_3+\cdots +i_{m-1} + m/2\geq n - N - NP = N</math>.<br />
Since <math>N</math> is the degree of nilpotency of <math>\mathrm{App}(u,v)w</math> we have that the monomial:<br />
: <math>(u_{22}w)^{i_1}u_{21}v(u_{11}v)^{i_2}u_{12}w\dots(u_{11}v)^{i_{m-2}}u_{12}w(u_{22}w)^{i_{m-1}}</math><br />
is null, hence so is the monomial of type 1 we started with.<br />
}}<br />
<br />
{{Corollary|<br />
If <math>A</math> and <math>B</math> are types then we have:<br />
: <math>A\limp B = \{u\in\mathcal{P} \text{ such that }\forall v\in A: u_{11}v\in\bot\text{ and } \mathrm{App}(u, v)\in B\}</math>.<br />
}}<br />
<br />
As an example if we compute the application of the interpretation of the identity <math>\iota</math> in type <math>A\limp A</math> to the operator <math>v\in A</math> then we have:<br />
: <math>\mathrm{App}(\iota, v) = \iota_{22} + \iota_{21}v\sum(\iota_{11}v)^k\iota_{12}</math>.<br />
Now recall that <math>\iota = pq^* + qp^*</math> so that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math> and we thus get:<br />
: <math>\mathrm{App}(\iota, v) = v</math><br />
as expected.<br />
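The identity computation above can be replayed concretely. The following Python sketch encodes <math>p</math>-isometries as finite partial permutations (plain dicts over a finite window of basis indices, which, like every name in it, is an assumption of the sketch and not the article's); sums of disjoint <math>p</math>-isometries become dict unions, and the execution formula <math>\mathrm{App}</math> becomes a terminating loop.<br />

```python
# Sketch: p-isometries as finite partial permutations (dicts n -> phi(n)).
# Assumption: everything lives in a finite window of the canonical basis.

def compose(phi, psi):
    """(phi o psi)(n) = phi(psi(n)), defined where both steps are defined."""
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    """Adjoint of a p-isometry = inverse of the partial permutation."""
    return {v: k for k, v in phi.items()}

N = 8                                    # size of the finite window
p = {n: 2 * n for n in range(N)}         # p(e_n) = e_{2n}
q = {n: 2 * n + 1 for n in range(N)}     # q(e_n) = e_{2n+1}

def ext(u, i, j):
    """External component u_ij, e.g. u_11 = p* u p."""
    left, right = (p if i == 1 else q), (p if j == 1 else q)
    return compose(inverse(left), compose(u, right))

def app(u, v):
    """App(u, v) = u22 + u21 v (sum_k (u11 v)^k) u12; the loop is finite
    because u11 v is assumed nilpotent, and the terms are disjoint."""
    u11v = compose(ext(u, 1, 1), v)
    result = dict(ext(u, 2, 2))
    power = {n: n for n in range(2 * N)}  # (u11 v)^0, identity on the window
    while power:
        term = compose(ext(u, 2, 1),
                       compose(v, compose(power, ext(u, 1, 2))))
        result.update(term)               # disjoint sum = dict union
        power = compose(u11v, power)
    return result

# iota = p q* + q p*, i.e. the swap e_{2n} <-> e_{2n+1}
iota = {}
for n in range(N):
    iota[2 * n] = 2 * n + 1
    iota[2 * n + 1] = 2 * n

v = {0: 3, 3: 0, 1: 2}    # an arbitrary partial permutation playing v in A
print(app(iota, v) == v)  # True: App(iota, v) = v
```

Here <code>app(iota, v) == v</code> reproduces the computation <math>\mathrm{App}(\iota, v) = v</math> above.<br />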
<br />
=== The tensor rule ===<br />
<br />
Let now <math>A, A', B</math> and <math>B'</math> be types and consider two operators <math>u</math> and <math>u'</math> respectively in <math>A\limp B</math> and <math>A'\limp B'</math>. We define an operator <math>u\tens u'</math> by:<br />
: <math>\begin{align}<br />
u\tens u' &= ppp^*upp^*p^* + qpq^*upp^*p^* + ppp^*uqp^*q^* + qpq^*uqp^*q^*\\<br />
&+ pqp^*u'pq^*p^* + qqq^*u'pq^*p^* + pqp^*u'qq^*q^* + qqq^*u'qq^*q^*<br />
\end{align}</math><br />
<br />
Once again the notation is motivated by linear logic syntax and clashes with linear algebra practice, since what we denote by <math>u\tens u'</math> is actually the internalization of the direct sum <math>u\oplus u'</math>.<br />
<br />
Indeed if we think of <math>u</math> and <math>u'</math> as the internalizations of the matrices:<br />
: <math><br />
\begin{pmatrix}u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}<br />
</math> and <math><br />
\begin{pmatrix}u'_{11} & u'_{12}\\<br />
u'_{21} & u'_{22}<br />
\end{pmatrix}</math><br />
then we may write:<br />
: <math>\begin{align}<br />
u\tens u' &= ppu_{11}p^*p^* + qpu_{21}p^*p^* + ppu_{12}p^*q^* + qpu_{22}p^*q^*\\<br />
&+ pqu'_{11}q^*p^* + qqu'_{21}q^*p^* + pqu'_{12}q^*q^* + qqu'_{22}q^*q^*<br />
\end{align}</math><br />
<br />
Thus the components of <math>u\tens u'</math> are given by:<br />
: <math>(u\tens u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math>.<br />
and we see that <math>u\tens u'</math> is actually the internalization of the matrix:<br />
: <math><br />
\begin{pmatrix}<br />
u_{11} & 0 & u_{12} & 0 \\<br />
0 & u'_{11} & 0 & u'_{12} \\<br />
u_{21} & 0 & u_{22} & 0 \\<br />
0 & u'_{21} & 0 & u'_{22} \\<br />
\end{pmatrix}<br />
</math><br />
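On the dict encoding of <math>p</math>-isometries this internalization can be checked mechanically. The sketch below (all names and the finite window are our assumptions, not the article's) builds <math>u\tens u'</math> from the components <math>pu_{ij}p^* + qu'_{ij}q^*</math> and verifies the component formula on two small examples.<br />

```python
# Sketch: check (u (x) u')_ij = p u_ij p* + q u'_ij q* on small examples.

def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    return {v: k for k, v in phi.items()}

N = 8
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}

def conj(left, c, right):
    """left c right*, e.g. conj(p, c, q) = p c q*."""
    return compose(left, compose(c, inverse(right)))

def ext(u, i, j):
    left, right = (p if i == 1 else q), (p if j == 1 else q)
    return compose(inverse(left), compose(u, right))

def internalize(c11, c12, c21, c22):
    """u = p c11 p* + p c12 q* + q c21 p* + q c22 q*."""
    u = {}
    for left, right, c in ((p, p, c11), (p, q, c12), (q, p, c21), (q, q, c22)):
        u.update(conj(left, c, right))
    return u

def tensor(a, b):
    comps = [{**conj(p, ext(a, i, j), p), **conj(q, ext(b, i, j), q)}
             for i in (1, 2) for j in (1, 2)]
    return internalize(*comps)

# two sample operators, built from small components
u  = internalize({0: 1}, {1: 0}, {2: 2}, {0: 0})
u2 = internalize({1: 1}, {0: 2}, {2: 0}, {1: 0})

ok = all(ext(tensor(u, u2), i, j)
         == {**conj(p, ext(u, i, j), p), **conj(q, ext(u2, i, j), q)}
         for i in (1, 2) for j in (1, 2))
print(ok)  # True
```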
<br />
We now show that if <math>u</math> and <math>u'</math> are in types <math>A\limp B</math> and <math>A'\limp B'</math>, then <math>u\tens u'</math> is in <math>A\tens A'\limp B\tens B'</math>. For this we consider <math>v</math> and <math>v'</math> respectively in <math>A</math> and <math>A'</math>, so that <math>pvp^* + qv'q^*</math> is in <math>A\tens A'</math>, and we show that <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)\in B\tens B'</math>.<br />
<br />
Since <math>u</math> and <math>u'</math> are in <math>A\limp B</math> and <math>A'\limp B'</math> we have that <math>u_{11}v</math> and <math>u'_{11}v'</math> are nilpotent and that <math>\mathrm{App}(u, v)</math> and <math>\mathrm{App}(u', v')</math> are respectively in <math>B</math> and <math>B'</math>, thus:<br />
: <math>p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^* \in B\tens B'</math>.<br />
<br />
But we have:<br />
: <math>\begin{align}<br />
\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^n<br />
&= \bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^n\\<br />
&= (pu_{11}vp^* + qu'_{11}v'q^*)^n\\<br />
&= p(u_{11}v)^np^* + q(u'_{11}v')^nq^*<br />
\end{align}</math><br />
<br />
Therefore <math>(u\tens u')_{11}(pvp^* + qv'q^*)</math> is nilpotent. So we can compute <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)</math>:<br />
: <math>\begin{align}<br />
&\mathrm{App}(u\tens u', pvp^* + qv'q^*)\\<br />
&= (u\tens u')_{22} + (u\tens u')_{21}(pvp^* + qv'q^*)\sum\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^k(u\tens u')_{12}\\<br />
&= pu_{22}p^* + qu'_{22}q^* + (pu_{21}p^* + qu'_{21}q^*)(pvp^* + qv'q^*)\sum\bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^k(pu_{12}p^* + qu'_{12}q^*)\\<br />
&= p\bigl(u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}\bigr)p^* + q\bigl(u'_{22} + u'_{21}v'\sum(u'_{11}v')^ku'_{12}\bigr)q^*\\<br />
&= p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^*<br />
\end{align}</math><br />
thus lives in <math>B\tens B'</math>.<br />
<br />
=== Other monoidal constructions ===<br />
<br />
==== Contraposition ====<br />
<br />
Let <math>A</math> and <math>B</math> be some types; we have:<br />
: <math>A\limp B = A\orth\limpinv B\orth</math><br />
<br />
Indeed, <math>u\in A\limp B</math> means that for any <math>v</math> and <math>w</math> in respectively <math>A</math> and <math>B\orth</math> we have <math>u.(pvp^* + qwq^*)\in\bot</math> which is exactly the definition of <math>A\orth\limpinv B\orth</math>.<br />
<br />
We will denote <math>u\orth</math> the operator:<br />
: <math>u\orth = pu_{22}p^* + pu_{21}q^* + qu_{12}p^* + qu_{11}q^*</math><br />
where <math>u_{ij}</math> is given by externalization. Therefore the externalization of <math>u\orth</math> is:<br />
: <math>(u\orth)_{ij} = u_{\bar i\,\bar j}</math> where <math>\bar .</math> is defined by <math>\bar1 = 2, \bar2 = 1</math>.<br />
From this we deduce that <math>u\orth\in B\orth\limp A\orth</math> and that <math>(u\orth)\orth = u</math>.<br />
<br />
==== Commutativity ====<br />
Let <math>\sigma</math> be the operator:<br />
: <math>\sigma = ppq^*q^* +pqp^*q^* + qpq^*p^* + qqp^*p^*</math>.<br />
One can check that <math>\sigma</math> is the internalization of the operator <math>S</math> on <math>H\oplus H\oplus H\oplus H</math> defined by: <math>S(x_1\oplus x_2\oplus x_3\oplus x_4) = x_4\oplus x_3\oplus x_2\oplus x_1</math>. In particular the components of <math>\sigma</math> are:<br />
: <math>\sigma_{11} = \sigma_{22} = 0</math>;<br />
: <math>\sigma_{12} = \sigma_{21} = pq^* + qp^*</math>.<br />
<br />
Let <math>A</math> and <math>B</math> be types and <math>u</math> and <math>v</math> be operators in <math>A</math> and <math>B</math>. Then <math>pup^* + qvq^*</math> is in <math>A\tens B</math> and as <math>\sigma_{11}.(pup^* + qvq^*) = 0</math> we may compute:<br />
: <math>\begin{align}<br />
\mathrm{App}(\sigma, pup^* + qvq^*) <br />
&= \sigma_{22} + \sigma_{21}(pup^* + qvq^*)\sum(\sigma_{11}(pup^* + qvq^*))^k\sigma_{12}\\<br />
&= (pq^* + qp^*)(pup^* + qvq^*)(pq^* + qp^*)\\<br />
&= pvp^* + quq^*<br />
\end{align}</math><br />
But <math>pvp^* + quq^*\in B\tens A</math>, thus we have shown that:<br />
: <math>\sigma\in (A\tens B) \limp (B\tens A)</math>.<br />
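The computation of <math>\mathrm{App}(\sigma, pup^* + qvq^*)</math> can likewise be tested on the dict encoding. In the sketch below (names and the finite window are assumptions of the sketch) <math>\sigma</math> is built from its two nonzero components and applied to a sample <math>pup^* + qvq^*</math>.<br />

```python
# Sketch: sigma internalizes the swap; App(sigma, p u p* + q v q*)
# should come out as p v p* + q u q*.

def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    return {v: k for k, v in phi.items()}

N = 8
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}

def conj(left, c, right):
    return compose(left, compose(c, inverse(right)))

def ext(u, i, j):
    left, right = (p if i == 1 else q), (p if j == 1 else q)
    return compose(inverse(left), compose(u, right))

def app(u, v):
    """Execution formula; assumes u11 v nilpotent, terms disjoint."""
    u11v = compose(ext(u, 1, 1), v)
    result = dict(ext(u, 2, 2))
    power = {n: n for n in range(2 * N)}
    while power:
        result.update(compose(ext(u, 2, 1),
                              compose(v, compose(power, ext(u, 1, 2)))))
        power = compose(u11v, power)
    return result

# sigma_11 = sigma_22 = 0 and sigma_12 = sigma_21 = p q* + q p* (the swap)
swap = {}
for n in range(N // 2):
    swap[2 * n] = 2 * n + 1
    swap[2 * n + 1] = 2 * n

sigma = {}
for left, right in ((p, q), (q, p)):
    sigma.update(conj(left, swap, right))

u = {0: 1, 1: 0}                                # playing u in A
v = {2: 2, 0: 1}                                # playing v in B
x = {**conj(p, u, p), **conj(q, v, q)}          # p u p* + q v q*  in A (x) B
print(app(sigma, x) == {**conj(p, v, p), **conj(q, u, q)})  # True
```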
<br />
==== Distributivity ====<br />
We get distributivity by considering the operator:<br />
: <math>\delta = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math><br />
that is similarly shown to be in type <math>A\tens(B\tens C)\limp(A\tens B)\tens C</math> for any types <math>A</math>, <math>B</math> and <math>C</math>.<br />
<br />
<br />
==== Weak distributivity ====<br />
Similarly we get weak distributivity thanks to the operators:<br />
: <math>\delta_1 = pppp^*q^* + ppqp^*q^*q^* + pqq^*q^*q^* + qpp^*p^*p^* + qqp q^*p^*p^* + qqq q^*p^*</math> and<br />
: <math>\delta_2 = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math>.<br />
<br />
Given three types <math>A</math>, <math>B</math> and <math>C</math> then one can show that:<br />
: <math>\delta_1</math> has type <math>((A\limp B)\tens C)\limp A\limp (B\tens C)</math> and<br />
: <math>\delta_2</math> has type <math>(A\tens(B\limp C))\limp (A\limp B)\limp C</math>.<br />
<br />
=== Execution formula, version 2: composition ===<br />
<br />
Let <math>A</math>, <math>B</math> and <math>C</math> be types and <math>u</math> and <math>v</math> be operators respectively in types <math>A\limp B</math> and <math>B\limp C</math>.<br />
<br />
As usual we will denote <math>u_{ij}</math> and <math>v_{ij}</math> the operators obtained by externalization of <math>u</math> and <math>v</math>, eg, <math>u_{11} = p^*up</math>, ...<br />
<br />
As <math>u</math> is in <math>A\limp B</math> we have that <math>\mathrm{App}(u, 0)=u_{22}\in B</math>; similarly as <math>v\in B\limp C</math>, thus <math>v\orth\in C\orth\limp B\orth</math>, we have <math>\mathrm{App}(v\orth, 0) = v_{11}\in B\orth</math>. Thus <math>u_{22}v_{11}</math> is nilpotent.<br />
<br />
We define the operator <math>\mathrm{Comp}(u, v)</math> by:<br />
: <math>\begin{align}<br />
\mathrm{Comp}(u, v) &= p(u_{11} + u_{12}\sum(v_{11}u_{22})^k\,v_{11}u_{21})p^*\\<br />
&+ p(u_{12}\sum(v_{11}u_{22})^k\,v_{12})q^*\\<br />
&+ q(v_{21}\sum(u_{22}v_{11})^k\,u_{21})p^*\\<br />
&+ q(v_{22} + v_{21}\sum(u_{22}v_{11})^k\,u_{22}v_{12})q^*<br />
\end{align}</math><br />
<br />
This is well defined since <math>u_{22}v_{11}</math> is nilpotent. As an example let us compute the composition of <math>u</math> and <math>\iota</math> in type <math>B\limp B</math>; recall that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math>, so we get:<br />
: <math><br />
\mathrm{Comp}(u, \iota) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^* = u<br />
</math><br />
A similar computation shows that <math>\mathrm{Comp}(\iota, v) = v</math> (here we use <math>pp^* + qq^* = 1</math>).<br />
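The identity <math>\mathrm{Comp}(u, \iota) = u</math> can also be checked on the dict encoding of <math>p</math>-isometries. The sketch below (our names, a finite window, and the assumption that the terms of each series are disjoint, as in the adequacy proofs) implements the four components of <math>\mathrm{Comp}</math> literally.<br />

```python
# Sketch: Comp(u, v) from its four components, checked against iota.

def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    return {v: k for k, v in phi.items()}

N = 8
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}

def conj(left, c, right):
    return compose(left, compose(c, inverse(right)))

def ext(u, i, j):
    left, right = (p if i == 1 else q), (p if j == 1 else q)
    return compose(inverse(left), compose(u, right))

def internalize(c11, c12, c21, c22):
    out = {}
    for left, right, c in ((p, p, c11), (p, q, c12), (q, p, c21), (q, q, c22)):
        out.update(conj(left, c, right))
    return out

def geo(m, right):
    """(sum_k m^k) right, term by term; assumes m nilpotent and the
    successive terms disjoint."""
    total, term = {}, dict(right)
    while term:
        total.update(term)
        term = compose(m, term)
    return total

def comp(u, v):
    u11, u12, u21, u22 = (ext(u, i, j) for i in (1, 2) for j in (1, 2))
    v11, v12, v21, v22 = (ext(v, i, j) for i in (1, 2) for j in (1, 2))
    c11 = {**u11, **compose(u12, geo(compose(v11, u22), compose(v11, u21)))}
    c12 = compose(u12, geo(compose(v11, u22), v12))
    c21 = compose(v21, geo(compose(u22, v11), u21))
    c22 = {**v22, **compose(v21, geo(compose(u22, v11), compose(u22, v12)))}
    return internalize(c11, c12, c21, c22)

iota = {}
for n in range(N):
    iota[2 * n] = 2 * n + 1
    iota[2 * n + 1] = 2 * n

u = internalize({0: 1}, {1: 2}, {2: 0}, {0: 2, 2: 1})
print(comp(u, iota) == u)  # True: iota is neutral for composition
```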
<br />
Coming back to the general case we claim that <math>\mathrm{Comp}(u, v)</math> is in <math>A\limp C</math>: let <math>a</math> be an operator in <math>A</math>. By computation we can check that:<br />
: <math>\mathrm{App}(\mathrm{Comp}(u, v), a) = \mathrm{App}(v, \mathrm{App}(u, a))</math>.<br />
Now since <math>u</math> is in <math>A\limp B</math>, <math>\mathrm{App}(u, a)</math> is in <math>B</math> and since <math>v</math> is in <math>B\limp C</math>, <math>\mathrm{App}(v, \mathrm{App}(u, a))</math> is in <math>C</math>.<br />
<br />
If we now consider a type <math>D</math> and an operator <math>w</math> in <math>C\limp D</math> then we have:<br />
: <math>\mathrm{Comp}(\mathrm{Comp}(u, v), w) = \mathrm{Comp}(u, \mathrm{Comp}(v, w))</math>.<br />
<br />
Putting together the results of this section we finally have:<br />
<br />
{{Theorem|<br />
Let GoI(H) be defined by:<br />
* objects are types, ''ie'' sets <math>A</math> of operators satisfying: <math>A\biorth = A</math>;<br />
* morphisms from <math>A</math> to <math>B</math> are operators in type <math>A\limp B</math>;<br />
* composition is given by the formula above.<br />
<br />
Then GoI(H) is a star-autonomous category.<br />
}}<br />
<br />
= The Geometry of Interaction as an abstract machine =<br />
<br />
= Notes and references =<br />
<br />
<references/></div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interactionGeometry of interaction2010-05-15T10:03:29Z<p>Laurent Regnier: /* The tensor rule */ typo, style</p>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operators algebra: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math> and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'' that bears some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (the multiplicative and exponential fragment), which is enough to encode lambda-calculus. Since then Girard has proposed several improvements: first the extension to the additive connectives, known as ''Geometry of Interaction 3'', and more recently a complete reformulation using von Neumann algebras that allows one to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines; the execution formula appears as the composition of two automata interacting through a common interface. Also the execution formula has rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentioned in the section on [[coherent semantics]] under the name ''symmetric reducibility'', and that was first put to use in [[phase semantics]]. First set a general space <math>P</math> called the ''proof space'' because this is where the interpretations of proofs will live. Make sure that <math>P</math> is a (not necessarily commutative) monoid. In the case of GoI, the proof space is a subset of the space of bounded operators on <math>\ell^2</math>.<br />
<br />
Second define a particular subset of <math>P</math> that will be denoted by <math>\bot</math>; then derive a duality on <math>P</math>: for <math>u,v\in P</math>, <math>u</math> and <math>v</math> are dual<ref>In modern terms one says that <math>u</math> and <math>v</math> are ''polar''.</ref> iff <math>uv\in\bot</math>.<br />
<br />
For the GoI, two dualities have proved to work; we will consider the first one: nilpotency, ''ie'', <math>\bot</math> is the set of nilpotent operators in <math>P</math>. Let us spell this out: two operators <math>u</math> and <math>v</math> are dual if there is a nonnegative integer <math>n</math> such that <math>(uv)^n = 0</math>. This duality is symmetric: if <math>uv</math> is nilpotent then <math>vu</math> is nilpotent also.<br />
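On finite-support partial permutations (a restriction this sketch assumes; the article works with operators on all of <math>H</math>) the nilpotency duality is decidable, since the powers of <math>uv</math> either reach <math>0</math> or cycle:<br />

```python
# Sketch: nilpotency duality on dict-encoded partial permutations.

def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def dual(u, v):
    """True iff (uv)^n = 0 for some n."""
    m = compose(u, v)
    power, seen = dict(m), set()
    while power:
        key = tuple(sorted(power.items()))
        if key in seen:
            return False          # a cycle: (uv)^n never vanishes
        seen.add(key)
        power = compose(m, power)
    return True

u = {0: 1, 1: 2}
v = {2: 0}
print(dual(u, v), dual(v, u))   # True True: the duality is symmetric
print(dual({0: 1}, {1: 0}))     # False: here uv fixes index 1
```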
<br />
When <math>X</math> is a subset of <math>P</math> define <math>X\orth</math> as the set of elements of <math>P</math> that are dual to all elements of <math>X</math>:<br />
: <math>X\orth = \{u\in P, \forall v\in X, uv\in\bot\}</math>.<br />
<br />
This construction has a few properties that we will use without mention in the sequel. Given two subsets <math>X</math> and <math>Y</math> of <math>P</math> we have:<br />
* if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;<br />
* <math>X\subset X\biorth</math>;<br />
* <math>X\triorth = X\orth</math>.<br />
<br />
Last define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for all operator <math>v\in T\orth</math>, that is such that <math>u'v\in\bot</math> for all <math>u'\in T</math>, we have <math>uv\in\bot</math>.<br />
<br />
The real work<ref>The difficulty is to find the right duality that will make logical operations interpretable. General conditions that allow one to achieve this have been formulated by Hyland and Schalk thanks to their theory of ''double gluing''.</ref> is now to interpret logical operations, that is to associate a type to each formula, an object to each proof and show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
=== Operators, partial isometries ===<br />
<br />
We will denote by <math>H</math> the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> their ''scalar product'' is:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''orthogonal'' if their scalar product is null. We will say that two subspaces are ''disjoint'' when any two vectors taken in each subspace are orthogonal. Note that this notion differs from the set-theoretic one; in particular two disjoint subspaces always have exactly one vector in common: <math>0</math>.<br />
<br />
The ''norm'' of a vector is the square root of the scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical ''Hilbert basis'' of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kronecker symbol: <math>\delta_{kn}=1</math> if <math>k=n</math>, <math>0</math> otherwise. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
An ''operator'' on <math>H</math> is a ''continuous'' linear map from <math>H</math> to <math>H</math>.<ref>Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.</ref> The set of (bounded) operators is denoted by <math>\mathcal{B}(H)</math>.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are annihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel, ''ie'', the maximal subspace disjoint with the kernel:<br />
<br />
* <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
* <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
* <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>. Adjointness is well behaved w.r.t. composition of operators:<br />
: <math>(uv)^* = v^*u^*</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>0</math> (the projector on the null subspace) or <math>1</math>, that is an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is self-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^*u = u</math>; this condition entails that we also have <math>u^*uu^* = u^*</math>. As a consequence <math>u^*u</math> and <math>uu^*</math> are both projectors, called respectively the ''initial'' and the ''final'' projector of <math>u</math> because their (co)domains are respectively the domain and the codomain of <math>u</math>:<br />
* <math>\mathrm{Dom}(u^*u) = \mathrm{Codom}(u^*u) = \mathrm{Dom}(u)</math>;<br />
* <math>\mathrm{Dom}(uu^*) = \mathrm{Codom}(uu^*) = \mathrm{Codom}(u)</math>.<br />
<br />
The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math> that is if <math>u^* u = 1</math> we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries then we have:<br />
* <math>uv^* = 0</math> iff <math>u^*uv^*v = 0</math> iff the domains of <math>u</math> and <math>v</math> are disjoint;<br />
* <math>u^*v = 0</math> iff <math>uu^*vv^* = 0</math> iff the codomains of <math>u</math> and <math>v</math> are disjoint;<br />
* <math>uu^* + vv^* = 1</math> iff the codomains of <math>u</math> and <math>v</math> are disjoint and their direct sum is <math>H</math>.<br />
<br />
=== Partial permutations ===<br />
<br />
We will now define our proof space which turns out to be the set of partial isometries acting as permutations on the canonical basis <math>(e_n)_{n\in\mathbb{N}}</math>.<br />
<br />
More precisely a ''partial permutation'' <math>\varphi</math> on <math>\mathbb{N}</math> is a one-to-one map defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
* <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
* if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
* the codomain of <math>\varphi\circ\psi</math> is the image of the domain: <math>C_{\varphi\circ\psi} = \{\varphi(\psi(n)), n\in D_{\varphi\circ\psi}\}</math>.<br />
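The three clauses above translate directly into code. In this sketch (the dict encoding is our assumption, not the article's) a partial permutation is a Python dict mapping each element of its domain to its image:<br />

```python
# Sketch: partial permutations as Python dicts n -> phi(n).

def compose(phi, psi):
    """(phi o psi)(n) = phi(psi(n)), defined iff n is in D_psi
    and psi(n) is in D_phi."""
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

phi = {0: 2, 1: 0}        # D_phi = {0, 1}, C_phi = {2, 0}
psi = {0: 1, 2: 3}        # psi(2) = 3 falls outside D_phi
print(compose(phi, psi))  # {0: 0}
```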
<br />
Partial permutations are well known to form a structure of ''inverse monoid'' that we detail now.<br />
<br />
Given a subset <math>D</math> of <math>\mathbb{N}</math>, the ''partial identity'' on <math>D</math> is the partial permutation <math>\varphi</math> defined by:<br />
* <math>D_\varphi = D</math>;<br />
* <math>\varphi(n) = n</math> for any <math>n\in D_\varphi</math>.<br />
Thus the codomain of <math>\varphi</math> is <math>D</math>.<br />
<br />
The partial identity on <math>D</math> will be denoted by <math>1_D</math>. Partial identities are idempotent for composition.<br />
<br />
Among partial identities one finds the identity on the empty subset, that is the empty map, which we will denote by <math>0</math>, and the identity on <math>\mathbb{N}</math>, which we will denote by <math>1</math>. This latter is the neutral element for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and which satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
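Both laws can be checked on the dict encoding of the previous sketch; <code>inverse</code> simply flips the dict, which is well defined because partial permutations are injective (the encoding itself is an assumption of ours):<br />

```python
# Sketch: the inverse of a partial permutation and the two laws above.

def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    """Partial permutations are injective, so the dict flips cleanly."""
    return {v: k for k, v in phi.items()}

phi = {0: 3, 1: 4, 5: 2}
assert compose(inverse(phi), phi) == {n: n for n in phi}           # 1_{D_phi}
assert compose(phi, inverse(phi)) == {m: m for m in phi.values()}  # 1_{C_phi}
print("inverse laws hold")
```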
<br />
=== The proof space ===<br />
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other terms if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined so that the definition of <math>u_\varphi</math> reads:<br />
: <math>u_\varphi(e_n) = e_{\varphi(n)}</math>.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. In particular if <math>\varphi</math> is <math>1_D</math> then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.<br />
<br />
{{Definition|<br />
We call ''<math>p</math>-isometry'' a partial isometry of the form <math>u_\varphi</math> where <math>\varphi</math> is a partial permutation on <math>\mathbb{N}</math>. The ''proof space'' <math>\mathcal{P}</math> is the set of all <math>p</math>-isometries.<br />
}}<br />
<br />
{{Proposition|<br />
Let <math>\varphi</math> and <math>\psi</math> be two partial permutations. We have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
The adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the initial projector of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math><br />
<br />
and the final projector of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u^*_\varphi = u_{1_{C_\varphi}}</math>.<br />
<br />
If <math>p</math> is a projector in <math>\mathcal{P}</math> then there is a partial identity <math>1_D</math> such that <math>p= u_{1_D}</math>.<br />
<br />
Projectors commute, in particular we have:<br />
: <math>u_\varphi u_\varphi^*u_\psi u_\psi^* = u_\psi u_\psi^*u_\varphi u_\varphi^*</math>.<br />
}}<br />
<br />
Note that this entails all the other commutations of projectors: <math>u^*_\varphi u_\varphi u_\psi u^*_\psi = u_\psi u^*_\psi u^*_\varphi u_\varphi</math> and <math>u^*_\varphi u_\varphi u^*_\psi u_\psi = u^*_\psi u_\psi u^*_\varphi u_\varphi</math>.<br />
<br />
In particular note that <math>0</math> is a <math>p</math>-isometry. The set <math>\mathcal{P}</math> is a submonoid of <math>\mathcal{B}(H)</math> but it is not a subalgebra.<ref><math>\mathcal{P}</math> is the normalizing groupoid of the maximal commutative subalgebra of <math>\mathcal{B}(H)</math> consisting of all operators ''diagonalizable'' in the canonical basis.</ref> In general given <math>u,v\in\mathcal{P}</math> we don't necessarily have <math>u+v\in\mathcal{P}</math>. However we have:<br />
<br />
{{Proposition|<br />
Let <math>u, v\in\mathcal{P}</math>. Then <math>u+v\in\mathcal{P}</math> iff <math>u</math> and <math>v</math> have disjoint domains and disjoint codomains, that is:<br />
: <math>u+v\in\mathcal{P}</math> iff <math>uu^*vv^* = u^*uv^*v = 0</math>.<br />
}}<br />
<br />
{{Proof|<br />
Suppose for contradiction that <math>e_n</math> is in the domains of both <math>u</math> and <math>v</math>. There are integers <math>p</math> and <math>q</math> such that <math>u(e_n) = e_p</math> and <math>v(e_n) = e_q</math>, thus <math>(u+v)(e_n) = e_p + e_q</math> which is not a basis vector; therefore <math>u+v</math> is not a <math>p</math>-isometry. The case of codomains is symmetric (consider the adjoints). Conversely if the domains and the codomains are disjoint then <math>u+v</math> is the <math>p</math>-isometry associated to the partial permutation obtained by gluing together those of <math>u</math> and <math>v</math>.<br />
}}<br />
<br />
As a corollary note that if <math>u+v=0</math> then <math>u=v=0</math>.<br />
<br />
=== From operators to matrices: internalization/externalization ===<br />
<br />
It will be convenient to view operators on <math>H</math> as acting on <math>H\oplus H</math>, and conversely. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\rightsquigarrow p(x)+q(y)</math> where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is<br />
satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are<br />
disjoint, thus we have <math>p^*q = q^*p = 0</math>. As the sum of their<br />
codomains is the full space <math>H</math> we also have <math>pp^* + qq^* = 1</math>.<br />
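These relations can be verified directly on the dict encoding, over a finite window of the basis (the window, like all names below, is an assumption of the sketch):<br />

```python
# Sketch: the concrete p, q and their algebraic relations.

def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    return {v: k for k, v in phi.items()}

N = 8
p = {n: 2 * n for n in range(N)}        # p(e_n) = e_{2n}
q = {n: 2 * n + 1 for n in range(N)}    # q(e_n) = e_{2n+1}
one = {n: n for n in range(N)}

assert compose(inverse(p), p) == one    # p* p = 1 (full domain)
assert compose(inverse(q), q) == one    # q* q = 1
assert compose(inverse(p), q) == {}     # p* q = 0 (disjoint codomains)
assert compose(inverse(q), p) == {}     # q* p = 0
pp_qq = {**compose(p, inverse(p)), **compose(q, inverse(q))}
assert pp_qq == {n: n for n in range(2 * N)}  # p p* + q q* = 1
print("all relations hold on the window")
```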
<br />
Note that we have chosen <math>p</math> and <math>q</math> in <math>\mathcal{P}</math>. However the choice is arbitrary: any two <math>p</math>-isometries with full domain and disjoint codomains would do the job.<br />
<br />
Given an operator <math>u</math> on <math>H</math> we may ''externalize'' it obtaining an operator <math>U</math> on <math>H\oplus H</math> defined by the matrix:<br />
: <math>U = \begin{pmatrix}<br />
u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}</math><br />
where the <math>u_{ij}</math>'s are given by:<br />
: <math>u_{11} = p^*up</math>;<br />
: <math>u_{12} = p^*uq</math>;<br />
: <math>u_{21} = q^*up</math>;<br />
: <math>u_{22} = q^*uq</math>.<br />
<br />
The <math>u_{ij}</math>'s are called the ''external components'' of <math>u</math>. The externalization is functorial in the sense that if <math>v</math> is another operator externalized as:<br />
: <math>V = \begin{pmatrix}<br />
v_{11} & v_{12}\\<br />
v_{21} & v_{22}<br />
\end{pmatrix} <br />
= \begin{pmatrix}<br />
p^*vp & p^*vq\\<br />
q^*vp & q^*vq<br />
\end{pmatrix}<br />
</math><br />
then the externalization of <math>uv</math> is the matrix product <math>UV</math>.<br />
<br />
As <math>pp^* + qq^* = 1</math> we have:<br />
: <math>u = (pp^*+qq^*)u(pp^*+qq^*) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math><br />
which entails that externalization is reversible, its converse being called ''internalization''.<br />
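Reversibility can be checked on the dict encoding: externalizing a sample <math>p</math>-isometry into its four components and internalizing them back returns the original operator (the finite window and the names are assumptions of this sketch):<br />

```python
# Sketch: externalization then internalization is the identity.

def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    return {v: k for k, v in phi.items()}

N = 8
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}

def ext(u, i, j):
    """u_11 = p* u p, u_12 = p* u q, u_21 = q* u p, u_22 = q* u q."""
    left, right = (p if i == 1 else q), (p if j == 1 else q)
    return compose(inverse(left), compose(u, right))

def internalize(c11, c12, c21, c22):
    """u = p c11 p* + p c12 q* + q c21 p* + q c22 q*."""
    out = {}
    for left, right, c in ((p, p, c11), (p, q, c12), (q, p, c21), (q, q, c22)):
        out.update(compose(left, compose(c, inverse(right))))
    return out

u = {0: 3, 1: 1, 2: 0, 5: 4}   # a sample partial permutation on the window
comps = [ext(u, i, j) for i in (1, 2) for j in (1, 2)]
print(internalize(*comps) == u)  # True: externalization is reversible
```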
<br />
If we suppose that <math>u</math> is a <math>p</math>-isometry then so are its components <math>u_{ij}</math>. Thus the formula above entails that the four terms of the sum have pairwise disjoint domains and pairwise disjoint codomains, from which we deduce:<br />
<br />
{{Proposition|<br />
If <math>u</math> is a <math>p</math>-isometry and <math>u_{ij}</math> are its external components then:<br />
* <math>u_{1j}</math> and <math>u_{2j}</math> have disjoint domains, that is <math>u_{1j}^*u_{1j}u_{2j}^*u_{2j} = 0</math> for <math>j=1,2</math>;<br />
* <math>u_{i1}</math> and <math>u_{i2}</math> have disjoint codomains, that is <math>u_{i1}u_{i1}^*u_{i2}u_{i2}^* = 0</math> for <math>i=1,2</math>.<br />
}}<br />
<br />
As an example of computation in <math>\mathcal{P}</math> let us check that the product of the final projectors of <math>pu_{11}p^*</math> and <math>pu_{12}q^*</math> is null:<br />
: <math>\begin{align}<br />
(pu_{11}p^*)(pu^*_{11}p^*)(pu_{12}q^*)(qu_{12}^*p^*)<br />
&= pu_{11}u_{11}^*u_{12}u_{12}^*p^*\\<br />
&= pp^*upp^*u^*pp^*uqq^*u^*pp^*\\<br />
&= pp^*u(pp^*)(u^*pp^*u)qq^*u^*pp^*\\<br />
&= pp^*u(u^*pp^*u)(pp^*)qq^*u^*pp^*\\<br />
&= pp^*uu^*pp^*u(pp^*)(qq^*)u^*pp^*\\<br />
&= 0<br />
\end{align}</math><br />
where we used the fact that all projectors in <math>\mathcal{P}</math> commute, which is in particular the case of <math>pp^*</math> and <math>u^*pp^*u</math>.<br />
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are <math>p</math>-isometries we say they are dual when <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators. A ''type'' is a subset of <math>\mathcal{P}</math> that is equal to its bidual. In particular <math>X\orth</math> is a type for any <math>X\subset\mathcal{P}</math>. We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
<br />
=== The tensor and the linear application ===<br />
<br />
If <math>u</math> and <math>v</math> are two <math>p</math>-isometries, summing them doesn't in general produce a <math>p</math>-isometry. However as <math>pup^*</math> and <math>qvq^*</math> have disjoint domains and disjoint codomains it is true that <math>pup^* + qvq^*</math> is a <math>p</math>-isometry. Given two types <math>A</math> and <math>B</math>, we thus define their ''tensor'' by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by bidual to make sure that we obtain a type.<br />
<br />
From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
{{Remark|<br />
This so-called tensor resembles a sum rather than a product. We will stick to this terminology though because it defines the interpretation of the tensor connective of linear logic.<br />
}}<br />
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we get:<br />
: <math>A\limp B = \{u\in\mathcal{P}\text{ s.t. } \forall v\in A, \forall w\in B\orth,\, u.(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
=== The identity ===<br />
<br />
Given a type <math>A</math> we are to find an operator <math>\iota</math> in type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. This is the case since <math>u</math> is in <math>A</math> and <math>v</math> in <math>A\orth</math>.<br />
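<br />
On the partial permutation model of operators (dicts <code>{n: m}</code> acting on basis indices; an illustration, not part of the original text, with <math>p : e_n\mapsto e_{2n}</math> and <math>q : e_n\mapsto e_{2n+1}</math> as assumed encodings), one can watch the equivalence between the nilpotency of <math>\iota(pup^* + qvq^*)</math> and that of <math>uv</math>:<br />
<br />
```python
def adj(u):                       # adjoint: the inverse partial permutation
    return {m: n for n, m in u.items()}

def mul(u, v):                    # operator product uv: apply v first, then u
    return {n: u[v[n]] for n in v if v[n] in u}

N = 256
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}
iota = {**mul(p, adj(q)), **mul(q, adj(p))}   # pq* + qp*: swaps e_2n <-> e_2n+1

def nilpotent(u, bound=100):      # does some power of u vanish?
    w = dict(u)
    for _ in range(bound):
        if not w:
            return True
        w = mul(u, w)
    return False

def pair(u, v):                   # p u p* + q v q*
    return {**mul(p, mul(u, adj(p))), **mul(q, mul(v, adj(q)))}

u, v = {0: 1, 1: 2}, {2: 0}       # here uv is nilpotent
assert nilpotent(mul(u, v)) == nilpotent(mul(iota, pair(u, v))) == True

u2, v2 = {0: 0}, {0: 0}           # u2 v2 is a projector, hence not nilpotent
assert nilpotent(mul(u2, v2)) == nilpotent(mul(iota, pair(u2, v2))) == False
```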
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
<br />
=== The execution formula, version 1: application ===<br />
<br />
{{Definition|<br />
Let <math>u</math> and <math>v</math> be two operators; as above denote by <math>u_{ij}</math> the external components of <math>u</math>. If <math>u_{11}v</math> is nilpotent we define the ''application of <math>u</math> to <math>v</math>'' by:<br />
: <math>\mathrm{App}(u,v) = u_{22} + u_{21}v\sum_k(u_{11}v)^ku_{12}</math>.<br />
}}<br />
<br />
Note that the hypothesis that <math>u_{11}v</math> is nilpotent entails that the sum <math>\sum_k(u_{11}v)^k</math> is actually finite. It would be enough to assume that this sum converges. For simplicity we stick to the nilpotency condition, but we should mention that weak nilpotency would do as well.<br />
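<br />
The formula can be run directly on the partial permutation model of operators (dicts <code>{n: m}</code> on basis indices; an illustrative sketch, not the original construction, and the sample morphism <code>u</code> built from a shift <code>s</code> is our own): the finite sum becomes an iteration that stops as soon as <math>(u_{11}v)^k</math> vanishes.<br />
<br />
```python
def adj(u):                       # adjoint u*: the inverse partial permutation
    return {m: n for n, m in u.items()}

def mul(u, v):                    # operator product uv: apply v first, then u
    return {n: u[v[n]] for n in v if v[n] in u}

def add(u, v):                    # sum of operators with disjoint domains
    assert not set(u) & set(v)
    return {**u, **v}

N = 256
p = {n: 2 * n for n in range(N)}       # p : e_n -> e_{2n}
q = {n: 2 * n + 1 for n in range(N)}   # q : e_n -> e_{2n+1}

def ext(u):                       # external components u_ij
    return {(1, 1): mul(adj(p), mul(u, p)), (1, 2): mul(adj(p), mul(u, q)),
            (2, 1): mul(adj(q), mul(u, p)), (2, 2): mul(adj(q), mul(u, q))}

def internalize(c):               # p c11 p* + p c12 q* + q c21 p* + q c22 q*
    r = {}
    for (i, j), w in c.items():
        r = add(r, mul(p if i == 1 else q, mul(w, adj(p if j == 1 else q))))
    return r

def App(u, v, bound=100):         # u22 + sum_k u21 v (u11 v)^k u12
    c = ext(u)
    u11v = mul(c[(1, 1)], v)      # assumed nilpotent, as in the definition
    result, t = dict(c[(2, 2)]), c[(1, 2)]   # t runs through (u11 v)^k u12
    for _ in range(bound):
        if not t:
            break
        result = add(result, mul(c[(2, 1)], mul(v, t)))
        t = mul(u11v, t)
    return result

s = {n: n + 1 for n in range(16)}     # a sample shift on the basis
u = internalize({(1, 1): {}, (1, 2): s, (2, 1): adj(s), (2, 2): {}})
assert App(u, {5: 5}) == {4: 4}       # for this u, App(u, a) = s* a s
```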
<br />
{{Theorem|<br />
If <math>u</math> and <math>v</math> are <math>p</math>-isometries such that <math>u_{11}v</math> is nilpotent, then <math>\mathrm{App}(u,v)</math> is also a <math>p</math>-isometry.<br />
}}<br />
<br />
{{Proof|<br />
Let us write <math>E_k = u_{21}v(u_{11}v)^ku_{12}</math>. Recall that <math>u_{22}</math> and <math>u_{12}</math>, being external components of the <math>p</math>-isometry <math>u</math>, have disjoint domains. Thus this is also the case for <math>u_{22}</math> and <math>E_k</math>. Similarly <math>u_{22}</math> and <math>E_k</math> have disjoint codomains because <math>u_{22}</math> and <math>u_{21}</math> have disjoint codomains.<br />
<br />
Let now <math>k</math> and <math>l</math> be two integers such that <math>k>l</math> and let us compute for example the intersection of the codomains of <math>E_k</math> and <math>E_l</math>:<br />
: <math><br />
E_kE^*_kE_lE^*_l = (u_{21}v(u_{11}v)^ku_{12})(u^*_{12}(v^*u^*_{11})^kv^*u^*_{21})(u_{21}v(u_{11}v)^lu_{12})(u^*_{12}(v^*u^*_{11})^lv^*u_{21}^*)<br />
</math><br />
As <math>k>l</math> we may write <math>(v^*u_{11}^*)^k = (v^*u^*_{11})^{k-l-1}v^*u^*_{11}(v^*u^*_{11})^l</math>. Let us write <math>E = u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}</math> so that <math>E_kE^*_kE_lE^*_l = u_{21}v(u_{11}v)^ku_{12}u^*_{12}(v^*u^*_{11})^{k-l-1}v^*Eu^*_{12}(v^*u^*_{11})^lv^*u_{21}^*</math>. We have:<br />
: <math>\begin{align}<br />
E &= u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}\\<br />
&= (u^*_{11}u_{11}u^*_{11})(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}\\<br />
&= u^*_{11}(u_{11}u^*_{11})\bigl((v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^l\bigr)u_{12}\\<br />
&= u^*_{11}\bigl((v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^l\bigr)(u_{11}u^*_{11})u_{12}\\<br />
&= u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{11}u^*_{11}u_{12}\\<br />
&= 0<br />
\end{align}</math><br />
because <math>u_{11}</math> and <math>u_{12}</math> have disjoint codomains, thus <math>u^*_{11}u_{12} = 0</math>. <br />
<br />
Similarly we can show that <math>E_k</math> and <math>E_l</math> have disjoint domains. Therefore we have proved that all terms of the sum <math>\mathrm{App}(u,v)</math> have disjoint domains and disjoint codomains. Consequently <math>\mathrm{App}(u,v)</math> is a <math>p</math>-isometry.<br />
}}<br />
<br />
{{Theorem|<br />
Let <math>A</math> and <math>B</math> be two types and <math>u</math> a <math>p</math>-isometry. Then the two following conditions are equivalent:<br />
# <math>u\in A\limp B</math>;<br />
# for any <math>v\in A</math> we have:<br />
#* <math>u_{11}v</math> is nilpotent and<br />
#* <math>\mathrm{App}(u, v)\in B</math>.<br />
}}<br />
<br />
{{Proof|<br />
Let <math>v</math> and <math>w</math> be two <math>p</math>-isometries. If we compute<br />
: <math>(u.(pvp^* + qwq^*))^n = \bigl((pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*)(pvp^* + qwq^*)\bigr)^n</math><br />
we get a finite sum of monomial operators of the form:<br />
# <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math><br />
# <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{12}w(u_{22}w)^{i_m}q^*</math>,<br />
# <math>q(u_{22}w)^{i_0}u_{21}v(u_{11}v)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math> or<br />
# <math>q(u_{22}w)^{i_0}u_{21}v(u_{11}v)^{i_1}\dots u_{12}w(u_{22}w)^{i_m}q^*</math>,<br />
for all tuples of (nonnegative) integers <math>(i_0,\dots, i_m)</math> such that <math>i_0+\cdots+i_m+m = n</math>.<br />
<br />
Each of these monomials is a <math>p</math>-isometry. Furthermore they have disjoint domains and disjoint codomains because their sum is the <math>p</math>-isometry <math>(u.(pvp^* + qwq^*))^n</math>. This entails that <math>(u.(pvp^* + qwq^*))^n = 0</math> iff all these monomials are null.<br />
<br />
Suppose <math>u_{11}v</math> is nilpotent and consider:<br />
: <math>\bigl(\mathrm{App}(u,v)w\bigr)^n = \biggl(\bigl(u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12}\bigr)w\biggr)^n</math>.<br />
Developing we get a finite sum of monomials of the form:<br />
: 5. <math>(u_{22}w)^{l_0}u_{21}v(u_{11}v)^{k_1}u_{12}w(u_{22}w)^{l_1}\dots u_{21}v(u_{11}v)^{k_m}u_{12}w(u_{22}w)^{l_m}</math><br />
for all tuples <math>(l_0, k_1, l_1,\dots, k_m, l_m)</math> such that <math>l_0+\cdots+l_m + m = n</math> and <math>k_i</math> is less than the degree of nilpotency of <math>u_{11}v</math> for all <math>i</math>.<br />
<br />
Again as these monomials are <math>p</math>-isometries and their sum is the <math>p</math>-isometry <math>(\mathrm{App}(u,v)w)^n</math>, they have pairwise disjoint domains and pairwise disjoint codomains. Note that each of these monomials is equal to <math>q^*Mq</math> where <math>M</math> is a monomial of type 4 above.<br />
<br />
As before we thus have that <math>\bigl(\mathrm{App}(u,v)w\bigr)^n = 0</math> iff all monomials of type 5 are null.<br />
<br />
Suppose now that <math>u\in A\limp B</math> and <math>v\in A</math>. Then, since <math>0\in B\orth</math> (<math>0</math> belongs to any type), <math>u.(pvp^*) = pu_{11}vp^* + qu_{21}vp^*</math> is nilpotent; as <math>p^*(u.(pvp^*))^np = (u_{11}v)^n</math>, this entails that <math>u_{11}v</math> is nilpotent.<br />
<br />
Suppose further that <math>w\in B\orth</math>. Then <math>u.(pvp^*+qwq^*)</math> is nilpotent, thus there is an <math>N</math> such that <math>(u.(pvp^* + qwq^*))^n=0</math> for any <math>n\geq N</math>. This entails that all monomials of type 1 to 4 are null. Therefore all monomials appearing in the development of <math>(\mathrm{App}(u,v)w)^N</math> are null, which proves that <math>\mathrm{App}(u,v)w</math> is nilpotent. Thus <math>\mathrm{App}(u,v)\in B</math>.<br />
<br />
Conversely suppose that for any <math>v\in A</math> and <math>w\in B\orth</math>, <math>u_{11}v</math> and <math>\mathrm{App}(u,v)w</math> are nilpotent. Let <math>P</math> and <math>N</math> be their respective degrees of nilpotency and put <math>n=N(P+1)+N</math>. Then we claim that all monomials of type 1 to 4 appearing in the development of <math>(u.(pvp^*+qwq^*))^n</math> are null.<br />
<br />
Consider for example a monomial of type 1:<br />
: <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math><br />
with <math>i_0+\cdots+i_m + m = n</math>. Note that <math>m</math> must be even.<br />
<br />
If <math>i_{2k}\geq P</math> for some <math>0\leq k\leq m/2</math> then <math>(u_{11}v)^{i_{2k}}=0</math> thus our monomial is null. Otherwise if <math>i_{2k}<P</math> for all <math>k</math> we have:<br />
: <math>i_1+i_3+\cdots +i_{m-1} + m/2 = n - m/2 - (i_0+i_2+\cdots +i_m)</math><br />
thus:<br />
: <math>i_1+i_3+\cdots +i_{m-1} + m/2\geq n - m/2 - (1+m/2)P</math>.<br />
Now if <math>m/2\geq N</math> then <math>i_1+\cdots+i_{m-1}+m/2 \geq N</math>. Otherwise <math>1+m/2\leq N</math> thus<br />
: <math>i_1+i_3+\cdots +i_{m-1} + m/2\geq n - N - NP = N</math>.<br />
Since <math>N</math> is the degree of nilpotency of <math>\mathrm{App}(u,v)w</math> we have that the monomial:<br />
: <math>(u_{22}w)^{i_1}u_{21}v(u_{11}v)^{i_2}u_{12}w\dots(u_{11}v)^{i_{m-2}}u_{12}w(u_{22}w)^{i_{m-1}}</math><br />
is null, thus also the monomial of type 1 we started with.<br />
}}<br />
<br />
{{Corollary|<br />
If <math>A</math> and <math>B</math> are types then we have:<br />
: <math>A\limp B = \{u\in\mathcal{P} \text{ such that }\forall v\in A: u_{11}v\in\bot\text{ and } \mathrm{App}(u, v)\in B\}</math>.<br />
}}<br />
<br />
As an example if we compute the application of the interpretation of the identity <math>\iota</math> in type <math>A\limp A</math> to the operator <math>v\in A</math> then we have:<br />
: <math>\mathrm{App}(\iota, v) = \iota_{22} + \iota_{21}v\sum(\iota_{11}v)^k\iota_{12}</math>.<br />
Now recall that <math>\iota = pq^* + qp^*</math> so that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math> and we thus get:<br />
: <math>\mathrm{App}(\iota, v) = v</math><br />
as expected.<br />
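<br />
The same computation can be replayed on the partial permutation model of operators (an illustrative sketch, not part of the article; dicts <code>{n: m}</code> on basis indices, <math>p : e_n\mapsto e_{2n}</math>, <math>q : e_n\mapsto e_{2n+1}</math>): with <math>\iota</math> the parity swap, the application returns its argument unchanged.<br />
<br />
```python
def adj(u):                       # adjoint: the inverse partial permutation
    return {m: n for n, m in u.items()}

def mul(u, v):                    # operator product uv: apply v first, then u
    return {n: u[v[n]] for n in v if v[n] in u}

def add(u, v):                    # sum of operators with disjoint domains
    assert not set(u) & set(v)
    return {**u, **v}

N = 256
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}

def ext(u):                       # external components u_ij
    return {(1, 1): mul(adj(p), mul(u, p)), (1, 2): mul(adj(p), mul(u, q)),
            (2, 1): mul(adj(q), mul(u, p)), (2, 2): mul(adj(q), mul(u, q))}

def App(u, v, bound=100):         # u22 + sum_k u21 v (u11 v)^k u12
    c = ext(u)
    u11v = mul(c[(1, 1)], v)
    result, t = dict(c[(2, 2)]), c[(1, 2)]
    for _ in range(bound):
        if not t:
            break
        result = add(result, mul(c[(2, 1)], mul(v, t)))
        t = mul(u11v, t)
    return result

iota = add(mul(p, adj(q)), mul(q, adj(p)))   # iota = pq* + qp*
v = {0: 5, 1: 3}                             # an arbitrary small operator
assert App(iota, v) == v                     # the identity acts as the identity
```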
<br />
=== The tensor rule ===<br />
<br />
Let now <math>A, A', B</math> and <math>B'</math> be types and consider two operators <math>u</math> and <math>u'</math> respectively in <math>A\limp B</math> and <math>A'\limp B'</math>. We define an operator <math>u\tens u'</math> by:<br />
: <math>\begin{align}<br />
u\tens u' &= ppp^*upp^*p^* + qpq^*upp^*p^* + ppp^*uqp^*q^* + qpq^*uqp^*q^*\\<br />
&+ pqp^*u'pq^*p^* + qqq^*u'pq^*p^* + pqp^*u'qq^*q^* + qqq^*u'qq^*q^*<br />
\end{align}</math><br />
<br />
Once again the notation is motivated by linear logic syntax and conflicts with linear algebra practice, since what we denote by <math>u\tens u'</math> actually is the internalization of the direct sum <math>u\oplus u'</math>.<br />
<br />
Indeed if we think of <math>u</math> and <math>u'</math> as the internalizations of the matrices:<br />
: <math><br />
\begin{pmatrix}u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}<br />
</math> and <math><br />
\begin{pmatrix}u'_{11} & u'_{12}\\<br />
u'_{21} & u'_{22}<br />
\end{pmatrix}</math><br />
then we may write:<br />
: <math>\begin{align}<br />
u\tens u' &= ppu_{11}p^*p^* + qpu_{21}p^*p^* + ppu_{12}p^*q^* + qpu_{22}p^*q^*\\<br />
&+ pqu'_{11}q^*p^* + qqu'_{21}q^*p^* + pqu'_{12}q^*q^* + qqu'_{22}q^*q^*<br />
\end{align}</math><br />
<br />
Thus the components of <math>u\tens u'</math> are given by:<br />
: <math>(u\tens u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math>.<br />
and we see that <math>u\tens u'</math> is actually the internalization of the matrix:<br />
: <math><br />
\begin{pmatrix}<br />
u_{11} & 0 & u_{12} & 0 \\<br />
0 & u'_{11} & 0 & u'_{12} \\<br />
u_{21} & 0 & u_{22} & 0 \\<br />
0 & u'_{21} & 0 & u'_{22} \\<br />
\end{pmatrix}<br />
</math><br />
<br />
We now show that if <math>u</math> and <math>u'</math> are in types <math>A\limp B</math> and <math>A'\limp B'</math>, then <math>u\tens u'</math> is in <math>A\tens A'\limp B\tens B'</math>. For this we consider <math>v</math> and <math>v'</math> respectively in <math>A</math> and <math>A'</math>, so that <math>pvp^* + qv'q^*</math> is in <math>A\tens A'</math>, and we show that <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)\in B\tens B'</math>.<br />
<br />
Since <math>u</math> and <math>u'</math> are in <math>A\limp B</math> and <math>A'\limp B'</math> we have that <math>u_{11}v</math> and <math>u'_{11}v'</math> are nilpotent and that <math>\mathrm{App}(u, v)</math> and <math>\mathrm{App}(u', v')</math> are respectively in <math>B</math> and <math>B'</math>, thus:<br />
: <math>p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^* \in B\tens B'</math>.<br />
<br />
But we have:<br />
: <math>\begin{align}<br />
\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^n<br />
&= \bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^n\\<br />
&= (pu_{11}vp^* + qu'_{11}v'q^*)^n\\<br />
&= p(u_{11}v)^np^* + q(u'_{11}v')^nq^*<br />
\end{align}</math><br />
<br />
Therefore <math>(u\tens u')_{11}(pvp^* + qv'q^*)</math> is nilpotent. So we can compute <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)</math>:<br />
: <math>\begin{align}<br />
&\mathrm{App}(u\tens u', pvp^* + qv'q^*)\\<br />
&= (u\tens u')_{22} + (u\tens u')_{21}(pvp^* + qv'q^*)\sum\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^k(u\tens u')_{12}\\<br />
&= pu_{22}p^* + qu'_{22}q^* + (pu_{21}p^* + qu'_{21}q^*)(pvp^* + qv'q^*)\sum\bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^k(pu_{12}p^* + qu'_{12}q^*)\\<br />
&= p\bigl(u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}\bigr)p^* + q\bigl(u'_{22} + u'_{21}v'\sum(u'_{11}v')^ku'_{12}\bigr)q^*\\<br />
&= p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^*<br />
\end{align}</math><br />
thus lives in <math>B\tens B'</math>.<br />
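<br />
A sanity check on the partial permutation model (an illustration only, with our assumed encodings of <math>p</math> and <math>q</math>): the tensor of two copies of the identity morphism <math>\iota</math> acts componentwise, so applying it to <math>pv_1p^* + qv_2q^*</math> returns the argument itself.<br />
<br />
```python
def adj(u):
    return {m: n for n, m in u.items()}

def mul(u, v):                    # operator product uv: apply v first, then u
    return {n: u[v[n]] for n in v if v[n] in u}

def add(u, v):                    # sum of operators with disjoint domains
    assert not set(u) & set(v)
    return {**u, **v}

N = 256
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}

def ext(u):
    return {(1, 1): mul(adj(p), mul(u, p)), (1, 2): mul(adj(p), mul(u, q)),
            (2, 1): mul(adj(q), mul(u, p)), (2, 2): mul(adj(q), mul(u, q))}

def internalize(c):
    r = {}
    for (i, j), w in c.items():
        r = add(r, mul(p if i == 1 else q, mul(w, adj(p if j == 1 else q))))
    return r

def App(u, v, bound=100):
    c = ext(u)
    u11v = mul(c[(1, 1)], v)
    result, t = dict(c[(2, 2)]), c[(1, 2)]
    for _ in range(bound):
        if not t:
            break
        result = add(result, mul(c[(2, 1)], mul(v, t)))
        t = mul(u11v, t)
    return result

def tensor(u1, u2):   # (u1 tens u2)_ij = p (u1)_ij p* + q (u2)_ij q*
    c1, c2 = ext(u1), ext(u2)
    return internalize({ij: add(mul(p, mul(c1[ij], adj(p))),
                                mul(q, mul(c2[ij], adj(q)))) for ij in c1})

iota = add(mul(p, adj(q)), mul(q, adj(p)))
v1, v2 = {0: 4}, {1: 2}
arg = add(mul(p, mul(v1, adj(p))), mul(q, mul(v2, adj(q))))   # p v1 p* + q v2 q*
# App(iota, v) = v on each component, so the result is the argument itself:
assert App(tensor(iota, iota), arg) == arg
```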
<br />
=== Other monoidal constructions ===<br />
<br />
==== Contraposition ====<br />
<br />
Let <math>A</math> and <math>B</math> be some types; we have:<br />
: <math>A\limp B = A\orth\limpinv B\orth</math><br />
<br />
Indeed, <math>u\in A\limp B</math> means that for any <math>v</math> and <math>w</math> in respectively <math>A</math> and <math>B\orth</math> we have <math>u.(pvp^* + qwq^*)\in\bot</math> which is exactly the definition of <math>A\orth\limpinv B\orth</math>.<br />
<br />
We will denote <math>u\orth</math> the operator:<br />
: <math>u\orth = pu_{22}p^* + pu_{21}q^* + qu_{12}p^* + qu_{11}q^*</math><br />
where <math>u_{ij}</math> is given by externalization. Therefore the externalization of <math>u\orth</math> is:<br />
: <math>(u\orth)_{ij} = u_{\bar i\,\bar j}</math> where <math>\bar .</math> is defined by <math>\bar1 = 2, \bar2 = 1</math>.<br />
From this we deduce that <math>u\orth\in B\orth\limp A\orth</math> and that <math>(u\orth)\orth = u</math>.<br />
<br />
==== Commutativity ====<br />
Let <math>\sigma</math> be the operator:<br />
: <math>\sigma = ppq^*q^* +pqp^*q^* + qpq^*p^* + qqp^*p^*</math>.<br />
One can check that <math>\sigma</math> is the internalization of the operator <math>S</math> on <math>H\oplus H\oplus H\oplus H</math> defined by: <math>S(x_1\oplus x_2\oplus x_3\oplus x_4) = x_4\oplus x_3\oplus x_2\oplus x_1</math>. In particular the components of <math>\sigma</math> are:<br />
: <math>\sigma_{11} = \sigma_{22} = 0</math>;<br />
: <math>\sigma_{12} = \sigma_{21} = pq^* + qp^*</math>.<br />
<br />
Let <math>A</math> and <math>B</math> be types and <math>u</math> and <math>v</math> be operators in <math>A</math> and <math>B</math>. Then <math>pup^* + qvq^*</math> is in <math>A\tens B</math> and as <math>\sigma_{11}.(pup^* + qvq^*) = 0</math> we may compute:<br />
: <math>\begin{align}<br />
\mathrm{App}(\sigma, pup^* + qvq^*) <br />
&= \sigma_{22} + \sigma_{21}(pup^* + qvq^*)\sum(\sigma_{11}(pup^* + qvq^*))^k\sigma_{12}\\<br />
&= (pq^* + qp^*)(pup^* + qvq^*)(pq^* + qp^*)\\<br />
&= pvp^* + quq^*<br />
\end{align}</math><br />
But <math>pvp^* + quq^*\in B\tens A</math>, thus we have shown that:<br />
: <math>\sigma\in (A\tens B) \limp (B\tens A)</math>.<br />
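<br />
This computation can be replayed on the partial permutation model (an illustrative sketch with our assumed encodings, not part of the article): <math>\sigma</math> exchanges the two components of the tensor.<br />
<br />
```python
def adj(u):
    return {m: n for n, m in u.items()}

def mul(u, v):                    # operator product uv: apply v first, then u
    return {n: u[v[n]] for n in v if v[n] in u}

def add(u, v):                    # sum of operators with disjoint domains
    assert not set(u) & set(v)
    return {**u, **v}

N = 256
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}

def ext(u):
    return {(1, 1): mul(adj(p), mul(u, p)), (1, 2): mul(adj(p), mul(u, q)),
            (2, 1): mul(adj(q), mul(u, p)), (2, 2): mul(adj(q), mul(u, q))}

def App(u, v, bound=100):
    c = ext(u)
    u11v = mul(c[(1, 1)], v)
    result, t = dict(c[(2, 2)]), c[(1, 2)]
    for _ in range(bound):
        if not t:
            break
        result = add(result, mul(c[(2, 1)], mul(v, t)))
        t = mul(u11v, t)
    return result

pp, pq, qp, qq = mul(p, p), mul(p, q), mul(q, p), mul(q, q)
sigma = add(add(mul(pp, adj(qq)), mul(pq, adj(qp))),   # pp q*q* + pq p*q*
            add(mul(qp, adj(pq)), mul(qq, adj(pp))))   # + qp q*p* + qq p*p*
assert ext(sigma)[(1, 1)] == {}                        # sigma_11 = 0

u, v = {0: 3}, {1: 2}
w = add(mul(p, mul(u, adj(p))), mul(q, mul(v, adj(q))))         # p u p* + q v q*
swapped = add(mul(p, mul(v, adj(p))), mul(q, mul(u, adj(q))))   # p v p* + q u q*
assert App(sigma, w) == swapped
```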
<br />
==== Distributivity ====<br />
We get distributivity by considering the operator:<br />
: <math>\delta = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math><br />
that is similarly shown to be in type <math>A\tens(B\tens C)\limp(A\tens B)\tens C</math> for any types <math>A</math>, <math>B</math> and <math>C</math>.<br />
<br />
<br />
==== Weak distributivity ====<br />
We can finally get weak distributivity thanks to the operators:<br />
: <math>\delta_1 = pppp^*q^* + ppqp^*q^*q^* + pqq^*q^*q^* + qpp^*p^*p^* + qqp q^*p^*p^* + qqq q^*p^*</math> and<br />
: <math>\delta_2 = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math>.<br />
<br />
Given three types <math>A</math>, <math>B</math> and <math>C</math>, one can show that:<br />
: <math>\delta_1</math> has type <math>((A\limp B)\tens C)\limp A\limp (B\tens C)</math> and<br />
: <math>\delta_2</math> has type <math>(A\tens(B\limp C))\limp (A\limp B)\limp C</math>.<br />
<br />
=== Execution formula, version 2: composition ===<br />
<br />
Let <math>A</math>, <math>B</math> and <math>C</math> be types and <math>u</math> and <math>v</math> be operators respectively in types <math>A\limp B</math> and <math>B\limp C</math>.<br />
<br />
As usual we will denote <math>u_{ij}</math> and <math>v_{ij}</math> the operators obtained by externalization of <math>u</math> and <math>v</math>, ''eg'', <math>u_{11} = p^*up</math>, ...<br />
<br />
As <math>u</math> is in <math>A\limp B</math> we have that <math>\mathrm{App}(u, 0)=u_{22}\in B</math>; similarly as <math>v\in B\limp C</math>, thus <math>v\orth\in C\orth\limp B\orth</math>, we have <math>\mathrm{App}(v\orth, 0) = v_{11}\in B\orth</math>. Thus <math>u_{22}v_{11}</math> is nilpotent.<br />
<br />
We define the operator <math>\mathrm{Comp}(u, v)</math> by:<br />
: <math>\begin{align}<br />
\mathrm{Comp}(u, v) &= p(u_{11} + u_{12}\sum(v_{11}u_{22})^k\,v_{11}u_{21})p^*\\<br />
&+ p(u_{12}\sum(v_{11}u_{22})^k\,v_{12})q^*\\<br />
&+ q(v_{21}\sum(u_{22}v_{11})^k\,u_{21})p^*\\<br />
&+ q(v_{22} + v_{21}\sum(u_{22}v_{11})^k\,u_{22}v_{12})q^*<br />
\end{align}</math><br />
<br />
This is well defined since <math>u_{22}v_{11}</math> is nilpotent. As an example let us compute the composition of <math>u</math> and <math>\iota</math> in type <math>B\limp B</math>; recall that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math>, so we get:<br />
: <math><br />
\mathrm{Comp}(u, \iota) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^* = u<br />
</math><br />
A similar computation shows that <math>\mathrm{Comp}(\iota, v) = v</math> (we use <math>pp^* + qq^* = 1</math> here).<br />
<br />
Coming back to the general case we claim that <math>\mathrm{Comp}(u, v)</math> is in <math>A\limp C</math>: let <math>a</math> be an operator in <math>A</math>. By computation we can check that:<br />
: <math>\mathrm{App}(\mathrm{Comp}(u, v), a) = \mathrm{App}(v, \mathrm{App}(u, a))</math>.<br />
Now since <math>u</math> is in <math>A\limp B</math>, <math>\mathrm{App}(u, a)</math> is in <math>B</math> and since <math>v</math> is in <math>B\limp C</math>, <math>\mathrm{App}(v, \mathrm{App}(u, a))</math> is in <math>C</math>.<br />
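<br />
Both facts can be tested on the partial permutation model (an illustrative sketch; the sample morphism <code>u</code> built from a shift <code>s</code> is our own, not taken from the article):<br />
<br />
```python
def adj(u):
    return {m: n for n, m in u.items()}

def mul(u, v):                    # operator product uv: apply v first, then u
    return {n: u[v[n]] for n in v if v[n] in u}

def add(u, v):                    # sum of operators with disjoint domains
    assert not set(u) & set(v)
    return {**u, **v}

N = 256
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}

def ext(u):
    return {(1, 1): mul(adj(p), mul(u, p)), (1, 2): mul(adj(p), mul(u, q)),
            (2, 1): mul(adj(q), mul(u, p)), (2, 2): mul(adj(q), mul(u, q))}

def internalize(c):
    r = {}
    for (i, j), w in c.items():
        r = add(r, mul(p if i == 1 else q, mul(w, adj(p if j == 1 else q))))
    return r

def App(u, v, bound=100):
    c = ext(u)
    u11v = mul(c[(1, 1)], v)
    result, t = dict(c[(2, 2)]), c[(1, 2)]
    for _ in range(bound):
        if not t:
            break
        result = add(result, mul(c[(2, 1)], mul(v, t)))
        t = mul(u11v, t)
    return result

def geo(x, m, y, bound=100):      # x y + x m y + x m^2 y + ...  (m nilpotent)
    r, t = {}, dict(y)
    for _ in range(bound):
        if not t:
            break
        r = add(r, mul(x, t))
        t = mul(m, t)
    return r

def Comp(u, v):                   # the composition formula above
    cu, cv = ext(u), ext(v)
    v11u22 = mul(cv[(1, 1)], cu[(2, 2)])
    u22v11 = mul(cu[(2, 2)], cv[(1, 1)])
    return internalize({
        (1, 1): add(cu[(1, 1)],
                    geo(cu[(1, 2)], v11u22, mul(cv[(1, 1)], cu[(2, 1)]))),
        (1, 2): geo(cu[(1, 2)], v11u22, cv[(1, 2)]),
        (2, 1): geo(cv[(2, 1)], u22v11, cu[(2, 1)]),
        (2, 2): add(cv[(2, 2)],
                    geo(cv[(2, 1)], u22v11, mul(cu[(2, 2)], cv[(1, 2)])))})

iota = add(mul(p, adj(q)), mul(q, adj(p)))
s = {n: n + 1 for n in range(16)}     # a shift, used to build a sample morphism
u = internalize({(1, 1): {}, (1, 2): s, (2, 1): adj(s), (2, 2): {}})
assert Comp(u, iota) == u                          # iota is neutral
a = {5: 5}
assert App(Comp(u, u), a) == App(u, App(u, a))     # Comp agrees with App
```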
<br />
If we now consider a type <math>D</math> and an operator <math>w</math> in <math>C\limp D</math> then we have:<br />
: <math>\mathrm{Comp}(\mathrm{Comp}(u, v), w) = \mathrm{Comp}(u,<br />
\mathrm{Comp}(v, w))</math>.<br />
<br />
Putting together the results of this section we finally have:<br />
<br />
{{Theorem|<br />
Let GoI(H) be defined by:<br />
* objects are types, ''ie'' sets <math>A</math> of operators satisfying: <math>A\biorth = A</math>;<br />
* morphisms from <math>A</math> to <math>B</math> are operators in type <math>A\limp B</math>;<br />
* composition is given by the formula above.<br />
<br />
Then GoI(H) is a star-autonomous category.<br />
}}<br />
<br />
= The Geometry of Interaction as an abstract machine =<br />
<br />
= Notes and references =<br />
<br />
<references/></div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interactionGeometry of interaction2010-05-14T15:24:29Z<p>Laurent Regnier: /* From operators to matrices: internalization/externalization */ precision</p>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operators algebra: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math> and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'' that bears some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (Multiplicative and Exponential fragment) which is enough to encode lambda-calculus. Since then Girard has proposed several improvements: firstly the extension to the additive connectives, known as ''Geometry of Interaction 3'', and more recently a complete reformulation using Von Neumann algebras that allows one to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines; the execution formula appears as the composition of two automata interacting through a common interface. Also the execution formula has rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern, already mentioned in the section on [[coherent semantics]] under the name ''symmetric reducibility'', that was first put to use in [[phase semantics]]. First set a general space <math>P</math> called the ''proof space'' because this is where the interpretations of proofs will live. Make sure that <math>P</math> is a (not necessarily commutative) monoid. In the case of GoI, the proof space is a subset of the space of bounded operators on <math>\ell^2</math>.<br />
<br />
Second define a particular subset of <math>P</math> that will be denoted by <math>\bot</math>; then derive a duality on <math>P</math>: for <math>u,v\in P</math>, <math>u</math> and <math>v</math> are dual<ref>In modern terms one says that <math>u</math> and <math>v</math> are ''polar''.</ref> iff <math>uv\in\bot</math>.<br />
<br />
For the GoI, two dualities have proved to work; we will consider the first one: nilpotency, ''ie'', <math>\bot</math> is the set of nilpotent operators in <math>P</math>. Let us spell this out: two operators <math>u</math> and <math>v</math> are dual if there is a nonnegative integer <math>n</math> such that <math>(uv)^n = 0</math>. This duality is symmetric: if <math>uv</math> is nilpotent then <math>vu</math> is nilpotent also, since <math>(uv)^n = 0</math> entails <math>(vu)^{n+1} = v(uv)^nu = 0</math>.<br />
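<br />
A toy check in Python (partial maps on basis indices encoded as dicts; an illustration, not part of the article):<br />
<br />
```python
def mul(u, v):                    # operator product uv: apply v first, then u
    return {n: u[v[n]] for n in v if v[n] in u}

def nilpotent(u, bound=100):      # does some power of u vanish?
    w = dict(u)
    for _ in range(bound):
        if not w:
            return True
        w = mul(u, w)
    return False

u, v = {0: 1, 1: 2}, {2: 0}       # partial maps on basis indices
assert nilpotent(mul(u, v)) and nilpotent(mul(v, u))   # duality is symmetric
w = {0: 0}                        # a projector: w^n = w, never null
assert not nilpotent(w)
```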
<br />
When <math>X</math> is a subset of <math>P</math> define <math>X\orth</math> as the set of elements of <math>P</math> that are dual to all elements of <math>X</math>:<br />
: <math>X\orth = \{u\in P, \forall v\in X, uv\in\bot\}</math>.<br />
<br />
This construction has a few properties that we will use without mention in the sequel. Given two subsets <math>X</math> and <math>Y</math> of <math>P</math> we have:<br />
* if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;<br />
* <math>X\subset X\biorth</math>;<br />
* <math>X\triorth = X\orth</math>.<br />
<br />
Last define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for every operator <math>v\in T\orth</math>, that is such that <math>u'v\in\bot</math> for all <math>u'\in T</math>, we have <math>uv\in\bot</math>.<br />
<br />
The real work<ref>The difficulty is to find the right duality that will make logical operations interpretable. General conditions that allow one to achieve this have been formulated by Hyland and Schalk thanks to their theory of ''double gluing''.</ref> is now to interpret logical operations, that is to associate a type to each formula, an object to each proof and show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
=== Operators, partial isometries ===<br />
<br />
We will denote by <math>H</math> the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> their ''scalar product'' is:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''orthogonal'' if their scalar product is null. We will say that two subspaces are ''disjoint'' when any two vectors taken in each subspace are orthogonal. Note that this notion is different from the set theoretic one, in particular two disjoint subspaces always have exactly one vector in common: <math>0</math>.<br />
<br />
The ''norm'' of a vector is the square root of the scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical ''hilbertian basis'' of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kronecker symbol: <math>\delta_{kn}=1</math> if <math>k=n</math>, <math>0</math> otherwise. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
An ''operator'' on <math>H</math> is a ''continuous'' linear map from <math>H</math> to <math>H</math>.<ref>Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.</ref> The set of (bounded) operators is denoted by <math>\mathcal{B}(H)</math>.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are annihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel, ''ie'', the maximal subspace disjoint from the kernel:<br />
<br />
* <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
* <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
* <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>. Adjointness is well behaved w.r.t. composition of operators:<br />
: <math>(uv)^* = v^*u^*</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>0</math> (the projector<br />
on the null subspace) or <math>1</math>, that is an operator <math>p</math><br />
such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is self-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^* u =<br />
u</math>; this condition entails that we also have <math>u^*uu^* =<br />
u^*</math>. As a consequence <math>u^*u</math> and <math>uu^*</math> are both projectors, called respectively the ''initial'' and the ''final'' projector of <math>u</math> because their (co)domains are respectively the domain and the codomain of <math>u</math>:<br />
* <math>\mathrm{Dom}(u^*u) = \mathrm{Codom}(u^*u) = \mathrm{Dom}(u)</math>;<br />
* <math>\mathrm{Dom}(uu^*) = \mathrm{Codom}(uu^*) = \mathrm{Codom}(u)</math>.<br />
<br />
The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math>, that is if <math>u^* u = 1</math>, we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries then we have:<br />
* <math>uv^* = 0</math> iff <math>u^*uv^*v = 0</math> iff the domains of <math>u</math> and <math>v</math> are disjoint;<br />
* <math>u^*v = 0</math> iff <math>uu^*vv^* = 0</math> iff the codomains of <math>u</math> and <math>v</math> are disjoint;<br />
* <math>uu^* + vv^* = 1</math> iff the codomains of <math>u</math> and <math>v</math> are disjoint and their direct sum is <math>H</math>.<br />
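These identities are easy to check on a finite-dimensional sketch. The following snippet is our own illustration (truncating <math>\ell^2</math> to dimension 4, with numpy matrices standing in for operators; none of its names belong to the article's formalism): it verifies the defining equation of a partial isometry together with its initial and final projectors.<br />

```python
import numpy as np

# Finite-dimensional sketch: u maps e_0 -> e_2, e_1 -> e_3
# and annihilates e_2, e_3 (real entries, so adjoint = transpose).
u = np.zeros((4, 4))
u[2, 0] = 1.0
u[3, 1] = 1.0

# Defining equation of a partial isometry: u u* u = u.
assert np.allclose(u @ u.T @ u, u)

# Initial projector u*u projects onto the domain (span of e_0, e_1);
# final projector uu* projects onto the codomain (span of e_2, e_3).
assert np.allclose(u.T @ u, np.diag([1.0, 1.0, 0.0, 0.0]))
assert np.allclose(u @ u.T, np.diag([0.0, 0.0, 1.0, 1.0]))
```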
<br />
=== Partial permutations ===<br />
<br />
We will now define our proof space which turns out to be the set of partial isometries acting as permutations on the canonical basis <math>(e_n)_{n\in\mathbb{N}}</math>.<br />
<br />
More precisely a ''partial permutation'' <math>\varphi</math> on <math>\mathbb{N}</math> is a one-to-one map from a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
* <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
* if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
* the codomain of <math>\varphi\circ\psi</math> is the image of the domain: <math>C_{\varphi\circ\psi} = \{\varphi(\psi(n)), n\in D_{\varphi\circ\psi}\}</math>.<br />
<br />
Partial permutations are well known to form an ''inverse monoid'', a structure that we now detail.<br />
<br />
Given a subset <math>D</math> of <math>\mathbb{N}</math>, the ''partial identity'' on <math>D</math> is the partial permutation <math>\varphi</math> defined by:<br />
* <math>D_\varphi = D</math>;<br />
* <math>\varphi(n) = n</math> for any <math>n\in D_\varphi</math>.<br />
Thus the codomain of <math>\varphi</math> is <math>D</math>.<br />
<br />
The partial identity on <math>D</math> will be denoted by <math>1_D</math>. Partial identities are idempotent for composition.<br />
<br />
Among partial identities one finds the identity on the empty subset, that is the empty map, that we will denote by <math>0</math>, and the identity on <math>\mathbb{N}</math> that we will denote by <math>1</math>. This latter permutation is the neutral element for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and which satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
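As a sketch, finite partial permutations can be modelled by Python dicts <math>\{n: \varphi(n)\}</math>; the helper names <code>compose</code>, <code>identity_on</code> and <code>inverse</code> are ours, not part of the article. The inverse-monoid identities above then become executable checks:<br />

```python
# Partial permutations as finite dicts {n: phi(n)}.
def compose(phi, psi):
    """phi o psi: defined on n iff psi(n) is defined and lies in dom(phi)."""
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def identity_on(D):
    """The partial identity 1_D."""
    return {n: n for n in D}

def inverse(phi):
    """The inverse partial permutation phi^{-1}."""
    return {m: n for n, m in phi.items()}

phi = {0: 3, 1: 4}  # domain {0, 1}, codomain {3, 4}
assert compose(inverse(phi), phi) == identity_on({0, 1})  # phi^-1 o phi = 1_D
assert compose(phi, inverse(phi)) == identity_on({3, 4})  # phi o phi^-1 = 1_C
assert compose(identity_on({3, 4}), phi) == phi           # partial identities
assert compose(phi, identity_on({0, 1})) == phi           # are neutral on phi
```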
<br />
=== The proof space ===<br />
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other terms if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined so that the definition of <math>u_\varphi</math> reads:<br />
: <math>u_\varphi(e_n) = e_{\varphi(n)}</math>.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. In particular if <math>\varphi</math> is <math>1_D</math> then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.<br />
<br />
{{Definition|<br />
We call ''<math>p</math>-isometry'' a partial isometry of the form <math>u_\varphi</math> where <math>\varphi</math> is a partial permutation on <math>\mathbb{N}</math>. The ''proof space'' <math>\mathcal{P}</math> is the set of all <math>p</math>-isometries.<br />
}}<br />
<br />
{{Proposition|<br />
Let <math>\varphi</math> and <math>\psi</math> be two partial permutations. We have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
The adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the initial projector of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>.<br />
<br />
and the final projector of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u^*_\varphi = u_{1_{C_\varphi}}</math>.<br />
<br />
If <math>p</math> is a projector in <math>\mathcal{P}</math> then there is a partial identity <math>1_D</math> such that <math>p= u_{1_D}</math>.<br />
<br />
Projectors commute, in particular we have:<br />
: <math>u_\varphi u_\varphi^*u_\psi u_\psi^* = u_\psi u_\psi^*u_\varphi u_\varphi^*</math>.<br />
}}<br />
<br />
Note that this entails all the other commutations of projectors: <math>u^*_\varphi u_\varphi u_\psi u^*_\psi = u_\psi u^*_\psi u^*_\varphi u_\varphi</math> and <math>u^*_\varphi u_\varphi u^*_\psi u_\psi = u^*_\psi u_\psi u^*_\varphi u_\varphi</math>.<br />
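The proposition lends itself to a quick numerical check. The sketch below (our own: truncated basis, dicts for partial permutations, transpose for the adjoint) verifies <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math> and <math>u_\varphi^* = u_{\varphi^{-1}}</math> on a small example.<br />

```python
import numpy as np

def u_of(phi, N):
    """Matrix of the p-isometry u_phi on the truncated basis e_0..e_{N-1}."""
    m = np.zeros((N, N))
    for n, k in phi.items():
        m[k, n] = 1.0
    return m

def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

phi, psi, N = {0: 2, 1: 3}, {2: 0, 3: 4}, 5

# u_phi u_psi = u_{phi o psi}
assert np.allclose(u_of(phi, N) @ u_of(psi, N), u_of(compose(phi, psi), N))

# The adjoint (here: transpose) of u_phi is u_{phi^{-1}}.
inv = {m: n for n, m in phi.items()}
assert np.allclose(u_of(phi, N).T, u_of(inv, N))
```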
<br />
In particular note that <math>0</math> is a <math>p</math>-isometry. The set <math>\mathcal{P}</math> is a submonoid of <math>\mathcal{B}(H)</math> but it is not a subalgebra.<ref><math>\mathcal{P}</math> is the normalizing groupoid of the maximal commutative subalgebra of <math>\mathcal{B}(H)</math> consisting of all operators ''diagonalizable'' in the canonical basis.</ref> In general given <math>u,v\in\mathcal{P}</math> we don't necessarily have <math>u+v\in\mathcal{P}</math>. However we have:<br />
<br />
{{Proposition|<br />
Let <math>u, v\in\mathcal{P}</math>. Then <math>u+v\in\mathcal{P}</math> iff <math>u</math> and <math>v</math> have disjoint domains and disjoint codomains, that is:<br />
: <math>u+v\in\mathcal{P}</math> iff <math>uu^*vv^* = u^*uv^*v = 0</math>.<br />
}}<br />
<br />
{{Proof|<br />
Suppose for contradiction that <math>e_n</math> is in the domains of <math>u</math> and <math>v</math>. There are integers <math>p</math> and <math>q</math> such that <math>u(e_n) = e_p</math> and <math>v(e_n) = e_q</math> thus <math>(u+v)(e_n) = e_p + e_q</math> which is not a basis vector; therefore <math>u+v</math> is not a <math>p</math>-isometry. The argument for codomains is symmetric, considering <math>(u+v)^* = u^* + v^*</math>. Conversely if the domains and the codomains are disjoint then <math>u+v</math> is the <math>p</math>-isometry associated to the union of the two partial permutations.<br />
}}<br />
<br />
As a corollary note that if <math>u+v=0</math> then <math>u=v=0</math>.<br />
<br />
=== From operators to matrices: internalization/externalization ===<br />
<br />
It will be convenient to view operators on <math>H</math> as acting on <math>H\oplus H</math>, and conversely. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\rightsquigarrow p(x)+q(y)</math> where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is<br />
satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are<br />
disjoint, thus we have <math>p^*q = q^*p = 0</math>. As the sum of their<br />
codomains is the full space <math>H</math> we also have <math>pp^* + qq^* = 1</math>.<br />
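In the article <math>p</math> and <math>q</math> are endomorphisms of <math>H</math>; in a finite truncation the isomorphism <math>H\oplus H\cong H</math> forces rectangular matrices instead. With that caveat, the three identities above can be checked directly (a sketch of ours, not the article's construction):<br />

```python
import numpy as np

N = 4  # truncation: p, q map the first N basis vectors into dimension 2N
p = np.zeros((2 * N, N))
q = np.zeros((2 * N, N))
for n in range(N):
    p[2 * n, n] = 1.0      # p: e_n -> e_{2n}
    q[2 * n + 1, n] = 1.0  # q: e_n -> e_{2n+1}

assert np.allclose(p.T @ p, np.eye(N))                # full domain: p*p = 1
assert np.allclose(q.T @ q, np.eye(N))                # full domain: q*q = 1
assert np.allclose(p.T @ q, 0)                        # disjoint codomains
assert np.allclose(p @ p.T + q @ q.T, np.eye(2 * N))  # pp* + qq* = 1
```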
<br />
Note that we have chosen <math>p</math> and <math>q</math> in <math>\mathcal{P}</math>. However the choice is arbitrary: any two <math>p</math>-isometries with full domain and disjoint codomains would do the job.<br />
<br />
Given an operator <math>u</math> on <math>H</math> we may ''externalize'' it obtaining an operator <math>U</math> on <math>H\oplus H</math> defined by the matrix:<br />
: <math>U = \begin{pmatrix}<br />
u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}</math><br />
where the <math>u_{ij}</math>'s are given by:<br />
: <math>u_{11} = p^*up</math>;<br />
: <math>u_{12} = p^*uq</math>;<br />
: <math>u_{21} = q^*up</math>;<br />
: <math>u_{22} = q^*uq</math>.<br />
<br />
The <math>u_{ij}</math>'s are called the ''external components'' of <math>u</math>. The externalization is functorial in the sense that if <math>v</math> is another operator externalized as:<br />
: <math>V = \begin{pmatrix}<br />
v_{11} & v_{12}\\<br />
v_{21} & v_{22}<br />
\end{pmatrix} <br />
= \begin{pmatrix}<br />
p^*vp & p^*vq\\<br />
q^*vp & q^*vq<br />
\end{pmatrix}<br />
</math><br />
then the externalization of <math>uv</math> is the matrix product <math>UV</math>.<br />
<br />
As <math>pp^* + qq^* = 1</math> we have:<br />
: <math>u = (pp^*+qq^*)u(pp^*+qq^*) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math><br />
which entails that externalization is reversible, its converse being called ''internalization''.<br />
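Externalization, functoriality and internalization can all be tested on the truncated model (rectangular <math>p</math>, <math>q</math> as in the sketch above; the names <code>ext</code>, <code>U</code>, <code>V</code> are ours):<br />

```python
import numpy as np

N = 3
p, q = np.zeros((2 * N, N)), np.zeros((2 * N, N))
for n in range(N):
    p[2 * n, n], q[2 * n + 1, n] = 1.0, 1.0

rng = np.random.default_rng(0)
u, v = rng.standard_normal((2 * N, 2 * N)), rng.standard_normal((2 * N, 2 * N))

def ext(w):
    """External components w_ij = [p*wp, p*wq; q*wp, q*wq]."""
    return [[p.T @ w @ p, p.T @ w @ q],
            [q.T @ w @ p, q.T @ w @ q]]

U, V = ext(u), ext(v)

# Functoriality: the externalization of uv is the matrix product UV.
UV = [[sum(U[i][k] @ V[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
assert all(np.allclose(ext(u @ v)[i][j], UV[i][j])
           for i in range(2) for j in range(2))

# Internalization: u = p u11 p* + p u12 q* + q u21 p* + q u22 q*.
assert np.allclose(p @ U[0][0] @ p.T + p @ U[0][1] @ q.T
                   + q @ U[1][0] @ p.T + q @ U[1][1] @ q.T, u)
```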
<br />
If we suppose that <math>u</math> is a <math>p</math>-isometry then so are its components <math>u_{ij}</math>. Thus the formula above entails that the four terms of the sum have pairwise disjoint domains and pairwise disjoint codomains, from which we deduce:<br />
<br />
{{Proposition|<br />
If <math>u</math> is a <math>p</math>-isometry and <math>u_{ij}</math> are its external components then:<br />
* <math>u_{1j}</math> and <math>u_{2j}</math> have disjoint domains, that is <math>u_{1j}^*u_{1j}u_{2j}^*u_{2j} = 0</math> for <math>j=1,2</math>;<br />
* <math>u_{i1}</math> and <math>u_{i2}</math> have disjoint codomains, that is <math>u_{i1}u_{i1}^*u_{i2}u_{i2}^* = 0</math> for <math>i=1,2</math>.<br />
}}<br />
<br />
As an example of computation in <math>\mathcal{P}</math> let us check that the product of the final projectors of <math>pu_{11}p^*</math> and <math>pu_{12}q^*</math> is null:<br />
: <math>\begin{align}<br />
(pu_{11}p^*)(pu^*_{11}p^*)(pu_{12}q^*)(qu_{12}^*p^*)<br />
&= pu_{11}u_{11}^*u_{12}u_{12}^*p^*\\<br />
&= pp^*upp^*u^*pp^*uqq^*u^*pp^*\\<br />
&= pp^*u(pp^*)(u^*pp^*u)qq^*u^*pp^*\\<br />
&= pp^*u(u^*pp^*u)(pp^*)qq^*u^*pp^*\\<br />
&= pp^*uu^*pp^*u(pp^*)(qq^*)u^*pp^*\\<br />
&= 0<br />
\end{align}</math><br />
where we used the fact that all projectors in <math>\mathcal{P}</math> commute, which is in particular the case of <math>pp^*</math> and <math>u^*pp^*u</math>.<br />
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are <math>p</math>-isometries we say they are dual when <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators. A ''type'' is a subset of <math>\mathcal{P}</math> that is equal to its bidual. In particular <math>X\orth</math> is a type for any <math>X\subset\mathcal{P}</math>. We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
<br />
=== The tensor and the linear application ===<br />
<br />
If <math>u</math> and <math>v</math> are two <math>p</math>-isometries, summing them doesn't in general produce a <math>p</math>-isometry. However as <math>pup^*</math> and <math>qvq^*</math> have disjoint domains and disjoint codomains it is true that <math>pup^* + qvq^*</math> is a <math>p</math>-isometry. Given two types <math>A</math> and <math>B</math>, we thus define their ''tensor'' by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by bidual to make sure that we obtain a type.<br />
<br />
From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
{{Remark|<br />
This so-called tensor resembles a sum rather than a product. We will stick to this terminology though because it defines the interpretation of the tensor connective of linear logic.<br />
}}<br />
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we get:<br />
: <math>A\limp B = \{u\in\mathcal{P}\text{ s.t. } \forall v\in A, \forall w\in B\orth,\, u.(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
=== The identity ===<br />
<br />
Given a type <math>A</math> we are to find an operator <math>\iota</math> in type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. This is the case since <math>u</math> is in <math>A</math> and <math>v</math> in <math>A\orth</math>.<br />
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
<br />
=== The execution formula, version 1: application ===<br />
<br />
{{Definition|<br />
Let <math>u</math> and <math>v</math> be two operators; as above denote by <math>u_{ij}</math> the external components of <math>u</math>. If <math>u_{11}v</math> is nilpotent we define the ''application of <math>u</math> to <math>v</math>'' by:<br />
: <math>\mathrm{App}(u,v) = u_{22} + u_{21}v\sum_k(u_{11}v)^ku_{12}</math>.<br />
}}<br />
<br />
Note that the hypothesis that <math>u_{11}v</math> is nilpotent entails that the sum <math>\sum_k(u_{11}v)^k</math> is actually finite. It would be enough to assume that this sum converges. For simplicity we stick to the nilpotency condition, but we should mention that weak nilpotency would do as well.<br />
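In the truncated model the execution formula is a short loop: accumulate <math>\sum_k(u_{11}v)^k</math> until the powers of <math>u_{11}v</math> vanish. The sketch below (our own helper <code>app</code>, assuming nilpotency) also checks the computation of the section on the identity, <math>\mathrm{App}(\iota, v) = v</math>:<br />

```python
import numpy as np

N = 2
p, q = np.zeros((2 * N, N)), np.zeros((2 * N, N))
for n in range(N):
    p[2 * n, n], q[2 * n + 1, n] = 1.0, 1.0

def app(u, v, max_iter=100):
    """Execution formula App(u, v) = u22 + u21 v (sum_k (u11 v)^k) u12."""
    u11, u12 = p.T @ u @ p, p.T @ u @ q
    u21, u22 = q.T @ u @ p, q.T @ u @ q
    s, term = np.zeros((N, N)), np.eye(N)  # accumulate sum_k (u11 v)^k
    for _ in range(max_iter):
        if np.allclose(term, 0):
            break
        s, term = s + term, term @ u11 @ v
    else:
        raise ValueError("u11 v does not appear to be nilpotent")
    return u22 + u21 @ v @ s @ u12

# The identity proof iota = pq* + qp* satisfies App(iota, v) = v.
iota = p @ q.T + q @ p.T
v = np.array([[0.0, 1.0], [0.0, 0.0]])
assert np.allclose(app(iota, v), v)
```

Here <math>\iota_{11} = 0</math>, so the sum stops at <math>k=0</math> and the loop exits after one step.<br />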
<br />
{{Theorem|<br />
If <math>u</math> and <math>v</math> are <math>p</math>-isometries such that <math>u_{11}v</math> is nilpotent, then <math>\mathrm{App}(u,v)</math> is also a <math>p</math>-isometry.<br />
}}<br />
<br />
{{Proof|<br />
Let us note <math>E_k = u_{21}v(u_{11}v)^ku_{12}</math>. Recall that <math>u_{22}</math> and <math>u_{12}</math> being external components of the <math>p</math>-isometry <math>u</math>, they have disjoint domains. Thus it is also the case of <math>u_{22}</math> and <math>E_k</math>. Similarly <math>u_{22}</math> and <math>E_k</math> have disjoint codomains because <math>u_{22}</math> and <math>u_{21}</math> have disjoint codomains.<br />
<br />
Let now <math>k</math> and <math>l</math> be two integers such that <math>k>l</math> and let us compute for example the intersection of the codomains of <math>E_k</math> and <math>E_l</math>:<br />
: <math><br />
E_kE^*_kE_lE^*_l = (u_{21}v(u_{11}v)^ku_{12})(u^*_{12}(v^*u^*_{11})^kv^*u^*_{21})(u_{21}v(u_{11}v)^lu_{12})(u^*_{12}(v^*u^*_{11})^lv^*u_{21}^*)<br />
</math><br />
As <math>k>l</math> we may write <math>(v^*u_{11}^*)^k = (v^*u^*_{11})^{k-l-1}v^*u^*_{11}(v^*u^*_{11})^l</math>. Let us note <math>E = u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}</math> so that <math>E_kE^*_kE_lE^*_l = u_{21}v(u_{11}v)^ku_{12}u^*_{12}(v^*u^*_{11})^{k-l-1}v^*Eu^*_{12}(v^*u^*_{11})^lv^*u_{21}^*</math>. We have:<br />
: <math>\begin{align}<br />
E &= u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}\\<br />
&= (u^*_{11}u_{11}u^*_{11})(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}\\<br />
&= u^*_{11}(u_{11}u^*_{11})\bigl((v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^l\bigr)u_{12}\\<br />
&= u^*_{11}\bigl((v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^l\bigr)(u_{11}u^*_{11})u_{12}\\<br />
&= u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{11}u^*_{11}u_{12}\\<br />
&= 0<br />
\end{align}</math><br />
because <math>u_{11}</math> and <math>u_{12}</math> have disjoint codomains, thus <math>u^*_{11}u_{12} = 0</math>. <br />
<br />
Similarly we can show that <math>E_k</math> and <math>E_l</math> have disjoint domains. Therefore we have proved that all terms of the sum <math>\mathrm{App}(u,v)</math> have disjoint domains and disjoint codomains. Consequently <math>\mathrm{App}(u,v)</math> is a <math>p</math>-isometry.<br />
}}<br />
<br />
{{Theorem|<br />
Let <math>A</math> and <math>B</math> be two types and <math>u</math> a <math>p</math>-isometry. Then the two following conditions are equivalent:<br />
# <math>u\in A\limp B</math>;<br />
# for any <math>v\in A</math> we have:<br />
#* <math>u_{11}v</math> is nilpotent and<br />
#* <math>\mathrm{App}(u, v)\in B</math>.<br />
}}<br />
<br />
{{Proof|<br />
Let <math>v</math> and <math>w</math> be two <math>p</math>-isometries. If we compute<br />
: <math>(u.(pvp^* + qwq^*))^n = \bigl((pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*)(pvp^* + qwq^*)\bigr)^n</math><br />
we get a finite sum of monomial operators of the form:<br />
# <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math><br />
# <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{12}w(u_{22}w)^{i_m}q^*</math>,<br />
# <math>q(u_{22}w)^{i_0}u_{21}v(u_{11}v)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math> or<br />
# <math>q(u_{22}w)^{i_0}u_{21}v(u_{11}v)^{i_1}\dots u_{12}w(u_{22}w)^{i_m}q^*</math>,<br />
for all tuples of (nonnegative) integers <math>(i_0,\dots, i_m)</math> such that <math>i_0+\cdots+i_m+m = n</math>.<br />
<br />
Each of these monomials is a <math>p</math>-isometry. Furthermore they have disjoint domains and disjoint codomains because their sum is the <math>p</math>-isometry <math>(u.(pvp^* + qwq^*))^n</math>. This entails that <math>(u.(pvp^* + qwq^*))^n = 0</math> iff all these monomials are null.<br />
<br />
Suppose <math>u_{11}v</math> is nilpotent and consider:<br />
: <math>\bigl(\mathrm{App}(u,v)w\bigr)^n = \biggl(\bigl(u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12}\bigr)w\biggr)^n</math>.<br />
Developing we get a finite sum of monomials of the form:<br />
: 5. <math>(u_{22}w)^{l_0}u_{21}v(u_{11}v)^{k_1}u_{12}w(u_{22}w)^{l_1}\dots u_{21}v(u_{11}v)^{k_m}u_{12}w(u_{22}w)^{l_m}</math><br />
for all tuples <math>(l_0, k_1, l_1,\dots, k_m, l_m)</math> such that <math>l_0+\cdots+l_m + m = n</math> and <math>k_i</math> is less than the degree of nilpotency of <math>u_{11}v</math> for all <math>i</math>.<br />
<br />
Again as these monomials are <math>p</math>-isometries and their sum is the <math>p</math>-isometry <math>(\mathrm{App}(u,v)w)^n</math>, they have pairwise disjoint domains and pairwise disjoint codomains. Note that each of these monomials is equal to <math>q^*Mq</math> where <math>M</math> is a monomial of type 4 above.<br />
<br />
As before we thus have that <math>\bigl(\mathrm{App}(u,v)w\bigr)^n = 0</math> iff all monomials of type 5 are null.<br />
<br />
Suppose now that <math>u\in A\limp B</math> and <math>v\in A</math>. Then, since <math>0\in B\orth</math> (<math>0</math> belongs to any type) <math>u.(pvp^*) = pu_{11}vp^* + qu_{21}vp^*</math> is nilpotent; as <math>(u.(pvp^*))^{n+1} = p(u_{11}v)^{n+1}p^* + qu_{21}v(u_{11}v)^{n}p^*</math> this entails that <math>u_{11}v</math> is nilpotent.<br />
<br />
Suppose further that <math>w\in B\orth</math>. Then <math>u.(pvp^*+qwq^*)</math> is nilpotent, thus there is an <math>N</math> such that <math>(u.(pvp^* + qwq^*))^n=0</math> for any <math>n\geq N</math>. This entails that all monomials of type 1 to 4 are null. Therefore all monomials appearing in the development of <math>(\mathrm{App}(u,v)w)^N</math> are null which proves that <math>\mathrm{App}(u,v)w</math> is nilpotent. Thus <math>\mathrm{App}(u,v)\in B</math>.<br />
<br />
Conversely suppose for any <math>v\in A</math> and <math>w\in B\orth</math>, <math>u_{11}v</math> and <math>\mathrm{App}(u,v)w</math> are nilpotent. Let <math>P</math> and <math>N</math> be their respective degrees of nilpotency and put <math>n=N(P+1)+N</math>. Then we claim that all monomials of type 1 to 4 appearing in the development of <math>(u.(pvp^*+qwq^*))^n</math> are null.<br />
<br />
Consider for example a monomial of type 1:<br />
: <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math><br />
with <math>i_0+\cdots+i_m + m = n</math>. Note that <math>m</math> must be even.<br />
<br />
If <math>i_{2k}\geq P</math> for some <math>0\leq k\leq m/2</math> then <math>(u_{11}v)^{i_{2k}}=0</math> thus our monomial is null. Otherwise if <math>i_{2k}<P</math> for all <math>k</math> we have:<br />
: <math>i_1+i_3+\cdots +i_{m-1} + m/2 = n - m/2 - (i_0+i_2+\cdots +i_m)</math><br />
thus:<br />
: <math>i_1+i_3+\cdots +i_{m-1} + m/2\geq n - m/2 - (1+m/2)P</math>.<br />
Now if <math>m/2\geq N</math> then <math>i_1+\cdots+i_{m-1}+m/2 \geq N</math>. Otherwise <math>1+m/2\leq N</math> thus<br />
: <math>i_1+i_3+\cdots +i_{m-1} + m/2\geq n - N - NP = N</math>.<br />
Since <math>N</math> is the degree of nilpotency of <math>\mathrm{App}(u,v)w</math> we have that the monomial:<br />
: <math>(u_{22}w)^{i_1}u_{21}v(u_{11}v)^{i_2}u_{12}w\dots(u_{11}v)^{i_{m-2}}u_{12}w(u_{22}w)^{i_{m-1}}</math><br />
is null, thus also the monomial of type 1 we started with.<br />
}}<br />
<br />
{{Corollary|<br />
If <math>A</math> and <math>B</math> are types then we have:<br />
: <math>A\limp B = \{u\in\mathcal{P} \text{ such that }\forall v\in A: u_{11}v\in\bot\text{ and } \mathrm{App}(u, v)\in B\}</math>.<br />
}}<br />
<br />
As an example if we compute the application of the interpretation of the identity <math>\iota</math> in type <math>A\limp A</math> to the operator <math>v\in A</math> then we have:<br />
: <math>\mathrm{App}(\iota, v) = \iota_{22} + \iota_{21}v\sum(\iota_{11}v)^k\iota_{12}</math>.<br />
Now recall that <math>\iota = pq^* + qp^*</math> so that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math> and we thus get:<br />
: <math>\mathrm{App}(\iota, v) = v</math><br />
as expected.<br />
<br />
=== The tensor rule ===<br />
<br />
Let now <math>A, A', B</math> and <math>B'</math> be types and consider two operators <math>u</math> and <math>u'</math> respectively in <math>A\limp B</math> and <math>A'\limp B'</math>. We define an operator denoted by <math>u\tens u'</math> by:<br />
: <math>\begin{align}<br />
u\tens u' &= ppp^*upp^*p^* + qpq^*upp^*p^* + ppp^*uqp^*q^* + qpq^*uqp^*q^*\\<br />
&+ pqp^*u'pq^*p^* + qqq^*u'pq^*p^* + pqp^*u'qq^*q^* + qqq^*u'qq^*q^*<br />
\end{align}</math><br />
<br />
Once again the notation is motivated by linear logic syntax and is contradictory with linear algebra practice since what we denote by <math>u\tens u'</math> actually is the internalization of the direct sum <math>u\oplus u'</math>.<br />
<br />
Indeed if we think of <math>u</math> and <math>u'</math> as the internalizations of the matrices:<br />
: <math><br />
\begin{pmatrix}u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}<br />
</math> and <math><br />
\begin{pmatrix}u'_{11} & u'_{12}\\<br />
u'_{21} & u'_{22}<br />
\end{pmatrix}</math><br />
then we may write:<br />
: <math>\begin{align}<br />
u\tens u' &= ppu_{11}p^*p^* + qpu_{21}p^*p^* + ppu_{12}p^*q^* + qpu_{22}p^*q^*\\<br />
&+ pqu'_{11}q^*p^* + qqu'_{21}q^*p^* + pqu'_{12}q^*q^* + qqu'_{22}q^*q^*<br />
\end{align}</math><br />
<br />
Thus the components of <math>u\tens u'</math> are given by:<br />
: <math>(u\tens u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math>.<br />
and we see that <math>u\tens u'</math> is actually the internalization of the matrix:<br />
: <math><br />
\begin{pmatrix}<br />
u_{11} & 0 & u_{12} & 0 \\<br />
0 & u'_{11} & 0 & u'_{12} \\<br />
u_{21} & 0 & u_{22} & 0 \\<br />
0 & u'_{21} & 0 & u'_{22} \\<br />
\end{pmatrix}<br />
</math><br />
<br />
We now show that if <math>u</math> and <math>u'</math> are in types <math>A\limp B</math> and <math>A'\limp B'</math>, then <math>u\tens u'</math> is in <math>A\tens A'\limp B\tens B'</math>. For this we consider <math>v</math> and <math>v'</math> respectively in <math>A</math> and <math>A'</math>, so that <math>pvp^* + qv'q^*</math> is in <math>A\tens A'</math>, and we show that <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)\in B\tens B'</math>.<br />
<br />
Since <math>u</math> and <math>u'</math> are in <math>A\limp B</math> and <math>A'\limp B'</math> we have that <math>\mathrm{App}(u, v)</math> and <math>\mathrm{App}(u', v')</math> are respectively in <math>B</math> and <math>B'</math>, thus:<br />
: <math>p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^* \in B\tens B'</math>.<br />
<br />
We know that both <math>u_{11}v</math> and <math>u'_{11}v'</math> are nilpotent. But we have:<br />
: <math>\begin{align}<br />
\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^n<br />
&= \bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^n\\<br />
&= (pu_{11}vp^* + qu'_{11}v'q^*)^n\\<br />
&= p(u_{11}v)^np^* + q(u'_{11}v')^nq^*<br />
\end{align}</math><br />
<br />
Therefore <math>(u\tens u')_{11}(pvp^* + qv'q^*)</math> is nilpotent. So we can compute <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)</math>:<br />
: <math>\begin{align}<br />
&\mathrm{App}(u\tens u', pvp^* + qv'q^*)\\<br />
&= (u\tens u')_{22} + (u\tens u')_{21}(pvp^* + qv'q^*)\sum\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^k(u\tens u')_{12}\\<br />
&= pu_{22}p^* + qu'_{22}q^* + (pu_{21}p^* + qu'_{21}q^*)(pvp^* + qv'q^*)\sum\bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^k(pu_{12}p^* + qu'_{12}q^*)\\<br />
&= p\bigl(u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}\bigr)p^* + q\bigl(u'_{22} + u'_{21}v'\sum(u'_{11}v')^ku'_{12}\bigr)q^*\\<br />
&= p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^*<br />
\end{align}</math><br />
thus lives in <math>B\tens B'</math>.<br />
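This computation can be replayed numerically. In a truncation the tensor needs two levels of the embeddings <math>p, q</math> (components live one dimension down from the tensor); the sketch below (helpers <code>embeddings</code>, <code>app</code>, <code>ext</code> are ours) takes <math>u = u' = \iota</math> and checks <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*) = p\mathrm{App}(u,v)p^* + q\mathrm{App}(u',v')q^*</math>:<br />

```python
import numpy as np

def embeddings(N):
    """Truncated p: e_n -> e_{2n} and q: e_n -> e_{2n+1}, as 2N x N matrices."""
    p, q = np.zeros((2 * N, N)), np.zeros((2 * N, N))
    for n in range(N):
        p[2 * n, n], q[2 * n + 1, n] = 1.0, 1.0
    return p, q

def app(u, v, p, q, max_iter=100):
    """App(u, v) w.r.t. the externalization along p, q (u11 v nilpotent)."""
    u11, u12, u21, u22 = (p.T @ u @ p, p.T @ u @ q, q.T @ u @ p, q.T @ u @ q)
    s, term = np.zeros_like(u11), np.eye(u11.shape[0])
    while not np.allclose(term, 0):
        s, term = s + term, term @ u11 @ v
        max_iter -= 1
        assert max_iter > 0, "u11 v does not appear to be nilpotent"
    return u22 + u21 @ v @ s @ u12

def ext(w, p, q):
    return [[p.T @ w @ p, p.T @ w @ q], [q.T @ w @ p, q.T @ w @ q]]

N = 2
p1, q1 = embeddings(N)      # level of the components
p2, q2 = embeddings(2 * N)  # one level up, where u tensor u' lives

u = u2 = p1 @ q1.T + q1 @ p1.T  # two copies of the identity proof iota

# (u tensor u')_ij = p u_ij p* + q u'_ij q*, then internalize along p2, q2.
T = [[p1 @ ext(u, p1, q1)[i][j] @ p1.T + q1 @ ext(u2, p1, q1)[i][j] @ q1.T
      for j in range(2)] for i in range(2)]
tens = (p2 @ T[0][0] @ p2.T + p2 @ T[0][1] @ q2.T
        + q2 @ T[1][0] @ p2.T + q2 @ T[1][1] @ q2.T)

v, v2 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
lhs = app(tens, p1 @ v @ p1.T + q1 @ v2 @ q1.T, p2, q2)
rhs = p1 @ app(u, v, p1, q1) @ p1.T + q1 @ app(u2, v2, p1, q1) @ q1.T
assert np.allclose(lhs, rhs)
```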
<br />
=== Other monoidal constructions ===<br />
<br />
==== Contraposition ====<br />
<br />
Let <math>A</math> and <math>B</math> be some types; we have:<br />
: <math>A\limp B = A\orth\limpinv B\orth</math><br />
<br />
Indeed, <math>u\in A\limp B</math> means that for any <math>v</math> and <math>w</math> in respectively <math>A</math> and <math>B\orth</math> we have <math>u.(pvp^* + qwq^*)\in\bot</math> which is exactly the definition of <math>A\orth\limpinv B\orth</math>.<br />
<br />
We will denote <math>u\orth</math> the operator:<br />
: <math>u\orth = pu_{22}p^* + pu_{21}q^* + qu_{12}p^* + qu_{11}q^*</math><br />
where <math>u_{ij}</math> is given by externalization. Therefore the externalization of <math>u\orth</math> is:<br />
: <math>(u\orth)_{ij} = u_{\bar i\,\bar j}</math> where <math>\bar .</math> is defined by <math>\bar1 = 2, \bar2 = 1</math>.<br />
From this we deduce that <math>u\orth\in B\orth\limp A\orth</math> and that <math>(u\orth)\orth = u</math>.<br />
<br />
==== Commutativity ====<br />
Let <math>\sigma</math> be the operator:<br />
: <math>\sigma = ppq^*q^* +pqp^*q^* + qpq^*p^* + qqp^*p^*</math>.<br />
One can check that <math>\sigma</math> is the internalization of the operator <math>S</math> on <math>H\oplus H\oplus H\oplus H</math> defined by: <math>S(x_1\oplus x_2\oplus x_3\oplus x_4) = x_4\oplus x_3\oplus x_2\oplus x_1</math>. In particular the components of <math>\sigma</math> are:<br />
: <math>\sigma_{11} = \sigma_{22} = 0</math>;<br />
: <math>\sigma_{12} = \sigma_{21} = pq^* + qp^*</math>.<br />
<br />
Let <math>A</math> and <math>B</math> be types and <math>u</math> and <math>v</math> be operators in <math>A</math> and <math>B</math>. Then <math>pup^* + qvq^*</math> is in <math>A\tens B</math> and as <math>\sigma_{11}.(pup^* + qvq^*) = 0</math> we may compute:<br />
: <math>\begin{align}<br />
\mathrm{App}(\sigma, pup^* + qvq^*) <br />
&= \sigma_{22} + \sigma_{21}(pup^* + qvq^*)\sum(\sigma_{11}(pup^* + qvq^*))^k\sigma_{12}\\<br />
&= (pq^* + qp^*)(pup^* + qvq^*)(pq^* + qp^*)\\<br />
&= pvp^* + quq^*<br />
\end{align}</math><br />
But <math>pvp^* + quq^*\in B\tens A</math>, thus we have shown that:<br />
: <math>\sigma\in (A\tens B) \limp (B\tens A)</math>.<br />
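The swap performed by <math>\sigma</math> is easy to see in the truncated model. Since <math>\sigma_{11} = 0</math> the execution formula reduces to <math>\sigma_{22} + \sigma_{21}(pup^* + qvq^*)\sigma_{12}</math>, and the sketch below (our own two-level truncation) checks that the result is <math>pvp^* + quq^*</math>:<br />

```python
import numpy as np

def embeddings(N):
    p, q = np.zeros((2 * N, N)), np.zeros((2 * N, N))
    for n in range(N):
        p[2 * n, n], q[2 * n + 1, n] = 1.0, 1.0
    return p, q

N = 2
p1, q1 = embeddings(N)      # inner level: the components of the tensors
p2, q2 = embeddings(2 * N)  # outer level: input/output of sigma

# sigma = ppq*q* + pqp*q* + qpq*p* + qqp*p* (truncated).
sigma = (p2 @ p1 @ q1.T @ q2.T + p2 @ q1 @ p1.T @ q2.T
         + q2 @ p1 @ q1.T @ p2.T + q2 @ q1 @ p1.T @ p2.T)

u, v = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
w = p1 @ u @ p1.T + q1 @ v @ q1.T  # pup* + qvq*, an element of A (x) B

# sigma11 = 0, so App(sigma, w) = sigma22 + sigma21 w sigma12.
assert np.allclose(p2.T @ sigma @ p2, 0)
swapped = q2.T @ sigma @ q2 + (q2.T @ sigma @ p2) @ w @ (p2.T @ sigma @ q2)
assert np.allclose(swapped, p1 @ v @ p1.T + q1 @ u @ q1.T)  # pvp* + quq*
```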
<br />
==== Distributivity ====<br />
We get distributivity by considering the operator:<br />
: <math>\delta = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math><br />
that is similarly shown to be in type <math>A\tens(B\tens C)\limp(A\tens B)\tens C</math> for any types <math>A</math>, <math>B</math> and <math>C</math>.<br />
<br />
<br />
==== Weak distributivity ====<br />
We can finally get weak distributivity thanks to the operators:<br />
: <math>\delta_1 = pppp^*q^* + ppqp^*q^*q^* + pqq^*q^*q^* + qpp^*p^*p^* + qqpq^*p^*p^* + qqqq^*p^*</math> and<br />
: <math>\delta_2 = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math>.<br />
<br />
Given three types <math>A</math>, <math>B</math> and <math>C</math> then one can show that:<br />
: <math>\delta_1</math> has type <math>((A\limp B)\tens C)\limp A\limp (B\tens C)</math> and<br />
: <math>\delta_2</math> has type <math>(A\tens(B\limp C))\limp (A\limp B)\limp C</math>.<br />
<br />
=== Execution formula, version 2: composition ===<br />
<br />
Let <math>A</math>, <math>B</math> and <math>C</math> be types and <math>u</math> and <math>v</math> be operators respectively in types <math>A\limp B</math> and <math>B\limp C</math>.<br />
<br />
As usual we will denote <math>u_{ij}</math> and <math>v_{ij}</math> the operators obtained by externalization of <math>u</math> and <math>v</math>, eg, <math>u_{11} = p^*up</math>, ...<br />
<br />
As <math>u</math> is in <math>A\limp B</math> we have that <math>\mathrm{App}(u, 0)=u_{22}\in B</math>; similarly as <math>v\in B\limp C</math>, thus <math>v\orth\in C\orth\limp B\orth</math>, we have <math>\mathrm{App}(v\orth, 0) = v_{11}\in B\orth</math>. Thus <math>u_{22}v_{11}</math> is nilpotent.<br />
<br />
We define the operator <math>\mathrm{Comp}(u, v)</math> by:<br />
: <math>\begin{align}<br />
\mathrm{Comp}(u, v) &= p(u_{11} + u_{12}\sum(v_{11}u_{22})^k\,v_{11}u_{21})p^*\\<br />
&+ p(u_{12}\sum(v_{11}u_{22})^k\,v_{12})q^*\\<br />
&+ q(v_{21}\sum(u_{22}v_{11})^k\,u_{21})p^*\\<br />
&+ q(v_{22} + v_{21}\sum(u_{22}v_{11})^k\,u_{22}v_{12})q^*<br />
\end{align}</math><br />
<br />
This is well defined since <math>u_{22}v_{11}</math> is nilpotent. As an example let us compute the composition of <math>u</math> and <math>\iota</math> in type <math>B\limp B</math>; recall that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math>, so we get:<br />
: <math><br />
\mathrm{Comp}(u, \iota) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^* = u<br />
</math><br />
A similar computation shows that <math>\mathrm{Comp}(\iota, v) = v</math> (we use <math>pp^* + qq^*  = 1</math> here).<br />
<br />
Coming back to the general case we claim that <math>\mathrm{Comp}(u, v)</math> is in <math>A\limp C</math>: let <math>a</math> be an operator in <math>A</math>. By computation we can check that:<br />
: <math>\mathrm{App}(\mathrm{Comp}(u, v), a) = \mathrm{App}(v, \mathrm{App}(u, a))</math>.<br />
Now since <math>u</math> is in <math>A\limp B</math>, <math>\mathrm{App}(u, a)</math> is in <math>B</math> and since <math>v</math> is in <math>B\limp C</math>, <math>\mathrm{App}(v, \mathrm{App}(u, a))</math> is in <math>C</math>.<br />
<br />
If we now consider a type <math>D</math> and an operator <math>w</math> in <math>C\limp D</math> then we have:<br />
: <math>\mathrm{Comp}(\mathrm{Comp}(u, v), w) = \mathrm{Comp}(u,<br />
\mathrm{Comp}(v, w))</math>.<br />
<br />
Putting together the results of this section we finally have:<br />
<br />
{{Theorem|<br />
Let GoI(H) be defined by:<br />
* objects are types, ''ie'' sets <math>A</math> of operators satisfying: <math>A\biorth = A</math>;<br />
* morphisms from <math>A</math> to <math>B</math> are operators in type <math>A\limp B</math>;<br />
* composition is given by the formula above.<br />
<br />
Then GoI(H) is a star-autonomous category.<br />
}}<br />
<br />
= The Geometry of Interaction as an abstract machine =<br />
<br />
= Notes and references =<br />
<br />
<references/></div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interactionGeometry of interaction2010-05-14T15:14:25Z<p>Laurent Regnier: /* Operators, partial isometries */ elementary props of partial isometries</p>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operator algebras: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math> and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'', which bears some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (the multiplicative and exponential fragment), which is enough to encode lambda-calculus. Since then Girard has proposed several improvements: first the extension to the additive connectives, known as ''Geometry of Interaction 3'', and more recently a complete reformulation using von Neumann algebras that makes it possible to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model, exhibiting its combinatorial nature through a theory of reduction of paths in proof-nets and showing the link with abstract machines; the execution formula appears as the composition of two automata interacting through a common interface. The execution formula was also soon understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories, following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentioned in the section on [[coherent semantics]] under the name ''symmetric reducibility'', and that was first put to use in [[phase semantics]]. First, set up a general space <math>P</math> called the ''proof space'' because this is where the interpretations of proofs will live. Make sure that <math>P</math> is a (not necessarily commutative) monoid. In the case of GoI, the proof space is a subset of the space of bounded operators on <math>\ell^2</math>.<br />
<br />
Second, define a particular subset of <math>P</math> that will be denoted by <math>\bot</math>; then derive a duality on <math>P</math>: for <math>u,v\in P</math>, <math>u</math> and <math>v</math> are dual<ref>In modern terms one says that <math>u</math> and <math>v</math> are ''polar''.</ref> iff <math>uv\in\bot</math>.<br />
<br />
For the GoI, two dualities have proved to work; we will consider the first one: nilpotency, ''ie'', <math>\bot</math> is the set of nilpotent operators in <math>P</math>. Let us spell this out: two operators <math>u</math> and <math>v</math> are dual if there is a nonnegative integer <math>n</math> such that <math>(uv)^n = 0</math>. This duality is symmetric: if <math>uv</math> is nilpotent then <math>vu</math> is also nilpotent.<br />
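Anticipating the partial-permutation representation of operators introduced below, nilpotency can be tested mechanically; a minimal Python sketch (ours, not in the article): partial permutations are dicts, the product is composition, and <math>uv</math> is nilpotent when iterated composition reaches the empty map.<br />

```python
# Partial permutations (introduced later in the article) encoded as dicts.
def compose(phi, psi):
    # (phi psi)(n) = phi(psi(n)), defined only when both steps are
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def nilpotency_degree(phi, cap=1000):
    # least n >= 1 with phi^n = 0; None if not reached within cap iterations
    power, n = dict(phi), 1
    while power:
        if n >= cap:
            return None
        power, n = compose(phi, power), n + 1
    return n

u = {0: 1, 1: 2}   # e0 -> e1, e1 -> e2
v = {2: 3}         # e2 -> e3
```

Here <code>compose(u, v)</code> is already empty while <code>compose(v, u)</code> still sends <math>e_1</math> to <math>e_3</math>, so the degrees of nilpotency may differ; but <math>uv</math> is nilpotent iff <math>vu</math> is, since <math>(uv)^n = 0</math> entails <math>(vu)^{n+1} = v(uv)^nu = 0</math>.<br />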
<br />
When <math>X</math> is a subset of <math>P</math> define <math>X\orth</math> as the set of elements of <math>P</math> that are dual to all elements of <math>X</math>:<br />
: <math>X\orth = \{u\in P, \forall v\in X, uv\in\bot\}</math>.<br />
<br />
This construction has a few properties that we will use without mention in the sequel. Given two subsets <math>X</math> and <math>Y</math> of <math>P</math> we have:<br />
* if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;<br />
* <math>X\subset X\biorth</math>;<br />
* <math>X\triorth = X\orth</math>.<br />
<br />
Last, define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for every operator <math>v\in T\orth</math> (that is, every <math>v</math> satisfying <math>u'v\in\bot</math> for all <math>u'\in T</math>) we have <math>uv\in\bot</math>.<br />
<br />
The real work<ref>The difficulty is to find the right duality that will make logical operations interpretable. General conditions that allow one to achieve this have been formulated by Hyland and Schalk thanks to their theory of ''double gluing''.</ref> is now to interpret logical operations, that is to associate a type to each formula and an object to each proof, and to show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
=== Operators, partial isometries ===<br />
<br />
We will denote by <math>H</math> the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> their ''scalar product'' is:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''orthogonal'' if their scalar product is null. We will say that two subspaces are ''disjoint'' when any two vectors taken in each subspace are orthogonal. Note that this notion differs from the set-theoretic one; in particular two disjoint subspaces always have exactly one vector in common: <math>0</math>.<br />
<br />
The ''norm'' of a vector is the square root of the scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical ''Hilbert basis'' of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kronecker symbol: <math>\delta_{kn}=1</math> if <math>k=n</math>, <math>0</math> otherwise. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
An ''operator'' on <math>H</math> is a ''continuous'' linear map from <math>H</math> to <math>H</math>.<ref>Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.</ref> The set of (bounded) operators is denoted by <math>\mathcal{B}(H)</math>.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are annihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel, ''ie'', the maximal subspace disjoint from the kernel:<br />
<br />
* <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
* <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
* <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>. Adjointness is well behaved w.r.t. composition of operators:<br />
: <math>(uv)^* = v^*u^*</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>0</math> (the projector on the null subspace) or <math>1</math>, that is an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is self-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^*u = u</math>; this condition entails that we also have <math>u^*uu^* = u^*</math>. As a consequence <math>u^*u</math> and <math>uu^*</math> are both projectors, called respectively the ''initial'' and the ''final'' projector of <math>u</math> because their (co)domains are respectively the domain and the codomain of <math>u</math>:<br />
* <math>\mathrm{Dom}(u^*u) = \mathrm{Codom}(u^*u) = \mathrm{Dom}(u)</math>;<br />
* <math>\mathrm{Dom}(uu^*) = \mathrm{Codom}(uu^*) = \mathrm{Codom}(u)</math>.<br />
<br />
The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math> that is if <math>u^* u = 1</math> we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries then we have:<br />
* <math>uv^* = 0</math> iff <math>u^*uv^*v = 0</math> iff the domains of <math>u</math> and <math>v</math> are disjoint;<br />
* <math>u^*v = 0</math> iff <math>uu^*vv^* = 0</math> iff the codomains of <math>u</math> and <math>v</math> are disjoint;<br />
* <math>uu^* + vv^* = 1</math> iff the codomains of <math>u</math> and <math>v</math> are disjoint and their direct sum is <math>H</math>.<br />
<br />
=== Partial permutations ===<br />
<br />
We will now define our proof space which turns out to be the set of partial isometries acting as permutations on the canonical basis <math>(e_n)_{n\in\mathbb{N}}</math>.<br />
<br />
More precisely a ''partial permutation'' <math>\varphi</math> on <math>\mathbb{N}</math> is a one-to-one map defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
* <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
* if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
* the codomain of <math>\varphi\circ\psi</math> is the image of the domain: <math>C_{\varphi\circ\psi} = \{\varphi(\psi(n)), n\in D_{\varphi\circ\psi}\}</math>.<br />
<br />
Partial permutations are well known to form a structure of ''inverse monoid'' that we detail now.<br />
<br />
Given a subset <math>D</math> of <math>\mathbb{N}</math>, the ''partial identity'' on <math>D</math> is the partial permutation <math>\varphi</math> defined by:<br />
* <math>D_\varphi = D</math>;<br />
* <math>\varphi(n) = n</math> for any <math>n\in D_\varphi</math>.<br />
Thus the codomain of <math>\varphi</math> is <math>D</math>.<br />
<br />
The partial identity on <math>D</math> will be denoted by <math>1_D</math>. Partial identities are idempotent for composition.<br />
<br />
Among partial identities one finds the identity on the empty subset, that is the empty map, which we will denote by <math>0</math>, and the identity on <math>\mathbb{N}</math>, which we will denote by <math>1</math>. This latter permutation is the neutral element for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and which satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
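The inverse monoid structure can be checked directly on a dict encoding of partial permutations; a small Python sketch (the helper names are ours):<br />

```python
def compose(phi, psi):
    # (phi o psi)(n) = phi(psi(n)), defined only when both steps are
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    # well defined because a partial permutation is one-to-one
    return {m: n for n, m in phi.items()}

def partial_identity(D):
    # the partial identity 1_D
    return {n: n for n in D}

phi = {0: 3, 1: 5, 4: 2}   # domain {0, 1, 4}, codomain {2, 3, 5}
```

Composing <code>phi</code> with its inverse yields the partial identities on its domain and codomain, and partial identities are idempotent, as stated above.<br />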
<br />
=== The proof space ===<br />
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other terms if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined so that the definition of <math>u_\varphi</math> reads:<br />
: <math>u_\varphi(e_n) = e_{\varphi(n)}</math>.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. In particular if <math>\varphi</math> is <math>1_D</math> then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.<br />
<br />
{{Definition|<br />
We call ''<math>p</math>-isometry'' a partial isometry of the form <math>u_\varphi</math> where <math>\varphi</math> is a partial permutation on <math>\mathbb{N}</math>. The ''proof space'' <math>\mathcal{P}</math> is the set of all <math>p</math>-isometries.<br />
}}<br />
<br />
{{Proposition|<br />
Let <math>\varphi</math> and <math>\psi</math> be two partial permutations. We have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
The adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the initial projector of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>.<br />
<br />
and the final projector of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u^*_\varphi = u_{1_{C_\varphi}}</math>.<br />
<br />
If <math>p</math> is a projector in <math>\mathcal{P}</math> then there is a partial identity <math>1_D</math> such that <math>p= u_{1_D}</math>.<br />
<br />
Projectors commute, in particular we have:<br />
: <math>u_\varphi u_\varphi^*u_\psi u_\psi^* = u_\psi u_\psi^*u_\varphi u_\varphi^*</math>.<br />
}}<br />
<br />
Note that this entails all the other commutations of projectors: <math>u^*_\varphi u_\varphi u_\psi u^*_\psi = u_\psi u^*_\psi u^*_\varphi u_\varphi</math> and <math>u^*_\varphi u_\varphi u^*_\psi u_\psi = u^*_\psi u_\psi u^*_\varphi u_\varphi</math>.<br />
<br />
In particular note that <math>0</math> is a <math>p</math>-isometry. The set <math>\mathcal{P}</math> is a submonoid of <math>\mathcal{B}(H)</math> but it is not a subalgebra.<ref><math>\mathcal{P}</math> is the normalizing groupoid of the maximal commutative subalgebra of <math>\mathcal{B}(H)</math> consisting of all operators ''diagonalizable'' in the canonical basis.</ref> In general given <math>u,v\in\mathcal{P}</math> we don't necessarily have <math>u+v\in\mathcal{P}</math>. However we have:<br />
<br />
{{Proposition|<br />
Let <math>u, v\in\mathcal{P}</math>. Then <math>u+v\in\mathcal{P}</math> iff <math>u</math> and <math>v</math> have disjoint domains and disjoint codomains, that is:<br />
: <math>u+v\in\mathcal{P}</math> iff <math>uu^*vv^* = u^*uv^*v = 0</math>.<br />
}}<br />
<br />
{{Proof|<br />
Suppose for contradiction that <math>e_n</math> is in the domains of <math>u</math> and <math>v</math>. There are integers <math>p</math> and <math>q</math> such that <math>u(e_n) = e_p</math> and <math>v(e_n) = e_q</math>, thus <math>(u+v)(e_n) = e_p + e_q</math> which is not a basis vector; therefore <math>u+v</math> is not a <math>p</math>-isometry. The argument for codomains is symmetric (consider <math>u^* + v^* = (u+v)^*</math>), and the converse is immediate.<br />
}}<br />
<br />
As a corollary note that if <math>u+v=0</math> then <math>u=v=0</math>.<br />
<br />
=== From operators to matrices: internalization/externalization ===<br />
<br />
It will be convenient to view operators on <math>H</math> as acting on <math>H\oplus H</math>, and conversely. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\mapsto p(x)+q(y)</math> where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is<br />
satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are<br />
disjoint, thus we have <math>p^*q = q^*p = 0</math>. As the sum of their<br />
codomains is the full space <math>H</math> we also have <math>pp^* + qq^* = 1</math>.<br />
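These identities can be verified on the partial-permutation representation, truncated to a finite window (a sketch of ours; on the infinite space <math>p</math> and <math>q</math> have full domain <math>\mathbb{N}</math>):<br />

```python
def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    return {m: n for n, m in phi.items()}

N = 8
p = {n: 2 * n for n in range(N)}        # p(e_n) = e_{2n}
q = {n: 2 * n + 1 for n in range(N)}    # q(e_n) = e_{2n+1}
one = {n: n for n in range(N)}          # identity on the (truncated) domain

# p*p = q*q = 1: full domain
assert compose(inverse(p), p) == one
assert compose(inverse(q), q) == one
# p*q = q*p = 0: disjoint codomains
assert compose(inverse(p), q) == {}
assert compose(inverse(q), p) == {}
# pp* + qq* = 1: the two codomains fill the whole (truncated) space
assert {**compose(p, inverse(p)), **compose(q, inverse(q))} == {n: n for n in range(2 * N)}
```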
<br />
Note that we have chosen <math>p</math> and <math>q</math> in <math>\mathcal{P}</math>. However the choice is arbitrary: any two <math>p</math>-isometries with full domain and disjoint codomains would do the job.<br />
<br />
Given an operator <math>u</math> on <math>H</math> we may ''externalize'' it obtaining an operator <math>U</math> on <math>H\oplus H</math> defined by the matrix:<br />
: <math>U = \begin{pmatrix}<br />
u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}</math><br />
where the <math>u_{ij}</math>'s are given by:<br />
: <math>u_{11} = p^*up</math>;<br />
: <math>u_{12} = p^*uq</math>;<br />
: <math>u_{21} = q^*up</math>;<br />
: <math>u_{22} = q^*uq</math>.<br />
<br />
The <math>u_{ij}</math>'s are called the ''external components'' of <math>u</math>. The externalization is functorial in the sense that if <math>v</math> is another operator externalized as:<br />
: <math>V = \begin{pmatrix}<br />
v_{11} & v_{12}\\<br />
v_{21} & v_{22}<br />
\end{pmatrix} <br />
= \begin{pmatrix}<br />
p^*vp & p^*vq\\<br />
q^*vp & q^*vq<br />
\end{pmatrix}<br />
</math><br />
then the externalization of <math>uv</math> is <math>UV</math>.<br />
<br />
As <math>pp^* + qq^* = 1</math> we have:<br />
: <math>u = (pp^*+qq^*)u(pp^*+qq^*) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math><br />
which entails that externalization is reversible, its converse being called ''internalization''.<br />
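Externalization and internalization are easy to test on the partial-permutation encoding; the following sketch (ours) checks the round trip on a sample permutation of a finite window:<br />

```python
def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    return {m: n for n, m in phi.items()}

N = 8
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}
pi, qi = inverse(p), inverse(q)

def ext(u):
    # u11 = p*up, u12 = p*uq, u21 = q*up, u22 = q*uq
    return {(1, 1): compose(pi, compose(u, p)), (1, 2): compose(pi, compose(u, q)),
            (2, 1): compose(qi, compose(u, p)), (2, 2): compose(qi, compose(u, q))}

def internalize(c):
    # u = p u11 p* + p u12 q* + q u21 p* + q u22 q*
    out = {}
    for left, key, right in [(p, (1, 1), pi), (p, (1, 2), qi),
                             (q, (2, 1), pi), (q, (2, 2), qi)]:
        out.update(compose(left, compose(c[key], right)))
    return out

u = {n: (3 * n + 1) % 16 for n in range(16)}   # a permutation of the window
```

Since <math>u</math> maps the window into itself, internalizing its four external components reconstructs it exactly: <code>internalize(ext(u)) == u</code>.<br />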
<br />
If we suppose that <math>u</math> is a <math>p</math>-isometry then so are the components <math>u_{ij}</math>'s. Thus the formula above entails that the four terms of the sum have pairwise disjoint domains and pairwise disjoint codomains from which we deduce:<br />
<br />
{{Proposition|<br />
If <math>u</math> is a <math>p</math>-isometry and <math>u_{ij}</math> are its external components then:<br />
* <math>u_{1j}</math> and <math>u_{2j}</math> have disjoint domains, that is <math>u_{1j}^*u_{1j}u_{2j}^*u_{2j} = 0</math> for <math>j=1,2</math>;<br />
* <math>u_{i1}</math> and <math>u_{i2}</math> have disjoint codomains, that is <math>u_{i1}u_{i1}^*u_{i2}u_{i2}^* = 0</math> for <math>i=1,2</math>.<br />
}}<br />
<br />
As an example of computation in <math>\mathcal{P}</math> let us check that the product of the final projectors of <math>pu_{11}p^*</math> and <math>pu_{12}q^*</math> is null:<br />
: <math>\begin{align}<br />
(pu_{11}p^*)(pu^*_{11}p^*)(pu_{12}q^*)(qu_{12}^*p^*)<br />
&= pu_{11}u_{11}^*u_{12}u_{12}^*p^*\\<br />
&= pp^*upp^*u^*pp^*uqq^*u^*pp^*\\<br />
&= pp^*u(pp^*)(u^*pp^*u)qq^*u^*pp^*\\<br />
&= pp^*u(u^*pp^*u)(pp^*)qq^*u^*pp^*\\<br />
&= pp^*uu^*pp^*u(pp^*)(qq^*)u^*pp^*\\<br />
&= 0<br />
\end{align}</math><br />
where we used the fact that all projectors in <math>\mathcal{P}</math> commute, which is in particular the case of <math>pp^*</math> and <math>u^*pp^*u</math>.<br />
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are <math>p</math>-isometries we say they are dual when <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators. A ''type'' is a subset of <math>\mathcal{P}</math> that is equal to its bidual. In particular <math>X\orth</math> is a type for any <math>X\subset\mathcal{P}</math>. We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
<br />
=== The tensor and the linear application ===<br />
<br />
If <math>u</math> and <math>v</math> are two <math>p</math>-isometries, summing them does not in general produce a <math>p</math>-isometry. However as <math>pup^*</math> and <math>qvq^*</math> have disjoint domains and disjoint codomains, it is true that <math>pup^* + qvq^*</math> is a <math>p</math>-isometry. Given two types <math>A</math> and <math>B</math>, we thus define their ''tensor'' by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by bidual to make sure that we obtain a type.<br />
<br />
From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
{{Remark|<br />
This so-called tensor resembles a sum rather than a product. We will stick to this terminology though because it defines the interpretation of the tensor connective of linear logic.<br />
}}<br />
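Concretely, on partial permutations the generator <math>pup^* + qvq^*</math> interleaves <math>u</math> on the even indices and <math>v</math> on the odd ones; a quick Python sketch (the name <code>tensor</code> is ours):<br />

```python
def tensor(u, v):
    # pup* + qvq*: u acts on the even indices, v on the odd indices
    return {**{2 * n: 2 * m for n, m in u.items()},
            **{2 * n + 1: 2 * m + 1 for n, m in v.items()}}

u = {0: 1, 1: 0}          # a swap
v = {0: 0, 1: 2, 2: 1}
t = tensor(u, v)
```

The two halves have disjoint domains and disjoint codomains, so their sum is again a <math>p</math>-isometry: the resulting dict is injective.<br />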
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we get:<br />
: <math>A\limp B = \{u\in\mathcal{P}\text{ s.t. } \forall v\in A, \forall w\in B\orth,\, u.(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
=== The identity ===<br />
<br />
Given a type <math>A</math> we are to find an operator <math>\iota</math> in type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. This is the case since <math>u</math> is in <math>A</math> and <math>v</math> in <math>A\orth</math>.<br />
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
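On the partial-permutation encoding, <math>\iota = pq^* + qp^*</math> is simply the map exchanging each even index with the following odd one; a sketch of ours:<br />

```python
def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    return {m: n for n, m in phi.items()}

N = 8
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}

# iota = pq* + qp*: sends e_{2n+1} to e_{2n} and e_{2n} to e_{2n+1}
iota = {**compose(p, inverse(q)), **compose(q, inverse(p))}
```

Its external components are those of the swap matrix: <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math>.<br />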
<br />
=== The execution formula, version 1: application ===<br />
<br />
{{Definition|<br />
Let <math>u</math> and <math>v</math> be two operators; as above denote by <math>u_{ij}</math> the external components of <math>u</math>. If <math>u_{11}v</math> is nilpotent we define the ''application of <math>u</math> to <math>v</math>'' by:<br />
: <math>\mathrm{App}(u,v) = u_{22} + u_{21}v\sum_k(u_{11}v)^ku_{12}</math>.<br />
}}<br />
<br />
Note that the hypothesis that <math>u_{11}v</math> is nilpotent entails that the sum <math>\sum_k(u_{11}v)^k</math> is actually finite. It would be enough to assume that this sum converges. For simplicity we stick to the nilpotency condition, but we should mention that weak nilpotency would do as well.<br />
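On partial permutations, the sum in the definition becomes a finite union of dicts with pairwise disjoint domains; here is a Python sketch of the application formula (ours; the loop terminates precisely because <math>u_{11}v</math> is nilpotent):<br />

```python
def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    return {m: n for n, m in phi.items()}

N = 8
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}
pi, qi = inverse(p), inverse(q)

def ext(u):
    # u11 = p*up, u12 = p*uq, u21 = q*up, u22 = q*uq
    return {(1, 1): compose(pi, compose(u, p)), (1, 2): compose(pi, compose(u, q)),
            (2, 1): compose(qi, compose(u, p)), (2, 2): compose(qi, compose(u, q))}

def app(u, v):
    # App(u, v) = u22 + u21 v sum_k (u11 v)^k u12
    c = ext(u)
    out, t = dict(c[(2, 2)]), c[(1, 2)]   # t tracks (u11 v)^k u12
    while t:                               # finitely many steps: u11 v is nilpotent
        out.update(compose(c[(2, 1)], compose(v, t)))
        t = compose(c[(1, 1)], compose(v, t))
    return out

u = {n: (3 * n + 1) % 16 for n in range(16)}   # a sample permutation of the window
```

For instance <math>\mathrm{App}(u, 0) = u_{22}</math>: with <math>v = 0</math> every term of the sum vanishes and only <math>u_{22}</math> remains.<br />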
<br />
{{Theorem|<br />
If <math>u</math> and <math>v</math> are <math>p</math>-isometries such that <math>u_{11}v</math> is nilpotent, then <math>\mathrm{App}(u,v)</math> is also a <math>p</math>-isometry.<br />
}}<br />
<br />
{{Proof|<br />
Let us write <math>E_k = u_{21}v(u_{11}v)^ku_{12}</math>. Recall that <math>u_{22}</math> and <math>u_{12}</math>, being external components of the <math>p</math>-isometry <math>u</math>, have disjoint domains. Thus it is also the case for <math>u_{22}</math> and <math>E_k</math>. Similarly <math>u_{22}</math> and <math>E_k</math> have disjoint codomains because <math>u_{22}</math> and <math>u_{21}</math> have disjoint codomains.<br />
<br />
Let now <math>k</math> and <math>l</math> be two integers such that <math>k>l</math> and let us compute for example the intersection of the codomains of <math>E_k</math> and <math>E_l</math>:<br />
: <math><br />
E_kE^*_kE_lE^*_l = (u_{21}v(u_{11}v)^ku_{12})(u^*_{12}(v^*u^*_{11})^kv^*u^*_{21})(u_{21}v(u_{11}v)^lu_{12})(u^*_{12}(v^*u^*_{11})^lv^*u_{21}^*)<br />
</math><br />
As <math>k>l</math> we may write <math>(v^*u_{11}^*)^l = (v^*u^*_{11})^{k-l-1}v^*u^*_{11}(v^*u^*_{11})^l</math>. Let us note <math>E = u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}</math> so that <math>E_kE^*_kE_lE^*_l = u_{21}v(u_{11}v)^ku_{12}u^*_{12}(v^*u^*_{11})^{k-l-1}v^*Eu^*_{12}(v^*u^*_{11})^lv^*u_{21}^*</math>. We have:<br />
: <math>\begin{align}<br />
E &= u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}\\<br />
&= (u^*_{11}u_{11}u^*_{11})(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}\\<br />
&= u^*_{11}(u_{11}u^*_{11})\bigl((v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^l\bigr)u_{12}\\<br />
&= u^*_{11}\bigl((v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^l\bigr)(u_{11}u^*_{11})u_{12}\\<br />
&= u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{11}u^*_{11}u_{12}\\<br />
&= 0<br />
\end{align}</math><br />
because <math>u_{11}</math> and <math>u_{12}</math> have disjoint codomains, thus <math>u^*_{11}u_{12} = 0</math>. <br />
<br />
Similarly we can show that <math>E_k</math> and <math>E_l</math> have disjoint domains. Therefore we have proved that all terms of the sum <math>\mathrm{App}(u,v)</math> have disjoint domains and disjoint codomains. Consequently <math>\mathrm{App}(u,v)</math> is a <math>p</math>-isometry.<br />
}}<br />
<br />
{{Theorem|<br />
Let <math>A</math> and <math>B</math> be two types and <math>u</math> a <math>p</math>-isometry. Then the two following conditions are equivalent:<br />
# <math>u\in A\limp B</math>;<br />
# for any <math>v\in A</math> we have:<br />
#* <math>u_{11}v</math> is nilpotent and<br />
#* <math>\mathrm{App}(u, v)\in B</math>.<br />
}}<br />
<br />
{{Proof|<br />
Let <math>v</math> and <math>w</math> be two <math>p</math>-isometries. If we compute<br />
: <math>(u.(pvp^* + qwq^*))^n = \bigl((pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*)(pvp^* + qwq^*)\bigr)^n</math><br />
we get a finite sum of monomial operators of the form:<br />
# <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math><br />
# <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{12}w(u_{22}w)^{i_m}q^*</math>,<br />
# <math>q(u_{22}w)^{i_0}u_{21}v(u_{11}v)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math> or<br />
# <math>q(u_{22}w)^{i_0}u_{21}v(u_{11}v)^{i_1}\dots u_{12}w(u_{22}w)^{i_m}q^*</math>,<br />
for all tuples of (nonnegative) integers <math>(i_0,\dots, i_m)</math> such that <math>i_0+\cdots+i_m+m = n</math>.<br />
<br />
Each of these monomials is a <math>p</math>-isometry. Furthermore they have disjoint domains and disjoint codomains because their sum is the <math>p</math>-isometry <math>(u.(pvp^* + qwq^*))^n</math>. This entails that <math>(u.(pvp^* + qwq^*))^n = 0</math> iff all these monomials are null.<br />
<br />
Suppose <math>u_{11}v</math> is nilpotent and consider:<br />
: <math>\bigl(\mathrm{App}(u,v)w\bigr)^n = \biggl(\bigl(u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12}\bigr)w\biggr)^n</math>.<br />
Developing we get a finite sum of monomials of the form:<br />
: 5. <math>(u_{22}w)^{l_0}u_{21}v(u_{11}v)^{k_1}u_{12}w(u_{22}w)^{l_1}\dots u_{21}v(u_{11}v)^{k_m}u_{12}w(u_{22}w)^{l_m}</math><br />
for all tuples <math>(l_0, k_1, l_1,\dots, k_m, l_m)</math> such that <math>l_0+\cdots+l_m + m = n</math> and <math>k_i</math> is less than the degree of nilpotency of <math>u_{11}v</math> for all <math>i</math>.<br />
<br />
Again as these monomials are <math>p</math>-isometries and their sum is the <math>p</math>-isometry <math>(\mathrm{App}(u,v)w)^n</math>, they have pairwise disjoint domains and pairwise disjoint codomains. Note that each of these monomials is equal to <math>q^*Mq</math> where <math>M</math> is a monomial of type 4 above.<br />
<br />
As before we thus have that <math>\bigl(\mathrm{App}(u,v)w\bigr)^n = 0</math> iff all monomials of type 5 are null.<br />
<br />
Suppose now that <math>u\in A\limp B</math> and <math>v\in A</math>. Then, since <math>0\in B\orth</math> (<math>0</math> belongs to any type), <math>u.(pvp^*)</math> is nilpotent. As <math>(u.(pvp^*))^n = p(u_{11}v)^np^* + qu_{21}v(u_{11}v)^{n-1}p^*</math>, this entails that <math>u_{11}v</math> is nilpotent.<br />
<br />
Suppose further that <math>w\in B\orth</math>. Then <math>u.(pvp^*+qwq^*)</math> is nilpotent, thus there is an <math>N</math> such that <math>(u.(pvp^* + qwq^*))^n=0</math> for any <math>n\geq N</math>. This entails that all monomials of type 1 to 4 are null. Therefore all monomials appearing in the development of <math>(\mathrm{App}(u,v)w)^N</math> are null, which proves that <math>\mathrm{App}(u,v)w</math> is nilpotent. Thus <math>\mathrm{App}(u,v)\in B</math>.<br />
<br />
Conversely suppose that for any <math>v\in A</math> and <math>w\in B\orth</math>, the operators <math>u_{11}v</math> and <math>\mathrm{App}(u,v)w</math> are nilpotent. Let <math>P</math> and <math>N</math> be their respective degrees of nilpotency and put <math>n=N(P+1)+N</math>. Then we claim that all monomials of type 1 to 4 appearing in the development of <math>(u.(pvp^*+qwq^*))^n</math> are null.<br />
<br />
Consider for example a monomial of type 1:<br />
: <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math><br />
with <math>i_0+\cdots+i_m + m = n</math>. Note that <math>m</math> must be even.<br />
<br />
If <math>i_{2k}\geq P</math> for some <math>0\leq k\leq m/2</math> then <math>(u_{11}v)^{i_{2k}}=0</math> thus our monomial is null. Otherwise if <math>i_{2k}<P</math> for all <math>k</math> we have:<br />
: <math>i_1+i_3+\cdots +i_{m-1} + m/2 = n - m/2 - (i_0+i_2+\cdots +i_m)</math><br />
thus:<br />
: <math>i_1+i_3+\cdots +i_{m-1} + m/2\geq n - m/2 - (1+m/2)P</math>.<br />
Now if <math>m/2\geq N</math> then <math>i_1+\cdots+i_{m-1}+m/2 \geq N</math>. Otherwise <math>1+m/2\leq N</math> thus<br />
: <math>i_1+i_3+\cdots +i_{m-1} + m/2\geq n - N - NP = N</math>.<br />
Since <math>N</math> is the degree of nilpotency of <math>\mathrm{App}(u,v)w</math> we have that the monomial:<br />
: <math>(u_{22}w)^{i_1}u_{21}v(u_{11}v)^{i_2}u_{12}w\dots(u_{11}v)^{i_{m-2}}u_{12}w(u_{22}w)^{i_{m-1}}</math><br />
is null, thus also the monomial of type 1 we started with.<br />
}}<br />
<br />
{{Corollary|<br />
If <math>A</math> and <math>B</math> are types then we have:<br />
: <math>A\limp B = \{u\in\mathcal{P} \text{ such that }\forall v\in A: u_{11}v\in\bot\text{ and } \mathrm{App}(u, v)\in B\}</math>.<br />
}}<br />
<br />
As an example if we compute the application of the interpretation of the identity <math>\iota</math> in type <math>A\limp A</math> to the operator <math>v\in A</math> then we have:<br />
: <math>\mathrm{App}(\iota, v) = \iota_{22} + \iota_{21}v\sum(\iota_{11}v)^k\iota_{12}</math>.<br />
Now recall that <math>\iota = pq^* + qp^*</math> so that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math> and we thus get:<br />
: <math>\mathrm{App}(\iota, v) = v</math><br />
as expected.<br />
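The computation above can be machine-checked. The sketch below (Python; the dict encoding, the truncation <math>N</math> and the helper names are illustrative conventions of ours, not from the article) represents a <math>p</math>-isometry as a partial permutation of basis indices, a dict <math>n\mapsto\varphi(n)</math>:<br />

```python
def compose(f, g):  # operator product f.g : n -> f(g(n)) where defined
    return {n: f[g[n]] for n in g if g[n] in f}

def inv(f):  # the adjoint corresponds to the inverse partial permutation
    return {v: k for k, v in f.items()}

def add(f, g):  # sum, defined when domains and codomains are disjoint
    assert not (f.keys() & g.keys()) and not (set(f.values()) & set(g.values()))
    return {**f, **g}

N = 64                                    # finite truncation of the basis (e_n)
p = {n: 2 * n for n in range(N)}          # p(e_n) = e_{2n}
q = {n: 2 * n + 1 for n in range(N)}      # q(e_n) = e_{2n+1}

def ext(u):  # external components: u11 = p*up, u12 = p*uq, u21 = q*up, u22 = q*uq
    return {(i, j): compose(inv(a), compose(u, b))
            for i, a in ((1, p), (2, q)) for j, b in ((1, p), (2, q))}

def app(u, v, bound=50):
    # App(u,v) = u22 + u21.v.sum_k (u11.v)^k.u12, defined when u11.v is nilpotent
    c = ext(u)
    out, term = dict(c[2, 2]), c[1, 2]    # term = (u11.v)^k.u12, starting at k = 0
    for _ in range(bound):
        if not term:
            return out
        out = add(out, compose(c[2, 1], compose(v, term)))
        term = compose(c[1, 1], compose(v, term))
    raise ValueError("u11.v is not nilpotent")

iota = add(compose(p, inv(q)), compose(q, inv(p)))   # iota = pq* + qp*
v = {0: 3, 3: 5}                                     # a p-isometry with small support
assert app(iota, v) == v                             # App(iota, v) = v
```

The assertion reproduces <math>\mathrm{App}(\iota, v) = v</math>: since <math>\iota_{11} = 0</math>, only the <math>k=0</math> term of the series contributes.<br />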
<br />
=== The tensor rule ===<br />
<br />
Let now <math>A, A', B</math> and <math>B'</math> be types and consider two operators <math>u</math> and <math>u'</math> respectively in <math>A\limp B</math> and <math>A'\limp B'</math>. We define an operator denoted by <math>u\tens u'</math> by:<br />
: <math>\begin{align}<br />
u\tens u' &= ppp^*upp^*p^* + qpq^*upp^*p^* + ppp^*uqp^*q^* + qpq^*uqp^*q^*\\<br />
&+ pqp^*u'pq^*p^* + qqq^*u'pq^*p^* + pqp^*u'qq^*q^* + qqq^*u'qq^*q^*<br />
\end{align}</math><br />
<br />
Once again the notation is motivated by linear logic syntax and conflicts with linear algebra practice, since what we denote by <math>u\tens u'</math> is actually the internalization of the direct sum <math>u\oplus u'</math>.<br />
<br />
Indeed if we think of <math>u</math> and <math>u'</math> as the internalizations of the matrices:<br />
: <math><br />
\begin{pmatrix}u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}<br />
</math> and <math><br />
\begin{pmatrix}u'_{11} & u'_{12}\\<br />
u'_{21} & u'_{22}<br />
\end{pmatrix}</math><br />
then we may write:<br />
: <math>\begin{align}<br />
u\tens u' &= ppu_{11}p^*p^* + qpu_{21}p^*p^* + ppu_{12}p^*q^* + qpu_{22}p^*q^*\\<br />
&+ pqu'_{11}q^*p^* + qqu'_{21}q^*p^* + pqu'_{12}q^*q^* + qqu'_{22}q^*q^*<br />
\end{align}</math><br />
<br />
Thus the components of <math>u\tens u'</math> are given by:<br />
: <math>(u\tens u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math><br />
and we see that <math>u\tens u'</math> is actually the internalization of the matrix:<br />
: <math><br />
\begin{pmatrix}<br />
u_{11} & 0 & u_{12} & 0 \\<br />
0 & u'_{11} & 0 & u'_{12} \\<br />
u_{21} & 0 & u_{22} & 0 \\<br />
0 & u'_{21} & 0 & u'_{22} \\<br />
\end{pmatrix}<br />
</math><br />
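As a sanity check, one can compute <math>u\tens u'</math> both from the eight-term definition and from the component formula, and compare the results. The Python sketch below does this for sample operators, encoding <math>p</math>-isometries as dicts <math>n\mapsto\varphi(n)</math> on a finite truncation of the basis (the encoding and helper names are ours):<br />

```python
def compose(f, g):  # operator product f.g
    return {n: f[g[n]] for n in g if g[n] in f}

def inv(f):  # adjoint = inverse partial permutation
    return {v: k for k, v in f.items()}

def add(*fs):  # sum of p-isometries with pairwise disjoint (co)domains
    out = {}
    for f in fs:
        assert not (out.keys() & f.keys()) and not (set(out.values()) & set(f.values()))
        out.update(f)
    return out

def chain(*fs):  # product of several operators, rightmost applied first
    out = fs[-1]
    for f in reversed(fs[:-1]):
        out = compose(f, out)
    return out

N = 512
p = {n: 2 * n for n in range(N)}          # p(e_n) = e_{2n}
q = {n: 2 * n + 1 for n in range(N)}      # q(e_n) = e_{2n+1}
P, Q = inv(p), inv(q)                     # the adjoints p*, q*

u, u2 = {0: 1, 1: 0}, {0: 2, 2: 0}        # u and u' with small support

# route 1: the eight-term definition of u (x) u'
t = add(chain(p, p, P, u, p, P, P), chain(q, p, Q, u, p, P, P),
        chain(p, p, P, u, q, P, Q), chain(q, p, Q, u, q, P, Q),
        chain(p, q, P, u2, p, Q, P), chain(q, q, Q, u2, p, Q, P),
        chain(p, q, P, u2, q, Q, Q), chain(q, q, Q, u2, q, Q, Q))

# route 2: internalize the components (u (x) u')_ij = p.u_ij.p* + q.u'_ij.q*
def tens(a, b):
    terms = []
    for ri, si in ((p, P), (q, Q)):       # row i of the external matrix
        for rj, sj in ((p, P), (q, Q)):   # column j
            cij = add(chain(p, chain(si, a, rj), P),
                      chain(q, chain(si, b, rj), Q))
            terms.append(chain(ri, cij, sj))
    return add(*terms)

assert tens(u, u2) == t                   # both routes agree
```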
<br />
We now show that if <math>u</math> and <math>u'</math> are in types <math>A\limp B</math> and <math>A'\limp B'</math>, then <math>u\tens u'</math> is in <math>A\tens A'\limp B\tens B'</math>. For this we consider <math>v</math> and <math>v'</math> respectively in <math>A</math> and <math>A'</math>, so that <math>pvp^* + qv'q^*</math> is in <math>A\tens A'</math>, and we show that <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)\in B\tens B'</math>.<br />
<br />
Since <math>u</math> and <math>u'</math> are in <math>A\limp B</math> and <math>A'\limp B'</math> we have that <math>\mathrm{App}(u, v)</math> and <math>\mathrm{App}(u', v')</math> are respectively in <math>B</math> and <math>B'</math>, thus:<br />
: <math>p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^* \in B\tens B'</math>.<br />
<br />
We know that both <math>u_{11}v</math> and <math>u'_{11}v'</math> are nilpotent. But we have:<br />
: <math>\begin{align}<br />
\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^n<br />
&= \bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^n\\<br />
&= (pu_{11}vp^* + qu'_{11}v'q^*)^n\\<br />
&= p(u_{11}v)^np^* + q(u'_{11}v')^nq^*<br />
\end{align}</math><br />
<br />
Therefore <math>(u\tens u')_{11}(pvp^* + qv'q^*)</math> is nilpotent. So we can compute <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)</math>:<br />
: <math>\begin{align}<br />
&\mathrm{App}(u\tens u', pvp^* + qv'q^*)\\<br />
&= (u\tens u')_{22} + (u\tens u')_{21}(pvp^* + qv'q^*)\sum\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^k(u\tens u')_{12}\\<br />
&= pu_{22}p^* + qu'_{22}q^* + (pu_{21}p^* + qu'_{21}q^*)(pvp^* + qv'q^*)\sum\bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^k(pu_{12}p^* + qu'_{12}q^*)\\<br />
&= p\bigl(u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}\bigr)p^* + q\bigl(u'_{22} + u'_{21}v'\sum(u'_{11}v')^ku'_{12}\bigr)q^*\\<br />
&= p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^*<br />
\end{align}</math><br />
thus lives in <math>B\tens B'</math>.<br />
<br />
=== Other monoidal constructions ===<br />
<br />
==== Contraposition ====<br />
<br />
Let <math>A</math> and <math>B</math> be some types; we have:<br />
: <math>A\limp B = A\orth\limpinv B\orth</math><br />
<br />
Indeed, <math>u\in A\limp B</math> means that for any <math>v</math> and <math>w</math> in respectively <math>A</math> and <math>B\orth</math> we have <math>u.(pvp^* + qwq^*)\in\bot</math> which is exactly the definition of <math>A\orth\limpinv B\orth</math>.<br />
<br />
We will denote <math>u\orth</math> the operator:<br />
: <math>u\orth = pu_{22}p^* + pu_{21}q^* + qu_{12}p^* + qu_{11}q^*</math><br />
where <math>u_{ij}</math> is given by externalization. Therefore the externalization of <math>u\orth</math> is:<br />
: <math>(u\orth)_{ij} = u_{\bar i\,\bar j}</math> where <math>\bar .</math> is defined by <math>\bar1 = 2, \bar2 = 1</math>.<br />
From this we deduce that <math>u\orth\in B\orth\limp A\orth</math> and that <math>(u\orth)\orth = u</math>.<br />
<br />
==== Commutativity ====<br />
Let <math>\sigma</math> be the operator:<br />
: <math>\sigma = ppq^*q^* +pqp^*q^* + qpq^*p^* + qqp^*p^*</math>.<br />
One can check that <math>\sigma</math> is the internalization of the operator <math>S</math> on <math>H\oplus H\oplus H\oplus H</math> defined by: <math>S(x_1\oplus x_2\oplus x_3\oplus x_4) = x_4\oplus x_3\oplus x_2\oplus x_1</math>. In particular the components of <math>\sigma</math> are:<br />
: <math>\sigma_{11} = \sigma_{22} = 0</math>;<br />
: <math>\sigma_{12} = \sigma_{21} = pq^* + qp^*</math>.<br />
<br />
Let <math>A</math> and <math>B</math> be types and <math>u</math> and <math>v</math> be operators in <math>A</math> and <math>B</math>. Then <math>pup^* + qvq^*</math> is in <math>A\tens B</math> and as <math>\sigma_{11}.(pup^* + qvq^*) = 0</math> we may compute:<br />
: <math>\begin{align}<br />
\mathrm{App}(\sigma, pup^* + qvq^*) <br />
&= \sigma_{22} + \sigma_{21}(pup^* + qvq^*)\sum(\sigma_{11}(pup^* + qvq^*))^k\sigma_{12}\\<br />
&= (pq^* + qp^*)(pup^* + qvq^*)(pq^* + qp^*)\\<br />
&= pvp^* + quq^*<br />
\end{align}</math><br />
But <math>pvp^* + quq^*\in B\tens A</math>, thus we have shown that:<br />
: <math>\sigma\in (A\tens B) \limp (B\tens A)</math>.<br />
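The computation of <math>\mathrm{App}(\sigma, pup^* + qvq^*)</math> can also be replayed mechanically. In the sketch below (our dict encoding of <math>p</math>-isometries as partial permutations, with illustrative helper names) the assertion checks that <math>\sigma</math> indeed swaps the two components:<br />

```python
def compose(f, g):
    return {n: f[g[n]] for n in g if g[n] in f}

def inv(f):
    return {v: k for k, v in f.items()}

def add(*fs):  # sum with pairwise disjoint domains and codomains
    out = {}
    for f in fs:
        assert not (out.keys() & f.keys()) and not (set(out.values()) & set(f.values()))
        out.update(f)
    return out

def chain(*fs):  # product, rightmost factor applied first
    out = fs[-1]
    for f in reversed(fs[:-1]):
        out = compose(f, out)
    return out

N = 64
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}
P, Q = inv(p), inv(q)

def ext(u):  # external components u_ij
    return {(i, j): chain(a, u, b)
            for i, a in ((1, P), (2, Q)) for j, b in ((1, p), (2, q))}

def app(u, v, bound=50):
    # App(u,v) = u22 + u21.v.sum_k (u11.v)^k.u12
    c = ext(u)
    out, term = dict(c[2, 2]), c[1, 2]
    for _ in range(bound):
        if not term:
            return out
        out = add(out, chain(c[2, 1], v, term))
        term = chain(c[1, 1], v, term)
    raise ValueError("u11.v is not nilpotent")

sigma = add(chain(p, p, Q, Q), chain(p, q, P, Q),
            chain(q, p, Q, P), chain(q, q, P, P))    # ppq*q* + pqp*q* + qpq*p* + qqp*p*

u, v = {0: 1}, {1: 0}
w = add(chain(p, u, P), chain(q, v, Q))              # pup* + qvq* in A (x) B
assert app(sigma, w) == add(chain(p, v, P), chain(q, u, Q))   # = pvp* + quq*
```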
<br />
==== Distributivity ====<br />
We get distributivity by considering the operator:<br />
: <math>\delta = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math><br />
that is similarly shown to be in type <math>A\tens(B\tens C)\limp(A\tens B)\tens C</math> for any types <math>A</math>, <math>B</math> and <math>C</math>.<br />
<br />
<br />
==== Weak distributivity ====<br />
We can finally get weak distributivity thanks to the operators:<br />
: <math>\delta_1 = pppp^*q^* + ppqp^*q^*q^* + pqq^*q^*q^* + qpp^*p^*p^* + qqpq^*p^*p^* + qqqq^*p^*</math> and<br />
: <math>\delta_2 = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math>.<br />
<br />
Given three types <math>A</math>, <math>B</math> and <math>C</math> then one can show that:<br />
: <math>\delta_1</math> has type <math>((A\limp B)\tens C)\limp A\limp (B\tens C)</math> and<br />
: <math>\delta_2</math> has type <math>(A\tens(B\limp C))\limp (A\limp B)\limp C</math>.<br />
<br />
=== Execution formula, version 2: composition ===<br />
<br />
Let <math>A</math>, <math>B</math> and <math>C</math> be types and <math>u</math> and <math>v</math> be operators respectively in types <math>A\limp B</math> and <math>B\limp C</math>.<br />
<br />
As usual we will denote <math>u_{ij}</math> and <math>v_{ij}</math> the operators obtained by externalization of <math>u</math> and <math>v</math>, eg, <math>u_{11} = p^*up</math>, ...<br />
<br />
As <math>u</math> is in <math>A\limp B</math> we have that <math>\mathrm{App}(u, 0)=u_{22}\in B</math>; similarly as <math>v\in B\limp C</math>, thus <math>v\orth\in C\orth\limp B\orth</math>, we have <math>\mathrm{App}(v\orth, 0) = v_{11}\in B\orth</math>. Thus <math>u_{22}v_{11}</math> is nilpotent.<br />
<br />
We define the operator <math>\mathrm{Comp}(u, v)</math> by:<br />
: <math>\begin{align}<br />
\mathrm{Comp}(u, v) &= p(u_{11} + u_{12}\sum(v_{11}u_{22})^k\,v_{11}u_{21})p^*\\<br />
&+ p(u_{12}\sum(v_{11}u_{22})^k\,v_{12})q^*\\<br />
&+ q(v_{21}\sum(u_{22}v_{11})^k\,u_{21})p^*\\<br />
&+ q(v_{22} + v_{21}\sum(u_{22}v_{11})^k\,u_{22}v_{12})q^*<br />
\end{align}</math><br />
<br />
This is well defined since <math>u_{22}v_{11}</math> is nilpotent. As an example let us compute the composition of <math>u</math> and <math>\iota</math> in type <math>B\limp B</math>; recall that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math>, so we get:<br />
: <math><br />
\mathrm{Comp}(u, \iota) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^* = u<br />
</math><br />
A similar computation shows that <math>\mathrm{Comp}(\iota, v) = v</math> (we use <math>pp^* + qq^* = 1</math> here).<br />
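The identity laws can be machine-checked from the explicit formula for <math>\mathrm{Comp}</math>. The Python sketch below (our dict encoding of <math>p</math>-isometries as partial permutations; names are illustrative) implements the four components and verifies <math>\mathrm{Comp}(u, \iota) = \mathrm{Comp}(\iota, u) = u</math> on a sample operator:<br />

```python
def compose(f, g):
    return {n: f[g[n]] for n in g if g[n] in f}

def inv(f):
    return {v: k for k, v in f.items()}

def add(*fs):  # sum with pairwise disjoint domains and codomains
    out = {}
    for f in fs:
        assert not (out.keys() & f.keys()) and not (set(out.values()) & set(f.values()))
        out.update(f)
    return out

def chain(*fs):  # product, rightmost factor applied first
    out = fs[-1]
    for f in reversed(fs[:-1]):
        out = compose(f, out)
    return out

N = 64
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}
P, Q = inv(p), inv(q)

def ext(u):
    return {(i, j): chain(a, u, b)
            for i, a in ((1, P), (2, Q)) for j, b in ((1, p), (2, q))}

def geom(a, b, bound=50):  # sum_k a^k.b, assuming a nilpotent
    out, term = {}, dict(b)
    for _ in range(bound):
        if not term:
            return out
        out, term = add(out, term), compose(a, term)
    raise ValueError("not nilpotent")

def comp(u, v):  # the composition formula, assuming u22.v11 nilpotent
    cu, cv = ext(u), ext(v)
    a, b = compose(cv[1, 1], cu[2, 2]), compose(cu[2, 2], cv[1, 1])
    c11 = add(cu[1, 1], compose(cu[1, 2], geom(a, chain(cv[1, 1], cu[2, 1]))))
    c12 = compose(cu[1, 2], geom(a, cv[1, 2]))
    c21 = compose(cv[2, 1], geom(b, cu[2, 1]))
    c22 = add(cv[2, 2], compose(cv[2, 1], geom(b, chain(cu[2, 2], cv[1, 2]))))
    return add(chain(p, c11, P), chain(p, c12, Q), chain(q, c21, P), chain(q, c22, Q))

iota = add(compose(p, Q), compose(q, P))    # iota = pq* + qp*
u = {0: 1, 1: 2, 2: 0}
assert comp(u, iota) == u and comp(iota, u) == u
```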
<br />
Coming back to the general case we claim that <math>\mathrm{Comp}(u, v)</math> is in <math>A\limp C</math>: let <math>a</math> be an operator in <math>A</math>. By computation we can check that:<br />
: <math>\mathrm{App}(\mathrm{Comp}(u, v), a) = \mathrm{App}(v, \mathrm{App}(u, a))</math>.<br />
Now since <math>u</math> is in <math>A\limp B</math>, <math>\mathrm{App}(u, a)</math> is in <math>B</math> and since <math>v</math> is in <math>B\limp C</math>, <math>\mathrm{App}(v, \mathrm{App}(u, a))</math> is in <math>C</math>.<br />
<br />
If we now consider a type <math>D</math> and an operator <math>w</math> in <math>C\limp D</math> then we have:<br />
: <math>\mathrm{Comp}(\mathrm{Comp}(u, v), w) = \mathrm{Comp}(u, \mathrm{Comp}(v, w))</math>.<br />
<br />
Putting together the results of this section we finally have:<br />
<br />
{{Theorem|<br />
Let GoI(H) be defined by:<br />
* objects are types, ''ie'' sets <math>A</math> of operators satisfying: <math>A\biorth = A</math>;<br />
* morphisms from <math>A</math> to <math>B</math> are operators in type <math>A\limp B</math>;<br />
* composition is given by the formula above.<br />
<br />
Then GoI(H) is a star-autonomous category.<br />
}}<br />
<br />
= The Geometry of Interaction as an abstract machine =<br />
<br />
= Notes and references =<br />
<br />
<references/></div>
Laurent Regnier
http://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interaction
Geometry of interaction, 2010-04-30T06:25:44Z
<p>Laurent Regnier: typos, style</p>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into an algebra of operators: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math> and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'' that bears some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (the multiplicative and exponential fragment), which is enough to encode lambda-calculus. Since then Girard has proposed several improvements: firstly the extension to the additive connectives, known as ''Geometry of Interaction 3'', and more recently a complete reformulation using von Neumann algebras that makes it possible to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines; the execution formula appears as the composition of two automata interacting through a common interface. Also the execution formula has rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentioned in the section on [[coherent semantics]] under the name ''symmetric reducibility'', which was first put to use in [[phase semantics]]. First set a general space <math>P</math> called the ''proof space'' because this is where the interpretations of proofs will live. Make sure that <math>P</math> is a (not necessarily commutative) monoid. In the case of GoI, the proof space is a subset of the space of bounded operators on <math>\ell^2</math>.<br />
<br />
Second define a particular subset of <math>P</math> that will be denoted by <math>\bot</math>; then derive a duality on <math>P</math>: for <math>u,v\in P</math>, <math>u</math> and <math>v</math> are dual<ref>In modern terms one says that <math>u</math> and <math>v</math> are ''polar''.</ref> iff <math>uv\in\bot</math>.<br />
<br />
For the GoI, two dualities have proved to work; we will consider the first one: nilpotency, ''ie'', <math>\bot</math> is the set of nilpotent operators in <math>P</math>. Let us spell this out: two operators <math>u</math> and <math>v</math> are dual if there is a nonnegative integer <math>n</math> such that <math>(uv)^n = 0</math>. This duality is symmetric: if <math>uv</math> is nilpotent then <math>vu</math> is nilpotent also.<br />
<br />
When <math>X</math> is a subset of <math>P</math> define <math>X\orth</math> as the set of elements of <math>P</math> that are dual to all elements of <math>X</math>:<br />
: <math>X\orth = \{u\in P, \forall v\in X, uv\in\bot\}</math>.<br />
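Nilpotency is easy to test mechanically once operators are encoded as partial permutations of the basis indices (dicts <math>n\mapsto\varphi(n)</math>, a convention we use for illustration): an operator is nilpotent iff iterated composition with itself eventually yields the empty map. A minimal sketch, with hypothetical helper names:<br />

```python
def compose(f, g):  # operator product: n -> f(g(n)) where defined
    return {n: f[g[n]] for n in g if g[n] in f}

def nilpotent(f, bound=1000):
    # a partial permutation is nilpotent iff some power of it is the empty map
    g = dict(f)
    for _ in range(bound):
        if not g:
            return True
        g = compose(f, g)
    return False

def dual(u, v):  # u and v are dual iff uv is nilpotent
    return nilpotent(compose(u, v))

u, v = {0: 1}, {1: 2}
assert dual(u, v) and dual(v, u)              # the duality is symmetric
assert not dual({0: 1, 1: 0}, {0: 0, 1: 1})   # a cyclic product is never nilpotent
```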
<br />
This construction has a few properties that we will use without mention in the sequel. Given two subsets <math>X</math> and <math>Y</math> of <math>P</math> we have:<br />
* if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;<br />
* <math>X\subset X\biorth</math>;<br />
* <math>X\triorth = X\orth</math>.<br />
<br />
Last define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for every operator <math>v\in T\orth</math>, that is every <math>v</math> such that <math>u'v\in\bot</math> for all <math>u'\in T</math>, we have <math>uv\in\bot</math>.<br />
<br />
The real work<ref>The difficulty is to find the right duality that will make logical operations interpretable. General conditions that allow one to achieve this have been formulated by Hyland and Schalk thanks to their theory of ''double gluing''.</ref> is now to interpret logical operations, that is to associate a type to each formula, an object to each proof and show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
=== Operators, partial isometries ===<br />
<br />
We will denote by <math>H</math> the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> their ''scalar product'' is:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''orthogonal'' if their scalar product is null. We will say that two subspaces are ''disjoint'' when any two vectors taken in each subspace are orthogonal. Note that this notion is different from the set theoretic one; in particular two disjoint subspaces always have exactly one vector in common: <math>0</math>.<br />
<br />
The ''norm'' of a vector is the square root of the scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical ''hilbertian basis'' of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kronecker symbol: <math>\delta_{kn}=1</math> if <math>k=n</math>, <math>0</math> otherwise. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
An ''operator'' on <math>H</math> is a ''continuous'' linear map from <math>H</math> to <math>H</math>.<ref>Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.</ref> The set of (bounded) operators is denoted by <math>\mathcal{B}(H)</math>.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are annihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel, ''ie'', the maximal subspace disjoint from the kernel:<br />
<br />
* <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
* <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
* <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>. Adjointness is well behaved w.r.t. composition of operators:<br />
: <math>(uv)^* = v^*u^*</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>0</math> (the projector on the null subspace) or <math>1</math>, that is an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is self-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^* u = u</math>; this condition entails that we also have <math>u^*uu^* = u^*</math>. As a consequence <math>u^*u</math> and <math>uu^*</math> are both projectors, called respectively the ''initial'' and the ''final'' projector of <math>u</math> because their (co)domains are respectively the domain and the codomain of <math>u</math>:<br />
* <math>\mathrm{Dom}(u^*u) = \mathrm{Codom}(u^*u) = \mathrm{Dom}(u)</math>;<br />
* <math>\mathrm{Dom}(uu^*) = \mathrm{Codom}(uu^*) = \mathrm{Codom}(u)</math>.<br />
<br />
The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math>, that is if <math>u^* u = 1</math>, we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries, the equation <math>uu^* + vv^* = 1</math> means that the codomains of <math>u</math> and <math>v</math> are disjoint and their direct sum is <math>H</math>.<br />
<br />
=== Partial permutations ===<br />
<br />
We will now define our proof space which turns out to be the set of partial isometries acting as permutations on the canonical basis <math>(e_n)_{n\in\mathbb{N}}</math>.<br />
<br />
More precisely a ''partial permutation'' <math>\varphi</math> on <math>\mathbb{N}</math> is a one-to-one map defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
* <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
* if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
* the codomain of <math>\varphi\circ\psi</math> is the image of the domain: <math>C_{\varphi\circ\psi} = \{\varphi(\psi(n)), n\in D_{\varphi\circ\psi}\}</math>.<br />
<br />
Partial permutations are well known to form a structure of ''inverse monoid'' that we detail now.<br />
<br />
Given a subset <math>D</math> of <math>\mathbb{N}</math>, the ''partial identity'' on <math>D</math> is the partial permutation <math>\varphi</math> defined by:<br />
* <math>D_\varphi = D</math>;<br />
* <math>\varphi(n) = n</math> for any <math>n\in D_\varphi</math>.<br />
Thus the codomain of <math>\varphi</math> is <math>D</math>.<br />
<br />
The partial identity on <math>D</math> will be denoted by <math>1_D</math>. Partial identities are idempotent for composition.<br />
<br />
Among partial identities one finds the identity on the empty subset, that is the empty map, which we will denote by <math>0</math>, and the identity on <math>\mathbb{N}</math> which we will denote by <math>1</math>. This latter partial permutation is the neutral element for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and which satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
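This inverse monoid structure is directly executable if a partial permutation is represented as a dict <math>n\mapsto\varphi(n)</math> (an injective partial map); the helper names below are illustrative conventions, not standard terminology:<br />

```python
# partial permutations as dicts n -> phi(n) (injective partial maps on N)
def compose(f, g):
    # n is in the domain of f.g iff n is in dom(g) and g(n) is in dom(f)
    return {n: f[g[n]] for n in g if g[n] in f}

def inv(f):  # the inverse partial permutation
    return {v: k for k, v in f.items()}

def identity(D):  # the partial identity 1_D
    return {n: n for n in D}

phi = {0: 3, 1: 4, 5: 2}
psi = {3: 5, 4: 0, 2: 1}

assert compose(phi, psi) == {3: 2, 4: 3, 2: 4}
assert compose(inv(phi), phi) == identity({0, 1, 5})   # phi^-1 . phi = 1_{D_phi}
assert compose(phi, inv(phi)) == identity({3, 4, 2})   # phi . phi^-1 = 1_{C_phi}
d = identity({2, 7})
assert compose(d, d) == d          # partial identities are idempotent
```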
<br />
=== The proof space ===<br />
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other terms if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined so that the definition of <math>u_\varphi</math> reads:<br />
: <math>u_\varphi(e_n) = e_{\varphi(n)}</math>.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. In particular if <math>\varphi</math> is <math>1_D</math> then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.<br />
<br />
{{Definition|<br />
We call ''<math>p</math>-isometry'' a partial isometry of the form <math>u_\varphi</math> where <math>\varphi</math> is a partial permutation on <math>\mathbb{N}</math>. The ''proof space'' <math>\mathcal{P}</math> is the set of all <math>p</math>-isometries.<br />
}}<br />
<br />
{{Proposition|<br />
Let <math>\varphi</math> and <math>\psi</math> be two partial permutations. We have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
The adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the initial projector of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math><br />
<br />
and the final projector of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u^*_\varphi = u_{1_{C_\varphi}}</math>.<br />
<br />
If <math>p</math> is a projector in <math>\mathcal{P}</math> then there is a partial identity <math>1_D</math> such that <math>p= u_{1_D}</math>.<br />
<br />
Projectors commute, in particular we have:<br />
: <math>u_\varphi u_\varphi^*u_\psi u_\psi^* = u_\psi u_\psi^*u_\varphi u_\varphi^*</math>.<br />
}}<br />
<br />
Note that this entails all the other commutations of projectors: <math>u^*_\varphi u_\varphi u_\psi u^*_\psi = u_\psi u^*_\psi u^*_\varphi u_\varphi</math> and <math>u^*_\varphi u_\varphi u^*_\psi u_\psi = u^*_\psi u_\psi u^*_\varphi u_\varphi</math>.<br />
<br />
In particular note that <math>0</math> is a <math>p</math>-isometry. The set <math>\mathcal{P}</math> is a submonoid of <math>\mathcal{B}(H)</math> but it is not a subalgebra.<ref><math>\mathcal{P}</math> is the normalizing groupoid of the maximal commutative subalgebra of <math>\mathcal{B}(H)</math> consisting of all operators ''diagonalizable'' in the canonical basis.</ref> In general given <math>u,v\in\mathcal{P}</math> we don't necessarily have <math>u+v\in\mathcal{P}</math>. However we have:<br />
<br />
{{Proposition|<br />
Let <math>u, v\in\mathcal{P}</math>. Then <math>u+v\in\mathcal{P}</math> iff <math>u</math> and <math>v</math> have disjoint domains and disjoint codomains, that is:<br />
: <math>u+v\in\mathcal{P}</math> iff <math>uu^*vv^* = u^*uv^*v = 0</math>.<br />
}}<br />
<br />
{{Proof|<br />
Suppose for contradiction that <math>e_n</math> is in the domains of both <math>u</math> and <math>v</math>. There are integers <math>p</math> and <math>q</math> such that <math>u(e_n) = e_p</math> and <math>v(e_n) = e_q</math>, thus <math>(u+v)(e_n) = e_p + e_q</math> which is not a basis vector; therefore <math>u+v</math> is not a <math>p</math>-isometry.<br />
}}<br />
<br />
As a corollary note that if <math>u+v=0</math> then <math>u=v=0</math>.<br />
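The disjointness condition of the proposition can be tested directly in the dict encoding of <math>p</math>-isometries as partial permutations (an illustrative convention of ours): the two products of projectors are computed and checked against the empty map:<br />

```python
def compose(f, g):
    return {n: f[g[n]] for n in g if g[n] in f}

def inv(f):  # adjoint = inverse partial permutation
    return {v: k for k, v in f.items()}

def summable(u, v):
    # u+v is a p-isometry iff uu*.vv* = u*u.v*v = 0 (disjoint (co)domains)
    return (not compose(compose(u, inv(u)), compose(v, inv(v)))
            and not compose(compose(inv(u), u), compose(inv(v), v)))

u, v, w = {0: 2}, {1: 3}, {0: 4}
assert summable(u, v)          # disjoint domains and disjoint codomains
assert not summable(u, w)      # u and w share the domain vector e_0
```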
<br />
=== From operators to matrices: internalization/externalization ===<br />
<br />
It will be convenient to view operators on <math>H</math> as acting on <math>H\oplus H</math>, and conversely. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\rightsquigarrow p(x)+q(y)</math> where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is<br />
satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are<br />
disjoint, thus we have <math>p^*q = q^*p = 0</math>. As the sum of their<br />
codomains is the full space <math>H</math> we also have <math>pp^* + qq^* = 1</math>.<br />
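These identities hold exactly on a finite truncation of the basis when <math>p</math> and <math>q</math> are encoded as partial permutations (dicts <math>n\mapsto\varphi(n)</math>, our illustrative convention); <math>pp^* + qq^*</math> then covers the identity on indices below twice the truncation bound:<br />

```python
def compose(f, g):
    return {n: f[g[n]] for n in g if g[n] in f}

def inv(f):  # adjoint = inverse partial permutation
    return {v: k for k, v in f.items()}

N = 100                                   # finite truncation of the basis
p = {n: 2 * n for n in range(N)}          # p(e_n) = e_{2n}
q = {n: 2 * n + 1 for n in range(N)}      # q(e_n) = e_{2n+1}
one = {n: n for n in range(N)}

assert compose(inv(p), p) == one and compose(inv(q), q) == one   # p*p = q*q = 1
assert compose(inv(p), q) == {} and compose(inv(q), p) == {}     # p*q = q*p = 0
# pp* + qq* = 1: the two final projectors cover the whole (truncated) basis
assert {**compose(p, inv(p)), **compose(q, inv(q))} == {n: n for n in range(2 * N)}
```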
<br />
Note that we have chosen <math>p</math> and <math>q</math> in <math>\mathcal{P}</math>. However the choice is arbitrary: any two <math>p</math>-isometries with full domain and disjoint codomains would do the job.<br />
<br />
Given an operator <math>u</math> on <math>H</math> we may ''externalize'' it obtaining an operator <math>U</math> on <math>H\oplus H</math> defined by the matrix:<br />
: <math>U = \begin{pmatrix}<br />
u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}</math><br />
where the <math>u_{ij}</math>'s are given by:<br />
: <math>u_{11} = p^*up</math>;<br />
: <math>u_{12} = p^*uq</math>;<br />
: <math>u_{21} = q^*up</math>;<br />
: <math>u_{22} = q^*uq</math>.<br />
<br />
The <math>u_{ij}</math>'s are called the ''external components'' of <math>u</math>. The externalization is functorial in the sense that if <math>v</math> is another operator externalized as:<br />
: <math>V = \begin{pmatrix}<br />
v_{11} & v_{12}\\<br />
v_{21} & v_{22}<br />
\end{pmatrix} <br />
= \begin{pmatrix}<br />
p^*vp & p^*vq\\<br />
q^*vp & q^*vq<br />
\end{pmatrix}<br />
</math><br />
then the externalization of <math>uv</math> is <math>UV</math>.<br />
<br />
As <math>pp^* + qq^* = 1</math> we have:<br />
: <math>u = (pp^*+qq^*)u(pp^*+qq^*) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math><br />
which entails that externalization is reversible, its converse being called ''internalization''.<br />
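The round trip between externalization and internalization can be checked concretely in the dict encoding of <math>p</math>-isometries as partial permutations (our illustrative convention; helper names are hypothetical):<br />

```python
def compose(f, g):
    return {n: f[g[n]] for n in g if g[n] in f}

def inv(f):  # adjoint = inverse partial permutation
    return {v: k for k, v in f.items()}

N = 100
p = {n: 2 * n for n in range(N)}          # p(e_n) = e_{2n}
q = {n: 2 * n + 1 for n in range(N)}      # q(e_n) = e_{2n+1}

def ext(u):  # externalization: u11 = p*up, u12 = p*uq, u21 = q*up, u22 = q*uq
    return {(i, j): compose(inv(a), compose(u, b))
            for i, a in ((1, p), (2, q)) for j, b in ((1, p), (2, q))}

def internal(c):  # internalization: p.c11.p* + p.c12.q* + q.c21.p* + q.c22.q*
    out = {}
    for (i, j), cij in c.items():
        a, b = (p, q)[i - 1], (p, q)[j - 1]
        out.update(compose(a, compose(cij, inv(b))))  # the four terms are disjoint
    return out

u = {0: 5, 3: 2, 4: 4}
assert internal(ext(u)) == u      # externalization is reversible
```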
<br />
If we suppose that <math>u</math> is a <math>p</math>-isometry then so are its components <math>u_{ij}</math>. Thus the formula above entails that the four terms of the sum have pairwise disjoint domains and pairwise disjoint codomains, from which we deduce:<br />
<br />
{{Proposition|<br />
If <math>u</math> is a <math>p</math>-isometry and <math>u_{ij}</math> are its external components then:<br />
* <math>u_{1j}</math> and <math>u_{2j}</math> have disjoint domains, that is <math>u_{1j}^*u_{1j}u_{2j}^*u_{2j} = 0</math> for <math>j=1,2</math>;<br />
* <math>u_{i1}</math> and <math>u_{i2}</math> have disjoint codomains, that is <math>u_{i1}u_{i1}^*u_{i2}u_{i2}^* = 0</math> for <math>i=1,2</math>.<br />
}}<br />
<br />
As an example of computation in <math>\mathcal{P}</math> let us check that the product of the final projectors of <math>pu_{11}p^*</math> and <math>pu_{12}q^*</math> is null:<br />
: <math>\begin{align}<br />
(pu_{11}p^*)(pu^*_{11}p^*)(pu_{12}q^*)(qu_{12}^*p^*)<br />
&= pu_{11}u_{11}^*u_{12}u_{12}^*p^*\\<br />
&= pp^*upp^*u^*pp^*uqq^*u^*pp^*\\<br />
&= pp^*u(pp^*)(u^*pp^*u)qq^*u^*pp^*\\<br />
&= pp^*u(u^*pp^*u)(pp^*)qq^*u^*pp^*\\<br />
&= pp^*uu^*pp^*u(pp^*)(qq^*)u^*pp^*\\<br />
&= 0<br />
\end{align}</math><br />
where we used the fact that all projectors in <math>\mathcal{P}</math> commute, which is in particular the case of <math>pp^*</math> and <math>u^*pp^*u</math>.<br />
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are <math>p</math>-isometries we say they are dual when <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators. A ''type'' is a subset of <math>\mathcal{P}</math> that is equal to its bidual. In particular <math>X\orth</math> is a type for any <math>X\subset\mathcal{P}</math>. We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
<br />
=== The tensor and the linear application ===<br />
<br />
If <math>u</math> and <math>v</math> are two <math>p</math>-isometries, summing them doesn't in general produce a <math>p</math>-isometry. However as <math>pup^*</math> and <math>qvq^*</math> have disjoint domains and disjoint codomains, it is true that <math>pup^* + qvq^*</math> is a <math>p</math>-isometry. Given two types <math>A</math> and <math>B</math>, we thus define their ''tensor'' by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by bidual to make sure that we obtain a type.<br />
<br />
From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
{{Remark|<br />
This so-called tensor resembles a sum rather than a product. We will stick to this terminology though because it defines the interpretation of the tensor connective of linear logic.<br />
}}<br />
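Concretely, with operators represented as partial permutations of <math>\mathbb{N}</math> and the internalization isometries given by <math>p: n\mapsto 2n</math> and <math>q: n\mapsto 2n+1</math>, the generator <math>pup^* + qvq^*</math> acts as <math>u</math> on even indices and as <math>v</math> on odd indices. A Python sketch under these assumptions (the encoding, the truncation bound <code>N</code> and the sample operators are illustrative choices of ours):<br />

```python
def comp(f, g):
    """(f o g)(n) = f(g(n)) for partial permutations encoded as dicts."""
    return {n: f[g[n]] for n in g if g[n] in f}

def adj(f):
    """Adjoint of the induced partial isometry = inverse partial permutation."""
    return {m: n for n, m in f.items()}

N = 16                                      # finite truncation of the basis
p = {n: 2 * n for n in range(N // 2)}       # p : n -> 2n
q = {n: 2 * n + 1 for n in range(N // 2)}   # q : n -> 2n+1

u = {0: 1, 1: 0}        # a p-isometry playing the role of u in A
v = {0: 2, 2: 0, 3: 3}  # and v in B
tensor_gen = {**comp(p, comp(u, adj(p))), **comp(q, comp(v, adj(q)))}
# pup* acts as u on even indices, qvq* as v on odd indices:
assert tensor_gen == {0: 2, 2: 0, 1: 5, 5: 1, 7: 7}
```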
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we get:<br />
: <math>A\limp B = \{u\in\mathcal{P}\text{ s.t. } \forall v\in A, \forall w\in B\orth,\, u.(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
=== The identity ===<br />
<br />
Given a type <math>A</math> we are to find an operator <math>\iota</math> in type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. This is the case since <math>u</math> is in <math>A</math> and <math>v</math> in <math>A\orth</math>.<br />
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
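In the same partial-permutation representation, <math>\iota = pq^* + qp^*</math> is simply the swap <math>2n \leftrightarrow 2n+1</math>, and the nilpotency argument above can be observed on a small example. A hedged Python sketch (the encoding and the sample dual pair <math>u, v</math> are ours):<br />

```python
def comp(f, g):
    return {n: f[g[n]] for n in g if g[n] in f}

def adj(f):
    return {m: n for n, m in f.items()}

def nilpotent(f, bound=16):
    """True iff f^k = 0 for some k <= bound."""
    g = dict(f)
    for _ in range(bound):
        if not g:
            return True
        g = comp(f, g)
    return not g

N = 16
p = {n: 2 * n for n in range(N // 2)}
q = {n: 2 * n + 1 for n in range(N // 2)}
iota = {**comp(p, adj(q)), **comp(q, adj(p))}    # pq* + qp*
assert all(iota[n] == n ^ 1 for n in range(N))   # swaps 2n <-> 2n+1

u, v = {0: 1}, {2: 0}          # uv is nilpotent, i.e. u and v are dual
w = {**comp(p, comp(u, adj(p))), **comp(q, comp(v, adj(q)))}  # pup* + qvq*
assert nilpotent(comp(u, v)) and nilpotent(comp(iota, w))
```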
<br />
=== The execution formula, version 1: application ===<br />
<br />
{{Definition|<br />
Let <math>u</math> and <math>v</math> be two operators; as above denote by <math>u_{ij}</math> the external components of <math>u</math>. If <math>u_{11}v</math> is nilpotent we define the ''application of <math>u</math> to <math>v</math>'' by:<br />
: <math>\mathrm{App}(u,v) = u_{22} + u_{21}v\sum_k(u_{11}v)^ku_{12}</math>.<br />
}}<br />
<br />
Note that the hypothesis that <math>u_{11}v</math> is nilpotent entails that the sum <math>\sum_k(u_{11}v)^k</math> is actually finite. It would be enough to assume that this sum converges. For simplicity we stick to the nilpotency condition, but we should mention that weak nilpotency would do as well.<br />
<br />
{{Theorem|<br />
If <math>u</math> and <math>v</math> are <math>p</math>-isometries such that <math>u_{11}v</math> is nilpotent, then <math>\mathrm{App}(u,v)</math> is also a <math>p</math>-isometry.<br />
}}<br />
<br />
{{Proof|<br />
Let us note <math>E_k = u_{21}v(u_{11}v)^ku_{12}</math>. Recall that <math>u_{22}</math> and <math>u_{12}</math> being external components of the <math>p</math>-isometry <math>u</math>, they have disjoint domains. Thus it is also the case of <math>u_{22}</math> and <math>E_k</math>. Similarly <math>u_{22}</math> and <math>E_k</math> have disjoint codomains because <math>u_{22}</math> and <math>u_{21}</math> have disjoint codomains.<br />
<br />
Let now <math>k</math> and <math>l</math> be two integers such that <math>k>l</math> and let us compute for example the intersection of the codomains of <math>E_k</math> and <math>E_l</math>:<br />
: <math><br />
E_kE^*_kE_lE^*_l = (u_{21}v(u_{11}v)^ku_{12})(u^*_{12}(v^*u^*_{11})^kv^*u^*_{21})(u_{21}v(u_{11}v)^lu_{12})(u^*_{12}(v^*u^*_{11})^lv^*u_{21}^*)<br />
</math><br />
As <math>k>l</math> we may write <math>(v^*u_{11}^*)^l = (v^*u^*_{11})^{k-l-1}v^*u^*_{11}(v^*u^*_{11})^l</math>. Let us note <math>E = u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}</math> so that <math>E_kE^*_kE_lE^*_l = u_{21}v(u_{11}v)^ku_{12}u^*_{12}(v^*u^*_{11})^{k-l-1}v^*Eu^*_{12}(v^*u^*_{11})^lv^*u_{21}^*</math>. We have:<br />
: <math>\begin{align}<br />
E &= u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}\\<br />
&= (u^*_{11}u_{11}u^*_{11})(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}\\<br />
&= u^*_{11}(u_{11}u^*_{11})\bigl((v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^l\bigr)u_{12}\\<br />
&= u^*_{11}\bigl((v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^l\bigr)(u_{11}u^*_{11})u_{12}\\<br />
&= u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{11}u^*_{11}u_{12}\\<br />
&= 0<br />
\end{align}</math><br />
because <math>u_{11}</math> and <math>u_{12}</math> have disjoint codomains, thus <math>u^*_{11}u_{12} = 0</math>. <br />
<br />
Similarly we can show that <math>E_k</math> and <math>E_l</math> have disjoint domains. Therefore we have proved that all terms of the sum <math>\mathrm{App}(u,v)</math> have disjoint domains and disjoint codomains. Consequently <math>\mathrm{App}(u,v)</math> is a <math>p</math>-isometry.<br />
}}<br />
<br />
{{Theorem|<br />
Let <math>A</math> and <math>B</math> be two types and <math>u</math> a <math>p</math>-isometry. Then the two following conditions are equivalent:<br />
# <math>u\in A\limp B</math>;<br />
# for any <math>v\in A</math> we have:<br />
#* <math>u_{11}v</math> is nilpotent and<br />
#* <math>\mathrm{App}(u, v)\in B</math>.<br />
}}<br />
<br />
{{Proof|<br />
Let <math>v</math> and <math>w</math> be two <math>p</math>-isometries. If we compute<br />
: <math>(u.(pvp^* + qwq^*))^n = \bigl((pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*)(pvp^* + qwq^*)\bigr)^n</math><br />
we get a finite sum of monomial operators of the form:<br />
# <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math><br />
# <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{12}w(u_{22}w)^{i_m}q^*</math>,<br />
# <math>q(u_{22}w)^{i_0}u_{21}v(u_{11}v)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math> or<br />
# <math>q(u_{22}w)^{i_0}u_{21}v(u_{11}v)^{i_1}\dots u_{12}w(u_{22}w)^{i_m}q^*</math>,<br />
for all tuples of nonnegative integers <math>(i_0,\dots, i_m)</math> such that <math>i_0+\cdots+i_m+m = n</math>.<br />
<br />
Each of these monomials is a <math>p</math>-isometry. Furthermore they have disjoint domains and disjoint codomains because their sum is the <math>p</math>-isometry <math>(u.(pvp^* + qwq^*))^n</math>. This entails that <math>(u.(pvp^* + qwq^*))^n = 0</math> iff all these monomials are null.<br />
<br />
Suppose <math>u_{11}v</math> is nilpotent and consider:<br />
: <math>\bigl(\mathrm{App}(u,v)w\bigr)^n = \biggl(\bigl(u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12}\bigr)w\biggr)^n</math>.<br />
Developing we get a finite sum of monomials of the form:<br />
: 5. <math>(u_{22}w)^{l_0}u_{21}v(u_{11}v)^{k_1}u_{12}w(u_{22}w)^{l_1}\dots u_{21}v(u_{11}v)^{k_m}u_{12}w(u_{22}w)^{l_m}</math><br />
for all tuples <math>(l_0, k_1, l_1,\dots, k_m, l_m)</math> such that <math>l_0+\cdots+l_m + m = n</math> and <math>k_i</math> is less than the degree of nilpotency of <math>u_{11}v</math> for all <math>i</math>.<br />
<br />
Again as these monomials are <math>p</math>-isometries and their sum is the <math>p</math>-isometry <math>(\mathrm{App}(u,v)w)^n</math>, they have pairwise disjoint domains and pairwise disjoint codomains. Note that each of these monomials is equal to <math>q^*Mq</math> where <math>M</math> is a monomial of type 4 above.<br />
<br />
As before we thus have that <math>\bigl(\mathrm{App}(u,v)w\bigr)^n = 0</math> iff all monomials of type 5 are null.<br />
<br />
Suppose now that <math>u\in A\limp B</math> and <math>v\in A</math>. Then, since <math>0\in B\orth</math> (<math>0</math> belongs to any type) <math>u.(pvp^*) = pu_{11}vp^*</math> is nilpotent, thus <math>u_{11}v</math> is nilpotent.<br />
<br />
Suppose further that <math>w\in B\orth</math>. Then <math>u.(pvp^*+qwq^*)</math> is nilpotent, thus there is an <math>N</math> such that <math>(u.(pvp^* + qwq^*))^n=0</math> for any <math>n\geq N</math>. This entails that all monomials of type 1 to 4 are null. Therefore all monomials appearing in the development of <math>(\mathrm{App}(u,v)w)^N</math> are null, which proves that <math>\mathrm{App}(u,v)w</math> is nilpotent. Thus <math>\mathrm{App}(u,v)\in B</math>.<br />
<br />
Conversely suppose for any <math>v\in A</math> and <math>w\in B\orth</math>, <math>u_{11}v</math> and <math>\mathrm{App}(u,v)w</math> are nilpotent. Let <math>P</math> and <math>N</math> be their respective degrees of nilpotency and put <math>n=N(P+1)+N</math>. Then we claim that all monomials of type 1 to 4 appearing in the development of <math>(u.(pvp^*+qwq^*))^n</math> are null.<br />
<br />
Consider for example a monomial of type 1:<br />
: <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math><br />
with <math>i_0+\cdots+i_m + m = n</math>. Note that <math>m</math> must be even.<br />
<br />
If <math>i_{2k}\geq P</math> for some <math>0\leq k\leq m/2</math> then <math>(u_{11}v)^{i_{2k}}=0</math> thus our monomial is null. Otherwise if <math>i_{2k}<P</math> for all <math>k</math> we have:<br />
: <math>i_1+i_3+\cdots +i_{m-1} + m/2 = n - m/2 - (i_0+i_2+\cdots +i_m)</math><br />
thus:<br />
: <math>i_1+i_3+\cdots +i_{m-1} + m/2\geq n - m/2 - (1+m/2)P</math>.<br />
Now if <math>m/2\geq N</math> then <math>i_1+\cdots+i_{m-1}+m/2 \geq N</math>. Otherwise <math>1+m/2\leq N</math> thus<br />
: <math>i_1+i_3+\cdots +i_{m-1} + m/2\geq n - N - NP = N</math>.<br />
Since <math>N</math> is the degree of nilpotency of <math>\mathrm{App}(u,v)w</math> we have that the monomial:<br />
: <math>(u_{22}w)^{i_1}u_{21}v(u_{11}v)^{i_2}u_{12}w\dots(u_{11}v)^{i_{m-2}}u_{12}w(u_{22}w)^{i_{m-1}}</math><br />
is null, thus also the monomial of type 1 we started with.<br />
}}<br />
<br />
{{Corollary|<br />
If <math>A</math> and <math>B</math> are types then we have:<br />
: <math>A\limp B = \{u\in\mathcal{P} \text{ such that }\forall v\in A: u_{11}v\in\bot\text{ and } \mathrm{App}(u, v)\in B\}</math>.<br />
}}<br />
<br />
As an example if we compute the application of the interpretation of the identity <math>\iota</math> in type <math>A\limp A</math> to the operator <math>v\in A</math> then we have:<br />
: <math>\mathrm{App}(\iota, v) = \iota_{22} + \iota_{21}v\sum(\iota_{11}v)^k\iota_{12}</math>.<br />
Now recall that <math>\iota = pq^* + qp^*</math> so that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math> and we thus get:<br />
: <math>\mathrm{App}(\iota, v) = v</math><br />
as expected.<br />
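This computation can be replayed concretely by representing <math>p</math>-isometries as partial permutations of <math>\mathbb{N}</math>. The sketch below (our encoding and helper names; <math>v</math> is an arbitrary small example) implements the execution formula term by term and checks <math>\mathrm{App}(\iota, v) = v</math>:<br />

```python
def comp(f, g):
    return {n: f[g[n]] for n in g if g[n] in f}

def adj(f):
    return {m: n for n, m in f.items()}

N = 16
p = {n: 2 * n for n in range(N // 2)}
q = {n: 2 * n + 1 for n in range(N // 2)}

def ext(u, i, j):
    """External component u_ij = p_i^* u p_j, with p_1 = p and p_2 = q."""
    pi, pj = (p, q)[i - 1], (p, q)[j - 1]
    return comp(adj(pi), comp(u, pj))

def App(u, v):
    """Execution formula: u22 + u21 v sum_k (u11 v)^k u12."""
    res = dict(ext(u, 2, 2))
    t = ext(u, 1, 2)               # t = (u11 v)^k u12, starting at k = 0
    for _ in range(N):             # the loop is finite: u11 v is nilpotent
        res.update(comp(ext(u, 2, 1), comp(v, t)))
        t = comp(ext(u, 1, 1), comp(v, t))
        if not t:
            break
    return res

iota = {**comp(p, adj(q)), **comp(q, adj(p))}   # the identity proof pq* + qp*
v = {0: 1, 1: 2, 2: 0}                           # an arbitrary p-isometry in A
assert App(iota, v) == v
```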
<br />
=== The tensor rule ===<br />
<br />
Let now <math>A, A', B</math> and <math>B'</math> be types and consider two operators <math>u</math> and <math>u'</math> respectively in <math>A\limp B</math> and <math>A'\limp B'</math>. We define an operator denoted by <math>u\tens u'</math> by:<br />
: <math>\begin{align}<br />
u\tens u' &= ppp^*upp^*p^* + qpq^*upp^*p^* + ppp^*uqp^*q^* + qpq^*uqp^*q^*\\<br />
&+ pqp^*u'pq^*p^* + qqq^*u'pq^*p^* + pqp^*u'qq^*q^* + qqq^*u'qq^*q^*<br />
\end{align}</math><br />
<br />
Once again the notation is motivated by linear logic syntax and conflicts with linear algebra practice, since what we denote by <math>u\tens u'</math> is actually the internalization of the direct sum <math>u\oplus u'</math>.<br />
<br />
Indeed if we think of <math>u</math> and <math>u'</math> as the internalizations of the matrices:<br />
: <math><br />
\begin{pmatrix}u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}<br />
</math> and <math><br />
\begin{pmatrix}u'_{11} & u'_{12}\\<br />
u'_{21} & u'_{22}<br />
\end{pmatrix}</math><br />
then we may write:<br />
: <math>\begin{align}<br />
u\tens u' &= ppu_{11}p^*p^* + qpu_{21}p^*p^* + ppu_{12}p^*q^* + qpu_{22}p^*q^*\\<br />
&+ pqu'_{11}q^*p^* + qqu'_{21}q^*p^* + pqu'_{12}q^*q^* + qqu'_{22}q^*q^*<br />
\end{align}</math><br />
<br />
Thus the components of <math>u\tens u'</math> are given by:<br />
: <math>(u\tens u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math>.<br />
and we see that <math>u\tens u'</math> is actually the internalization of the matrix:<br />
: <math><br />
\begin{pmatrix}<br />
u_{11} & 0 & u_{12} & 0 \\<br />
0 & u'_{11} & 0 & u'_{12} \\<br />
u_{21} & 0 & u_{22} & 0 \\<br />
0 & u'_{21} & 0 & u'_{22} \\<br />
\end{pmatrix}<br />
</math><br />
<br />
We are now to show that if we suppose <math>u</math> and <math>u'</math> are in types <math>A\limp B</math> and <math>A'\limp B'</math>, then <math>u\tens u'</math> is in <math>A\tens A'\limp B\tens B'</math>. For this we consider <math>v</math> and <math>v'</math> respectively in <math>A</math> and <math>A'</math>, so that <math>pvp^* + qv'q^*</math> is in <math>A\tens A'</math>, and we show that <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)\in B\tens B'</math>.<br />
<br />
Since <math>u</math> and <math>u'</math> are in <math>A\limp B</math> and <math>A'\limp B'</math> we have that <math>\mathrm{App}(u, v)</math> and <math>\mathrm{App}(u', v')</math> are respectively in <math>B</math> and <math>B'</math>, thus:<br />
: <math>p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^* \in B\tens B'</math>.<br />
<br />
We know that both <math>u_{11}v</math> and <math>u'_{11}v'</math> are nilpotent. But we have:<br />
: <math>\begin{align}<br />
\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^n<br />
&= \bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^n\\<br />
&= (pu_{11}vp^* + qu'_{11}v'q^*)^n\\<br />
&= p(u_{11}v)^np^* + q(u'_{11}v')^nq^*<br />
\end{align}</math><br />
<br />
Therefore <math>(u\tens u')_{11}(pvp^* + qv'q^*)</math> is nilpotent. So we can compute <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)</math>:<br />
: <math>\begin{align}<br />
&\mathrm{App}(u\tens u', pvp^* + qv'q^*)\\<br />
&= (u\tens u')_{22} + (u\tens u')_{21}(pvp^* + qv'q^*)\sum\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^k(u\tens u')_{12}\\<br />
&= pu_{22}p^* + qu'_{22}q^* + (pu_{21}p^* + qu'_{21}q^*)(pvp^* + qv'q^*)\sum\bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^k(pu_{12}p^* + qu'_{12}q^*)\\<br />
&= p\bigl(u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}\bigr)p^* + q\bigl(u'_{22} + u'_{21}v'\sum(u'_{11}v')^ku'_{12}\bigr)q^*\\<br />
&= p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^*<br />
\end{align}</math><br />
thus lives in <math>B\tens B'</math>.<br />
<br />
=== Other monoidal constructions ===<br />
<br />
==== Contraposition ====<br />
<br />
Let <math>A</math> and <math>B</math> be some types; we have:<br />
: <math>A\limp B = A\orth\limpinv B\orth</math><br />
<br />
Indeed, <math>u\in A\limp B</math> means that for any <math>v</math> and <math>w</math> in respectively <math>A</math> and <math>B\orth</math> we have <math>u.(pvp^* + qwq^*)\in\bot</math> which is exactly the definition of <math>A\orth\limpinv B\orth</math>.<br />
<br />
We will denote <math>u\orth</math> the operator:<br />
: <math>u\orth = pu_{22}p^* + pu_{21}q^* + qu_{12}p^* + qu_{11}q^*</math><br />
where <math>u_{ij}</math> is given by externalization. Therefore the externalization of <math>u\orth</math> is:<br />
: <math>(u\orth)_{ij} = u_{\bar i\,\bar j}</math> where <math>\bar .</math> is defined by <math>\bar1 = 2, \bar2 = 1</math>.<br />
From this we deduce that <math>u\orth\in B\orth\limp A\orth</math> and that <math>(u\orth)\orth = u</math>.<br />
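As a concrete check, in the partial-permutation representation the operator <math>u\orth</math> is obtained by swapping components as above, and the involutivity <math>(u\orth)\orth = u</math> can be verified directly (our encoding; <math>u</math> is an arbitrary small example):<br />

```python
def comp(f, g):
    return {n: f[g[n]] for n in g if g[n] in f}

def adj(f):
    return {m: n for n, m in f.items()}

N = 16
p = {n: 2 * n for n in range(N // 2)}
q = {n: 2 * n + 1 for n in range(N // 2)}

def ext(u, i, j):
    pi, pj = (p, q)[i - 1], (p, q)[j - 1]
    return comp(adj(pi), comp(u, pj))

def orth(u):
    """Internalize the swapped components: (u_orth)_ij = u_{bar i, bar j}."""
    return {**comp(p, comp(ext(u, 2, 2), adj(p))),
            **comp(p, comp(ext(u, 2, 1), adj(q))),
            **comp(q, comp(ext(u, 1, 2), adj(p))),
            **comp(q, comp(ext(u, 1, 1), adj(q)))}

u = {0: 3, 3: 0, 5: 6, 6: 5}   # a small p-isometry
assert orth(orth(u)) == u
```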
<br />
==== Commutativity ====<br />
Let <math>\sigma</math> be the operator:<br />
: <math>\sigma = ppq^*q^* +pqp^*q^* + qpq^*p^* + qqp^*p^*</math>.<br />
One can check that <math>\sigma</math> is the internalization of the operator <math>S</math> on <math>H\oplus H\oplus H\oplus H</math> defined by: <math>S(x_1\oplus x_2\oplus x_3\oplus x_4) = x_4\oplus x_3\oplus x_2\oplus x_1</math>. In particular the components of <math>\sigma</math> are:<br />
: <math>\sigma_{11} = \sigma_{22} = 0</math>;<br />
: <math>\sigma_{12} = \sigma_{21} = pq^* + qp^*</math>.<br />
<br />
Let <math>A</math> and <math>B</math> be types and <math>u</math> and <math>v</math> be operators in <math>A</math> and <math>B</math>. Then <math>pup^* + qvq^*</math> is in <math>A\tens B</math> and as <math>\sigma_{11}.(pup^* + qvq^*) = 0</math> we may compute:<br />
: <math>\begin{align}<br />
\mathrm{App}(\sigma, pup^* + qvq^*) <br />
&= \sigma_{22} + \sigma_{21}(pup^* + qvq^*)\sum(\sigma_{11}(pup^* + qvq^*))^k\sigma_{12}\\<br />
&= (pq^* + qp^*)(pup^* + qvq^*)(pq^* + qp^*)\\<br />
&= pvp^* + quq^*<br />
\end{align}</math><br />
But <math>pvp^* + quq^*\in B\tens A</math>, thus we have shown that:<br />
: <math>\sigma\in (A\tens B) \limp (B\tens A)</math>.<br />
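The computation above can also be run concretely in the partial-permutation representation (our encoding; the operators <math>u, v</math> below are arbitrary small examples):<br />

```python
def comp(f, g):
    return {n: f[g[n]] for n in g if g[n] in f}

def adj(f):
    return {m: n for n, m in f.items()}

N = 16
p = {n: 2 * n for n in range(N // 2)}
q = {n: 2 * n + 1 for n in range(N // 2)}

def ext(u, i, j):
    pi, pj = (p, q)[i - 1], (p, q)[j - 1]
    return comp(adj(pi), comp(u, pj))

def App(u, v):
    res = dict(ext(u, 2, 2))
    t = ext(u, 1, 2)
    for _ in range(N):
        res.update(comp(ext(u, 2, 1), comp(v, t)))
        t = comp(ext(u, 1, 1), comp(v, t))
        if not t:
            break
    return res

# sigma = pp(qq)* + pq(qp)* + qp(pq)* + qq(pp)*
pp, pq_, qp_, qq = comp(p, p), comp(p, q), comp(q, p), comp(q, q)
sigma = {**comp(pp, adj(qq)), **comp(pq_, adj(qp_)),
         **comp(qp_, adj(pq_)), **comp(qq, adj(pp))}

u, v = {0: 1}, {0: 2, 2: 0}
arg = {**comp(p, comp(u, adj(p))), **comp(q, comp(v, adj(q)))}      # pup* + qvq*
swapped = {**comp(p, comp(v, adj(p))), **comp(q, comp(u, adj(q)))}  # pvp* + quq*
assert App(sigma, arg) == swapped
```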
<br />
==== Distributivity ====<br />
We get distributivity by considering the operator:<br />
: <math>\delta = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math><br />
that is similarly shown to be in type <math>A\tens(B\tens C)\limp(A\tens B)\tens C</math> for any types <math>A</math>, <math>B</math> and <math>C</math>.<br />
<br />
<br />
==== Weak distributivity ====<br />
We can finally get weak distributivity thanks to the operators:<br />
: <math>\delta_1 = pppp^*q^* + ppqp^*q^*q^* + pqq^*q^*q^* + qpp^*p^*p^* + qqp q^*p^*p^* + qqq q^*p^*</math> and<br />
: <math>\delta_2 = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math>.<br />
<br />
Given three types <math>A</math>, <math>B</math> and <math>C</math>, one can show that:<br />
: <math>\delta_1</math> has type <math>((A\limp B)\tens C)\limp A\limp (B\tens C)</math> and<br />
: <math>\delta_2</math> has type <math>(A\tens(B\limp C))\limp (A\limp B)\limp C</math>.<br />
<br />
=== Execution formula, version 2: composition ===<br />
<br />
Let <math>A</math>, <math>B</math> and <math>C</math> be types and <math>u</math> and <math>v</math> be operators respectively in types <math>A\limp B</math> and <math>B\limp C</math>.<br />
<br />
As usual we will denote <math>u_{ij}</math> and <math>v_{ij}</math> the operators obtained by externalization of <math>u</math> and <math>v</math>, eg, <math>u_{11} = p^*up</math>, ...<br />
<br />
As <math>u</math> is in <math>A\limp B</math> we have that <math>\mathrm{App}(u, 0)=u_{22}\in B</math>; similarly as <math>v\in B\limp C</math>, thus <math>v\orth\in C\orth\limp B\orth</math>, we have <math>\mathrm{App}(v\orth, 0) = v_{11}\in B\orth</math>. Thus <math>u_{22}v_{11}</math> is nilpotent.<br />
<br />
We define the operator <math>\mathrm{Comp}(u, v)</math> by:<br />
: <math>\begin{align}<br />
\mathrm{Comp}(u, v) &= p(u_{11} + u_{12}\sum(v_{11}u_{22})^k\,v_{11}u_{21})p^*\\<br />
&+ p(u_{12}\sum(v_{11}u_{22})^k\,v_{12})q^*\\<br />
&+ q(v_{21}\sum(u_{22}v_{11})^k\,u_{21})p^*\\<br />
&+ q(v_{22} + v_{21}\sum(u_{22}v_{11})^k\,u_{22}v_{12})q^*<br />
\end{align}</math><br />
<br />
This is well defined since <math>u_{22}v_{11}</math> is nilpotent. As an example let us compute the composition of <math>u</math> and <math>\iota</math> in type <math>B\limp B</math>; recall that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math>, so we get:<br />
: <math><br />
\mathrm{Comp}(u, \iota) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^* = u<br />
</math><br />
A similar computation shows that <math>\mathrm{Comp}(\iota, v) = v</math> (we use <math>pp^* + qq^* = 1</math> here).<br />
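A concrete sketch of the composition formula in the partial-permutation representation, checking <math>\mathrm{Comp}(u, \iota) = u</math> (our encoding; <math>u</math> is an arbitrary small <math>p</math>-isometry):<br />

```python
def comp(f, g):
    return {n: f[g[n]] for n in g if g[n] in f}

def adj(f):
    return {m: n for n, m in f.items()}

N = 16
p = {n: 2 * n for n in range(N // 2)}
q = {n: 2 * n + 1 for n in range(N // 2)}

def ext(u, i, j):
    pi, pj = (p, q)[i - 1], (p, q)[j - 1]
    return comp(adj(pi), comp(u, pj))

def series(a, f, b):
    """Union over k of a f^k b; finite since f is nilpotent."""
    res, t = {}, dict(b)
    for _ in range(N):
        if not t:
            break
        res.update(comp(a, t))
        t = comp(f, t)
    return res

def Comp(u, v):
    u11, u12, u21, u22 = (ext(u, i, j) for i in (1, 2) for j in (1, 2))
    v11, v12, v21, v22 = (ext(v, i, j) for i in (1, 2) for j in (1, 2))
    c11 = {**u11, **series(u12, comp(v11, u22), comp(v11, u21))}
    c12 = series(u12, comp(v11, u22), v12)
    c21 = series(v21, comp(u22, v11), u21)
    c22 = {**v22, **series(v21, comp(u22, v11), comp(u22, v12))}
    return {**comp(p, comp(c11, adj(p))), **comp(p, comp(c12, adj(q))),
            **comp(q, comp(c21, adj(p))), **comp(q, comp(c22, adj(q)))}

iota = {**comp(p, adj(q)), **comp(q, adj(p))}
u = {0: 3, 3: 0, 5: 6, 6: 5}
assert Comp(u, iota) == u
```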
<br />
Coming back to the general case we claim that <math>\mathrm{Comp}(u, v)</math> is in <math>A\limp C</math>: let <math>a</math> be an operator in <math>A</math>. By computation we can check that:<br />
: <math>\mathrm{App}(\mathrm{Comp}(u, v), a) = \mathrm{App}(v, \mathrm{App}(u, a))</math>.<br />
Now since <math>u</math> is in <math>A\limp B</math>, <math>\mathrm{App}(u, a)</math> is in <math>B</math> and since <math>v</math> is in <math>B\limp C</math>, <math>\mathrm{App}(v, \mathrm{App}(u, a))</math> is in <math>C</math>.<br />
<br />
If we now consider a type <math>D</math> and an operator <math>w</math> in <math>C\limp D</math> then we have:<br />
: <math>\mathrm{Comp}(\mathrm{Comp}(u, v), w) = \mathrm{Comp}(u,<br />
\mathrm{Comp}(v, w))</math>.<br />
<br />
Putting together the results of this section we finally have:<br />
<br />
{{Theorem|<br />
Let GoI(H) be defined by:<br />
* objects are types, ''ie'' sets <math>A</math> of operators satisfying: <math>A\biorth = A</math>;<br />
* morphisms from <math>A</math> to <math>B</math> are operators in type <math>A\limp B</math>;<br />
* composition is given by the formula above.<br />
<br />
Then GoI(H) is a star-autonomous category.<br />
}}<br />
<br />
= The Geometry of Interaction as an abstract machine =<br />
<br />
= Notes and references =<br />
<br />
<references/></div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interactionGeometry of interaction2010-04-29T07:45:45Z<p>Laurent Regnier: corrections, precisions</p>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operator algebra: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math><ref>to be precise one should say from ''the space interpreting'' <math>A</math> to ''the space interpreting'' <math>B</math></ref>, and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'' that bears some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (the multiplicative and exponential fragment), which is enough to encode lambda-calculus. Since then Girard has proposed several improvements: firstly the extension to the additive connectives, known as ''Geometry of Interaction 3'', and more recently a complete reformulation using von Neumann algebras that allows one to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model, exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata that interact with each other through their common interface. Also the execution formula has rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories, following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentioned in [[coherent semantics]] under the name ''symmetric reducibility'' and that was first put to use in [[phase semantics]]. First set a general space <math>P</math> called the ''proof space'' because this is where the interpretations of proofs will live. Make sure that <math>P</math> is a (not necessarily commutative) monoid. In the case of GoI, the proof space is a subset of the space of bounded operators on <math>\ell^2</math>.<br />
<br />
Second define a particular subset of <math>P</math> that will be denoted by <math>\bot</math>; then derive a duality on <math>P</math>: for <math>u,v\in P</math>, <math>u</math> and <math>v</math> are dual<ref>In modern terms one says that <math>u</math> and <math>v</math> are ''polar''.</ref>, iff <math>uv\in\bot</math>.<br />
<br />
For the GoI, two dualities have proved to work; we will consider the first one: nilpotency, ''ie'', <math>\bot</math> is the set of nilpotent operators in <math>P</math>. Let us spell this out: two operators <math>u</math> and <math>v</math> are dual if there is a nonnegative integer <math>n</math> such that <math>(uv)^n = 0</math>. Note in particular that <math>uv\in\bot</math> iff <math>vu\in\bot</math>.<br />
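For instance, with operators represented by partial permutations (encoded as dicts, an illustrative choice anticipating the preliminaries below), nilpotency of <math>uv</math> and of <math>vu</math> can be checked side by side:<br />

```python
def comp(f, g):
    """(f o g)(n) = f(g(n)) for partial permutations encoded as dicts."""
    return {n: f[g[n]] for n in g if g[n] in f}

def nilpotency_degree(f, bound=10):
    """Least n with f^n = 0, or None if no such n <= bound."""
    g, n = dict(f), 1
    while g and n <= bound:
        g, n = comp(f, g), n + 1
    return n if not g else None

u, v = {0: 1, 2: 3}, {1: 2}
assert nilpotency_degree(comp(u, v)) == 2
assert nilpotency_degree(comp(v, u)) == 2   # uv nilpotent iff vu nilpotent
```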
<br />
When <math>X</math> is a subset of <math>P</math> define <math>X\orth</math> as the set of elements of <math>P</math> that are dual to all elements of <math>X</math>:<br />
: <math>X\orth = \{u\in P, \forall v\in X, uv\in\bot\}</math>.<br />
<br />
This construction has a few properties that we will use without mention in the sequel. Given two subsets <math>X</math> and <math>Y</math> of <math>P</math> we have:<br />
* if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;<br />
* <math>X\subset X\biorth</math>;<br />
* <math>X\triorth = X\orth</math>.<br />
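These properties can be brute-force checked on a toy version of <math>P</math>: the 34 partial injections of <math>\{0,1,2\}</math>, with <math>\bot</math> the nilpotent ones. This finite model is an illustrative assumption of ours, not the actual proof space:<br />

```python
from itertools import combinations, permutations

U = (0, 1, 2)

def comp(f, g):
    return {n: f[g[n]] for n in g if g[n] in f}

def nilpotent(f):
    g = dict(f)
    for _ in range(len(U)):   # degree of nilpotency is at most |U|
        g = comp(f, g)
    return not g

# All partial injections on U (the toy proof space).
P = [dict(zip(D, img))
     for k in range(len(U) + 1)
     for D in combinations(U, k)
     for img in permutations(U, k)]

def dual(X):
    return [u for u in P if all(nilpotent(comp(u, v)) for v in X)]

X = [{0: 1}, {1: 2}]
Xd = dual(X)
Xdd = dual(Xd)
assert all(u in Xdd for u in X)   # X is contained in its bidual
assert dual(Xdd) == Xd            # the triple dual equals the dual
```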
<br />
Last define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for all operator <math>v\in T\orth</math>, that is such that <math>u'v\in\bot</math> for all <math>u'\in T</math>, we have <math>uv\in\bot</math>.<br />
<br />
The real work<ref>The difficulty is to find the right duality that will make logical operations interpretable. General conditions that allow one to achieve this have been formulated by Hyland and Schalk thanks to their theory of ''double gluing''.</ref> is now to interpret logical operations, that is to associate a type to each formula, an operator to each proof and show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
We will denote by <math>H</math> the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> their ''scalar product'' is:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''orthogonal'' if their scalar product is null. We will say that two subspaces are ''disjoint'' when any two vectors taken in each subspace are orthogonal. Note that this notion is different from the set-theoretic one; in particular two disjoint subspaces always have exactly one vector in common: <math>0</math>.<br />
<br />
The ''norm'' of a vector is the square root of the scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical ''Hilbertian basis'' of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kronecker symbol: <math>\delta_{kn}=1</math> if <math>k=n</math>, <math>0</math> otherwise. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
An ''operator'' on <math>H</math> is a ''continuous'' linear map from <math>H</math> to <math>H</math><ref>Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.</ref>. The set of (bounded) operators is denoted by <math>\mathcal{B}(H)</math>.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are annihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel, ''ie'', the maximal subspace disjoint with the kernel:<br />
<br />
* <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
* <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
* <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>. Adjointness is well behaved w.r.t. composition of operators:<br />
: <math>(uv)^* = v^*u^*</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>0</math> (the projector on the null subspace) or <math>1</math>, that is an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is self-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^*u = u</math>; this condition entails that we also have <math>u^*uu^* = u^*</math>. As a consequence <math>u^*u</math> and <math>uu^*</math> are both projectors, called respectively the ''initial'' and the ''final'' projector of <math>u</math> because their (co)domains are respectively the domain and the codomain of <math>u</math>:<br />
* <math>\mathrm{Dom}(u^*u) = \mathrm{Codom}(u^*u) = \mathrm{Dom}(u)</math>;<br />
* <math>\mathrm{Dom}(uu^*) = \mathrm{Codom}(uu^*) = \mathrm{Codom}(u)</math>.<br />
<br />
The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math>, that is if <math>u^*u = 1</math>, we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries, the equation <math>uu^* + vv^* = 1</math> means that the codomains of <math>u</math> and <math>v</math> are disjoint and that their direct sum is <math>H</math>.<br />
<br />
=== Partial permutations and partial isometries ===<br />
<br />
We will now define our proof space which turns out to be the set of partial isometries acting as permutations on the canonical basis <math>(e_n)_{n\in\mathbb{N}}</math>.<br />
<br />
More precisely a ''partial permutation'' <math>\varphi</math> on <math>\mathbb{N}</math> is a one-to-one map from a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
* <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
* if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
* the codomain of <math>\varphi\circ\psi</math> is the image of the domain: <math>C_{\varphi\circ\psi} = \{\varphi(\psi(n)), n\in D_{\varphi\circ\psi}\}</math>.<br />
<br />
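These three clauses can be tried out concretely. In the following sketch (our own illustration, not part of the construction) a partial permutation with finite domain is encoded as a Python dict mapping each element of the domain to its image; composition is then a dict comprehension implementing exactly the clauses above.<br />

```python
def comp(phi, psi):
    """Composition phi∘psi of partial permutations encoded as dicts:
    n is in the domain iff n is in D_psi and psi(n) is in D_phi."""
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

phi = {0: 2, 1: 3}        # domain {0, 1}, codomain {2, 3}
psi = {2: 0, 5: 1, 7: 9}  # domain {2, 5, 7}, codomain {0, 1, 9}

rho = comp(phi, psi)
# 7 is dropped from the domain since psi(7) = 9 is not in D_phi
print(rho)                                  # {2: 2, 5: 3}
print(sorted(rho), sorted(rho.values()))    # domain [2, 5], codomain [2, 3]
```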
Partial permutations are well known to form an ''inverse monoid'', a structure that we now detail.<br />
<br />
A ''partial identity'' is a partial permutation <math>1_D</math> whose domain and codomain are both equal to a subset <math>D</math> on which <math>1_D</math> is the identity function. Partial identities are idempotent for composition.<br />
<br />
Among partial identities one finds the identity on the empty subset, that is the empty map, which we will denote by <math>0</math>, and the identity on <math>\mathbb{N}</math>, which we will denote by <math>1</math>. The latter is the neutral element for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and which satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
<br />
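In a dict-based encoding of finite partial permutations (an illustration only), the inverse is obtained by swapping keys and values, injectivity guaranteeing that no key is lost, and the two equations above become executable checks:<br />

```python
def comp(phi, psi):
    # composition of partial permutations encoded as dicts
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inv(phi):
    # inverse partial permutation: swap keys and values (phi is one-to-one)
    return {v: k for k, v in phi.items()}

phi = {0: 4, 1: 7, 3: 2}
assert comp(inv(phi), phi) == {n: n for n in phi}           # 1 on D_phi
assert comp(phi, inv(phi)) == {m: m for m in phi.values()}  # 1 on C_phi
print("inverse laws hold")
```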
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other terms if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined so that the definition of <math>u_\varphi</math> reads:<br />
: <math>u_\varphi(e_n) = e_{\varphi(n)}</math>.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. In particular if <math>\varphi</math> is <math>1_D</math> then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.<br />
<br />
{{Proposition|<br />
Let <math>\varphi</math> and <math>\psi</math> be two partial permutations. We have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
The adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the initial projector of <math>u_{\varphi}</math> (the projector on its domain) is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>.<br />
<br />
and the final projector of <math>u_\varphi</math> (the projector on its codomain) is:<br />
: <math>u_\varphi u^*_\varphi = u_{1_{C_\varphi}}</math>.<br />
<br />
Projectors generated by partial identities commute:<br />
: <math>u_\varphi u_\varphi^*u_\psi u_\psi^* = u_\psi u_\psi^*u_\varphi u_\varphi^*</math>.<br />
}}<br />
Note that this entails all the other commutations of projectors: <math>u^*_\varphi u_\varphi u_\psi u^*_\psi = u_\psi u^*_\psi u^*_\varphi u_\varphi</math> and <math>u^*_\varphi u_\varphi u^*_\psi u_\psi = u^*_\psi u_\psi u^*_\varphi u_\varphi</math>.<br />
<br />
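Since <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math> and <math>u_\varphi^* = u_{\varphi^{-1}}</math>, these commutations can be checked at the level of partial permutations. In a dict-based encoding (an illustration), a projector <math>u_\varphi u_\varphi^*</math> corresponds to the partial identity on the codomain of <math>\varphi</math>, and two partial identities compose, in either order, to the partial identity on the intersection of their supports.<br />

```python
def comp(f, g): return {n: f[g[n]] for n in g if g[n] in f}
def inv(f): return {v: k for k, v in f.items()}

phi = {0: 3, 1: 5, 2: 8}
psi = {9: 3, 4: 0, 6: 5}

# final projectors u u*: partial identities on the codomains
fin_phi = comp(phi, inv(phi))   # identity on {3, 5, 8}
fin_psi = comp(psi, inv(psi))   # identity on {3, 0, 5}

# both composites are the identity on the intersection {3, 5}
assert comp(fin_phi, fin_psi) == comp(fin_psi, fin_phi) == {3: 3, 5: 5}
print("projectors commute")
```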
{{Definition|<br />
We call ''<math>p</math>-isometry'' a partial isometry of the form <math>u_\varphi</math> where <math>\varphi</math> is a partial permutation on <math>\mathbb{N}</math>. The ''proof space'' <math>\mathcal{P}</math> is the set of all <math>p</math>-isometries.<br />
}}<br />
<br />
In particular note that <math>0</math> is a <math>p</math>-isometry. The set <math>\mathcal{P}</math> is a submonoid of <math>\mathcal{B}(H)</math> but it is not a subalgebra<ref><math>\mathcal{P}</math> is the normalizing groupoid of the maximal commutative subalgebra of <math>\mathcal{B}(H)</math> consisting of all operators ''diagonalizable'' in the canonical basis.</ref>: given <math>u,v\in\mathcal{P}</math> we don't in general have <math>u+v\in\mathcal{P}</math>. However we have:<br />
<br />
{{Proposition|<br />
Let <math>u, v\in\mathcal{P}</math>. Then <math>u+v\in\mathcal{P}</math> iff <math>u</math> and <math>v</math> have disjoint domains and disjoint codomains, that is:<br />
: <math>u+v\in\mathcal{P}</math> iff <math>uu^*vv^* = u^*uv^*v = 0</math>.<br />
}}<br />
<br />
{{Proof|<br />
Suppose for contradiction that <math>e_n</math> is in the domains of both <math>u</math> and <math>v</math>. There are integers <math>p</math> and <math>q</math> such that <math>u(e_n) = e_p</math> and <math>v(e_n) = e_q</math>, thus <math>(u+v)(e_n) = e_p + e_q</math> which is not a basis vector; therefore <math>u+v</math> is not a <math>p</math>-isometry. The case of codomains is symmetric, considering the adjoints <math>u^*</math> and <math>v^*</math>. Conversely if <math>u = u_\varphi</math> and <math>v = u_\psi</math> have disjoint domains and disjoint codomains then <math>u+v = u_\chi</math> where <math>\chi</math> is the partial permutation defined as the union of <math>\varphi</math> and <math>\psi</math>.<br />
}}<br />
<br />
As a corollary note that if <math>u+v=0</math> then <math>u=v=0</math>.<br />
<br />
=== From operators to matrices: internalization/externalization ===<br />
<br />
It will be convenient to view operators on <math>H</math> as acting on <math>H\oplus H</math>, and conversely. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\rightsquigarrow p(x)+q(y)</math> where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is<br />
satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are<br />
disjoint, thus we have <math>p^*q = q^*p = 0</math>. As the sum of their<br />
codomains is the full space <math>H</math> we also have <math>pp^* + qq^* = 1</math>.<br />
<br />
Note that we have chosen <math>p</math> and <math>q</math> in <math>\mathcal{P}</math>. However the choice is arbitrary: any two <math>p</math>-isometries with full domain and disjoint codomains would do the job.<br />
<br />
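These relations can be verified mechanically on a finite truncation of the basis (the truncation is an assumption made for illustration; on <math>\mathbb{N}</math> itself the same identities hold), with <math>p</math> and <math>q</math> encoded as the partial permutations <math>n\mapsto 2n</math> and <math>n\mapsto 2n+1</math>:<br />

```python
def comp(f, g): return {n: f[g[n]] for n in g if g[n] in f}
def inv(f): return {v: k for k, v in f.items()}

N = 8                                   # finite truncation of the basis
p = {n: 2 * n for n in range(N)}        # e_n -> e_{2n}
q = {n: 2 * n + 1 for n in range(N)}    # e_n -> e_{2n+1}
one = {n: n for n in range(N)}

assert comp(inv(p), p) == one           # p*p = 1: full domain
assert comp(inv(q), q) == one           # q*q = 1
assert comp(inv(p), q) == {}            # p*q = 0: disjoint codomains
assert comp(inv(q), p) == {}            # q*p = 0
# pp* + qq* = 1: the two final projectors tile the (truncated) basis
pp, qq = comp(p, inv(p)), comp(q, inv(q))
assert sorted({**pp, **qq}) == list(range(2 * N))
print("p/q relations hold")
```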
Given an operator <math>u</math> on <math>H</math> we may ''externalize'' it obtaining an operator <math>U</math> on <math>H\oplus H</math> defined by the matrix:<br />
: <math>U = \begin{pmatrix}<br />
u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}</math><br />
where the <math>u_{ij}</math>'s are given by:<br />
: <math>u_{11} = p^*up</math>;<br />
: <math>u_{12} = p^*uq</math>;<br />
: <math>u_{21} = q^*up</math>;<br />
: <math>u_{22} = q^*uq</math>.<br />
<br />
The <math>u_{ij}</math>'s are called the ''external components'' of <math>u</math>. The externalization is functorial in the sense that if <math>v</math> is another operator externalized as:<br />
: <math>V = \begin{pmatrix}<br />
v_{11} & v_{12}\\<br />
v_{21} & v_{22}<br />
\end{pmatrix} <br />
= \begin{pmatrix}<br />
p^*vp & p^*vq\\<br />
q^*vp & q^*vq<br />
\end{pmatrix}<br />
</math><br />
then the externalization of <math>uv</math> is <math>UV</math>.<br />
<br />
As <math>pp^* + qq^* = 1</math> we have:<br />
: <math>u = (pp^*+qq^*)u(pp^*+qq^*) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math><br />
which entails that externalization is reversible, its converse being called ''internalization''.<br />
<br />
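Encoding partial permutations as Python dicts (an illustration, not part of the construction), the round trip becomes executable: externalization extracts the four components by composing with <math>p</math>, <math>q</math> and their inverses, and internalization reassembles them into the original partial permutation.<br />

```python
def comp(f, g): return {n: f[g[n]] for n in g if g[n] in f}
def inv(f): return {v: k for k, v in f.items()}

N = 8
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}

u = {0: 3, 1: 6, 2: 5, 4: 0}            # a p-isometry, as a partial permutation

def ext(u, x, y):                        # component x* u y
    return comp(inv(x), comp(u, y))

u11, u12 = ext(u, p, p), ext(u, p, q)
u21, u22 = ext(u, q, p), ext(u, q, q)

# internalization: u = p u11 p* + p u12 q* + q u21 p* + q u22 q*
rebuilt = {}
for t in [comp(p, comp(u11, inv(p))), comp(p, comp(u12, inv(q))),
          comp(q, comp(u21, inv(p))), comp(q, comp(u22, inv(q)))]:
    assert not set(rebuilt) & set(t)     # the four terms have disjoint domains
    rebuilt.update(t)
assert rebuilt == u                      # externalization is reversible
print("round trip recovers u")
```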
If we suppose that <math>u</math> is a <math>p</math>-isometry then so are its components <math>u_{ij}</math>. Thus the formula above entails that the four terms of the sum have pairwise disjoint domains and pairwise disjoint codomains, from which we deduce:<br />
<br />
{{Proposition|<br />
If <math>u</math> is a <math>p</math>-isometry and <math>u_{ij}</math> are its external components then:<br />
* <math>u_{1j}</math> and <math>u_{2j}</math> have disjoint domains, that is <math>u_{1j}^*u_{1j}u_{2j}^*u_{2j} = 0</math> for <math>j=1,2</math>;<br />
* <math>u_{i1}</math> and <math>u_{i2}</math> have disjoint codomains, that is <math>u_{i1}u_{i1}^*u_{i2}u_{i2}^* = 0</math> for <math>i=1,2</math>.<br />
}}<br />
<br />
As an example of computation in <math>\mathcal{P}</math> let us check that the product of the final projectors of <math>pu_{11}p^*</math> and <math>pu_{12}q^*</math> is null:<br />
: <math>\begin{align}<br />
(pu_{11}p^*)(pu^*_{11}p^*)(pu_{12}q^*)(qu_{12}^*p^*)<br />
&= pu_{11}u_{11}^*u_{12}u_{12}^*p^*\\<br />
&= pp^*upp^*u^*pp^*uqq^*u^*pp^*\\<br />
&= pp^*u(pp^*)(u^*pp^*u)qq^*u^*pp^*\\<br />
&= pp^*u(u^*pp^*u)(pp^*)qq^*u^*pp^*\\<br />
&= pp^*uu^*pp^*u(pp^*)(qq^*)u^*pp^*\\<br />
&= 0<br />
\end{align}</math><br />
where we used the fact that all projectors in <math>\mathcal{P}</math> commute, which is in particular the case of <math>pp^*</math> and <math>u^*pp^*u</math>.<br />
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are <math>p</math>-isometries we say they are dual when <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators. A ''type'' is a subset of <math>\mathcal{P}</math> that is equal to its bidual. In particular <math>X\orth</math> is a type for any <math>X\subset\mathcal{P}</math>. We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
<br />
=== The tensor and the linear application ===<br />
<br />
If <math>u</math> and <math>v</math> are two <math>p</math>-isometries, summing them doesn't in general produce a <math>p</math>-isometry. However as <math>pup^*</math> and <math>qvq^*</math> have disjoint domains and disjoint codomains it is true that <math>pup^* + qvq^*</math> is a <math>p</math>-isometry. Given two types <math>A</math> and <math>B</math>, we thus define their ''tensor'' by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by bidual to make sure that we obtain a type.<br />
<br />
From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
{{Remark|<br />
This so-called tensor resembles a sum rather than a product. We will stick to this terminology though because it defines the interpretation of the tensor connective of linear logic.<br />
}}<br />
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we get:<br />
: <math>A\limp B = \{u\in\mathcal{P}\text{ s.t. } \forall v\in A, \forall w\in B\orth,\, u.(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
=== The identity ===<br />
<br />
Given a type <math>A</math> we want to find an operator <math>\iota</math> in type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. This is the case since <math>u</math> is in <math>A</math> and <math>v</math> in <math>A\orth</math>.<br />
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
<br />
=== The execution formula, version 1: application ===<br />
<br />
{{Definition|<br />
Let <math>u</math> and <math>v</math> be two operators; as above denote by <math>u_{ij}</math> the external components of <math>u</math>. If <math>u_{11}v</math> is nilpotent we define the ''application of <math>u</math> to <math>v</math>'' by:<br />
: <math>\mathrm{App}(u,v) = u_{22} + u_{21}v\sum_k(u_{11}v)^ku_{12}</math>.<br />
}}<br />
<br />
Note that the hypothesis that <math>u_{11}v</math> is nilpotent entails that the sum <math>\sum_k(u_{11}v)^k</math> is actually finite. It would be enough to assume that this sum converges. For simplicity we stick to the nilpotency condition, but we should mention that weak nilpotency would do as well.<br />
<br />
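In a dict-based encoding of partial permutations (an illustration only), the sum defining <math>\mathrm{App}(u,v)</math> can be accumulated term by term, the loop terminating precisely because <math>u_{11}v</math> is nilpotent:<br />

```python
def comp(f, g): return {n: f[g[n]] for n in g if g[n] in f}

def App(u11, u12, u21, u22, v):
    """App(u, v) = u22 + u21 v (sum over k of (u11 v)^k) u12."""
    out = dict(u22)
    chain = dict(u12)                     # (u11 v)^k u12, starting at k = 0
    while chain:                          # stops since u11∘v is nilpotent
        term = comp(u21, comp(v, chain))
        assert not set(out) & set(term)   # disjoint domains, as the theorem states
        out.update(term)
        chain = comp(u11, comp(v, chain))
    return out

# components of a p-isometry u with one feedback loop, and an argument v
u11, u12, u21, u22 = {0: 1}, {5: 0}, {1: 7}, {}
v = {0: 1}
print(App(u11, u12, u21, u22, v))         # {5: 7}
```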
{{Theorem|<br />
If <math>u</math> and <math>v</math> are <math>p</math>-isometries such that <math>u_{11}v</math> is nilpotent, then <math>\mathrm{App}(u,v)</math> is also a <math>p</math>-isometry.<br />
}}<br />
<br />
{{Proof|<br />
Let us write <math>E_k = u_{21}v(u_{11}v)^ku_{12}</math>. Recall that <math>u_{22}</math> and <math>u_{12}</math>, being external components of the <math>p</math>-isometry <math>u</math>, have disjoint domains. Thus it is also the case of <math>u_{22}</math> and <math>E_k</math>. Similarly <math>u_{22}</math> and <math>E_k</math> have disjoint codomains because <math>u_{22}</math> and <math>u_{21}</math> have disjoint codomains.<br />
<br />
Let now <math>k</math> and <math>l</math> be two integers such that <math>k>l</math> and let us compute for example the intersection of the codomains of <math>E_k</math> and <math>E_l</math>:<br />
: <math><br />
E_kE^*_kE_lE^*_l = (u_{21}v(u_{11}v)^ku_{12})(u^*_{12}(v^*u^*_{11})^kv^*u^*_{21})(u_{21}v(u_{11}v)^lu_{12})(u^*_{12}(v^*u^*_{11})^lv^*u_{21}^*)<br />
</math><br />
As <math>k>l</math> we may write <math>(v^*u_{11}^*)^k = (v^*u^*_{11})^{k-l-1}v^*u^*_{11}(v^*u^*_{11})^l</math>. Let us write <math>E = u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}</math> so that <math>E_kE^*_kE_lE^*_l = u_{21}v(u_{11}v)^ku_{12}u^*_{12}(v^*u^*_{11})^{k-l-1}v^*Eu^*_{12}(v^*u^*_{11})^lv^*u_{21}^*</math>. We have:<br />
: <math>\begin{align}<br />
E &= u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}\\<br />
&= (u^*_{11}u_{11}u^*_{11})(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}\\<br />
&= u^*_{11}(u_{11}u^*_{11})\bigl((v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^l\bigr)u_{12}\\<br />
&= u^*_{11}\bigl((v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^l\bigr)(u_{11}u^*_{11})u_{12}\\<br />
&= u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{11}u^*_{11}u_{12}\\<br />
&= 0<br />
\end{align}</math><br />
because <math>u_{11}</math> and <math>u_{12}</math> have disjoint codomains, thus <math>u^*_{11}u_{12} = 0</math>. <br />
<br />
Similarly we can show that <math>E_k</math> and <math>E_l</math> have disjoint domains. Therefore we have proved that all terms of the sum <math>\mathrm{App}(u,v)</math> have disjoint domains and disjoint codomains. Consequently <math>\mathrm{App}(u,v)</math> is a <math>p</math>-isometry.<br />
}}<br />
<br />
{{Theorem|<br />
Let <math>A</math> and <math>B</math> be two types and <math>u</math> a <math>p</math>-isometry. Then the two following conditions are equivalent:<br />
# <math>u\in A\limp B</math>;<br />
# for any <math>v\in A</math> we have:<br />
#* <math>u_{11}v</math> is nilpotent and<br />
#* <math>\mathrm{App}(u, v)\in B</math>.<br />
}}<br />
<br />
{{Proof|<br />
Let <math>v</math> and <math>w</math> be two <math>p</math>-isometries. If we compute<br />
: <math>(u.(pvp^* + qwq^*))^n = \bigl((pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*)(pvp^* + qwq^*)\bigr)^n</math><br />
we get a finite sum of monomial operators of the form:<br />
# <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math><br />
# <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{12}w(u_{22}w)^{i_m}q^*</math>,<br />
# <math>q(u_{22}w)^{i_0}u_{21}v(u_{11}v)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math> or<br />
# <math>q(u_{22}w)^{i_0}u_{21}v(u_{11}v)^{i_1}\dots u_{12}w(u_{22}w)^{i_m}q^*</math>,<br />
for all tuples of (nonnegative) integers <math>(i_0,\dots, i_m)</math> such that <math>i_0+\cdots+i_m+m = n</math>.<br />
<br />
Each of these monomials is a <math>p</math>-isometry. Furthermore they have disjoint domains and disjoint codomains because their sum is the <math>p</math>-isometry <math>(u.(pvp^* + qwq^*))^n</math>. This entails that <math>(u.(pvp^* + qwq^*))^n = 0</math> iff all these monomials are null.<br />
<br />
Suppose <math>u_{11}v</math> is nilpotent and consider:<br />
: <math>\bigl(\mathrm{App}(u,v)w\bigr)^n = \biggl(\bigl(u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12}\bigr)w\biggr)^n</math>.<br />
Developing we get a finite sum of monomials of the form:<br />
: 5. <math>(u_{22}w)^{l_0}u_{21}v(u_{11}v)^{k_1}u_{12}w(u_{22}w)^{l_1}\dots u_{21}v(u_{11}v)^{k_m}u_{12}w(u_{22}w)^{l_m}</math><br />
for all tuples <math>(l_0, k_1, l_1,\dots, k_m, l_m)</math> such that <math>l_0+\cdots+l_m + m = n</math> and <math>k_i</math> is less than the degree of nilpotency of <math>u_{11}v</math> for all <math>i</math>.<br />
<br />
Again, as these monomials are <math>p</math>-isometries and their sum is the <math>p</math>-isometry <math>(\mathrm{App}(u,v)w)^n</math>, they have pairwise disjoint domains and pairwise disjoint codomains. Note that each of these monomials is equal to <math>q^*Mq</math> where <math>M</math> is a monomial of type 4 above.<br />
<br />
As before we thus have that <math>\bigl(\mathrm{App}(u,v)w\bigr)^n = 0</math> iff all monomials of type 5 are null.<br />
<br />
Suppose now that <math>u\in A\limp B</math> and <math>v\in A</math>. Then, since <math>0\in B\orth</math> (<math>0</math> belongs to any type), <math>u.(pvp^*) = pu_{11}vp^* + qu_{21}vp^*</math> is nilpotent. As <math>(u.(pvp^*))^{n+1} = p(u_{11}v)^{n+1}p^* + qu_{21}v(u_{11}v)^{n}p^*</math>, this entails that <math>u_{11}v</math> is nilpotent.<br />
<br />
Suppose further that <math>w\in B\orth</math>. Then <math>u.(pvp^*+qwq^*)</math> is nilpotent, thus there is an <math>N</math> such that <math>(u.(pvp^* + qwq^*))^n=0</math> for any <math>n\geq N</math>. This entails that all monomials of type 1 to 4 are null. Therefore all monomials appearing in the development of <math>(\mathrm{App}(u,v)w)^N</math> are null, which proves that <math>\mathrm{App}(u,v)w</math> is nilpotent. Thus <math>\mathrm{App}(u,v)\in B</math>.<br />
<br />
Conversely suppose for any <math>v\in A</math> and <math>w\in B\orth</math>, <math>u_{11}v</math> and <math>\mathrm{App}(u,v)w</math> are nilpotent. Let <math>P</math> and <math>N</math> be their respective degrees of nilpotency and put <math>n=N(P+1)+N</math>. Then we claim that all monomials of type 1 to 4 appearing in the development of <math>(u.(pvp^*+qwq^*))^n</math> are null.<br />
<br />
Consider for example a monomial of type 1:<br />
: <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math><br />
with <math>i_0+\cdots+i_m + m = n</math>. Note that <math>m</math> must be even.<br />
<br />
If <math>i_{2k}\geq P</math> for some <math>0\leq k\leq m/2</math> then <math>(u_{11}v)^{i_{2k}}=0</math> thus our monomial is null. Otherwise if <math>i_{2k}<P</math> for all <math>k</math> we have:<br />
: <math>i_1+i_3+\cdots +i_{m-1} + m/2 = n - m/2 - (i_0+i_2+\cdots +i_m)</math><br />
thus:<br />
: <math>i_1+i_3+\cdots +i_{m-1} + m/2\geq n - m/2 - (1+m/2)P</math>.<br />
Now if <math>m/2\geq N</math> then <math>i_1+\cdots+i_{m-1}+m/2 \geq N</math>. Otherwise <math>1+m/2\leq N</math> thus<br />
: <math>i_1+i_3+\cdots +i_{m-1} + m/2\geq n - N - NP = N</math>.<br />
Since <math>N</math> is the degree of nilpotency of <math>\mathrm{App}(u,v)w</math> we have that the monomial:<br />
: <math>(u_{22}w)^{i_1}u_{21}v(u_{11}v)^{i_2}u_{12}w\dots(u_{11}v)^{i_{m-2}}u_{12}w(u_{22}w)^{i_{m-1}}</math><br />
is null, thus also the monomial of type 1 we started with.<br />
}}<br />
<br />
{{Corollary|<br />
If <math>A</math> and <math>B</math> are types then we have:<br />
: <math>A\limp B = \{u\in\mathcal{P} \text{ such that }\forall v\in A: u_{11}v\in\bot\text{ and } \mathrm{App}(u, v)\in B\}</math>.<br />
}}<br />
<br />
As an example if we compute the application of the interpretation of the identity <math>\iota</math> in type <math>A\limp A</math> to the operator <math>v\in A</math> then we have:<br />
: <math>\mathrm{App}(\iota, v) = \iota_{22} + \iota_{21}v\sum(\iota_{11}v)^k\iota_{12}</math>.<br />
Now recall that <math>\iota = pq^* + qp^*</math> so that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math> and we thus get:<br />
: <math>\mathrm{App}(\iota, v) = v</math><br />
as expected.<br />
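This computation can be replayed with partial permutations encoded as dicts (our illustration): the components of <math>\iota</math> are <math>\iota_{11}=\iota_{22}=0</math> and <math>\iota_{12}=\iota_{21}=1</math>, truncated here to a finite basis, and the application loop stops after its first round.<br />

```python
def comp(f, g): return {n: f[g[n]] for n in g if g[n] in f}

def App(u11, u12, u21, u22, v):
    # App(u, v) = u22 + u21 v (sum over k of (u11 v)^k) u12
    out, chain = dict(u22), dict(u12)
    while chain:
        out.update(comp(u21, comp(v, chain)))
        chain = comp(u11, comp(v, chain))
    return out

N = 8
one = {n: n for n in range(N)}          # identity on the truncation
i11, i12, i21, i22 = {}, one, one, {}   # components of iota = pq* + qp*

v = {0: 3, 2: 1, 5: 5}                  # any partial permutation on the truncation
assert App(i11, i12, i21, i22, v) == v  # App(iota, v) = v
print("identity acts as expected")
```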
<br />
=== The tensor rule ===<br />
<br />
Let now <math>A, A', B</math> and <math>B'</math> be types and consider two operators <math>u</math> and <math>u'</math> respectively in <math>A\limp B</math> and <math>A'\limp B'</math>. We define an operator denoted by <math>u\tens u'</math> by:<br />
: <math>\begin{align}<br />
u\tens u' &= ppp^*upp^*p^* + qpq^*upp^*p^* + ppp^*uqp^*q^* + qpq^*uqp^*q^*\\<br />
&+ pqp^*u'pq^*p^* + qqq^*u'pq^*p^* + pqp^*u'qq^*q^* + qqq^*u'qq^*q^*<br />
\end{align}</math><br />
<br />
Once again the notation is motivated by linear logic syntax and is contradictory with linear algebra practice since what we denote by <math>u\tens u'</math> actually is the internalization of the direct sum <math>u\oplus u'</math>.<br />
<br />
Indeed if we think of <math>u</math> and <math>u'</math> as the internalizations of the matrices:<br />
: <math><br />
\begin{pmatrix}u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}<br />
</math> and <math><br />
\begin{pmatrix}u'_{11} & u'_{12}\\<br />
u'_{21} & u'_{22}<br />
\end{pmatrix}</math><br />
then we may write:<br />
: <math>\begin{align}<br />
u\tens u' &= ppu_{11}p^*p^* + qpu_{21}p^*p^* + ppu_{12}p^*q^* + qpu_{22}p^*q^*\\<br />
&+ pqu'_{11}q^*p^* + qqu'_{21}q^*p^* + pqu'_{12}q^*q^* + qqu'_{22}q^*q^*<br />
\end{align}</math><br />
<br />
Thus the components of <math>u\tens u'</math> are given by:<br />
: <math>(u\tens u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math>.<br />
and we see that <math>u\tens u'</math> is actually the internalization of the matrix:<br />
: <math><br />
\begin{pmatrix}<br />
u_{11} & 0 & u_{12} & 0 \\<br />
0 & u'_{11} & 0 & u'_{12} \\<br />
u_{21} & 0 & u_{22} & 0 \\<br />
0 & u'_{21} & 0 & u'_{22} \\<br />
\end{pmatrix}<br />
</math><br />
<br />
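Componentwise, <math>(u\tens u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math> says that even basis vectors carry <math>u</math> while odd ones carry <math>u'</math>; with partial permutations encoded as dicts (illustration only) this reads:<br />

```python
def comp(f, g): return {n: f[g[n]] for n in g if g[n] in f}
def inv(f): return {v: k for k, v in f.items()}

N = 8
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}

def tensor(u, u_):
    """p u p* + q u' q*: u acts on even basis vectors, u' on odd ones."""
    return {**comp(p, comp(u, inv(p))), **comp(q, comp(u_, inv(q)))}

# u sends 0->2 and 1->0; u' sends 0->1; interleaved on evens/odds:
print(tensor({0: 2, 1: 0}, {0: 1}))   # {0: 4, 2: 0, 1: 3}
```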
We now show that if we suppose <math>u</math> and <math>u'</math> are in types <math>A\limp B</math> and <math>A'\limp B'</math>, then <math>u\tens u'</math> is in <math>A\tens A'\limp B\tens B'</math>. For this we consider <math>v</math> and <math>v'</math> respectively in <math>A</math> and <math>A'</math>, so that <math>pvp^* + qv'q^*</math> is in <math>A\tens A'</math>, and we show that <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)\in B\tens B'</math>.<br />
<br />
Since <math>u</math> and <math>u'</math> are in <math>A\limp B</math> and <math>A'\limp B'</math> we have that <math>\mathrm{App}(u, v)</math> and <math>\mathrm{App}(u', v')</math> are respectively in <math>B</math> and <math>B'</math>, thus:<br />
: <math>p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^* \in B\tens B'</math>.<br />
<br />
We know that both <math>u_{11}v</math> and <math>u'_{11}v'</math> are nilpotent. But we have:<br />
: <math>\begin{align}<br />
\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^n<br />
&= \bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^n\\<br />
&= (pu_{11}vp^* + qu'_{11}v'q^*)^n\\<br />
&= p(u_{11}v)^np^* + q(u'_{11}v')^nq^*<br />
\end{align}</math><br />
<br />
Therefore <math>(u\tens u')_{11}(pvp^* + qv'q^*)</math> is nilpotent. So we can compute <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)</math>:<br />
: <math>\begin{align}<br />
&\mathrm{App}(u\tens u', pvp^* + qv'q^*)\\<br />
&= (u\tens u')_{22} + (u\tens u')_{21}(pvp^* + qv'q^*)\sum\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^k(u\tens u')_{12}\\<br />
&= pu_{22}p^* + qu'_{22}q^* + (pu_{21}p^* + qu'_{21}q^*)(pvp^* + qv'q^*)\sum\bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^k(pu_{12}p^* + qu'_{12}q^*)\\<br />
&= p\bigl(u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}\bigr)p^* + q\bigl(u'_{22} + u'_{21}v'\sum(u'_{11}v')^ku'_{12}\bigr)q^*\\<br />
&= p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^*<br />
\end{align}</math><br />
thus lives in <math>B\tens B'</math>.<br />
<br />
=== Other monoidal constructions ===<br />
<br />
==== Contraposition ====<br />
<br />
Let <math>A</math> and <math>B</math> be some types; we have:<br />
: <math>A\limp B = A\orth\limpinv B\orth</math><br />
<br />
Indeed, <math>u\in A\limp B</math> means that for any <math>v</math> and <math>w</math> in respectively <math>A</math> and <math>B\orth</math> we have <math>u.(pvp^* + qwq^*)\in\bot</math> which is exactly the definition of <math>A\orth\limpinv B\orth</math>.<br />
<br />
We will denote <math>u\orth</math> the operator:<br />
: <math>u\orth = pu_{22}p^* + pu_{21}q^* + qu_{12}p^* + qu_{11}q^*</math><br />
where <math>u_{ij}</math> is given by externalization. Therefore the externalization of <math>u\orth</math> is:<br />
: <math>(u\orth)_{ij} = u_{\bar i\,\bar j}</math> where <math>\bar .</math> is defined by <math>\bar1 = 2, \bar2 = 1</math>.<br />
From this we deduce that <math>u\orth\in B\orth\limp A\orth</math> and that <math>(u\orth)\orth = u</math>.<br />
<br />
==== Commutativity ====<br />
Let <math>\sigma</math> be the operator:<br />
: <math>\sigma = ppq^*q^* +pqp^*q^* + qpq^*p^* + qqp^*p^*</math>.<br />
One can check that <math>\sigma</math> is the internalization of the operator <math>S</math> on <math>H\oplus H\oplus H\oplus H</math> defined by: <math>S(x_1\oplus x_2\oplus x_3\oplus x_4) = x_4\oplus x_3\oplus x_2\oplus x_1</math>. In particular the components of <math>\sigma</math> are:<br />
: <math>\sigma_{11} = \sigma_{22} = 0</math>;<br />
: <math>\sigma_{12} = \sigma_{21} = pq^* + qp^*</math>.<br />
<br />
Let <math>A</math> and <math>B</math> be types and <math>u</math> and <math>v</math> be operators in <math>A</math> and <math>B</math>. Then <math>pup^* + qvq^*</math> is in <math>A\tens B</math> and as <math>\sigma_{11}.(pup^* + qvq^*) = 0</math> we may compute:<br />
: <math>\begin{align}<br />
\mathrm{App}(\sigma, pup^* + qvq^*) <br />
&= \sigma_{22} + \sigma_{21}(pup^* + qvq^*)\sum(\sigma_{11}(pup^* + qvq^*))^k\sigma_{12}\\<br />
&= (pq^* + qp^*)(pup^* + qvq^*)(pq^* + qp^*)\\<br />
&= pvp^* + quq^*<br />
\end{align}</math><br />
But <math>pvp^* + quq^*\in B\tens A</math>, thus we have shown that:<br />
: <math>\sigma\in (A\tens B) \limp (B\tens A)</math>.<br />
<br />
==== Distributivity ====<br />
We get distributivity by considering the operator:<br />
: <math>\delta = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math><br />
that is similarly shown to be in type <math>A\tens(B\tens C)\limp(A\tens B)\tens C</math> for any types <math>A</math>, <math>B</math> and <math>C</math>.<br />
<br />
<br />
==== Weak distributivity ====<br />
We can finally get weak distributivity thanks to the operators:<br />
: <math>\delta_1 = pppp^*q^* + ppqp^*q^*q^* + pqq^*q^*q^* + qpp^*p^*p^* + qqp q^*p^*p^* + qqq q^*p^*</math> and<br />
: <math>\delta_2 = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math>.<br />
<br />
Given three types <math>A</math>, <math>B</math> and <math>C</math> then one can show that:<br />
: <math>\delta_1</math> has type <math>((A\limp B)\tens C)\limp A\limp (B\tens C)</math> and<br />
: <math>\delta_2</math> has type <math>(A\tens(B\limp C))\limp (A\limp B)\limp C</math>.<br />
<br />
=== Execution formula, version 2: composition ===<br />
<br />
Let <math>A</math>, <math>B</math> and <math>C</math> be types and <math>u</math> and <math>v</math> be operators respectively in types <math>A\limp B</math> and <math>B\limp C</math>.<br />
<br />
As usual we will denote by <math>u_{ij}</math> and <math>v_{ij}</math> the operators obtained by externalization of <math>u</math> and <math>v</math>, ''e.g.'', <math>u_{11} = p^*up</math>, ...<br />
<br />
As <math>u</math> is in <math>A\limp B</math> we have that <math>\mathrm{App}(u, 0)=u_{22}\in B</math>; similarly as <math>v\in B\limp C</math>, thus <math>v\orth\in C\orth\limp B\orth</math>, we have <math>\mathrm{App}(v\orth, 0) = v_{11}\in B\orth</math>. Thus <math>u_{22}v_{11}</math> is nilpotent.<br />
<br />
We define the operator <math>\mathrm{Comp}(u, v)</math> by:<br />
: <math>\begin{align}<br />
\mathrm{Comp}(u, v) &= p(u_{11} + u_{12}\sum(v_{11}u_{22})^k\,v_{11}u_{21})p^*\\<br />
&+ p(u_{12}\sum(v_{11}u_{22})^k\,v_{12})q^*\\<br />
&+ q(v_{21}\sum(u_{22}v_{11})^k\,u_{21})p^*\\<br />
&+ q(v_{22} + v_{21}\sum(u_{22}v_{11})^k\,u_{22}v_{12})q^*<br />
\end{align}</math><br />
<br />
This is well defined since <math>u_{22}v_{11}</math> is nilpotent. As an example let us compute the composition of <math>u</math> and <math>\iota</math> in type <math>B\limp B</math>; recall that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math>, so we get:<br />
: <math><br />
\mathrm{Comp}(u, \iota) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^* = u<br />
</math><br />
A similar computation shows that <math>\mathrm{Comp}(\iota, v) = v</math> (we use <math>pp^* + qq^* = 1</math> here).<br />
<br />
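The composition formula and the identity laws can be run with partial permutations encoded as dicts (our illustration), operators being given by their four external components:<br />

```python
def comp(f, g): return {n: f[g[n]] for n in g if g[n] in f}

def star(f, x):
    """Sum over k of f^k ∘ x, as a union; f is assumed nilpotent."""
    out, chain = {}, dict(x)
    while chain:
        out.update(chain)
        chain = comp(f, chain)
    return out

def Comp(u, v):
    """Composition of u and v, given by their external components."""
    u11, u12, u21, u22 = u
    v11, v12, v21, v22 = v
    c11 = {**u11, **comp(u12, star(comp(v11, u22), comp(v11, u21)))}
    c12 = comp(u12, star(comp(v11, u22), v12))
    c21 = comp(v21, star(comp(u22, v11), u21))
    c22 = {**v22, **comp(v21, star(comp(u22, v11), comp(u22, v12)))}
    return (c11, c12, c21, c22)

N = 8
one = {n: n for n in range(N)}
iota = ({}, one, one, {})               # iota_11 = iota_22 = 0, iota_12 = iota_21 = 1

u = ({2: 0}, {0: 3}, {0: 1, 1: 2}, {})  # a p-isometry on the truncation
assert Comp(u, iota) == u               # the computation carried out above
assert Comp(iota, u) == u
print("identity laws hold")
```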
Coming back to the general case we claim that <math>\mathrm{Comp}(u, v)</math> is in <math>A\limp C</math>: let <math>a</math> be an operator in <math>A</math>. By computation we can check that:<br />
: <math>\mathrm{App}(\mathrm{Comp}(u, v), a) = \mathrm{App}(v, \mathrm{App}(u, a))</math>.<br />
Now since <math>u</math> is in <math>A\limp B</math>, <math>\mathrm{App}(u, a)</math> is in <math>B</math> and since <math>v</math> is in <math>B\limp C</math>, <math>\mathrm{App}(v, \mathrm{App}(u, a))</math> is in <math>C</math>.<br />
<br />
If we now consider a type <math>D</math> and an operator <math>w</math> in <math>C\limp D</math> then we have:<br />
: <math>\mathrm{Comp}(\mathrm{Comp}(u, v), w) = \mathrm{Comp}(u,<br />
\mathrm{Comp}(v, w))</math>.<br />
<br />
Putting together the results of this section we finally have:<br />
{{Theorem|<br />
Let GoI(H) be defined by:<br />
* objects are types, ''i.e.'', sets <math>A</math> of operators satisfying <math>A\biorth = A</math>;<br />
* morphisms from <math>A</math> to <math>B</math> are operators in type <math>A\limp B</math>;<br />
* composition is given by the formula above.<br />
<br />
Then GoI(H) is a star-autonomous category.<br />
}}<br />
<br />
= The Geometry of Interaction as an abstract machine =</div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interactionGeometry of interaction2010-04-28T21:28:19Z<p>Laurent Regnier: Execution formula : false assertion corrected</p>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operators algebra: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math><ref>to be precise one should say from ''the space interpreting'' <math>A</math> to ''the space interpreting'' <math>B</math></ref>, and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'' that bears some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (Multiplicative and Exponential fragment) which is enough to encode lambda-calculus. Since then Girard proposed several improvements: firstly the extension to the additive connectives, known as ''Geometry of Interaction 3'', and more recently a complete reformulation using von Neumann algebras that makes it possible to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model, exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata that interact with each other through their common interface. The execution formula has also rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories, following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentioned in [[coherent semantics]] under the name ''symmetric reducibility'', and that was first put to use in [[phase semantics]]. First set a general space <math>P</math> called the ''proof space'' because this is where the interpretations of proofs will live. Make sure that <math>P</math> is a (not necessarily commutative) monoid. In the case of GoI, the proof space is a subset of the space of bounded operators on <math>\ell^2</math>.<br />
<br />
Second define a particular subset of <math>P</math> that will be denoted by <math>\bot</math>; then derive a duality on <math>P</math>: for <math>u,v\in P</math>, <math>u</math> and <math>v</math> are dual<ref>In modern terms one says that <math>u</math> and <math>v</math> are ''polar''.</ref> iff <math>uv\in\bot</math>.<br />
<br />
For the GoI, two dualities have proved to work; we will consider the first one: nilpotency, ''ie'', <math>\bot</math> is the set of nilpotent operators in <math>P</math>. Let us spell this out: two operators <math>u</math> and <math>v</math> are dual if there is a nonnegative integer <math>n</math> such that <math>(uv)^n = 0</math>. Note in particular that <math>uv\in\bot</math> iff <math>vu\in\bot</math>: if <math>(uv)^n = 0</math> then <math>(vu)^{n+1} = v(uv)^nu = 0</math>.<br />
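Nilpotency of a product is easy to test concretely on the partial permutations introduced below. As an illustration of ours (not part of the original article), here is a small Python sketch in which partial maps on the integers, encoded as dicts, stand in for the operators; `nilpotent` checks the condition only up to a chosen bound:

```python
# Hypothetical illustration: partial permutations on N modeled as dicts.
def compose(phi, psi):
    """phi o psi: defined on n when psi(n) is defined and psi(n) lies in dom(phi)."""
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def nilpotent(phi, bound=100):
    """True if some power of phi (up to `bound`) is the empty map."""
    power = phi
    for _ in range(bound):
        if not power:
            return True
        power = compose(phi, power)
    return False

u = {0: 1}   # sends e_0 to e_1
v = {1: 2}   # sends e_1 to e_2
assert nilpotent(compose(u, v)) and nilpotent(compose(v, u))   # uv dual iff vu dual
assert not nilpotent({0: 0})   # a fixed point is never nilpotent
```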
<br />
When <math>X</math> is a subset of <math>P</math> define <math>X\orth</math> as the set of elements of <math>P</math> that are dual to all elements of <math>X</math>:<br />
: <math>X\orth = \{u\in P, \forall v\in X, uv\in\bot\}</math>.<br />
<br />
This construction has a few properties that we will use without mention in the sequel. Given two subsets <math>X</math> and <math>Y</math> of <math>P</math> we have:<br />
* if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;<br />
* <math>X\subset X\biorth</math>;<br />
* <math>X\triorth = X\orth</math>.<br />
<br />
Last define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for all operator <math>v\in T\orth</math>, that is such that <math>u'v\in\bot</math> for all <math>u'\in T</math>, we have <math>uv\in\bot</math>.<br />
<br />
The real work<ref>The difficulty is to find the right duality that will make logical operations interpretable. General conditions that allow one to achieve this have been formulated by Hyland and Schalk thanks to their theory of ''double gluing''.</ref> is now to interpret logical operations, that is to associate a type to each formula and an object to each proof, and to show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
We begin with a brief tour of the operations in Hilbert spaces that we use. In this article <math>H</math> will stand for the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> we denote by <math>\langle x,y\rangle</math> their scalar product:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''orthogonal'' if their scalar product is null. We will say that two subspaces are ''disjoint'' when their vectors are pairwise orthogonal; this terminology is slightly misleading as disjoint subspaces always have <math>0</math> in common.<br />
<br />
The ''norm'' of a vector is the square root of the scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical Hilbert basis of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kronecker symbol: <math>\delta_{kn}=1</math> if <math>k=n</math>, <math>0</math> otherwise. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
An ''operator'' on <math>H</math> is a ''continuous'' linear map from <math>H</math> to <math>H</math>. Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.<br />
<br />
The set of (bounded) operators is denoted by <math>\mathcal{B}(H)</math>.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are annihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel, ''ie'', the maximal subspace disjoint from the kernel:<br />
<br />
* <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
* <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
* <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
The kernel and the domain are closed subspaces of <math>H</math>; the range of a bounded operator need not be closed in general, but it is closed for the partial isometries considered below.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>. Adjointness is well behaved w.r.t. composition of operators:<br />
: <math>(uv)^* = v^*u^*</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>0</math> (the projector on the null subspace) or <math>1</math>, that is an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is self-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^* u = u</math>; this condition entails that we also have <math>u^*uu^* = u^*</math>. As a consequence <math>u^*u</math> and <math>uu^*</math> are both projectors, called respectively the ''initial'' and the ''final'' projector of <math>u</math> because their codomains are respectively the domain and the codomain of <math>u</math>. The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math> that is if <math>u^* u = 1</math> we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries, the equation <math>uu^* + vv^* = 1</math> means that the codomains of <math>u</math> and <math>v</math> are disjoint and that their direct sum is <math>H</math>.<br />
<br />
=== Partial permutations and partial isometries ===<br />
<br />
We will now define our proof space which turns out to be the set of partial isometries acting as permutations on a fixed basis of <math>H</math>.<br />
<br />
More precisely a ''partial permutation'' <math>\varphi</math> on <math>\mathbb{N}</math> is a function defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> which is one-to-one onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
* <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
* if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
* the codomain of <math>\varphi\circ\psi</math> is the image of the domain.<br />
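The composition rules above can be phrased directly on finite dictionaries; the following sketch is ours, not part of the original article:

```python
# A concrete sketch: a partial permutation is a dict from its domain to its codomain.
def compose(phi, psi):
    """phi o psi: defined on n when psi(n) is defined and psi(n) lies in dom(phi)."""
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

phi = {0: 3, 1: 4}          # domain {0, 1}, codomain {3, 4}
psi = {2: 0, 5: 1, 6: 9}    # 6 is sent outside the domain of phi
assert compose(phi, psi) == {2: 3, 5: 4}
assert set(compose(phi, psi).values()) == {3, 4}   # codomain = image of the domain
```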
<br />
Partial permutations form an ''inverse monoid'', a structure that we now detail.<br />
<br />
A ''partial identity'' is a partial permutation <math>1_D</math> whose domain and codomain are both equal to a subset <math>D</math> on which <math>1_D</math> is the identity function. Partial identities are idempotent for composition.<br />
<br />
Among partial identities one finds the identity on the empty subset, that is the empty map, which we will denote by <math>0</math>, and the identity on <math>\mathbb{N}</math>, which we will denote by <math>1</math>. This latter partial permutation is the neutral element for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math>, whose codomain is <math>C_{\varphi^{-1}} = D_{\varphi}</math>, and which satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
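These two identities can be checked mechanically on the dictionary representation (again a sketch of ours, not part of the original text):

```python
def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    # injectivity of a partial permutation makes the transposed dict a function
    return {m: n for n, m in phi.items()}

def partial_identity(D):
    return {n: n for n in D}

phi = {0: 3, 1: 4, 7: 2}
assert compose(inverse(phi), phi) == partial_identity(phi.keys())    # 1_{D_phi}
assert compose(phi, inverse(phi)) == partial_identity(phi.values())  # 1_{C_phi}
```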
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other terms if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined, so that we may shorten the definition of <math>u_\varphi</math> into:<br />
: <math>u_\varphi(e_n) = e_{\varphi(n)}</math>.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. As a particular case if <math>\varphi</math> is <math>1_D</math> the partial identity on <math>D</math> then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.<br />
<br />
{{Proposition|<br />
Let <math>\varphi</math> and <math>\psi</math> be two partial permutations. We have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
The adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the initial projector of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math><br />
<br />
and the final projector of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u^*_\varphi = u_{1_{C_\varphi}}</math>.<br />
<br />
Projectors generated by partial identities commute; in particular we have:<br />
: <math>u_\varphi u_\varphi^*u_\psi u_\psi^* = u_\psi u_\psi^*u_\varphi u_\varphi^*</math>.<br />
}}<br />
<br />
{{Definition|<br />
We call ''<math>p</math>-isometry'' a partial isometry of the form <math>u_\varphi</math> where <math>\varphi</math> is a partial permutation on <math>\mathbb{N}</math>. The ''proof space'' <math>\mathcal{P}</math> is the set of all <math>p</math>-isometries.<br />
}}<br />
<br />
In particular note that <math>0</math> is a <math>p</math>-isometry. The set <math>\mathcal{P}</math> is a submonoid of <math>\mathcal{B}(H)</math> but it is not a subalgebra<ref><math>\mathcal{P}</math> is the normalizing groupoid of the maximal commutative subalgebra of <math>\mathcal{B}(H)</math> consisting of all operators ''diagonalizable'' in the canonical basis.</ref>. In general given <math>u,v\in\mathcal{P}</math> we don't necessarily have <math>u+v\in\mathcal{P}</math>. However we have:<br />
<br />
{{Proposition|<br />
Let <math>u, v\in\mathcal{P}</math>. Then <math>u+v\in\mathcal{P}</math> iff <math>u</math> and <math>v</math> have disjoint domains and disjoint codomains, that is:<br />
: <math>u+v\in\mathcal{P}</math> iff <math>uu^*vv^* = u^*uv^*v = 0</math>.<br />
}}<br />
{{Proof|Suppose for contradiction that <math>e_n</math> is in the domain of both <math>u</math> and <math>v</math>. As <math>u</math> and <math>v</math> are <math>p</math>-isometries there are integers <math>p</math> and <math>q</math> such that <math>u(e_n) = e_p</math> and <math>v(e_n) = e_q</math>, thus <math>(u+v)(e_n) = e_p + e_q</math> which is not a basis vector; therefore <math>u+v</math> is not a <math>p</math>-isometry.}}<br />
<br />
As a corollary note that if <math>u+v=0</math> then <math>u=v=0</math>.<br />
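The disjointness criterion has a direct counterpart on partial permutations: the union of two dicts is again a partial permutation exactly when both the domains and the codomains are disjoint. A small sketch (our illustration, not from the article):

```python
def try_sum(u, v):
    """Return u + v as a partial permutation, or None when it is not one."""
    if set(u) & set(v):                      # shared domain point: u+v would send it
        return None                          # to a sum of two basis vectors
    if set(u.values()) & set(v.values()):    # shared codomain point: not injective
        return None
    return {**u, **v}

assert try_sum({0: 1}, {2: 3}) == {0: 1, 2: 3}
assert try_sum({0: 1}, {0: 3}) is None      # e_0 would go to e_1 + e_3
assert try_sum({0: 1}, {2: 1}) is None      # e_1 would be hit twice
```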
<br />
=== From operators to matrices: internalization/externalization ===<br />
<br />
It will be convenient to view operators on <math>H</math> as acting on <math>H\oplus H</math>, and conversely. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\rightsquigarrow p(x)+q(y)</math> where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is<br />
satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are<br />
disjoint, thus we have <math>p^*q = q^*p = 0</math>. As the sum of their<br />
codomains is the full space <math>H</math> we also have <math>pp^* + qq^* = 1</math>.<br />
<br />
Note that we have chosen <math>p</math> and <math>q</math> in <math>\mathcal{P}</math>. However the choice is arbitrary: any two <math>p</math>-isometries with full domain and disjoint codomains would do the job.<br />
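On a finite window of the basis, the relations satisfied by <math>p</math> and <math>q</math> can be verified with the dictionary encoding of partial permutations (a computational sketch of ours; the window size <code>N</code> is an arbitrary choice):

```python
def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    return {m: n for n, m in phi.items()}

N = 8
p = {n: 2 * n for n in range(N)}        # e_n -> e_{2n}
q = {n: 2 * n + 1 for n in range(N)}    # e_n -> e_{2n+1}

assert compose(inverse(p), p) == {n: n for n in range(N)}   # p*p = 1
assert compose(inverse(q), q) == {n: n for n in range(N)}   # q*q = 1
assert compose(inverse(p), q) == {}                         # p*q = 0
# pp* and qq* project on the even and odd indices respectively,
# and together they cover every index below 2N:
evens = compose(p, inverse(p))
odds = compose(q, inverse(q))
assert {**evens, **odds} == {k: k for k in range(2 * N)}
```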
<br />
Given an operator <math>u</math> on <math>H</math> we may ''externalize'' it obtaining an operator <math>U</math> on <math>H\oplus H</math> defined by the matrix:<br />
: <math>U = \begin{pmatrix}<br />
u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}</math><br />
where the <math>u_{ij}</math>'s are given by:<br />
: <math>u_{11} = p^*up</math>;<br />
: <math>u_{12} = p^*uq</math>;<br />
: <math>u_{21} = q^*up</math>;<br />
: <math>u_{22} = q^*uq</math>.<br />
<br />
The <math>u_{ij}</math>'s are called the ''components'' of <math>u</math>. The externalization is functorial in the sense that if <math>v</math> is another operator externalized as:<br />
: <math>V = \begin{pmatrix}<br />
v_{11} & v_{12}\\<br />
v_{21} & v_{22}<br />
\end{pmatrix} <br />
= \begin{pmatrix}<br />
p^*vp & p^*vq\\<br />
q^*vp & q^*vq<br />
\end{pmatrix}<br />
</math><br />
then the externalization of <math>uv</math> is <math>UV</math>.<br />
<br />
We also have:<br />
: <math>u = (pp^*+qq^*)u(pp^*+qq^*) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math><br />
which entails that externalization is reversible, its converse being called ''internalization''.<br />
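Externalization and internalization are purely combinatorial on partial permutations, so the round trip can be tested on a small example. The sketch below is ours (the helper names and the window size are illustrative choices): `externalize` returns the four components and `internalize` rebuilds the disjoint sum.

```python
def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    return {m: n for n, m in phi.items()}

N = 16
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}

def externalize(u):
    """Return the four components u_ij = (p or q)* u (p or q)."""
    pi, qi = inverse(p), inverse(q)
    return {(1, 1): compose(pi, compose(u, p)),
            (1, 2): compose(pi, compose(u, q)),
            (2, 1): compose(qi, compose(u, p)),
            (2, 2): compose(qi, compose(u, q))}

def internalize(c):
    """u = p u11 p* + p u12 q* + q u21 p* + q u22 q* (disjoint sum = dict union)."""
    pi, qi = inverse(p), inverse(q)
    pieces = [compose(p, compose(c[(1, 1)], pi)), compose(p, compose(c[(1, 2)], qi)),
              compose(q, compose(c[(2, 1)], pi)), compose(q, compose(c[(2, 2)], qi))]
    return {n: m for piece in pieces for n, m in piece.items()}

u = {0: 1, 1: 0, 2: 3, 3: 2}   # a small permutation, viewed as a p-isometry
assert internalize(externalize(u)) == u
```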
<br />
Furthermore if we suppose that <math>u</math> is a <math>p</math>-isometry then so are the components <math>u_{ij}</math>. Thus the formula above entails that the four terms of the sum have pairwise disjoint domains and pairwise disjoint codomains. <br />
{{Proposition|<br />
If <math>u</math> is a <math>p</math>-isometry and <math>u_{ij}</math> are its external components then:<br />
* <math>u_{1j}</math> and <math>u_{2j}</math> have disjoint domains, that is <math>u_{1j}^*u_{1j}u_{2j}^*u_{2j} = 0</math> for <math>j=1,2</math>;<br />
* <math>u_{i1}</math> and <math>u_{i2}</math> have disjoint codomains, that is <math>u_{i1}u_{i1}^*u_{i2}u_{i2}^* = 0</math> for <math>i=1,2</math>.<br />
}}<br />
<br />
As an example of computation in <math>\mathcal{P}</math> let us compute the product of the final projectors of <math>pu_{11}p^*</math> and <math>pu_{12}q^*</math>:<br />
: <math>\begin{align}<br />
(pu_{11}p^*)(pu^*_{11}p^*)(pu_{12}q^*)(qu_{12}^*p^*)<br />
&= (pp^*upp^*)(pp^*u^*pp^*)(pp^*uqq^*)(qq^*u^*pp^*)\\<br />
&= pp^*upp^*u^*pp^*uqq^*u^*pp^*\\<br />
&= pp^*u(pp^*)(u^*pp^*u)qq^*u^*pp^*\\<br />
&= pp^*u(u^*pp^*u)(pp^*)qq^*u^*pp^*\\<br />
&= pp^*uu^*pp^*u(pp^*)(qq^*)u^*pp^*\\<br />
&= 0<br />
\end{align}</math><br />
where we used the fact that all projectors in <math>\mathcal{P}</math> commute, which is in particular the case of <math>pp^*</math> and <math>u^*pp^*u</math>.<br />
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are partial isometries in <math>\mathcal{P}</math> we say they are dual when <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators. A ''type'' is a subset of <math>\mathcal{P}</math> that is equal to its bidual. In particular <math>X\orth</math> is a type for any <math>X\subset\mathcal{P}</math>. We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
<br />
=== The tensor and the linear implication ===<br />
<br />
If <math>u</math> and <math>v</math> are two <math>p</math>-isometries, summing them does not in general produce a <math>p</math>-isometry. However as <math>pup^*</math> and <math>qvq^*</math> have disjoint domains and disjoint codomains, <math>pup^* + qvq^*</math> is a <math>p</math>-isometry. Given two types <math>A</math> and <math>B</math>, we thus define their ''tensor'' by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by bidual to make sure that we obtain a type.<br />
<br />
From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
Note that this so-called tensor resembles a sum rather than a product. We will stick to this terminology though because it defines the interpretation of the tensor connective of linear logic.<br />
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we get:<br />
: <math>A\limp B = \{u\in\mathcal{P}\text{ s.t. } \forall v\in A, \forall w\in B\orth,\, u.(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
=== The identity ===<br />
<br />
The interpretation of the identity is an example of internalization. Given a type <math>A</math> we are to find an operator <math>\iota</math> in type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. This is the case since <math>u</math> is in <math>A</math> and <math>v</math> in <math>A\orth</math>.<br />
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
<br />
=== The execution formula, version 1: application ===<br />
<br />
{{Definition|<br />
Let <math>u</math> and <math>v</math> be two operators; as above denote by <math>u_{ij}</math> the external components of <math>u</math>. Assume that <math>u_{11}v</math> is nilpotent. We define a new operator <math>\mathrm{App}(u,v)</math> by:<br />
: <math>\mathrm{App}(u,v) = u_{22} + u_{21}v\sum_k(u_{11}v)^ku_{12}</math>.<br />
}}<br />
<br />
Note that the hypothesis that <math>u_{11}v</math> is nilpotent entails that the sum <math>\sum_k(u_{11}v)^k</math> is actually finite. It would be enough to assume that this sum converges. For simplicity we stick to the nilpotency condition, but we should mention that weak nilpotency would do as well.<br />
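The execution formula is also computable in the partial-permutation model: nilpotency of <math>u_{11}v</math> makes the series finite, and the terms have disjoint domains so their sum is a dict union. The following sketch is ours; the components, window size and example operators are all illustrative choices, not taken from the article.

```python
def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    return {m: n for n, m in phi.items()}

N = 16
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}
pi, qi = inverse(p), inverse(q)

def component(u, left, right):
    return compose(inverse(left), compose(u, right))

def internalize(u11, u12, u21, u22):
    pieces = [compose(p, compose(u11, pi)), compose(p, compose(u12, qi)),
              compose(q, compose(u21, pi)), compose(q, compose(u22, qi))]
    return {n: m for piece in pieces for n, m in piece.items()}

def app(u, v, bound=100):
    """App(u,v) = u22 + u21 v (sum_k (u11 v)^k) u12; assumes u11 v nilpotent."""
    u11, u12 = component(u, p, p), component(u, p, q)
    u21, u22 = component(u, q, p), component(u, q, q)
    total = dict(u22)
    power = {n: n for n in range(N)}           # (u11 v)^0, on our finite window
    for _ in range(bound):
        total.update(compose(u21, compose(v, compose(power, u12))))
        power = compose(u11, compose(v, power))
        if not power:                          # nilpotency reached: finite sum
            return total
    raise ValueError("u11 v does not look nilpotent")

u = internalize({0: 1}, {5: 5}, {6: 6}, {7: 7})
v = {5: 0, 1: 6}
assert app(u, v) == {7: 7, 5: 6}   # u22 plus the k = 1 term threading u12, v, u11, v, u21
```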
<br />
{{Theorem|<br />
If <math>u</math> and <math>v</math> are <math>p</math>-isometries such that <math>u_{11}v</math> is nilpotent, then <math>\mathrm{App}(u,v)</math> is also a <math>p</math>-isometry.<br />
}}<br />
<br />
{{Proof|<br />
Let us denote <math>E_k = u_{21}v(u_{11}v)^ku_{12}</math>. Recall that <math>u_{22}</math> and <math>u_{12}</math>, being external components of the <math>p</math>-isometry <math>u</math>, have disjoint domains. Thus it is also the case of <math>u_{22}</math> and <math>E_k</math>. Similarly <math>u_{22}</math> and <math>E_k</math> have disjoint codomains because <math>u_{22}</math> and <math>u_{21}</math> have disjoint codomains.<br />
<br />
Let now <math>k</math> and <math>l</math> be two integers such that <math>k>l</math> and let us compute for example the intersection of the codomains of <math>E_k</math> and <math>E_l</math>:<br />
: <math><br />
E_kE^*_kE_lE^*_l = (u_{21}v(u_{11}v)^ku_{12})(u^*_{12}(v^*u^*_{11})^kv^*u^*_{21})(u_{21}v(u_{11}v)^lu_{12})(u^*_{12}(v^*u^*_{11})^lv^*u_{21}^*)<br />
</math><br />
As <math>k>l</math> we may write <math>(v^*u_{11}^*)^k = (v^*u^*_{11})^{k-l-1}v^*u^*_{11}(v^*u^*_{11})^l</math>. Let us denote <math>E = u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}</math> so that <math>E_kE^*_kE_lE^*_l = u_{21}v(u_{11}v)^ku_{12}u^*_{12}(v^*u^*_{11})^{k-l-1}v^*Eu^*_{12}(v^*u^*_{11})^lv^*u_{21}^*</math>. We have:<br />
: <math>\begin{align}<br />
E &= u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}\\<br />
&= (u^*_{11}u_{11}u^*_{11})(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}\\<br />
&= u^*_{11}(u_{11}u^*_{11})\bigl((v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^l\bigr)u_{12}\\<br />
&= u^*_{11}\bigl((v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^l\bigr)(u_{11}u^*_{11})u_{12}\\<br />
&= u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{11}u^*_{11}u_{12}\\<br />
&= 0<br />
\end{align}</math><br />
because <math>u_{11}</math> and <math>u_{12}</math> have disjoint codomains, thus <math>u^*_{11}u_{12} = 0</math>. <br />
<br />
Similarly we can show that <math>E_k</math> and <math>E_l</math> have disjoint domains. Therefore we have proved that all terms of the sum <math>\mathrm{App}(u,v)</math> have disjoint domains and disjoint codomains. Consequently <math>\mathrm{App}(u,v)</math> is a <math>p</math>-isometry.<br />
}}<br />
<br />
{{Theorem|<br />
Let <math>A</math> and <math>B</math> be two types and <math>u</math> a <math>p</math>-isometry. Then the two following conditions are equivalent:<br />
# <math>u\in A\limp B</math>;<br />
# for any <math>v\in A</math> we have:<br />
#* <math>u_{11}v</math> is nilpotent and<br />
#* <math>\mathrm{App}(u, v)\in B</math>.<br />
}}<br />
<br />
{{Proof|<br />
Let <math>v</math> and <math>w</math> be two <math>p</math>-isometries. If we compute<br />
: <math>(u.(pvp^* + qwq^*))^n = \bigl((pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*)(pvp^* + qwq^*)\bigr)^n</math><br />
we get a finite sum of monomial operators of the form:<br />
# <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math><br />
# <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{12}w(u_{22}w)^{i_m}q^*</math>,<br />
# <math>q(u_{22}w)^{i_0}u_{21}v(u_{11}v)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math> or<br />
# <math>q(u_{22}w)^{i_0}u_{21}v(u_{11}v)^{i_1}\dots u_{12}w(u_{22}w)^{i_m}q^*</math>,<br />
for all tuples of integers <math>(i_0,\dots, i_m)</math> such that <math>i_0+\cdots+i_m+m = n</math>.<br />
<br />
Each of these monomials is a <math>p</math>-isometry. Furthermore they have disjoint domains and disjoint codomains because their sum is the <math>p</math>-isometry <math>(u.(pvp^* + qwq^*))^n</math>.<br />
<br />
Suppose <math>u_{11}v</math> is nilpotent and consider:<br />
: <math>\bigl(\mathrm{App}(u,v)w\bigr)^n = \biggl(\bigl(u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12}\bigr)w\biggr)^n</math>.<br />
Developing we get a finite sum of monomials of the form:<br />
: <math>(u_{22}w)^{l_0}u_{21}v(u_{11}v)^{k_1}u_{12}w(u_{22}w)^{l_1}\dots u_{21}v(u_{11}v)^{k_m}u_{12}w(u_{22}w)^{l_m}</math><br />
for all tuples <math>(l_0,\dots, l_m)</math> such that <math>l_0+\cdots+l_m + m = n</math>, the exponents <math>k_1,\dots,k_m</math> ranging over all nonnegative integers.<br />
<br />
Again as these monomials are <math>p</math>-isometries and their sum is the <math>p</math>-isometry <math>(\mathrm{App}(u,v)w)^n</math>, they have pairwise disjoint domains and pairwise disjoint codomains. Note that each of these monomials is equal to <math>q^*Mq</math> where <math>M</math> is a monomial of type 4 above.<br />
<br />
Suppose now that <math>u\in A\limp B</math> and <math>v\in A</math>. Then, since <math>0\in B\orth</math> (<math>0</math> belongs to any type), <math>u.(pvp^*) = pu_{11}vp^* + qu_{21}vp^*</math> is nilpotent. As <math>p^*\bigl(u.(pvp^*)\bigr)^np = (u_{11}v)^n</math> for any <math>n</math>, this entails that <math>u_{11}v</math> is nilpotent.<br />
<br />
Suppose further that <math>w\in B\orth</math>. Then <math>u.(pvp^*+qwq^*)</math> is nilpotent, thus there is an <math>N</math> such that <math>(u.(pvp^* + qwq^*))^n=0</math> for any <math>n\geq N</math>. This entails that all monomials of type 1 to 4 are null because they have disjoint domains and disjoint codomains. Therefore all monomials appearing in the development of <math>(\mathrm{App}(u,v)w)^N</math> are null, which proves that <math>\mathrm{App}(u,v)w</math> is nilpotent. Thus <math>\mathrm{App}(u,v)\in B</math>.<br />
<br />
Conversely suppose that for any <math>v\in A</math> and <math>w\in B\orth</math>, the operators <math>u_{11}v</math> and <math>\mathrm{App}(u,v)w</math> are nilpotent. Let <math>P</math> and <math>N</math> be their respective degrees of nilpotency and put <math>n=N(P+1)+N</math>. Then we claim that all monomials appearing in the development of <math>(u.(pvp^*+qwq^*))^n</math> are null.<br />
<br />
Consider for example a monomial of type 1:<br />
: <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math>.<br />
If <math>i_{2k}\geq P</math> for some <math>0\leq k\leq m/2</math> then <math>(u_{11}v)^{i_{2k}}=0</math>. Otherwise if <math>i_{2k}<P</math> for all <math>k</math> then as we have:<br />
: <math>i_0+\cdots+i_m + m = n</math><br />
we deduce:<br />
: <math>i_1+i_3+\cdots +i_{m-1} + m/2 = n - m/2 - (i_0+i_2+\cdots +i_m)</math><br />
thus:<br />
: <math>i_1+i_3+\cdots +i_{m-1}\geq n - (m/2)(1+P)</math>.<br />
Now if <math>m/2\geq N</math> then <math>i_1+\cdots+i_{m-1}+m/2 \geq N</math>. Otherwise <math>m/2<N</math> and<br />
: <math>i_1+i_3+\cdots +i_{m-1}\geq n - N(1+P) = N</math>.<br />
Since <math>N</math> is the degree of nilpotency of <math>\mathrm{App}(u,v)w</math> we have that the monomial:<br />
: <math>(u_{22}w)^{i_1}u_{21}v(u_{11}v)^{i_2}u_{12}w\dots(u_{11}v)^{i_{m-2}}u_{12}w(u_{22}w)^{i_{m-1}}</math><br />
is null. Thus so is the monomial of type 1 we started with.<br />
}}<br />
<br />
{{Corollary|<br />
If <math>A</math> and <math>B</math> are types then we have:<br />
: <math>A\limp B = \{u\in\mathcal{P} \text{ such that }\forall v\in A: u_{11}v\in\bot\text{ and } \mathrm{App}(u, v)\in B\}</math>.<br />
}}<br />
<br />
As an example if we compute the application of the interpretation of the identity <math>\iota</math> in type <math>A\limp A</math> to the operator <math>v\in A</math> then we have:<br />
: <math>\mathrm{App}(\iota, v) = \iota_{22} + \iota_{21}v\sum(\iota_{11}v)^k\iota_{12}</math>.<br />
Now recall that <math>\iota = pq^* + qp^*</math> so that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math> and we thus get:<br />
: <math>\mathrm{App}(\iota, v) = v</math><br />
as expected.<br />
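This computation can be replayed concretely with the partial-permutation encoding (our illustration, not part of the original article): the components of <math>\iota</math> are as stated, and the application collapses to <math>v</math>.

```python
def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

N = 16
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}

# iota = pq* + qp* swaps each even index 2n with 2n+1:
iota = {}
for n in range(N):
    iota[2 * n] = 2 * n + 1
    iota[2 * n + 1] = 2 * n

def component(u, left, right):
    li = {m: n for n, m in left.items()}   # left* as a partial permutation
    return compose(li, compose(u, right))

assert component(iota, p, p) == {}                          # iota11 = 0
assert component(iota, p, q) == {n: n for n in range(N)}    # iota12 = 1
assert component(iota, q, p) == {n: n for n in range(N)}    # iota21 = 1
assert component(iota, q, q) == {}                          # iota22 = 0

# hence App(iota, v) = iota21 v iota12 = v for any v inside our window:
v = {0: 2, 2: 0, 5: 7}
assert compose(component(iota, q, p), compose(v, component(iota, p, q))) == v
```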
<br />
=== The tensor rule ===<br />
<br />
Let now <math>A, A', B</math> and <math>B'</math> be types and consider two operators <math>u</math> and <math>u'</math> respectively in <math>A\limp B</math> and <math>A'\limp B'</math>. We define an operator denoted by <math>u\tens u'</math> by:<br />
: <math>\begin{align}<br />
u\tens u' &= ppp^*upp^*p^* + qpq^*upp^*p^* + ppp^*uqp^*q^* + qpq^*uqp^*q^*\\<br />
&+ pqp^*u'pq^*p^* + qqq^*u'pq^*p^* + pqp^*u'qq^*q^* + qqq^*u'qq^*q^*<br />
\end{align}</math><br />
<br />
Once again the notation is motivated by linear logic syntax and conflicts with linear algebra practice, since what we denote by <math>u\tens u'</math> actually is the internalization of the direct sum <math>u\oplus u'</math>.<br />
<br />
Indeed if we think of <math>u</math> and <math>u'</math> as the internalizations of the matrices:<br />
: <math><br />
\begin{pmatrix}u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}<br />
</math> and <math><br />
\begin{pmatrix}u'_{11} & u'_{12}\\<br />
u'_{21} & u'_{22}<br />
\end{pmatrix}</math><br />
then we may write:<br />
: <math>\begin{align}<br />
u\tens u' &= ppu_{11}p^*p^* + qpu_{21}p^*p^* + ppu_{12}p^*q^* + qpu_{22}p^*q^*\\<br />
&+ pqu'_{11}q^*p^* + qqu'_{21}q^*p^* + pqu'_{12}q^*q^* + qqu'_{22}q^*q^*<br />
\end{align}</math><br />
<br />
Thus the components of <math>u\tens u'</math> are given by:<br />
: <math>(u\tens u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math>.<br />
and we see that <math>u\tens u'</math> is actually the internalization of the matrix:<br />
: <math><br />
\begin{pmatrix}<br />
u_{11} & 0 & u_{12} & 0 \\<br />
0 & u'_{11} & 0 & u'_{12} \\<br />
u_{21} & 0 & u_{22} & 0 \\<br />
0 & u'_{21} & 0 & u'_{22} \\<br />
\end{pmatrix}<br />
</math><br />
<br />
We now show that if <math>u</math> and <math>u'</math> are in types <math>A\limp B</math> and <math>A'\limp B'</math>, then <math>u\tens u'</math> is in <math>A\tens A'\limp B\tens B'</math>. For this we consider <math>v</math> and <math>v'</math> respectively in <math>A</math> and <math>A'</math>, so that <math>pvp^* + qv'q^*</math> is in <math>A\tens A'</math>, and we show that <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)\in B\tens B'</math>.<br />
<br />
Since <math>u</math> and <math>u'</math> are in <math>A\limp B</math> and <math>A'\limp B'</math> we have that <math>\mathrm{App}(u, v)</math> and <math>\mathrm{App}(u', v')</math> are respectively in <math>B</math> and <math>B'</math>, thus:<br />
: <math>p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^* \in B\tens B'</math>.<br />
<br />
We know that both <math>u_{11}v</math> and <math>u'_{11}v'</math> are nilpotent. But we have:<br />
: <math>\begin{align}<br />
\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^n<br />
&= \bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^n\\<br />
&= (pu_{11}vp^* + qu'_{11}v'q^*)^n\\<br />
&= p(u_{11}v)^np^* + q(u'_{11}v')^nq^*<br />
\end{align}</math><br />
<br />
Therefore <math>(u\tens u')_{11}(pvp^* + qv'q^*)</math> is nilpotent. So we can compute <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)</math>:<br />
: <math>\begin{align}<br />
&\mathrm{App}(u\tens u', pvp^* + qv'q^*)\\<br />
&= (u\tens u')_{22} + (u\tens u')_{21}(pvp^* + qv'q^*)\sum\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^k(u\tens u')_{12}\\<br />
&= pu_{22}p^* + qu'_{22}q^* + (pu_{21}p^* + qu'_{21}q^*)(pvp^* + qv'q^*)\sum\bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^k(pu_{12}p^* + qu'_{12}q^*)\\<br />
&= p\bigl(u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}\bigr)p^* + q\bigl(u'_{22} + u'_{21}v'\sum(u'_{11}v')^ku'_{12}\bigr)q^*\\<br />
&= p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^*<br />
\end{align}</math><br />
which thus lives in <math>B\tens B'</math>.<br />
<br />
=== Other monoidal constructions ===<br />
<br />
==== Contraposition ====<br />
<br />
Let <math>A</math> and <math>B</math> be some types; we have:<br />
: <math>A\limp B = A\orth\limpinv B\orth</math><br />
<br />
Indeed, <math>u\in A\limp B</math> means that for any <math>v</math> and <math>w</math> respectively in <math>A</math> and <math>B\orth</math> we have <math>u.(pvp^* + qwq^*)\in\bot</math> which is exactly the definition of <math>A\orth\limpinv B\orth</math>.<br />
<br />
We will denote <math>u\orth</math> the operator:<br />
: <math>u\orth = pu_{22}p^* + pu_{21}q^* + qu_{12}p^* + qu_{11}q^*</math><br />
where <math>u_{ij}</math> is given by externalization. Therefore the externalization of <math>u\orth</math> is:<br />
: <math>(u\orth)_{ij} = u_{\bar i\,\bar j}</math> where <math>\bar .</math> is defined by <math>\bar1 = 2, \bar2 = 1</math>.<br />
From this we deduce that <math>u\orth\in B\orth\limp A\orth</math> and that <math>(u\orth)\orth = u</math>.<br />
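On operators generated by partial permutations, contraposition is a simple matrix transformation: swap the components according to <math>\bar .</math>. The sketch below (an illustrative Python encoding, not part of the original text: partial permutations are finite dicts, and <math>p</math>, <math>q</math> are the even/odd shifts <math>p(e_n) = e_{2n}</math>, <math>q(e_n) = e_{2n+1}</math>) checks the involution <math>(u\orth)\orth = u</math> on an arbitrary example:<br />

```python
def compose(f, g):
    """Partial permutations as finite dicts: (f∘g)(n) = f(g(n))."""
    return {n: f[g[n]] for n in g if g[n] in f}

def chain(*fs):
    """Compose several partial permutations, rightmost applied first."""
    out = fs[-1]
    for f in reversed(fs[:-1]):
        out = compose(f, out)
    return out

def inverse(f):
    return {m: n for n, m in f.items()}

N = 8
p = {n: 2 * n for n in range(N)}        # p(e_n) = e_{2n}
q = {n: 2 * n + 1 for n in range(N)}    # q(e_n) = e_{2n+1}
ps, qs = inverse(p), inverse(q)         # adjoints p*, q*

def externalize(u):
    """Components u11 = p*up, u12 = p*uq, u21 = q*up, u22 = q*uq."""
    return (chain(ps, u, p), chain(ps, u, q), chain(qs, u, p), chain(qs, u, q))

def dual(u):
    """u⊥ = p u22 p* + p u21 q* + q u12 p* + q u11 q*: swap both indices."""
    u11, u12, u21, u22 = externalize(u)
    return {**chain(p, u22, ps), **chain(p, u21, qs),
            **chain(q, u12, ps), **chain(q, u11, qs)}

u = {0: 5, 3: 2, 6: 6, 1: 7}            # a partial permutation on {0,...,15}
assert dual(dual(u)) == u               # (u⊥)⊥ = u
```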
<br />
==== Commutativity ====<br />
Let <math>\sigma</math> be the operator:<br />
: <math>\sigma = ppq^*q^* +pqp^*q^* + qpq^*p^* + qqp^*p^*</math>.<br />
One can check that <math>\sigma</math> is the internalization of the operator <math>S</math> on <math>H\oplus H\oplus H\oplus H</math> defined by: <math>S(x_1\oplus x_2\oplus x_3\oplus x_4) = x_4\oplus x_3\oplus x_2\oplus x_1</math>. In particular the components of <math>\sigma</math> are:<br />
: <math>\sigma_{11} = \sigma_{22} = 0</math>;<br />
: <math>\sigma_{12} = \sigma_{21} = pq^* + qp^*</math>.<br />
<br />
Let <math>A</math> and <math>B</math> be types and <math>u</math> and <math>v</math> be operators in <math>A</math> and <math>B</math>. Then <math>pup^* + qvq^*</math> is in <math>A\tens B</math> and as <math>\sigma_{11}.(pup^* + qvq^*) = 0</math> we may compute:<br />
: <math>\begin{align}<br />
\mathrm{App}(\sigma, pup^* + qvq^*) <br />
&= \sigma_{22} + \sigma_{21}(pup^* + qvq^*)\sum(\sigma_{11}(pup^* + qvq^*))^k\sigma_{12}\\<br />
&= (pq^* + qp^*)(pup^* + qvq^*)(pq^* + qp^*)\\<br />
&= pvp^* + quq^*<br />
\end{align}</math><br />
But <math>pvp^* + quq^*\in B\tens A</math>, thus we have shown that:<br />
: <math>\sigma\in (A\tens B) \limp (B\tens A)</math>.<br />
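The computation of <math>\mathrm{App}(\sigma, pup^* + qvq^*)</math> can be replayed concretely on operators generated by partial permutations (an illustrative Python encoding, not part of the original text, with partial permutations as finite dicts; since <math>\sigma_{11} = 0</math> the execution formula reduces to its <math>k = 0</math> term):<br />

```python
def compose(f, g):
    """Partial permutations as finite dicts: (f∘g)(n) = f(g(n))."""
    return {n: f[g[n]] for n in g if g[n] in f}

def chain(*fs):
    """Compose several partial permutations, rightmost applied first."""
    out = fs[-1]
    for f in reversed(fs[:-1]):
        out = compose(f, out)
    return out

def inverse(f):
    return {m: n for n, m in f.items()}

p = {n: 2 * n for n in range(16)}        # p(e_n) = e_{2n}
q = {n: 2 * n + 1 for n in range(16)}    # q(e_n) = e_{2n+1}
ps, qs = inverse(p), inverse(q)          # adjoints p*, q*

# sigma = ppq*q* + pqp*q* + qpq*p* + qqp*p*
sigma = {**chain(p, p, qs, qs), **chain(p, q, ps, qs),
         **chain(q, p, qs, ps), **chain(q, q, ps, ps)}

assert chain(ps, sigma, p) == {} == chain(qs, sigma, q)   # sigma11 = sigma22 = 0
s12, s21 = chain(ps, sigma, q), chain(qs, sigma, p)       # both equal pq* + qp*

u = {0: 2, 1: 1}                             # an operator in some type A
v = {3: 0}                                   # an operator in some type B
x = {**chain(p, u, ps), **chain(q, v, qs)}   # p u p* + q v q*  in A⊗B
y = {**chain(p, v, ps), **chain(q, u, qs)}   # p v p* + q u q*  in B⊗A
# since sigma11 = 0, App(sigma, x) = sigma21 ∘ x ∘ sigma12:
assert chain(s21, x, s12) == y
```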
<br />
==== Distributivity ====<br />
We get distributivity by considering the operator:<br />
: <math>\delta = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math><br />
that is similarly shown to be in type <math>A\tens(B\tens C)\limp(A\tens B)\tens C</math> for any types <math>A</math>, <math>B</math> and <math>C</math>.<br />
<br />
<br />
==== Weak distributivity ====<br />
We can finally get weak distributivity thanks to the operators:<br />
: <math>\delta_1 = pppp^*q^* + ppqp^*q^*q^* + pqq^*q^*q^* + qpp^*p^*p^* + qqp q^*p^*p^* + qqq q^*p^*</math> and<br />
: <math>\delta_2 = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math>.<br />
<br />
Given three types <math>A</math>, <math>B</math> and <math>C</math> then one can show that:<br />
: <math>\delta_1</math> has type <math>((A\limp B)\tens C)\limp A\limp (B\tens C)</math> and<br />
: <math>\delta_2</math> has type <math>(A\tens(B\limp C))\limp (A\limp B)\limp C</math>.<br />
<br />
=== Execution formula, version 2: composition ===<br />
<br />
Let <math>A</math>, <math>B</math> and <math>C</math> be types and <math>u</math> and <math>v</math> be operators respectively in types <math>A\limp B</math> and <math>B\limp C</math>.<br />
<br />
As usual we will denote <math>u_{ij}</math> and <math>v_{ij}</math> the operators obtained by externalization of <math>u</math> and <math>v</math>, eg, <math>u_{11} = p^*up</math>, ...<br />
<br />
As <math>u</math> is in <math>A\limp B</math> we have that <math>\mathrm{App}(u, 0)=u_{22}\in B</math>; similarly as <math>v\in B\limp C</math>, thus <math>v\orth\in C\orth\limp B\orth</math>, we have <math>\mathrm{App}(v\orth, 0) = v_{11}\in B\orth</math>. Thus <math>u_{22}v_{11}</math> is nilpotent.<br />
<br />
We define the operator <math>\mathrm{Comp}(u, v)</math> by:<br />
: <math>\begin{align}<br />
\mathrm{Comp}(u, v) &= p(u_{11} + u_{12}\sum(v_{11}u_{22})^k\,v_{11}u_{21})p^*\\<br />
&+ p(u_{12}\sum(v_{11}u_{22})^k\,v_{12})q^*\\<br />
&+ q(v_{21}\sum(u_{22}v_{11})^k\,u_{21})p^*\\<br />
&+ q(v_{22} + v_{21}\sum(u_{22}v_{11})^k\,u_{22}v_{12})q^*<br />
\end{align}</math><br />
<br />
This is well defined since <math>u_{22}v_{11}</math> is nilpotent. As an example let us compute the composition of <math>u</math> and <math>\iota</math> in type <math>B\limp B</math>; recall that <math>\iota = pq^* + qp^*</math>, so that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math>, and we get:<br />
: <math><br />
\mathrm{Comp}(u, \iota) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^* = u<br />
</math><br />
A similar computation shows that <math>\mathrm{Comp}(\iota, v) = v</math> (here we use <math>pp^* + qq^* = 1</math>).<br />
<br />
Coming back to the general case we claim that <math>\mathrm{Comp}(u, v)</math> is in <math>A\limp C</math>: let <math>a</math> be an operator in <math>A</math>. By computation we can check that:<br />
: <math>\mathrm{App}(\mathrm{Comp}(u, v), a) = \mathrm{App}(v, \mathrm{App}(u, a))</math>.<br />
Now since <math>u</math> is in <math>A\limp B</math>, <math>\mathrm{App}(u, a)</math> is in <math>B</math> and since <math>v</math> is in <math>B\limp C</math>, <math>\mathrm{App}(v, \mathrm{App}(u, a))</math> is in <math>C</math>.<br />
<br />
If we now consider a type <math>D</math> and an operator <math>w</math> in <math>C\limp D</math> then we have:<br />
: <math>\mathrm{Comp}(\mathrm{Comp}(u, v), w) = \mathrm{Comp}(u,<br />
\mathrm{Comp}(v, w))</math>.<br />
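The composition formula, like the application, can be computed path-wise on operators generated by partial permutations. The following Python sketch (an illustrative finite encoding, not part of the original text) implements <math>\mathrm{Comp}</math> componentwise and checks the two identity laws <math>\mathrm{Comp}(u, \iota) = \mathrm{Comp}(\iota, u) = u</math>:<br />

```python
def compose(f, g):
    """Partial permutations as finite dicts: (f∘g)(n) = f(g(n))."""
    return {n: f[g[n]] for n in g if g[n] in f}

def chain(*fs):
    """Compose several partial permutations, rightmost applied first."""
    out = fs[-1]
    for f in reversed(fs[:-1]):
        out = compose(f, out)
    return out

def inverse(f):
    return {m: n for n, m in f.items()}

N = 8
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}
ps, qs = inverse(p), inverse(q)

def externalize(u):
    return (chain(ps, u, p), chain(ps, u, q), chain(qs, u, p), chain(qs, u, q))

def internalize(u11, u12, u21, u22):
    return {**chain(p, u11, ps), **chain(p, u12, qs),
            **chain(q, u21, ps), **chain(q, u22, qs)}

def star(pre, loop, exit_):
    """exit ∘ Σ_k loop^k ∘ pre, computed path-wise (assumes nilpotency)."""
    out = {}
    for n, x in pre.items():
        while x is not None:
            if x in exit_:
                out[n] = exit_[x]
                break
            x = loop.get(x)
    return out

def Comp(u, v):
    u11, u12, u21, u22 = externalize(u)
    v11, v12, v21, v22 = externalize(v)
    g_vu, g_uv = compose(v11, u22), compose(u22, v11)
    c11 = {**u11, **star(compose(v11, u21), g_vu, u12)}
    c12 = star(v12, g_vu, u12)
    c21 = star(u21, g_uv, v21)
    c22 = {**v22, **star(compose(u22, v12), g_uv, v21)}
    return internalize(c11, c12, c21, c22)

iota = {**chain(p, qs), **chain(q, ps)}   # iota = pq* + qp*
u = {0: 5, 3: 2, 6: 6, 1: 7}
assert Comp(u, iota) == u and Comp(iota, u) == u
```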
<br />
Putting together the results of this section we finally have:<br />
{{Theorem|<br />
Let GoI(H) be defined by:<br />
* objects are types, ''ie'' sets <math>A</math> of operators satisfying: <math>A\biorth = A</math>;<br />
* morphisms from <math>A</math> to <math>B</math> are operators in type <math>A\limp B</math>;<br />
* composition is given by the formula above.<br />
<br />
Then GoI(H) is a star-autonomous category.<br />
}}<br />
<br />
= The Geometry of Interaction as an abstract machine =</div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interactionGeometry of interaction2010-04-26T10:37:43Z<p>Laurent Regnier: typos, style</p>
<hr />
<div>The ''geometry of interaction'', GoI for short, was defined in the early nineties by Girard as an interpretation of linear logic into operator algebras: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math><ref>To be precise one should say from ''the space interpreting'' <math>A</math> to ''the space interpreting'' <math>B</math>.</ref>, and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'' that bears some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (Multiplicative and Exponential fragment) which is enough to encode lambda-calculus. Since then Girard proposed several improvements: firstly the extension to the additive connectives known as ''Geometry of Interaction 3'', and more recently a complete reformulation using von Neumann algebras that allows one to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model, exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata that interact with each other through their common interface. The execution formula has also rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentioned in [[coherent semantics]] under the name ''symmetric reducibility'', and that was first put to use in [[phase semantics]]. First set a general space <math>P</math> called the ''proof space'' because this is where the interpretations of proofs will live. Make sure that <math>P</math> is a (not necessarily commutative) monoid. In the case of GoI, the proof space is a subset of the space of bounded operators on <math>\ell^2</math>.<br />
<br />
Second define a particular subset of <math>P</math> that will be denoted by <math>\bot</math>; then derive a duality on <math>P</math>: for <math>u,v\in P</math>, <math>u</math> and <math>v</math> are dual<ref>In modern terms one says that <math>u</math> and <math>v</math> are ''polar''.</ref> iff <math>uv\in\bot</math>.<br />
<br />
For the GoI, two dualities have proved to work; we will consider the first one: nilpotency, ''ie'', <math>\bot</math> is the set of nilpotent operators in <math>P</math>. Let us spell this out: two operators <math>u</math> and <math>v</math> are dual if there is a nonnegative integer <math>n</math> such that <math>(uv)^n = 0</math>. Note in particular that <math>uv\in\bot</math> iff <math>vu\in\bot</math>.<br />
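This duality can be tested concretely on the operators considered below: an operator generated by a partial permutation is nilpotent exactly when the underlying partial permutation has no cycle. A minimal Python sketch (an illustrative encoding, not part of the original text; partial permutations are represented as finite dicts):<br />

```python
def compose(f, g):
    """Partial permutations as finite dicts: (f∘g)(n) = f(g(n))."""
    return {n: f[g[n]] for n in g if g[n] in f}

def is_nilpotent(f):
    """f^n = 0 for some n: iterated composition eventually empties.
    A nilpotent finite partial permutation dies within len(f)+1 steps."""
    g = dict(f)
    for _ in range(len(f) + 1):
        if not g:
            return True
        g = compose(f, g)
    return False          # a cycle survives forever: not nilpotent

u = {0: 1, 1: 2}          # e_0 -> e_1 -> e_2, no cycle
v = {2: 0}                # sends e_2 back to e_0
uv, vu = compose(u, v), compose(v, u)
assert is_nilpotent(uv) and is_nilpotent(vu)   # uv in ⊥ iff vu in ⊥
assert not is_nilpotent({0: 0})                # a fixed point is a cycle
```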
<br />
When <math>X</math> is a subset of <math>P</math> define <math>X\orth</math> as the set of elements of <math>P</math> that are dual to all elements of <math>X</math>:<br />
: <math>X\orth = \{u\in P, \forall v\in X, uv\in\bot\}</math>.<br />
<br />
This construction has a few properties that we will use without mention in the sequel. Given two subsets <math>X</math> and <math>Y</math> of <math>P</math> we have:<br />
* if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;<br />
* <math>X\subset X\biorth</math>;<br />
* <math>X\triorth = X\orth</math>.<br />
<br />
Last define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for all operator <math>v\in T\orth</math>, that is such that <math>u'v\in\bot</math> for all <math>u'\in T</math>, we have <math>uv\in\bot</math>.<br />
<br />
The real work<ref>The difficulty is to find the right duality that will make logical operations interpretable. General conditions that allow one to achieve this have been formulated by Hyland and Schalk thanks to their theory of ''double gluing''.</ref> is now to interpret logical operations, that is to associate a type to each formula, an object to each proof and show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
We begin with a brief tour of the operations in Hilbert spaces that we use. In this article <math>H</math> will stand for the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> we denote by <math>\langle x,y\rangle</math> their scalar product:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''orthogonal'' if their scalar product is null. We will say that two subspaces are ''disjoint'' when their vectors are pairwise orthogonal; this terminology is slightly misleading as disjoint subspaces always have <math>0</math> in common.<br />
<br />
The ''norm'' of a vector is the square root of the scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical hilbertian basis of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kronecker symbol: <math>\delta_{kn}=1</math> if <math>k=n</math>, <math>0</math> otherwise. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
An ''operator'' on <math>H</math> is a ''continuous'' linear map from <math>H</math> to <math>H</math>. Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.<br />
<br />
The set of (bounded) operators is denoted by <math>\mathcal{B}(H)</math>.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are annihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel, ''ie'', the maximal subspace disjoint from the kernel:<br />
<br />
* <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
* <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
* <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>0</math> (the projector on the null subspace) or <math>1</math>, that is an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is self-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^*u = u</math>; this condition entails that we also have <math>u^*uu^* = u^*</math>. As a consequence <math>u^*u</math> and <math>uu^*</math> are both projectors, called respectively the ''initial'' and the ''final'' projector of <math>u</math> because their codomains are respectively the domain and the codomain of <math>u</math>. The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math> that is if <math>u^* u = 1</math> we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries, the equation <math>uu^* + vv^* = 1</math> means that the codomains of <math>u</math> and <math>v</math> are disjoint and that their direct sum is <math>H</math>.<br />
<br />
=== Partial permutations and partial isometries ===<br />
<br />
We will now define our proof space which turns out to be the set of partial isometries acting as permutations on a fixed basis of <math>H</math>.<br />
<br />
More precisely a ''partial permutation'' <math>\varphi</math> on <math>\mathbb{N}</math> is a function defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> which is one-to-one onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
* <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
* if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
* the codomain of <math>\varphi\circ\psi</math> is the image of the domain.<br />
<br />
Partial permutations are well known to form a structure of ''inverse monoid'' that we detail now.<br />
<br />
A ''partial identity'' is a partial permutation <math>1_D</math> whose domain and codomain are both equal to a subset <math>D</math> on which <math>1_D</math> is the identity function. Partial identities are idempotent for composition.<br />
<br />
Among partial identities one finds the identity on the empty subset, that is the empty map, which we will denote by <math>0</math>, and the identity on <math>\mathbb{N}</math> which we will denote by <math>1</math>; the latter is the neutral element for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and which satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
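The inverse monoid laws above can be checked mechanically. A minimal Python sketch (an illustrative encoding, not part of the original text: a partial permutation is a finite dict from <math>\mathbb{N}</math> to <math>\mathbb{N}</math>):<br />

```python
def compose(f, g):
    """Partial permutations as finite dicts: (f∘g)(n) = f(g(n))."""
    return {n: f[g[n]] for n in g if g[n] in f}

def inverse(f):
    """The inverse partial permutation: swap inputs and outputs."""
    return {m: n for n, m in f.items()}

phi = {0: 3, 1: 5, 4: 2}               # a partial permutation of N
inv = inverse(phi)                     # {3: 0, 5: 1, 2: 4}

id_dom = {n: n for n in phi}           # the partial identity 1_{D_phi}
id_cod = {m: m for m in phi.values()}  # the partial identity 1_{C_phi}

assert compose(inv, phi) == id_dom     # phi^{-1} ∘ phi = 1_{D_phi}
assert compose(phi, inv) == id_cod     # phi ∘ phi^{-1} = 1_{C_phi}
assert compose(id_dom, id_dom) == id_dom   # partial identities are idempotent
```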
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other terms if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined so that we may shorten the definition of <math>u_\varphi</math> into:<br />
: <math>u_\varphi(e_n) = e_{\varphi(n)}</math>.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. As a particular case if <math>\varphi</math> is <math>1_D</math> the partial identity on <math>D</math> then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.<br />
<br />
{{Proposition|<br />
Let <math>\varphi</math> and <math>\psi</math> be two partial permutations. We have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
The adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the initial projector of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math><br />
<br />
and the final projector of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u^*_\varphi = u_{1_{C_\varphi}}</math>.<br />
<br />
Projectors generated by partial identities commute; in particular we have:<br />
: <math>u_\varphi u_\varphi^*u_\psi u_\psi^* = u_\psi u_\psi^*u_\varphi u_\varphi^*</math>.<br />
}}<br />
<br />
{{Definition|<br />
The ''proof space'' <math>\mathcal{P}</math> is the set of partial isometries of the form <math>u_\varphi</math> for partial permutations <math>\varphi</math> on <math>\mathbb{N}</math>.<br />
}}<br />
<br />
In particular note that <math>0\in\mathcal{P}</math>. The set <math>\mathcal{P}</math> is a submonoid of <math>\mathcal{B}(H)</math> but it is not a subalgebra: in general given <math>u,v\in\mathcal{P}</math> we don't necessarily have <math>u+v\in\mathcal{P}</math>. However we have:<br />
<br />
{{Proposition|<br />
Let <math>u, v\in\mathcal{P}</math>. Then <math>u+v\in\mathcal{P}</math> iff <math>u</math> and <math>v</math> have disjoint domains and disjoint codomains, that is:<br />
: <math>u+v\in\mathcal{P}</math> iff <math>uu^*vv^* = u^*uv^*v = 0</math>.<br />
}}<br />
<br />
Also note that if <math>u+v=0</math> then <math>u=v=0</math>.<br />
<br />
=== From operators to matrices: internalization/externalization ===<br />
<br />
It will be convenient to view operators on <math>H</math> as acting on <math>H\oplus H</math>, and conversely. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\rightsquigarrow p(x)+q(y)</math> where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are orthogonal, thus we have <math>p^* q = q^* p = 0</math>. Note that we also have <math>pp^* + qq^* = 1</math>.<br />
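These identities can be verified directly on the canonical basis, truncated to a finite range (a Python sketch with partial permutations as finite dicts, not part of the original text; the bound `N` is an artefact of the finite encoding, not of the construction):<br />

```python
def compose(f, g):
    """Partial permutations as finite dicts: (f∘g)(n) = f(g(n))."""
    return {n: f[g[n]] for n in g if g[n] in f}

def inverse(f):
    return {m: n for n, m in f.items()}

N = 8                                   # truncate N to a finite range
p = {n: 2 * n for n in range(N)}        # p(e_n) = e_{2n}
q = {n: 2 * n + 1 for n in range(N)}    # q(e_n) = e_{2n+1}
ps, qs = inverse(p), inverse(q)         # adjoints p*, q*

one = {n: n for n in range(N)}
assert compose(ps, p) == one and compose(qs, q) == one   # p*p = q*q = 1
assert compose(ps, q) == {} and compose(qs, p) == {}     # p*q = q*p = 0
# pp* + qq* = 1 on the truncated range {0,...,2N-1}:
pp_qq = {**compose(p, ps), **compose(q, qs)}
assert pp_qq == {n: n for n in range(2 * N)}
```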
<br />
The choice of <math>p</math> and <math>q</math> is actually arbitrary, any two partial isometries with full domain and orthogonal codomains would do the job.<br />
<br />
Let <math>U</math> be an operator on <math>H\oplus H</math>. We can write <math>U</math> as a matrix:<br />
: <math>U = \begin{pmatrix}<br />
u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}</math><br />
where each <math>u_{ij}</math> operates on <math>H</math>.<br />
<br />
Now through the isomorphism <math>H\oplus H\cong H</math> we may transform <math>U</math> into the operator <math>u</math> on <math>H</math> defined by:<br />
<br />
: <math>u = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math>.<br />
<br />
We call <math>u</math> the ''internalization'' of <math>U</math>. Internalization is compatible with composition (functorial so to speak): if <math>V</math> is another operator on <math>H\oplus H</math> then the internalization of the matrix product <math>UV</math> is the product <math>uv</math>.<br />
<br />
Conversely given an operator <math>u</math> on <math>H</math> we may externalize it obtaining an operator <math>U</math> on <math>H\oplus H</math>:<br />
: <math>u_{11} = p^*up</math>;<br />
: <math>u_{12} = p^*uq</math>;<br />
: <math>u_{21} = q^*up</math>;<br />
: <math>u_{22} = q^*uq</math>.<br />
<br />
The <math>u_{ij}</math>'s are called the ''components'' of <math>u</math>. Note that if <math>u</math> is generated by a partial permutation, that is if <math>u\in\mathcal{P}</math> then so are the <math>u_{ij}</math>'s. Moreover we have:<br />
: <math>u = (pp^*+qq^*)u(pp^*+qq^*) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math><br />
which entails that the four terms of the sum have pairwise disjoint domains and pairwise disjoint codomains. This can be verified for example by computing the product of the final projectors of <math>pu_{11}p^*</math> and <math>pu_{12}q^*</math>:<br />
: <math>\begin{align}<br />
(pu_{11}p^*)(pu^*_{11}p^*)(pu_{12}q^*)(qu_{12}^*p^*)<br />
&= (pp^*upp^*)(pp^*u^*pp^*)(pp^*uqq^*)(qq^*u^*pp^*)\\<br />
&= pp^*upp^*u^*pp^*uqq^*u^*pp^*\\<br />
&= pp^*u(pp^*)(u^*pp^*u)qq^*u^*pp^*\\<br />
&= pp^*u(u^*pp^*u)(pp^*)qq^*u^*pp^*\\<br />
&= pp^*uu^*pp^*u(pp^*)(qq^*)u^*pp^*\\<br />
&= 0<br />
\end{align}</math><br />
where we used the fact that all projectors in <math>\mathcal{P}</math> commute, which is in particular the case of <math>pp^*</math> and <math>u^*pp^*u</math>.<br />
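The internalization/externalization round trip can be checked concretely for operators generated by partial permutations (an illustrative Python sketch, not part of the original text; `comps` is an arbitrary example matrix):<br />

```python
def compose(f, g):
    """Partial permutations as finite dicts: (f∘g)(n) = f(g(n))."""
    return {n: f[g[n]] for n in g if g[n] in f}

def chain(*fs):
    """Compose several partial permutations, rightmost applied first."""
    out = fs[-1]
    for f in reversed(fs[:-1]):
        out = compose(f, out)
    return out

def inverse(f):
    return {m: n for n, m in f.items()}

N = 8
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}
ps, qs = inverse(p), inverse(q)

def internalize(u11, u12, u21, u22):
    """u = p u11 p* + p u12 q* + q u21 p* + q u22 q* (a disjoint sum)."""
    parts = [chain(p, u11, ps), chain(p, u12, qs),
             chain(q, u21, ps), chain(q, u22, qs)]
    u = {}
    for part in parts:
        assert not (u.keys() & part.keys())   # pairwise disjoint domains
        u.update(part)
    return u

def externalize(u):
    """u11 = p*up, u12 = p*uq, u21 = q*up, u22 = q*uq."""
    return (chain(ps, u, p), chain(ps, u, q), chain(qs, u, p), chain(qs, u, q))

comps = ({0: 1}, {}, {2: 0}, {3: 3})
u = internalize(*comps)
assert u == {0: 2, 4: 1, 7: 7}
assert externalize(u) == comps            # round trip recovers the components
```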
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are partial isometries in <math>\mathcal{P}</math> we say they are dual when <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators. A ''type'' is a subset of <math>\mathcal{P}</math> that is equal to its bidual. In particular <math>X\orth</math> is a type for any <math>X\subset\mathcal{P}</math>. We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
<br />
=== The tensor and the linear application ===<br />
<br />
Given two types <math>A</math> and <math>B</math>, we define their tensor by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by bidual to make sure that we obtain a type. From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
This is an abuse of notation as this operation is more like a direct sum than a tensor. We will stick to this notation though because it defines the interpretation of the tensor connective of linear logic.<br />
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we see that we have:<br />
: <math>A\limp B = \{u\in\mathcal{P}\text{ such that } \forall v\in A, \forall w\in B\orth,\, u.(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
=== The identity ===<br />
<br />
The interpretation of the identity is an example of the internalization/externalization procedure. Given a type <math>A</math> we are to find an operator <math>\iota</math> in type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. This is the case since <math>u</math> is in <math>A</math> and <math>v</math> in <math>A\orth</math>.<br />
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
<br />
=== The execution formula, version 1: application ===<br />
<br />
Let <math>A</math> and <math>B</math> be two types and <math>u</math> an operator in <math>A\limp B</math>. By definition this means that given <math>v</math> in <math>A</math> and <math>w</math> in <math>B\orth</math> the operator <math>u.(pvp^* + qwq^*)</math> is nilpotent.<br />
<br />
Let us define <math>u_{11}</math> to <math>u_{22}</math> by externalization as above. If we compute <math>(u.(pvp^* + qwq^*))^n</math> we see that this is a finite sum of operators of the form:<br />
# <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
# <math>p(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
# <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math> or<br />
# <math>p(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math><br />
where each of these monomials has exactly <math>n</math> factors of the form <math>u_{i1}v</math> or <math>u_{i2}w</math>.<br />
<br />
From the nilpotency of <math>u.(pvp^* + qwq^*)</math> we deduce that <math>u_{11}v</math> is nilpotent by considering the particular case where <math>w=0</math>. We also have that <math>q^*(u.(pvp^* + qwq^*))^nq</math> is null for <math>n</math> big enough, <strike>which means that monomials of type 1 above are null as soon as their length (the number of factors of the form <math>u_{i1}v</math> or <math>u_{i2}w</math>) is bigger than <math>n</math></strike>.<br />
<br />
This implies that the two following operators are nilpotent:<br />
: <math>u_{11}v</math> and<br />
: <math>\bigl(u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12}\bigr)w</math>.<br />
<br />
Conversely if these two operators are nilpotent then one can show that so is <math>u.(pvp^* + qwq^*)</math>. Moreover we have:<br />
: <math>q^*\sum_n\bigl(u.(pvp^* + qwq^*)\bigr)^nq = \sum_n\bigl((u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12})w\bigr)^n</math>.<br />
<br />
We define the ''application of <math>u</math> to <math>v</math>'' as:<br />
: <math>\mathrm{App}(u, v) = u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}</math>.<br />
Note that this is well defined as soon as <math>u_{11}v</math> is nilpotent.<br />
<br />
We summarize what has just been shown in the following theorem:<br />
<br />
{{Theorem|<br />
Let <math>u</math> be an operator, <math>A</math> and <math>B</math> be two types; the following conditions are equivalent:<br />
* <math>u\in A\limp B</math>;<br />
* for any <math>v\in A</math>, we both have:<br />
:: <math>u_{11}v</math> is nilpotent and<br />
:: <math>\mathrm{App}(u, v)\in B</math>.<br />
}}<br />
<br />
{{Corollary|<br />
Under the hypothesis of the theorem we have:<br />
: <math>A\limp B = \{u\in\mathcal{P} \text{ such that }\forall v\in A: u_{11}v\in\bot\text{ and } \mathrm{App}(u, v)\in B\}</math>.<br />
}}<br />
<br />
As an example if we compute the application of the interpretation of the identity <math>\iota</math> in type <math>A\limp A</math> to the operator <math>v\in A</math> then we have:<br />
: <math>\mathrm{App}(\iota, v) = \iota_{22} + \iota_{21}v\sum(\iota_{11}v)^k\iota_{12}</math>.<br />
Now recall that <math>\iota = pq^* + qp^*</math> so that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math> and we thus get:<br />
: <math>\mathrm{App}(\iota, v) = v</math><br />
as expected.<br />
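For operators generated by partial permutations, the execution formula can be computed by following paths: start from <math>u_{12}</math>, then bounce alternately through <math>v</math> and <math>u_{11}</math> until the path exits through <math>u_{21}</math>. An illustrative Python sketch (not part of the original text; it assumes <math>u_{11}v</math> nilpotent so that every path terminates):<br />

```python
def compose(f, g):
    """Partial permutations as finite dicts: (f∘g)(n) = f(g(n))."""
    return {n: f[g[n]] for n in g if g[n] in f}

def chain(*fs):
    """Compose several partial permutations, rightmost applied first."""
    out = fs[-1]
    for f in reversed(fs[:-1]):
        out = compose(f, out)
    return out

def inverse(f):
    return {m: n for n, m in f.items()}

N = 8
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}
ps, qs = inverse(p), inverse(q)

def App(u, v):
    """App(u,v) = u22 + u21·v·Σ_k (u11 v)^k ·u12, computed by following
    the alternating paths (well defined when u11∘v is nilpotent)."""
    u11, u12, u21, u22 = (chain(ps, u, p), chain(ps, u, q),
                          chain(qs, u, p), chain(qs, u, q))
    out = dict(u22)
    for n, a in u12.items():      # the rightmost factor u12 acts first
        while a in v:
            b = v[a]
            if b in u21:          # the path exits through u21
                out[n] = u21[b]
                break
            if b not in u11:      # the path dies: no contribution
                break
            a = u11[b]            # one more (u11 v) factor
    return out

iota = {**chain(p, qs), **chain(q, ps)}   # iota = pq* + qp*
v = {0: 3, 3: 5, 2: 2}
assert App(iota, v) == v                  # App(iota, v) = v
```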
<br />
=== The tensor rule ===<br />
<br />
Let now <math>A, A', B</math> and <math>B'</math> be types and consider two operators <math>u</math> and <math>u'</math> respectively in <math>A\limp B</math> and <math>A'\limp B'</math>. We define an operator denoted by <math>u\tens u'</math> by:<br />
: <math>\begin{align}<br />
u\tens u' &= ppp^*upp^*p^* + qpq^*upp^*p^* + ppp^*uqp^*q^* + qpq^*uqp^*q^*\\<br />
&+ pqp^*u'pq^*p^* + qqq^*u'pq^*p^* + pqp^*u'qq^*q^* + qqq^*u'qq^*q^*<br />
\end{align}</math><br />
<br />
Once again the notation is motivated by linear logic syntax and conflicts with linear algebra practice, since what we denote by <math>u\tens u'</math> is actually the internalization of the direct sum <math>u\oplus u'</math>.<br />
<br />
Indeed if we think of <math>u</math> and <math>u'</math> as the internalizations of the matrices:<br />
: <math><br />
\begin{pmatrix}u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}<br />
</math> and <math><br />
\begin{pmatrix}u'_{11} & u'_{12}\\<br />
u'_{21} & u'_{22}<br />
\end{pmatrix}</math><br />
then we may write:<br />
: <math>\begin{align}<br />
u\tens u' &= ppu_{11}p^*p^* + qpu_{21}p^*p^* + ppu_{12}p^*q^* + qpu_{22}p^*q^*\\<br />
&+ pqu'_{11}q^*p^* + qqu'_{21}q^*p^* + pqu'_{12}q^*q^* + qqu'_{22}q^*q^*<br />
\end{align}</math><br />
<br />
Thus the components of <math>u\tens u'</math> are given by:<br />
: <math>(u\tens u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math>.<br />
and we see that <math>u\tens u'</math> is actually the internalization of the matrix:<br />
: <math><br />
\begin{pmatrix}<br />
u_{11} & 0 & u_{12} & 0 \\<br />
0 & u'_{11} & 0 & u'_{12} \\<br />
u_{21} & 0 & u_{22} & 0 \\<br />
0 & u'_{21} & 0 & u'_{22} \\<br />
\end{pmatrix}<br />
</math><br />
<br />
We now show that if <math>u</math> and <math>u'</math> are in types <math>A\limp B</math> and <math>A'\limp B'</math> respectively, then <math>u\tens u'</math> is in <math>A\tens A'\limp B\tens B'</math>. For this we consider <math>v</math> and <math>v'</math> respectively in <math>A</math> and <math>A'</math>, so that <math>pvp^* + qv'q^*</math> is in <math>A\tens A'</math>, and we show that <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)\in B\tens B'</math>.<br />
<br />
Since <math>u</math> and <math>u'</math> are in <math>A\limp B</math> and <math>A'\limp B'</math> we have that <math>\mathrm{App}(u, v)</math> and <math>\mathrm{App}(u', v')</math> are respectively in <math>B</math> and <math>B'</math>, thus:<br />
: <math>p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^* \in B\tens B'</math>.<br />
<br />
We know that both <math>u_{11}v</math> and <math>u'_{11}v'</math> are nilpotent. But we have:<br />
: <math>\begin{align}<br />
\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^n<br />
&= \bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^n\\<br />
&= (pu_{11}vp^* + qu'_{11}v'q^*)^n\\<br />
&= p(u_{11}v)^np^* + q(u'_{11}v')^nq^*<br />
\end{align}</math><br />
<br />
Therefore <math>(u\tens u')_{11}(pvp^* + qv'q^*)</math> is nilpotent. So we can compute <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)</math>:<br />
: <math>\begin{align}<br />
&\mathrm{App}(u\tens u', pvp^* + qv'q^*)\\<br />
&= (u\tens u')_{22} + (u\tens u')_{21}(pvp^* + qv'q^*)\sum\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^k(u\tens u')_{12}\\<br />
&= pu_{22}p^* + qu'_{22}q^* + (pu_{21}p^* + qu'_{21}q^*)(pvp^* + qv'q^*)\sum\bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^k(pu_{12}p^* + qu'_{12}q^*)\\<br />
&= p\bigl(u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}\bigr)p^* + q\bigl(u'_{22} + u'_{21}v'\sum(u'_{11}v')^ku'_{12}\bigr)q^*\\<br />
&= p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^*<br />
\end{align}</math><br />
which thus lives in <math>B\tens B'</math>.<br />
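The equation just proved can be replayed mechanically when the operators are generated by partial permutations (modelled as finite injective dicts, as in the preliminaries); a sketch, where all function names are ours and the components are assumed to have pairwise disjoint domains and codomains:<br />

```python
def compose(phi, psi):
    # composition of partial permutations modelled as finite injective dicts
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def internalize(c):
    # c maps (i, j) to the component u_ij; column j codes the input side
    # (p for 1, q for 2), row i the output side, with p: n -> 2n, q: n -> 2n+1
    enc = {1: lambda n: 2 * n, 2: lambda n: 2 * n + 1}
    u = {}
    for (i, j), phi in c.items():
        for n, m in phi.items():
            u[enc[j](n)] = enc[i](m)
    return u

def externalize(u):
    c = {(1, 1): {}, (1, 2): {}, (2, 1): {}, (2, 2): {}}
    for x, y in u.items():
        j, n = (1, x // 2) if x % 2 == 0 else (2, x // 2)
        i, m = (1, y // 2) if y % 2 == 0 else (2, y // 2)
        c[(i, j)][n] = m
    return c

def plus(phi, psi):
    # generator of p phi p* + q psi q*
    out = {2 * n: 2 * m for n, m in phi.items()}
    out.update({2 * n + 1: 2 * m + 1 for n, m in psi.items()})
    return out

def tensor(u, u1):
    # componentwise definition: (u tens u1)_ij = p u_ij p* + q u1_ij q*
    cu, cu1 = externalize(u), externalize(u1)
    return internalize({ij: plus(cu[ij], cu1[ij]) for ij in cu})

def app(u, v):
    # execution formula App(u, v) = u22 + u21 v sum_k (u11 v)^k u12;
    # the loop stops because u11 v is nilpotent on our examples
    c = externalize(u)
    result, term = dict(c[(2, 2)]), dict(c[(1, 2)])
    while term:
        result.update(compose(c[(2, 1)], compose(v, term)))
        term = compose(c[(1, 1)], compose(v, term))
    return result

one4 = {n: n for n in range(4)}
iota = internalize({(1, 1): {}, (1, 2): one4, (2, 1): one4, (2, 2): {}})
u1 = internalize({(1, 1): {}, (1, 2): {0: 0}, (2, 1): {0: 5}, (2, 2): {}})
v, v1 = {0: 1}, {0: 0}
assert app(tensor(iota, u1), plus(v, v1)) == plus(app(iota, v), app(u1, v1))
```
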
<br />
=== Other monoidal constructions ===<br />
<br />
==== Contraposition ====<br />
<br />
Let <math>A</math> and <math>B</math> be some types; we have:<br />
: <math>A\limp B = A\orth\limpinv B\orth</math><br />
<br />
Indeed, <math>u\in A\limp B</math> means that for any <math>v</math> and <math>w</math> in respectively <math>A</math> and <math>B\orth</math> we have <math>u.(pvp^* + qwq^*)\in\bot</math> which is exactly the definition of <math>A\orth\limpinv B\orth</math>.<br />
<br />
We will denote <math>u\orth</math> the operator:<br />
: <math>u\orth = pu_{22}p^* + pu_{21}q^* + qu_{12}p^* + qu_{11}q^*</math><br />
where <math>u_{ij}</math> is given by externalization. Therefore the externalization of <math>u\orth</math> is:<br />
: <math>(u\orth)_{ij} = u_{\bar i\,\bar j}</math> where <math>\bar .</math> is defined by <math>\bar1 = 2, \bar2 = 1</math>.<br />
From this we deduce that <math>u\orth\in B\orth\limp A\orth</math> and that <math>(u\orth)\orth = u</math>.<br />
<br />
==== Commutativity ====<br />
Let <math>\sigma</math> be the operator:<br />
: <math>\sigma = ppq^*q^* +pqp^*q^* + qpq^*p^* + qqp^*p^*</math>.<br />
One can check that <math>\sigma</math> is the internalization of the operator <math>S</math> on <math>H\oplus H\oplus H\oplus H</math> defined by: <math>S(x_1\oplus x_2\oplus x_3\oplus x_4) = x_4\oplus x_3\oplus x_2\oplus x_1</math>. In particular the components of <math>\sigma</math> are:<br />
: <math>\sigma_{11} = \sigma_{22} = 0</math>;<br />
: <math>\sigma_{12} = \sigma_{21} = pq^* + qp^*</math>.<br />
<br />
Let <math>A</math> and <math>B</math> be types and <math>u</math> and <math>v</math> be operators in <math>A</math> and <math>B</math>. Then <math>pup^* + qvq^*</math> is in <math>A\tens B</math> and as <math>\sigma_{11}.(pup^* + qvq^*) = 0</math> we may compute:<br />
: <math>\begin{align}<br />
\mathrm{App}(\sigma, pup^* + qvq^*) <br />
&= \sigma_{22} + \sigma_{21}(pup^* + qvq^*)\sum(\sigma_{11}(pup^* + qvq^*))^k\sigma_{12}\\<br />
&= (pq^* + qp^*)(pup^* + qvq^*)(pq^* + qp^*)\\<br />
&= pvp^* + quq^*<br />
\end{align}</math><br />
But <math>pvp^* + quq^*\in B\tens A</math>, thus we have shown that:<br />
: <math>\sigma\in (A\tens B) \limp (B\tens A)</math>.<br />
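This computation can also be checked on generators; a sketch in the partial-permutation model (finite injective dicts; all names are ours, not the article's):<br />

```python
def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def internalize(c):
    # column j codes the input (p for 1, q for 2), row i the output
    enc = {1: lambda n: 2 * n, 2: lambda n: 2 * n + 1}
    u = {}
    for (i, j), phi in c.items():
        for n, m in phi.items():
            u[enc[j](n)] = enc[i](m)
    return u

def externalize(u):
    c = {(1, 1): {}, (1, 2): {}, (2, 1): {}, (2, 2): {}}
    for x, y in u.items():
        j, n = (1, x // 2) if x % 2 == 0 else (2, x // 2)
        i, m = (1, y // 2) if y % 2 == 0 else (2, y // 2)
        c[(i, j)][n] = m
    return c

def plus(phi, psi):
    # generator of p phi p* + q psi q*
    out = {2 * n: 2 * m for n, m in phi.items()}
    out.update({2 * n + 1: 2 * m + 1 for n, m in psi.items()})
    return out

def app(u, v):
    # App(u, v) = u22 + u21 v sum_k (u11 v)^k u12 (a finite sum here)
    c = externalize(u)
    result, term = dict(c[(2, 2)]), dict(c[(1, 2)])
    while term:
        result.update(compose(c[(2, 1)], compose(v, term)))
        term = compose(c[(1, 1)], compose(v, term))
    return result

swap = {}
for n in range(4):            # the component pq* + qp*, truncated to four pairs
    swap[2 * n] = 2 * n + 1
    swap[2 * n + 1] = 2 * n

sigma = internalize({(1, 1): {}, (1, 2): dict(swap), (2, 1): dict(swap), (2, 2): {}})
u, v = {0: 1}, {0: 2}
assert app(sigma, plus(u, v)) == plus(v, u)   # sigma exchanges the two factors
```
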
<br />
==== Distributivity ====<br />
We get distributivity by considering the operator:<br />
: <math>\delta = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math><br />
that is similarly shown to be in type <math>A\tens(B\tens C)\limp(A\tens B)\tens C</math> for any types <math>A</math>, <math>B</math> and <math>C</math>.<br />
<br />
<br />
==== Weak distributivity ====<br />
We can finally get weak distributivity thanks to the operators:<br />
: <math>\delta_1 = pppp^*q^* + ppqp^*q^*q^* + pqq^*q^*q^* + qpp^*p^*p^* + qqpq^*p^*p^* + qqqq^*p^*</math> and<br />
: <math>\delta_2 = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math>.<br />
<br />
Given three types <math>A</math>, <math>B</math> and <math>C</math> then one can show that:<br />
: <math>\delta_1</math> has type <math>((A\limp B)\tens C)\limp A\limp (B\tens C)</math> and<br />
: <math>\delta_2</math> has type <math>(A\tens(B\limp C))\limp (A\limp B)\limp C</math>.<br />
<br />
=== Execution formula, version 2: composition ===<br />
<br />
Let <math>A</math>, <math>B</math> and <math>C</math> be types and <math>u</math> and <math>v</math> be operators respectively in types <math>A\limp B</math> and <math>B\limp C</math>.<br />
<br />
As usual we will denote by <math>u_{ij}</math> and <math>v_{ij}</math> the operators obtained by externalization of <math>u</math> and <math>v</math>, ''e.g.'', <math>u_{11} = p^*up</math>, ...<br />
<br />
As <math>u</math> is in <math>A\limp B</math> we have that <math>\mathrm{App}(u, 0)=u_{22}\in B</math>; similarly, as <math>v\in B\limp C</math>, hence <math>v\orth\in C\orth\limp B\orth</math>, we have <math>\mathrm{App}(v\orth, 0) = v_{11}\in B\orth</math>. Thus <math>u_{22}v_{11}</math> is nilpotent.<br />
<br />
We define the operator <math>\mathrm{Comp}(u, v)</math> by:<br />
: <math>\begin{align}<br />
\mathrm{Comp}(u, v) &= p(u_{11} + u_{12}\sum(v_{11}u_{22})^k\,v_{11}u_{21})p^*\\<br />
&+ p(u_{12}\sum(v_{11}u_{22})^k\,v_{12})q^*\\<br />
&+ q(v_{21}\sum(u_{22}v_{11})^k\,u_{21})p^*\\<br />
&+ q(v_{22} + v_{21}\sum(u_{22}v_{11})^k\,u_{22}v_{12})q^*<br />
\end{align}</math><br />
<br />
This is well defined since <math>u_{22}v_{11}</math> is nilpotent. As an example let us compute the composition of <math>u</math> and <math>\iota</math> in type <math>B\limp B</math>; recall that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math>, so we get:<br />
: <math><br />
\mathrm{Comp}(u, \iota) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^* = u<br />
</math><br />
A similar computation shows that <math>\mathrm{Comp}(\iota, v) = v</math> (we use <math>pp^* + qq^* = 1</math> here).<br />
<br />
Coming back to the general case we claim that <math>\mathrm{Comp}(u, v)</math> is in <math>A\limp C</math>: let <math>a</math> be an operator in <math>A</math>. By computation we can check that:<br />
: <math>\mathrm{App}(\mathrm{Comp}(u, v), a) = \mathrm{App}(v, \mathrm{App}(u, a))</math>.<br />
Now since <math>u</math> is in <math>A\limp B</math>, <math>\mathrm{App}(u, a)</math> is in <math>B</math> and since <math>v</math> is in <math>B\limp C</math>, <math>\mathrm{App}(v, \mathrm{App}(u, a))</math> is in <math>C</math>.<br />
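A sketch of <math>\mathrm{Comp}</math> in the partial-permutation model, assuming the relevant nilpotency conditions so that the sums are finite (all function names are ours); it checks the two identities involving <math>\iota</math> stated above:<br />

```python
def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def internalize(c):
    # column j codes the input (p for 1, q for 2), row i the output
    enc = {1: lambda n: 2 * n, 2: lambda n: 2 * n + 1}
    u = {}
    for (i, j), phi in c.items():
        for n, m in phi.items():
            u[enc[j](n)] = enc[i](m)
    return u

def externalize(u):
    c = {(1, 1): {}, (1, 2): {}, (2, 1): {}, (2, 2): {}}
    for x, y in u.items():
        j, n = (1, x // 2) if x % 2 == 0 else (2, x // 2)
        i, m = (1, y // 2) if y % 2 == 0 else (2, y // 2)
        c[(i, j)][n] = m
    return c

def star(a, b, tail):
    # sum_k (a b)^k tail; a finite union because a b is nilpotent here
    total, term = {}, dict(tail)
    while term:
        total.update(term)
        term = compose(a, compose(b, term))
    return total

def comp(u, v):
    # the four components of Comp(u, v), exactly as in the displayed formula
    cu, cv = externalize(u), externalize(v)
    u11, u12, u21, u22 = cu[(1, 1)], cu[(1, 2)], cu[(2, 1)], cu[(2, 2)]
    v11, v12, v21, v22 = cv[(1, 1)], cv[(1, 2)], cv[(2, 1)], cv[(2, 2)]
    w11 = dict(u11)
    w11.update(compose(u12, star(v11, u22, compose(v11, u21))))
    w12 = compose(u12, star(v11, u22, v12))
    w21 = compose(v21, star(u22, v11, u21))
    w22 = dict(v22)
    w22.update(compose(v21, star(u22, v11, compose(u22, v12))))
    return internalize({(1, 1): w11, (1, 2): w12, (2, 1): w21, (2, 2): w22})

one4 = {n: n for n in range(4)}
iota = internalize({(1, 1): {}, (1, 2): one4, (2, 1): one4, (2, 2): {}})
u = internalize({(1, 1): {}, (1, 2): {0: 1}, (2, 1): {1: 0}, (2, 2): {2: 2}})
assert comp(u, iota) == u    # Comp(u, iota) = u
assert comp(iota, u) == u    # Comp(iota, v) = v
```
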
<br />
If we now consider a type <math>D</math> and an operator <math>w</math> in <math>C\limp D</math> then we have:<br />
: <math>\mathrm{Comp}(\mathrm{Comp}(u, v), w) = \mathrm{Comp}(u,<br />
\mathrm{Comp}(v, w))</math>.<br />
<br />
Putting together the results of this section we finally have:<br />
{{Theorem|<br />
Let GoI(H) be defined by:<br />
* objects are types, ''i.e.'' sets <math>A</math> of operators satisfying: <math>A\biorth = A</math>;<br />
* morphisms from <math>A</math> to <math>B</math> are operators in type <math>A\limp B</math>;<br />
* composition is given by the formula above.<br />
<br />
Then GoI(H) is a star-autonomous category.<br />
}}<br />
<br />
= The Geometry of Interaction as an abstract machine =</div>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operator algebras: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math><ref>to be precise one should say from ''the space interpreting'' <math>A</math> to ''the space interpreting'' <math>B</math></ref>, and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'' that bears some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (Multiplicative and Exponential fragment) which is enough to encode lambda-calculus. Since then Girard has proposed several improvements: first the extension to the additive connectives, known as ''Geometry of Interaction 3'', and more recently a complete reformulation using von Neumann algebras that makes it possible to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model, exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata that interact with each other through their common interface. The execution formula has also rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories, following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentioned in [[coherent semantics]] under the name ''symmetric reducibility'', and that was first put to use in [[phase semantics]]. First set a general space <math>P</math> called the ''proof space'' because this is where the interpretations of proofs will live. Make sure that <math>P</math> is a (not necessarily commutative) monoid. In the case of GoI, the proof space is a subset of the space of bounded operators on <math>\ell^2</math>.<br />
<br />
Second define a particular subset of <math>P</math> that will be denoted by <math>\bot</math>; then derive a duality on <math>P</math>: for <math>u,v\in P</math>, <math>u</math> and <math>v</math> are dual<ref>In modern terms one says that <math>u</math> and <math>v</math> are ''polar''.</ref>, iff <math>uv\in\bot</math>.<br />
<br />
For the GoI, two dualities have proved to work; we will consider the first one: nilpotency, ''i.e.'', <math>\bot</math> is the set of nilpotent operators in <math>P</math>. Let us spell this out: two operators <math>u</math> and <math>v</math> are dual if there is a nonnegative integer <math>n</math> such that <math>(uv)^n = 0</math>. Note in particular that <math>uv\in\bot</math> iff <math>vu\in\bot</math>.<br />
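To make the duality concrete, nilpotency of a composite can be tested mechanically once operators are represented by the partial permutations that generate them (introduced below). A minimal Python sketch, with a finite dict standing for a partial permutation; the names <code>compose</code> and <code>is_nilpotent</code> are ours:<br />

```python
def compose(phi, psi):
    # composition of partial permutations modelled as finite injective dicts
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def is_nilpotent(phi, bound=100):
    # phi is nilpotent iff some power of it is the empty map, i.e. the 0 operator
    power = dict(phi)
    for _ in range(bound):
        if not power:
            return True
        power = compose(phi, power)
    return False

u = {0: 1, 1: 2}   # sends 0 -> 1 -> 2, then falls out of its domain
v = {2: 0}
assert is_nilpotent(compose(u, v)) and is_nilpotent(compose(v, u))
assert not is_nilpotent({0: 0})   # a fixed point is never nilpotent
```
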
<br />
When <math>X</math> is a subset of <math>P</math> define <math>X\orth</math> as the set of elements of <math>P</math> that are dual to all elements of <math>X</math>:<br />
: <math>X\orth = \{u\in P, \forall v\in X, uv\in\bot\}</math>.<br />
<br />
This construction has a few properties that we will use without mention in the sequel. Given two subsets <math>X</math> and <math>Y</math> of <math>P</math> we have:<br />
* if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;<br />
* <math>X\subset X\biorth</math>;<br />
* <math>X\triorth = X\orth</math>.<br />
<br />
Last define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for all operator <math>v\in T\orth</math>, that is such that <math>u'v\in\bot</math> for all <math>u'\in T</math>, we have <math>uv\in\bot</math>.<br />
<br />
The real work<ref>The difficulty is to find the right duality that will make logical operations interpretable. General conditions that allow one to achieve this have been formulated by Hyland and Schalk thanks to their theory of ''double gluing''.</ref> is now to interpret logical operations, that is to associate a type to each formula, an operator to each proof and show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
We begin with a brief tour of the operations in Hilbert spaces that we use. In this article <math>H</math> will stand for the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> we denote by <math>\langle x,y\rangle</math> their scalar product:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''orthogonal'' if their scalar product is null. This notion is not to be confused with the duality of operators defined above. The ''norm'' of a vector is the square root of the scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical Hilbert basis of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kronecker symbol: <math>1</math> if <math>k=n</math>, <math>0</math> otherwise. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
An ''operator'' on <math>H</math> is a ''continuous'' linear map from <math>H</math> to <math>H</math>. Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.<br />
<br />
The set of (bounded) operators is denoted by <math>\mathcal{B}(H)</math>.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are annihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel:<br />
<br />
* <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
* <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
* <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>0</math> (the projector on the null subspace) or <math>1</math>, that is an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is self-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^* u = u</math>; as a consequence <math>uu^*</math> is a projector, the ''final projector of <math>u</math>'', the range of which is the range of <math>u</math>. Similarly <math>u^* u</math> is also a projector, the initial projector of <math>u</math>, the range of which is the domain of <math>u</math>. The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math> that is if <math>u^* u = 1</math> we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries, the equation <math>uu^* + vv^* = 1</math> means that the codomains of <math>u</math> and <math>v</math> are orthogonal and that their direct sum is <math>H</math>.<br />
<br />
=== Partial permutations and partial isometries ===<br />
<br />
We will now define our proof space which turns out to be the set of partial isometries acting as permutations on a fixed basis of <math>H</math>.<br />
<br />
More precisely a ''partial permutation'' <math>\varphi</math> on <math>\mathbb{N}</math> is a function defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> which is one-to-one onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
* <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
* if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
* the codomain of <math>\varphi\circ\psi</math> is the image of the domain.<br />
<br />
Partial permutations are well known to form a structure of ''inverse monoid'' that we detail now.<br />
<br />
A ''partial identity'' is a partial permutation <math>1_D</math> whose domain and codomain are both equal to a subset <math>D</math> on which <math>1_D</math> is the identity function. Partial identities are idempotent for composition.<br />
<br />
Among partial identities one finds the identity on the empty subset, that is the empty map, which we will denote by <math>0</math>, and the identity on <math>\mathbb{N}</math>, which we will denote by <math>1</math>. This latter permutation is the neutral element for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and which satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
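These operations can be executed directly if a partial permutation is modelled as a finite injective dict; a small sketch (function names are ours, not the article's):<br />

```python
# A partial permutation is modelled as a finite injective dict n -> phi(n).
def compose(phi, psi):
    # n is in the domain of phi∘psi iff n is in D_psi and psi(n) is in D_phi
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    # the domain of the inverse is the codomain of phi
    return {m: n for n, m in phi.items()}

phi = {0: 2, 1: 3}   # domain {0, 1}, codomain {2, 3}
psi = {2: 5, 4: 0}   # only 4 survives composition: psi(4) = 0 lies in D_phi
assert compose(phi, psi) == {4: 2}
assert compose(inverse(phi), phi) == {0: 0, 1: 1}   # the identity on D_phi
assert compose(phi, inverse(phi)) == {2: 2, 3: 3}   # the identity on C_phi
```
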
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other terms if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. As a particular case if <math>\varphi</math> is <math>1_D</math>, the partial identity on <math>D</math>, then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.<br />
<br />
{{Proposition|<br />
Let <math>\varphi</math> and <math>\psi</math> be two partial permutations. We have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
The adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the initial projector of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>.<br />
<br />
and the final projector of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u_\varphi^* = u_{1_{C_\varphi}}</math>.<br />
<br />
Projectors generated by partial identities commute; in particular we have:<br />
: <math>u_\varphi u_\varphi^*u_\psi u_\psi^* = u_\psi u_\psi^*u_\varphi u_\varphi^*</math>.<br />
}}<br />
<br />
{{Definition|<br />
The ''proof space'' <math>\mathcal{P}</math> is the set of partial isometries of the form <math>u_\varphi</math> for partial permutations <math>\varphi</math> on <math>\mathbb{N}</math>.<br />
}}<br />
<br />
In particular note that <math>0\in\mathcal{P}</math>. The set <math>\mathcal{P}</math> is a submonoid of <math>\mathcal{B}(H)</math> but it is not a subalgebra: in general given <math>u,v\in\mathcal{P}</math> we don't necessarily have <math>u+v\in\mathcal{P}</math>. However we have:<br />
<br />
{{Proposition|<br />
Let <math>u, v\in\mathcal{P}</math>. Then <math>u+v\in\mathcal{P}</math> iff <math>u</math> and <math>v</math> have orthogonal domains and codomains, that is:<br />
: <math>u+v\in\mathcal{P}</math> iff <math>uu^*vv^* = u^*uv^*v = 0</math>.<br />
}}<br />
<br />
Also note that if <math>u+v=0</math> then <math>u=v=0</math>.<br />
<br />
=== From operators to matrices: internalization/externalization ===<br />
<br />
It will be convenient to view operators on <math>H</math> as acting on <math>H\oplus H</math>, and conversely. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\rightsquigarrow p(x)+q(y)</math> where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are orthogonal, thus we have <math>p^* q = q^* p = 0</math>. Note that we also have <math>pp^* + qq^* = 1</math>.<br />
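Truncating <math>\mathbb{N}</math> to a finite range, the stated identities for <math>p</math> and <math>q</math> can be checked mechanically; a sketch in Python, where a dict models the underlying partial permutation (the truncation bound <code>N</code> is our own device):<br />

```python
N = 8   # truncation: we only represent basis indices 0 .. N-1 (resp. 0 .. 2N-1)

def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def adjoint(phi):
    # for an isometry u_phi the adjoint is u_{phi^{-1}}
    return {m: n for n, m in phi.items()}

p = {n: 2 * n for n in range(N)}        # p(e_n) = e_{2n}
q = {n: 2 * n + 1 for n in range(N)}    # q(e_n) = e_{2n+1}

one = {n: n for n in range(N)}
assert compose(adjoint(p), p) == one    # p*p = 1: full domain
assert compose(adjoint(q), q) == one    # q*q = 1
assert compose(adjoint(p), q) == {}     # p*q = 0: orthogonal codomains
pp_qq = dict(compose(p, adjoint(p)))
pp_qq.update(compose(q, adjoint(q)))
assert pp_qq == {n: n for n in range(2 * N)}   # pp* + qq* = 1
```
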
<br />
The choice of <math>p</math> and <math>q</math> is actually arbitrary, any two partial isometries with full domain and orthogonal codomains would do the job.<br />
<br />
Let <math>U</math> be an operator on <math>H\oplus H</math>. We can write <math>U</math> as a matrix:<br />
: <math>U = \begin{pmatrix}<br />
u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}</math><br />
where each <math>u_{ij}</math> operates on <math>H</math>.<br />
<br />
Now through the isomorphism <math>H\oplus H\cong H</math> we may transform <math>U</math> into the operator <math>u</math> on <math>H</math> defined by:<br />
<br />
: <math>u = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math>.<br />
<br />
We call <math>u</math> the ''internalization'' of <math>U</math>. Internalization is compatible with composition (functorial, so to speak): if <math>V</math> is another operator on <math>H\oplus H</math> then the internalization of the matrix product <math>UV</math> is the product <math>uv</math>.<br />
<br />
Conversely given an operator <math>u</math> on <math>H</math> we may externalize it obtaining an operator <math>U</math> on <math>H\oplus H</math>:<br />
: <math>u_{11} = p^*up</math>;<br />
: <math>u_{12} = p^*uq</math>;<br />
: <math>u_{21} = q^*up</math>;<br />
: <math>u_{22} = q^*uq</math>.<br />
<br />
The <math>u_{ij}</math>'s are called the ''components'' of <math>u</math>. Note that if <math>u</math> is generated by a partial permutation, that is if <math>u\in\mathcal{P}</math> then so are the <math>u_{ij}</math>'s. Moreover we have:<br />
: <math>u = (pp^*+qq^*)u(pp^*+qq^*) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math><br />
which entails that the four terms of the sum have pairwise disjoint domains and pairwise disjoint codomains. This can be verified for example by computing the product of the final projectors of <math>pu_{11}p^*</math> and <math>pu_{12}q^*</math>:<br />
: <math>\begin{align}<br />
(pu_{11}p^*)(pu^*_{11}p^*)(pu_{12}q^*)(qu_{12}^*p^*)<br />
&= (pp^*upp^*)(pp^*u^*pp^*)(pp^*uqq^*)(qq^*u^*pp^*)\\<br />
&= pp^*upp^*u^*pp^*uqq^*u^*pp^*\\<br />
&= pp^*u(pp^*)(u^*pp^*u)qq^*u^*pp^*\\<br />
&= pp^*u(u^*pp^*u)(pp^*)qq^*u^*pp^*\\<br />
&= pp^*uu^*pp^*u(pp^*)(qq^*)u^*pp^*\\<br />
&= 0<br />
\end{align}</math><br />
where we used the fact that all projectors in <math>\mathcal{P}</math> commute, which is in particular the case of <math>pp^*</math> and <math>u^*pp^*u</math>.<br />
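Internalization and externalization are straightforward to implement in the partial-permutation model, using the encoding <math>n\mapsto 2n</math> / <math>n\mapsto 2n+1</math> given by <math>p</math> and <math>q</math> above; a sketch (function names ours):<br />

```python
def internalize(c):
    # c maps (i, j) to the component u_ij; the result generates
    # p u11 p* + p u12 q* + q u21 p* + q u22 q*, with p: n -> 2n, q: n -> 2n+1;
    # column j codes the input side, row i the output side
    enc = {1: lambda n: 2 * n, 2: lambda n: 2 * n + 1}
    u = {}
    for (i, j), phi in c.items():
        for n, m in phi.items():
            u[enc[j](n)] = enc[i](m)
    return u

def externalize(u):
    c = {(1, 1): {}, (1, 2): {}, (2, 1): {}, (2, 2): {}}
    for x, y in u.items():
        j, n = (1, x // 2) if x % 2 == 0 else (2, x // 2)
        i, m = (1, y // 2) if y % 2 == 0 else (2, y // 2)
        c[(i, j)][n] = m
    return c

comps = {(1, 1): {0: 1}, (1, 2): {}, (2, 1): {}, (2, 2): {0: 0, 2: 1}}
assert externalize(internalize(comps)) == comps   # round trip
```
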
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are partial isometries in <math>\mathcal{P}</math> we say they are dual when <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators. A ''type'' is a subset of <math>\mathcal{P}</math> that is equal to its bidual. In particular <math>X\orth</math> is a type for any <math>X\subset\mathcal{P}</math>. We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
<br />
=== The tensor and the linear application ===<br />
<br />
Given two types <math>A</math> and <math>B</math>, we define their tensor by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by bidual to make sure that we obtain a type. From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
This is an abuse of notation as this operation is more like a direct sum than a tensor. We will stick to this notation though, because it defines the interpretation of the tensor connective of linear logic.<br />
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we see that we have:<br />
: <math>A\limp B = \{u\in\mathcal{P}\text{ such that } \forall v\in A, \forall w\in B\orth,\, u.(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
=== The identity ===<br />
<br />
The interpretation of the identity is an example of the internalization/externalization procedure. Given a type <math>A</math> we are to find an operator <math>\iota</math> in type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. This is the case since <math>u</math> is in <math>A</math> and <math>v</math> in <math>A\orth</math>.<br />
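The computation of <math>(\iota(pup^* + qvq^*))^2</math> can be replayed on generators; a sketch in the partial-permutation model (the names and the sample <math>u</math>, <math>v</math> are ours):<br />

```python
def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def plus(phi, psi):
    # generator of p phi p* + q psi q*
    out = {2 * n: 2 * m for n, m in phi.items()}
    out.update({2 * n + 1: 2 * m + 1 for n, m in psi.items()})
    return out

iota = {}
for n in range(3):            # iota = pq* + qp*, truncated to three pairs
    iota[2 * n] = 2 * n + 1
    iota[2 * n + 1] = 2 * n

u, v = {0: 1}, {1: 2}         # here uv = 0, so nilpotency is immediate
m = compose(iota, plus(u, v))                                # iota (pup* + qvq*)
assert compose(m, m) == plus(compose(v, u), compose(u, v))   # = pvup* + quvq*
assert compose(m, compose(m, m)) == {}                       # nilpotent, as uv is
```
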
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
<br />
=== The execution formula, version 1: application ===<br />
<br />
Let <math>A</math> and <math>B</math> be two types and <math>u</math> an operator in <math>A\limp B</math>. By definition this means that given <math>v</math> in <math>A</math> and <math>w</math> in <math>B\orth</math> the operator <math>u.(pvp^* + qwq^*)</math> is nilpotent.<br />
<br />
Let us define <math>u_{11}</math> to <math>u_{22}</math> by externalization as above. If we compute <math>(u.(pvp^* + qwq^*))^n</math> we see that this is a finite sum of operators of the form:<br />
# <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
# <math>p(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
# <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math> or<br />
# <math>p(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math><br />
where each of these monomials has exactly <math>n</math> factors of the form <math>u_{i1}v</math> or <math>u_{i2}w</math>.<br />
<br />
From the nilpotency of <math>u.(pvp^* + qwq^*)</math> we deduce that <math>u_{11}v</math> is nilpotent by considering the particular case where <math>w=0</math>. We also have that <math>q^*(u.(pvp^* + qwq^*))^nq</math> is null for <math>n</math> big enough, <strike>which means that monomials of type 1 above are null as soon as their length (the number of factors of the form <math>u_{i1}v</math> or <math>u_{i2}w</math>) is bigger than <math>n</math></strike>.<br />
<br />
This implies that the two following operators are nilpotent:<br />
: <math>u_{11}v</math> and<br />
: <math>\bigl(u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12}\bigr)w</math>.<br />
<br />
Conversely if these two operators are nilpotent then one can show that so is <math>u.(pvp^* + qwq^*)</math>. Moreover we have:<br />
: <math>q^*\sum_n\bigl(u.(pvp^* + qwq^*)\bigr)^nq = \sum_n\bigl((u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12})w\bigr)^n</math>.<br />
<br />
We define the ''application of <math>u</math> to <math>v</math>'' as:<br />
: <math>\mathrm{App}(u, v) = u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}</math>.<br />
Note that this is well defined as soon as <math>u_{11}v</math> is nilpotent.<br />
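In the partial-permutation model the sum defining <math>\mathrm{App}</math> becomes a finite union of monomials, computed by iterating until <math>(u_{11}v)^k u_{12}</math> vanishes; a sketch (function names ours; nilpotency of <math>u_{11}v</math> is assumed for termination):<br />

```python
def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def externalize(u):
    # components u_ij, with even indices coding the p side, odd the q side
    c = {(1, 1): {}, (1, 2): {}, (2, 1): {}, (2, 2): {}}
    for x, y in u.items():
        j, n = (1, x // 2) if x % 2 == 0 else (2, x // 2)
        i, m = (1, y // 2) if y % 2 == 0 else (2, y // 2)
        c[(i, j)][n] = m
    return c

def app(u, v):
    # App(u, v) = u22 + u21 v (sum_k (u11 v)^k) u12; the loop stops because
    # u11 v is assumed nilpotent, so the summands eventually vanish
    c = externalize(u)
    result = dict(c[(2, 2)])
    term = dict(c[(1, 2)])                 # (u11 v)^k u12, starting at k = 0
    while term:
        result.update(compose(c[(2, 1)], compose(v, term)))
        term = compose(c[(1, 1)], compose(v, term))
    return result

iota = {}
for n in range(4):                         # iota = pq* + qp* on four pairs
    iota[2 * n] = 2 * n + 1
    iota[2 * n + 1] = 2 * n

v = {0: 1, 1: 2}
assert app(iota, v) == v                   # the identity acts as expected
assert app(iota, {}) == {}                 # App(u, 0) = u22, here 0
```
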
<br />
We summarize what has just been shown in the following theorem:<br />
<br />
{{Theorem|<br />
Let <math>u</math> be an operator, <math>A</math> and <math>B</math> be two types; the following conditions are equivalent:<br />
* <math>u\in A\limp B</math>;<br />
* for any <math>v\in A</math>, we both have:<br />
:: <math>u_{11}v</math> is nilpotent and<br />
:: <math>\mathrm{App}(u, v)\in B</math>.<br />
}}<br />
<br />
{{Corollary|<br />
Under the hypothesis of the theorem we have:<br />
: <math>A\limp B = \{u\in\mathcal{B}(H) \text{ such that }\forall v\in A: u_{11}v\in\bot\text{ and } \mathrm{App}(u, v)\in B\}</math>.<br />
}}<br />
<br />
As an example if we compute the application of the interpretation of the identity <math>\iota</math> in type <math>A\limp A</math> to the operator <math>v\in A</math> then we have:<br />
: <math>\mathrm{App}(\iota, v) = \iota_{22} + \iota_{21}v\sum(\iota_{11}v)^k\iota_{12}</math>.<br />
Now recall that <math>\iota = pq^* + qp^*</math> so that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math> and we thus get:<br />
: <math>\mathrm{App}(\iota, v) = v</math><br />
as expected.<br />
<br />
=== The tensor rule ===<br />
<br />
Let now <math>A, A', B</math> and <math>B'</math> be types and consider two operators <math>u</math> and <math>u'</math> respectively in <math>A\limp B</math> and <math>A'\limp B'</math>. We define an operator denoted by <math>u\tens u'</math> by:<br />
: <math>\begin{align}<br />
u\tens u' &= ppp^*upp^*p^* + qpq^*upp^*p^* + ppp^*uqp^*q^* + qpq^*uqp^*q^*\\<br />
&+ pqp^*u'pq^*p^* + qqq^*u'pq^*p^* + pqp^*u'qq^*q^* + qqq^*u'qq^*q^*<br />
\end{align}</math><br />
<br />
Once again the notation is motivated by linear logic syntax and conflicts with standard linear algebra notation, since what we denote by <math>u\tens u'</math> actually is the internalization of the direct sum <math>u\oplus u'</math>.<br />
<br />
Indeed if we think of <math>u</math> and <math>u'</math> as the internalizations of the matrices:<br />
: <math><br />
\begin{pmatrix}u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}<br />
</math> and <math><br />
\begin{pmatrix}u'_{11} & u'_{12}\\<br />
u'_{21} & u'_{22}<br />
\end{pmatrix}</math><br />
then we may write:<br />
: <math>\begin{align}<br />
u\tens u' &= ppu_{11}p^*p^* + qpu_{21}p^*p^* + ppu_{12}p^*q^* + qpu_{22}p^*q^*\\<br />
&+ pqu'_{11}q^*p^* + qqu'_{21}q^*p^* + pqu'_{12}q^*q^* + qqu'_{22}q^*q^*<br />
\end{align}</math><br />
<br />
Thus the components of <math>u\tens u'</math> are given by:<br />
: <math>(u\tens u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math>.<br />
and we see that <math>u\tens u'</math> is actually the internalization of the matrix:<br />
: <math><br />
\begin{pmatrix}<br />
u_{11} & 0 & u_{12} & 0 \\<br />
0 & u'_{11} & 0 & u'_{12} \\<br />
u_{21} & 0 & u_{22} & 0 \\<br />
0 & u'_{21} & 0 & u'_{22} \\<br />
\end{pmatrix}<br />
</math><br />
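This block structure can be checked concretely in the finite-dimensional sketch used above (an assumption for illustration: rectangular isometries stand in for the <math>H\cong H\oplus H</math> identification, and the helper names are ours). We build <math>u\tens u'</math> from the component formula <math>(u\tens u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math> and verify that the blocks of <math>u</math> and <math>u'</math> reappear in orthogonal positions.

```python
import numpy as np

def make_pq(n):
    # isometries C^n -> C^{2n} with orthogonal ranges
    p = np.zeros((2 * n, n)); q = np.zeros((2 * n, n))
    p[0::2, :] = np.eye(n)
    q[1::2, :] = np.eye(n)
    return p, q

def externalize(u, p, q):
    return (p.T @ u @ p, p.T @ u @ q, q.T @ u @ p, q.T @ u @ q)

def internalize(u11, u12, u21, u22, p, q):
    return (p @ u11 @ p.T + p @ u12 @ q.T
            + q @ u21 @ p.T + q @ u22 @ q.T)

def tensor(u, up, p, q, P, Q):
    # (u tens u')_ij = p u_ij p* + q u'_ij q*, then internalize with P, Q
    uc, upc = externalize(u, p, q), externalize(up, p, q)
    comps = [p @ a @ p.T + q @ b @ q.T for a, b in zip(uc, upc)]
    return internalize(*comps, P, Q)

n = 2
p, q = make_pq(n)        # inner pair, C^n   -> C^{2n}
P, Q = make_pq(2 * n)    # outer pair, C^{2n} -> C^{4n}
rng = np.random.default_rng(0)
u = rng.normal(size=(2 * n, 2 * n))
up = rng.normal(size=(2 * n, 2 * n))
t = tensor(u, up, p, q, P, Q)
t11 = P.T @ t @ P                                 # (u tens u')_11
assert np.allclose(p.T @ t11 @ p, p.T @ u @ p)    # recovers u_11
assert np.allclose(q.T @ t11 @ q, p.T @ up @ p)   # recovers u'_11
```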
<br />
We now show that if <math>u</math> and <math>u'</math> are in types <math>A\limp B</math> and <math>A'\limp B'</math>, then <math>u\tens u'</math> is in <math>A\tens A'\limp B\tens B'</math>. For this we consider <math>v</math> and <math>v'</math> respectively in <math>A</math> and <math>A'</math>, so that <math>pvp^* + qv'q^*</math> is in <math>A\tens A'</math>, and we show that <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)\in B\tens B'</math>.<br />
<br />
Since <math>u</math> and <math>u'</math> are in <math>A\limp B</math> and <math>A'\limp B'</math> we have that <math>\mathrm{App}(u, v)</math> and <math>\mathrm{App}(u', v')</math> are respectively in <math>B</math> and <math>B'</math>, thus:<br />
: <math>p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^* \in B\tens B'</math>.<br />
<br />
We know that both <math>u_{11}v</math> and <math>u'_{11}v'</math> are nilpotent. But we have:<br />
: <math>\begin{align}<br />
\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^n<br />
&= \bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^n\\<br />
&= (pu_{11}vp^* + qu'_{11}v'q^*)^n\\<br />
&= p(u_{11}v)^np^* + q(u'_{11}v')^nq^*<br />
\end{align}</math><br />
<br />
Therefore <math>(u\tens u')_{11}(pvp^* + qv'q^*)</math> is nilpotent. So we can compute <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)</math>:<br />
: <math>\begin{align}<br />
&\mathrm{App}(u\tens u', pvp^* + qv'q^*)\\<br />
&= (u\tens u')_{22} + (u\tens u')_{21}(pvp^* + qv'q^*)\sum\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^k(u\tens u')_{12}\\<br />
&= pu_{22}p^* + qu'_{22}q^* + (pu_{21}p^* + qu'_{21}q^*)(pvp^* + qv'q^*)\sum\bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^k(pu_{12}p^* + qu'_{12}q^*)\\<br />
&= p\bigl(u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}\bigr)p^* + q\bigl(u'_{22} + u'_{21}v'\sum(u'_{11}v')^ku'_{12}\bigr)q^*\\<br />
&= p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^*<br />
\end{align}</math><br />
and thus lives in <math>B\tens B'</math>.<br />
<br />
=== Other monoidal constructions ===<br />
<br />
==== Contraposition ====<br />
<br />
Let <math>A</math> and <math>B</math> be some types; we have:<br />
: <math>A\limp B = A\orth\limpinv B\orth</math><br />
<br />
Indeed, <math>u\in A\limp B</math> means that for any <math>v</math> and <math>w</math> in respectively <math>A</math> and <math>B\orth</math> we have <math>u.(pvp^* + qwq^*)\in\bot</math> which is exactly the definition of <math>A\orth\limpinv B\orth</math>.<br />
<br />
We will denote <math>u\orth</math> the operator:<br />
: <math>u\orth = pu_{22}p^* + pu_{21}q^* + qu_{12}p^* + qu_{11}q^*</math><br />
where <math>u_{ij}</math> is given by externalization. Therefore the externalization of <math>u\orth</math> is:<br />
: <math>(u\orth)_{ij} = u_{\bar i\,\bar j}</math> where <math>\bar .</math> is defined by <math>\bar1 = 2, \bar2 = 1</math>.<br />
From this we deduce that <math>u\orth\in B\orth\limp A\orth</math> and that <math>(u\orth)\orth = u</math>.<br />
<br />
==== Commutativity ====<br />
Let <math>\sigma</math> be the operator:<br />
: <math>\sigma = ppq^*q^* +pqp^*q^* + qpq^*p^* + qqp^*p^*</math>.<br />
One can check that <math>\sigma</math> is the internalization of the operator <math>S</math> on <math>H\oplus H\oplus H\oplus H</math> defined by: <math>S(x_1\oplus x_2\oplus x_3\oplus x_4) = x_4\oplus x_3\oplus x_2\oplus x_1</math>. In particular the components of <math>\sigma</math> are:<br />
: <math>\sigma_{11} = \sigma_{22} = 0</math>;<br />
: <math>\sigma_{12} = \sigma_{21} = pq^* + qp^*</math>.<br />
<br />
Let <math>A</math> and <math>B</math> be types and <math>u</math> and <math>v</math> be operators in <math>A</math> and <math>B</math>. Then <math>pup^* + qvq^*</math> is in <math>A\tens B</math> and as <math>\sigma_{11}.(pup^* + qvq^*) = 0</math> we may compute:<br />
: <math>\begin{align}<br />
\mathrm{App}(\sigma, pup^* + qvq^*) <br />
&= \sigma_{22} + \sigma_{21}(pup^* + qvq^*)\sum(\sigma_{11}(pup^* + qvq^*))^k\sigma_{12}\\<br />
&= (pq^* + qp^*)(pup^* + qvq^*)(pq^* + qp^*)\\<br />
&= pvp^* + quq^*<br />
\end{align}</math><br />
But <math>pvp^* + quq^*\in B\tens A</math>, thus we have shown that:<br />
: <math>\sigma\in (A\tens B) \limp (B\tens A)</math>.<br />
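The computation of <math>\mathrm{App}(\sigma, pup^* + qvq^*)</math> can be replayed in the finite-dimensional sketch (an assumption: rectangular isometries replace the <math>H\cong H\oplus H</math> identification, and `make_pq`/`app` are our own helper names). Since <math>\sigma_{11} = 0</math>, the series in the application formula collapses immediately and only the swap <math>pq^* + qp^*</math> acts.

```python
import numpy as np

def make_pq(n):
    p = np.zeros((2 * n, n)); q = np.zeros((2 * n, n))
    p[0::2, :] = np.eye(n)
    q[1::2, :] = np.eye(n)
    return p, q

def app(U, v, P, Q):
    # App(U, v) = U22 + U21 v sum_k (U11 v)^k U12
    U11, U12 = P.T @ U @ P, P.T @ U @ Q
    U21, U22 = Q.T @ U @ P, Q.T @ U @ Q
    n = v.shape[0]
    s, m = np.zeros((n, n)), np.eye(n)
    while np.any(m != 0):
        s, m = s + m, m @ (U11 @ v)
    return U22 + U21 @ v @ s @ U12

n = 2
p, q = make_pq(n)        # inner pair, C^n   -> C^{2n}
P, Q = make_pq(2 * n)    # outer pair, C^{2n} -> C^{4n}
swap = p @ q.T + q @ p.T                  # sigma_12 = sigma_21 = pq* + qp*
sigma = P @ swap @ Q.T + Q @ swap @ P.T   # sigma_11 = sigma_22 = 0
rng = np.random.default_rng(1)
u, v = rng.normal(size=(n, n)), rng.normal(size=(n, n))
w = p @ u @ p.T + q @ v @ q.T             # element of A tens B
# App(sigma, pup* + qvq*) = pvp* + quq*
assert np.allclose(app(sigma, w, P, Q), p @ v @ p.T + q @ u @ q.T)
```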
<br />
==== Distributivity ====<br />
We get distributivity by considering the operator:<br />
: <math>\delta = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math><br />
that is similarly shown to be in type <math>A\tens(B\tens C)\limp(A\tens B)\tens C</math> for any types <math>A</math>, <math>B</math> and <math>C</math>.<br />
<br />
<br />
==== Weak distributivity ====<br />
We can finally get weak distributivity thanks to the operators:<br />
: <math>\delta_1 = pppp^*q^* + ppqp^*q^*q^* + pqq^*q^*q^* + qpp^*p^*p^* + qqp q^*p^*p^* + qqq q^*p^*</math> and<br />
: <math>\delta_2 = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math>.<br />
<br />
Given three types <math>A</math>, <math>B</math> and <math>C</math> then one can show that:<br />
: <math>\delta_1</math> has type <math>((A\limp B)\tens C)\limp A\limp (B\tens C)</math> and<br />
: <math>\delta_2</math> has type <math>(A\tens(B\limp C))\limp (A\limp B)\limp C</math>.<br />
<br />
=== Execution formula, version 2: composition ===<br />
<br />
Let <math>A</math>, <math>B</math> and <math>C</math> be types and <math>u</math> and <math>v</math> be operators respectively in types <math>A\limp B</math> and <math>B\limp C</math>.<br />
<br />
As usual we will denote <math>u_{ij}</math> and <math>v_{ij}</math> the operators obtained by externalization of <math>u</math> and <math>v</math>, e.g., <math>u_{11} = p^*up</math>, ...<br />
<br />
As <math>u</math> is in <math>A\limp B</math> we have that <math>\mathrm{App}(u, 0)=u_{22}\in B</math>; similarly as <math>v\in B\limp C</math>, thus <math>v\orth\in C\orth\limp B\orth</math>, we have <math>\mathrm{App}(v\orth, 0) = v_{11}\in B\orth</math>. Thus <math>u_{22}v_{11}</math> is nilpotent.<br />
<br />
We define the operator <math>\mathrm{Comp}(u, v)</math> by:<br />
: <math>\begin{align}<br />
\mathrm{Comp}(u, v) &= p(u_{11} + u_{12}\sum(v_{11}u_{22})^k\,v_{11}u_{21})p^*\\<br />
&+ p(u_{12}\sum(v_{11}u_{22})^k\,v_{12})q^*\\<br />
&+ q(v_{21}\sum(u_{22}v_{11})^k\,u_{21})p^*\\<br />
&+ q(v_{22} + v_{21}\sum(u_{22}v_{11})^k\,u_{22}v_{12})q^*<br />
\end{align}</math><br />
<br />
This is well defined since <math>u_{22}v_{11}</math> is nilpotent. As an example let us compute the composition of <math>u</math> and <math>\iota</math> in type <math>B\limp B</math>; recall that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math>, so we get:<br />
: <math><br />
\mathrm{Comp}(u, \iota) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^* = u<br />
</math><br />
A similar computation shows that <math>\mathrm{Comp}(\iota, v) = v</math> (we use <math>pp^* + qq^* = 1</math> here).<br />
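Both identity laws can be checked numerically in the finite-dimensional sketch (an assumption for illustration: rectangular isometries stand in for <math>H\cong H\oplus H</math>, and the helper names `make_pq`, `nil_sum`, `comp` are ours). The series <math>\sum(v_{11}u_{22})^k</math> and <math>\sum(u_{22}v_{11})^k</math> are summed until the power vanishes; with <math>v = \iota</math> they reduce to <math>1</math> immediately since <math>\iota_{11} = 0</math>.

```python
import numpy as np

def make_pq(n):
    p = np.zeros((2 * n, n)); q = np.zeros((2 * n, n))
    p[0::2, :] = np.eye(n)
    q[1::2, :] = np.eye(n)
    return p, q

def nil_sum(a):
    # sum_k a^k, finite because a is assumed nilpotent
    s, m = np.zeros_like(a), np.eye(a.shape[0])
    while np.any(m != 0):
        s, m = s + m, m @ a
    return s

def comp(u, v, p, q):
    u11, u12 = p.T @ u @ p, p.T @ u @ q
    u21, u22 = q.T @ u @ p, q.T @ u @ q
    v11, v12 = p.T @ v @ p, p.T @ v @ q
    v21, v22 = q.T @ v @ p, q.T @ v @ q
    s1 = nil_sum(v11 @ u22)   # sum (v11 u22)^k
    s2 = nil_sum(u22 @ v11)   # sum (u22 v11)^k
    return (p @ (u11 + u12 @ s1 @ v11 @ u21) @ p.T
            + p @ (u12 @ s1 @ v12) @ q.T
            + q @ (v21 @ s2 @ u21) @ p.T
            + q @ (v22 + v21 @ s2 @ u22 @ v12) @ q.T)

n = 3
p, q = make_pq(n)
iota = p @ q.T + q @ p.T
rng = np.random.default_rng(2)
u = rng.normal(size=(2 * n, 2 * n))
assert np.allclose(comp(u, iota, p, q), u)   # Comp(u, iota) = u
assert np.allclose(comp(iota, u, p, q), u)   # Comp(iota, v) = v
```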
<br />
Coming back to the general case we claim that <math>\mathrm{Comp}(u, v)</math> is in <math>A\limp C</math>: let <math>a</math> be an operator in <math>A</math>. By computation we can check that:<br />
: <math>\mathrm{App}(\mathrm{Comp}(u, v), a) = \mathrm{App}(v, \mathrm{App}(u, a))</math>.<br />
Now since <math>u</math> is in <math>A\limp B</math>, <math>\mathrm{App}(u, a)</math> is in <math>B</math> and since <math>v</math> is in <math>B\limp C</math>, <math>\mathrm{App}(v, \mathrm{App}(u, a))</math> is in <math>C</math>.<br />
<br />
If we now consider a type <math>D</math> and an operator <math>w</math> in <math>C\limp D</math> then we have:<br />
: <math>\mathrm{Comp}(\mathrm{Comp}(u, v), w) = \mathrm{Comp}(u, \mathrm{Comp}(v, w))</math>.<br />
<br />
Putting together the results of this section we finally have:<br />
{{Theorem|<br />
Let GoI(H) be defined by:<br />
* objects are types, ''i.e.'' sets <math>A</math> of operators satisfying: <math>A\biorth = A</math>;<br />
* morphisms from <math>A</math> to <math>B</math> are operators in type <math>A\limp B</math>;<br />
* composition is given by the formula above.<br />
<br />
Then GoI(H) is a star-autonomous category.<br />
}}<br />
<br />
= The Geometry of Interaction as an abstract machine =</div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interactionGeometry of interaction2010-04-21T06:22:22Z<p>Laurent Regnier: /* The execution formula, version 1: application */ false assertion, has to find a solution</p>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operator algebra: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math><ref>to be precise one should say from ''the space interpreting'' <math>A</math> to ''the space interpreting'' <math>B</math></ref>, and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'' that bears some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (Multiplicative and Exponential fragment) which is enough to encode lambda-calculus. Since then Girard proposed several improvements: firstly the extension to the additive connectives known as ''Geometry of Interaction 3'' and more recently a complete reformulation using von Neumann algebras that allows one to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata that interact with each other through their common interface. Also the execution formula has rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentioned in [[coherent semantics]] under the name ''symmetric reducibility''. First set a general space called the ''proof space'' because this is where the interpretations of proofs will live. In the case of GoI, the proof space is the space of bounded operators on <math>\ell^2</math>. Note that the proof space generally contains much more objects than interpretations of proofs; in the GoI case we will see that interpretations of proofs happen to be some very peculiar kind of partial isometries.<br />
<br />
Second define a duality on this space that will be denoted as <math>u\perp v</math>. For the GoI, two dualities have proved to work, the first one being nilpotency: two operators <math>u</math> and <math>v</math> are dual if <math>uv</math> is nilpotent, that is, if there is a nonnegative integer <math>n</math> such that <math>(uv)^n = 0</math>. We will denote by <math>\bot</math> the set of nilpotent operators so that the duality reads:<br />
: <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
This duality applies to operators and shouldn't be confused with orthogonality of vectors. To enforce this we will reserve the notation <math>\perp</math> exclusively for the duality of operators and never use it for orthogonality of vectors.<br />
<br />
Last define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for every operator <math>v\in T\orth</math>, that is such that <math>u'v\in\bot</math> for all <math>u'\in T</math>, we have <math>uv\in\bot</math>. In particular note that <math>0</math> belongs to any type.<br />
<br />
It remains now to interpret logical operations, that is associate a type to each formula, an object to each proof and show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
We begin with a brief tour of the operations in Hilbert spaces that will be used in the sequel. In this article <math>H</math> will stand for the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> we denote by <math>\langle x,y\rangle</math> their scalar product:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''orthogonal'' if their scalar product is null. This notion is not to be confused with the orthogonality of operators defined above. The ''norm'' of a vector is the square root of the scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical Hilbert basis of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kronecker symbol. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
In this article we call ''operator'' on <math>H</math> a ''continuous'' linear map from <math>H</math> to <math>H</math>. Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.<br />
<br />
The set of (bounded) operators is denoted <math>\mathcal{B}(H)</math>. This is our proof space.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are annihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel:<br />
<br />
: <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
: <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
: <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>0</math> (the projector on the null subspace) or <math>1</math>, that is an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is self-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^* u = u</math>; as a consequence <math>uu^*</math> is a projector the range of which is the range of <math>u</math>. Similarly <math>u^* u</math> is also a projector the range of which is the domain of <math>u</math>. The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math> that is if <math>u^* u = 1</math> we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries, the equation <math>uu^* + vv^* = 1</math> means that the codomains of <math>u</math> and <math>v</math> are orthogonal and that their direct sum is <math>H</math>.<br />
<br />
=== Partial permutations and partial isometries ===<br />
<br />
It turns out that most of the operators needed to interpret logical operations are generated by ''partial permutations'' on the basis, which in particular entails that they are partial isometries.<br />
<br />
More precisely a partial permutation <math>\varphi</math> on <math>\mathbb{N}</math> is a function defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> which is one-to-one onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
: <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
: if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
: the codomain of <math>\varphi\circ\psi</math> is the image of the domain.<br />
<br />
Partial permutations are well known to form a structure of ''inverse monoid'' that we detail now.<br />
<br />
A ''partial identity'' is a partial permutation <math>1_D</math> whose domain and codomain are both equal to a subset <math>D</math> on which <math>1_D</math> is the identity function. Partial identities are idempotent for composition.<br />
<br />
Among partial identities one finds the identity on the empty subset, that is the empty map, that we will denote as <math>0</math>, and the identity on <math>\mathbb{N}</math> that we will denote by <math>1</math>. This latter is the neutral element for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and which satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other terms if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. As a particular case if <math>\varphi</math> is <math>1_D</math> the partial identity on <math>D</math> then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.<br />
<br />
If <math>\psi</math> is another partial permutation then we have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
If <math>\varphi</math> is a partial permutation then the adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the projector on the domain of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>.<br />
<br />
and similarly the projector on the codomain of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u_\varphi^* = u_{1_{C_\varphi}}</math>.<br />
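These identities are easy to check on a finite truncation of the basis (an assumption for illustration: we work on <math>\mathbb{C}^n</math> rather than <math>\ell^2</math>, with partial permutations given as Python dicts; the helper names `iso` and `compose` are ours).

```python
import numpy as np

def iso(phi, n):
    # partial isometry u_phi on C^n: u_phi(e_k) = e_{phi(k)} if defined, else 0
    u = np.zeros((n, n))
    for k, pk in phi.items():
        u[pk, k] = 1.0
    return u

def compose(phi, psi):
    # phi o psi: defined on k in D_psi with psi(k) in D_phi
    return {k: phi[psi[k]] for k in psi if psi[k] in phi}

n = 6
phi = {0: 2, 1: 3, 4: 5}   # a partial permutation of {0, ..., 5}
psi = {2: 0, 3: 4, 5: 1}
# u_phi u_psi = u_{phi o psi}
assert np.allclose(iso(phi, n) @ iso(psi, n), iso(compose(phi, psi), n))
# adjoint (here: transpose) = inverse partial permutation
inv = {v: k for k, v in phi.items()}
assert np.allclose(iso(phi, n).T, iso(inv, n))
# u*_phi u_phi projects onto the span of (e_k), k in the domain of phi
proj = iso(phi, n).T @ iso(phi, n)
assert np.allclose(proj, iso({k: k for k in phi}, n))
```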
<br />
{{Proposition|<br />
Let <math>u_\varphi</math> and <math>u_\psi</math> be two partial isometries generated by partial permutations. Then we have:<br />
: <math>u_\varphi + u_\psi = 0</math> iff <math>u_\varphi = u_\psi = 0</math>,<br />
that is iff <math>\varphi</math> and <math>\psi</math> are the nowhere defined partial permutation.<br />
}} Indeed suppose <math>u_\varphi + u_\psi = 0</math> then for any <math>n</math> we have <math>u_\varphi(e_n) + u_\psi(e_n) = e_{\varphi(n)} + e_{\psi(n)} = 0</math> which is possible only if <math>\varphi(n)</math> and <math>\psi(n)</math> are undefined.<br />
<br />
=== From operators to matrices: internalization/externalization ===<br />
<br />
It will be convenient to view operators on <math>H</math> as acting on <math>H\oplus H</math>, and conversely. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\mapsto p(x)+q(y)</math> where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are orthogonal, thus we have <math>p^* q = q^* p = 0</math>. Note that we also have <math>pp^* + qq^* = 1</math>.<br />
<br />
The choice of <math>p</math> and <math>q</math> is actually arbitrary, any two partial isometries with full domain and orthogonal codomains would do the job.<br />
<br />
Let <math>U</math> be an operator on <math>H\oplus H</math>. We can write <math>U</math> as a matrix:<br />
: <math>U = \begin{pmatrix}<br />
u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}</math><br />
where each <math>u_{ij}</math> operates on <math>H</math>.<br />
<br />
Now through the isomorphism <math>H\oplus H\cong H</math> we may transform <math>U</math> into the operator <math>u</math> on <math>H</math> defined by:<br />
<br />
: <math>u = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math>.<br />
<br />
We call <math>u</math> the ''internalization'' of <math>U</math>. Internalization is compatible with composition (functorial so to speak): if <math>V</math> is another operator on <math>H\oplus H</math> then the internalization of the matrix product <math>UV</math> is the product <math>uv</math>.<br />
<br />
Conversely given an operator <math>u</math> on <math>H</math> we may externalize it obtaining an operator <math>U</math> on <math>H\oplus H</math>:<br />
: <math>u_{11} = p^*up</math>;<br />
: <math>u_{12} = p^*uq</math>;<br />
: <math>u_{21} = q^*up</math>;<br />
: <math>u_{22} = q^*uq</math>.<br />
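The round trip between an operator and its matrix, and the functoriality of internalization, can be checked in a finite-dimensional sketch (an assumption for illustration: here <math>p, q</math> are rectangular isometries <math>\mathbb{C}^n\to\mathbb{C}^{2n}</math>, so internalization is literally the isomorphism <math>\mathbb{C}^n\oplus\mathbb{C}^n\cong\mathbb{C}^{2n}</math>; the helper names are ours).

```python
import numpy as np

n = 3
p = np.zeros((2 * n, n)); q = np.zeros((2 * n, n))
p[0::2, :] = np.eye(n)   # p(e_k) = e_{2k}
q[1::2, :] = np.eye(n)   # q(e_k) = e_{2k+1}

# the defining identities of p and q
assert np.allclose(p.T @ p, np.eye(n)) and np.allclose(q.T @ q, np.eye(n))
assert np.allclose(p.T @ q, 0)
assert np.allclose(p @ p.T + q @ q.T, np.eye(2 * n))

def internalize(b, p, q):
    # 2x2 block matrix ((u11, u12), (u21, u22)) -> operator on the big space
    (u11, u12), (u21, u22) = b
    return p @ u11 @ p.T + p @ u12 @ q.T + q @ u21 @ p.T + q @ u22 @ q.T

def externalize(u, p, q):
    return ((p.T @ u @ p, p.T @ u @ q), (q.T @ u @ p, q.T @ u @ q))

rng = np.random.default_rng(3)
B = ((rng.normal(size=(n, n)), rng.normal(size=(n, n))),
     (rng.normal(size=(n, n)), rng.normal(size=(n, n))))
u = internalize(B, p, q)
# externalization recovers the blocks
E = externalize(u, p, q)
assert all(np.allclose(E[i][j], B[i][j]) for i in range(2) for j in range(2))
# functoriality: internalization of the matrix product is the product
C = ((np.eye(n), np.zeros((n, n))), (np.eye(n), np.eye(n)))
v = internalize(C, p, q)
BC = tuple(tuple(B[i][0] @ C[0][j] + B[i][1] @ C[1][j] for j in range(2))
           for i in range(2))
assert np.allclose(u @ v, internalize(BC, p, q))
```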
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are operators we denote by <math>u\perp v</math> the fact that <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators so that <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
If <math>X</math> is set of operators also recall that <math>X\orth</math> denotes the set of dual operators:<br />
: <math>X\orth = \{v\in \mathcal{B}(H) \text{ such that }\forall u\in X, uv \in\bot\}</math>.<br />
<br />
There are a few properties of this duality that we will use without mention in the sequel; let <math>X</math> and <math>Y</math> be sets of operators:<br />
: <math>X\subset X\biorth</math>;<br />
: <math>X\orth = X\triorth</math>.<br />
: if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;<br />
<br />
In particular <math>X\orth</math> is always a type (equal to its biorthogonal). We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
<br />
=== The tensor and the linear application ===<br />
<br />
Given two types <math>A</math> and <math>B</math>, we define their tensor by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by biorthogonal to make sure that we obtain a type. From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
This is an abuse of notation as this operation is more like a direct sum than a tensor. We will stick to this notation though because it defines the interpretation of the tensor connective of linear logic.<br />
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we see that we have:<br />
: <math>A\limp B = \{u\in\mathcal{B}(H)\text{ such that } \forall v\in A, \forall w\in B\orth,\, u.(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
=== The identity ===<br />
<br />
The interpretation of the identity is an example of the internalization/externalization procedure. Given a type <math>A</math> we are to find an operator <math>\iota</math> in type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. It is the case since <math>u</math> is in <math>A</math> and <math>v</math> in <math>A\orth</math>.<br />
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
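The nilpotency argument for <math>\iota</math> can be tested numerically (an assumption for illustration: we use the finite-dimensional sketch with rectangular isometries, and pick a concrete nilpotent <math>uw</math>; the helper name `is_nilpotent` is ours).

```python
import numpy as np

n = 3
p = np.zeros((2 * n, n)); q = np.zeros((2 * n, n))
p[0::2, :] = np.eye(n)
q[1::2, :] = np.eye(n)
iota = p @ q.T + q @ p.T

def is_nilpotent(a):
    # a^k = 0 for some k <= dim in finite dimension
    m = np.eye(a.shape[0])
    for _ in range(a.shape[0]):
        m = m @ a
    return np.allclose(m, 0)

u = np.triu(np.ones((n, n)), k=1)   # strictly upper triangular: nilpotent
w = np.eye(n)                        # so uw = u is nilpotent
x = p @ u @ p.T + q @ w @ q.T
# iota(pup* + qwq*) = qup* + pwq*
assert np.allclose(iota @ x, q @ u @ p.T + p @ w @ q.T)
assert is_nilpotent(iota @ x)        # nilpotent because uw is
```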
<br />
=== The execution formula, version 1: application ===<br />
<br />
Let <math>A</math> and <math>B</math> be two types and <math>u</math> an operator in <math>A\limp B</math>. By definition this means that given <math>v</math> in <math>A</math> and <math>w</math> in <math>B\orth</math> the operator <math>u.(pvp^* + qwq^*)</math> is nilpotent.<br />
<br />
Let us define <math>u_{11}</math> to <math>u_{22}</math> by externalization as above. If we compute <math>(u.(pvp^* + qwq^*))^n</math> we see that this is a finite sum of operators of the form:<br />
# <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
# <math>p(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
# <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math> or<br />
# <math>p(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math><br />
where each of these monomials has exactly <math>n</math> factors of the form <math>u_{i1}v</math> or <math>u_{i2}w</math>.<br />
<br />
From the nilpotency of <math>u.(pvp^* + qwq^*)</math> we deduce that <math>u_{11}v</math> is nilpotent by considering the particular case where <math>w=0</math>. We also have that <math>q^*(u.(pvp^* + qwq^*))^nq</math> is null for <math>n</math> big enough, <strike>which means that monomials of type 1 above are null as soon as their length (the number of factors of the form <math>u_{i1}v</math> or <math>u_{i2}w</math>) is bigger than <math>n</math></strike>.<br />
<br />
This implies that the two following operators are nilpotent:<br />
: <math>u_{11}v</math> and<br />
: <math>\bigl(u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12}\bigr)w</math>.<br />
<br />
Conversely if these two operators are nilpotent then one can show that so is <math>u.(pvp^* + qwq^*)</math>. Moreover we have:<br />
: <math>q^*\sum_n\bigl(u.(pvp^* + qwq^*)\bigr)^nq = \sum_n\bigl((u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12})w\bigr)^n</math>.<br />
<br />
We define the ''application of <math>u</math> to <math>v</math>'' as:<br />
: <math>\mathrm{App}(u, v) = u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}</math>.<br />
Note that this is well defined as soon as <math>u_{11}v</math> is nilpotent.<br />
<br />
We summarize what has just been shown in the following theorem:<br />
<br />
{{Theorem|<br />
Let <math>u</math> be an operator, <math>A</math> and <math>B</math> be two types; the following conditions are equivalent:<br />
* <math>u\in A\limp B</math>;<br />
* for any <math>v\in A</math>, we both have:<br />
:: <math>u_{11}v</math> is nilpotent and<br />
:: <math>\mathrm{App}(u, v)\in B</math>.<br />
}}<br />
<br />
{{Corollary|<br />
Under the hypothesis of the theorem we have:<br />
: <math>A\limp B = \{u\in\mathcal{B}(H) \text{ such that }\forall v\in A: u_{11}v\in\bot\text{ and } \mathrm{App}(u, v)\in B\}</math>.<br />
}}<br />
<br />
As an example if we compute the application of the interpretation of the identity <math>\iota</math> in type <math>A\limp A</math> to the operator <math>v\in A</math> then we have:<br />
: <math>\mathrm{App}(\iota, v) = \iota_{22} + \iota_{21}v\sum(\iota_{11}v)^k\iota_{12}</math>.<br />
Now recall that <math>\iota = pq^* + qp^*</math> so that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math> and we thus get:<br />
: <math>\mathrm{App}(\iota, v) = v</math><br />
as expected.<br />
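This computation is easy to replay on a finite truncation. The following sketch (ours, not part of the original construction) represents the externalized components <math>u_{ij}</math> and the argument <math>v</math> as small integer matrices, sums the series <math>\sum_k(u_{11}v)^k</math> until the (assumed nilpotent) power vanishes, and checks that the identity components <math>\iota_{11} = \iota_{22} = 0</math>, <math>\iota_{12} = \iota_{21} = 1</math> indeed give <math>\mathrm{App}(\iota, v) = v</math>:<br />

```python
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def madd(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def App(u11, u12, u21, u22, v):
    """App(u, v) = u22 + u21 v (sum_k (u11 v)^k) u12; the loop terminates
    because u11 v is assumed nilpotent."""
    n = len(v)
    zero = [[0] * n for _ in range(n)]
    acc = zero
    power = [[int(i == j) for j in range(n)] for i in range(n)]  # (u11 v)^0
    while power != zero:
        acc = madd(acc, power)
        power = matmul(power, matmul(u11, v))
    return madd(u22, matmul(matmul(u21, v), matmul(acc, u12)))

I2 = [[1, 0], [0, 1]]
Z2 = [[0, 0], [0, 0]]
v = [[2, 1], [0, 3]]
# the identity: iota11 = iota22 = 0, iota12 = iota21 = 1, so App(iota, v) = v
assert App(Z2, I2, I2, Z2, v) == v
```

The `while` loop is the finite Neumann series; it terminates precisely because <math>u_{11}v</math> is nilpotent.<br />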
<br />
=== The tensor rule ===<br />
<br />
Let now <math>A, A', B</math> and <math>B'</math> be types and consider two operators <math>u</math> and <math>u'</math> respectively in <math>A\limp B</math> and <math>A'\limp B'</math>. We define an operator denoted <math>u\tens u'</math> by:<br />
: <math>\begin{align}<br />
u\tens u' &= ppp^*upp^*p^* + qpq^*upp^*p^* + ppp^*uqp^*q^* + qpq^*uqp^*q^*\\<br />
&+ pqp^*u'pq^*p^* + qqq^*u'pq^*p^* + pqp^*u'qq^*q^* + qqq^*u'qq^*q^*<br />
\end{align}</math><br />
<br />
Once again the notation is motivated by linear logic syntax and conflicts with linear algebra practice, since what we denote by <math>u\tens u'</math> actually is the internalization of the direct sum <math>u\oplus u'</math>.<br />
<br />
Indeed if we think of <math>u</math> and <math>u'</math> as the internalizations of the matrices:<br />
: <math><br />
\begin{pmatrix}u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}<br />
</math> and <math><br />
\begin{pmatrix}u'_{11} & u'_{12}\\<br />
u'_{21} & u'_{22}<br />
\end{pmatrix}</math><br />
then we may write:<br />
: <math>\begin{align}<br />
u\tens u' &= ppu_{11}p^*p^* + qpu_{21}p^*p^* + ppu_{12}p^*q^* + qpu_{22}p^*q^*\\<br />
&+ pqu'_{11}q^*p^* + qqu'_{21}q^*p^* + pqu'_{12}q^*q^* + qqu'_{22}q^*q^*<br />
\end{align}</math><br />
<br />
Thus the components of <math>u\tens u'</math> are given by:<br />
: <math>(u\tens u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math>,<br />
and we see that <math>u\tens u'</math> is actually the internalization of the matrix:<br />
: <math><br />
\begin{pmatrix}<br />
u_{11} & 0 & u_{12} & 0 \\<br />
0 & u'_{11} & 0 & u'_{12} \\<br />
u_{21} & 0 & u_{22} & 0 \\<br />
0 & u'_{21} & 0 & u'_{22} \\<br />
\end{pmatrix}<br />
</math><br />
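On a finite truncation this interleaving can be checked directly. In the sketch below (our code; <math>H</math> is truncated to <math>\mathbb{C}^N</math>, so <math>p</math> and <math>q</math> become rectangular 0/1 matrices), we build a component <math>pu_{ij}p^* + qu'_{ij}q^*</math> and verify that externalizing it recovers <math>u_{ij}</math> and <math>u'_{ij}</math> with zero cross terms, which is exactly the interleaved block matrix above:<br />

```python
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def madd(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def adjoint(a):  # real 0/1 entries, so the adjoint is the transpose
    return [list(col) for col in zip(*a)]

N = 2
# p(e_n) = e_{2n} and q(e_n) = e_{2n+1} as 2N x N matrices
p = [[1 if i == 2 * j else 0 for j in range(N)] for i in range(2 * N)]
q = [[1 if i == 2 * j + 1 else 0 for j in range(N)] for i in range(2 * N)]

def component(uij, upij):
    """(u tens u')_{ij} = p u_{ij} p* + q u'_{ij} q*."""
    return madd(matmul(matmul(p, uij), adjoint(p)),
                matmul(matmul(q, upij), adjoint(q)))

uij = [[1, 2], [3, 4]]
upij = [[5, 6], [7, 8]]
c = component(uij, upij)
ps, qs = adjoint(p), adjoint(q)
# externalizing the component recovers u_ij, u'_ij and zero cross terms
assert matmul(matmul(ps, c), p) == uij
assert matmul(matmul(qs, c), q) == upij
assert matmul(matmul(ps, c), q) == [[0, 0], [0, 0]]
```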
<br />
We now show that if <math>u</math> and <math>u'</math> are in types <math>A\limp B</math> and <math>A'\limp B'</math>, then <math>u\tens u'</math> is in <math>A\tens A'\limp B\tens B'</math>. For this we consider <math>v</math> and <math>v'</math> respectively in <math>A</math> and <math>A'</math>, so that <math>pvp^* + qv'q^*</math> is in <math>A\tens A'</math>, and we show that <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)\in B\tens B'</math>.<br />
<br />
Since <math>u</math> and <math>u'</math> are in <math>A\limp B</math> and <math>A'\limp B'</math> we have that <math>\mathrm{App}(u, v)</math> and <math>\mathrm{App}(u', v')</math> are respectively in <math>B</math> and <math>B'</math>, thus:<br />
: <math>p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^* \in B\tens B'</math>.<br />
<br />
We know that both <math>u_{11}v</math> and <math>u'_{11}v'</math> are nilpotent. But we have:<br />
: <math>\begin{align}<br />
\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^n<br />
&= \bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^n\\<br />
&= (pu_{11}vp^* + qu'_{11}v'q^*)^n\\<br />
&= p(u_{11}v)^np^* + q(u'_{11}v')^nq^*<br />
\end{align}</math><br />
<br />
Therefore <math>(u\tens u')_{11}(pvp^* + qv'q^*)</math> is nilpotent. So we can compute <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)</math>:<br />
: <math>\begin{align}<br />
&\mathrm{App}(u\tens u', pvp^* + qv'q^*)\\<br />
&= (u\tens u')_{22} + (u\tens u')_{21}(pvp^* + qv'q^*)\sum\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^k(u\tens u')_{12}\\<br />
&= pu_{22}p^* + qu'_{22}q^* + (pu_{21}p^* + qu'_{21}q^*)(pvp^* + qv'q^*)\sum\bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^k(pu_{12}p^* + qu'_{12}q^*)\\<br />
&= p\bigl(u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}\bigr)p^* + q\bigl(u'_{22} + u'_{21}v'\sum(u'_{11}v')^ku'_{12}\bigr)q^*\\<br />
&= p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^*<br />
\end{align}</math><br />
thus lives in <math>B\tens B'</math>.<br />
<br />
=== Other monoidal constructions ===<br />
<br />
==== Contraposition ====<br />
<br />
Let <math>A</math> and <math>B</math> be some types; we have:<br />
: <math>A\limp B = A\orth\multimapinv B\orth</math><br />
<br />
Indeed, <math>u\in A\limp B</math> means that for any <math>v</math> and <math>w</math> in respectively <math>A</math> and <math>B\orth</math> we have <math>u.(pvp^* + qwq^*)\in\bot</math> which is exactly the definition of <math>A\orth\multimapinv B\orth</math>.<br />
<br />
We will denote <math>u\orth</math> the operator:<br />
: <math>u\orth = pu_{22}p^* + pu_{21}q^* + qu_{12}p^* + qu_{11}q^*</math><br />
where <math>u_{ij}</math> is given by externalization. Therefore the externalization of <math>u\orth</math> is:<br />
: <math>(u\orth)_{ij} = u_{\bar i\,\bar j}</math> where <math>\bar .</math> is defined by <math>\bar1 = 2, \bar2 = 1</math>.<br />
From this we deduce that <math>u\orth\in B\orth\limp A\orth</math> and that <math>(u\orth)\orth = u</math>.<br />
<br />
==== Commutativity ====<br />
Let <math>\sigma</math> be the operator:<br />
: <math>\sigma = ppq^*q^* +pqp^*q^* + qpq^*p^* + qqp^*p^*</math>.<br />
One can check that <math>\sigma</math> is the internalization of the operator <math>S</math> on <math>H\oplus H\oplus H\oplus H</math> defined by: <math>S(x_1\oplus x_2\oplus x_3\oplus x_4) = x_4\oplus x_3\oplus x_2\oplus x_1</math>. In particular the components of <math>\sigma</math> are:<br />
: <math>\sigma_{11} = \sigma_{22} = 0</math>;<br />
: <math>\sigma_{12} = \sigma_{21} = pq^* + qp^*</math>.<br />
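Since <math>\sigma</math> is generated by a partial permutation of the basis, it can be experimented with concretely. The sketch below (ours; indices truncated to a finite range, dictionaries standing for partial permutations) computes <math>\sigma</math> from its four monomials, and checks both the exchange <math>x_1\oplus x_2\oplus x_3\oplus x_4 \mapsto x_4\oplus x_3\oplus x_2\oplus x_1</math> and the components <math>\sigma_{11} = 0</math>, <math>\sigma_{12} = pq^* + qp^*</math>:<br />

```python
M = 64
p = {n: 2 * n for n in range(M)}        # p(e_n) = e_{2n}
q = {n: 2 * n + 1 for n in range(M)}    # q(e_n) = e_{2n+1}

def compose(*fs):
    """Composition of partial permutations, rightmost acting first."""
    fs = list(fs)
    result = fs.pop()
    while fs:
        f = fs.pop()
        result = {n: f[result[n]] for n in result if result[n] in f}
    return result

def inverse(f):
    return {image: n for n, image in f.items()}

# the four monomials pp q*q*, pq p*q*, qp q*p*, qq p*p* have disjoint domains
sigma = {**compose(p, p, inverse(q), inverse(q)),
         **compose(p, q, inverse(p), inverse(q)),
         **compose(q, p, inverse(q), inverse(p)),
         **compose(q, q, inverse(p), inverse(p))}

# sigma exchanges the outer and the middle summands of H + H + H + H
assert [sigma[m] for m in range(8)] == [3, 2, 1, 0, 7, 6, 5, 4]

# components: sigma11 = p* sigma p is empty (zero), sigma12 = p* sigma q
# is the swap pq* + qp*
assert compose(inverse(p), sigma, p) == {}
swap = {**compose(p, inverse(q)), **compose(q, inverse(p))}
sigma12 = compose(inverse(p), sigma, q)
assert all(sigma12[n] == swap[n] for n in range(16))
```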
<br />
Let <math>A</math> and <math>B</math> be types and <math>u</math> and <math>v</math> be operators in <math>A</math> and <math>B</math>. Then <math>pup^* + qvq^*</math> is in <math>A\tens B</math> and as <math>\sigma_{11}.(pup^* + qvq^*) = 0</math> we may compute:<br />
: <math>\begin{align}<br />
\mathrm{App}(\sigma, pup^* + qvq^*) <br />
&= \sigma_{22} + \sigma_{21}(pup^* + qvq^*)\sum(\sigma_{11}(pup^* + qvq^*))^k\sigma_{12}\\<br />
&= (pq^* + qp^*)(pup^* + qvq^*)(pq^* + qp^*)\\<br />
&= pvp^* + quq^*<br />
\end{align}</math><br />
But <math>pvp^* + quq^*\in B\tens A</math>, thus we have shown that:<br />
: <math>\sigma\in (A\tens B) \limp (B\tens A)</math>.<br />
<br />
==== Distributivity ====<br />
We get distributivity by considering the operator:<br />
: <math>\delta = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math><br />
that is similarly shown to be in type <math>A\tens(B\tens C)\limp(A\tens B)\tens C</math> for any types <math>A</math>, <math>B</math> and <math>C</math>.<br />
<br />
<br />
==== Weak distributivity ====<br />
We can finally get weak distributivity thanks to the operators:<br />
: <math>\delta_1 = pppp^*q^* + ppqp^*q^*q^* + pqq^*q^*q^* + qpp^*p^*p^* + qqp q^*p^*p^* + qqq q^*p^*</math> and<br />
: <math>\delta_2 = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math>.<br />
<br />
Given three types <math>A</math>, <math>B</math> and <math>C</math> then one can show that:<br />
: <math>\delta_1</math> has type <math>((A\limp B)\tens C)\limp A\limp (B\tens C)</math> and<br />
: <math>\delta_2</math> has type <math>(A\tens(B\limp C))\limp (A\limp B)\limp C</math>.<br />
<br />
=== Execution formula, version 2: composition ===<br />
<br />
Let <math>A</math>, <math>B</math> and <math>C</math> be types and <math>u</math> and <math>v</math> be operators respectively in types <math>A\limp B</math> and <math>B\limp C</math>.<br />
<br />
As usual we will denote by <math>u_{ij}</math> and <math>v_{ij}</math> the operators obtained by externalization of <math>u</math> and <math>v</math>, e.g., <math>u_{11} = p^*up</math>, ...<br />
<br />
As <math>u</math> is in <math>A\limp B</math> we have that <math>\mathrm{App}(u, 0)=u_{22}\in B</math>; similarly as <math>v\in B\limp C</math>, thus <math>v\orth\in C\orth\limp B\orth</math>, we have <math>\mathrm{App}(v\orth, 0) = v_{11}\in B\orth</math>. Thus <math>u_{22}v_{11}</math> is nilpotent.<br />
<br />
We define the operator <math>\mathrm{Comp}(u, v)</math> by:<br />
: <math>\begin{align}<br />
\mathrm{Comp}(u, v) &= p(u_{11} + u_{12}\sum(v_{11}u_{22})^k\,v_{11}u_{21})p^*\\<br />
&+ p(u_{12}\sum(v_{11}u_{22})^k\,v_{12})q^*\\<br />
&+ q(v_{21}\sum(u_{22}v_{11})^k\,u_{21})p^*\\<br />
&+ q(v_{22} + v_{21}\sum(u_{22}v_{11})^k\,u_{22}v_{12})q^*<br />
\end{align}</math><br />
<br />
This is well defined since <math>u_{22}v_{11}</math> is nilpotent. As an example let us compute the composition of <math>u</math> and <math>\iota</math> in type <math>B\limp B</math>; recall that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math>, so we get:<br />
: <math><br />
\mathrm{Comp}(u, \iota) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^* = u<br />
</math><br />
A similar computation shows that <math>\mathrm{Comp}(\iota, v) = v</math> (here we use <math>pp^* + qq^* = 1</math>).<br />
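These two computations can be replayed on a finite truncation. The sketch below (our code, not part of the original construction) represents an operator by its four externalized blocks as small integer matrices, implements <math>\mathrm{Comp}</math> literally from the formula above, and checks that <math>\iota</math> is neutral on both sides:<br />

```python
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def madd(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def series(a):
    """sum_k a^k, a finite sum when a is nilpotent."""
    n = len(a)
    zero = [[0] * n for _ in range(n)]
    acc = zero
    power = [[int(i == j) for j in range(n)] for i in range(n)]
    while power != zero:
        acc = madd(acc, power)
        power = matmul(power, a)
    return acc

def Comp(u, v):
    """Composition of u and v, each given as blocks (u11, u12, u21, u22)."""
    u11, u12, u21, u22 = u
    v11, v12, v21, v22 = v
    s_vu = series(matmul(v11, u22))   # sum (v11 u22)^k
    s_uv = series(matmul(u22, v11))   # sum (u22 v11)^k
    c11 = madd(u11, matmul(u12, matmul(s_vu, matmul(v11, u21))))
    c12 = matmul(u12, matmul(s_vu, v12))
    c21 = matmul(v21, matmul(s_uv, u21))
    c22 = madd(v22, matmul(v21, matmul(s_uv, matmul(u22, v12))))
    return (c11, c12, c21, c22)

I2 = [[1, 0], [0, 1]]
Z2 = [[0, 0], [0, 0]]
iota = (Z2, I2, I2, Z2)
u = ([[1, 2], [3, 4]], [[0, 1], [1, 0]], [[5, 0], [0, 5]], [[1, 1], [0, 1]])
assert Comp(u, iota) == u       # iota is neutral on the right
assert Comp(iota, u) == u       # ... and on the left
```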
<br />
Coming back to the general case we claim that <math>\mathrm{Comp}(u, v)</math> is in <math>A\limp C</math>: let <math>a</math> be an operator in <math>A</math>. By computation we can check that:<br />
: <math>\mathrm{App}(\mathrm{Comp}(u, v), a) = \mathrm{App}(v, \mathrm{App}(u, a))</math>.<br />
Now since <math>u</math> is in <math>A\limp B</math>, <math>\mathrm{App}(u, a)</math> is in <math>B</math> and since <math>v</math> is in <math>B\limp C</math>, <math>\mathrm{App}(v, \mathrm{App}(u, a))</math> is in <math>C</math>.<br />
<br />
If we now consider a type <math>D</math> and an operator <math>w</math> in <math>C\limp D</math> then we have:<br />
: <math>\mathrm{Comp}(\mathrm{Comp}(u, v), w) = \mathrm{Comp}(u,<br />
\mathrm{Comp}(v, w))</math>.<br />
<br />
Putting together the results of this section we finally have:<br />
{{Theorem|<br />
Let GoI(H) be defined by:<br />
* objects are types, ''i.e.'' sets <math>A</math> of operators satisfying <math>A\biorth = A</math>;<br />
* morphisms from <math>A</math> to <math>B</math> are operators in type <math>A\limp B</math>;<br />
* composition is given by the formula above.<br />
<br />
Then GoI(H) is a star-autonomous category.<br />
}}<br />
<br />
= The Geometry of Interaction as an abstract machine =</div>Laurent Regnier
http://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interaction
Geometry of interaction, 2010-04-20T16:18:36Z
<p>Laurent Regnier: /* Execution formula, version 2: composition */ composition with identity</p>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operators algebra: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math><ref>to be precise one should say from ''the space interpreting'' <math>A</math> to ''the space interpreting'' <math>B</math></ref>, and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'' that bears some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (multiplicative and exponential fragment), which is enough to encode lambda-calculus. Since then Girard has proposed several improvements: firstly the extension to the additive connectives, known as ''Geometry of Interaction 3'', and more recently a complete reformulation using von Neumann algebras that allows one to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model, exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata that interact with each other through their common interface. The execution formula has also rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories, following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentioned in [[coherent semantics]] under the name ''symmetric reducibility''. First set a general space called the ''proof space'' because this is where the interpretations of proofs will live. In the case of GoI, the proof space is the space of bounded operators on <math>\ell^2</math>. Note that the proof space generally contains many more objects than interpretations of proofs; in the GoI case we will see that interpretations of proofs happen to be some very peculiar kind of partial isometries.<br />
<br />
Second define a duality on this space that will be denoted as <math>u\perp v</math>. For the GoI, two dualities have proved to work, the first one being nilpotency: two operators <math>u</math> and <math>v</math> are dual if <math>uv</math> is nilpotent, that is, if there is a positive integer <math>n</math> such that <math>(uv)^n = 0</math>. We will denote by <math>\bot</math> the set of nilpotent operators so that the duality reads:<br />
: <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
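On a finite truncation (an assumption of ours: operators replaced by <math>n\times n</math> integer matrices) the duality can be tested mechanically, since an <math>n\times n</math> matrix is nilpotent iff its <math>n</math>-th power vanishes:<br />

```python
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def is_nilpotent(m):
    """An n x n matrix is nilpotent iff m^n = 0 (its nilpotency index is at most n)."""
    power = m
    for _ in range(len(m) - 1):
        power = matmul(power, m)
    return all(x == 0 for row in power for x in row)

u = [[0, 1], [0, 0]]
assert is_nilpotent(matmul(u, [[1, 0], [0, 1]]))   # u is dual to the identity
assert not is_nilpotent([[1, 0], [0, 0]])          # a nonzero projector is not nilpotent
```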
<br />
This duality applies to operators and shouldn't be confused with orthogonality of vectors. To enforce this we will reserve the notation <math>\perp</math> exclusively for the duality of operators and never use it for orthogonality of vectors.<br />
<br />
Last define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for every operator <math>v\in T\orth</math>, that is such that <math>u'v\in\bot</math> for all <math>u'\in T</math>, we have <math>uv\in\bot</math>. In particular note that <math>0</math> belongs to any type.<br />
<br />
It remains now to interpret logical operations, that is associate a type to each formula, an object to each proof and show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
We begin with a brief tour of the operations on Hilbert spaces that will be used in the sequel. In this article <math>H</math> will stand for the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> we denote by <math>\langle x,y\rangle</math> their scalar product:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''orthogonal'' if their scalar product is null. This notion is not to be confused with the orthogonality of operators defined above. The ''norm'' of a vector is the square root of its scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical Hilbert basis of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kronecker symbol. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
In this article we call ''operator'' on <math>H</math> a ''continuous'' linear map from <math>H</math> to <math>H</math>. Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.<br />
<br />
The set of (bounded) operators is denoted <math>\mathcal{B}(H)</math>. This is our proof space.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are annihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel:<br />
<br />
: <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
: <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
: <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>0</math> (the projector on the null subspace) or <math>1</math>, that is an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is self-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^* u = u</math>; as a consequence <math>uu^*</math> is a projector the range of which is the range of <math>u</math>. Similarly <math>u^* u</math> is also a projector the range of which is the domain of <math>u</math>. The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math> that is if <math>u^* u = 1</math> we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries, the equation <math>uu^* + vv^* = 1</math> means that the codomains of <math>u</math> and <math>v</math> are orthogonal and that their direct sum is <math>H</math>.<br />
<br />
=== Partial permutations and partial isometries ===<br />
<br />
It turns out that most of the operators needed to interpret logical operations are generated by ''partial permutations'' on the basis, which in particular entails that they are partial isometries.<br />
<br />
More precisely a partial permutation <math>\varphi</math> on <math>\mathbb{N}</math> is a function defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> which is one-to-one onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
: <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
: if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
: the codomain of <math>\varphi\circ\psi</math> is the image of the domain.<br />
<br />
Partial permutations are well known to form a structure of ''inverse monoid'' that we detail now.<br />
<br />
A ''partial identity'' is a partial permutation <math>1_D</math> whose domain and codomain are both equal to a subset <math>D</math>, on which <math>1_D</math> is the identity function. Partial identities are idempotent for composition.<br />
<br />
Among partial identities one finds the identity on the empty subset, that is the empty map, which we will denote by <math>0</math>, and the identity on <math>\mathbb{N}</math>, which we will denote by <math>1</math>. The latter is the neutral element for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and which satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
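This inverse monoid structure is easy to experiment with. The following sketch (ours, not part of the article) represents a partial permutation as a Python dictionary mapping each element of its domain to its image:<br />

```python
def compose(phi, psi):
    """phi o psi: defined on n when n is in dom(psi) and psi(n) is in dom(phi)."""
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    """A partial permutation is one-to-one, so reversing its pairs inverts it."""
    return {image: n for n, image in phi.items()}

def partial_identity(domain):
    return {n: n for n in domain}

phi = {0: 3, 1: 4, 2: 5}          # domain {0,1,2}, codomain {3,4,5}
assert compose(inverse(phi), phi) == partial_identity({0, 1, 2})
assert compose(phi, inverse(phi)) == partial_identity({3, 4, 5})
```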
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other terms if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. As a particular case if <math>\varphi</math> is <math>1_D</math>, the partial identity on <math>D</math>, then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.<br />
<br />
If <math>\psi</math> is another partial permutation then we have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
If <math>\varphi</math> is a partial permutation then the adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the projector on the domain of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>.<br />
<br />
and similarly the projector on the codomain of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u_\varphi^* = u_{1_{C_\varphi}}</math>.<br />
<br />
{{Proposition|<br />
Let <math>u_\varphi</math> and <math>u_\psi</math> be two partial isometries generated by partial permutations. Then we have:<br />
: <math>u_\varphi + u_\psi = 0</math> iff <math>u_\varphi = u_\psi = 0</math>,<br />
that is iff <math>\varphi</math> and <math>\psi</math> are the nowhere defined partial permutation.<br />
}} Indeed suppose <math>u_\varphi + u_\psi = 0</math> then for any <math>n</math> we have <math>u_\varphi(e_n) + u_\psi(e_n) = e_{\varphi(n)} + e_{\psi(n)} = 0</math> which is possible only if <math>\varphi(n)</math> and <math>\psi(n)</math> are undefined.<br />
<br />
=== From operators to matrices: internalization/externalization ===<br />
<br />
It will be convenient to view operators on <math>H</math> as acting on <math>H\oplus H</math>, and conversely. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\rightsquigarrow p(x)+q(y)</math> where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are orthogonal, thus we have <math>p^* q = q^* p = 0</math>. Note that we also have <math>pp^* + qq^* = 1</math>.<br />
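These relations can be checked at the level of partial permutations, where <math>p</math> and <math>q</math> are <math>n\mapsto 2n</math> and <math>n\mapsto 2n+1</math>. In the sketch below (ours; <math>\mathbb{N}</math> truncated to a finite range) composition with the inverse plays the role of the adjoint:<br />

```python
N = 8
p = {n: 2 * n for n in range(N)}        # p(e_n) = e_{2n}
q = {n: 2 * n + 1 for n in range(N)}    # q(e_n) = e_{2n+1}

def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    return {image: n for n, image in phi.items()}

one = {n: n for n in range(N)}
# full domains: p*p = q*q = 1
assert compose(inverse(p), p) == one
assert compose(inverse(q), q) == one
# orthogonal codomains: p*q = q*p = 0 (the empty partial permutation)
assert compose(inverse(p), q) == {}
assert compose(inverse(q), p) == {}
# pp* + qq* = 1: the partial identities on evens and odds cover everything
evens, odds = compose(p, inverse(p)), compose(q, inverse(q))
assert {**evens, **odds} == {n: n for n in range(2 * N)}
```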
<br />
The choice of <math>p</math> and <math>q</math> is actually arbitrary, any two partial isometries with full domain and orthogonal codomains would do the job.<br />
<br />
Let <math>U</math> be an operator on <math>H\oplus H</math>. We can write <math>U</math> as a matrix:<br />
: <math>U = \begin{pmatrix}<br />
u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}</math><br />
where each <math>u_{ij}</math> operates on <math>H</math>.<br />
<br />
Now through the isomorphism <math>H\oplus H\cong H</math> we may transform <math>U</math> into the operator <math>u</math> on <math>H</math> defined by:<br />
<br />
: <math>u = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math>.<br />
<br />
We call <math>u</math> the ''internalization'' of <math>U</math>. Internalization is compatible with composition (functorial, so to speak): if <math>V</math> is another operator on <math>H\oplus H</math> then the internalization of the matrix product <math>UV</math> is the product <math>uv</math>.<br />
<br />
Conversely given an operator <math>u</math> on <math>H</math> we may externalize it obtaining an operator <math>U</math> on <math>H\oplus H</math>:<br />
: <math>u_{11} = p^*up</math>;<br />
: <math>u_{12} = p^*uq</math>;<br />
: <math>u_{21} = q^*up</math>;<br />
: <math>u_{22} = q^*uq</math>.<br />
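Internalization and externalization are exact inverses of each other, as the following finite sketch illustrates (our code; under the assumption that <math>H</math> is truncated to <math>\mathbb{C}^N</math>, the internalized operator lives on <math>\mathbb{C}^{2N}</math> and <math>p</math>, <math>q</math> become rectangular 0/1 matrices):<br />

```python
N = 3

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def madd(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def adjoint(a):  # real 0/1 entries here, so the adjoint is the transpose
    return [list(col) for col in zip(*a)]

# p(e_n) = e_{2n} and q(e_n) = e_{2n+1} as 2N x N matrices
p = [[1 if i == 2 * j else 0 for j in range(N)] for i in range(2 * N)]
q = [[1 if i == 2 * j + 1 else 0 for j in range(N)] for i in range(2 * N)]

def internalize(u11, u12, u21, u22):
    u = matmul(matmul(p, u11), adjoint(p))
    for left, block, right in ((p, u12, q), (q, u21, p), (q, u22, q)):
        u = madd(u, matmul(matmul(left, block), adjoint(right)))
    return u

def externalize(u):
    ps, qs = adjoint(p), adjoint(q)
    return (matmul(matmul(ps, u), p), matmul(matmul(ps, u), q),
            matmul(matmul(qs, u), p), matmul(matmul(qs, u), q))

blocks = ([[1, 2, 3], [4, 5, 6], [7, 8, 9]],
          [[0, 1, 0], [0, 0, 1], [1, 0, 0]],
          [[2, 0, 0], [0, 2, 0], [0, 0, 2]],
          [[1, 0, 0], [1, 1, 0], [1, 1, 1]])
assert externalize(internalize(*blocks)) == blocks
```

The roundtrip is exact because <math>p^*p = q^*q = 1</math> and <math>p^*q = q^*p = 0</math> hold exactly for these rectangular matrices.<br />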
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are operators we denote by <math>u\perp v</math> the fact that <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators so that <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
If <math>X</math> is a set of operators, recall also that <math>X\orth</math> denotes the set of dual operators:<br />
: <math>X\orth = \{v\in \mathcal{B}(H) \text{ such that }\forall u\in X, uv \in\bot\}</math>.<br />
<br />
There are a few properties of this duality that we will use without mention in the sequel; let <math>X</math> and <math>Y</math> be sets of operators:<br />
: <math>X\subset X\biorth</math>;<br />
: if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;<br />
: <math>X\orth = X\triorth</math>.<br />
<br />
In particular <math>X\orth</math> is always a type (equal to its biorthogonal). We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
<br />
=== The tensor and the linear application ===<br />
<br />
Given two types <math>A</math> and <math>B</math>, we define their tensor by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by biorthogonal to make sure that we obtain a type. From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
This is an abuse of notation, as this operation is more like a direct sum than a tensor. We will stick to this notation though, because it defines the interpretation of the tensor connective of linear logic.<br />
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we see that we have:<br />
: <math>A\limp B = \{u\in\mathcal{B}(H)\text{ such that } \forall v\in A, \forall w\in B\orth,\, u.(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
=== The identity ===<br />
<br />
The interpretation of the identity is an example of the internalization/externalization procedure. Given a type <math>A</math> we are to find an operator <math>\iota</math> in type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. It is the case since <math>u</math> is in <math>A</math> and <math>v</math> in <math>A\orth</math>.<br />
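The computation above can be checked mechanically on a finite truncation (our sketch; <math>p</math> and <math>q</math> truncated to rectangular 0/1 matrices, for which <math>p^*p = q^*q = 1</math> and <math>p^*q = 0</math> hold exactly):<br />

```python
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def madd(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def adjoint(a):  # real 0/1 entries, so the adjoint is the transpose
    return [list(col) for col in zip(*a)]

N = 2
p = [[1 if i == 2 * j else 0 for j in range(N)] for i in range(2 * N)]
q = [[1 if i == 2 * j + 1 else 0 for j in range(N)] for i in range(2 * N)]

def conj(left, m, right):
    """left . m . right*"""
    return matmul(matmul(left, m), adjoint(right))

iota = madd(matmul(p, adjoint(q)), matmul(q, adjoint(p)))   # pq* + qp*

u = [[1, 2], [3, 4]]
v = [[0, 1], [5, 0]]
m = madd(conj(p, u, p), conj(q, v, q))                      # pup* + qvq*

# iota(pup* + qvq*) = qup* + pvq*
assert matmul(iota, m) == madd(conj(q, u, p), conj(p, v, q))
# and its square is quvq* + pvup*
square = matmul(matmul(iota, m), matmul(iota, m))
assert square == madd(conj(q, matmul(u, v), q), conj(p, matmul(v, u), p))
```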
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
<br />
=== The execution formula, version 1: application ===<br />
<br />
Let <math>A</math> and <math>B</math> be two types and <math>u</math> an operator in <math>A\limp B</math>. By definition this means that given <math>v</math> in <math>A</math> and <math>w</math> in <math>B\orth</math> the operator <math>u.(pvp^* + qwq^*)</math> is nilpotent.<br />
<br />
Let us define <math>u_{11}</math> to <math>u_{22}</math> by externalization as above. If we compute <math>(u.(pvp^* + qwq^*))^n</math> we see that this is a finite sum of operators of the form:<br />
# <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
# <math>p(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
# <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math> or<br />
# <math>p(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math><br />
where each of these monimials has exactly <math>n</math> factors of the form <math>u_{i1}v</math> or <math>u_{i2}w</math>.<br />
<br />
From the nilpotency of <math>u.(pvp^* + qwq^*)</math> we deduce that <math>u_{11}v</math> is nilpotent by considering the particular case where <math>w=0</math>. We also have that <math>q^*(u.(pvp^* + qwq^*))^nq</math> is null for <math>n</math> big enough, which means that monomials of type 1 above are null as soon as their length (the number of factors of the form <math>u_{i1}v</math> or <math>u_{i2}w</math>) is bigger than <math>n</math>.<br />
<br />
This implies that the two following operators are nilpotent:<br />
: <math>u_{11}v</math> and<br />
: <math>\bigl(u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12}\bigr)w</math>.<br />
<br />
Conversely if these two operators are nilpotent then one can show that so is <math>u.(pvp^* + qwq^*)</math>. Moreover we have:<br />
: <math>q^*\sum_n\bigl(u.(pvp^* + qwq^*)\bigr)^nq = \sum_n\bigl((u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12})w\bigr)^n</math>.<br />
<br />
We define the ''application of <math>u</math> to <math>v</math>'' as:<br />
: <math>\mathrm{App}(u, v) = u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}</math>.<br />
Note that this is well defined as soon as <math>u_{11}v</math> is nilpotent.<br />
<br />
We summarize what has just been shown in the following theorem:<br />
<br />
{{Theorem|<br />
Let <math>u</math> be an operator, <math>A</math> and <math>B</math> be two types; the following conditions are equivalent:<br />
* <math>u\in A\limp B</math>;<br />
* for any <math>v\in A</math>, we both have:<br />
:: <math>u_{11}v</math> is nilpotent and<br />
:: <math>\mathrm{App}(u, v)\in B</math>.<br />
}}<br />
<br />
{{Corollary|<br />
Under the hypothesis of the theorem we have:<br />
: <math>A\limp B = \{u\in\mathcal{B}(H) \text{ such that }\forall v\in A: u_{11}v\in\bot\text{ and } \mathrm{App}(u, v)\in B\}</math>.<br />
}}<br />
<br />
As an example if we compute the application of the interpretation of the identity <math>\iota</math> in type <math>A\limp A</math> to the operator <math>v\in A</math> then we have:<br />
: <math>\mathrm{App}(\iota, v) = \iota_{22} + \iota_{21}v\sum(\iota_{11}v)^k\iota_{12}</math>.<br />
Now recall that <math>\iota = pq^* + qp^*</math> so that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math> and we thus get:<br />
: <math>\mathrm{App}(\iota, v) = v</math><br />
as expected.<br />
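The execution formula can be replayed numerically. The sketch below is our own illustration, not part of the original construction: it replaces <math>H</math> by <math>\mathbb{C}^N</math> and <math>p, q</math> by rectangular matrices <math>\mathbb{C}^N\to\mathbb{C}^{2N}</math>, which keeps the identities <math>p^*p = q^*q = 1</math> exact. It computes <math>\mathrm{App}(u, v)</math> by summing the series, which is finite because <math>u_{11}v</math> is nilpotent, and rechecks <math>\mathrm{App}(\iota, v) = v</math>.

```python
import numpy as np

N = 3
# Finite stand-ins for p, q : C^N -> C^(2N), e_n -> e_2n and e_n -> e_2n+1
p = np.zeros((2 * N, N))
q = np.zeros((2 * N, N))
for n in range(N):
    p[2 * n, n] = 1
    q[2 * n + 1, n] = 1

def externalize(u):
    """Components u_11 = p* u p, u_12 = p* u q, u_21 = q* u p, u_22 = q* u q."""
    return p.T @ u @ p, p.T @ u @ q, q.T @ u @ p, q.T @ u @ q

def App(u, v):
    """App(u, v) = u22 + u21 v (sum_k (u11 v)^k) u12.

    For an N x N nilpotent matrix (u11 v)^N = 0, so N + 1 terms are exact."""
    u11, u12, u21, u22 = externalize(u)
    s = np.zeros((N, N))
    term = np.eye(N)
    for _ in range(N + 1):
        s = s + term
        term = term @ (u11 @ v)
    return u22 + u21 @ v @ s @ u12

iota = p @ q.T + q @ p.T                    # interpretation of the identity
v = np.random.default_rng(0).standard_normal((N, N))
ok = np.allclose(App(iota, v), v)           # App(iota, v) = v
```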
<br />
=== The tensor rule ===<br />
<br />
Let now <math>A, A', B</math> and <math>B'</math> be types and consider two operators <math>u</math> and <math>u'</math> respectively in <math>A\limp B</math> and <math>A'\limp B'</math>. We define an operator denoted <math>u\tens u'</math> by:<br />
: <math>\begin{align}<br />
u\tens u' &= ppp^*upp^*p^* + qpq^*upp^*p^* + ppp^*uqp^*q^* + qpq^*uqp^*q^*\\<br />
&+ pqp^*u'pq^*p^* + qqq^*u'pq^*p^* + pqp^*u'qq^*q^* + qqq^*u'qq^*q^*<br />
\end{align}</math><br />
<br />
Once again the notation is motivated by linear logic syntax and conflicts with linear algebra practice, since what we denote by <math>u\tens u'</math> is actually the internalization of the direct sum <math>u\oplus u'</math>.<br />
<br />
Indeed if we think of <math>u</math> and <math>u'</math> as the internalizations of the matrices:<br />
: <math><br />
\begin{pmatrix}u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}<br />
</math> and <math><br />
\begin{pmatrix}u'_{11} & u'_{12}\\<br />
u'_{21} & u'_{22}<br />
\end{pmatrix}</math><br />
then we may write:<br />
: <math>\begin{align}<br />
u\tens u' &= ppu_{11}p^*p^* + qpu_{21}p^*p^* + ppu_{12}p^*q^* + qpu_{22}p^*q^*\\<br />
&+ pqu'_{11}q^*p^* + qqu'_{21}q^*p^* + pqu'_{12}q^*q^* + qqu'_{22}q^*q^*<br />
\end{align}</math><br />
<br />
Thus the components of <math>u\tens u'</math> are given by:<br />
: <math>(u\tens u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math>.<br />
and we see that <math>u\tens u'</math> is actually the internalization of the matrix:<br />
: <math><br />
\begin{pmatrix}<br />
u_{11} & 0 & u_{12} & 0 \\<br />
0 & u'_{11} & 0 & u'_{12} \\<br />
u_{21} & 0 & u_{22} & 0 \\<br />
0 & u'_{21} & 0 & u'_{22} \\<br />
\end{pmatrix}<br />
</math><br />
<br />
We now show that if <math>u</math> and <math>u'</math> are in types <math>A\limp B</math> and <math>A'\limp B'</math>, then <math>u\tens u'</math> is in <math>A\tens A'\limp B\tens B'</math>. For this we consider <math>v</math> and <math>v'</math> respectively in <math>A</math> and <math>A'</math>, so that <math>pvp^* + qv'q^*</math> is in <math>A\tens A'</math>, and we show that <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)\in B\tens B'</math>.<br />
<br />
Since <math>u</math> and <math>u'</math> are in <math>A\limp B</math> and <math>A'\limp B'</math> we have that <math>\mathrm{App}(u, v)</math> and <math>\mathrm{App}(u', v')</math> are respectively in <math>B</math> and <math>B'</math>, thus:<br />
: <math>p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^* \in B\tens B'</math>.<br />
<br />
We know that both <math>u_{11}v</math> and <math>u'_{11}v'</math> are nilpotent. But we have:<br />
: <math>\begin{align}<br />
\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^n<br />
&= \bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^n\\<br />
&= (pu_{11}vp^* + qu'_{11}v'q^*)^n\\<br />
&= p(u_{11}v)^np^* + q(u'_{11}v')^nq^*<br />
\end{align}</math><br />
<br />
Therefore <math>(u\tens u')_{11}(pvp^* + qv'q^*)</math> is nilpotent. So we can compute <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)</math>:<br />
: <math>\begin{align}<br />
&\mathrm{App}(u\tens u', pvp^* + qv'q^*)\\<br />
&= (u\tens u')_{22} + (u\tens u')_{21}(pvp^* + qv'q^*)\sum\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^k(u\tens u')_{12}\\<br />
&= pu_{22}p^* + qu'_{22}q^* + (pu_{21}p^* + qu'_{21}q^*)(pvp^* + qv'q^*)\sum\bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^k(pu_{12}p^* + qu'_{12}q^*)\\<br />
&= p\bigl(u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}\bigr)p^* + q\bigl(u'_{22} + u'_{21}v'\sum(u'_{11}v')^ku'_{12}\bigr)q^*\\<br />
&= p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^*<br />
\end{align}</math><br />
so <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)</math> indeed lives in <math>B\tens B'</math>.<br />
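The block-level computation above can be checked numerically. The following sketch is our own finite-dimensional illustration (all names are ours): it works directly with the components <math>(u\tens u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math>, and takes <math>u_{11} = u'_{11} = 0</math> so that every series collapses to its <math>k = 0</math> term.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2
# p, q : C^N -> C^(2N), finite stand-ins for the pair used in the text
p = np.zeros((2 * N, N))
q = np.zeros((2 * N, N))
for n in range(N):
    p[2 * n, n] = 1
    q[2 * n + 1, n] = 1

Z = np.zeros((N, N))
def rand():
    return rng.standard_normal((N, N))

# blocks of u and u'; the 11-blocks are 0 so the series are trivially finite
u11, u12, u21, u22 = Z, rand(), rand(), rand()
w11, w12, w21, w22 = Z, rand(), rand(), rand()   # blocks of u'
v, vp = rand(), rand()                           # v in A, v' in A'

def t(a, b):
    """(u tens u')_ij = p u_ij p* + q u'_ij q*."""
    return p @ a @ p.T + q @ b @ q.T

x = t(v, vp)                                     # p v p* + q v' q* in A tens A'

# App(u tens u', x): with zero 11-blocks the series reduces to the k = 0 term
app = t(u22, w22) + t(u21, w21) @ x @ t(u12, w12)
# p App(u, v) p* + q App(u', v') q*, again with the series collapsed
expected = t(u22 + u21 @ v @ u12, w22 + w21 @ vp @ w12)
ok = np.allclose(app, expected)
```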
<br />
=== Other monoidal constructions ===<br />
<br />
==== Contraposition ====<br />
<br />
Let <math>A</math> and <math>B</math> be some types; we have:<br />
: <math>A\limp B = A\orth\multimapinv B\orth</math><br />
<br />
Indeed, <math>u\in A\limp B</math> means that for any <math>v</math> and <math>w</math> in respectively <math>A</math> and <math>B\orth</math> we have <math>u.(pvp^* + qwq^*)\in\bot</math> which is exactly the definition of <math>A\orth\multimapinv B\orth</math>.<br />
<br />
We will denote by <math>u\orth</math> the operator:<br />
: <math>u\orth = pu_{22}p^* + pu_{21}q^* + qu_{12}p^* + qu_{11}q^*</math><br />
where <math>u_{ij}</math> is given by externalization. Therefore the externalization of <math>u\orth</math> is:<br />
: <math>(u\orth)_{ij} = u_{\bar i\,\bar j}</math> where <math>\bar .</math> is defined by <math>\bar1 = 2, \bar2 = 1</math>.<br />
From this we deduce that <math>u\orth\in B\orth\limp A\orth</math> and that <math>(u\orth)\orth = u</math>.<br />
<br />
==== Commutativity ====<br />
Let <math>\sigma</math> be the operator:<br />
: <math>\sigma = ppq^*q^* +pqp^*q^* + qpq^*p^* + qqp^*p^*</math>.<br />
One can check that <math>\sigma</math> is the internalization of the operator <math>S</math> on <math>H\oplus H\oplus H\oplus H</math> defined by: <math>S(x_1\oplus x_2\oplus x_3\oplus x_4) = x_4\oplus x_3\oplus x_2\oplus x_1</math>. In particular the components of <math>\sigma</math> are:<br />
: <math>\sigma_{11} = \sigma_{22} = 0</math>;<br />
: <math>\sigma_{12} = \sigma_{21} = pq^* + qp^*</math>.<br />
<br />
Let <math>A</math> and <math>B</math> be types and <math>u</math> and <math>v</math> be operators in <math>A</math> and <math>B</math>. Then <math>pup^* + qvq^*</math> is in <math>A\tens B</math> and as <math>\sigma_{11}.(pup^* + qvq^*) = 0</math> we may compute:<br />
: <math>\begin{align}<br />
\mathrm{App}(\sigma, pup^* + qvq^*) <br />
&= \sigma_{22} + \sigma_{21}(pup^* + qvq^*)\sum(\sigma_{11}(pup^* + qvq^*))^k\sigma_{12}\\<br />
&= (pq^* + qp^*)(pup^* + qvq^*)(pq^* + qp^*)\\<br />
&= pvp^* + quq^*<br />
\end{align}</math><br />
But <math>pvp^* + quq^*\in B\tens A</math>, thus we have shown that:<br />
: <math>\sigma\in (A\tens B) \limp (B\tens A)</math>.<br />
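This computation can also be rechecked numerically. The sketch below is our own finite-dimensional illustration: it uses an inner pair <math>p_i, q_i</math> to build elements of <math>A\tens B</math> and an outer pair <math>p_o, q_o</math> to externalize <math>\sigma</math> (these subscripted names are ours; in the text both pairs are the same <math>p, q</math> acting on <math>H</math>).

```python
import numpy as np

rng = np.random.default_rng(2)
N = 2

def pq(n):
    """Finite stand-ins p, q : C^n -> C^(2n), e_k -> e_2k and e_k -> e_2k+1."""
    p = np.zeros((2 * n, n))
    q = np.zeros((2 * n, n))
    for k in range(n):
        p[2 * k, k] = 1
        q[2 * k + 1, k] = 1
    return p, q

pi, qi = pq(N)        # inner pair, builds elements of A tens B
po, qo = pq(2 * N)    # outer pair, externalizes sigma

# sigma = ppq*q* + pqp*q* + qpq*p* + qqp*p*, grouped by the outer letter
swap = pi @ qi.T + qi @ pi.T
sigma = po @ swap @ qo.T + qo @ swap @ po.T

u = rng.standard_normal((N, N))        # u in A
v = rng.standard_normal((N, N))        # v in B
x = pi @ u @ pi.T + qi @ v @ qi.T      # p u p* + q v q* in A tens B

s12 = po.T @ sigma @ qo
s21 = qo.T @ sigma @ po
s22 = qo.T @ sigma @ qo
# sigma_11 = 0, so App(sigma, x) = sigma_22 + sigma_21 x sigma_12
app = s22 + s21 @ x @ s12
ok = np.allclose(app, pi @ v @ pi.T + qi @ u @ qi.T)   # = p v p* + q u q*
```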
<br />
==== Distributivity ====<br />
We get distributivity by considering the operator:<br />
: <math>\delta = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math><br />
that is similarly shown to be in type <math>A\tens(B\tens C)\limp(A\tens B)\tens C</math> for any types <math>A</math>, <math>B</math> and <math>C</math>.<br />
<br />
<br />
==== Weak distributivity ====<br />
We can finally get weak distributivity thanks to the operators:<br />
: <math>\delta_1 = pppp^*q^* + ppqp^*q^*q^* + pqq^*q^*q^* + qpp^*p^*p^* + qqpq^*p^*p^* + qqqq^*p^*</math> and<br />
: <math>\delta_2 = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math>.<br />
<br />
Given three types <math>A</math>, <math>B</math> and <math>C</math> then one can show that:<br />
: <math>\delta_1</math> has type <math>((A\limp B)\tens C)\limp A\limp (B\tens C)</math> and<br />
: <math>\delta_2</math> has type <math>(A\tens(B\limp C))\limp (A\limp B)\limp C</math>.<br />
<br />
=== Execution formula, version 2: composition ===<br />
<br />
Let <math>A</math>, <math>B</math> and <math>C</math> be types and <math>u</math> and <math>v</math> be operators respectively in types <math>A\limp B</math> and <math>B\limp C</math>.<br />
<br />
As usual we will denote by <math>u_{ij}</math> and <math>v_{ij}</math> the operators obtained by externalization of <math>u</math> and <math>v</math>, e.g., <math>u_{11} = p^*up</math>, ...<br />
<br />
As <math>u</math> is in <math>A\limp B</math> we have that <math>\mathrm{App}(u, 0)=u_{22}\in B</math>; similarly as <math>v\in B\limp C</math>, thus <math>v\orth\in C\orth\limp B\orth</math>, we have <math>\mathrm{App}(v\orth, 0) = v_{11}\in B\orth</math>. Thus <math>u_{22}v_{11}</math> is nilpotent.<br />
<br />
We define the operator <math>\mathrm{Comp}(u, v)</math> by:<br />
: <math>\begin{align}<br />
\mathrm{Comp}(u, v) &= p(u_{11} + u_{12}\sum(v_{11}u_{22})^k\,v_{11}u_{21})p^*\\<br />
&+ p(u_{12}\sum(v_{11}u_{22})^k\,v_{12})q^*\\<br />
&+ q(v_{21}\sum(u_{22}v_{11})^k\,u_{21})p^*\\<br />
&+ q(v_{22} + v_{21}\sum(u_{22}v_{11})^k\,u_{22}v_{12})q^*<br />
\end{align}</math><br />
<br />
This is well defined since <math>u_{22}v_{11}</math> is nilpotent. As an example let us compute the composition of <math>u</math> and <math>\iota</math> in type <math>B\limp B</math>; recall that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math>, so we get:<br />
: <math><br />
\mathrm{Comp}(u, \iota) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^* = u<br />
</math><br />
A similar computation shows that <math>\mathrm{Comp}(\iota, v) = v</math> (we use <math>pp^* + qq^* = 1</math> here).<br />
<br />
Coming back to the general case we claim that <math>\mathrm{Comp}(u, v)</math> is in <math>A\limp C</math>: let <math>a</math> be an operator in <math>A</math>. By computation we can check that:<br />
: <math>\mathrm{App}(\mathrm{Comp}(u, v), a) = \mathrm{App}(v, \mathrm{App}(u, a))</math>.<br />
Now since <math>u</math> is in <math>A\limp B</math>, <math>\mathrm{App}(u, a)</math> is in <math>B</math> and since <math>v</math> is in <math>B\limp C</math>, <math>\mathrm{App}(v, \mathrm{App}(u, a))</math> is in <math>C</math>.<br />
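The identity <math>\mathrm{App}(\mathrm{Comp}(u, v), a) = \mathrm{App}(v, \mathrm{App}(u, a))</math> can be replayed numerically. The sketch below is our own finite-dimensional illustration (block names are ours); we take <math>u_{11} = v_{11} = 0</math> so that every series in the formulas collapses to its <math>k = 0</math> term, which keeps the nilpotency hypotheses trivially satisfied.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 3
Z = np.zeros((N, N))
def rand():
    return rng.standard_normal((N, N))

# blocks of u and v; u11 = v11 = 0 makes all nilpotency hypotheses trivial
u11, u12, u21, u22 = Z, rand(), rand(), rand()
v11, v12, v21, v22 = Z, rand(), rand(), rand()
a = rand()                       # an operator in A

# components of Comp(u, v), with the series collapsed since v11 = 0
c11 = u11                        # u11 + u12 (sum (v11 u22)^k) v11 u21
c12 = u12 @ v12                  # u12 (sum (v11 u22)^k) v12
c21 = v21 @ u21                  # v21 (sum (u22 v11)^k) u21
c22 = v22 + v21 @ u22 @ v12      # v22 + v21 (sum (u22 v11)^k) u22 v12

# App(Comp(u, v), a), with the series collapsed since c11 = 0
app_comp = c22 + c21 @ a @ c12
# App(v, App(u, a)) computed step by step
app_u = u22 + u21 @ a @ u12
app_v = v22 + v21 @ app_u @ v12
ok = np.allclose(app_comp, app_v)
```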
<br />
If we now consider a type <math>D</math> and an operator <math>w</math> in <math>C\limp D</math> then we have:<br />
: <math>\mathrm{Comp}(\mathrm{Comp}(u, v), w) = \mathrm{Comp}(u,<br />
\mathrm{Comp}(v, w))</math>.<br />
<br />
Putting together the results of this section we finally have:<br />
{{Theorem|<br />
Let GoI(H) be defined by:<br />
* objects are types, ''i.e.'' sets <math>A</math> of operators satisfying: <math>A\biorth = A</math>;<br />
* morphisms from <math>A</math> to <math>B</math> are operators in type <math>A\limp B</math>;<br />
* composition is given by the formula above.<br />
<br />
Then GoI(H) is a star-autonomous category.<br />
}}<br />
<br />
= The Geometry of Interaction as an abstract machine =</div>
Laurent Regnier
http://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interaction
Geometry of interaction
2010-04-20T15:54:56Z
<p>Laurent Regnier: Composition</p>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operators algebra: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math><ref>to be precise one should say from ''the space interpreting'' <math>A</math> to ''the space interpreting'' <math>B</math></ref>, and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'' that bears some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (Multiplicative and Exponential fragment) which is enough to encode lambda-calculus. Since then Girard has proposed several improvements: first the extension to the additive connectives, known as ''Geometry of Interaction 3'', and more recently a complete reformulation using von Neumann algebras that makes it possible to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model, exhibiting its combinatorial nature through a theory of reduction of paths in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata that interact with each other through their common interface. The execution formula has also rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories, following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentioned in [[coherent semantics]] under the name ''symmetric reducibility''. First set a general space called the ''proof space'' because this is where the interpretations of proofs will live. In the case of GoI, the proof space is the space of bounded operators on <math>\ell^2</math>. Note that the proof space generally contains many more objects than interpretations of proofs; in the GoI case we will see that interpretations of proofs happen to be some very peculiar kind of partial isometries.<br />
<br />
Second define a duality on this space that will be denoted as <math>u\perp v</math>. For the GoI, two dualities have proved to work, the first one being nilpotency: two operators <math>u</math> and <math>v</math> are dual if <math>uv</math> is nilpotent, that is, if there is a nonnegative integer <math>n</math> such that <math>(uv)^n = 0</math>. We will denote by <math>\bot</math> the set of nilpotent operators so that the duality reads:<br />
: <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
This duality applies to operators and shouldn't be confused with orthogonality of vectors. To enforce this we will reserve the notation <math>\perp</math> exclusively for the duality of operators and never use it for orthogonality of vectors.<br />
<br />
Last define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for every operator <math>v\in T\orth</math>, that is, every <math>v</math> such that <math>u'v\in\bot</math> for all <math>u'\in T</math>, we have <math>uv\in\bot</math>. In particular note that <math>0</math> belongs to any type.<br />
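The nilpotency duality is easy to illustrate on small matrices. The example below is our own finite-dimensional sketch (not the actual setting of bounded operators on <math>\ell^2</math>): a strictly upper-triangular matrix is dual to the identity, and <math>0</math> is dual to everything.

```python
import numpy as np

# u ⊥ v iff u v is nilpotent; here a 3 x 3 instance
u = np.array([[0., 1, 0],
              [0, 0, 1],
              [0, 0, 0]])                 # shift: e_1 -> e_0, e_2 -> e_1
v = np.eye(3)
uv = u @ v
ok_dual = np.allclose(np.linalg.matrix_power(uv, 3), 0)   # (uv)^3 = 0

# 0 is dual to every operator, hence belongs to any type
ok_zero = np.allclose(np.zeros((3, 3)) @ v, 0)
```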
<br />
It remains now to interpret logical operations, that is associate a type to each formula, an object to each proof and show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
We begin by a brief tour of the operations in Hilbert spaces that will be used in the sequel. In this article <math>H</math> will stand for the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> we denote by <math>\langle x,y\rangle</math> their scalar product:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''orthogonal'' if their scalar product is null. This notion is not to be confused with the orthogonality of operators defined above. The ''norm'' of a vector is the square root of the scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical Hilbert basis of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kronecker symbol. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
In this article we call ''operator'' on <math>H</math> a ''continuous'' linear map from <math>H</math> to <math>H</math>. Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.<br />
<br />
The set of (bounded) operators is denoted <math>\mathcal{B}(H)</math>. This is our proof space.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are annihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel:<br />
<br />
: <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
: <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
: <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>0</math> (the projector on the null subspace) or <math>1</math>, that is an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is self-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^* u = u</math>; as a consequence <math>uu^*</math> is a projector the range of which is the range of <math>u</math>. Similarly <math>u^* u</math> is also a projector the range of which is the domain of <math>u</math>. The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math> that is if <math>u^* u = 1</math> we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries, the equation <math>uu^* + vv^* = 1</math> means that the codomains of <math>u</math> and <math>v</math> are orthogonal and that their direct sum is <math>H</math>.<br />
<br />
=== Partial permutations and partial isometries ===<br />
<br />
It turns out that most of the operators needed to interpret logical operations are generated by ''partial permutations'' on the basis, which in particular entails that they are partial isometries.<br />
<br />
More precisely a partial permutation <math>\varphi</math> on <math>\mathbb{N}</math> is a function defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> which is one-to-one onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
: <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
: if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
: the codomain of <math>\varphi\circ\psi</math> is the image of its domain under <math>\varphi\circ\psi</math>.<br />
<br />
Partial permutations are well known to form a structure of ''inverse monoid'' that we detail now.<br />
<br />
A ''partial identity'' is a partial permutation <math>1_D</math> whose domain and codomain are both equal to a subset <math>D</math> on which <math>1_D</math> is the identity function. Partial identities are idempotent for composition.<br />
<br />
Among partial identities one finds the identity on the empty subset, that is the empty map, which we will denote by <math>0</math>, and the identity on <math>\mathbb{N}</math>, which we will denote by <math>1</math>. This latter is the neutral element for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and which satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other terms if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. As a particular case if <math>\varphi</math> is <math>1_D</math> the partial identity on <math>D</math> then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.<br />
<br />
If <math>\psi</math> is another partial permutation then we have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
If <math>\varphi</math> is a partial permutation then the adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the projector on the domain of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>.<br />
<br />
and similarly the projector on the codomain of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u_\varphi^* = u_{1_{C_\varphi}}</math>.<br />
<br />
{{Proposition|<br />
Let <math>u_\varphi</math> and <math>u_\psi</math> be two partial isometries generated by partial permutations. Then we have:<br />
: <math>u_\varphi + u_\psi = 0</math> iff <math>u_\varphi = u_\psi = 0</math>,<br />
that is iff <math>\varphi</math> and <math>\psi</math> are the nowhere defined partial permutation.<br />
}} Indeed suppose <math>u_\varphi + u_\psi = 0</math> then for any <math>n</math> we have <math>u_\varphi(e_n) + u_\psi(e_n) = e_{\varphi(n)} + e_{\psi(n)} = 0</math> which is possible only if <math>\varphi(n)</math> and <math>\psi(n)</math> are undefined.<br />
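These facts can be experimented with directly. In the sketch below (our own illustration) a partial permutation is encoded as a Python dict and truncated to <math>\{0,\dots,N-1\}</math> acting on <math>\mathbb{C}^N</math>; we check <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>, <math>u_\varphi^* = u_{\varphi^{-1}}</math> and the projector <math>u_\varphi^* u_\varphi = u_{1_{D_\varphi}}</math> on small examples.

```python
import numpy as np

N = 5  # truncation: partial permutations of {0, ..., N-1} acting on C^N

def compose(phi, psi):
    """phi o psi: defined on n iff n in dom(psi) and psi(n) in dom(phi)."""
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def u(phi):
    """The partial isometry u_phi: e_n -> e_phi(n) on dom(phi), 0 elsewhere."""
    m = np.zeros((N, N))
    for n, k in phi.items():
        m[k, n] = 1
    return m

phi = {0: 2, 1: 3}   # a partial permutation with domain {0, 1}
psi = {2: 0, 4: 1}   # a partial permutation with domain {2, 4}

# u_phi u_psi = u_(phi o psi)
ok_comp = np.allclose(u(phi) @ u(psi), u(compose(phi, psi)))
# u_phi* = u_(phi^-1)
inv = {k: n for n, k in phi.items()}
ok_adj = np.allclose(u(phi).T, u(inv))
# u_phi* u_phi is the projector onto span(e_n, n in dom(phi))
ok_dom = np.allclose(u(phi).T @ u(phi), u({n: n for n in phi}))
```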
<br />
=== From operators to matrices: internalization/externalization ===<br />
<br />
It will be convenient to view operators on <math>H</math> as acting on <math>H\oplus H</math>, and conversely. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\mapsto p(x)+q(y)</math> where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are orthogonal, thus we have <math>p^* q = q^* p = 0</math>. Note that we also have <math>pp^* + qq^* = 1</math>.<br />
<br />
The choice of <math>p</math> and <math>q</math> is actually arbitrary, any two partial isometries with full domain and orthogonal codomains would do the job.<br />
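These relations are easy to check concretely. The sketch below is our own finite-dimensional stand-in: <math>H\oplus H\cong H</math> becomes <math>\mathbb{C}^N\oplus\mathbb{C}^N\cong\mathbb{C}^{2N}</math>, and <math>p, q</math> become rectangular matrices, which keeps <math>p^*p = q^*q = 1</math>, <math>p^*q = q^*p = 0</math> and <math>pp^* + qq^* = 1</math> exact.

```python
import numpy as np

N = 4
# p, q : C^N -> C^(2N)
p = np.zeros((2 * N, N))
q = np.zeros((2 * N, N))
for n in range(N):
    p[2 * n, n] = 1      # p(e_n) = e_2n
    q[2 * n + 1, n] = 1  # q(e_n) = e_2n+1

# full domains: p*p = q*q = 1
full_domain = np.allclose(p.T @ p, np.eye(N)) and np.allclose(q.T @ q, np.eye(N))
# orthogonal codomains: p*q = q*p = 0
orth_codomains = np.allclose(p.T @ q, 0) and np.allclose(q.T @ p, 0)
# the codomains together fill the whole space: pp* + qq* = 1
partition = np.allclose(p @ p.T + q @ q.T, np.eye(2 * N))
```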
<br />
Let <math>U</math> be an operator on <math>H\oplus H</math>. We can write <math>U</math> as a matrix:<br />
: <math>U = \begin{pmatrix}<br />
u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}</math><br />
where each <math>u_{ij}</math> operates on <math>H</math>.<br />
<br />
Now through the isomorphism <math>H\oplus H\cong H</math> we may transform <math>U</math> into the operator <math>u</math> on <math>H</math> defined by:<br />
<br />
: <math>u = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math>.<br />
<br />
We call <math>u</math> the ''internalization'' of <math>U</math>. Internalization is compatible with composition (functorial so to speak): if <math>V</math> is another operator on <math>H\oplus H</math> then the internalization of the matrix product <math>UV</math> is the product <math>uv</math>.<br />
<br />
Conversely given an operator <math>u</math> on <math>H</math> we may externalize it obtaining an operator <math>U</math> on <math>H\oplus H</math>:<br />
: <math>u_{11} = p^*up</math>;<br />
: <math>u_{12} = p^*uq</math>;<br />
: <math>u_{21} = q^*up</math>;<br />
: <math>u_{22} = q^*uq</math>.<br />
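Internalization and externalization are mutually inverse, and internalization turns the 2x2 block product into the operator product. The sketch below (our own finite-dimensional illustration, with the same rectangular stand-ins for <math>p, q</math> as above) checks both facts on random blocks.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 3
p = np.zeros((2 * N, N))
q = np.zeros((2 * N, N))
for n in range(N):
    p[2 * n, n] = 1
    q[2 * n + 1, n] = 1

def internalize(u11, u12, u21, u22):
    return p @ u11 @ p.T + p @ u12 @ q.T + q @ u21 @ p.T + q @ u22 @ q.T

def externalize(u):
    return p.T @ u @ p, p.T @ u @ q, q.T @ u @ p, q.T @ u @ q

U = [rng.standard_normal((N, N)) for _ in range(4)]  # u11, u12, u21, u22
V = [rng.standard_normal((N, N)) for _ in range(4)]
u, v = internalize(*U), internalize(*V)

# externalization undoes internalization, block by block
ok_roundtrip = all(np.allclose(a, b) for a, b in zip(U, externalize(u)))

# functoriality: internalizing the 2x2 block product UV gives the product uv
UV = [U[0] @ V[0] + U[1] @ V[2],  # (UV)_11
      U[0] @ V[1] + U[1] @ V[3],  # (UV)_12
      U[2] @ V[0] + U[3] @ V[2],  # (UV)_21
      U[2] @ V[1] + U[3] @ V[3]]  # (UV)_22
ok_functorial = np.allclose(internalize(*UV), u @ v)
```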
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are operators we denote by <math>u\perp v</math> the fact that <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators so that <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
If <math>X</math> is a set of operators also recall that <math>X\orth</math> denotes the set of dual operators:<br />
: <math>X\orth = \{v\in \mathcal{B}(H) \text{ such that }\forall u\in X, uv \in\bot\}</math>.<br />
<br />
There are a few properties of this duality that we will use without mention in the sequel; let <math>X</math> and <math>Y</math> be sets of operators:<br />
: <math>X\subset X\biorth</math>;<br />
: <math>X\orth = X\triorth</math>;<br />
: if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>.<br />
<br />
In particular <math>X\orth</math> is always a type (equal to its biorthogonal). We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
<br />
=== The tensor and the linear application ===<br />
<br />
Given two types <math>A</math> and <math>B</math>, we define their tensor by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by biorthogonal to make sure that we obtain a type. From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
This is an abuse of notation, as this operation is more like a direct sum than a tensor. We will stick to this notation though because it defines the interpretation of the tensor connective of linear logic.<br />
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we see that we have:<br />
: <math>A\limp B = \{u\in\mathcal{B}(H)\text{ such that } \forall v\in A, \forall w\in B\orth,\, u.(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
=== The identity ===<br />
<br />
The interpretation of the identity is an example of the internalization/externalization procedure. Given a type <math>A</math> we are to find an operator <math>\iota</math> in type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. This is the case since <math>u</math> is in <math>A</math> and <math>v</math> in <math>A\orth</math>.<br />
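The computation above can be replayed on matrices. The sketch below is our own finite-dimensional illustration: with the rectangular stand-ins for <math>p, q</math>, it checks <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>, the formula for its square, and nilpotency on a concrete pair where <math>uv</math> is nilpotent.

```python
import numpy as np

N = 3
p = np.zeros((2 * N, N))
q = np.zeros((2 * N, N))
for n in range(N):
    p[2 * n, n] = 1
    q[2 * n + 1, n] = 1

iota = p @ q.T + q @ p.T
u = np.array([[0., 1, 0],
              [0, 0, 1],
              [0, 0, 0]])        # u v is nilpotent with v = 1
v = np.eye(N)
m = iota @ (p @ u @ p.T + q @ v @ q.T)

# iota (p u p* + q v q*) = q u p* + p v q*
ok_step = np.allclose(m, q @ u @ p.T + p @ v @ q.T)
# its square is q u v q* + p v u p*
ok_square = np.allclose(m @ m, q @ u @ v @ q.T + p @ v @ u @ p.T)
# m^(2k) = q (uv)^k q* + p (vu)^k p*, so m is nilpotent since (uv)^3 = 0
ok_nilp = np.allclose(np.linalg.matrix_power(m, 2 * N), 0)
```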
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
<br />
=== The execution formula, version 1: application ===<br />
<br />
Let <math>A</math> and <math>B</math> be two types and <math>u</math> an operator in <math>A\limp B</math>. By definition this means that given <math>v</math> in <math>A</math> and <math>w</math> in <math>B\orth</math> the operator <math>u.(pvp^* + qwq^*)</math> is nilpotent.<br />
<br />
Let us define <math>u_{11}</math> to <math>u_{22}</math> by externalization as above. If we compute <math>(u.(pvp^* + qwq^*))^n</math> we see that this is a finite sum of operators of the form:<br />
# <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
# <math>p(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
# <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math> or<br />
# <math>p(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math><br />
where each of these monimials has exactly <math>n</math> factors of the form <math>u_{i1}v</math> or <math>u_{i2}w</math>.<br />
<br />
From the nilpotency of <math>u.(pvp^* + qwq^*)</math> we deduce that <math>u_{11}v</math> is nilpotent by considering the particular case where <math>w=0</math>. We also have that <math>q^*(u.(pvp^* + qwq^*))^nq</math> is null for <math>n</math> big enough, which means that monomials of type 1 above are null as soon as their length (the number of factors of the form <math>u_{i1}v</math> or <math>u_{i2}w</math>) is bigger than <math>n</math>.<br />
<br />
This implies that the two following operators are nilpotent:<br />
: <math>u_{11}v</math> and<br />
: <math>\bigl(u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12}\bigr)w</math>.<br />
<br />
Conversely if these two operators are nilpotent then one can show that so is <math>u.(pvp^* + qwq^*)</math>. Moreover we have:<br />
: <math>q^*\sum_n\bigl(u.(pvp^* + qwq^*)\bigr)^nq = \sum_n\bigl((u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12})w\bigr)^n</math>.<br />
<br />
We define the ''application of <math>u</math> to <math>v</math>'' as:<br />
: <math>\mathrm{App}(u, v) = u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}</math>.<br />
Note that this is well defined as soon as <math>u_{11}v</math> is nilpotent.<br />
<br />
We summarize what has just been shown in the following theorem:<br />
<br />
{{Theorem|<br />
Let <math>u</math> be an operator, <math>A</math> and <math>B</math> be two types; the following conditions are equivalent:<br />
* <math>u\in A\limp B</math>;<br />
* for any <math>v\in A</math>, we both have:<br />
:: <math>u_{11}v</math> is nilpotent and<br />
:: <math>\mathrm{App}(u, v)\in B</math>.<br />
}}<br />
<br />
{{Corollary|<br />
Under the hypothesis of the theorem we have:<br />
: <math>A\limp B = \{u\in\mathcal{B}(H) \text{ such that }\forall v\in A: u_{11}v\in\bot\text{ and } \mathrm{App}(u, v)\in B\}</math>.<br />
}}<br />
<br />
As an example if we compute the application of the interpretation of the identity <math>\iota</math> in type <math>A\limp A</math> to the operator <math>v\in A</math> then we have:<br />
: <math>\mathrm{App}(\iota, v) = \iota_{22} + \iota_{21}v\sum(\iota_{11}v)^k\iota_{12}</math>.<br />
Now recall that <math>\iota = pq^* + qp^*</math> so that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math> and we thus get:<br />
: <math>\mathrm{App}(\iota, v) = v</math><br />
as expected.<br />
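This computation can be replayed concretely. The sketch below is our own illustration, not part of the construction: it truncates <math>H</math> to <math>\mathbb{C}^{16}</math> and encodes <math>p</math>, <math>q</math>, externalization and the execution formula as integer matrices. In finite dimension <math>p^*p = q^*q = 1</math> only holds on the first half of the basis, so the test operator <math>v</math> is kept supported there.<br />

```python
# Finite-dimensional sketch (ours): truncate H = l2(N) to C^16.
N = 16
P = [[int(i == 2 * j) for j in range(N)] for i in range(N)]      # p(e_n) = e_{2n}
Q = [[int(i == 2 * j + 1) for j in range(N)] for i in range(N)]  # q(e_n) = e_{2n+1}
I = [[int(i == j) for j in range(N)] for i in range(N)]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(N)) for j in range(N)] for i in range(N)]

def add(*ms):
    return [[sum(m[i][j] for m in ms) for j in range(N)] for i in range(N)]

def adj(a):  # adjoint = transpose, since all entries here are real
    return [[a[j][i] for j in range(N)] for i in range(N)]

def ext(u):  # externalization: the components (u11, u12, u21, u22)
    return (mul(adj(P), mul(u, P)), mul(adj(P), mul(u, Q)),
            mul(adj(Q), mul(u, P)), mul(adj(Q), mul(u, Q)))

def geom(a):  # sum_k a^k, finite because a is nilpotent in our tests
    s, t = I, a
    for _ in range(N):
        if not any(any(row) for row in t):
            break
        s, t = add(s, t), mul(t, a)
    return s

def App(u, v):  # execution formula: u22 + u21 v (sum_k (u11 v)^k) u12
    u11, u12, u21, u22 = ext(u)
    return add(u22, mul(u21, mul(v, mul(geom(mul(u11, v)), u12))))

iota = add(mul(P, adj(Q)), mul(Q, adj(P)))  # iota = p q* + q p*

# any operator supported on the first N//2 basis vectors, where the truncation is faithful
v = [[(3 * i + j) % 5 if i < N // 2 and j < N // 2 else 0 for j in range(N)] for i in range(N)]
```

Here <math>\iota_{11} = \iota_{22} = 0</math> holds exactly even in the truncation, so the geometric sum in <math>\mathrm{App}</math> reduces to its <math>k = 0</math> term and <math>\mathrm{App}(\iota, v) = v</math>.<br />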
<br />
=== The tensor rule ===<br />
<br />
Let now <math>A, A', B</math> and <math>B'</math> be types and consider two operators <math>u</math> and <math>u'</math> respectively in <math>A\limp B</math> and <math>A'\limp B'</math>. We define an operator denoted <math>u\tens u'</math> by:<br />
: <math>\begin{align}<br />
u\tens u' &= ppp^*upp^*p^* + qpq^*upp^*p^* + ppp^*uqp^*q^* + qpq^*uqp^*q^*\\<br />
&+ pqp^*u'pq^*p^* + qqq^*u'pq^*p^* + pqp^*u'qq^*q^* + qqq^*u'qq^*q^*<br />
\end{align}</math><br />
<br />
Once again the notation is motivated by linear logic syntax and clashes with linear algebra practice, since what we denote by <math>u\tens u'</math> is actually the internalization of the direct sum <math>u\oplus u'</math>.<br />
<br />
Indeed if we think of <math>u</math> and <math>u'</math> as the internalizations of the matrices:<br />
: <math><br />
\begin{pmatrix}u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}<br />
</math> and <math><br />
\begin{pmatrix}u'_{11} & u'_{12}\\<br />
u'_{21} & u'_{22}<br />
\end{pmatrix}</math><br />
then we may write:<br />
: <math>\begin{align}<br />
u\tens u' &= ppu_{11}p^*p^* + qpu_{21}p^*p^* + ppu_{12}p^*q^* + qpu_{22}p^*q^*\\<br />
&+ pqu'_{11}q^*p^* + qqu'_{21}q^*p^* + pqu'_{12}q^*q^* + qqu'_{22}q^*q^*<br />
\end{align}</math><br />
<br />
Thus the components of <math>u\tens u'</math> are given by:<br />
: <math>(u\tens u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math>.<br />
and we see that <math>u\tens u'</math> is actually the internalization of the matrix:<br />
: <math><br />
\begin{pmatrix}<br />
u_{11} & 0 & u_{12} & 0 \\<br />
0 & u'_{11} & 0 & u'_{12} \\<br />
u_{21} & 0 & u_{22} & 0 \\<br />
0 & u'_{21} & 0 & u'_{22} \\<br />
\end{pmatrix}<br />
</math><br />
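The component identity <math>(u\tens u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math> can be checked mechanically on a finite truncation. The sketch below is our own illustration; the operators are kept supported on the first half of the basis, where the truncated <math>p</math>, <math>q</math> behave like the real ones.<br />

```python
# Finite-dimensional sketch (ours): truncate H = l2(N) to C^16.
N = 16
P = [[int(i == 2 * j) for j in range(N)] for i in range(N)]      # p(e_n) = e_{2n}
Q = [[int(i == 2 * j + 1) for j in range(N)] for i in range(N)]  # q(e_n) = e_{2n+1}

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(N)) for j in range(N)] for i in range(N)]

def add(*ms):
    return [[sum(m[i][j] for m in ms) for j in range(N)] for i in range(N)]

def adj(a):  # adjoint = transpose, since all entries here are real
    return [[a[j][i] for j in range(N)] for i in range(N)]

def ext(u):  # externalization: the components (u11, u12, u21, u22)
    return (mul(adj(P), mul(u, P)), mul(adj(P), mul(u, Q)),
            mul(adj(Q), mul(u, P)), mul(adj(Q), mul(u, Q)))

def tens(u, v):  # the internalization formula for u (x) u' given in the text
    e, f = ext(u), ext(v)
    PP, PQ, QP, QQ = mul(P, P), mul(P, Q), mul(Q, P), mul(Q, Q)
    return add(mul(PP, mul(e[0], adj(PP))), mul(QP, mul(e[2], adj(PP))),
               mul(PP, mul(e[1], adj(QP))), mul(QP, mul(e[3], adj(QP))),
               mul(PQ, mul(f[0], adj(PQ))), mul(QQ, mul(f[2], adj(PQ))),
               mul(PQ, mul(f[1], adj(QQ))), mul(QQ, mul(f[3], adj(QQ))))

# u, u' supported on the first N//2 basis vectors
u = [[(i + 2 * j) % 3 if i < N // 2 and j < N // 2 else 0 for j in range(N)] for i in range(N)]
u2 = [[(2 * i + j + 1) % 4 if i < N // 2 and j < N // 2 else 0 for j in range(N)] for i in range(N)]

components = ext(tens(u, u2))
expected = [add(mul(P, mul(ext(u)[k], adj(P))), mul(Q, mul(ext(u2)[k], adj(Q)))) for k in range(4)]
```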
<br />
We now show that if <math>u</math> and <math>u'</math> are in types <math>A\limp B</math> and <math>A'\limp B'</math>, then <math>u\tens u'</math> is in <math>A\tens A'\limp B\tens B'</math>. For this we consider <math>v</math> and <math>v'</math> respectively in <math>A</math> and <math>A'</math>, so that <math>pvp^* + qv'q^*</math> is in <math>A\tens A'</math>, and we show that <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)\in B\tens B'</math>.<br />
<br />
Since <math>u</math> and <math>u'</math> are in <math>A\limp B</math> and <math>A'\limp B'</math> we have that <math>\mathrm{App}(u, v)</math> and <math>\mathrm{App}(u', v')</math> are respectively in <math>B</math> and <math>B'</math>, thus:<br />
: <math>p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^* \in B\tens B'</math>.<br />
<br />
We know that both <math>u_{11}v</math> and <math>u'_{11}v'</math> are nilpotent. But we have:<br />
: <math>\begin{align}<br />
\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^n<br />
&= \bigl((pu_{11} + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^n\\<br />
&= (pu_{11}vp^* + qu'_{11}v'q^*)^n\\<br />
&= p(u_{11}v)^np^* + q(u'_{11}v')^nq^*<br />
\end{align}</math><br />
<br />
Therefore <math>(u\tens u')_{11}(pvp^* + qv'q^*)</math> is nilpotent. So we can compute <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)</math>:<br />
: <math>\begin{align}<br />
&\mathrm{App}(u\tens u', pvp^* + qv'q^*)\\<br />
&= (u\tens u')_{22} + (u\tens u')_{21}(pvp^* + qv'q^*)\sum\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^k(u\tens u')_{12}\\<br />
&= pu_{22}p^* + qu'_{22}q^* + (pu_{21}p^* + qu'_{21}q^*)(pvp^* + qv'q^*)\sum\bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^k(pu_{12}p^* + qu'_{12}q^*)\\<br />
&= p\bigl(u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}\bigr)p^* + q\bigl(u'_{22} + u'_{21}v'\sum(u'_{11}v')^ku'_{12}\bigr)q^*\\<br />
&= p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^*<br />
\end{align}</math><br />
which thus lives in <math>B\tens B'</math>.<br />
<br />
=== Other monoidal constructions ===<br />
<br />
==== Contraposition ====<br />
<br />
Let <math>A</math> and <math>B</math> be some types; we have:<br />
: <math>A\limp B = A\orth\multimapinv B\orth</math><br />
<br />
Indeed, <math>u\in A\limp B</math> means that for any <math>v</math> and <math>w</math> in respectively <math>A</math> and <math>B\orth</math> we have <math>u.(pvp^* + qwq^*)\in\bot</math> which is exactly the definition of <math>A\orth\multimapinv B\orth</math>.<br />
<br />
We will denote <math>u\orth</math> the operator:<br />
: <math>u\orth = pu_{22}p^* + pu_{21}q^* + qu_{12}p^* + qu_{11}q^*</math><br />
where <math>u_{ij}</math> is given by externalization. Therefore the externalization of <math>u\orth</math> is:<br />
: <math>(u\orth)_{ij} = u_{\bar i\,\bar j}</math> where <math>\bar .</math> is defined by <math>\bar1 = 2, \bar2 = 1</math>.<br />
From this we deduce that <math>u\orth\in B\orth\limp A\orth</math> and that <math>(u\orth)\orth = u</math>.<br />
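Both facts can be tested on a finite truncation. The sketch below is our own illustration, with <math>u\orth</math> built from the component law <math>(u\orth)_{ij} = u_{\bar i\,\bar j}</math> stated above; on the truncation the checks hold exactly for an arbitrary <math>u</math>.<br />

```python
# Finite-dimensional sketch (ours): truncate H = l2(N) to C^16.
N = 16
P = [[int(i == 2 * j) for j in range(N)] for i in range(N)]      # p(e_n) = e_{2n}
Q = [[int(i == 2 * j + 1) for j in range(N)] for i in range(N)]  # q(e_n) = e_{2n+1}

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(N)) for j in range(N)] for i in range(N)]

def add(*ms):
    return [[sum(m[i][j] for m in ms) for j in range(N)] for i in range(N)]

def adj(a):  # adjoint = transpose, since all entries here are real
    return [[a[j][i] for j in range(N)] for i in range(N)]

def ext(u):  # externalization: the components (u11, u12, u21, u22)
    return (mul(adj(P), mul(u, P)), mul(adj(P), mul(u, Q)),
            mul(adj(Q), mul(u, P)), mul(adj(Q), mul(u, Q)))

def dual(u):  # internalization of the swapped components (u_orth)_ij = u_{bar i, bar j}
    u11, u12, u21, u22 = ext(u)
    return add(mul(P, mul(u22, adj(P))), mul(P, mul(u21, adj(Q))),
               mul(Q, mul(u12, adj(P))), mul(Q, mul(u11, adj(Q))))

u = [[(i + 2 * j) % 7 for j in range(N)] for i in range(N)]  # an arbitrary operator
```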
<br />
==== Commutativity ====<br />
Let <math>\sigma</math> be the operator:<br />
: <math>\sigma = ppq^*q^* +pqp^*q^* + qpq^*p^* + qqp^*p^*</math>.<br />
One can check that <math>\sigma</math> is the internalization of the operator <math>S</math> on <math>H\oplus H\oplus H\oplus H</math> defined by: <math>S(x_1\oplus x_2\oplus x_3\oplus x_4) = x_4\oplus x_3\oplus x_2\oplus x_1</math>. In particular the components of <math>\sigma</math> are:<br />
: <math>\sigma_{11} = \sigma_{22} = 0</math>;<br />
: <math>\sigma_{12} = \sigma_{21} = pq^* + qp^*</math>.<br />
<br />
Let <math>A</math> and <math>B</math> be types and <math>u</math> and <math>v</math> be operators in <math>A</math> and <math>B</math>. Then <math>pup^* + qvq^*</math> is in <math>A\tens B</math> and as <math>\sigma_{11}.(pup^* + qvq^*) = 0</math> we may compute:<br />
: <math>\begin{align}<br />
\mathrm{App}(\sigma, pup^* + qvq^*) <br />
&= \sigma_{22} + \sigma_{21}(pup^* + qvq^*)\sum(\sigma_{11}(pup^* + qvq^*))^k\sigma_{12}\\<br />
&= (pq^* + qp^*)(pup^* + qvq^*)(pq^* + qp^*)\\<br />
&= pvp^* + quq^*<br />
\end{align}</math><br />
But <math>pvp^* + quq^*\in B\tens A</math>, thus we have shown that:<br />
: <math>\sigma\in (A\tens B) \limp (B\tens A)</math>.<br />
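A finite-dimensional sketch of this computation (our own illustration, not part of the construction), with <math>u</math> and <math>v</math> supported on a small corner of the basis so that the truncation of <math>p</math>, <math>q</math> is invisible:<br />

```python
# Finite-dimensional sketch (ours): truncate H = l2(N) to C^16.
N = 16
P = [[int(i == 2 * j) for j in range(N)] for i in range(N)]      # p(e_n) = e_{2n}
Q = [[int(i == 2 * j + 1) for j in range(N)] for i in range(N)]  # q(e_n) = e_{2n+1}
I = [[int(i == j) for j in range(N)] for i in range(N)]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(N)) for j in range(N)] for i in range(N)]

def add(*ms):
    return [[sum(m[i][j] for m in ms) for j in range(N)] for i in range(N)]

def adj(a):  # adjoint = transpose, since all entries here are real
    return [[a[j][i] for j in range(N)] for i in range(N)]

def ext(u):  # externalization: the components (u11, u12, u21, u22)
    return (mul(adj(P), mul(u, P)), mul(adj(P), mul(u, Q)),
            mul(adj(Q), mul(u, P)), mul(adj(Q), mul(u, Q)))

def geom(a):  # sum_k a^k, finite because a is nilpotent in our tests
    s, t = I, a
    for _ in range(N):
        if not any(any(row) for row in t):
            break
        s, t = add(s, t), mul(t, a)
    return s

def App(u, v):  # execution formula: u22 + u21 v (sum_k (u11 v)^k) u12
    u11, u12, u21, u22 = ext(u)
    return add(u22, mul(u21, mul(v, mul(geom(mul(u11, v)), u12))))

# sigma = pp q*q* + pq p*q* + qp q*p* + qq p*p*
sigma = add(mul(mul(P, P), adj(mul(Q, Q))), mul(mul(P, Q), adj(mul(Q, P))),
            mul(mul(Q, P), adj(mul(P, Q))), mul(mul(Q, Q), adj(mul(P, P))))

u = [[(i + j + 1) % 3 if i < 4 and j < 4 else 0 for j in range(N)] for i in range(N)]
v = [[(2 * i + j + 1) % 3 if i < 4 and j < 4 else 0 for j in range(N)] for i in range(N)]
X = add(mul(P, mul(u, adj(P))), mul(Q, mul(v, adj(Q))))          # p u p* + q v q*
swapped = add(mul(P, mul(v, adj(P))), mul(Q, mul(u, adj(Q))))    # p v p* + q u q*
```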
<br />
==== Distributivity ====<br />
We get distributivity by considering the operator:<br />
: <math>\delta = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math><br />
that is similarly shown to be in type <math>A\tens(B\tens C)\limp(A\tens B)\tens C</math> for any types <math>A</math>, <math>B</math> and <math>C</math>.<br />
<br />
<br />
==== Weak distributivity ====<br />
We can finally get weak distributivity thanks to the operators:<br />
: <math>\delta_1 = pppp^*q^* + ppqp^*q^*q^* + pqq^*q^*q^* + qpp^*p^*p^* + qqp q^*p^*p^* + qqq q^*p^*</math> and<br />
: <math>\delta_2 = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math>.<br />
<br />
Given three types <math>A</math>, <math>B</math> and <math>C</math> then one can show that:<br />
: <math>\delta_1</math> has type <math>((A\limp B)\tens C)\limp A\limp (B\tens C)</math> and<br />
: <math>\delta_2</math> has type <math>(A\tens(B\limp C))\limp (A\limp B)\limp C</math>.<br />
<br />
=== Execution formula, version 2: composition ===<br />
<br />
Let <math>A</math>, <math>B</math> and <math>C</math> be types and <math>u</math> and <math>v</math> be operators respectively in types <math>A\limp B</math> and <math>B\limp C</math>.<br />
<br />
As usual we will denote by <math>u_{ij}</math> and <math>v_{ij}</math> the operators obtained by externalization of <math>u</math> and <math>v</math>, e.g., <math>u_{11} = p^*up</math>, ...<br />
<br />
As <math>u</math> is in <math>A\limp B</math> we have that <math>\mathrm{App}(u, 0)=u_{22}\in B</math>; similarly as <math>v\in B\limp C</math>, thus <math>v\orth\in C\orth\limp B\orth</math>, we have <math>\mathrm{App}(v\orth, 0) = v_{11}\in B\orth</math>. Thus <math>u_{22}v_{11}</math> is nilpotent.<br />
<br />
We define the operator <math>\mathrm{Comp}(u, v)</math> by:<br />
: <math>\begin{align}<br />
\mathrm{Comp}(u, v) &= p(u_{11} + u_{12}\sum(v_{11}u_{22})^k\,v_{11}u_{21})p^*\\<br />
&+ p(u_{12}\sum(v_{11}u_{22})^k\,v_{12})q^*\\<br />
&+ q(v_{21}\sum(u_{22}v_{11})^k\,u_{21})p^*\\<br />
&+ q(v_{22} + v_{21}\sum(u_{22}v_{11})^k\,u_{22}v_{12})q^*<br />
\end{align}</math><br />
<br />
This is well defined since <math>u_{22}v_{11}</math> is nilpotent. We claim that <math>\mathrm{Comp}(u, v)</math> is in <math>A\limp C</math>.<br />
<br />
Let <math>a</math> be an operator in <math>A</math>. By computation we can check that:<br />
: <math>\mathrm{App}(\mathrm{Comp}(u, v), a) = \mathrm{App}(v, \mathrm{App}(u, a))</math>.<br />
Now since <math>u</math> is in <math>A\limp B</math>, <math>\mathrm{App}(u, a)</math> is in <math>B</math> and since <math>v</math> is in <math>B\limp C</math>, <math>\mathrm{App}(v, \mathrm{App}(u, a))</math> is in <math>C</math>.<br />
<br />
If we now consider a type <math>D</math> and an operator <math>w</math> in <math>C\limp D</math> then a quite lengthy computation shows that:<br />
: <math>\mathrm{Comp}(\mathrm{Comp}(u, v), w) = \mathrm{Comp}(u, \mathrm{Comp}(v, w))</math>.<br />
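The composition can be tested on a finite truncation. The sketch below is our own illustration; it checks it on the identity operator <math>\iota</math>, for which one finds <math>\mathrm{Comp}(\iota, \iota) = \iota</math> exactly, and <math>\mathrm{App}(\mathrm{Comp}(\iota, \iota), a) = \mathrm{App}(\iota, \mathrm{App}(\iota, a)) = a</math> for <math>a</math> supported on the first half of the basis (where the truncation is faithful).<br />

```python
# Finite-dimensional sketch (ours): truncate H = l2(N) to C^16.
N = 16
P = [[int(i == 2 * j) for j in range(N)] for i in range(N)]      # p(e_n) = e_{2n}
Q = [[int(i == 2 * j + 1) for j in range(N)] for i in range(N)]  # q(e_n) = e_{2n+1}
I = [[int(i == j) for j in range(N)] for i in range(N)]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(N)) for j in range(N)] for i in range(N)]

def add(*ms):
    return [[sum(m[i][j] for m in ms) for j in range(N)] for i in range(N)]

def adj(a):  # adjoint = transpose, since all entries here are real
    return [[a[j][i] for j in range(N)] for i in range(N)]

def ext(u):  # externalization: the components (u11, u12, u21, u22)
    return (mul(adj(P), mul(u, P)), mul(adj(P), mul(u, Q)),
            mul(adj(Q), mul(u, P)), mul(adj(Q), mul(u, Q)))

def geom(a):  # sum_k a^k, finite because a is nilpotent in our tests
    s, t = I, a
    for _ in range(N):
        if not any(any(row) for row in t):
            break
        s, t = add(s, t), mul(t, a)
    return s

def App(u, v):  # execution formula: u22 + u21 v (sum_k (u11 v)^k) u12
    u11, u12, u21, u22 = ext(u)
    return add(u22, mul(u21, mul(v, mul(geom(mul(u11, v)), u12))))

def Comp(u, v):  # the four components of Comp(u, v) as given in the text
    u11, u12, u21, u22 = ext(u)
    v11, v12, v21, v22 = ext(v)
    g1 = geom(mul(v11, u22))  # sum_k (v11 u22)^k
    g2 = geom(mul(u22, v11))  # sum_k (u22 v11)^k
    c11 = add(u11, mul(u12, mul(g1, mul(v11, u21))))
    c12 = mul(u12, mul(g1, v12))
    c21 = mul(v21, mul(g2, u21))
    c22 = add(v22, mul(v21, mul(g2, mul(u22, v12))))
    return add(mul(P, mul(c11, adj(P))), mul(P, mul(c12, adj(Q))),
               mul(Q, mul(c21, adj(P))), mul(Q, mul(c22, adj(Q))))

iota = add(mul(P, adj(Q)), mul(Q, adj(P)))  # iota = p q* + q p*
a = [[(i * j + 1) % 4 if i < N // 2 and j < N // 2 else 0 for j in range(N)] for i in range(N)]
```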
<br />
Put together, the results of this section show that:<br />
{{Theorem|<br />
The category whose objects are types and whose morphisms from <math>A</math> to <math>B</math> are the operators in type <math>A\limp B</math> is star-autonomous.<br />
}}<br />
<br />
= The Geometry of Interaction as an abstract machine =</div>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operators algebra: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math><ref>to be precise one should say from ''the space interpreting'' <math>A</math> to ''the space interpreting'' <math>B</math></ref>, and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'' that bears some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (multiplicative and exponential fragment), which is enough to encode lambda-calculus. Since then Girard has proposed several improvements: firstly the extension to the additive connectives, known as ''Geometry of Interaction 3'', and more recently a complete reformulation using von Neumann algebras that makes it possible to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model, exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata that interact with each other through their common interface. Also the execution formula has rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentioned in [[coherent semantics]] under the name ''symmetric reducibility''. First, set a general space called the ''proof space'' because this is where the interpretations of proofs will live. In the case of GoI, the proof space is the space of bounded operators on <math>\ell^2</math>. Note that the proof space generally contains many more objects than interpretations of proofs; in the GoI case we will see that interpretations of proofs happen to be a very peculiar kind of partial isometries.<br />
<br />
Second, define a duality on this space that will be denoted as <math>u\perp v</math>. For the GoI, two dualities have proved to work, the first one being nilpotency: two operators <math>u</math> and <math>v</math> are dual if <math>uv</math> is nilpotent, that is, if there is a nonnegative integer <math>n</math> such that <math>(uv)^n = 0</math>. We will denote by <math>\bot</math> the set of nilpotent operators so that the duality reads:<br />
: <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
This duality applies to operators and shouldn't be confused with orthogonality of vectors. To emphasize this we will reserve the notation <math>\perp</math> exclusively for the duality of operators and never use it for orthogonality of vectors.<br />
<br />
Last, define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for every operator <math>v\in T\orth</math>, that is every <math>v</math> such that <math>u'v\in\bot</math> for all <math>u'\in T</math>, we have <math>uv\in\bot</math>. In particular note that <math>0</math> belongs to any type.<br />
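As a concrete toy instance of this duality (our own example, on a 4-dimensional truncation of <math>H</math>): a nilpotent shift operator is dual to itself, since the product of two shifts is again nilpotent.<br />

```python
# Tiny sketch (ours): on C^4, the one-step shift u is dual to itself,
# because (uv)^n = 0 for some n when v = u.
N = 4
shift = [[int(i == j + 1) for j in range(N)] for i in range(N)]  # e_j -> e_{j+1}, e_3 -> 0

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(N)) for j in range(N)] for i in range(N)]

u, v = shift, shift
uv = mul(u, v)            # shifts by two steps, still nonzero
power, steps = uv, 1
while any(any(row) for row in power):
    power, steps = mul(power, uv), steps + 1
# power is now the zero matrix: (uv)^steps = 0, i.e. u is dual to v
```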
<br />
It remains now to interpret logical operations, that is associate a type to each formula, an object to each proof and show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
We begin with a brief tour of the operations in Hilbert spaces that will be used in the sequel. In this article <math>H</math> will stand for the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> we denote by <math>\langle x,y\rangle</math> their scalar product:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''orthogonal'' if their scalar product is zero. This notion is not to be confused with the orthogonality of operators defined above. The ''norm'' of a vector is the square root of its scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical Hilbert basis of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kronecker symbol. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
In this article we call ''operator'' on <math>H</math> a ''continuous'' linear map from <math>H</math> to <math>H</math>. Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.<br />
<br />
The set of (bounded) operators is denoted <math>\mathcal{B}(H)</math>. This is our proof space.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are annihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel:<br />
<br />
: <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
: <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
: <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>0</math> (the projector on the null subspace) or <math>1</math>, that is an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is self-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^* u = u</math>; as a consequence <math>uu^*</math> is a projector the range of which is the range of <math>u</math>. Similarly <math>u^* u</math> is also a projector the range of which is the domain of <math>u</math>. The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math> that is if <math>u^* u = 1</math> we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries, the equation <math>uu^* + vv^* = 1</math> means that the codomains of <math>u</math> and <math>v</math> are orthogonal and that their direct sum is <math>H</math>.<br />
<br />
=== Partial permutations and partial isometries ===<br />
<br />
It turns out that most of the operators needed to interpret logical operations are generated by ''partial permutations'' on the basis, which in particular entails that they are partial isometries.<br />
<br />
More precisely a partial permutation <math>\varphi</math> on <math>\mathbb{N}</math> is a function defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> which is one-to-one onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
: <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
: if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
: the codomain of <math>\varphi\circ\psi</math> is the image of the domain.<br />
<br />
Partial permutations are well known to form a structure of ''inverse monoid'' that we detail now.<br />
<br />
A ''partial identity'' is a partial permutation <math>1_D</math> whose domain and codomain are both equal to a subset <math>D</math> on which <math>1_D</math> is the identity function. Partial identities are idempotent for composition.<br />
<br />
Among partial identities one finds the identity on the empty subset, that is the empty map, which we will denote by <math>0</math>, and the identity on <math>\mathbb{N}</math>, which we will denote by <math>1</math>. The latter is the neutral element for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and which satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other terms if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. As a particular case if <math>\varphi</math> is <math>1_D</math>, the partial identity on <math>D</math>, then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.<br />
<br />
If <math>\psi</math> is another partial permutation then we have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
If <math>\varphi</math> is a partial permutation then the adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the projector on the domain of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>.<br />
<br />
and similarly the projector on the codomain of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u_\varphi^* = u_{1_{C_\varphi}}</math>.<br />
<br />
{{Proposition|<br />
Let <math>u_\varphi</math> and <math>u_\psi</math> be two partial isometries generated by partial permutations. Then we have:<br />
: <math>u_\varphi + u_\psi = 0</math> iff <math>u_\varphi = u_\psi = 0</math>,<br />
that is iff <math>\varphi</math> and <math>\psi</math> are the nowhere defined partial permutation.<br />
}} Indeed suppose <math>u_\varphi + u_\psi = 0</math> then for any <math>n</math> we have <math>u_\varphi(e_n) + u_\psi(e_n) = e_{\varphi(n)} + e_{\psi(n)} = 0</math> which is possible only if <math>\varphi(n)</math> and <math>\psi(n)</math> are undefined.<br />
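These definitions translate directly into a small program. The sketch below is our own illustration: partial permutations are represented as Python dicts, <math>u_\varphi</math> is realized as a 0/1 matrix on a finite truncation of <math>H</math>, and all indices are kept small enough that the truncation is invisible.<br />

```python
# Sketch (ours): partial permutations as dicts (domain element -> image).
def compose(phi, psi):  # phi o psi, defined where psi(n) lands in the domain of phi
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):  # the inverse partial permutation
    return {v: k for k, v in phi.items()}

N = 8
def iso(phi):  # the matrix of u_phi on a truncation of H (all indices < N)
    return [[int(phi.get(j, -1) == i) for j in range(N)] for i in range(N)]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(N)) for j in range(N)] for i in range(N)]

def adj(a):  # adjoint = transpose, since all entries here are real
    return [[a[j][i] for j in range(N)] for i in range(N)]

phi = {0: 3, 1: 5, 4: 2}  # a partial permutation with domain {0, 1, 4}
psi = {2: 0, 3: 4, 6: 1}
```

On this representation one checks <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>, <math>u_\varphi^* = u_{\varphi^{-1}}</math> and the two projector identities.<br />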
<br />
=== From operators to matrices: internalization/externalization ===<br />
<br />
It will be convenient to view operators on <math>H</math> as acting on <math>H\oplus H</math>, and conversely. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\rightsquigarrow p(x)+q(y)</math> where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are orthogonal, thus we have <math>p^* q = q^* p = 0</math>. Note that we also have <math>pp^* + qq^* = 1</math>.<br />
<br />
The choice of <math>p</math> and <math>q</math> is actually arbitrary, any two partial isometries with full domain and orthogonal codomains would do the job.<br />
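In any finite truncation of <math>H</math> these relations can be checked directly; note however that <math>p^*p = q^*q = 1</math> is the one identity a truncation cannot reproduce exactly, it only holds on the first half of the basis. A sketch (ours):<br />

```python
# Sketch (ours): p, q on the truncation C^16 of H.
N = 16
P = [[int(i == 2 * j) for j in range(N)] for i in range(N)]      # p(e_n) = e_{2n}
Q = [[int(i == 2 * j + 1) for j in range(N)] for i in range(N)]  # q(e_n) = e_{2n+1}
I = [[int(i == j) for j in range(N)] for i in range(N)]
Z = [[0] * N for _ in range(N)]
# projector on the first N//2 basis vectors: what p*p and q*q become when truncated
D = [[int(i == j and i < N // 2) for j in range(N)] for i in range(N)]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(N)) for j in range(N)] for i in range(N)]

def add(*ms):
    return [[sum(m[i][j] for m in ms) for j in range(N)] for i in range(N)]

def adj(a):  # adjoint = transpose, since all entries here are real
    return [[a[j][i] for j in range(N)] for i in range(N)]
```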
<br />
Let <math>U</math> be an operator on <math>H\oplus H</math>. We can write <math>U</math> as a matrix:<br />
: <math>U = \begin{pmatrix}<br />
u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}</math><br />
where each <math>u_{ij}</math> operates on <math>H</math>.<br />
<br />
Now through the isomorphism <math>H\oplus H\cong H</math> we may transform <math>U</math> into the operator <math>u</math> on <math>H</math> defined by:<br />
<br />
: <math>u = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math>.<br />
<br />
We call <math>u</math> the ''internalization'' of <math>U</math>. Internalization is compatible with composition (functorial so to speak): if <math>V</math> is another operator on <math>H\oplus H</math> then the internalization of the matrix product <math>UV</math> is the product <math>uv</math>.<br />
<br />
Conversely given an operator <math>u</math> on <math>H</math> we may externalize it obtaining an operator <math>U</math> on <math>H\oplus H</math>:<br />
: <math>u_{11} = p^*up</math>;<br />
: <math>u_{12} = p^*uq</math>;<br />
: <math>u_{21} = q^*up</math>;<br />
: <math>u_{22} = q^*uq</math>.<br />
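Both directions can be tested on a finite truncation (our own sketch below): internalizing the externalized components gives back <math>u</math> exactly, while the converse round trip, externalizing an internalization, is faithful on components supported on the first half of the basis, where the truncated <math>p</math>, <math>q</math> behave like the real ones.<br />

```python
# Sketch (ours): internalization/externalization on the truncation C^16 of H.
N = 16
P = [[int(i == 2 * j) for j in range(N)] for i in range(N)]      # p(e_n) = e_{2n}
Q = [[int(i == 2 * j + 1) for j in range(N)] for i in range(N)]  # q(e_n) = e_{2n+1}

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(N)) for j in range(N)] for i in range(N)]

def add(*ms):
    return [[sum(m[i][j] for m in ms) for j in range(N)] for i in range(N)]

def adj(a):  # adjoint = transpose, since all entries here are real
    return [[a[j][i] for j in range(N)] for i in range(N)]

def ext(u):  # externalization: the components (u11, u12, u21, u22)
    return (mul(adj(P), mul(u, P)), mul(adj(P), mul(u, Q)),
            mul(adj(Q), mul(u, P)), mul(adj(Q), mul(u, Q)))

def internal(c11, c12, c21, c22):  # internalization: p c11 p* + p c12 q* + q c21 p* + q c22 q*
    return add(mul(P, mul(c11, adj(P))), mul(P, mul(c12, adj(Q))),
               mul(Q, mul(c21, adj(P))), mul(Q, mul(c22, adj(Q))))

u = [[(i + 3 * j) % 7 for j in range(N)] for i in range(N)]  # an arbitrary operator
# a component supported on the first N//2 basis vectors
c = [[(2 * i + j) % 5 if i < N // 2 and j < N // 2 else 0 for j in range(N)] for i in range(N)]
```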
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are operators we denote by <math>u\perp v</math> the fact that <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators so that <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
If <math>X</math> is a set of operators, also recall that <math>X\orth</math> denotes the set of dual operators:<br />
: <math>X\orth = \{v\in \mathcal{B}(H) \text{ such that }\forall u\in X, uv \in\bot\}</math>.<br />
<br />
There are a few properties of this duality that we will use without mention in the sequel; let <math>X</math> and <math>Y</math> be sets of operators:<br />
: <math>X\subset X\biorth</math>;<br />
: if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;<br />
: <math>X\orth = X\triorth</math>.<br />
<br />
In particular <math>X\orth</math> is always a type (equal to its biorthogonal). We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
<br />
=== The tensor and the linear application ===<br />
<br />
Given two types <math>A</math> and <math>B</math>, we define their tensor by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by biorthogonal to make sure that we obtain a type. From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
This is an abuse of notation as this operation is more like a direct sum than a tensor product. We will stick to this notation though, because it defines the interpretation of the tensor connective of linear logic.<br />
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we see that we have:<br />
: <math>A\limp B = \{u\in\mathcal{B}(H)\text{ such that } \forall v\in A, \forall w\in B\orth,\, u.(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
=== The identity ===<br />
<br />
The interpretation of the identity is an example of the internalization/externalization procedure. Given a type <math>A</math> we are to find an operator <math>\iota</math> in type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. It is the case since <math>u</math> is in <math>A</math> and <math>v</math> in <math>A\orth</math>.<br />
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
<br />
=== The execution formula, version 1: application ===<br />
<br />
Let <math>A</math> and <math>B</math> be two types and <math>u</math> an operator in <math>A\limp B</math>. By definition this means that given <math>v</math> in <math>A</math> and <math>w</math> in <math>B\orth</math> the operator <math>u.(pvp^* + qwq^*)</math> is nilpotent.<br />
<br />
Let us define <math>u_{11}</math> to <math>u_{22}</math> by externalization as above. If we compute <math>(u.(pvp^* + qwq^*))^n</math> we see that this is a finite sum of operators of the form:<br />
# <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
# <math>p(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
# <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math> or<br />
# <math>p(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math><br />
where each of these monomials has exactly <math>n</math> factors of the form <math>u_{i1}v</math> or <math>u_{i2}w</math>.<br />
<br />
From the nilpotency of <math>u.(pvp^* + qwq^*)</math> we deduce that <math>u_{11}v</math> is nilpotent by considering the particular case where <math>w=0</math>. We also have that <math>q^*(u.(pvp^* + qwq^*))^nq</math> is null for <math>n</math> big enough, which means that monomials of type 1 above are null as soon as their length (the number of factors of the form <math>u_{i1}v</math> or <math>u_{i2}w</math>) is bigger than <math>n</math>.<br />
<br />
This implies that the two following operators are nilpotent:<br />
: <math>u_{11}v</math> and<br />
: <math>\bigl(u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12}\bigr)w</math>.<br />
<br />
Conversely if these two operators are nilpotent then one can show that so is <math>u.(pvp^* + qwq^*)</math>. Moreover we have:<br />
: <math>q^*\sum_n\bigl(u.(pvp^* + qwq^*)\bigr)^nq = \sum_n\bigl((u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12})w\bigr)^n</math>.<br />
<br />
We define the ''application of <math>u</math> to <math>v</math>'' as:<br />
: <math>\mathrm{App}(u, v) = u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}</math>.<br />
Note that this is well defined as soon as <math>u_{11}v</math> is nilpotent.<br />
<br />
We summarize what has just been shown in the following theorem:<br />
<br />
{{Theorem|<br />
Let <math>u</math> be an operator, <math>A</math> and <math>B</math> be two types; the following conditions are equivalent:<br />
* <math>u\in A\limp B</math>;<br />
* for any <math>v\in A</math>, we both have:<br />
:: <math>u_{11}v</math> is nilpotent and<br />
:: <math>\mathrm{App}(u, v)\in B</math>.<br />
}}<br />
<br />
{{Corollary|<br />
Under the hypothesis of the theorem we have:<br />
: <math>A\limp B = \{u\in\mathcal{B}(H) \text{ such that }\forall v\in A: u_{11}v\in\bot\text{ and } \mathrm{App}(u, v)\in B\}</math>.<br />
}}<br />
<br />
As an example if we compute the application of the interpretation of the identity <math>\iota</math> in type <math>A\limp A</math> to the operator <math>v\in A</math> then we have:<br />
: <math>\mathrm{App}(\iota, v) = \iota_{22} + \iota_{21}v\sum(\iota_{11}v)^k\iota_{12}</math>.<br />
Now recall that <math>\iota = pq^* + qp^*</math> so that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math> and we thus get:<br />
: <math>\mathrm{App}(\iota, v) = v</math><br />
as expected.<br />
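The execution formula can be replayed concretely in a finite-dimensional sketch (our illustration, not part of the original construction): we replace <math>\ell^2</math> by <math>\mathbb{C}^N</math> and the isometries <math>p, q</math> by <math>2N\times N</math> embedding matrices, so that <math>p^*p = q^*q = 1</math> and <math>p^*q = 0</math> hold exactly; the names <code>app</code> and the helper variables are ours. The Neumann series <math>\sum_k(u_{11}v)^k</math> is summed until nilpotency makes the powers vanish.<br />

```python
import numpy as np

N = 4
p = np.zeros((2 * N, N)); q = np.zeros((2 * N, N))
for n in range(N):
    p[2 * n, n] = 1.0      # p(e_n) = e_{2n}
    q[2 * n + 1, n] = 1.0  # q(e_n) = e_{2n+1}

def app(u, v, bound=2 * N):
    """App(u, v) = u22 + u21 v (sum_k (u11 v)^k) u12, summed up to nilpotency."""
    u11, u12 = p.T @ u @ p, p.T @ u @ q   # externalization of u
    u21, u22 = q.T @ u @ p, q.T @ u @ q
    s, power = np.zeros((N, N)), np.eye(N)
    for _ in range(bound):
        s, power = s + power, power @ (u11 @ v)
    assert not power.any(), "u11 v is not nilpotent within the bound"
    return u22 + u21 @ v @ s @ u12

iota = p @ q.T + q @ p.T   # the identity: iota = p q* + q p*
v = np.random.default_rng(0).standard_normal((N, N))
print(np.allclose(app(iota, v), v))   # True: App(iota, v) = v
```

Since <math>\iota_{11} = 0</math>, the series collapses to its first term and the computation reproduces the equation <math>\mathrm{App}(\iota, v) = v</math> above.<br />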
<br />
=== The tensor rule ===<br />
<br />
Let now <math>A, A', B</math> and <math>B'</math> be types and consider two operators <math>u</math> and <math>u'</math> respectively in <math>A\limp B</math> and <math>A'\limp B'</math>. We define an operator denoted <math>u\tens u'</math> by:<br />
: <math>\begin{align}<br />
u\tens u' &= ppp^*upp^*p^* + qpq^*upp^*p^* + ppp^*uqp^*q^* + qpq^*uqp^*q^*\\<br />
&+ pqp^*u'pq^*p^* + qqq^*u'pq^*p^* + pqp^*u'qq^*q^* + qqq^*u'qq^*q^*<br />
\end{align}</math><br />
<br />
Once again the notation is motivated by linear logic syntax and is contradictory with linear algebra practice since what we denote by <math>u\tens u'</math> actually is the internalization of the direct sum <math>u\oplus u'</math>.<br />
<br />
Indeed if we think of <math>u</math> and <math>u'</math> as the internalizations of the matrices:<br />
: <math><br />
\begin{pmatrix}u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}<br />
</math> and <math><br />
\begin{pmatrix}u'_{11} & u'_{12}\\<br />
u'_{21} & u'_{22}<br />
\end{pmatrix}</math><br />
then we may write:<br />
: <math>\begin{align}<br />
u\tens u' &= ppu_{11}p^*p^* + qpu_{21}p^*p^* + ppu_{12}p^*q^* + qpu_{22}p^*q^*\\<br />
&+ pqu'_{11}q^*p^* + qqu'_{21}q^*p^* + pqu'_{12}q^*q^* + qqu'_{22}q^*q^*<br />
\end{align}</math><br />
<br />
Thus the components of <math>u\tens u'</math> are given by:<br />
: <math>(u\tens u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math>.<br />
and we see that <math>u\tens u'</math> is actually the internalization of the matrix:<br />
: <math><br />
\begin{pmatrix}<br />
u_{11} & 0 & u_{12} & 0 \\<br />
0 & u'_{11} & 0 & u'_{12} \\<br />
u_{21} & 0 & u_{22} & 0 \\<br />
0 & u'_{21} & 0 & u'_{22} \\<br />
\end{pmatrix}<br />
</math><br />
<br />
We are now to show that if we suppose <math>u</math> and <math>u'</math> are in types <math>A\limp B</math> and <math>A'\limp B'</math>, then <math>u\tens u'</math> is in <math>A\tens A'\limp B\tens B'</math>. For this we consider <math>v</math> and <math>v'</math> respectively in <math>A</math> and <math>A'</math>, so that <math>pvp^* + qv'q^*</math> is in <math>A\tens A'</math>, and we show that <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)\in B\tens B'</math>.<br />
<br />
Since <math>u</math> and <math>u'</math> are in <math>A\limp B</math> and <math>A'\limp B'</math> we have that <math>\mathrm{App}(u, v)</math> and <math>\mathrm{App}(u', v')</math> are respectively in <math>B</math> and <math>B'</math>, thus:<br />
: <math>p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^* \in B\tens B'</math>.<br />
<br />
We know that both <math>u_{11}v</math> and <math>u'_{11}v'</math> are nilpotent. But we have:<br />
: <math>\begin{align}<br />
\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^n<br />
&= \bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^n\\<br />
&= (pu_{11}vp^* + qu'_{11}v'q^*)^n\\<br />
&= p(u_{11}v)^np^* + q(u'_{11}v')^nq^*<br />
\end{align}</math><br />
<br />
Therefore <math>(u\tens u')_{11}(pvp^* + qv'q^*)</math> is nilpotent. So we can compute <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)</math>:<br />
: <math>\begin{align}<br />
&\mathrm{App}(u\tens u', pvp^* + qv'q^*)\\<br />
&= (u\tens u')_{22} + (u\tens u')_{21}(pvp^* + qv'q^*)\sum\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^k(u\tens u')_{12}\\<br />
&= pu_{22}p^* + qu'_{22}q^* + (pu_{21}p^* + qu'_{21}q^*)(pvp^* + qv'q^*)\sum\bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^k(pu_{12}p^* + qu'_{12}q^*)\\<br />
&= p\bigl(u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}\bigr)p^* + q\bigl(u'_{22} + u'_{21}v'\sum(u'_{11}v')^ku'_{12}\bigr)q^*\\<br />
&= p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^*<br />
\end{align}</math><br />
thus lives in <math>B\tens B'</math>.<br />
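The whole derivation can be checked numerically in the finite sketch (our illustration: rectangular embeddings <math>p_1, q_1</math> pair the inner factors and <math>p_2, q_2</math> the two sides of the implication; the names <code>embeddings</code>, <code>app</code>, <code>tens</code> are ours). We force nilpotency by taking the <math>u_{11}</math> blocks strictly upper triangular and the inputs upper triangular.<br />

```python
import numpy as np

def embeddings(n):
    """p, q : C^n -> C^{2n} with p(e_k) = e_{2k} and q(e_k) = e_{2k+1}."""
    p = np.zeros((2 * n, n)); q = np.zeros((2 * n, n))
    for k in range(n):
        p[2 * k, k] = 1.0; q[2 * k + 1, k] = 1.0
    return p, q

def app(u, v, p, q, bound):
    """App(u, v) = u22 + u21 v (sum_k (u11 v)^k) u12."""
    u11, u12 = p.T @ u @ p, p.T @ u @ q
    u21, u22 = q.T @ u @ p, q.T @ u @ q
    s, power = np.zeros_like(u11), np.eye(u11.shape[0])
    for _ in range(bound):
        s, power = s + power, power @ (u11 @ v)
    assert not power.any()   # nilpotency of u11 v
    return u22 + u21 @ v @ s @ u12

N = 3
p1, q1 = embeddings(N)       # pairing inside A (x) A' and B (x) B'
p2, q2 = embeddings(2 * N)   # pairing of the two sides of the implication
rng = np.random.default_rng(1)

def operator():   # u with u11 strictly upper triangular, so u11 v is nilpotent
    u11 = np.triu(rng.standard_normal((N, N)), 1)
    u12, u21, u22 = (rng.standard_normal((N, N)) for _ in range(3))
    return (p1 @ u11 @ p1.T + p1 @ u12 @ q1.T
            + q1 @ u21 @ p1.T + q1 @ u22 @ q1.T)

def tens(u, up):
    """(u (x) u')_ij = p u_ij p* + q u'_ij q*, internalized at the outer level."""
    blk = lambda a, b: p1 @ (a.T @ u @ b) @ p1.T + q1 @ (a.T @ up @ b) @ q1.T
    return (p2 @ blk(p1, p1) @ p2.T + p2 @ blk(p1, q1) @ q2.T
            + q2 @ blk(q1, p1) @ p2.T + q2 @ blk(q1, q1) @ q2.T)

u, up = operator(), operator()
v, vp = np.triu(rng.standard_normal((N, N))), np.triu(rng.standard_normal((N, N)))
V = p1 @ v @ p1.T + q1 @ vp @ q1.T   # = pvp* + qv'q*, an element of A (x) A'
lhs = app(tens(u, up), V, p2, q2, 2 * N)
rhs = (p1 @ app(u, v, p1, q1, N) @ p1.T
       + q1 @ app(up, vp, p1, q1, N) @ q1.T)
print(np.allclose(lhs, rhs))   # True
```

This confirms <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*) = p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^*</math> on random instances.<br />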
<br />
=== Other monoidal constructions ===<br />
<br />
==== Contraposition ====<br />
<br />
Let <math>A</math> and <math>B</math> be some types; we have:<br />
: <math>A\limp B = A\orth\multimapinv B\orth</math><br />
<br />
Indeed, <math>u\in A\limp B</math> means that for any <math>v</math> and <math>w</math> in respectively <math>A</math> and <math>B\orth</math> we have <math>u.(pvp^* + qwq^*)\in\bot</math> which is exactly the definition of <math>A\orth\multimapinv B\orth</math>.<br />
<br />
We will denote by <math>u\orth</math> the operator:<br />
: <math>u\orth = pu_{22}p^* + pu_{21}q^* + qu_{12}p^* + qu_{11}q^*</math><br />
where <math>u_{ij}</math> is given by externalization. Therefore the externalization of <math>u\orth</math> is:<br />
: <math>(u\orth)_{ij} = u_{\bar i\,\bar j}</math> where <math>\bar .</math> is defined by <math>\bar1 = 2, \bar2 = 1</math>.<br />
From this we deduce that <math>u\orth\in B\orth\limp A\orth</math> and that <math>(u\orth)\orth = u</math>.<br />
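A quick finite-dimensional check of the externalization identity <math>(u\orth)_{ij} = u_{\bar i\,\bar j}</math> and of the involutivity <math>(u\orth)\orth = u</math>, using the same rectangular <math>p, q</math> sketch as before (our illustration; the name <code>contrapose</code> is ours):<br />

```python
import numpy as np

N = 3
p = np.zeros((2 * N, N)); q = np.zeros((2 * N, N))
for n in range(N):
    p[2 * n, n] = 1.0; q[2 * n + 1, n] = 1.0

def contrapose(u):
    """u-perp: swap the blocks of u so that (u-perp)_ij = u_{bar i, bar j}."""
    u11, u12 = p.T @ u @ p, p.T @ u @ q
    u21, u22 = q.T @ u @ p, q.T @ u @ q
    return (p @ u22 @ p.T + p @ u21 @ q.T
            + q @ u12 @ p.T + q @ u11 @ q.T)

u = np.random.default_rng(2).standard_normal((2 * N, 2 * N))
uperp = contrapose(u)
print(np.allclose(p.T @ uperp @ p, q.T @ u @ q))  # (u-perp)_11 = u_22
print(np.allclose(p.T @ uperp @ q, q.T @ u @ p))  # (u-perp)_12 = u_21
print(np.allclose(contrapose(uperp), u))          # (u-perp)-perp = u
```

The round trip works because <math>pp^* + qq^* = 1</math> holds exactly in this sketch, so internalizing the externalized blocks reconstructs the operator.<br />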
<br />
==== Commutativity ====<br />
Let <math>\sigma</math> be the operator:<br />
: <math>\sigma = ppq^*q^* +pqp^*q^* + qpq^*p^* + qqp^*p^*</math>.<br />
One can check that <math>\sigma</math> is the internalization of the operator <math>S</math> on <math>H\oplus H\oplus H\oplus H</math> defined by: <math>S(x_1\oplus x_2\oplus x_3\oplus x_4) = x_4\oplus x_3\oplus x_2\oplus x_1</math>. In particular the components of <math>\sigma</math> are:<br />
: <math>\sigma_{11} = \sigma_{22} = 0</math>;<br />
: <math>\sigma_{12} = \sigma_{21} = pq^* + qp^*</math>.<br />
<br />
Let <math>A</math> and <math>B</math> be types and <math>u</math> and <math>v</math> be operators in <math>A</math> and <math>B</math>. Then <math>pup^* + qvq^*</math> is in <math>A\tens B</math> and as <math>\sigma_{11}.(pup^* + qvq^*) = 0</math> we may compute:<br />
: <math>\begin{align}<br />
\mathrm{App}(\sigma, pup^* + qvq^*) <br />
&= \sigma_{22} + \sigma_{21}(pup^* + qvq^*)\sum(\sigma_{11}(pup^* + qvq^*))^k\sigma_{12}\\<br />
&= (pq^* + qp^*)(pup^* + qvq^*)(pq^* + qp^*)\\<br />
&= pvp^* + quq^*<br />
\end{align}</math><br />
But <math>pvp^* + quq^*\in B\tens A</math>, thus we have shown that:<br />
: <math>\sigma\in (A\tens B) \limp (B\tens A)</math>.<br />
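The computation of <math>\mathrm{App}(\sigma, pup^* + qvq^*)</math> can be verified in the finite sketch, grading the two levels of pairing by rectangular embeddings (our illustration; <code>embeddings</code> is our helper name):<br />

```python
import numpy as np

def embeddings(n):
    """p, q : C^n -> C^{2n} with p(e_k) = e_{2k} and q(e_k) = e_{2k+1}."""
    p = np.zeros((2 * n, n)); q = np.zeros((2 * n, n))
    for k in range(n):
        p[2 * k, k] = 1.0; q[2 * k + 1, k] = 1.0
    return p, q

N = 3
p1, q1 = embeddings(N)       # inner pairing, building A (x) B
p2, q2 = embeddings(2 * N)   # outer pairing, the two sides of the implication

# sigma = ppq*q* + pqp*q* + qpq*p* + qqp*p*, graded by level
sigma = (p2 @ p1 @ q1.T @ q2.T + p2 @ q1 @ p1.T @ q2.T
         + q2 @ p1 @ q1.T @ p2.T + q2 @ q1 @ p1.T @ p2.T)

swap = p1 @ q1.T + q1 @ p1.T
print(not (p2.T @ sigma @ p2).any())              # sigma_11 = 0
print(np.allclose(p2.T @ sigma @ q2, swap))       # sigma_12 = pq* + qp*

rng = np.random.default_rng(5)
u, v = rng.standard_normal((N, N)), rng.standard_normal((N, N))
V = p1 @ u @ p1.T + q1 @ v @ q1.T                 # pup* + qvq*, in A (x) B
# sigma_11 V = 0, so the sum in App collapses to its k = 0 term:
App = q2.T @ sigma @ p2 @ V @ (p2.T @ sigma @ q2)
print(np.allclose(App, p1 @ v @ p1.T + q1 @ u @ q1.T))   # = pvp* + quq*
```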
<br />
==== Distributivity ====<br />
We get distributivity by considering the operator:<br />
: <math>\delta = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math><br />
that is similarly shown to be in type <math>A\tens(B\tens C)\limp(A\tens B)\tens C</math> for any types <math>A</math>, <math>B</math> and <math>C</math>.<br />
<br />
<br />
==== Weak distributivity ====<br />
We can finally get weak distributivity thanks to the operators:<br />
: <math>\delta_1 = pppp^*q^* + ppqp^*q^*q^* + pqq^*q^*q^* + qpp^*p^*p^* + qqpq^*p^*p^* + qqqq^*p^*</math> and<br />
: <math>\delta_2 = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math>.<br />
<br />
Given three types <math>A</math>, <math>B</math> and <math>C</math> then one can show that:<br />
: <math>\delta_1</math> has type <math>((A\limp B)\tens C)\limp A\limp (B\tens C)</math> and<br />
: <math>\delta_2</math> has type <math>(A\tens(B\limp C))\limp (A\limp B)\limp C</math>.<br />
<br />
= The Geometry of Interaction as an abstract machine =</div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interactionGeometry of interaction2010-04-20T09:15:19Z<p>Laurent Regnier: /* The tensor rule */ warning on notation</p>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operator algebra: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math><ref>to be precise one should say from ''the space interpreting'' <math>A</math> to ''the space interpreting'' <math>B</math></ref>, and proof composition (cut rule) as the composition of morphisms. Rather, the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is, a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'', which bears some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (Multiplicative and Exponential fragment) which is enough to encode lambda-calculus. Since then Girard has proposed several improvements: firstly the extension to the additive connectives known as ''Geometry of Interaction 3'' and more recently a complete reformulation using von Neumann algebras that allows one to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model, exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata that interact with each other through their common interface. Also the execution formula has rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentioned in [[coherent semantics]] under the name ''symmetric reducibility''. First set a general space called the ''proof space'' because this is where the interpretations of proofs will live. In the case of GoI, the proof space is the space of bounded operators on <math>\ell^2</math>. Note that the proof space generally contains many more objects than interpretations of proofs; in the GoI case we will see that interpretations of proofs happen to be some very peculiar kind of partial isometries.<br />
<br />
Second define a duality on this space that will be denoted as <math>u\perp v</math>. For the GoI, two dualities have proved to work, the first one being nilpotency: two operators <math>u</math> and <math>v</math> are dual if <math>uv</math> is nilpotent, that is, if there is a nonnegative integer <math>n</math> such that <math>(uv)^n = 0</math>. We will denote by <math>\bot</math> the set of nilpotent operators so that the duality reads:<br />
: <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
This duality applies to operators and shouldn't be confused with orthogonality of vectors. To enforce this we will reserve the notation <math>\perp</math> exclusively for the duality of operators and never use it for orthogonality of vectors.<br />
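For intuition, the nilpotency duality can be tested concretely on small matrices (our illustration; in dimension <math>d</math> an operator is nilpotent iff its <math>d</math>-th power vanishes, by Cayley-Hamilton, so the test is finite):<br />

```python
import numpy as np

def nilpotent(m):
    """In dimension d, m is nilpotent iff m^d = 0 (Cayley-Hamilton)."""
    return not np.linalg.matrix_power(m, m.shape[0]).any()

u = np.triu(np.ones((3, 3)), 1)   # strictly upper triangular
v = np.triu(np.ones((3, 3)), 1)
print(nilpotent(u @ v))           # True: uv is nilpotent, so u and v are dual
print(nilpotent(np.eye(3)))       # False: the identity is not nilpotent
```

The zero operator is dual to every operator, which is why <math>0</math> belongs to every type.<br />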
<br />
Last define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for every operator <math>v\in T\orth</math>, that is, such that <math>u'v\in\bot</math> for all <math>u'\in T</math>, we have <math>uv\in\bot</math>. In particular note that <math>0</math> belongs to any type.<br />
<br />
It remains now to interpret logical operations, that is, to associate a type to each formula and an object to each proof, and to show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
We begin with a brief tour of the operations in Hilbert spaces that will be used in the sequel. In this article <math>H</math> will stand for the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> we denote by <math>\langle x,y\rangle</math> their scalar product:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''orthogonal'' if their scalar product is null. This notion is not to be confused with the duality of operators defined above. The ''norm'' of a vector is the square root of the scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical Hilbert basis of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kronecker symbol. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
In this article we call ''operator'' on <math>H</math> a ''continuous'' linear map from <math>H</math> to <math>H</math>. Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.<br />
<br />
The set of (bounded) operators is denoted <math>\mathcal{B}(H)</math>. This is our proof space.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are annihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel:<br />
<br />
: <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
: <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
: <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>0</math> (the projector on the null subspace) or <math>1</math>, that is, an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is self-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^* u = u</math>; as a consequence <math>uu^*</math> is a projector the range of which is the range of <math>u</math>. Similarly <math>u^* u</math> is also a projector the range of which is the domain of <math>u</math>. The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math>, that is, if <math>u^* u = 1</math>, we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries, the equation <math>uu^* + vv^* = 1</math> means that the codomains of <math>u</math> and <math>v</math> are orthogonal and that their direct sum is <math>H</math>.<br />
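These identities are easy to check on a small example (our illustration: the <math>3\times 3</math> matrix induced by the partial permutation <math>0\mapsto 2</math>, <math>1\mapsto 0</math>, with <math>2</math> undefined):<br />

```python
import numpy as np

# partial isometry induced by the partial permutation 0 -> 2, 1 -> 0
u = np.array([[0., 1., 0.],
              [0., 0., 0.],
              [1., 0., 0.]])

print(np.allclose(u @ u.T @ u, u))                  # u u* u = u
print(np.allclose(u.T @ u, np.diag([1., 1., 0.])))  # projector on the domain
print(np.allclose(u @ u.T, np.diag([1., 0., 1.])))  # projector on the codomain
```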
<br />
=== Partial permutations and partial isometries ===<br />
<br />
It turns out that most of the operators needed to interpret logical operations are generated by ''partial permutations'' on the basis, which in particular entails that they are partial isometries.<br />
<br />
More precisely a partial permutation <math>\varphi</math> on <math>\mathbb{N}</math> is a function defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> which is one-to-one onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
: <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
: if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
: the codomain of <math>\varphi\circ\psi</math> is the image of the domain.<br />
<br />
Partial permutations are well known to form a structure of ''inverse monoid'' that we detail now.<br />
<br />
A ''partial identity'' is a partial permutation <math>1_D</math> whose domain and codomain are both equal to a subset <math>D</math> on which <math>1_D</math> is the identity function. Partial identities are idempotent for composition.<br />
<br />
Among partial identities one finds the identity on the empty subset, that is the empty map, which we will denote by <math>0</math>, and the identity on <math>\mathbb{N}</math> which we will denote by <math>1</math>. This latter is the neutral element for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and which satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other words if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. As a particular case if <math>\varphi</math> is <math>1_D</math> the partial identity on <math>D</math> then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.<br />
<br />
If <math>\psi</math> is another partial permutation then we have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
If <math>\varphi</math> is a partial permutation then the adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the projector on the domain of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>.<br />
<br />
and similarly the projector on the codomain of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u_\varphi^* = u_{1_{C_\varphi}}</math>.<br />
<br />
{{Proposition|<br />
Let <math>u_\varphi</math> and <math>u_\psi</math> be two partial isometries generated by partial permutations. Then we have:<br />
: <math>u_\varphi + u_\psi = 0</math> iff <math>u_\varphi = u_\psi = 0</math>,<br />
that is iff <math>\varphi</math> and <math>\psi</math> are the nowhere defined partial permutation.<br />
}} Indeed, suppose <math>u_\varphi + u_\psi = 0</math>; then for any <math>n</math> we have <math>u_\varphi(e_n) + u_\psi(e_n) = e_{\varphi(n)} + e_{\psi(n)} = 0</math>, which is possible only if <math>\varphi(n)</math> and <math>\psi(n)</math> are both undefined.<br />
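The inverse-monoid structure and the correspondence <math>\varphi\mapsto u_\varphi</math> can be sketched with partial permutations as Python dicts on a truncated basis (our illustration; the names <code>compose</code>, <code>inverse</code> and <code>matrix</code> are ours):<br />

```python
import numpy as np

def compose(phi, psi):
    """(phi o psi)(n) = phi(psi(n)), defined where both steps are defined."""
    return {n: phi[m] for n, m in psi.items() if m in phi}

def inverse(phi):
    return {m: n for n, m in phi.items()}

def matrix(phi, dim):
    """The partial isometry u_phi on the truncated basis e_0 .. e_{dim-1}."""
    u = np.zeros((dim, dim))
    for n, m in phi.items():
        u[m, n] = 1.0
    return u

phi, psi, d = {0: 3, 1: 0}, {0: 1, 2: 0}, 4
print(np.allclose(matrix(phi, d) @ matrix(psi, d),
                  matrix(compose(phi, psi), d)))               # u_phi u_psi = u_{phi o psi}
print(np.allclose(matrix(phi, d).T, matrix(inverse(phi), d)))  # u_phi* = u_{phi^-1}
```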
<br />
=== From operators to matrices: internalization/externalization ===<br />
<br />
It will be convenient to view operators on <math>H</math> as acting on <math>H\oplus H</math>, and conversely. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\rightsquigarrow p(x)+q(y)</math> where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are orthogonal, thus we have <math>p^* q = q^* p = 0</math>. Note that we also have <math>pp^* + qq^* = 1</math>.<br />
<br />
The choice of <math>p</math> and <math>q</math> is actually arbitrary, any two partial isometries with full domain and orthogonal codomains would do the job.<br />
<br />
Let <math>U</math> be an operator on <math>H\oplus H</math>. We can write <math>U</math> as a matrix:<br />
: <math>U = \begin{pmatrix}<br />
u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}</math><br />
where each <math>u_{ij}</math> operates on <math>H</math>.<br />
<br />
Now through the isomorphism <math>H\oplus H\cong H</math> we may transform <math>U</math> into the operator <math>u</math> on <math>H</math> defined by:<br />
<br />
: <math>u = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math>.<br />
<br />
We call <math>u</math> the ''internalization'' of <math>U</math>. Internalization is compatible with composition (functorial so to speak): if <math>V</math> is another operator on <math>H\oplus H</math> then the internalization of the matrix product <math>UV</math> is the product <math>uv</math>.<br />
<br />
Conversely given an operator <math>u</math> on <math>H</math> we may externalize it obtaining an operator <math>U</math> on <math>H\oplus H</math>:<br />
: <math>u_{11} = p^*up</math>;<br />
: <math>u_{12} = p^*uq</math>;<br />
: <math>u_{21} = q^*up</math>;<br />
: <math>u_{22} = q^*uq</math>.<br />
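The round trip between internalization and externalization can be checked in the finite sketch (our illustration: <math>p, q</math> become <math>2N\times N</math> embedding matrices, so that all four identities <math>p^*p = q^*q = 1</math>, <math>p^*q = 0</math> and <math>pp^* + qq^* = 1</math> hold exactly):<br />

```python
import numpy as np

N = 3
p = np.zeros((2 * N, N)); q = np.zeros((2 * N, N))
for n in range(N):
    p[2 * n, n] = 1.0      # p(e_n) = e_{2n}
    q[2 * n + 1, n] = 1.0  # q(e_n) = e_{2n+1}

# the defining identities of p and q
print(np.allclose(p.T @ p, np.eye(N)), np.allclose(q.T @ q, np.eye(N)))
print(not (p.T @ q).any())                            # orthogonal codomains
print(np.allclose(p @ p.T + q @ q.T, np.eye(2 * N)))  # pp* + qq* = 1

rng = np.random.default_rng(3)
u11, u12, u21, u22 = (rng.standard_normal((N, N)) for _ in range(4))
u = p @ u11 @ p.T + p @ u12 @ q.T + q @ u21 @ p.T + q @ u22 @ q.T  # internalization
# externalization recovers the four blocks
print(np.allclose(p.T @ u @ p, u11), np.allclose(q.T @ u @ q, u22))
print(np.allclose(p.T @ u @ q, u12), np.allclose(q.T @ u @ p, u21))
```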
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are operators we denote by <math>u\perp v</math> the fact that <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators so that <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
If <math>X</math> is a set of operators, recall also that <math>X\orth</math> denotes the set of dual operators:<br />
: <math>X\orth = \{v\in \mathcal{B}(H) \text{ such that }\forall u\in X, uv \in\bot\}</math>.<br />
<br />
There are a few properties of this duality that we will use without mention in the sequel; let <math>X</math> and <math>Y</math> be sets of operators:<br />
: <math>X\subset X\biorth</math>;<br />
: <math>X\orth = X\triorth</math>;<br />
: if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>.<br />
<br />
In particular <math>X\orth</math> is always a type (equal to its biorthogonal). We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
<br />
=== The tensor and the linear application ===<br />
<br />
Given two types <math>A</math> and <math>B</math>, we define their tensor by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by biorthogonal to make sure that we obtain a type. From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
This is an abuse of notation as this operation is more like a direct sum than a tensor. We will stick to this notation though because it defines the interpretation of the tensor connective of linear logic.<br />
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we see that we have:<br />
: <math>A\limp B = \{u\in\mathcal{B}(H)\text{ such that } \forall v\in A, \forall w\in B\orth,\, u.(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
=== The identity ===<br />
<br />
The interpretation of the identity is an example of the internalization/externalization procedure. Given a type <math>A</math> we are to find an operator <math>\iota</math> in type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. This is the case since <math>u</math> is in <math>A</math> and <math>v</math> is in <math>A\orth</math>.<br />
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
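The computation above can be checked in the finite sketch (our illustration: rectangular <math>p, q</math>, with <math>u</math> and <math>v</math> strictly upper triangular so that <math>uv</math> is indeed nilpotent):<br />

```python
import numpy as np

N = 3
p = np.zeros((2 * N, N)); q = np.zeros((2 * N, N))
for n in range(N):
    p[2 * n, n] = 1.0; q[2 * n + 1, n] = 1.0

iota = p @ q.T + q @ p.T                      # iota = pq* + qp*

u = np.triu(np.ones((N, N)), 1)               # u in A, v in A-orth: uv nilpotent
v = np.triu(np.ones((N, N)), 1)
w = iota @ (p @ u @ p.T + q @ v @ q.T)
print(np.allclose(w, q @ u @ p.T + p @ v @ q.T))              # = qup* + pvq*
print(np.allclose(w @ w, q @ u @ v @ q.T + p @ v @ u @ p.T))  # square as computed
print(not np.linalg.matrix_power(w, 2 * N).any())             # nilpotent, as uv is
```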
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
<br />
=== The execution formula, version 1: application ===<br />
<br />
Let <math>A</math> and <math>B</math> be two types and <math>u</math> an operator in <math>A\limp B</math>. By definition this means that given <math>v</math> in <math>A</math> and <math>w</math> in <math>B\orth</math> the operator <math>u.(pvp^* + qwq^*)</math> is nilpotent.<br />
<br />
Let us define <math>u_{11}</math> to <math>u_{22}</math> by externalization as above. If we compute <math>(u.(pvp^* + qwq^*))^n</math> we see that this is a finite sum of operators of the form:<br />
# <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
# <math>p(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
# <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math> or<br />
# <math>p(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math><br />
where each of these monimials has exactly <math>n</math> factors of the form <math>u_{i1}v</math> or <math>u_{i2}w</math>.<br />
<br />
From the nilpotency of <math>u.(pvp^* + qwq^*)</math> we deduce that <math>u_{11}v</math> is nilpotent by considering the particular case where <math>w=0</math>. We also have that <math>q^*(u.(pvp^* + qwq^*))^nq</math> is null for <math>n</math> big enough, which means that monomials of type 1 above are null as soon as their length (the number of factors of the form <math>u_{i1}v</math> or <math>u_{i2}w</math>) is bigger than <math>n</math>.<br />
<br />
This implies that the two following operators are nilpotent:<br />
: <math>u_{11}v</math> and<br />
: <math>\bigl(u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12}\bigr)w</math>.<br />
<br />
Conversely if these two operators are nilpotent then one can show that so is <math>u.(pvp^* + qwq^*)</math>. Moreover we have:<br />
: <math>q^*\sum_n\bigl(u.(pvp^* + qwq^*)\bigr)^nq = \sum_n\bigl((u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12})w\bigr)^n</math>.<br />
<br />
We define the ''application of <math>u</math> to <math>v</math>'' as:<br />
: <math>\mathrm{App}(u, v) = u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}</math>.<br />
Note that this is well defined as soon as <math>u_{11}v</math> is nilpotent.<br />
<br />
We summarize what has just been shown in the following theorem:<br />
<br />
{{Theorem|<br />
Let <math>u</math> be an operator, <math>A</math> and <math>B</math> be two types; the following conditions are equivalent:<br />
* <math>u\in A\limp B</math>;<br />
* for any <math>v\in A</math>, we both have:<br />
:: <math>u_{11}v</math> is nilpotent and<br />
:: <math>\mathrm{App}(u, v)\in B</math>.<br />
}}<br />
<br />
{{Corollary|<br />
Under the hypothesis of the theorem we have:<br />
: <math>A\limp B = \{u\in\mathcal{B}(H) \text{ such that }\forall v\in A: u_{11}v\in\bot\text{ and } \mathrm{App}(u, v)\in B\}</math>.<br />
}}<br />
<br />
As an example if we compute the application of the interpretation of the identity <math>\iota</math> in type <math>A\limp A</math> to the operator <math>v\in A</math> then we have:<br />
: <math>\mathrm{App}(\iota, v) = \iota_{22} + \iota_{21}v\sum(\iota_{11}v)^k\iota_{12}</math>.<br />
Now recall that <math>\iota = pq^* + qp^*</math> so that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math> and we thus get:<br />
: <math>\mathrm{App}(\iota, v) = v</math><br />
as expected.<br />
<br />
=== The tensor rule ===<br />
<br />
Let now <math>A, A', B</math> and <math>B'</math> be types and consider two operators <math>u</math> and <math>u'</math> respectively in <math>A\limp B</math> and <math>A\limp B'</math>. We define an operator denoted <math>u\tens u'</math> by:<br />
: <math>\begin{align}<br />
u\tens u' &= ppp^*upp^*p^* + qpq^*upp^*p^* + ppp^*uqp^*q^* + qpq^*uqp^*q^*\\<br />
&+ pqp^*u'pq^*p^* + qqq^*u'pq^*p^* + pqp^*u'qq^*q^* + qqq^*u'qq^*q^*<br />
\end{align}</math><br />
<br />
Once again the notation is motivated by linear logic syntax and is contradictory with linear algebra practice since what we denote by <math>u\tens u'</math> actually is the internalization of the direct sum <math>u\oplus u'</math>.<br />
<br />
Indeed if we think of <math>u</math> and <math>u'</math> as the internalizations of the matrices:<br />
: <math><br />
\begin{pmatrix}u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}<br />
</math> and <math><br />
\begin{pmatrix}u'_{11} & u'_{12}\\<br />
u'_{21} & u'_{22}<br />
\end{pmatrix}</math><br />
then we may write:<br />
: <math>\begin{align}<br />
u\tens u' &= ppu_{11}p^*p^* + qpu_{21}p^*p^* + ppu_{12}p^*q^* + qpu_{22}p^*q^*\\<br />
&+ pqu'_{11}q^*p^* + qqu'_{21}q^*p^* + pqu'_{12}q^*q^* + qqu'_{22}q^*q^*<br />
\end{align}</math><br />
<br />
Thus the components of <math>u\tens u'</math> are given by:<br />
: <math>(u\tens u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math>.<br />
We see that <math>u\tens u'</math> is actually the internalization of the matrix:<br />
: <math><br />
\begin{pmatrix}<br />
u_{11} & 0 & u_{12} & 0 \\<br />
0 & u'_{11} & 0 & u'_{12} \\<br />
u_{21} & 0 & u_{22} & 0 \\<br />
0 & u'_{21} & 0 & u'_{22} \\<br />
\end{pmatrix}<br />
</math><br />
<br />
We now show that if <math>u</math> and <math>u'</math> are in types <math>A\limp B</math> and <math>A'\limp B'</math>, then <math>u\tens u'</math> is in <math>A\tens A'\limp B\tens B'</math>. For this we consider <math>v</math> and <math>v'</math> respectively in <math>A</math> and <math>A'</math>, so that <math>pvp^* + qv'q^*</math> is in <math>A\tens A'</math>, and we show that <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)\in B\tens B'</math>.<br />
<br />
Since <math>u</math> and <math>u'</math> are in <math>A\limp B</math> and <math>A'\limp B'</math> we have that <math>\mathrm{App}(u, v)</math> and <math>\mathrm{App}(u', v')</math> are respectively in <math>B</math> and <math>B'</math>, thus:<br />
: <math>p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^* \in B\tens B'</math>.<br />
<br />
We know that both <math>u_{11}v</math> and <math>u'_{11}v'</math> are nilpotent. But we have:<br />
: <math>\begin{align}<br />
\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^n<br />
&= \bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^n\\<br />
&= (pu_{11}vp^* + qu'_{11}v'q^*)^n\\<br />
&= p(u_{11}v)^np^* + q(u'_{11}v')^nq^*<br />
\end{align}</math><br />
<br />
Therefore <math>(u\tens u')_{11}(pvp^* + qv'q^*)</math> is nilpotent. So we can compute <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)</math>:<br />
: <math>\begin{align}<br />
&\mathrm{App}(u\tens u', pvp^* + qv'q^*)\\<br />
&= (u\tens u')_{22} + (u\tens u')_{21}(pvp^* + qv'q^*)\sum\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^k(u\tens u')_{12}\\<br />
&= pu_{22}p^* + qu'_{22}q^* + (pu_{21}p^* + qu'_{21}q^*)(pvp^* + qv'q^*)\sum\bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^k(pu_{12}p^* + qu'_{12}q^*)\\<br />
&= p\bigl(u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}\bigr)p^* + q\bigl(u'_{22} + u'_{21}v'\sum(u'_{11}v')^ku'_{12}\bigr)q^*\\<br />
&= p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^*<br />
\end{align}</math><br />
which thus lives in <math>B\tens B'</math>.<br />
<br />
=== Other monoidal constructions ===<br />
<br />
Let <math>\sigma</math> be the operator:<br />
: <math>\sigma = ppq^*q^* +pqp^*q^* + qpq^*p^* + qqp^*p^*</math>.<br />
One can check that <math>\sigma</math> is the internalization of the operator <math>S</math> on <math>H\oplus H\oplus H\oplus H</math> defined by: <math>S(x_1\oplus x_2\oplus x_3\oplus x_4) = x_4\oplus x_3\oplus x_2\oplus x_1</math>. In particular the components of <math>\sigma</math> are:<br />
: <math>\sigma_{11} = \sigma_{22} = 0</math>;<br />
: <math>\sigma_{12} = \sigma_{21} = pq^* + qp^*</math>.<br />
<br />
Let <math>A</math> and <math>B</math> be types and <math>u</math> and <math>v</math> be operators in <math>A</math> and <math>B</math>. Then <math>pup^* + qvq^*</math> is in <math>A\tens B</math> and as <math>\sigma_{11}.(pup^* + qvq^*) = 0</math> we may compute:<br />
: <math>\begin{align}<br />
\mathrm{App}(\sigma, pup^* + qvq^*) <br />
&= \sigma_{22} + \sigma_{21}(pup^* + qvq^*)\sum(\sigma_{11}(pup^* + qvq^*))^k\sigma_{12}\\<br />
&= (pq^* + qp^*)(pup^* + qvq^*)(pq^* + qp^*)\\<br />
&= pvp^* + quq^*<br />
\end{align}</math><br />
But <math>pvp^* + quq^*\in B\tens A</math>, thus we have shown that:<br />
: <math>\sigma\in (A\tens B) \limp (B\tens A)</math>.<br />
<br />
We can get associativity by considering the operator:<br />
: <math>\delta = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math><br />
that is similarly shown to be in type <math>A\tens(B\tens C)\limp(A\tens B)\tens C</math> for any types <math>A</math>, <math>B</math> and <math>C</math>.<br />
<br />
We can finally get weak distributivity thanks to the operators:<br />
: <math>\delta_1 = pppp^*q^* + ppqp^*q^*q^* + pqq^*q^*q^* + qpp^*p^*p^* + qqpq^*p^*p^* + qqqq^*p^*</math> and<br />
: <math>\delta_2 = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math>.<br />
<br />
Given three types <math>A</math>, <math>B</math> and <math>C</math> then one can show that:<br />
: <math>\delta_1</math> has type <math>((A\limp B)\tens C)\limp A\limp (B\tens C)</math> and<br />
: <math>\delta_2</math> has type <math>(A\tens(B\limp C))\limp (A\limp B)\limp C</math>.<br />
<br />
= The Geometry of Interaction as an abstract machine =</div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interactionGeometry of interaction2010-04-20T09:06:42Z<p>Laurent Regnier: /* The Geometry of Interaction as operators */ remark on types : 0 belongs to any type</p>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operator algebras: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math><ref>to be precise one should say from ''the space interpreting'' <math>A</math> to ''the space interpreting'' <math>B</math></ref>, and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'', which bears some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (Multiplicative and Exponential fragment), which is enough to encode lambda-calculus. Since then Girard has proposed several improvements: firstly the extension to the additive connectives, known as ''Geometry of Interaction 3'', and more recently a complete reformulation using von Neumann algebras that allows one to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model, exhibiting its combinatorial nature through a theory of path reduction in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata interacting through their common interface. The execution formula has also rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories, following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentioned in [[coherent semantics]] under the name ''symmetric reducibility''. First set a general space called the ''proof space'', because this is where the interpretations of proofs will live. In the case of GoI, the proof space is the space of bounded operators on <math>\ell^2</math>. Note that the proof space generally contains many more objects than interpretations of proofs; in the GoI case we will see that interpretations of proofs happen to be a very peculiar kind of partial isometries.<br />
<br />
Second define a duality on this space that will be denoted as <math>u\perp v</math>. For the GoI, two dualities have proved to work, the first one being nilpotency: two operators <math>u</math> and <math>v</math> are dual if <math>uv</math> is nilpotent, that is, if there is a positive integer <math>n</math> such that <math>(uv)^n = 0</math>. We will denote by <math>\bot</math> the set of nilpotent operators so that the duality reads:<br />
: <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
This duality applies to operators and shouldn't be confused with orthogonality of vectors. To enforce this distinction we will reserve the notation <math>\perp</math> exclusively for the duality of operators and never use it for orthogonality of vectors.<br />
<br />
Last define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for every operator <math>v\in T\orth</math>, that is, such that <math>u'v\in\bot</math> for all <math>u'\in T</math>, we have <math>uv\in\bot</math>. In particular note that <math>0</math> belongs to any type.<br />
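The nilpotency duality can be experimented with on a toy model. The sketch below is an illustration only (not part of the construction): it restricts to operators that map basis vectors to basis vectors, represented as finite Python dicts of indices, where the product of two such operators is nilpotent exactly when iterated composition of the index maps eventually becomes the empty map.<br />

```python
def compose(phi, psi):
    # composition of basis-index maps: defined where the whole chain is defined
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def is_nilpotent(phi, bound=100):
    # phi is nilpotent iff some power of it is the empty map
    power = dict(phi)
    for _ in range(bound):
        if not power:
            return True
        power = compose(phi, power)
    return False

u = {0: 1, 1: 2}        # a shift on three basis vectors: e_0 -> e_1 -> e_2
v = dict(u)
print(is_nilpotent(compose(u, v)))    # True: (uv)^2 = 0, so u ⊥ v

w = {m: n for n, m in u.items()}      # the reverse shift (the adjoint of u)
print(is_nilpotent(compose(u, w)))    # False: uw is a partial identity
```

Here <math>u\perp v</math> holds, while <math>u</math> is not dual to its adjoint <math>w</math>: their product is a nonzero partial identity, hence no power of it vanishes.<br />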
<br />
It remains now to interpret logical operations, that is, to associate a type to each formula and an object to each proof, and to show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
We begin with a brief tour of the operations on Hilbert spaces that will be used in the sequel. In this article <math>H</math> will stand for the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> we denote by <math>\langle x,y\rangle</math> their scalar product:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''orthogonal'' if their scalar product is null. This notion is not to be confused with the orthogonality of operators defined above. The ''norm'' of a vector is the square root of its scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical Hilbert basis of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kronecker symbol. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
In this article we call ''operator'' on <math>H</math> a ''continuous'' linear map from <math>H</math> to <math>H</math>. Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup of the norms of its values on the unit ball:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.<br />
<br />
The set of (bounded) operators is denoted <math>\mathcal{B}(H)</math>. This is our proof space.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are annihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel:<br />
<br />
: <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
: <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
: <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>0</math> (the projector on the null subspace) or <math>1</math>, that is, an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is self-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^* u = u</math>; as a consequence <math>uu^*</math> is a projector the range of which is the range of <math>u</math>. Similarly <math>u^* u</math> is also a projector the range of which is the domain of <math>u</math>. The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math> that is if <math>u^* u = 1</math> we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries, the equation <math>uu^* + vv^* = 1</math> means that the codomains of <math>u</math> and <math>v</math> are orthogonal and that their direct sum is <math>H</math>.<br />
<br />
=== Partial permutations and partial isometries ===<br />
<br />
It turns out that most of the operators needed to interpret logical operations are generated by ''partial permutations'' on the basis, which in particular entails that they are partial isometries.<br />
<br />
More precisely a partial permutation <math>\varphi</math> on <math>\mathbb{N}</math> is a function defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> which is one-to-one onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
: <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
: if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
: the codomain of <math>\varphi\circ\psi</math> is the image of the domain.<br />
<br />
Partial permutations are well known to form a structure of ''inverse monoid'' that we detail now.<br />
<br />
A ''partial identity'' is a partial permutation <math>1_D</math> whose domain and codomain are both equal to a subset <math>D</math>, on which <math>1_D</math> is the identity function. Partial identities are idempotent for composition.<br />
<br />
Among partial identities one finds the identity on the empty subset, that is the empty map, which we will denote by <math>0</math>, and the identity on <math>\mathbb{N}</math>, which we will denote by <math>1</math>. The latter is the neutral element for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and which satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
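These two laws are easy to experiment with. Below is a small Python sketch (an illustration, not part of the text) representing a partial permutation as a finite dict whose keys form the domain and whose values form the codomain:<br />

```python
def compose(phi, psi):
    # phi∘psi is defined on n iff psi(n) is defined and lies in the domain of phi
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inv(phi):
    # the inverse partial permutation, whose domain is the codomain of phi
    return {m: n for n, m in phi.items()}

phi = {0: 3, 1: 4, 2: 5}        # domain {0, 1, 2}, codomain {3, 4, 5}
print(compose(inv(phi), phi))   # {0: 0, 1: 1, 2: 2}: the partial identity on D_phi
print(compose(phi, inv(phi)))   # {3: 3, 4: 4, 5: 5}: the partial identity on C_phi
```

Composing in either order yields a partial identity, as the two displayed laws state.<br />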
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other words, if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. As a particular case if <math>\varphi</math> is <math>1_D</math>, the partial identity on <math>D</math>, then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.<br />
<br />
If <math>\psi</math> is another partial permutation then we have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
If <math>\varphi</math> is a partial permutation then the adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the projector on the domain of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>.<br />
<br />
Similarly the projector on the codomain of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u_\varphi^* = u_{1_{C_\varphi}}</math>.<br />
<br />
{{Proposition|<br />
Let <math>u_\varphi</math> and <math>u_\psi</math> be two partial isometries generated by partial permutations. Then we have:<br />
: <math>u_\varphi + u_\psi = 0</math> iff <math>u_\varphi = u_\psi = 0</math>,<br />
that is, iff <math>\varphi</math> and <math>\psi</math> are both the nowhere-defined partial permutation.<br />
}} Indeed suppose <math>u_\varphi + u_\psi = 0</math>; then for any <math>n</math> we have <math>u_\varphi(e_n) + u_\psi(e_n) = e_{\varphi(n)} + e_{\psi(n)} = 0</math>, which is possible only if both <math>\varphi(n)</math> and <math>\psi(n)</math> are undefined.<br />
<br />
=== From operators to matrices: internalization/externalization ===<br />
<br />
It will be convenient to view operators on <math>H</math> as acting on <math>H\oplus H</math>, and conversely. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\mapsto p(x)+q(y)</math>, where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are orthogonal, thus we have <math>p^* q = q^* p = 0</math>. Note that we also have <math>pp^* + qq^* = 1</math>.<br />
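These four relations can be checked directly on the dict representation of partial permutations, restricted to a finite window of <math>\mathbb{N}</math> (so the sketch below is an illustration rather than a faithful model of <math>H</math>; the window size <math>N</math> is an arbitrary choice):<br />

```python
def compose(phi, psi):
    # composition of partial permutations represented as finite dicts
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inv(phi):
    return {m: n for n, m in phi.items()}

N = 8                                   # finite window (arbitrary)
p = {n: 2 * n for n in range(N)}        # p(e_n) = e_{2n}
q = {n: 2 * n + 1 for n in range(N)}    # q(e_n) = e_{2n+1}

assert compose(inv(p), p) == {n: n for n in range(N)}   # p*p = 1: full domain
assert compose(inv(q), q) == {n: n for n in range(N)}   # q*q = 1
assert compose(inv(p), q) == {}                         # p*q = 0: orthogonal codomains
assert compose(inv(q), p) == {}                         # q*p = 0
# pp* + qq*: the projectors on the codomains cover even and odd indices
cover = {**compose(p, inv(p)), **compose(q, inv(q))}
assert cover == {n: n for n in range(2 * N)}            # pp* + qq* = 1 on the window
```

On the window, <math>p</math> sends the basis onto the even indices and <math>q</math> onto the odd ones, so the two projectors <math>pp^*</math> and <math>qq^*</math> are disjoint and sum to the identity.<br />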
<br />
The choice of <math>p</math> and <math>q</math> is actually arbitrary, any two partial isometries with full domain and orthogonal codomains would do the job.<br />
<br />
Let <math>U</math> be an operator on <math>H\oplus H</math>. We can write <math>U</math> as a matrix:<br />
: <math>U = \begin{pmatrix}<br />
u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}</math><br />
where each <math>u_{ij}</math> operates on <math>H</math>.<br />
<br />
Now through the isomorphism <math>H\oplus H\cong H</math> we may transform <math>U</math> into the operator <math>u</math> on <math>H</math> defined by:<br />
<br />
: <math>u = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math>.<br />
<br />
We call <math>u</math> the ''internalization'' of <math>U</math>. Internalization is compatible with composition (functorial so to speak): if <math>V</math> is another operator on <math>H\oplus H</math> then the internalization of the matrix product <math>UV</math> is the product <math>uv</math> of the internalizations.<br />
<br />
Conversely, given an operator <math>u</math> on <math>H</math> we may externalize it, obtaining an operator <math>U</math> on <math>H\oplus H</math> whose components are:<br />
: <math>u_{11} = p^*up</math>;<br />
: <math>u_{12} = p^*uq</math>;<br />
: <math>u_{21} = q^*up</math>;<br />
: <math>u_{22} = q^*uq</math>.<br />
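The internalization/externalization round trip can be checked on the dict-based sketch of partial-permutation operators (an illustration; the four components <math>u_{ij}</math> below are arbitrary small choices, not taken from the text):<br />

```python
from functools import reduce

def compose(*phis):
    # composition of several basis-index maps, rightmost applied first
    c2 = lambda phi, psi: {n: phi[psi[n]] for n in psi if psi[n] in phi}
    return reduce(c2, phis)

def inv(phi):
    return {m: n for n, m in phi.items()}

def union(*phis):
    # operator sum of maps with pairwise disjoint domains
    out = {}
    for phi in phis:
        assert not set(phi) & set(out), "domains must be disjoint"
        out.update(phi)
    return out

N = 8
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}

u11, u12, u21, u22 = {0: 1}, {2: 3}, {4: 5}, {6: 0}   # arbitrary components
# internalization: u = p u11 p* + p u12 q* + q u21 p* + q u22 q*
u = union(compose(p, u11, inv(p)), compose(p, u12, inv(q)),
          compose(q, u21, inv(p)), compose(q, u22, inv(q)))
# externalization recovers each component
assert compose(inv(p), u, p) == u11
assert compose(inv(p), u, q) == u12
assert compose(inv(q), u, p) == u21
assert compose(inv(q), u, q) == u22
```

Since <math>p^*p = q^*q = 1</math> and <math>p^*q = q^*p = 0</math>, conjugating the internalization by the appropriate pair among <math>p^*,q^*</math> kills three of the four summands and recovers the remaining component.<br />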
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are operators we denote by <math>u\perp v</math> the fact that <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators so that <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
If <math>X</math> is a set of operators, also recall that <math>X\orth</math> denotes the set of dual operators:<br />
: <math>X\orth = \{v\in \mathcal{B}(H) \text{ such that }\forall u\in X, uv \in\bot\}</math>.<br />
<br />
There are a few properties of this duality that we will use without mention in the sequel; let <math>X</math> and <math>Y</math> be sets of operators:<br />
: <math>X\subset X\biorth</math>;<br />
: <math>X\orth = X\triorth</math>;<br />
: if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>.<br />
<br />
In particular <math>X\orth</math> is always a type (equal to its biorthogonal). We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
<br />
=== The tensor and the linear application ===<br />
<br />
Given two types <math>A</math> and <math>B</math>, we define their tensor by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by biorthogonal to make sure that we obtain a type. From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
This is an abuse of notation as this operation is more like a direct sum than a tensor. We will stick to this notation though, because it defines the interpretation of the tensor connective of linear logic.<br />
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we see that we have:<br />
: <math>A\limp B = \{u\in\mathcal{B}(H)\text{ such that } \forall v\in A, \forall w\in B\orth,\, u.(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
=== The identity ===<br />
<br />
The interpretation of the identity is an example of the internalization/externalization procedure. Given a type <math>A</math> we are to find an operator <math>\iota</math> in type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. This is the case since <math>u</math> is in <math>A</math> and <math>v</math> in <math>A\orth</math>.<br />
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
<br />
=== The execution formula, version 1: application ===<br />
<br />
Let <math>A</math> and <math>B</math> be two types and <math>u</math> an operator in <math>A\limp B</math>. By definition this means that given <math>v</math> in <math>A</math> and <math>w</math> in <math>B\orth</math> the operator <math>u.(pvp^* + qwq^*)</math> is nilpotent.<br />
<br />
Let us define <math>u_{11}</math> to <math>u_{22}</math> by externalization as above. If we compute <math>(u.(pvp^* + qwq^*))^n</math> we see that this is a finite sum of operators of the form:<br />
# <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
# <math>p(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
# <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math> or<br />
# <math>p(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math><br />
where each of these monomials has exactly <math>n</math> factors of the form <math>u_{i1}v</math> or <math>u_{i2}w</math>.<br />
<br />
From the nilpotency of <math>u.(pvp^* + qwq^*)</math> we deduce that <math>u_{11}v</math> is nilpotent by considering the particular case where <math>w=0</math>. We also have that <math>q^*(u.(pvp^* + qwq^*))^nq</math> is null for <math>n</math> big enough, which means that monomials of type 1 above are null as soon as their length (the number of factors of the form <math>u_{i1}v</math> or <math>u_{i2}w</math>) is bigger than <math>n</math>.<br />
<br />
This implies that the two following operators are nilpotent:<br />
: <math>u_{11}v</math> and<br />
: <math>\bigl(u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12}\bigr)w</math>.<br />
<br />
Conversely if these two operators are nilpotent then one can show that so is <math>u.(pvp^* + qwq^*)</math>. Moreover we have:<br />
: <math>q^*\sum_n\bigl(u.(pvp^* + qwq^*)\bigr)^nq = \sum_n\bigl((u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12})w\bigr)^n</math>.<br />
<br />
We define the ''application of <math>u</math> to <math>v</math>'' as:<br />
: <math>\mathrm{App}(u, v) = u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}</math>.<br />
Note that this is well defined as soon as <math>u_{11}v</math> is nilpotent.<br />
<br />
We summarize what has just been shown in the following theorem:<br />
<br />
{{Theorem|<br />
Let <math>u</math> be an operator, <math>A</math> and <math>B</math> be two types; the following conditions are equivalent:<br />
* <math>u\in A\limp B</math>;<br />
* for any <math>v\in A</math>, we both have:<br />
:: <math>u_{11}v</math> is nilpotent and<br />
:: <math>\mathrm{App}(u, v)\in B</math>.<br />
}}<br />
<br />
{{Corollary|<br />
Under the hypothesis of the theorem we have:<br />
: <math>A\limp B = \{u\in\mathcal{B}(H) \text{ such that }\forall v\in A: u_{11}v\in\bot\text{ and } \mathrm{App}(u, v)\in B\}</math>.<br />
}}<br />
<br />
As an example if we compute the application of the interpretation of the identity <math>\iota</math> in type <math>A\limp A</math> to the operator <math>v\in A</math> then we have:<br />
: <math>\mathrm{App}(\iota, v) = \iota_{22} + \iota_{21}v\sum(\iota_{11}v)^k\iota_{12}</math>.<br />
Now recall that <math>\iota = pq^* + qp^*</math> so that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math> and we thus get:<br />
: <math>\mathrm{App}(\iota, v) = v</math><br />
as expected.<br />
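This computation can be replayed mechanically. The sketch below is illustrative only: partial-permutation operators are represented as Python dicts on a finite window, and a loop bound stands in for the nilpotency hypothesis on <math>u_{11}v</math>.<br />

```python
from functools import reduce

def compose(*phis):
    c2 = lambda phi, psi: {n: phi[psi[n]] for n in psi if psi[n] in phi}
    return reduce(c2, phis)

def inv(phi):
    return {m: n for n, m in phi.items()}

def union(*phis):
    out = {}
    for phi in phis:
        assert not set(phi) & set(out)
        out.update(phi)
    return out

def app(u, v, p, q):
    # App(u, v) = u22 + u21 v sum_k (u11 v)^k u12
    u11, u12 = compose(inv(p), u, p), compose(inv(p), u, q)
    u21, u22 = compose(inv(q), u, p), compose(inv(q), u, q)
    total, term = dict(u22), u12        # term = (u11 v)^k u12, starting at k = 0
    for _ in range(10000):              # bound standing in for nilpotency of u11∘v
        if not term:
            return total
        total = union(total, compose(u21, v, term))
        term = compose(u11, v, term)
    raise ValueError("u11.v does not look nilpotent")

N = 8
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}
# iota = pq* + qp* swaps each even index with the next odd one
iota = union({2 * n: 2 * n + 1 for n in range(N)},
             {2 * n + 1: 2 * n for n in range(N)})

v = {0: 2, 2: 0, 1: 1}           # an arbitrary partial permutation in the window
assert app(iota, v, p, q) == v   # App(iota, v) = v
```

Externalizing <math>\iota</math> indeed yields <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math> on the window, so the sum collapses to its <math>k = 0</math> term and the application returns <math>v</math> itself.<br />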
<br />
=== The tensor rule ===<br />
<br />
Let now <math>A, A', B</math> and <math>B'</math> be types and consider two operators <math>u</math> and <math>u'</math> respectively in <math>A\limp B</math> and <math>A'\limp B'</math>. We define an operator denoted <math>u\tens u'</math> by:<br />
: <math>\begin{align}<br />
u\tens u' &= ppp^*upp^*p^* + qpq^*upp^*p^* + ppp^*uqp^*q^* + qpq^*uqp^*q^*\\<br />
&+ pqp^*u'pq^*p^* + qqq^*u'pq^*p^* + pqp^*u'qq^*q^* + qqq^*u'qq^*q^*<br />
\end{align}</math><br />
<br />
To understand this formula it is convenient to think of <math>u</math> and <math>u'</math> as the internalizations of the matrices:<br />
: <math>U = \begin{pmatrix}u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}<br />
</math> and <math>U' = \begin{pmatrix}u'_{11} & u'_{12}\\<br />
u'_{21} & u'_{22}<br />
\end{pmatrix}</math><br />
so that we may write:<br />
: <math>\begin{align}<br />
u\tens u' &= ppu_{11}p^*p^* + qpu_{21}p^*p^* + ppu_{12}p^*q^* + qpu_{22}p^*q^*\\<br />
&+ pqu'_{11}q^*p^* + qqu'_{21}q^*p^* + pqu'_{12}q^*q^* + qqu'_{22}q^*q^*<br />
\end{align}</math><br />
<br />
Thus the components of <math>u\tens u'</math> are given by:<br />
: <math>(u\tens u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math>.<br />
<br />
We see that <math>u\tens u'</math> is actually the internalization of the matrix <math>U\tens U'</math> given by:<br />
<br />
: <math><br />
U\tens U' =<br />
\begin{pmatrix}<br />
u_{11} & 0 & u_{12} & 0 \\<br />
0 & u'_{11} & 0 & u'_{12} \\<br />
u_{21} & 0 & u_{22} & 0 \\<br />
0 & u'_{21} & 0 & u'_{22} \\<br />
\end{pmatrix}<br />
</math><br />
<br />
We now show that if <math>u</math> and <math>u'</math> are in types <math>A\limp B</math> and <math>A'\limp B'</math>, then <math>u\tens u'</math> is in <math>A\tens A'\limp B\tens B'</math>. For this we consider <math>v</math> and <math>v'</math> respectively in <math>A</math> and <math>A'</math>, so that <math>pvp^* + qv'q^*</math> is in <math>A\tens A'</math>, and we show that <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)\in B\tens B'</math>.<br />
<br />
Since <math>u</math> and <math>u'</math> are in <math>A\limp B</math> and <math>A'\limp B'</math> we have that <math>\mathrm{App}(u, v)</math> and <math>\mathrm{App}(u', v')</math> are respectively in <math>B</math> and <math>B'</math>, thus:<br />
: <math>p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^* \in B\tens B'</math>.<br />
<br />
We know that both <math>u_{11}v</math> and <math>u'_{11}v'</math> are nilpotent. But we have:<br />
: <math>\begin{align}<br />
\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^n<br />
&= \bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^n\\<br />
&= (pu_{11}vp^* + qu'_{11}v'q^*)^n\\<br />
&= p(u_{11}v)^np^* + q(u'_{11}v')^nq^*<br />
\end{align}</math><br />
<br />
Therefore <math>(u\tens u')_{11}(pvp^* + qv'q^*)</math> is nilpotent. So we can compute <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)</math>:<br />
: <math>\begin{align}<br />
&\mathrm{App}(u\tens u', pvp^* + qv'q^*)\\<br />
&= (u\tens u')_{22} + (u\tens u')_{21}(pvp^* + qv'q^*)\sum\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^k(u\tens u')_{12}\\<br />
&= pu_{22}p^* + qu'_{22}q^* + (pu_{21}p^* + qu'_{21}q^*)(pvp^* + qv'q^*)\sum\bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^k(pu_{12}p^* + qu'_{12}q^*)\\<br />
&= p\bigl(u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}\bigr)p^* + q\bigl(u'_{22} + u'_{21}v'\sum(u'_{11}v')^ku'_{12}\bigr)q^*\\<br />
&= p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^*<br />
\end{align}</math><br />
which thus lives in <math>B\tens B'</math>.<br />
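As a concrete sanity check, the eight-term definition of <math>u\tens u'</math> can be executed literally on a toy model (an illustration: operators generated by partial permutations are Python dicts of basis indices on a finite window; we take <math>u = u' = \iota = pq^* + qp^*</math>, whose application is the identity, so the tensor must act as the identity on <math>pvp^* + qv'q^*</math>):<br />

```python
from functools import reduce

def compose(*phis):
    c2 = lambda phi, psi: {n: phi[psi[n]] for n in psi if psi[n] in phi}
    return reduce(c2, phis)

def inv(phi):
    return {m: n for n, m in phi.items()}

def union(*phis):
    out = {}
    for phi in phis:
        assert not set(phi) & set(out)
        out.update(phi)
    return out

def app(u, v, p, q):
    # App(u, v) = u22 + u21 v sum_k (u11 v)^k u12 (u11∘v assumed nilpotent)
    u11, u12 = compose(inv(p), u, p), compose(inv(p), u, q)
    u21, u22 = compose(inv(q), u, p), compose(inv(q), u, q)
    total, term = dict(u22), u12
    for _ in range(10000):
        if not term:
            return total
        total = union(total, compose(u21, v, term))
        term = compose(u11, v, term)
    raise ValueError("u11.v does not look nilpotent")

N = 16
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}
iota = union({2 * n: 2 * n + 1 for n in range(2 * N)},
             {2 * n + 1: 2 * n for n in range(2 * N)})
u = u_prime = iota

# the eight summands of u ⊗ u', transcribed term by term
t = union(compose(p, p, inv(p), u, p, inv(p), inv(p)),
          compose(q, p, inv(q), u, p, inv(p), inv(p)),
          compose(p, p, inv(p), u, q, inv(p), inv(q)),
          compose(q, p, inv(q), u, q, inv(p), inv(q)),
          compose(p, q, inv(p), u_prime, p, inv(q), inv(p)),
          compose(q, q, inv(q), u_prime, p, inv(q), inv(p)),
          compose(p, q, inv(p), u_prime, q, inv(q), inv(q)),
          compose(q, q, inv(q), u_prime, q, inv(q), inv(q)))

v, v_prime = {0: 1, 1: 0}, {0: 0, 1: 2, 2: 1}
w = union(compose(p, v, inv(p)), compose(q, v_prime, inv(q)))   # pvp* + qv'q*
# App(u⊗u', pvp* + qv'q*) = p App(u,v) p* + q App(u',v') q*, here just w
assert app(t, w, p, q) == w
```

Since <math>\mathrm{App}(\iota, x) = x</math>, the factorization established above predicts that <math>\iota\tens\iota</math> applied to <math>pvp^* + qv'q^*</math> returns it unchanged, which is what the assertion verifies.<br />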
<br />
=== Other monoidal constructions ===<br />
<br />
Let <math>\sigma</math> be the operator:<br />
: <math>\sigma = ppq^*q^* +pqp^*q^* + qpq^*p^* + qqp^*p^*</math>.<br />
One can check that <math>\sigma</math> is the internalization of the operator <math>S</math> on <math>H\oplus H\oplus H\oplus H</math> defined by: <math>S(x_1\oplus x_2\oplus x_3\oplus x_4) = x_4\oplus x_3\oplus x_2\oplus x_1</math>. In particular the components of <math>\sigma</math> are:<br />
: <math>\sigma_{11} = \sigma_{22} = 0</math>;<br />
: <math>\sigma_{12} = \sigma_{21} = pq^* + qp^*</math>.<br />
<br />
Let <math>A</math> and <math>B</math> be types and <math>u</math> and <math>v</math> be operators in <math>A</math> and <math>B</math>. Then <math>pup^* + qvq^*</math> is in <math>A\tens B</math> and as <math>\sigma_{11}.(pup^* + qvq^*) = 0</math> we may compute:<br />
: <math>\begin{align}<br />
\mathrm{App}(\sigma, pup^* + qvq^*) <br />
&= \sigma_{22} + \sigma_{21}(pup^* + qvq^*)\sum(\sigma_{11}(pup^* + qvq^*))^k\sigma_{12}\\<br />
&= (pq^* + qp^*)(pup^* + qvq^*)(pq^* + qp^*)\\<br />
&= pvp^* + quq^*<br />
\end{align}</math><br />
But <math>pvp^* + quq^*\in B\tens A</math>, thus we have shown that:<br />
: <math>\sigma\in (A\tens B) \limp (B\tens A)</math>.<br />
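This computation too can be replayed concretely with partial-permutation operators as Python dicts on a finite window (an illustration; the operators <math>u</math> and <math>v</math> below are arbitrary small examples):<br />

```python
from functools import reduce

def compose(*phis):
    c2 = lambda phi, psi: {n: phi[psi[n]] for n in psi if psi[n] in phi}
    return reduce(c2, phis)

def inv(phi):
    return {m: n for n, m in phi.items()}

def union(*phis):
    out = {}
    for phi in phis:
        assert not set(phi) & set(out)
        out.update(phi)
    return out

def app(u, v, p, q):
    # App(u, v) = u22 + u21 v sum_k (u11 v)^k u12 (u11∘v assumed nilpotent)
    u11, u12 = compose(inv(p), u, p), compose(inv(p), u, q)
    u21, u22 = compose(inv(q), u, p), compose(inv(q), u, q)
    total, term = dict(u22), u12
    for _ in range(10000):
        if not term:
            return total
        total = union(total, compose(u21, v, term))
        term = compose(u11, v, term)
    raise ValueError("u11.v does not look nilpotent")

N = 8
p = {n: 2 * n for n in range(N)}
q = {n: 2 * n + 1 for n in range(N)}
# sigma = ppq*q* + pqp*q* + qpq*p* + qqp*p*, transcribed term by term
sigma = union(compose(p, p, inv(q), inv(q)), compose(p, q, inv(p), inv(q)),
              compose(q, p, inv(q), inv(p)), compose(q, q, inv(p), inv(p)))

u = {0: 1, 1: 0}
v = {0: 0, 1: 2, 2: 1}
w = union(compose(p, u, inv(p)), compose(q, v, inv(q)))        # pup* + qvq*
swapped = union(compose(p, v, inv(p)), compose(q, u, inv(q)))  # pvp* + quq*
assert app(sigma, w, p, q) == swapped
```

On the window one finds <math>\sigma_{11} = \sigma_{22} = 0</math> and <math>\sigma_{12} = \sigma_{21} = pq^* + qp^*</math> as in the text, and the application indeed exchanges the two components of the tensor.<br />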
<br />
We can get distributivity by considering the operator:<br />
: <math>\delta = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math><br />
that is similarly shown to be in type <math>A\tens(B\tens C)\limp(A\tens B)\tens C</math> for any types <math>A</math>, <math>B</math> and <math>C</math>.<br />
<br />
We can finally get weak distributivity thanks to the operators:<br />
: <math>\delta_1 = pppp^*q^* + ppqp^*q^*q^* + pqq^*q^*q^* + qpp^*p^*p^* + qqpq^*p^*p^* + qqqq^*p^*</math> and<br />
: <math>\delta_2 = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math>.<br />
<br />
Given three types <math>A</math>, <math>B</math> and <math>C</math> then one can show that:<br />
: <math>\delta_1</math> has type <math>((A\limp B)\tens C)\limp A\limp (B\tens C)</math> and<br />
: <math>\delta_2</math> has type <math>(A\tens(B\limp C))\limp (A\limp B)\limp C</math>.<br />
<br />
= The Geometry of Interaction as an abstract machine =</div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interactionGeometry of interaction2010-04-20T08:50:50Z<p>Laurent Regnier: /* The execution formula, version 1: application */ precision</p>
<hr />
<div>The ''geometry of interaction'', GoI for short, was defined in the early nineties by Girard as an interpretation of linear logic into operator algebras: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math><ref>to be precise one should say from ''the space interpreting'' <math>A</math> to ''the space interpreting'' <math>B</math></ref>, and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'', which bears some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (the multiplicative and exponential fragment), which is enough to encode lambda-calculus. Since then Girard has proposed several improvements: firstly the extension to the additive connectives, known as ''Geometry of Interaction 3'', and more recently a complete reformulation using von Neumann algebras that allows one to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model, exhibiting its combinatorial nature through a theory of path reduction in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata that interact with each other through their common interface. The execution formula has also rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus, in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories, following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentioned in [[coherent semantics]] under the name ''symmetric reducibility''. First set a general space called the ''proof space'' because this is where the interpretations of proofs will live. In the case of GoI, the proof space is the space of bounded operators on <math>\ell^2</math>. Note that the proof space generally contains many more objects than interpretations of proofs; in the GoI case we will see that interpretations of proofs happen to be some very peculiar kind of partial isometries.<br />
<br />
Second define a duality on this space that will be denoted as <math>u\perp v</math>. For the GoI, two dualities have proved to work, the first one being nilpotency: two operators <math>u</math> and <math>v</math> are dual if <math>uv</math> is nilpotent, that is, if there is a nonnegative integer <math>n</math> such that <math>(uv)^n = 0</math>. We will denote by <math>\bot</math> the set of nilpotent operators so that the duality reads:<br />
: <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
This duality applies to operators and should not be confused with the orthogonality of vectors. To enforce the distinction we will reserve the notation <math>\perp</math> exclusively for the duality of operators and never use it for the orthogonality of vectors.<br />
<br />
Last define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for every operator <math>v</math>, if <math>v\in T\orth</math>, that is if <math>u'v\in\bot</math> for all <math>u'\in T</math>, then <math>uv\in\bot</math>.<br />
<br />
It remains now to interpret logical operations, that is, to associate a type to each formula and an object to each proof, and to show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
We begin with a brief tour of the operations in Hilbert spaces that will be used in the sequel. In this article <math>H</math> will stand for the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> we denote by <math>\langle x,y\rangle</math> their scalar product:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''orthogonal'' if their scalar product is null. This notion is not to be confused with the duality of operators defined above. The ''norm'' of a vector is the square root of its scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical Hilbert basis of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kronecker symbol. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
In this article we call ''operator'' on <math>H</math> a ''continuous'' linear map from <math>H</math> to <math>H</math>. Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.<br />
<br />
The set of (bounded) operators is denoted <math>\mathcal{B}(H)</math>. This is our proof space.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are annihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel:<br />
<br />
: <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
: <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
: <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>0</math> (the projector on the null subspace) or <math>1</math>, that is an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is self-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^* u = u</math>; as a consequence <math>uu^*</math> is a projector the range of which is the range of <math>u</math>. Similarly <math>u^* u</math> is also a projector the range of which is the domain of <math>u</math>. The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math> that is if <math>u^* u = 1</math> we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries, the equation <math>uu^* + vv^* = 1</math> means that the codomains of <math>u</math> and <math>v</math> are orthogonal and that their direct sum is <math>H</math>.<br />
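To make these laws concrete, here is a minimal finite-dimensional sketch of our own (the article itself works on the infinite-dimensional <math>\ell^2</math>): a truncated shift matrix is a partial isometry, and <math>uu^*</math>, <math>u^*u</math> are the projectors on its codomain and domain.<br />

```python
import numpy as np

# A truncated shift on C^3: e0 -> 0, e1 -> e0, e2 -> e1.
# (Illustrative finite sketch; on the infinite-dimensional H = l^2
# the analogous operator is an exact partial isometry as well.)
u = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
adj = u.T.conj()          # the adjoint u*

# Partial isometry law: u u* u = u
assert np.allclose(u @ adj @ u, u)

# u u* is the projector on the codomain, u* u the one on the domain
p_codom = u @ adj
p_dom = adj @ u
assert np.allclose(p_codom @ p_codom, p_codom)
assert np.allclose(p_dom @ p_dom, p_dom)
```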
<br />
=== Partial permutations and partial isometries ===<br />
<br />
It turns out that most of the operators needed to interpret logical operations are generated by ''partial permutations'' on the basis, which in particular entails that they are partial isometries.<br />
<br />
More precisely a partial permutation <math>\varphi</math> on <math>\mathbb{N}</math> is a function defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> which is one-to-one onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
: <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
: if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
: the codomain of <math>\varphi\circ\psi</math> is the image of the domain.<br />
<br />
Partial permutations are well known to form an ''inverse monoid'', a structure that we detail now.<br />
<br />
A ''partial identity'' is a partial permutation <math>1_D</math> whose domain and codomain are both equal to a subset <math>D</math> on which <math>1_D</math> is the identity function. Partial identities are idempotent for composition.<br />
<br />
Among partial identities one finds the identity on the empty subset, that is the empty map, which we will denote by <math>0</math>, and the identity on <math>\mathbb{N}</math>, which we will denote by <math>1</math>. The latter is the neutral element for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and which satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
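The inverse monoid laws above can be sketched directly by representing a partial permutation as a finite dict (an illustration of ours; `compose`, `inverse` and `identity_on` are hypothetical helper names, not notation from the article):<br />

```python
# Partial permutations as finite dicts: a sketch, with domains
# restricted to finite subsets of N, enough to check the laws.
def compose(phi, psi):
    """phi o psi: defined on n when psi(n) is defined and lands in dom(phi)."""
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    return {v: k for k, v in phi.items()}

def identity_on(d):
    return {n: n for n in d}

phi = {0: 2, 1: 5, 3: 0}   # one-to-one from {0, 1, 3} onto {2, 5, 0}
assert compose(inverse(phi), phi) == identity_on(phi.keys())    # 1_{D_phi}
assert compose(phi, inverse(phi)) == identity_on(phi.values())  # 1_{C_phi}
```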
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other terms if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. As a particular case if <math>\varphi</math> is <math>1_D</math>, the partial identity on <math>D</math>, then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.<br />
<br />
If <math>\psi</math> is another partial permutation then we have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
If <math>\varphi</math> is a partial permutation then the adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the projector on the domain of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>.<br />
<br />
and similarly the projector on the codomain of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u_\varphi^* = u_{1_{C_\varphi}}</math>.<br />
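On a truncated basis <math>u_\varphi</math> can be realized as a 0/1 matrix, and the three identities above checked numerically (a finite sketch of ours; `u_of` is our own helper name):<br />

```python
import numpy as np

N = 6  # truncation of the basis e_0 .. e_5 (sketch only)

def u_of(phi, n=N):
    """Matrix of u_phi on the truncated basis: u_phi(e_k) = e_{phi(k)}."""
    m = np.zeros((n, n))
    for k, v in phi.items():
        m[v, k] = 1.
    return m

phi = {0: 2, 1: 5, 3: 0}
psi = {2: 1, 5: 3}

# Functoriality: u_phi u_psi = u_{phi o psi}
comp = {k: phi[psi[k]] for k in psi if psi[k] in phi}
assert np.allclose(u_of(phi) @ u_of(psi), u_of(comp))

# The adjoint realizes the inverse permutation
inv = {v: k for k, v in phi.items()}
assert np.allclose(u_of(phi).T, u_of(inv))

# u*_phi u_phi is the projector on the span of (e_k), k in dom(phi)
assert np.allclose(u_of(phi).T @ u_of(phi), u_of({k: k for k in phi}))
```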
<br />
{{Proposition|<br />
Let <math>u_\varphi</math> and <math>u_\psi</math> be two partial isometries generated by partial permutations. Then we have:<br />
: <math>u_\varphi + u_\psi = 0</math> iff <math>u_\varphi = u_\psi = 0</math>,<br />
that is iff <math>\varphi</math> and <math>\psi</math> are the nowhere defined partial permutation.<br />
}} Indeed suppose <math>u_\varphi + u_\psi = 0</math>; then for any <math>n</math> we have <math>u_\varphi(e_n) + u_\psi(e_n) = e_{\varphi(n)} + e_{\psi(n)} = 0</math>, which is possible only if <math>\varphi(n)</math> and <math>\psi(n)</math> are both undefined, since two basis vectors cannot cancel each other.<br />
<br />
=== From operators to matrices: internalization/externalization ===<br />
<br />
It will be convenient to view operators on <math>H</math> as acting on <math>H\oplus H</math>, and conversely. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\rightsquigarrow p(x)+q(y)</math> where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are orthogonal, thus we have <math>p^* q = q^* p = 0</math>. Note that we also have <math>pp^* + qq^* = 1</math>.<br />
<br />
The choice of <math>p</math> and <math>q</math> is actually arbitrary, any two partial isometries with full domain and orthogonal codomains would do the job.<br />
<br />
Let <math>U</math> be an operator on <math>H\oplus H</math>. We can write <math>U</math> as a matrix:<br />
: <math>U = \begin{pmatrix}<br />
u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}</math><br />
where each <math>u_{ij}</math> operates on <math>H</math>.<br />
<br />
Now through the isomorphism <math>H\oplus H\cong H</math> we may transform <math>U</math> into the operator <math>u</math> on <math>H</math> defined by:<br />
<br />
: <math>u = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math>.<br />
<br />
We call <math>u</math> the ''internalization'' of <math>U</math>. Internalization is compatible with composition (functorial, so to speak): if <math>V</math> is another operator on <math>H\oplus H</math> then the internalization of the matrix product <math>UV</math> is the product <math>uv</math> of the internalizations.<br />
<br />
Conversely given an operator <math>u</math> on <math>H</math> we may externalize it, obtaining an operator <math>U</math> on <math>H\oplus H</math> whose components are:<br />
: <math>u_{11} = p^*up</math>;<br />
: <math>u_{12} = p^*uq</math>;<br />
: <math>u_{21} = q^*up</math>;<br />
: <math>u_{22} = q^*uq</math>.<br />
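In a finite truncation <math>p</math> and <math>q</math> can be modelled as rectangular 0/1 matrices from <math>\mathbb{C}^N</math> to <math>\mathbb{C}^{2N}</math>; the relations <math>p^*p = q^*q = 1</math>, <math>p^*q = 0</math> and <math>pp^* + qq^* = 1</math> then hold exactly, and externalization recovers the components of an internalized operator. This is a sketch of ours, replacing the infinite-dimensional <math>H</math> by a dimension doubling:<br />

```python
import numpy as np

N = 4  # truncated sketch: p, q : C^N -> C^{2N} stand for p(e_n)=e_{2n}, q(e_n)=e_{2n+1}
p = np.zeros((2 * N, N)); q = np.zeros((2 * N, N))
for n in range(N):
    p[2 * n, n] = 1.
    q[2 * n + 1, n] = 1.

# full domains, orthogonal codomains, and pp* + qq* = 1
assert np.allclose(p.T @ p, np.eye(N)) and np.allclose(q.T @ q, np.eye(N))
assert np.allclose(p.T @ q, np.zeros((N, N)))
assert np.allclose(p @ p.T + q @ q.T, np.eye(2 * N))

# internalize a random U on H (+) H, then externalize it back
rng = np.random.default_rng(0)
u11, u12, u21, u22 = (rng.standard_normal((N, N)) for _ in range(4))
u = p @ u11 @ p.T + p @ u12 @ q.T + q @ u21 @ p.T + q @ u22 @ q.T
assert np.allclose(p.T @ u @ p, u11) and np.allclose(p.T @ u @ q, u12)
assert np.allclose(q.T @ u @ p, u21) and np.allclose(q.T @ u @ q, u22)
```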
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are operators we denote by <math>u\perp v</math> the fact that <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators so that <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
If <math>X</math> is set of operators also recall that <math>X\orth</math> denotes the set of dual operators:<br />
: <math>X\orth = \{v\in \mathcal{B}(H) \text{ such that }\forall u\in X, uv \in\bot\}</math>.<br />
<br />
There are a few properties of this duality that we will use without mention in the sequel; let <math>X</math> and <math>Y</math> be sets of operators:<br />
: <math>X\subset X\biorth</math>;<br />
: if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;<br />
: <math>X\orth = X\triorth</math>.<br />
<br />
In particular <math>X\orth</math> is always a type (equal to its biorthogonal). We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
<br />
=== The tensor and the linear application ===<br />
<br />
Given two types <math>A</math> and <math>B</math>, we define their tensor by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by biorthogonal to make sure that we obtain a type. From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
This is an abuse of notation as this operation is more akin to a direct sum than to a tensor product. We will stick to this notation though, because it defines the interpretation of the tensor connective of linear logic.<br />
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we see that we have:<br />
: <math>A\limp B = \{u\in\mathcal{B}(H)\text{ such that } \forall v\in A, \forall w\in B\orth,\, u.(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
=== The identity ===<br />
<br />
The interpretation of the identity is an example of the internalization/externalization procedure. Given a type <math>A</math> we are to find an operator <math>\iota</math> in type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. This is the case since <math>u</math> is in <math>A</math> and <math>v</math> in <math>A\orth</math>.<br />
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
<br />
=== The execution formula, version 1: application ===<br />
<br />
Let <math>A</math> and <math>B</math> be two types and <math>u</math> an operator in <math>A\limp B</math>. By definition this means that given <math>v</math> in <math>A</math> and <math>w</math> in <math>B\orth</math> the operator <math>u.(pvp^* + qwq^*)</math> is nilpotent.<br />
<br />
Let us define <math>u_{11}</math> to <math>u_{22}</math> by externalization as above. If we compute <math>(u.(pvp^* + qwq^*))^n</math> we see that this is a finite sum of operators of the form:<br />
# <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
# <math>p(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
# <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math> or<br />
# <math>p(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math><br />
where each of these monomials has exactly <math>n</math> factors of the form <math>u_{i1}v</math> or <math>u_{i2}w</math>.<br />
<br />
From the nilpotency of <math>u.(pvp^* + qwq^*)</math> we deduce that <math>u_{11}v</math> is nilpotent by considering the particular case where <math>w=0</math>. We also have that <math>q^*(u.(pvp^* + qwq^*))^nq</math> is null for <math>n</math> big enough, which means that monomials of type 1 above are null as soon as their length (the number of factors of the form <math>u_{i1}v</math> or <math>u_{i2}w</math>) is bigger than <math>n</math>.<br />
<br />
This implies that the two following operators are nilpotent:<br />
: <math>u_{11}v</math> and<br />
: <math>\bigl(u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12}\bigr)w</math>.<br />
<br />
Conversely if these two operators are nilpotent then one can show that so is <math>u.(pvp^* + qwq^*)</math>. Moreover we have:<br />
: <math>q^*\sum_n\bigl(u.(pvp^* + qwq^*)\bigr)^nq = \sum_n\bigl((u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12})w\bigr)^n</math>.<br />
<br />
We define the ''application of <math>u</math> to <math>v</math>'' as:<br />
: <math>\mathrm{App}(u, v) = u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}</math>.<br />
Note that this is well defined as soon as <math>u_{11}v</math> is nilpotent.<br />
<br />
We summarize what has just been shown in the following theorem:<br />
<br />
{{Theorem|<br />
Let <math>u</math> be an operator, <math>A</math> and <math>B</math> be two types; the following conditions are equivalent:<br />
* <math>u\in A\limp B</math>;<br />
* for any <math>v\in A</math>, we both have:<br />
:: <math>u_{11}v</math> is nilpotent and<br />
:: <math>\mathrm{App}(u, v)\in B</math>.<br />
}}<br />
<br />
{{Corollary|<br />
Under the hypothesis of the theorem we have:<br />
: <math>A\limp B = \{u\in\mathcal{B}(H) \text{ such that }\forall v\in A: u_{11}v\in\bot\text{ and } \mathrm{App}(u, v)\in B\}</math>.<br />
}}<br />
<br />
As an example, if we compute the application of the interpretation <math>\iota</math> of the identity in type <math>A\limp A</math> to an operator <math>v\in A</math>, we get:<br />
: <math>\mathrm{App}(\iota, v) = \iota_{22} + \iota_{21}v\sum(\iota_{11}v)^k\iota_{12}</math>.<br />
Now recall that <math>\iota = pq^* + qp^*</math> so that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math> and we thus get:<br />
: <math>\mathrm{App}(\iota, v) = v</math><br />
as expected.<br />
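Using the same finite truncation of <math>p</math> and <math>q</math> as before, the execution formula can be sketched and the identity <math>\mathrm{App}(\iota, v) = v</math> checked numerically (illustrative code of ours; the helper `app` simply sums the series, assuming <math>u_{11}v</math> nilpotent):<br />

```python
import numpy as np

N = 4
p = np.zeros((2 * N, N)); q = np.zeros((2 * N, N))
for n in range(N):
    p[2 * n, n] = 1.
    q[2 * n + 1, n] = 1.

def app(u, v, max_k=50):
    """Execution formula App(u, v) = u22 + u21 v sum_k (u11 v)^k u12,
    assuming u11 v nilpotent so that the sum is finite (sketch)."""
    u11, u12 = p.T @ u @ p, p.T @ u @ q
    u21, u22 = q.T @ u @ p, q.T @ u @ q
    s, term = np.zeros((N, N)), np.eye(N)
    for _ in range(max_k):
        s += term
        term = u11 @ v @ term
        if not term.any():
            break
    return u22 + u21 @ v @ s @ u12

iota = p @ q.T + q @ p.T           # interpretation of the identity
v = np.array([[0., 1., 0., 0.],    # an operator playing the role of v in A
              [0., 0., 1., 0.],
              [0., 0., 0., 0.],
              [0., 0., 0., 0.]])
assert np.allclose(app(iota, v), v)  # App(iota, v) = v
```

Since <math>\iota_{11} = 0</math>, the series collapses to its <math>k = 0</math> term, exactly as in the symbolic computation above.<br />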
<br />
=== The tensor rule ===<br />
<br />
Let now <math>A, A', B</math> and <math>B'</math> be types and consider two operators <math>u</math> and <math>u'</math> respectively in <math>A\limp B</math> and <math>A'\limp B'</math>. We define an operator denoted <math>u\tens u'</math> by:<br />
: <math>\begin{align}<br />
u\tens u' &= ppp^*upp^*p^* + qpq^*upp^*p^* + ppp^*uqp^*q^* + qpq^*uqp^*q^*\\<br />
&+ pqp^*u'pq^*p^* + qqq^*u'pq^*p^* + pqp^*u'qq^*q^* + qqq^*u'qq^*q^*<br />
\end{align}</math><br />
<br />
To understand this formula it is convenient to think of <math>u</math> and <math>u'</math> as the internalizations of the matrices:<br />
: <math>U = \begin{pmatrix}u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}<br />
</math> and <math>U' = \begin{pmatrix}u'_{11} & u'_{12}\\<br />
u'_{21} & u'_{22}<br />
\end{pmatrix}</math><br />
so that we may write:<br />
: <math>\begin{align}<br />
u\tens u' &= ppu_{11}p^*p^* + qpu_{21}p^*p^* + ppu_{12}p^*q^* + qpu_{22}p^*q^*\\<br />
&+ pqu'_{11}q^*p^* + qqu'_{21}q^*p^* + pqu'_{12}q^*q^* + qqu'_{22}q^*q^*<br />
\end{align}</math><br />
<br />
Thus the components of <math>u\tens u'</math> are given by:<br />
: <math>(u\tens u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math>.<br />
<br />
and we see that <math>u\tens u'</math> is actually the internalization of the matrix <math>U\tens U'</math> given by:<br />
<br />
: <math><br />
U\tens U' =<br />
\begin{pmatrix}<br />
u_{11} & 0 & u_{12} & 0 \\<br />
0 & u'_{11} & 0 & u'_{12} \\<br />
u_{21} & 0 & u_{22} & 0 \\<br />
0 & u'_{21} & 0 & u'_{22} \\<br />
\end{pmatrix}<br />
</math><br />
<br />
We are now to show that if <math>u</math> and <math>u'</math> are in types <math>A\limp B</math> and <math>A'\limp B'</math>, then <math>u\tens u'</math> is in <math>A\tens A'\limp B\tens B'</math>. For this we consider <math>v</math> and <math>v'</math> respectively in <math>A</math> and <math>A'</math>, so that <math>pvp^* + qv'q^*</math> is in <math>A\tens A'</math>, and we show that <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)\in B\tens B'</math>.<br />
<br />
Since <math>u</math> and <math>u'</math> are in <math>A\limp B</math> and <math>A'\limp B'</math> we have that <math>\mathrm{App}(u, v)</math> and <math>\mathrm{App}(u', v')</math> are respectively in <math>B</math> and <math>B'</math>, thus:<br />
: <math>p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^* \in B\tens B'</math>.<br />
<br />
We know that both <math>u_{11}v</math> and <math>u'_{11}v'</math> are nilpotent. But we have:<br />
: <math>\begin{align}<br />
\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^n<br />
&= \bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^n\\<br />
&= (pu_{11}vp^* + qu'_{11}v'q^*)^n\\<br />
&= p(u_{11}v)^np^* + q(u'_{11}v')^nq^*<br />
\end{align}</math><br />
<br />
Therefore <math>(u\tens u')_{11}(pvp^* + qv'q^*)</math> is nilpotent. So we can compute <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)</math>:<br />
: <math>\begin{align}<br />
&\mathrm{App}(u\tens u', pvp^* + qv'q^*)\\<br />
&= (u\tens u')_{22} + (u\tens u')_{21}(pvp^* + qv'q^*)\sum\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^k(u\tens u')_{12}\\<br />
&= pu_{22}p^* + qu'_{22}q^* + (pu_{21}p^* + qu'_{21}q^*)(pvp^* + qv'q^*)\sum\bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^k(pu_{12}p^* + qu'_{12}q^*)\\<br />
&= p\bigl(u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}\bigr)p^* + q\bigl(u'_{22} + u'_{21}v'\sum(u'_{11}v')^ku'_{12}\bigr)q^*\\<br />
&= p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^*<br />
\end{align}</math><br />
thus lives in <math>B\tens B'</math>.<br />
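The computation above can be replayed numerically in the same finite sketch. Taking <math>u_{11} = u'_{11} = 0</math> makes the execution series reduce to its first term; this is our own illustration (with `w` standing for <math>u'</math> and `v2` for <math>v'</math>), using two levels of truncated <math>p, q</math> since <math>u\tens u'</math> lives one level up:<br />

```python
import numpy as np

def pq(n):
    """Truncated sketch of p(e_k)=e_{2k}, q(e_k)=e_{2k+1} as maps C^n -> C^{2n}."""
    p = np.zeros((2 * n, n)); q = np.zeros((2 * n, n))
    for k in range(n):
        p[2 * k, k] = 1.
        q[2 * k + 1, k] = 1.
    return p, q

N = 3
p, q = pq(N)        # inner level: components act on C^N
P, Q = pq(2 * N)    # outer level: u tensor u' acts on C^{4N}

rng = np.random.default_rng(1)
rand = lambda: rng.standard_normal((N, N))

# choose u11 = u'11 = 0 so the execution sum reduces to its k = 0 term
u12, u21, u22 = rand(), rand(), rand()
w12, w21, w22 = rand(), rand(), rand()   # w stands for u'
v, v2 = rand(), rand()                   # v in A, v2 stands for v' in A'

# components (u (x) u')_ij = p u_ij p* + q u'_ij q*, then internalize
t12 = p @ u12 @ p.T + q @ w12 @ q.T
t21 = p @ u21 @ p.T + q @ w21 @ q.T
t22 = p @ u22 @ p.T + q @ w22 @ q.T
tens = P @ t12 @ Q.T + Q @ t21 @ P.T + Q @ t22 @ Q.T   # t11 = 0

arg = p @ v @ p.T + q @ v2 @ q.T          # p v p* + q v' q*  in  A (x) A'
app_u = u22 + u21 @ v @ u12               # App(u, v)   (u11 = 0)
app_w = w22 + w21 @ v2 @ w12              # App(u', v')
app_t = Q.T @ tens @ Q + (Q.T @ tens @ P) @ arg @ (P.T @ tens @ Q)
assert np.allclose(app_t, p @ app_u @ p.T + q @ app_w @ q.T)
```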
<br />
=== Other monoidal constructions ===<br />
<br />
Let <math>\sigma</math> be the operator:<br />
: <math>\sigma = ppq^*q^* +pqp^*q^* + qpq^*p^* + qqp^*p^*</math>.<br />
One can check that <math>\sigma</math> is the internalization of the operator <math>S</math> on <math>H\oplus H\oplus H\oplus H</math> defined by: <math>S(x_1\oplus x_2\oplus x_3\oplus x_4) = x_4\oplus x_3\oplus x_2\oplus x_1</math>. In particular the components of <math>\sigma</math> are:<br />
: <math>\sigma_{11} = \sigma_{22} = 0</math>;<br />
: <math>\sigma_{12} = \sigma_{21} = pq^* + qp^*</math>.<br />
<br />
Let <math>A</math> and <math>B</math> be types and <math>u</math> and <math>v</math> be operators in <math>A</math> and <math>B</math>. Then <math>pup^* + qvq^*</math> is in <math>A\tens B</math> and as <math>\sigma_{11}.(pup^* + qvq^*) = 0</math> we may compute:<br />
: <math>\begin{align}<br />
\mathrm{App}(\sigma, pup^* + qvq^*) <br />
&= \sigma_{22} + \sigma_{21}(pup^* + qvq^*)\sum(\sigma_{11}(pup^* + qvq^*))^k\sigma_{12}\\<br />
&= (pq^* + qp^*)(pup^* + qvq^*)(pq^* + qp^*)\\<br />
&= pvp^* + quq^*<br />
\end{align}</math><br />
But <math>pvp^* + quq^*\in B\tens A</math>, thus we have shown that:<br />
: <math>\sigma\in (A\tens B) \limp (B\tens A)</math>.<br />
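The symmetry can be checked in the same finite sketch (our own illustration; the outer copies `P`, `Q` stand for the leftmost occurrences of <math>p</math>, <math>q</math> in each monomial of <math>\sigma</math>, which in the article all act on the same <math>H</math>):<br />

```python
import numpy as np

def pq(n):
    p = np.zeros((2 * n, n)); q = np.zeros((2 * n, n))
    for k in range(n):
        p[2 * k, k] = 1.
        q[2 * k + 1, k] = 1.
    return p, q

N = 3
p, q = pq(N)        # inner copies: C^N -> C^{2N}
P, Q = pq(2 * N)    # outer copies: C^{2N} -> C^{4N}

# sigma = ppq*q* + pqp*q* + qpq*p* + qqp*p*  (outer letter written first)
sigma = (P @ p @ q.T @ Q.T + P @ q @ p.T @ Q.T
         + Q @ p @ q.T @ P.T + Q @ q @ p.T @ P.T)

# components as in the text: sigma11 = sigma22 = 0, sigma12 = sigma21 = pq* + qp*
assert not (P.T @ sigma @ P).any() and not (Q.T @ sigma @ Q).any()
assert np.allclose(P.T @ sigma @ Q, p @ q.T + q @ p.T)

rng = np.random.default_rng(2)
u = rng.standard_normal((N, N)); v = rng.standard_normal((N, N))
arg = p @ u @ p.T + q @ v @ q.T                     # p u p* + q v q*

# sigma11 = 0, so App(sigma, arg) = sigma21 @ arg @ sigma12
app = (Q.T @ sigma @ P) @ arg @ (P.T @ sigma @ Q)
assert np.allclose(app, p @ v @ p.T + q @ u @ q.T)  # = p v p* + q u q*
```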
<br />
We can get distributivity by considering the operator:<br />
: <math>\delta = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math><br />
that is similarly shown to be in type <math>A\tens(B\tens C)\limp(A\tens B)\tens C</math> for any types <math>A</math>, <math>B</math> and <math>C</math>.<br />
<br />
We can finally get weak distributivity thanks to the operators:<br />
: <math>\delta_1 = pppp^*q^* + ppqp^*q^*q^* + pqq^*q^*q^* + qpp^*p^*p^* + qqpq^*p^*p^* + qqqq^*p^*</math> and<br />
: <math>\delta_2 = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math>.<br />
<br />
Given three types <math>A</math>, <math>B</math> and <math>C</math> then one can show that:<br />
: <math>\delta_1</math> has type <math>((A\limp B)\tens C)\limp A\limp (B\tens C)</math> and<br />
: <math>\delta_2</math> has type <math>(A\tens(B\limp C))\limp (A\limp B)\limp C</math>.<br />
<br />
= The Geometry of Interaction as an abstract machine =</div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interactionGeometry of interaction2010-04-18T17:01:36Z<p>Laurent Regnier: the star-autonomous structure</p>
<hr />
<div>The ''geometry of interaction'', GoI for short, was defined in the early nineties by Girard as an interpretation of linear logic into operator algebras: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math><ref>to be precise one should say from ''the space interpreting'' <math>A</math> to ''the space interpreting'' <math>B</math></ref>, and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'', which bears some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (the multiplicative and exponential fragment), which is enough to encode lambda-calculus. Since then Girard has proposed several improvements: firstly the extension to the additive connectives, known as ''Geometry of Interaction 3'', and more recently a complete reformulation using von Neumann algebras that allows one to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model, exhibiting its combinatorial nature through a theory of path reduction in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata that interact with each other through their common interface. The execution formula has also rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus, in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories, following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentioned in [[coherent semantics]] under the name ''symmetric reducibility''. First set a general space called the ''proof space'' because this is where the interpretations of proofs will live. In the case of GoI, the proof space is the space of bounded operators on <math>\ell^2</math>. Note that the proof space generally contains many more objects than interpretations of proofs; in the GoI case we will see that interpretations of proofs happen to be a very peculiar kind of partial isometries.<br />
<br />
Second define a duality on this space that will be denoted as <math>u\perp v</math>. For the GoI, two dualities have proved to work, the first one being nilpotency: two operators <math>u</math> and <math>v</math> are dual if <math>uv</math> is nilpotent, that is, if there is a nonnegative integer <math>n</math> such that <math>(uv)^n = 0</math>. We will denote by <math>\bot</math> the set of nilpotent operators so that the duality reads:<br />
: <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
This duality applies to operators and shouldn't be confused with the orthogonality of vectors. To enforce this distinction we will reserve the notation <math>\perp</math> exclusively for the duality of operators and never use it for the orthogonality of vectors.<br />
<br />
Last define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for every operator <math>v</math>, if <math>v\in T\orth</math>, that is if <math>u'v\in\bot</math> for all <math>u'\in T</math>, then <math>uv\in\bot</math>.<br />
<br />
It remains now to interpret logical operations, that is, to associate a type to each formula and an object to each proof, and to show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
We begin with a brief tour of the operations on Hilbert spaces that will be used in the sequel. In this article <math>H</math> will stand for the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> we denote by <math>\langle x,y\rangle</math> their scalar product:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''orthogonal'' if their scalar product is null. This notion is not to be confused with the orthogonality of operators defined above. The ''norm'' of a vector is the square root of the scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical Hilbert basis of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kronecker symbol. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
In this article we call ''operator'' on <math>H</math> a ''continuous'' linear map from <math>H</math> to <math>H</math>. Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.<br />
<br />
The set of (bounded) operators is denoted <math>\mathcal{B}(H)</math>. This is our proof space.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are annihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel:<br />
<br />
: <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
: <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
: <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>0</math> (the projector on the null subspace) or <math>1</math>, that is an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is self-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^* u = u</math>; as a consequence <math>uu^*</math> is a projector the range of which is the range of <math>u</math>. Similarly <math>u^* u</math> is also a projector the range of which is the domain of <math>u</math>. The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math> that is if <math>u^* u = 1</math> we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries, the equation <math>uu^* + vv^* = 1</math> means that the codomains of <math>u</math> and <math>v</math> are orthogonal and that their direct sum is <math>H</math>.<br />
<br />
=== Partial permutations and partial isometries ===<br />
<br />
It turns out that most of the operators needed to interpret logical operations are generated by ''partial permutations'' on the basis, which in particular entails that they are partial isometries.<br />
<br />
More precisely a partial permutation <math>\varphi</math> on <math>\mathbb{N}</math> is a function defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> which is one-to-one onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
: <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
: if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
: the codomain of <math>\varphi\circ\psi</math> is the image of the domain.<br />
<br />
Partial permutations are well known to form a structure of ''inverse monoid'' that we detail now.<br />
<br />
A ''partial identity'' is a partial permutation <math>1_D</math> whose domain and codomain are both equal to a subset <math>D</math> on which <math>1_D</math> is the identity function. Partial identities are idempotent for composition.<br />
<br />
Among partial identities one finds the identity on the empty subset, that is the empty map, that we will denote as <math>0</math>, and the identity on <math>\mathbb{N}</math> that we will denote by <math>1</math>. This latter partial permutation is the neutral element for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and which satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
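This inverse-monoid structure is easy to experiment with. Below is a small illustrative sketch (not from the article; the dict representation and function names are ours) modelling partial permutations as Python dicts and checking the two equations above:<br />

```python
# Sketch: a partial permutation is a dict mapping each element of its
# domain to its image; injectivity makes it a bijection onto its codomain.

def compose(phi, psi):
    """phi o psi: defined on n iff n is in dom(psi) and psi(n) is in dom(phi)."""
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    """phi^{-1}: defined on the codomain of phi."""
    return {m: n for n, m in phi.items()}

def partial_identity(d):
    """1_D: the identity restricted to the subset D."""
    return {n: n for n in d}

phi = {0: 3, 1: 5, 2: 4}   # domain {0, 1, 2}, codomain {3, 4, 5}

# The two equations above:
assert compose(inverse(phi), phi) == partial_identity({0, 1, 2})  # 1_{D_phi}
assert compose(phi, inverse(phi)) == partial_identity({3, 4, 5})  # 1_{C_phi}
# Partial identities are idempotent:
one = partial_identity({2, 4})
assert compose(one, one) == one
```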
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other terms if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. As a particular case if <math>\varphi</math> is <math>1_D</math> the partial identity on <math>D</math> then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.<br />
<br />
If <math>\psi</math> is another partial permutation then we have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
If <math>\varphi</math> is a partial permutation then the adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the projector on the domain of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>.<br />
<br />
and similarly the projector on the codomain of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u_\varphi^* = u_{1_{C_\varphi}}</math>.<br />
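These identities can be checked concretely on a finite truncation of <math>H</math>. Here is a minimal numpy sketch (the truncation size and helper names are ours, not part of the construction):<br />

```python
import numpy as np

N = 8  # truncate H = l^2(N) to span(e_0, ..., e_{N-1})

def u(phi):
    """Matrix of the partial isometry u_phi: e_k -> e_{phi(k)}, 0 elsewhere."""
    m = np.zeros((N, N))
    for k, j in phi.items():
        m[j, k] = 1.0
    return m

def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

phi = {0: 3, 1: 5, 2: 4}
psi = {3: 0, 5: 1, 7: 2}
inv = {j: k for k, j in phi.items()}

assert np.array_equal(u(phi) @ u(psi), u(compose(phi, psi)))      # u_phi u_psi = u_{phi o psi}
assert np.array_equal(u(phi).T, u(inv))                           # u_phi^* = u_{phi^{-1}}
assert np.array_equal(u(phi).T @ u(phi), u({k: k for k in phi}))  # projector on the domain
m = u(phi)
assert np.array_equal(m @ m.T @ m, m)                             # u_phi is a partial isometry
```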
<br />
{{Proposition|<br />
Let <math>u_\varphi</math> and <math>u_\psi</math> be two partial isometries generated by partial permutations. Then we have:<br />
: <math>u_\varphi + u_\psi = 0</math> iff <math>u_\varphi = u_\psi = 0</math>,<br />
that is iff <math>\varphi</math> and <math>\psi</math> are the nowhere defined partial permutation.<br />
}} Indeed suppose <math>u_\varphi + u_\psi = 0</math> then for any <math>n</math> we have <math>u_\varphi(e_n) + u_\psi(e_n) = e_{\varphi(n)} + e_{\psi(n)} = 0</math> which is possible only if <math>\varphi(n)</math> and <math>\psi(n)</math> are undefined.<br />
<br />
=== From operators to matrices: internalization/externalization ===<br />
<br />
It will be convenient to view operators on <math>H</math> as acting on <math>H\oplus H</math>, and conversely. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\rightsquigarrow p(x)+q(y)</math> where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are orthogonal, thus we have <math>p^* q = q^* p = 0</math>. Note that we also have <math>pp^* + qq^* = 1</math>.<br />
<br />
The choice of <math>p</math> and <math>q</math> is actually arbitrary, any two partial isometries with full domain and orthogonal codomains would do the job.<br />
<br />
Let <math>U</math> be an operator on <math>H\oplus H</math>. We can write <math>U</math> as a matrix:<br />
: <math>U = \begin{pmatrix}<br />
u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}</math><br />
where each <math>u_{ij}</math> operates on <math>H</math>.<br />
<br />
Now through the isomorphism <math>H\oplus H\cong H</math> we may transform <math>U</math> into the operator <math>u</math> on <math>H</math> defined by:<br />
<br />
: <math>u = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math>.<br />
<br />
We call <math>u</math> the ''internalization'' of <math>U</math>. Internalization is compatible with composition (functorial so to speak): if <math>V</math> is another operator on <math>H\oplus H</math> then the internalization of the matrix product <math>UV</math> is the product <math>uv</math>.<br />
<br />
Conversely given an operator <math>u</math> on <math>H</math> we may externalize it obtaining an operator <math>U</math> on <math>H\oplus H</math>:<br />
: <math>u_{11} = p^*up</math>;<br />
: <math>u_{12} = p^*uq</math>;<br />
: <math>u_{21} = q^*up</math>;<br />
: <math>u_{22} = q^*uq</math>.<br />
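Both the relations satisfied by <math>p</math> and <math>q</math> and the fact that externalization undoes internalization can be checked on a finite truncation. A numpy sketch (the truncation to <math>2N</math> coordinates is ours; on the truncation <math>p</math> and <math>q</math> only have full domain on the first <math>N</math> coordinates, so the round trip is checked on blocks supported there):<br />

```python
import numpy as np

N = 4
dim = 2 * N
P = np.zeros((dim, dim))
Q = np.zeros((dim, dim))
for n in range(N):
    P[2 * n, n] = 1.0      # p : e_n -> e_{2n}
    Q[2 * n + 1, n] = 1.0  # q : e_n -> e_{2n+1}

top = np.diag([1.0] * N + [0.0] * N)  # projector on the first N coordinates

# p*p = q*q = 1 (restricted to the truncation), p*q = q*p = 0, pp* + qq* = 1:
assert np.array_equal(P.T @ P, top) and np.array_equal(Q.T @ Q, top)
assert not (P.T @ Q).any() and not (Q.T @ P).any()
assert np.array_equal(P @ P.T + Q @ Q.T, np.eye(dim))

def internalize(u11, u12, u21, u22):
    return P @ u11 @ P.T + P @ u12 @ Q.T + Q @ u21 @ P.T + Q @ u22 @ Q.T

def externalize(u):
    return P.T @ u @ P, P.T @ u @ Q, Q.T @ u @ P, Q.T @ u @ Q

# Externalizing an internalization gives the blocks back (for blocks
# supported on the first N coordinates, where p and q have full domain):
rng = np.random.default_rng(0)
blocks = [top @ rng.standard_normal((dim, dim)) @ top for _ in range(4)]
for got, expected in zip(externalize(internalize(*blocks)), blocks):
    assert np.allclose(got, expected)
```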
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are operators we denote by <math>u\perp v</math> the fact that <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators so that <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
If <math>X</math> is a set of operators also recall that <math>X\orth</math> denotes the set of dual operators:<br />
: <math>X\orth = \{v\in \mathcal{B}(H) \text{ such that }\forall u\in X, uv \in\bot\}</math>.<br />
<br />
There are a few properties of this duality that we will use without mention in the sequel; let <math>X</math> and <math>Y</math> be sets of operators:<br />
: <math>X\subset X\biorth</math>;<br />
: if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;<br />
: <math>X\orth = X\triorth</math>.<br />
<br />
In particular <math>X\orth</math> is always a type (equal to its biorthogonal). We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
<br />
=== The tensor and the linear application ===<br />
<br />
Given two types <math>A</math> and <math>B</math>, we define their tensor by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by biorthogonal to make sure that we obtain a type. From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
This is an abuse of notation as this operation is more like a direct sum than a tensor. We will stick to this notation though, because it defines the interpretation of the tensor connective of linear logic.<br />
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we see that we have:<br />
: <math>A\limp B = \{u\in\mathcal{B}(H)\text{ such that } \forall v\in A, \forall w\in B\orth,\, u.(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
=== The identity ===<br />
<br />
The interpretation of the identity is an example of the internalization/externalization procedure. Given a type <math>A</math> we are to find an operator <math>\iota</math> in type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. This is the case since <math>u</math> is in <math>A</math> and <math>v</math> in <math>A\orth</math>.<br />
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
<br />
=== The execution formula, version 1: application ===<br />
<br />
Let <math>A</math> and <math>B</math> be two types and <math>u</math> an operator in <math>A\limp B</math>. By definition this means that given <math>v</math> in <math>A</math> and <math>w</math> in <math>B\orth</math> the operator <math>u.(pvp^* + qwq^*)</math> is nilpotent.<br />
<br />
Let us define <math>u_{11}</math> to <math>u_{22}</math> by externalization as above. If we compute <math>(u.(pvp^* + qwq^*))^n</math> we see that this is a finite sum of operators of the form:<br />
# <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
# <math>p(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
# <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math> or<br />
# <math>p(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math><br />
where each of these monomials has exactly <math>n</math> factors of the form <math>u_{i1}v</math> or <math>u_{i2}w</math>.<br />
<br />
From the nilpotency of <math>u.(pvp^* + qwq^*)</math> we may deduce in particular that <math>u_{11}v</math> is nilpotent too. We also have that <math>q^*(u.(pvp^* + qwq^*))^nq</math> is null for <math>n</math> big enough, which means that monomials of type 1 above are null as soon as their length (the number of factors of the form <math>u_{i1}v</math> or <math>u_{i2}w</math>) is bigger than <math>n</math>.<br />
<br />
This implies that the two following operators are nilpotent:<br />
: <math>u_{11}v</math> and<br />
: <math>\bigl(u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12}\bigr)w</math>.<br />
<br />
Conversely if these two operators are nilpotent then one can show that so is <math>u.(pvp^* + qwq^*)</math>. Moreover we have:<br />
: <math>q^*\sum_n\bigl(u.(pvp^* + qwq^*)\bigr)^nq = \sum_n\bigl((u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12})w\bigr)^n</math>.<br />
<br />
We define the ''application of <math>u</math> to <math>v</math>'' as:<br />
: <math>\mathrm{App}(u, v) = u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}</math>.<br />
Note that this is well defined as soon as <math>u_{11}v</math> is nilpotent.<br />
<br />
We summarize what has just been shown in the following theorem:<br />
<br />
{{Theorem|<br />
Let <math>u</math> be an operator, <math>A</math> and <math>B</math> be two types; the following conditions are equivalent:<br />
* <math>u\in A\limp B</math>;<br />
* for any <math>v\in A</math>, we both have:<br />
:: <math>u_{11}v</math> is nilpotent and<br />
:: <math>\mathrm{App}(u, v)\in B</math>.<br />
}}<br />
<br />
{{Corollary|<br />
Under the hypothesis of the theorem we have:<br />
: <math>A\limp B = \{u\in\mathcal{B}(H) \text{ such that }\forall v\in A: u_{11}v\in\bot\text{ and } \mathrm{App}(u, v)\in B\}</math>.<br />
}}<br />
<br />
As an example if we compute the application of the interpretation of the identity <math>\iota</math> in type <math>A\limp A</math> to the operator <math>v\in A</math> then we have:<br />
: <math>\mathrm{App}(\iota, v) = \iota_{22} + \iota_{21}v\sum(\iota_{11}v)^k\iota_{12}</math>.<br />
Now recall that <math>\iota = pq^* + qp^*</math> so that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math> and we thus get:<br />
: <math>\mathrm{App}(\iota, v) = v</math><br />
as expected.<br />
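The execution formula can also be experimented with numerically. Below is a hedged numpy sketch on a truncation of <math>H</math> (the truncation, the helpers and the choice of nilpotent <math>v</math> are ours); it checks <math>\mathrm{App}(\iota, v) = v</math>:<br />

```python
import numpy as np

N = 4
dim = 2 * N
P = np.zeros((dim, dim))
Q = np.zeros((dim, dim))
for n in range(N):
    P[2 * n, n] = 1.0      # p : e_n -> e_{2n}
    Q[2 * n + 1, n] = 1.0  # q : e_n -> e_{2n+1}

def app(u, v):
    """App(u, v) = u22 + u21 v sum_k (u11 v)^k u12; the sum is finite
    because u11 v is assumed nilpotent (here it dies within dim steps)."""
    u11, u12 = P.T @ u @ P, P.T @ u @ Q
    u21, u22 = Q.T @ u @ P, Q.T @ u @ Q
    s, power = np.zeros((dim, dim)), np.eye(dim)
    for _ in range(dim):
        s += power
        power = power @ u11 @ v
    assert not power.any()  # nilpotency of u11 v
    return u22 + u21 @ v @ s @ u12

iota = P @ Q.T + Q @ P.T  # the interpretation of the identity

top = np.diag([1.0] * N + [0.0] * N)
v = top @ np.triu(np.ones((dim, dim)), 1) @ top  # nilpotent, supported on the truncation

assert np.allclose(app(iota, v), v)  # App(iota, v) = v
```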
<br />
=== The tensor rule ===<br />
<br />
Let now <math>A, A', B</math> and <math>B'</math> be types and consider two operators <math>u</math> and <math>u'</math> respectively in <math>A\limp B</math> and <math>A'\limp B'</math>. We define an operator denoted <math>u\tens u'</math> by:<br />
: <math>\begin{align}<br />
u\tens u' &= ppp^*upp^*p^* + qpq^*upp^*p^* + ppp^*uqp^*q^* + qpq^*uqp^*q^*\\<br />
&+ pqp^*u'pq^*p^* + qqq^*u'pq^*p^* + pqp^*u'qq^*q^* + qqq^*u'qq^*q^*<br />
\end{align}</math><br />
<br />
To understand this formula it is convenient to think of <math>u</math> and <math>u'</math> as the internalizations of the matrices:<br />
: <math>U = \begin{pmatrix}u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}<br />
</math> and <math>U' = \begin{pmatrix}u'_{11} & u'_{12}\\<br />
u'_{21} & u'_{22}<br />
\end{pmatrix}</math><br />
so that we may write:<br />
: <math>\begin{align}<br />
u\tens u' &= ppu_{11}p^*p^* + qpu_{21}p^*p^* + ppu_{12}p^*q^* + qpu_{22}p^*q^*\\<br />
&+ pqu'_{11}q^*p^* + qqu'_{21}q^*p^* + pqu'_{12}q^*q^* + qqu'_{22}q^*q^*<br />
\end{align}</math><br />
<br />
Thus the components of <math>u\tens u'</math> are given by:<br />
: <math>(u\tens u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math>.<br />
<br />
and we see that <math>u\tens u'</math> is actually the internalization of the matrix <math>U\tens U'</math> given by:<br />
<br />
: <math><br />
U\tens U' =<br />
\begin{pmatrix}<br />
u_{11} & 0 & u_{12} & 0 \\<br />
0 & u'_{11} & 0 & u'_{12} \\<br />
u_{21} & 0 & u_{22} & 0 \\<br />
0 & u'_{21} & 0 & u'_{22} \\<br />
\end{pmatrix}<br />
</math><br />
<br />
We are now to show that if <math>u</math> and <math>u'</math> are in types <math>A\limp B</math> and <math>A'\limp B'</math>, then <math>u\tens u'</math> is in <math>A\tens A'\limp B\tens B'</math>. For this we consider <math>v</math> and <math>v'</math> respectively in <math>A</math> and <math>A'</math>, so that <math>pvp^* + qv'q^*</math> is in <math>A\tens A'</math>, and we show that <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)\in B\tens B'</math>.<br />
<br />
Since <math>u</math> and <math>u'</math> are in <math>A\limp B</math> and <math>A'\limp B'</math> we have that <math>\mathrm{App}(u, v)</math> and <math>\mathrm{App}(u', v')</math> are respectively in <math>B</math> and <math>B'</math>, thus:<br />
: <math>p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^* \in B\tens B'</math>.<br />
<br />
We know that both <math>u_{11}v</math> and <math>u'_{11}v'</math> are nilpotent. But we have:<br />
: <math>\begin{align}<br />
\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^n<br />
&= \bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^n\\<br />
&= (pu_{11}vp^* + qu'_{11}v'q^*)^n\\<br />
&= p(u_{11}v)^np^* + q(u'_{11}v')^nq^*<br />
\end{align}</math><br />
<br />
Therefore <math>(u\tens u')_{11}(pvp^* + qv'q^*)</math> is nilpotent. So we can compute <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)</math>:<br />
: <math>\begin{align}<br />
&\mathrm{App}(u\tens u', pvp^* + qv'q^*)\\<br />
&= (u\tens u')_{22} + (u\tens u')_{21}(pvp^* + qv'q^*)\sum\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^k(u\tens u')_{12}\\<br />
&= pu_{22}p^* + qu'_{22}q^* + (pu_{21}p^* + qu'_{21}q^*)(pvp^* + qv'q^*)\sum\bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^k(pu_{12}p^* + qu'_{12}q^*)\\<br />
&= p\bigl(u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}\bigr)p^* + q\bigl(u'_{22} + u'_{21}v'\sum(u'_{11}v')^ku'_{12}\bigr)q^*\\<br />
&= p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^*<br />
\end{align}</math><br />
which thus lives in <math>B\tens B'</math>.<br />
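This computation can be replayed numerically. A numpy sketch on a truncation (all choices below — the truncation, the random blockwise-supported operators, and taking the <math>u_{11}</math> blocks to be <math>0</math> so the nilpotency condition holds trivially — are ours), checking <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*) = p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^*</math>:<br />

```python
import numpy as np

N = 8               # operators act on 2N = 16 truncated coordinates
dim = 2 * N
P = np.zeros((dim, dim))
Q = np.zeros((dim, dim))
for n in range(N):
    P[2 * n, n] = 1.0
    Q[2 * n + 1, n] = 1.0

def internalize(u11, u12, u21, u22):
    return P @ u11 @ P.T + P @ u12 @ Q.T + Q @ u21 @ P.T + Q @ u22 @ Q.T

def externalize(u):
    return P.T @ u @ P, P.T @ u @ Q, Q.T @ u @ P, Q.T @ u @ Q

def tens(u, up):
    """(u tens u')_{ij} = p u_{ij} p* + q u'_{ij} q*, re-internalized."""
    return internalize(*[P @ x @ P.T + Q @ y @ Q.T
                         for x, y in zip(externalize(u), externalize(up))])

def app(u, v):
    u11, u12, u21, u22 = externalize(u)
    s, power = np.zeros((dim, dim)), np.eye(dim)
    for _ in range(dim):
        s += power
        power = power @ u11 @ v
    assert not power.any()          # u11 v must be nilpotent
    return u22 + u21 @ v @ s @ u12

# Blocks supported on the first dim // 4 coordinates, where the truncated
# p and q compose without loss:
k = dim // 4
quarter = np.diag([1.0] * k + [0.0] * (dim - k))
rng = np.random.default_rng(1)
r = lambda: quarter @ rng.standard_normal((dim, dim)) @ quarter
zero = np.zeros((dim, dim))
u = internalize(zero, r(), r(), r())
up = internalize(zero, r(), r(), r())
v = quarter @ np.triu(np.ones((dim, dim)), 1) @ quarter   # nilpotent
vp = quarter @ np.tril(np.ones((dim, dim)), -1) @ quarter  # nilpotent

lhs = app(tens(u, up), P @ v @ P.T + Q @ vp @ Q.T)
rhs = P @ app(u, v) @ P.T + Q @ app(up, vp) @ Q.T
assert np.allclose(lhs, rhs)
```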
<br />
=== Other monoidal constructions ===<br />
<br />
Let <math>\sigma</math> be the operator:<br />
: <math>\sigma = ppq^*q^* +pqp^*q^* + qpq^*p^* + qqp^*p^*</math>.<br />
One can check that <math>\sigma</math> is the internalization of the operator <math>S</math> on <math>H\oplus H\oplus H\oplus H</math> defined by: <math>S(x_1\oplus x_2\oplus x_3\oplus x_4) = x_4\oplus x_3\oplus x_2\oplus x_1</math>. In particular the components of <math>\sigma</math> are:<br />
: <math>\sigma_{11} = \sigma_{22} = 0</math>;<br />
: <math>\sigma_{12} = \sigma_{21} = pq^* + qp^*</math>.<br />
<br />
Let <math>A</math> and <math>B</math> be types and <math>u</math> and <math>v</math> be operators in <math>A</math> and <math>B</math>. Then <math>pup^* + qvq^*</math> is in <math>A\tens B</math> and as <math>\sigma_{11}.(pup^* + qvq^*) = 0</math> we may compute:<br />
: <math>\begin{align}<br />
\mathrm{App}(\sigma, pup^* + qvq^*) <br />
&= \sigma_{22} + \sigma_{21}(pup^* + qvq^*)\sum(\sigma_{11}(pup^* + qvq^*))^k\sigma_{12}\\<br />
&= (pq^* + qp^*)(pup^* + qvq^*)(pq^* + qp^*)\\<br />
&= pvp^* + quq^*<br />
\end{align}</math><br />
But <math>pvp^* + quq^*\in B\tens A</math>, thus we have shown that:<br />
: <math>\sigma\in (A\tens B) \limp (B\tens A)</math>.<br />
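The symmetry can likewise be checked numerically. A numpy sketch (the truncation and the choice of operators supported on its first quarter are ours) verifying <math>\mathrm{App}(\sigma, pup^* + qvq^*) = pvp^* + quq^*</math>:<br />

```python
import numpy as np

N = 8
dim = 2 * N
P = np.zeros((dim, dim))
Q = np.zeros((dim, dim))
for n in range(N):
    P[2 * n, n] = 1.0
    Q[2 * n + 1, n] = 1.0

# sigma = pp q*q* + pq p*q* + qp q*p* + qq p*p*
sigma = (P @ P @ Q.T @ Q.T + P @ Q @ P.T @ Q.T
         + Q @ P @ Q.T @ P.T + Q @ Q @ P.T @ P.T)

def app(u, v):
    u11, u12 = P.T @ u @ P, P.T @ u @ Q
    u21, u22 = Q.T @ u @ P, Q.T @ u @ Q
    s, power = np.zeros((dim, dim)), np.eye(dim)
    for _ in range(dim):
        s += power
        power = power @ u11 @ v
    assert not power.any()  # sigma_11 (pup* + qvq*) = 0, so trivially nilpotent
    return u22 + u21 @ v @ s @ u12

# u and v supported on the first dim // 4 coordinates, where the truncated
# p and q compose without loss:
k = dim // 4
quarter = np.diag([1.0] * k + [0.0] * (dim - k))
rng = np.random.default_rng(2)
u = quarter @ rng.standard_normal((dim, dim)) @ quarter
v = quarter @ rng.standard_normal((dim, dim)) @ quarter

w = P @ u @ P.T + Q @ v @ Q.T                            # an element of A tens B
swapped = app(sigma, w)
assert np.allclose(swapped, P @ v @ P.T + Q @ u @ Q.T)   # lands in B tens A
```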
<br />
We can get distributivity by considering the operator:<br />
: <math>\delta = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math><br />
that is similarly shown to be in type <math>A\tens(B\tens C)\limp(A\tens B)\tens C</math> for any types <math>A</math>, <math>B</math> and <math>C</math>.<br />
<br />
We can finally get weak distributivity thanks to the operators:<br />
: <math>\delta_1 = pppp^*q^* + ppqp^*q^*q^* + pqq^*q^*q^* + qpp^*p^*p^* + qqpq^*p^*p^* + qqqq^*p^*</math> and<br />
: <math>\delta_2 = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math>.<br />
<br />
Given three types <math>A</math>, <math>B</math> and <math>C</math> then one can show that:<br />
: <math>\delta_1</math> has type <math>((A\limp B)\tens C)\limp A\limp (B\tens C)</math> and<br />
: <math>\delta_2</math> has type <math>(A\tens(B\limp C))\limp (A\limp B)\limp C</math>.<br />
<br />
= The Geometry of Interaction as an abstract machine =</div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interactionGeometry of interaction2010-04-18T11:11:12Z<p>Laurent Regnier: Execution formula, continued</p>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operators algebra: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math><ref>to be precise one should say from ''the space interpreting'' <math>A</math> to the space interpreting'' <math>B</math></ref>, and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math> to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'' that bares some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditionnal [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (Multiplicative and Exponential fragment) which is enough to encode lambda-calculus. Since then Girard proposed several improvements: firstly the extension to the additive connectives known as ''Geometry of Interaction 3'' and more recently a complete reformulation using Von Neumann algebras that allows to deal with some aspects of [[Light linear logics|implicit complexity]]<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata that interact one with the other through their common interface. Also the execution formula has rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentionned in [[coherent semantics]] under the name ''symmetric reducibility''. First set a general space called the ''proof space'' because this is where the interpretations of proofs will live. In the case of GoI, the proof space is the space of bounded operators on <math>\ell^2</math>. Note that the proof space generally contains much more objects than interpretations of proofs; in the GoI case we will see that interpretations of proofs happen to be some very peculiar kind of partial isometries.<br />
<br />
Second define a duality on this space that will be denoted as <math>u\perp v</math>. For the GoI, two dualities have proved to work, the first one being nilpotency: two operators <math>u</math> and <math>v</math> are dual if <math>uv</math> is nilpotent, that is, if there is a nonegative integer <math>n</math> such that <math>(uv)^n = 0</math>. We will denote by <math>\bot</math> the set of nilpotent operators so that the duality reads:<br />
: <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
This duality applies to operators and shouldn't be confused with orthogonality<br />
of vectors. . To enforce this we will reserve the notation <math>\perp</math><br />
exclusively for the duality of operators and never use it for othogonality of vectors.<br />
<br />
Last define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for all operator <math>v</math>, if <math>v\in T\orth</math>, that is if <math>u'v\in\bot</math> for all <math>u'\in T</math>, then <math>uv\in\bot</math>.<br />
<br />
It remains now to interpret logical operations, that is associate a type to each formula, an object to each proof and show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
We begin by a brief tour of the operations in Hilbert spaces that will be used in the sequel. In this article <math>H</math> will stand for the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> we denote by <math>\langle x,y\rangle</math> their scalar product:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''othogonal'' if their scalar product is nul. This notion is not to be confused with the orthogonality of operators defined above. The ''norm'' of a vector is the square root of the scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical hilbertian basis of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kroenecker symbol. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
In this article we call ''operator'' on <math>H</math> a ''continuous'' linear map from <math>H</math> to <math>H</math>. Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.<br />
<br />
The set of (bounded) operators is denoted <math>\mathcal{B}(H)</math>. This is our proof space.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are anihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel:<br />
<br />
: <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
: <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
: <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>0</math> (the projector on the null subspace) or <math>1</math>, that is, an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is self-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^* u = u</math>; as a consequence <math>uu^*</math> is a projector the range of which is the range of <math>u</math>. Similarly <math>u^* u</math> is also a projector the range of which is the domain of <math>u</math>. The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math> that is if <math>u^* u = 1</math> we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries, the equation <math>uu^* + vv^* = 1</math> means that the codomains of <math>u</math> and <math>v</math> are orthogonal and that their direct sum is <math>H</math>.<br />
<br />
=== Partial permutations and partial isometries ===<br />
<br />
It turns out that most of the operators needed to interpret logical operations are generated by ''partial permutations'' on the basis, which in particular entails that they are partial isometries.<br />
<br />
More precisely a partial permutation <math>\varphi</math> on <math>\mathbb{N}</math> is a function defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> which is one-to-one onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
: <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
: if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
: the codomain of <math>\varphi\circ\psi</math> is the image of the domain.<br />
<br />
Partial permutations are well known to form an ''inverse monoid'', a structure that we now detail.<br />
<br />
A ''partial identity'' is a partial permutation <math>1_D</math> whose domain and codomain are both equal to a subset <math>D</math> on which <math>1_D</math> is the identity function. Partial identities are idempotent for composition.<br />
<br />
Among the partial identities one finds the identity on the empty subset, that is the empty map, which we will denote by <math>0</math>, and the identity on <math>\mathbb{N}</math>, which we will denote by <math>1</math>. The latter is the neutral element for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and which satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
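The composition and inverse operations above are easy to experiment with. The following is a minimal Python sketch (an illustration, not part of the original presentation; the helper names are ours), modelling a partial permutation as a finite injective dict whose keys form the domain and whose values form the codomain:<br />

```python
# A partial permutation on N is modelled as a finite injective dict:
# keys = domain, values = codomain; everything outside is undefined.

def compose(phi, psi):
    """phi o psi: defined on n iff n is in dom(psi) and psi(n) is in dom(phi)."""
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    """phi is injective, so swapping keys and values is well defined."""
    return {v: k for k, v in phi.items()}

def identity_on(domain):
    """The partial identity 1_D."""
    return {n: n for n in domain}

phi = {0: 2, 1: 0, 3: 1}   # domain {0, 1, 3}, codomain {2, 0, 1}

# inverse(phi) o phi is the partial identity on the domain of phi,
# and phi o inverse(phi) the partial identity on its codomain.
assert compose(inverse(phi), phi) == identity_on(phi.keys())
assert compose(phi, inverse(phi)) == identity_on(phi.values())
```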
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other terms if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. As a particular case if <math>\varphi</math> is <math>1_D</math> the partial identity on <math>D</math> then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.<br />
<br />
If <math>\psi</math> is another partial permutation then we have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
If <math>\varphi</math> is a partial permutation then the adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the projector on the domain of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>.<br />
<br />
and similarly the projector on the codomain of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u_\varphi^* = u_{1_{C_\varphi}}</math>.<br />
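The identities relating <math>u_\varphi</math>, composition, adjoints and projectors can be verified on a finite truncation of <math>\ell^2</math>. Here is a hedged Python sketch (the truncation dimension and all helper names are our own choices):<br />

```python
# Finite-dimensional sketch: truncate l^2(N) to C^N and realise u_phi as an
# N x N 0/1 matrix.  Entries are real, so the adjoint is just the transpose.
N = 6

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def adj(a):
    return [list(col) for col in zip(*a)]

def u(phi):
    """u_phi(e_n) = e_{phi(n)} if phi(n) is defined, 0 otherwise."""
    return [[1 if phi.get(j) == i else 0 for j in range(N)] for i in range(N)]

def compose(f, g):
    return {n: f[g[n]] for n in g if g[n] in f}

def inverse(f):
    return {v: k for k, v in f.items()}

phi = {0: 2, 1: 0, 3: 1}
psi = {2: 3, 0: 5, 4: 4}

assert matmul(u(phi), u(psi)) == u(compose(phi, psi))    # u_phi u_psi = u_{phi o psi}
assert adj(u(phi)) == u(inverse(phi))                    # u_phi* = u_{phi^-1}
assert matmul(adj(u(phi)), u(phi)) == u({n: n for n in phi})           # projector on the domain
assert matmul(u(phi), adj(u(phi))) == u({v: v for v in phi.values()})  # projector on the codomain
```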
<br />
{{Proposition|<br />
Let <math>u_\varphi</math> and <math>u_\psi</math> be two partial isometries generated by partial permutations. Then we have:<br />
: <math>u_\varphi + u_\psi = 0</math> iff <math>u_\varphi = u_\psi = 0</math>,<br />
that is iff <math>\varphi</math> and <math>\psi</math> are the nowhere defined partial permutation.<br />
}} Indeed suppose <math>u_\varphi + u_\psi = 0</math> then for any <math>n</math> we have <math>u_\varphi(e_n) + u_\psi(e_n) = e_{\varphi(n)} + e_{\psi(n)} = 0</math> which is possible only if <math>\varphi(n)</math> and <math>\psi(n)</math> are undefined.<br />
<br />
=== From operators to matrices: internalization/externalization ===<br />
<br />
It will be convenient to view operators on <math>H</math> as acting on <math>H\oplus H</math>, and conversely. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\rightsquigarrow p(x)+q(y)</math>, where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are orthogonal, thus we have <math>p^* q = q^* p = 0</math>. Note that we also have <math>pp^* + qq^* = 1</math>.<br />
<br />
The choice of <math>p</math> and <math>q</math> is actually arbitrary, any two partial isometries with full domain and orthogonal codomains would do the job.<br />
<br />
Let <math>U</math> be an operator on <math>H\oplus H</math>. We can write <math>U</math> as a matrix:<br />
: <math>U = \begin{pmatrix}<br />
u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}</math><br />
where each <math>u_{ij}</math> operates on <math>H</math>.<br />
<br />
Now through the isomorphism <math>H\oplus H\cong H</math> we may transform <math>U</math> into the operator <math>u</math> on <math>H</math> defined by:<br />
<br />
: <math>u = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math>.<br />
<br />
We call <math>u</math> the ''internalization'' of <math>U</math>. Internalization is compatible with composition (functorial, so to speak): if <math>V</math> is another operator on <math>H\oplus H</math> then the internalization of the matrix product <math>UV</math> is the product <math>uv</math>.<br />
<br />
Conversely given an operator <math>u</math> on <math>H</math> we may externalize it obtaining an operator <math>U</math> on <math>H\oplus H</math>:<br />
: <math>u_{11} = p^*up</math>;<br />
: <math>u_{12} = p^*uq</math>;<br />
: <math>u_{21} = q^*up</math>;<br />
: <math>u_{22} = q^*uq</math>.<br />
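Internalization and externalization can be checked numerically. In the sketch below (our own illustration; in finite dimension <math>p</math> and <math>q</math> become rectangular matrices from <math>\mathbb{C}^N</math> to <math>\mathbb{C}^{2N}</math>), externalizing an internalization recovers the four blocks, and internalization is functorial:<br />

```python
# p e_n = e_{2n} and q e_n = e_{2n+1}, truncated: p and q are 2N x N matrices.
N = 3

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def adj(a):
    return [list(col) for col in zip(*a)]

def madd(*ms):
    return [[sum(m[i][j] for m in ms) for j in range(len(ms[0][0]))]
            for i in range(len(ms[0]))]

p = [[1 if i == 2 * j else 0 for j in range(N)] for i in range(2 * N)]
q = [[1 if i == 2 * j + 1 else 0 for j in range(N)] for i in range(2 * N)]

def internalize(u11, u12, u21, u22):
    """u = p u11 p* + p u12 q* + q u21 p* + q u22 q*."""
    return madd(matmul(matmul(p, u11), adj(p)), matmul(matmul(p, u12), adj(q)),
                matmul(matmul(q, u21), adj(p)), matmul(matmul(q, u22), adj(q)))

def externalize(u):
    """Recover the four blocks u_ij = x* u y for x, y in {p, q}."""
    return (matmul(adj(p), matmul(u, p)), matmul(adj(p), matmul(u, q)),
            matmul(adj(q), matmul(u, p)), matmul(adj(q), matmul(u, q)))

def blockmul(a, b):
    """Product of two 2x2 block matrices."""
    a11, a12, a21, a22 = a
    b11, b12, b21, b22 = b
    return (madd(matmul(a11, b11), matmul(a12, b21)),
            madd(matmul(a11, b12), matmul(a12, b22)),
            madd(matmul(a21, b11), matmul(a22, b21)),
            madd(matmul(a21, b12), matmul(a22, b22)))

U = ([[1, 2, 0], [0, 1, 0], [3, 0, 1]], [[0, 1, 0], [1, 0, 0], [0, 0, 2]],
     [[2, 0, 0], [0, 0, 1], [1, 1, 0]], [[1, 0, 1], [0, 2, 0], [0, 0, 3]])
V = ([[0, 1, 0], [0, 0, 1], [1, 0, 0]], [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
     [[0, 0, 0], [1, 0, 0], [0, 1, 0]], [[2, 0, 0], [0, 0, 0], [1, 0, 1]])

assert externalize(internalize(*U)) == U    # round trip recovers the blocks
assert matmul(internalize(*U), internalize(*V)) == internalize(*blockmul(U, V))  # functoriality
```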
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are operators we denote by <math>u\perp v</math> the fact that <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators so that <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
If <math>X</math> is a set of operators, recall also that <math>X\orth</math> denotes the set of dual operators:<br />
: <math>X\orth = \{v\in \mathcal{B}(H) \text{ such that }\forall u\in X, uv \in\bot\}</math>.<br />
<br />
There are a few properties of this duality that we will use without mention in the sequel; let <math>X</math> and <math>Y</math> be sets of operators:<br />
: <math>X\subset X\biorth</math>;<br />
: if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;<br />
: <math>X\orth = X\triorth</math>.<br />
<br />
In particular <math>X\orth</math> is always a type (equal to its biorthogonal). We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
<br />
=== The tensor and the linear application ===<br />
<br />
Given two types <math>A</math> and <math>B</math>, we define their tensor by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by biorthogonal to make sure that we obtain a type. From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
This is an abuse of notation as this operation is more like a direct sum than a tensor. We will nevertheless stick to this notation because it defines the interpretation of the tensor connective of linear logic.<br />
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we see that we have:<br />
: <math>A\limp B = \{u\in\mathcal{B}(H)\text{ such that } \forall v\in A, \forall w\in B\orth,\, u.(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
=== The identity ===<br />
<br />
The interpretation of the identity is an example of the internalization/externalization procedure. Given a type <math>A</math> we are to find an operator <math>\iota</math> in type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. This is the case since <math>u</math> is in <math>A</math> and <math>v</math> in <math>A\orth</math>.<br />
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
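The computation above can be replayed on a finite truncation. The sketch below (our illustration, with <math>p</math> and <math>q</math> realised as rectangular 0/1 matrices) checks that the externalization of <math>\iota</math> is the antidiagonal matrix and that <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>:<br />

```python
# iota = p q* + q p* on a truncation where p, q : C^N -> C^2N.
N = 2

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def adj(a):
    return [list(col) for col in zip(*a)]

def madd(*ms):
    return [[sum(m[i][j] for m in ms) for j in range(len(ms[0][0]))]
            for i in range(len(ms[0]))]

p = [[1 if i == 2 * j else 0 for j in range(N)] for i in range(2 * N)]
q = [[1 if i == 2 * j + 1 else 0 for j in range(N)] for i in range(2 * N)]

iota = madd(matmul(p, adj(q)), matmul(q, adj(p)))   # p q* + q p*

u = [[1, 2], [3, 4]]
v = [[0, 1], [1, 1]]
m = matmul(iota, madd(matmul(matmul(p, u), adj(p)), matmul(matmul(q, v), adj(q))))

# (iota (p u p* + q v q*))^2 = q (uv) q* + p (vu) p*, as computed in the text.
assert matmul(m, m) == madd(matmul(matmul(q, matmul(u, v)), adj(q)),
                            matmul(matmul(p, matmul(v, u)), adj(p)))

# Externalizing iota gives the antidiagonal matrix: iota_12 = iota_21 = 1.
assert matmul(adj(p), matmul(iota, q)) == [[1, 0], [0, 1]]
assert matmul(adj(q), matmul(iota, p)) == [[1, 0], [0, 1]]
assert matmul(adj(p), matmul(iota, p)) == [[0, 0], [0, 0]]
```

In particular, when <math>uv</math> is nilpotent so is <math>vu</math>, hence both summands of the square are nilpotent, matching the argument above.<br />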
<br />
=== Interpreting the cut rule: the execution formula ===<br />
<br />
Let <math>A</math> and <math>B</math> be two types and <math>u</math> an operator in <math>A\limp B</math>. By definition this means that given <math>v</math> in <math>A</math> and <math>w</math> in <math>B\orth</math> the operator <math>u.(pvp^* + qwq^*)</math> is nilpotent.<br />
<br />
Let us define <math>u_{11}</math> to <math>u_{22}</math> by externalization as above. If we compute <math>(u.(pvp^* + qwq^*))^n</math> we see that this is a finite sum of operators of the form:<br />
# <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
# <math>p(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
# <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math> or<br />
# <math>p(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math><br />
where each of these monomials has exactly <math>n</math> factors of the form <math>u_{i1}v</math> or <math>u_{i2}w</math>.<br />
<br />
From the nilpotency of <math>u.(pvp^* + qwq^*)</math> we may deduce in particular that <math>u_{11}v</math> is nilpotent too. We also have that <math>q^*(u.(pvp^* + qwq^*))^nq</math> is null for <math>n</math> big enough, which means that monomials of type 1 above are null as soon as their length (the number of factors of the form <math>u_{i1}v</math> or <math>u_{i2}w</math>) is bigger than <math>n</math>.<br />
<br />
This implies that the two following operators are nilpotent:<br />
: <math>u_{11}v</math> and<br />
: <math>\bigl(u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12}\bigr)w</math>.<br />
<br />
Conversely if these two operators are nilpotent then one can show that so is <math>u.(pvp^* + qwq^*)</math>. Moreover we have:<br />
: <math>q^*\sum_n\bigl(u.(pvp^* + qwq^*)\bigr)^nq = \sum_n\bigl((u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12})w\bigr)^n</math>.<br />
<br />
We define the ''execution of <math>u:A\limp B</math> against <math>v:A</math>'' as:<br />
: <math>\mathrm{App}(u, v) = u_{22} + u_{21}v\sum_k(u_{11}v)^ku_{12}</math>.<br />
<br />
We summarize what we've just shown in the following theorem:<br />
<br />
{{Theorem|<br />
Let <math>u</math> be an operator, <math>A</math> and <math>B</math> be two types; the following conditions are equivalent:<br />
: <math>u\in A\limp B</math>;<br />
: for any <math>v\in A</math>, we both have:<br />
:: <math>u_{11}v = p^*upv</math> is nilpotent and<br />
:: <math>\mathrm{App}(u, v)\in B</math>.<br />
<br />
Furthermore if <math>v</math> and <math>w</math> are respectively in <math>A</math> and <math>B\orth</math> then<br />
: <math>q^*\sum_n\bigl(u.(pvp^* + qwq^*)\bigr)^nq = \sum_n\bigl(\mathrm{App}(u, v).w\bigr)^n</math>.<br />
}}<br />
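The theorem can be tested on a small example. In the Python sketch below (our own finite truncation; the blocks, <math>v</math> and <math>w</math> are chosen so that all the required nilpotencies hold), the geometric sums are finite and both sides of the last equation can be compared exactly:<br />

```python
# Execution formula on a truncation: H ~ C^N, H (+) H ~ C^2N,
# with p e_n = e_{2n} and q e_n = e_{2n+1}.
N = 2

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def adj(a):
    return [list(col) for col in zip(*a)]

def madd(*ms):
    return [[sum(m[i][j] for m in ms) for j in range(len(ms[0][0]))]
            for i in range(len(ms[0]))]

def ident(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def star(m):
    """Sum_{n>=0} m^n for a nilpotent matrix m (the sum is finite)."""
    acc, power = ident(len(m)), ident(len(m))
    for _ in range(len(m)):            # a nilpotent d x d matrix satisfies m^d = 0
        power = matmul(power, m)
        acc = madd(acc, power)
    assert all(x == 0 for row in power for x in row), "m is not nilpotent"
    return acc

p = [[1 if i == 2 * j else 0 for j in range(N)] for i in range(2 * N)]
q = [[1 if i == 2 * j + 1 else 0 for j in range(N)] for i in range(2 * N)]

u11, u12, u21, u22 = [[1, 0], [0, 0]], ident(N), ident(N), [[0, 0], [1, 0]]
u = madd(matmul(matmul(p, u11), adj(p)), matmul(matmul(p, u12), adj(q)),
         matmul(matmul(q, u21), adj(p)), matmul(matmul(q, u22), adj(q)))

v = [[0, 1], [0, 0]]    # plays the role of an element of A
w = [[1, 0], [0, 0]]    # plays the role of an element of B orth

# App(u, v) = u22 + u21 v (sum_k (u11 v)^k) u12
App = madd(u22, matmul(matmul(u21, v), matmul(star(matmul(u11, v)), u12)))

X = madd(matmul(matmul(p, v), adj(p)), matmul(matmul(q, w), adj(q)))  # p v p* + q w q*
lhs = matmul(adj(q), matmul(star(matmul(u, X)), q))
rhs = star(matmul(App, w))
assert lhs == rhs   # q* sum_n (u (p v p* + q w q*))^n q = sum_n (App(u, v) w)^n
```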
<br />
=== Interpreting the tensor rule ===<br />
<br />
Now let <math>A, A', B</math> and <math>B'</math> be types and consider two operators <math>u</math> and <math>u'</math> respectively in <math>A\limp B</math> and <math>A'\limp B'</math>. We define an operator denoted <math>u\tens u'</math> by:<br />
: <math>\begin{align}<br />
u\tens u' &= ppp^*upp^*p^* + qpq^*upp^*p^* + ppp^*uqp^*q^* + qpq^*uqp^*q^*\\<br />
&+ pqp^*u'pq^*p^* + qqq^*u'pq^*p^* + pqp^*u'qq^*q^* + qqq^*u'qq^*q^*<br />
\end{align}</math><br />
<br />
To understand this formula it is convenient to think of <math>u</math> and <math>u'</math> as the internalizations of the matrices:<br />
: <math>U = \begin{pmatrix}u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}<br />
</math> and <math>U' = \begin{pmatrix}u'_{11} & u'_{12}\\<br />
u'_{21} & u'_{22}<br />
\end{pmatrix}</math><br />
where the <math>u_{ij}</math>'s and the <math>u'_{ij}</math>'s are defined by the externalization formulas above, e.g. <math>u_{11} = p^*up</math>.<br />
<br />
Then <math>u\tens u'</math> is actually the internalization of the matrix <math>U\tens U'</math> given by:<br />
<br />
: <math><br />
U\tens U' =<br />
\begin{pmatrix}<br />
u_{11} & 0 & u_{12} & 0 \\<br />
0 & u'_{11} & 0 & u'_{12} \\<br />
u_{21} & 0 & u_{22} & 0 \\<br />
0 & u'_{21} & 0 & u'_{22} \\<br />
\end{pmatrix}<br />
</math><br />
<br />
It remains to show that, given that <math>u</math> and <math>u'</math> are in the types <math>A\limp B</math> and <math>A'\limp B'</math>, the operator <math>u\tens u'</math> is in <math>A\tens A'\limp B\tens B'</math>. Let <math>v</math> and <math>v'</math> be respectively in <math>A</math> and <math>A'</math>. Then <math>\mathrm{App}(u, v)</math> and <math>\mathrm{App}(u', v')</math> are respectively in <math>B</math> and <math>B'</math>, thus we have:<br />
: <math>p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^* \in B\tens B'</math>.<br />
<br />
If <math>w\in (B\tens B')\orth</math> we thus get that <math>\bigl(p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^*\bigr).w</math> is nilpotent. This entails that <math>u\tens u' . (p(pvp^* + qv'q^*)p^* + qwq^*)</math> is in turn nilpotent, showing that <math>u\tens u'\in A\tens A'\limp B\tens B'</math>.<br />
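As a sanity check, the claim that <math>u\tens u'</math> internalizes the <math>4\times 4</math> matrix <math>U\tens U'</math> can be verified in a finite truncation. In the sketch below (our illustration; the two occurrences of <math>p</math> and <math>q</math> in composites like <math>pq</math> are realised at different dimensions), the four composite embeddings play the roles of <math>pp</math>, <math>pq</math>, <math>qp</math> and <math>qq</math>:<br />

```python
# H ~ C^N; first-stage embeddings C^N -> C^2N, second stage C^2N -> C^4N.
N = 2

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def adj(a):
    return [list(col) for col in zip(*a)]

def madd(*ms):
    return [[sum(m[i][j] for m in ms) for j in range(len(ms[0][0]))]
            for i in range(len(ms[0]))]

def emb(n, offset):
    """The 2n x n matrix sending e_j to e_{2j + offset} (offset 0 for p, 1 for q)."""
    return [[1 if i == 2 * j + offset else 0 for j in range(n)] for i in range(2 * n)]

p1, q1 = emb(N, 0), emb(N, 1)
p2, q2 = emb(2 * N, 0), emb(2 * N, 1)
# Composite embeddings pp, pq, qp, qq : C^N -> C^4N, one per matrix row/column.
r = [matmul(p2, p1), matmul(p2, q1), matmul(q2, p1), matmul(q2, q1)]

u11, u12, u21, u22 = [[0, 1], [1, 0]], [[1, 0], [0, 0]], [[0, 0], [0, 1]], [[1, 1], [0, 1]]
v11, v12, v21, v22 = [[1, 0], [0, 1]], [[0, 1], [0, 0]], [[0, 0], [1, 0]], [[0, 0], [0, 2]]

# The eight-term formula for u tens u', written with the composite embeddings.
tens = madd(matmul(matmul(r[0], u11), adj(r[0])), matmul(matmul(r[2], u21), adj(r[0])),
            matmul(matmul(r[0], u12), adj(r[2])), matmul(matmul(r[2], u22), adj(r[2])),
            matmul(matmul(r[1], v11), adj(r[1])), matmul(matmul(r[3], v21), adj(r[1])),
            matmul(matmul(r[1], v12), adj(r[3])), matmul(matmul(r[3], v22), adj(r[3])))

# The 4 x 4 block matrix U tens U' and its internalization.
zero = [[0] * N for _ in range(N)]
M = [[u11, zero, u12, zero],
     [zero, v11, zero, v12],
     [u21, zero, u22, zero],
     [zero, v21, zero, v22]]
internal = madd(*[matmul(matmul(r[i], M[i][j]), adj(r[j]))
                  for i in range(4) for j in range(4)])
assert tens == internal
```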
<br />
= The Geometry of Interaction as an abstract machine =</div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interactionGeometry of interaction2010-04-13T22:18:08Z<p>Laurent Regnier: style</p>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operator algebras: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math><ref>to be precise one should say from ''the space interpreting'' <math>A</math> to ''the space interpreting'' <math>B</math></ref>, and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'' that bears some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (the multiplicative and exponential fragment), which is enough to encode lambda-calculus. Since then Girard has proposed several improvements: firstly the extension to the additive connectives, known as ''Geometry of Interaction 3'', and more recently a complete reformulation using von Neumann algebras that makes it possible to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model, exhibiting its combinatorial nature through a theory of path reduction in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata that interact with each other through their common interface. The execution formula has also rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories, following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of the GoI by Girard follows a general pattern already mentioned in [[coherent semantics]] under the name ''symmetric reducibility''. First, set a general space, called the ''proof space'' because this is where the interpretations of proofs will live. In the case of the GoI, the proof space is the space of bounded operators on <math>\ell^2</math>. Note that the proof space generally contains many more objects than interpretations of proofs; in the GoI case we will see that interpretations of proofs happen to be a very peculiar kind of partial isometry.<br />
<br />
Second, define a duality on this space, denoted <math>u\perp v</math>. For the GoI, two dualities have proved to work, the first one being nilpotency: two operators <math>u</math> and <math>v</math> are dual if <math>uv</math> is nilpotent, that is, if there is a nonnegative integer <math>n</math> such that <math>(uv)^n = 0</math>. We will denote by <math>\bot</math> the set of nilpotent operators, so that the duality reads:<br />
: <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
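For instance (a hedged finite-dimensional illustration of ours), nilpotency of a product is easy to test on small matrices, and the duality is symmetric because <math>uv</math> is nilpotent iff <math>vu</math> is:<br />

```python
# Nilpotency duality on small matrices: u perp v iff uv is nilpotent.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def is_nilpotent(m):
    """A d x d matrix is nilpotent iff its d-th power vanishes."""
    power = [row[:] for row in m]
    for _ in range(len(m) - 1):
        power = matmul(power, m)
    return all(x == 0 for row in power for x in row)

u = [[0, 1], [0, 0]]
v = [[0, 0], [1, 0]]

assert is_nilpotent(matmul(u, u))        # uu = 0, so u is dual to itself
assert not is_nilpotent(matmul(u, v))    # uv is a projection, so u and v are not dual
assert is_nilpotent(matmul(u, v)) == is_nilpotent(matmul(v, u))   # duality is symmetric
```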
<br />
This duality applies to operators and should not be confused with the orthogonality of vectors. To enforce the distinction we will reserve the notation <math>\perp</math> exclusively for the duality of operators and never use it for the orthogonality of vectors.<br />
<br />
Last, define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for every operator <math>v\in T\orth</math>, that is every <math>v</math> such that <math>u'v\in\bot</math> for all <math>u'\in T</math>, we have <math>uv\in\bot</math>.<br />
<br />
It remains now to interpret logical operations, that is associate a type to each formula, an object to each proof and show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
We begin by a brief tour of the operations in Hilbert spaces that will be used in the sequel. In this article <math>H</math> will stand for the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> we denote by <math>\langle x,y\rangle</math> their scalar product:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''othogonal'' if their scalar product is nul. This notion is not to be confused with the orthogonality of operators defined above. The ''norm'' of a vector is the square root of the scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical hilbertian basis of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kroenecker symbol. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
In this article we call ''operator'' on <math>H</math> a ''continuous'' linear map from <math>H</math> to <math>H</math>. Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.<br />
<br />
The set of (bounded) operators is denoted <math>\mathcal{B}(H)</math>. This is our proof space.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are anihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel:<br />
<br />
: <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
: <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
: <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>0</math> (the projector<br />
on the null subspace) or <math>1</math>, that is an operator <math>p</math><br />
such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is auto-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^* u = u</math>; as a consequence <math>uu^*</math> is a projector the range of which is the range of <math>u</math>. Similarly <math>u^* u</math> is also a projector the range of which is the domain of <math>u</math>. The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math> that is if <math>u^* u = 1</math> we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries, the equation <math>uu^* + vv^* = 1</math> means that the codomains of <math>u</math> and <math>v</math> are orthogonal and that their direct sum is <math>H</math>.<br />
<br />
=== Partial permutations and partial isometries ===<br />
<br />
It turns out that most of the operators needed to interpret logical operations are generated by ''partial permutations'' on the basis, which in particular entails that they are partial isometries.<br />
<br />
More precisely a partial permutation <math>\varphi</math> on <math>\mathbb{N}</math> is a function defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> which is one-to-one onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
: <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
: if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
: the codomain of <math>\varphi\circ\psi</math> is the image of the domain.<br />
<br />
Partial permutations are well known to form a structure of ''inverse monoid'' that we detail now.<br />
<br />
A ''partial identitie'' is a partial permutation <math>1_D</math> whose domain and codomain are both equal to a subset <math>D</math> on which <math>1_D</math> is the identity function. Partial identities are idempotent for composition.<br />
<br />
Among partial identities one finds the identity on the empty subset, that is the empty map, that we will denote as <math>0</math> and the identity on <math>\mathbb{N}</math> that we will denote by <math>1</math>. This latter permutation is the neutral for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and who satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other terms if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spaned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spaned by <math>(e_n)_{n\in C_\varphi}</math>. As a particular case if <math>\varphi</math> is <math>1_D</math> the partial identity on <math>D</math> then <math>u_\varphi</math> is the projector on the subspace spaned by <math>(e_n)_{n\in D}</math>.<br />
<br />
If <math>\psi</math> is another partial permutation then we have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
If <math>\varphi</math> is a partial permutation then the adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the projector on the domain of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>.<br />
<br />
and similarly the projector on the codomain of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u_\varphi^* = u_{1_{C_\varphi}}</math>.<br />
<br />
{{Proposition|<br />
Let <math>u_\varphi</math> and <math>u_\psi</math> be two partial isometries generated by partial permutations. Then we have:<br />
: <math>u_\varphi + u_\psi = 0</math> iff <math>u_\varphi = u_\psi = 0</math>,<br />
that is iff <math>\varphi</math> and <math>\psi</math> are the nowhere defined partial permutation.<br />
}} Indeed suppose <math>u_\varphi + u_\psi = 0</math> then for any <math>n</math> we have <math>u_\varphi(e_n) + u_\psi(e_n) = e_{\varphi(n)} + e_{\psi(n)} = 0</math> which is possible only if <math>\varphi(n)</math> and <math>\psi(n)</math> are undefined.<br />
<br />
=== From operators to matrices: internalization/externalization ===<br />
<br />
It will be convenient to view operators on <math>H</math> as acting on <math>H\oplus H</math>, and conversely. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\rightsquigarrow p(x)+q(y)</math> where <math>p:H\mapsto H</math> and <math>q:H\mapsto H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are orthogonal, thus we have <math>p^* q = q^* p = 0</math>. Note that we also have <math>pp^* + qq^* = 1</math>.<br />
<br />
The choice of <math>p</math> and <math>q</math> is actually arbitrary, any two partial isometries with full domain and orthogonal codomains would do the job.<br />
<br />
Let <math>U</math> be an operator on <math>H\oplus H</math>. We can write <math>U</math> as a matrix:<br />
: <math>U = \begin{pmatrix}<br />
u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}</math><br />
where each <math>u_{ij}</math> operates on <math>H</math>.<br />
<br />
Now through the isomorphism <math>H\oplus H\cong H</math> we may transform <math>U</math> into the operator <math>u</math> on <math>H</math> defined by:<br />
<br />
: <math>u = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math>.<br />
<br />
We call <math>u</math> the ''internalization'' of <math>U</math>. Internalization is compatible with composition (functorial so to speak): if <math>V</math> is another operator on <math>H\oplus</math> then the internalization of the matrix product <math>UV</math> is the product <math>uv</math>.<br />
<br />
Conversely given an operator <math>u</math> on <math>H</math> we may externalize it obtaining an operator <math>U</math> on <math>H\oplus H</math>:<br />
: <math>u_{11} = p^*up</math>;<br />
: <math>u_{12} = p^*uq</math>;<br />
: <math>u_{21} = q^*up</math>;<br />
: <math>u_{22} = q^*uq</math>.<br />
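Internalization and externalization are inverse translations, and internalization sends matrix product to operator product. A numpy sketch in the same finite stand-in as above (<math>H</math> as C^N, <math>H\oplus H</math> as C^{2N}; an illustration of the formulas, not the infinite-dimensional construction):

```python
import numpy as np

N = 3
p = np.zeros((2 * N, N)); q = np.zeros((2 * N, N))
for n in range(N):
    p[2 * n, n] = 1.0; q[2 * n + 1, n] = 1.0

rng = np.random.default_rng(0)
u11, u12, u21, u22 = (rng.standard_normal((N, N)) for _ in range(4))

# internalization: fold the four blocks of U into a single operator u
u = p @ u11 @ p.T + p @ u12 @ q.T + q @ u21 @ p.T + q @ u22 @ q.T

# externalization recovers each block exactly (p*q = 0 kills cross terms)
round_trip = (np.allclose(p.T @ u @ p, u11) and np.allclose(p.T @ u @ q, u12)
              and np.allclose(q.T @ u @ p, u21) and np.allclose(q.T @ u @ q, u22))

# functoriality: the (1,1) block of the matrix product UV is u11 v11 + u12 v21,
# and it is recovered by externalizing the operator product uv
v11, v12, v21, v22 = (rng.standard_normal((N, N)) for _ in range(4))
v = p @ v11 @ p.T + p @ v12 @ q.T + q @ v21 @ p.T + q @ v22 @ q.T
functorial = np.allclose(p.T @ (u @ v) @ p, u11 @ v11 + u12 @ v21)
```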
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are operators we denote by <math>u\perp v</math> the fact that <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators so that <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
If <math>X</math> is a set of operators, recall also that <math>X\orth</math> denotes the set of dual operators:<br />
: <math>X\orth = \{v\in \mathcal{B}(H) \text{ such that }\forall u\in X, uv \in\bot\}</math>.<br />
<br />
There are a few properties of this duality that we will use without mention in the sequel; let <math>X</math> and <math>Y</math> be sets of operators:<br />
: <math>X\subset X\biorth</math>;<br />
: if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;<br />
: <math>X\orth = X\triorth</math>.<br />
<br />
In particular <math>X\orth</math> is always a type (equal to its biorthogonal). We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
<br />
=== The tensor and the linear application ===<br />
<br />
Given two types <math>A</math> and <math>B</math>, we define their tensor by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by biorthogonal to make sure that we obtain a type. From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
This is an abuse of notation, as the operation is closer to a direct sum than to a tensor. We will stick to this notation though, because it defines the interpretation of the tensor connective of linear logic.<br />
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we see that we have:<br />
: <math>A\limp B = \{u\in\mathcal{B}(H)\text{ such that } \forall v\in A, \forall w\in B\orth,\, u.(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
=== The identity ===<br />
<br />
The interpretation of the identity is an example of the internalization/externalization procedure. Given a type <math>A</math> we are to find an operator <math>\iota</math> in type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. This is the case since <math>u</math> is in <math>A</math> and <math>v</math> is in <math>A\orth</math>.<br />
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
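The computation above can be replayed numerically in the same finite stand-in (all variable names ours; <math>u</math> and <math>v</math> are arbitrary test matrices standing for elements of <math>A</math> and <math>A\orth</math>):

```python
import numpy as np

N = 3
p = np.zeros((2 * N, N)); q = np.zeros((2 * N, N))
for n in range(N):
    p[2 * n, n] = 1.0; q[2 * n + 1, n] = 1.0

iota = p @ q.T + q @ p.T          # swaps the even and odd copies of H

rng = np.random.default_rng(1)
u = rng.standard_normal((N, N))   # stands for an element of A
v = rng.standard_normal((N, N))   # stands for an element of A orth

m = iota @ (p @ u @ p.T + q @ v @ q.T)

# the computation in the text: iota(pup* + qvq*) = qup* + pvq* ...
swap_ok = np.allclose(m, q @ u @ p.T + p @ v @ q.T)

# ... whose square is quvq* + pvup*, so m is nilpotent iff uv is
square_ok = np.allclose(m @ m, q @ (u @ v) @ q.T + p @ (v @ u) @ p.T)
```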
<br />
=== Interpreting the cut rule: the execution formula ===<br />
<br />
Let <math>A</math> and <math>B</math> be two types and <math>u</math> an operator in <math>A\limp B</math>. By definition this means that given <math>v</math> in <math>A</math> and <math>w</math> in <math>B\orth</math> the operator <math>u.(pvp^* + qwq^*)</math> is nilpotent.<br />
<br />
Let us define <math>u_{11}</math> to <math>u_{22}</math> by externalization as above. If we compute <math>(u.(pvp^* + qwq^*))^n</math> we see that this is a finite sum of operators of the form:<br />
: <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
: <math>p(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
: <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math> or<br />
: <math>p(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math><br />
where each of these monomials has exactly <math>n</math> factors of the form <math>u_{i1}v</math> or <math>u_{i2}w</math>.<br />
<br />
For <math>n</math> large enough this sum is null. Suppose now that <math>u</math>, <math>v</math> and <math>w</math> are partial isometries generated by partial permutations of the basis. Then each of these monomials is also generated by a partial permutation, and the fact that their sum is null entails that each of them is null. This is equivalent to the conjunction of the two facts:<br />
: <math>u_{11}v</math> is nilpotent and<br />
: <math>\bigl(u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}\bigr).w</math> is nilpotent.<br />
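The series <math>\sum(u_{11}v)^k</math> appearing in the second condition is a finite sum precisely when <math>u_{11}v</math> is nilpotent. A minimal sketch of this truncated sum in the finite model (the function name <code>execution</code> and the iteration bound are ours):

```python
import numpy as np

def execution(u11, u12, u21, u22, v, max_iter=64):
    """Compute u22 + u21 v (sum_k (u11 v)^k) u12, truncating the series
    once the powers of u11 v vanish (which they must if u11 v is nilpotent)."""
    n = u11.shape[0]
    acc = np.zeros_like(u22)
    power = np.eye(n)              # (u11 v)^0
    for _ in range(max_iter):
        if not power.any():        # the powers have vanished: series exhausted
            break
        acc = acc + power
        power = power @ (u11 @ v)
    else:
        raise ValueError("u11 v does not look nilpotent")
    return u22 + u21 @ v @ acc @ u12

N = 3
rng = np.random.default_rng(2)
u12, u21, u22 = (rng.standard_normal((N, N)) for _ in range(3))
v = rng.standard_normal((N, N))

# with u11 = 0 the series reduces to its k = 0 term, giving u22 + u21 v u12
ex0 = execution(np.zeros((N, N)), u12, u21, u22, v)
```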
<br />
=== Interpreting the tensor rule ===<br />
<br />
Let now <math>A, A', B</math> and <math>B'</math> be types and consider two operators <math>u</math> and <math>u'</math> respectively in <math>A\limp B</math> and <math>A'\limp B'</math>. We define an operator denoted <math>u\tens u'</math> by:<br />
: <math>\begin{align}<br />
u\tens u' &= ppp^*upp^*p^* + qpq^*upp^*p^* + ppp^*uqp^*q^* + qpq^*uqp^*q^*\\<br />
&+ pqp^*u'pq^*p^* + qqq^*u'pq^*p^* + pqp^*u'qq^*q^* + qqq^*u'qq^*q^*<br />
\end{align}</math><br />
<br />
To understand this formula it is convenient to think of <math>u</math> and <math>u'</math> as the internalizations of the matrices:<br />
: <math>U = \begin{pmatrix}u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}<br />
</math> and <math>U' = \begin{pmatrix}u'_{11} & u'_{12}\\<br />
u'_{21} & u'_{22}<br />
\end{pmatrix}</math><br />
where the <math>u_{ij}</math>'s and the <math>u'_{ij}</math>'s are defined by the externalization formulas above, e.g. <math>u_{11} = p^*up</math>.<br />
<br />
Then <math>u\tens u'</math> is actually the internalization of the matrix <math>U\tens U'</math> given by:<br />
<br />
: <math><br />
U\tens U' =<br />
\begin{pmatrix}<br />
u_{11} & 0 & u_{12} & 0 \\<br />
0 & u'_{11} & 0 & u'_{12} \\<br />
u_{21} & 0 & u_{22} & 0 \\<br />
0 & u'_{21} & 0 & u'_{22} \\<br />
\end{pmatrix}<br />
</math><br />
<br />
It remains to show that, given that <math>u</math> and <math>u'</math> are in types <math>A\limp B</math> and <math>A'\limp B'</math>, the operator <math>u\tens u'</math> is in <math>A\tens A'\limp B\tens B'</math>. We postpone this until after the definition of the execution formula.<br />
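The correspondence between the eight-term formula and the 4×4 matrix can be checked in a finite stand-in with two levels of injections (inner C^N → C^{2N} and outer C^{2N} → C^{4N}; all names ours, and we write <code>a</code> for the blocks of <math>u</math> and <code>b</code> for those of <math>u'</math>): externalizing the eight-term operator along the four composite injections pp, pq, qp, qq recovers exactly the blocks of <math>U\tens U'</math>.

```python
import numpy as np

def inj(n):
    """Even/odd injections C^n -> C^{2n}: p e_k = e_{2k}, q e_k = e_{2k+1}."""
    p = np.zeros((2 * n, n)); q = np.zeros((2 * n, n))
    for k in range(n):
        p[2 * k, k] = 1.0; q[2 * k + 1, k] = 1.0
    return p, q

N = 2
p1, q1 = inj(N)      # inner p, q: C^N -> C^{2N}
p2, q2 = inj(2 * N)  # outer p, q: C^{2N} -> C^{4N}

rng = np.random.default_rng(3)
u = rng.standard_normal((2 * N, 2 * N))        # stands for u
u_prime = rng.standard_normal((2 * N, 2 * N))  # stands for u'

ext = lambda w: (p1.T @ w @ p1, p1.T @ w @ q1, q1.T @ w @ p1, q1.T @ w @ q1)
a11, a12, a21, a22 = ext(u)
b11, b12, b21, b22 = ext(u_prime)

# the eight-term formula, reading e.g. ppp*upp*p* as (pp) u11 (pp)*
tens = (p2 @ p1 @ a11 @ (p2 @ p1).T + q2 @ p1 @ a21 @ (p2 @ p1).T
        + p2 @ p1 @ a12 @ (q2 @ p1).T + q2 @ p1 @ a22 @ (q2 @ p1).T
        + p2 @ q1 @ b11 @ (p2 @ q1).T + q2 @ q1 @ b21 @ (p2 @ q1).T
        + p2 @ q1 @ b12 @ (q2 @ q1).T + q2 @ q1 @ b22 @ (q2 @ q1).T)

# the 4x4 matrix U tens U', with rows/columns ordered by pp, pq, qp, qq
Z = np.zeros((N, N))
M = np.block([[a11, Z, a12, Z],
              [Z, b11, Z, b12],
              [a21, Z, a22, Z],
              [Z, b21, Z, b22]])

# externalizing along the composite injections recovers every block of M
cs = [p2 @ p1, p2 @ q1, q2 @ p1, q2 @ q1]
blocks_match = all(np.allclose(cs[i].T @ tens @ cs[j],
                               M[i * N:(i + 1) * N, j * N:(j + 1) * N])
                   for i in range(4) for j in range(4))
```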
<br />
= The Geometry of Interaction as an abstract machine =</div>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operators algebra: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math><ref>to be precise one should say from ''the space interpreting'' <math>A</math> to the space interpreting'' <math>B</math></ref>, and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math> to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'' that bares some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditionnal [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (Multiplicative and Exponential fragment) which is enough to encode lambda-calculus. Since then Girard proposed several improvements: firstly the extension to the additive connectives known as ''Geometry of Interaction 3'' and more recently a complete reformulation using Von Neumann algebras that allows to deal with some aspects of [[Light linear logics|implicit complexity]]<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata that interact one with the other through their common interface. Also the execution formula has rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentioned in [[coherent semantics]] under the name ''symmetric reducibility''. First, set a general space called the ''proof space'', because this is where the interpretations of proofs will live. In the case of GoI, the proof space is the space of bounded operators on <math>\ell^2</math>. Note that the proof space generally contains many more objects than interpretations of proofs; in the GoI case we will see that interpretations of proofs happen to be a very particular kind of partial isometries.<br />
<br />
Second, define a duality on this space, denoted <math>u\perp v</math>. For the GoI, two dualities have proved to work, the first one being nilpotency: two operators <math>u</math> and <math>v</math> are dual if <math>uv</math> is nilpotent, that is, if there is a nonnegative integer <math>n</math> such that <math>(uv)^n = 0</math>. We will denote by <math>\bot</math> the set of nilpotent operators, so that the duality reads:<br />
: <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
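To make the nilpotency duality concrete, here is a small numerical illustration (ours, not part of the original construction): operators on <math>\ell^2</math> are truncated to finite matrices, for which nilpotency is decidable by raising the matrix to the power of its dimension.<br />

```python
import numpy as np

def is_nilpotent(m):
    """Check whether a finite square matrix m is nilpotent, i.e. m^n = 0
    for some n.  For an N x N matrix it suffices to test up to n = N."""
    acc = np.eye(m.shape[0])
    for _ in range(m.shape[0]):
        acc = acc @ m
        if np.allclose(acc, 0):
            return True
    return False

# A strictly triangular matrix is the classic nilpotent example.
u = np.array([[0., 1.], [0., 0.]])
v = np.array([[0., 0.], [1., 0.]])

# u is dual to itself: uu = 0 is nilpotent.
assert is_nilpotent(u @ u)
# u and v are NOT dual: uv = diag(1, 0) is idempotent and nonzero.
assert not is_nilpotent(u @ v)
```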
<br />
This duality applies to operators and should not be confused with orthogonality of vectors. To enforce the distinction we will reserve the notation <math>\perp</math> exclusively for the duality of operators and never use it for orthogonality of vectors.<br />
<br />
Last, define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for every operator <math>v</math>, if <math>v\in T\orth</math>, that is, if <math>u'v\in\bot</math> for all <math>u'\in T</math>, then <math>uv\in\bot</math>.<br />
<br />
It now remains to interpret logical operations, that is, to associate a type to each formula and an object to each proof, and to show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
We begin with a brief tour of the operations on Hilbert spaces that will be used in the sequel. In this article <math>H</math> will stand for the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> we denote by <math>\langle x,y\rangle</math> their scalar product:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''orthogonal'' if their scalar product is null. This notion is not to be confused with the duality of operators defined above. The ''norm'' of a vector is the square root of its scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical Hilbert basis of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kronecker symbol. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
In this article we call an ''operator'' on <math>H</math> a ''continuous'' linear map from <math>H</math> to <math>H</math>. Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup over the unit ball of the norms of its values:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.<br />
<br />
The set of (bounded) operators is denoted <math>\mathcal{B}(H)</math>. This is our proof space.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors annihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel:<br />
<br />
: <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
: <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
: <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>0</math> (the projector on the null subspace) or <math>1</math>, that is, an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is self-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^* u = u</math>; as a consequence <math>uu^*</math> is a projector the range of which is the range of <math>u</math>. Similarly <math>u^* u</math> is also a projector the range of which is the domain of <math>u</math>. The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math> that is if <math>u^* u = 1</math> we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries, the equation <math>uu^* + vv^* = 1</math> means that the codomains of <math>u</math> and <math>v</math> are orthogonal and that their direct sum is <math>H</math>.<br />
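As a quick finite-dimensional illustration (ours), the truncated shift operator <math>S : e_n \mapsto e_{n+1}</math> (with <math>e_{N-1} \mapsto 0</math>) satisfies the defining equation of partial isometries, and <math>S^*S</math>, <math>SS^*</math> are the projectors on its domain and codomain:<br />

```python
import numpy as np

# The truncated shift S: e_n -> e_{n+1} (and e_{N-1} -> 0) is a partial
# isometry on C^N standing in for an operator on l2.
N = 4
S = np.eye(N, k=-1)   # ones on the subdiagonal

# Defining identity of partial isometries: S S* S = S.
assert np.array_equal(S @ S.T @ S, S)
# S* S projects on the domain span(e_0,...,e_{N-2}) ...
assert np.array_equal(S.T @ S, np.diag([1., 1., 1., 0.]))
# ... and S S* on the codomain span(e_1,...,e_{N-1}).
assert np.array_equal(S @ S.T, np.diag([0., 1., 1., 1.]))
```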
<br />
=== Partial permutations and partial isometries ===<br />
<br />
It turns out that most of the operators needed to interpret logical operations are generated by ''partial permutations'' on the basis, which in particular entails that they are partial isometries.<br />
<br />
More precisely a partial permutation <math>\varphi</math> on <math>\mathbb{N}</math> is a function defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> which is one-to-one onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
: <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
: if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
: the codomain of <math>\varphi\circ\psi</math> is the image of the domain.<br />
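The composition rules above can be transcribed directly, representing a partial permutation as a Python dict (a convention of ours, keys being the domain and values the images):<br />

```python
def compose(phi, psi):
    """Compose two partial permutations represented as dicts
    (key = point of the domain, value = its image)."""
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

phi = {0: 2, 1: 0}          # domain {0, 1}, codomain {2, 0}
psi = {2: 1, 3: 3}          # domain {2, 3}, codomain {1, 3}

# n is in the domain of phi.psi iff psi(n) lands in the domain of phi.
assert compose(phi, psi) == {2: 0}
# In the other order: phi(0) = 2 is in dom(psi), phi(1) = 0 is not.
assert compose(psi, phi) == {0: 1}
```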
<br />
Partial permutations are well known to form an ''inverse monoid'' structure, which we detail now.<br />
<br />
A ''partial identity'' is a partial permutation <math>1_D</math> whose domain and codomain are both equal to a subset <math>D</math>, on which <math>1_D</math> is the identity function. Partial identities are idempotent for composition.<br />
<br />
Among partial identities one finds the identity on the empty subset, that is, the empty map, which we will denote by <math>0</math>, and the identity on <math>\mathbb{N}</math>, which we will denote by <math>1</math>. This latter partial permutation is the neutral element for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and which satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other terms if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. As a particular case, if <math>\varphi</math> is <math>1_D</math>, the partial identity on <math>D</math>, then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.<br />
<br />
If <math>\psi</math> is another partial permutation then we have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
If <math>\varphi</math> is a partial permutation then the adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the projector on the domain of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>.<br />
<br />
and similarly the projector on the codomain of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u_\varphi^* = u_{1_{C_\varphi}}</math>.<br />
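These identities are easy to check on finite truncations. In the following sketch (our own), a partial permutation is a dict and <math>u_\varphi</math> is its 0/1 matrix:<br />

```python
import numpy as np

N = 4  # truncate H = l2(N) to C^N for illustration

def matrix_of(phi, n=N):
    """0/1 matrix of the partial isometry u_phi: column k is e_{phi(k)}
    when phi is defined at k, and the zero column otherwise."""
    m = np.zeros((n, n))
    for k, v in phi.items():
        m[v, k] = 1.0
    return m

phi = {0: 2, 1: 3}                      # a partial permutation on {0, 1}
inv = {v: k for k, v in phi.items()}    # its inverse

u = matrix_of(phi)
# The adjoint (here: the transpose) is generated by the inverse permutation.
assert np.array_equal(u.T, matrix_of(inv))
# u* u is the projector on span(e_0, e_1), i.e. on the domain of phi ...
assert np.array_equal(u.T @ u, np.diag([1., 1., 0., 0.]))
# ... and u u* is the projector on span(e_2, e_3), the codomain.
assert np.array_equal(u @ u.T, np.diag([0., 0., 1., 1.]))
```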
<br />
{{Proposition|<br />
Let <math>u_\varphi</math> and <math>u_\psi</math> be two partial isometries generated by partial permutations. Then we have:<br />
: <math>u_\varphi + u_\psi = 0</math> iff <math>u_\varphi = u_\psi = 0</math>,<br />
that is iff <math>\varphi</math> and <math>\psi</math> are the nowhere defined partial permutation.<br />
}} Indeed suppose <math>u_\varphi + u_\psi = 0</math>; then for any <math>n</math> we have <math>u_\varphi(e_n) + u_\psi(e_n) = e_{\varphi(n)} + e_{\psi(n)} = 0</math>, which is possible only if <math>\varphi(n)</math> and <math>\psi(n)</math> are both undefined.<br />
<br />
=== From operators to matrices: internalization/externalization ===<br />
<br />
It will be convenient to view operators on <math>H</math> as acting on <math>H\oplus H</math>, and conversely. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\rightsquigarrow p(x)+q(y)</math> where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are orthogonal, thus we have <math>p^* q = q^* p = 0</math>. Note that we also have <math>pp^* + qq^* = 1</math>.<br />
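In terms of partial permutations, <math>p</math> and <math>q</math> are generated by <math>n\mapsto 2n</math> and <math>n\mapsto 2n+1</math>, and the identities above become facts about composition of partial permutations. Here is a finite check (ours, on a truncated domain):<br />

```python
# p: n -> 2n and q: n -> 2n+1 as partial permutations on {0,...,N-1},
# with the dict representation (key = domain point, value = image).
N = 8
p = {n: 2 * n for n in range(N // 2)}       # codomain: even numbers
q = {n: 2 * n + 1 for n in range(N // 2)}   # codomain: odd numbers

def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    return {v: k for k, v in phi.items()}

# p*p = q*q = 1: full domain (within the truncation) ...
assert compose(inverse(p), p) == {n: n for n in range(N // 2)}
assert compose(inverse(q), q) == {n: n for n in range(N // 2)}
# ... while p*q = q*p = 0: the codomains are disjoint.
assert compose(inverse(p), q) == {}
assert compose(inverse(q), p) == {}
# pp* + qq* = 1: together the two codomains cover all of {0,...,N-1}.
assert sorted(list(p.values()) + list(q.values())) == list(range(N))
```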
<br />
The choice of <math>p</math> and <math>q</math> is actually arbitrary: any two partial isometries with full domain and orthogonal codomains would do the job.<br />
<br />
Let <math>U</math> be an operator on <math>H\oplus H</math>. We can write <math>U</math> as a matrix:<br />
: <math>U = \begin{pmatrix}<br />
u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}</math><br />
where each <math>u_{ij}</math> operates on <math>H</math>.<br />
<br />
Now through the isomorphism <math>H\oplus H\cong H</math> we may transform <math>U</math> into the operator <math>u</math> on <math>H</math> defined by:<br />
<br />
: <math>u = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math>.<br />
<br />
We call <math>u</math> the ''internalization'' of <math>U</math>. Internalization is compatible with composition (functorial, so to speak): if <math>V</math> is another operator on <math>H\oplus H</math> then the internalization of the matrix product <math>UV</math> is the product <math>uv</math>.<br />
<br />
Conversely, given an operator <math>u</math> on <math>H</math> we may ''externalize'' it, obtaining an operator <math>U</math> on <math>H\oplus H</math> whose matrix components are:<br />
: <math>u_{11} = p^*up</math>;<br />
: <math>u_{12} = p^*uq</math>;<br />
: <math>u_{21} = q^*up</math>;<br />
: <math>u_{22} = q^*uq</math>.<br />
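The internalization/externalization round trip can be checked mechanically. In this finite-dimensional sketch (our own illustration), <math>p</math> and <math>q</math> are truncated to rectangular matrices <math>\mathbb{C}^N\to\mathbb{C}^{2N}</math>, for which the identities <math>p^*p = q^*q = 1</math>, <math>p^*q = 0</math> and <math>pp^* + qq^* = 1</math> hold exactly:<br />

```python
import numpy as np

N = 3
# Truncated p, q : C^N -> C^{2N} with p e_n = e_{2n}, q e_n = e_{2n+1}.
p = np.zeros((2 * N, N))
q = np.zeros((2 * N, N))
for n in range(N):
    p[2 * n, n] = 1.0
    q[2 * n + 1, n] = 1.0

# The defining identities of the pair (p, q):
assert np.array_equal(p.T @ p, np.eye(N)) and np.array_equal(q.T @ q, np.eye(N))
assert np.array_equal(p.T @ q, np.zeros((N, N)))
assert np.array_equal(p @ p.T + q @ q.T, np.eye(2 * N))

def internalize(u11, u12, u21, u22):
    """u = p u11 p* + p u12 q* + q u21 p* + q u22 q*."""
    return p @ u11 @ p.T + p @ u12 @ q.T + q @ u21 @ p.T + q @ u22 @ q.T

def externalize(u):
    """Recover the four matrix components u_ij = (p or q)* u (p or q)."""
    return p.T @ u @ p, p.T @ u @ q, q.T @ u @ p, q.T @ u @ q

rng = np.random.default_rng(0)
blocks = [rng.standard_normal((N, N)) for _ in range(4)]
u = internalize(*blocks)
# Externalizing the internalization recovers the four blocks exactly.
assert all(np.allclose(a, b) for a, b in zip(blocks, externalize(u)))
```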
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are operators we denote by <math>u\perp v</math> the fact that <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators so that <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
If <math>X</math> is a set of operators, also recall that <math>X\orth</math> denotes the set of dual operators:<br />
: <math>X\orth = \{v\in \mathcal{B}(H) \text{ such that }\forall u\in X, uv \in\bot\}</math>.<br />
<br />
There are a few properties of this duality that we will use without mention in the sequel; let <math>X</math> and <math>Y</math> be sets of operators:<br />
: <math>X\subset X\biorth</math>;<br />
: if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;<br />
: <math>X\orth = X\triorth</math>.<br />
<br />
In particular <math>X\orth</math> is always a type (equal to its biorthogonal). We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
<br />
=== The tensor and the linear application ===<br />
<br />
Given two types <math>A</math> and <math>B</math>, we define their tensor by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by biorthogonal to make sure that we obtain a type. From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
This is an abuse of notation, as this operation is more akin to a direct sum than to a tensor. We will stick to this notation though, because it defines the interpretation of the tensor connective of linear logic.<br />
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we see that we have:<br />
: <math>A\limp B = \{u\in\mathcal{B}(H)\text{ such that } \forall v\in A, \forall w\in B\orth,\, u.(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
=== The identity ===<br />
<br />
The interpretation of the identity is an example of the internalization/externalization procedure. Given a type <math>A</math> we are to find an operator <math>\iota</math> in type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. This is indeed the case since <math>u</math> is in <math>A</math> and <math>v</math> is in <math>A\orth</math>.<br />
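The computation can be replayed numerically on a finite truncation (a sketch of ours): with a self-dual nilpotent <math>u</math> the operator <math>\iota(pup^* + qvq^*)</math> is nilpotent, while taking <math>u = v = 1</math>, for which <math>uv</math> is not nilpotent, yields a non-nilpotent operator:<br />

```python
import numpy as np

N = 2
# Truncated p, q : C^N -> C^{2N}.
p = np.zeros((2 * N, N))
q = np.zeros((2 * N, N))
for n in range(N):
    p[2 * n, n] = 1.0
    q[2 * n + 1, n] = 1.0

iota = p @ q.T + q @ p.T   # the interpretation of the identity

def is_nilpotent(m):
    acc = np.eye(m.shape[0])
    for _ in range(m.shape[0]):
        acc = acc @ m
        if np.allclose(acc, 0):
            return True
    return False

u = np.array([[0., 1.], [0., 0.]])   # uu = 0, so u is dual to itself
v = u
# uv is nilpotent, hence iota applied to the pair (u, v) is nilpotent:
assert is_nilpotent(iota @ (p @ u @ p.T + q @ v @ q.T))
# With u = v = 1 the product uv = 1 is not nilpotent, and indeed:
assert not is_nilpotent(iota @ (p @ np.eye(N) @ p.T + q @ np.eye(N) @ q.T))
```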
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
<br />
=== Interpreting the cut rule: the execution formula ===<br />
<br />
Let <math>A</math> and <math>B</math> be two types and <math>u</math> an operator in <math>A\limp B</math>. By definition this means that given <math>v</math> in <math>A</math> and <math>w</math> in <math>B\orth</math> the operator <math>u.(pvp^* + qwq^*)</math> is nilpotent.<br />
<br />
Let us define <math>u_{11}</math> to <math>u_{22}</math> by externalization as above. If we compute <math>(u.(pvp^* + qwq^*))^n</math> we see that this is a finite sum of operators of the form:<br />
: <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
: <math>p(u_{11}v)^{k_1}u_{12}w\dots u_{12}w(u_{22}w)^{k_{p+1}}q^*</math>,<br />
: <math>q(u_{22}w)^{k_0}u_{21}v(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math> or<br />
: <math>p(u_{11}v)^{k_1}u_{12}w\dots (u_{11}v)^{k_p}p^*</math><br />
where each of these monomials has exactly <math>n</math> factors of the form <math>u_{ij}(v\text{ or }w)</math>.<br />
<br />
Let us suppose that <math>u</math>, <math>v</math> and <math>w</math> are partial isometries generated by partial permutations of the basis. Then this is also the case for all these monomials. For <math>n</math> large enough, we know that <math>(u.(pvp^* + qwq^*))^n = 0</math>, thus all these monomials are also null for <math>n</math> large enough. Now this is equivalent to the conjunction of two facts:<br />
: <math>u_{11}v</math> is nilpotent and<br />
: <math>\bigl(u_{22} + u_{21}v\sum(u_{11}v)^ku_{12}\bigr).w</math> is nilpotent.<br />
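The second operator above is given by the ''execution formula''. As a finite-dimensional sketch (the function name <code>execution</code> is ours), one can check its expected behaviour against the <math>\iota</math> operator of the previous section: since the externalization of <math>\iota</math> has blocks <math>u_{11} = u_{22} = 0</math> and <math>u_{12} = u_{21} = 1</math>, executing it against <math>v</math> should return <math>v</math> itself, in accordance with <math>\iota</math> interpreting the identity proof:<br />

```python
import numpy as np

def execution(u11, u12, u21, u22, v):
    """Ex(u, v) = u22 + u21 v (sum_k (u11 v)^k) u12; the sum is finite
    because u11 v is assumed nilpotent, so (u11 v)^k = 0 for k > dim."""
    n = u11.shape[0]
    s = np.zeros((n, n))
    power = np.eye(n)
    for _ in range(n + 1):
        s = s + power
        power = power @ u11 @ v
    return u22 + u21 @ v @ s @ u12

N = 2
zero, one = np.zeros((N, N)), np.eye(N)
v = np.array([[0., 1.], [0., 0.]])

# Cutting v against the externalization of iota returns v itself.
assert np.allclose(execution(zero, one, one, zero, v), v)
```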
<br />
=== Interpreting the tensor rule ===<br />
<br />
Let now <math>A, A', B</math> and <math>B'</math> be types and consider two operators <math>u</math> and <math>u'</math> respectively in <math>A\limp B</math> and <math>A'\limp B'</math>. We define an operator denoted <math>u\tens u'</math> by:<br />
: <math>\begin{align}<br />
u\tens u' &= ppp^*upp^*p^* + qpq^*upp^*p^* + ppp^*uqp^*q^* + qpq^*uqp^*q^*\\<br />
&+ pqp^*u'pq^*p^* + qqq^*u'pq^*p^* + pqp^*u'qq^*q^* + qqq^*u'qq^*q^*<br />
\end{align}</math><br />
<br />
To understand this formula it is convenient to think <math>u</math> and <math>u'</math> as the internalizations of the matrices:<br />
: <math>U = \begin{pmatrix}u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}<br />
</math> and <math>U' = \begin{pmatrix}u'_{11} & u'_{12}\\<br />
u'_{21} & u'_{22}<br />
\end{pmatrix}</math><br />
where the <math>u_{ij}</math>'s and the <math>u'_{ij}</math>'s are defined by externalization as above, e.g., <math>u_{11} = p^*up</math>.<br />
<br />
Then <math>u\tens u'</math> is actually the internalization of the matrix <math>U\tens U'</math> given by:<br />
<br />
: <math><br />
U\tens U' =<br />
\begin{pmatrix}<br />
u_{11} & 0 & u_{12} & 0 \\<br />
0 & u'_{11} & 0 & u'_{12} \\<br />
u_{21} & 0 & u_{22} & 0 \\<br />
0 & u'_{21} & 0 & u'_{22} \\<br />
\end{pmatrix}<br />
</math><br />
<br />
It remains to show that, given that <math>u</math> and <math>u'</math> are in types <math>A\limp B</math> and <math>A'\limp B'</math>, the operator <math>u\tens u'</math> is in <math>A\tens A'\limp B\tens B'</math>. We postpone this until after the definition of the execution formula.<br />
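The claim that <math>u\tens u'</math> is the internalization of <math>U\tens U'</math> can be machine-checked on a finite truncation. In this sketch (ours; in finite dimension the two nested uses of <math>p,q</math> must be taken at different sizes, written <code>p, q</code> and <code>P, Q</code> below), the eight terms of the defining formula are compared against the internalization of the <math>4\times 4</math> block matrix, the four components being embedded along <math>pp</math>, <math>pq</math>, <math>qp</math>, <math>qq</math>:<br />

```python
import numpy as np

def iso(n):
    """Truncated p, q : C^n -> C^{2n}, p e_k = e_{2k}, q e_k = e_{2k+1}."""
    p = np.zeros((2 * n, n))
    q = np.zeros((2 * n, n))
    for k in range(n):
        p[2 * k, k] = 1.0
        q[2 * k + 1, k] = 1.0
    return p, q

N = 2
p, q = iso(N)        # inner pair, embedding N x N blocks into 2N x 2N
P, Q = iso(2 * N)    # outer pair, playing the role of the leftmost p, q

rng = np.random.default_rng(1)
u = rng.standard_normal((2 * N, 2 * N))    # stands for u  in A  -o B
u2 = rng.standard_normal((2 * N, 2 * N))   # stands for u' in A' -o B'

# The eight terms of the formula, outer occurrences of p, q written P, Q:
tensor = (P@p @ (p.T@u@p) @ p.T@P.T + Q@p @ (q.T@u@p) @ p.T@P.T
        + P@p @ (p.T@u@q) @ p.T@Q.T + Q@p @ (q.T@u@q) @ p.T@Q.T
        + P@q @ (p.T@u2@p) @ q.T@P.T + Q@q @ (q.T@u2@p) @ q.T@P.T
        + P@q @ (p.T@u2@q) @ q.T@Q.T + Q@q @ (q.T@u2@q) @ q.T@Q.T)

# Internalization of the 4x4 block matrix U (tens) U', the four summands
# of H + H + H + H being embedded along Pp, Pq, Qp, Qq respectively.
b = [P@p, P@q, Q@p, Q@q]
z = np.zeros((N, N))
u11, u12, u21, u22 = p.T@u@p, p.T@u@q, q.T@u@p, q.T@u@q
v11, v12, v21, v22 = p.T@u2@p, p.T@u2@q, q.T@u2@p, q.T@u2@q
M = [[u11, z, u12, z],
     [z, v11, z, v12],
     [u21, z, u22, z],
     [z, v21, z, v22]]
internal = sum(b[i] @ M[i][j] @ b[j].T for i in range(4) for j in range(4))

assert np.allclose(tensor, internal)
```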
<br />
= The Geometry of Interaction as an abstract machine =</div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interactionGeometry of interaction2010-04-13T16:59:11Z<p>Laurent Regnier: interpretation of the tensor rule</p>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operator algebras: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math><ref>to be precise one should say from ''the space interpreting'' <math>A</math> to the space interpreting'' <math>B</math></ref>, and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math> to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'' that bares some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditionnal [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (Multiplicative and Exponential fragment) which is enough to encode lambda-calculus. Since then Girard proposed several improvements: firstly the extension to the additive connectives known as ''Geometry of Interaction 3'' and more recently a complete reformulation using Von Neumann algebras that allows to deal with some aspects of [[Light linear logics|implicit complexity]]<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata that interact one with the other through their common interface. Also the execution formula has rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentionned in [[coherent semantics]] under the name ''symmetric reducibility''. First set a general space called the ''proof space'' because this is where the interpretations of proofs will live. In the case of GoI, the proof space is the space of bounded operators on <math>\ell^2</math>. Note that the proof space generally contains much more objects than interpretations of proofs; in the GoI case we will see that interpretations of proofs happen to be some very peculiar kind of partial isometries.<br />
<br />
Second define a duality on this space that will be denoted as <math>u\perp v</math>. For the GoI, two dualities have proved to work, the first one being nilpotency: two operators <math>u</math> and <math>v</math> are dual if <math>uv</math> is nilpotent, that is, if there is a nonegative integer <math>n</math> such that <math>(uv)^n = 0</math>. We will denote by <math>\bot</math> the set of nilpotent operators so that the duality reads:<br />
: <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
This duality applies to operators and shouldn't be confused with orthogonality<br />
of vectors. . To enforce this we will reserve the notation <math>\perp</math><br />
exclusively for the duality of operators and never use it for othogonality of vectors.<br />
<br />
Last define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for all operator <math>v</math>, if <math>v\in T\orth</math>, that is if <math>u'v\in\bot</math> for all <math>u'\in T</math>, then <math>uv\in\bot</math>.<br />
<br />
It remains now to interpret logical operations, that is associate a type to each formula, an object to each proof and show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
We begin by a brief tour of the operations in Hilbert spaces that will be used in the sequel. In this article <math>H</math> will stand for the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> we denote by <math>\langle x,y\rangle</math> their scalar product:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''othogonal'' if their scalar product is nul. This notion is not to be confused with the orthogonality of operators defined above. The ''norm'' of a vector is the square root of the scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical hilbertian basis of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kroenecker symbol. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
In this article we call ''operator'' on <math>H</math> a ''continuous'' linear map from <math>H</math> to <math>H</math>. Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.<br />
<br />
The set of (bounded) operators is denoted <math>\mathcal{B}(H)</math>. This is our proof space.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are anihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel:<br />
<br />
: <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
: <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
: <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>0</math> (the projector<br />
on the null subspace) or <math>1</math>, that is an operator <math>p</math><br />
such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is auto-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^* u = u</math>; as a consequence <math>uu^*</math> is a projector the range of which is the range of <math>u</math>. Similarly <math>u^* u</math> is also a projector the range of which is the domain of <math>u</math>. The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math> that is if <math>u^* u = 1</math> we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries, the equation <math>uu^* + vv^* = 1</math> means that the codomains of <math>u</math> and <math>v</math> are orthogonal and that their direct sum is <math>H</math>.<br />
<br />
=== Partial permutations and partial isometries ===<br />
<br />
It turns out that most of the operators needed to interpret logical operations are generated by ''partial permutations'' on the basis, which in particular entails that they are partial isometries.<br />
<br />
More precisely a partial permutation <math>\varphi</math> on <math>\mathbb{N}</math> is a function defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> which is one-to-one onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
: <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
: if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
: the codomain of <math>\varphi\circ\psi</math> is the image of the domain.<br />
<br />
Partial permutations are well known to form a structure of ''inverse monoid'' that we detail now.<br />
<br />
A ''partial identitie'' is a partial permutation <math>1_D</math> whose domain and codomain are both equal to a subset <math>D</math> on which <math>1_D</math> is the identity function. Partial identities are idempotent for composition.<br />
<br />
Among partial identities one finds the identity on the empty subset, that is the empty map, that we will denote as <math>0</math> and the identity on <math>\mathbb{N}</math> that we will denote by <math>1</math>. This latter permutation is the neutral for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and who satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other terms if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spaned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spaned by <math>(e_n)_{n\in C_\varphi}</math>. As a particular case if <math>\varphi</math> is <math>1_D</math> the partial identity on <math>D</math> then <math>u_\varphi</math> is the projector on the subspace spaned by <math>(e_n)_{n\in D}</math>.<br />
<br />
If <math>\psi</math> is another partial permutation then we have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
If <math>\varphi</math> is a partial permutation then the adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the projector on the domain of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>.<br />
<br />
and similarly the projector on the codomain of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u_\varphi^* = u_{1_{C_\varphi}}</math>.<br />
<br />
{{Proposition|<br />
Let <math>u_\varphi</math> and <math>u_\psi</math> be two partial isometries generated by partial permutations. Then we have:<br />
: <math>u_\varphi + u_\psi = 0</math> iff <math>u_\varphi = u_\psi = 0</math>,<br />
that is iff <math>\varphi</math> and <math>\psi</math> are the nowhere defined partial permutation.<br />
}} Indeed, suppose <math>u_\varphi + u_\psi = 0</math>; then for any <math>n</math> we have <math>u_\varphi(e_n) + u_\psi(e_n) = e_{\varphi(n)} + e_{\psi(n)} = 0</math>, which is possible only if <math>\varphi(n)</math> and <math>\psi(n)</math> are both undefined.<br />
<br />
=== From operators to matrices: internalization/externalization ===<br />
<br />
It will be convenient to view operators on <math>H</math> as acting on <math>H\oplus H</math>, and conversely. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\rightsquigarrow p(x)+q(y)</math> where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are orthogonal, thus we have <math>p^* q = q^* p = 0</math>. Note that we also have <math>pp^* + qq^* = 1</math>.<br />
<br />
The choice of <math>p</math> and <math>q</math> is actually arbitrary, any two partial isometries with full domain and orthogonal codomains would do the job.<br />
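<br />
In a finite-dimensional sketch one can take for <math>p</math> and <math>q</math> rectangular matrices (this truncation is our assumption; in the article <math>p</math> and <math>q</math> act on <math>H</math> itself) and check the three identities directly:<br />

```python
import numpy as np

N = 4
# p e_n = e_{2n}, q e_n = e_{2n+1}, truncated to 2N x N matrices
P = np.zeros((2 * N, N))
Q = np.zeros((2 * N, N))
for n in range(N):
    P[2 * n, n] = 1.0
    Q[2 * n + 1, n] = 1.0

assert np.array_equal(P.T @ P, np.eye(N))                # full domain: p* p = 1
assert np.array_equal(Q.T @ Q, np.eye(N))                # full domain: q* q = 1
assert not (P.T @ Q).any() and not (Q.T @ P).any()       # orthogonal codomains
assert np.array_equal(P @ P.T + Q @ Q.T, np.eye(2 * N))  # pp* + qq* = 1
```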
<br />
Let <math>U</math> be an operator on <math>H\oplus H</math>. We can write <math>U</math> as a matrix:<br />
: <math>U = \begin{pmatrix}<br />
u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}</math><br />
where each <math>u_{ij}</math> operates on <math>H</math>.<br />
<br />
Now through the isomorphism <math>H\oplus H\cong H</math> we may transform <math>U</math> into the operator <math>u</math> on <math>H</math> defined by:<br />
<br />
: <math>u = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math>.<br />
<br />
We call <math>u</math> the ''internalization'' of <math>U</math>. Internalization is compatible with composition (functorial so to speak): if <math>V</math> is another operator on <math>H\oplus H</math> then the internalization of the matrix product <math>UV</math> is the product <math>uv</math>.<br />
<br />
Conversely given an operator <math>u</math> on <math>H</math> we may externalize it obtaining an operator <math>U</math> on <math>H\oplus H</math>:<br />
: <math>u_{11} = p^*up</math>;<br />
: <math>u_{12} = p^*uq</math>;<br />
: <math>u_{21} = q^*up</math>;<br />
: <math>u_{22} = q^*uq</math>.<br />
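<br />
Internalization and externalization are inverse to each other, and internalization turns matrix product into operator product. A small numerical sketch (ours; finite matrices stand in for operators on <math>H</math>):<br />

```python
import numpy as np

def pq(N):
    """Truncated p, q as 2N x N matrices: e_n -> e_{2n} and e_n -> e_{2n+1}."""
    P = np.zeros((2 * N, N))
    Q = np.zeros((2 * N, N))
    for n in range(N):
        P[2 * n, n] = 1.0
        Q[2 * n + 1, n] = 1.0
    return P, Q

def internalize(U, N):
    """From a 2x2 block operator on H (+) H to an operator on H."""
    P, Q = pq(N)
    u11, u12 = U[:N, :N], U[:N, N:]
    u21, u22 = U[N:, :N], U[N:, N:]
    return P @ u11 @ P.T + P @ u12 @ Q.T + Q @ u21 @ P.T + Q @ u22 @ Q.T

def externalize(u, N):
    P, Q = pq(N)
    return np.block([[P.T @ u @ P, P.T @ u @ Q],
                     [Q.T @ u @ P, Q.T @ u @ Q]])

rng = np.random.default_rng(0)
N = 3
U = rng.standard_normal((2 * N, 2 * N))
V = rng.standard_normal((2 * N, 2 * N))
# externalization undoes internalization ...
assert np.allclose(externalize(internalize(U, N), N), U)
# ... and internalization is functorial: int(U V) = int(U) int(V)
assert np.allclose(internalize(U @ V, N), internalize(U, N) @ internalize(V, N))
```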
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are operators we denote by <math>u\perp v</math> the fact that <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators so that <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
If <math>X</math> is a set of operators also recall that <math>X\orth</math> denotes the set of dual operators:<br />
: <math>X\orth = \{v\in \mathcal{B}(H) \text{ such that }\forall u\in X, uv \in\bot\}</math>.<br />
<br />
There are a few properties of this duality that we will use without mention in the sequel; let <math>X</math> and <math>Y</math> be sets of operators:<br />
: <math>X\subset X\biorth</math>;<br />
: if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;<br />
: <math>X\orth = X\triorth</math>.<br />
<br />
In particular <math>X\orth</math> is always a type (equal to its biorthogonal). We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
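<br />
These closure properties can be observed on a toy model. In the sketch below (our construction: the orthogonal is computed inside a small finite universe of matrices, whereas in the article it ranges over all of <math>\mathcal{B}(H)</math>) the duality is nilpotency of the product:<br />

```python
import numpy as np

def nilpotent(m, bound=8):
    p = np.eye(len(m))
    for _ in range(bound):
        p = p @ m
        if not p.any():
            return True
    return False

# toy universe of 2 x 2 matrices standing in for B(H)
z = np.zeros((2, 2))
n1 = np.array([[0., 1.], [0., 0.]])
n2 = np.array([[0., 0.], [1., 0.]])
e = np.eye(2)
universe = [z, n1, n2, e]

def orth(X):
    """The duals of every element of X, computed within the universe."""
    return [v for v in universe if all(nilpotent(u @ v) for u in X)]

def contains(X, Y):
    return all(any(np.array_equal(x, y) for y in Y) for x in X)

X = [n1]
assert contains(X, orth(orth(X)))  # X included in its biorthogonal
triorth = orth(orth(orth(X)))
assert contains(orth(X), triorth) and contains(triorth, orth(X))  # orth = triorth
```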
<br />
=== The tensor and the linear implication ===<br />
<br />
Given <math>A</math> and <math>B</math> two types, we define their tensor by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by biorthogonal to make sure that we obtain a type. From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
This is an abuse of notation as this operation is more like a direct sum than a tensor. We will stick to this notation though because it defines the interpretation of the tensor connective of linear logic.<br />
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we see that we have:<br />
: <math>A\limp B = \{u\in\mathcal{B}(H)\text{ such that } \forall v\in A, \forall w\in B\orth,\, u(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
Let now <math>A, A', B</math> and <math>B'</math> be types and consider two operators <math>u</math> and <math>u'</math> respectively in <math>A\limp B</math> and <math>A'\limp B'</math>. We define an operator denoted <math>u\tens u'</math> by:<br />
: <math>\begin{align}<br />
u\tens u' &= ppp^*upp^*p^* + qpq^*upp^*p^* + ppp^*uqp^*q^* + qpq^*uqp^*q^*\\<br />
&+ pqp^*u'pq^*p^* + qqq^*u'pq^*p^* + pqp^*u'qq^*q^* + qqq^*u'qq^*q^*<br />
\end{align}</math><br />
<br />
To understand this formula it is convenient to think <math>u</math> and <math>u'</math> as the internalizations of the matrices:<br />
: <math>U = \begin{pmatrix}u_{11} & u_{12}\\<br />
u_{21} & u_{22}<br />
\end{pmatrix}<br />
</math> and <math>U' = \begin{pmatrix}u'_{11} & u'_{12}\\<br />
u'_{21} & u'_{22}<br />
\end{pmatrix}</math><br />
where the <math>u_{ij}</math>'s and the <math>u'_{ij}</math>'s are defined by the formula above, e.g., <math>u_{11} = p^*up</math>.<br />
<br />
Then <math>u\tens u'</math> is actually the internalization of the matrix <math>U\tens U'</math> given by:<br />
<br />
: <math><br />
U\tens U' =<br />
\begin{pmatrix}<br />
u_{11} & 0 & u_{12} & 0 \\<br />
0 & u'_{11} & 0 & u'_{12} \\<br />
u_{21} & 0 & u_{22} & 0 \\<br />
0 & u'_{21} & 0 & u'_{22} \\<br />
\end{pmatrix}<br />
</math><br />
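<br />
This matrix claim can be verified numerically on a finite truncation. In the sketch below (ours) the components of <math>u'</math> are named <math>v_{ij}</math>, and two levels of the <math>p,q</math> encoding are used, with sizes adjusted for the truncation:<br />

```python
import numpy as np

def pq(rows, cols):
    """Truncated p, q: e_n -> e_{2n} and e_n -> e_{2n+1} as rows x cols matrices."""
    P = np.zeros((rows, cols))
    Q = np.zeros((rows, cols))
    for n in range(cols):
        P[2 * n, n] = 1.0
        Q[2 * n + 1, n] = 1.0
    return P, Q

N = 2
rng = np.random.default_rng(1)
u = rng.standard_normal((2 * N, 2 * N))
v = rng.standard_normal((2 * N, 2 * N))  # v plays the role of u'
P1, Q1 = pq(2 * N, N)      # inner level
P2, Q2 = pq(4 * N, 2 * N)  # outer level

# externalized components u_ij = p* u p, etc.
u11, u12, u21, u22 = P1.T@u@P1, P1.T@u@Q1, Q1.T@u@P1, Q1.T@u@Q1
v11, v12, v21, v22 = P1.T@v@P1, P1.T@v@Q1, Q1.T@v@P1, Q1.T@v@Q1

pp, pq2, qp, qq = P2 @ P1, P2 @ Q1, Q2 @ P1, Q2 @ Q1
# the eight-term formula for u (x) u'
tens = (pp@u11@pp.T + qp@u21@pp.T + pp@u12@qp.T + qp@u22@qp.T
        + pq2@v11@pq2.T + qq@v21@pq2.T + pq2@v12@qq.T + qq@v22@qq.T)

# internalization of the 4 x 4 matrix U (x) U', rows/columns ordered pp, pq, qp, qq
E = [pp, pq2, qp, qq]
Z = np.zeros((N, N))
M = [[u11, Z, u12, Z], [Z, v11, Z, v12], [u21, Z, u22, Z], [Z, v21, Z, v22]]
internal = sum(E[i] @ M[i][j] @ E[j].T for i in range(4) for j in range(4))
assert np.allclose(tens, internal)
```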
<br />
It remains to show that if <math>u</math> and <math>u'</math> are in the types <math>A\limp B</math> and <math>A'\limp B'</math> respectively, then <math>u\tens u'</math> is in <math>A\tens A'\limp B\tens B'</math>.<br />
<br />
=== The identity ===<br />
<br />
The interpretation of the identity is another example of the internalization/externalization procedure. Given a type <math>A</math> we are to find an operator <math>\iota</math> in type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. This is indeed the case since <math>u</math> is in <math>A</math> and <math>v</math> in <math>A\orth</math>.<br />
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
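<br />
A finite-dimensional sketch of these computations (our illustration; the particular matrices <math>u</math> and <math>v</math> below are arbitrary choices with <math>uv</math> nilpotent):<br />

```python
import numpy as np

N = 3
P = np.zeros((2 * N, N))
Q = np.zeros((2 * N, N))
for n in range(N):
    P[2 * n, n] = 1.0
    Q[2 * n + 1, n] = 1.0

iota = P @ Q.T + Q @ P.T  # iota = pq* + qp*

u = np.array([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]])  # nilpotent shift
v = np.eye(3)                                             # so uv is nilpotent
x = iota @ (P @ u @ P.T + Q @ v @ Q.T)
assert np.allclose(x, Q @ u @ P.T + P @ v @ Q.T)  # iota exchanges the two copies
assert np.allclose(x @ x, Q @ u @ v @ Q.T + P @ v @ u @ P.T)
assert not np.linalg.matrix_power(x, 6).any()     # x is nilpotent since uv is
```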
<br />
= The Geometry of Interaction as an abstract machine =</div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interactionGeometry of interaction2010-04-08T10:41:07Z<p>Laurent Regnier: complements on partial isometries</p>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operators algebra: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math><ref>to be precise one should say from ''the space interpreting'' <math>A</math> to ''the space interpreting'' <math>B</math></ref>, and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'' that bears some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (Multiplicative and Exponential fragment) which is enough to encode lambda-calculus. Since then Girard proposed several improvements: firstly the extension to the additive connectives known as ''Geometry of Interaction 3'' and more recently a complete reformulation using von Neumann algebras that allows one to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata that interact one with the other through their common interface. Also the execution formula has rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentioned in [[coherent semantics]] under the name ''symmetric reducibility''. First set a general space called the ''proof space'' because this is where the interpretations of proofs will live. In the case of GoI, the proof space is the space of bounded operators on <math>\ell^2</math>. Note that the proof space generally contains many more objects than interpretations of proofs; in the GoI case we will see that interpretations of proofs happen to be some very peculiar kind of partial isometries.<br />
<br />
Second define a duality on this space that will be denoted as <math>u\perp v</math>. For the GoI, two dualities have proved to work, the first one being nilpotency: two operators <math>u</math> and <math>v</math> are dual if <math>uv</math> is nilpotent, that is, if there is a nonnegative integer <math>n</math> such that <math>(uv)^n = 0</math>. We will denote by <math>\bot</math> the set of nilpotent operators so that the duality reads:<br />
: <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
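<br />
For matrices the nilpotency test is effective: a <math>k\times k</math> matrix <math>m</math> is nilpotent iff <math>m^k = 0</math>. A small sketch (our illustration, with arbitrarily chosen matrices):<br />

```python
import numpy as np

def nilpotent(m):
    """A k x k matrix is nilpotent iff its k-th power vanishes."""
    return not np.linalg.matrix_power(m, len(m)).any()

u = np.array([[0., 1.], [0., 0.]])
v = np.array([[0., 3.], [0., 0.]])
w = np.array([[0., 0.], [1., 0.]])
assert nilpotent(u @ v)      # u perp v: their product is nilpotent
assert not nilpotent(u @ w)  # u and w are not dual
```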
<br />
This duality applies to operators and shouldn't be confused with orthogonality of vectors. To enforce this distinction we will reserve the notation <math>\perp</math> exclusively for the duality of operators and never use it for orthogonality of vectors.<br />
<br />
Last define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for every operator <math>v</math>, if <math>v\in T\orth</math> (that is, if <math>u'v\in\bot</math> for all <math>u'\in T</math>) then <math>uv\in\bot</math>.<br />
<br />
It remains now to interpret the logical operations, that is, to associate a type to each formula and an object to each proof, and to show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
We begin with a brief tour of the operations on Hilbert spaces that will be used in the sequel. In this article <math>H</math> will stand for the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> we denote by <math>\langle x,y\rangle</math> their scalar product:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''orthogonal'' if their scalar product is null. This notion is not to be confused with the orthogonality of operators defined above. The ''norm'' of a vector is the square root of the scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
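For readers experimenting numerically, here is a small sketch of these two definitions (an illustration, not part of the article; note that numpy's `vdot` conjugates its ''first'' argument, while the formula above conjugates the second, so the arguments swap):

```python
import numpy as np

x = np.array([1 + 1j, 2 + 0j, 0.5j])
y = np.array([1 + 0j, 1j, -2 + 0j])

# the article's <x, y>: conjugate the second argument
inner = np.sum(x * np.conj(y))
assert np.isclose(inner, np.vdot(y, x))  # np.vdot conjugates its first argument

# ||x|| = sqrt(<x, x>)
norm = np.sqrt(np.sum(x * np.conj(x)).real)
assert np.isclose(norm, np.linalg.norm(x))
```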
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical orthonormal basis of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kronecker symbol. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
In this article we call an ''operator'' on <math>H</math> a ''continuous'' linear map from <math>H</math> to <math>H</math>. Continuity is equivalent to the operator being ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup of the norms of its values over the unit ball:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.<br />
<br />
The set of (bounded) operators is denoted <math>\mathcal{B}(H)</math>. This is our proof space.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are annihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel:<br />
<br />
: <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
: <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
: <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>0</math> (the projector on the null subspace) or <math>1</math>, that is an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is self-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^* u = u</math>; as a consequence <math>uu^*</math> is a projector the range of which is the range of <math>u</math>. Similarly <math>u^* u</math> is also a projector the range of which is the domain of <math>u</math>. The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math>, that is if <math>u^* u = 1</math>, we say that <math>u</math> has ''full domain'', and similarly for the codomain. If <math>u</math> and <math>v</math> are two partial isometries, the equation <math>uu^* + vv^* = 1</math> means that the codomains of <math>u</math> and <math>v</math> are orthogonal and that their direct sum is <math>H</math>.<br />
<br />
=== Partial permutations and partial isometries ===<br />
<br />
It turns out that most of the operators needed to interpret logical operations are generated by ''partial permutations'' on the basis, which in particular entails that they are partial isometries.<br />
<br />
More precisely a partial permutation <math>\varphi</math> on <math>\mathbb{N}</math> is a function defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> which is one-to-one onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
: <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
: if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
: the codomain of <math>\varphi\circ\psi</math> is the image of its domain under <math>\varphi\circ\psi</math>.<br />
<br />
Partial permutations are well known to form an ''inverse monoid'', a structure that we detail now.<br />
<br />
A ''partial identity'' is a partial permutation <math>1_D</math> whose domain and codomain are both equal to a subset <math>D</math>, on which <math>1_D</math> is the identity function. Partial identities are idempotent for composition.<br />
<br />
Among partial identities one finds the identity on the empty subset, that is the empty map, which we will denote by <math>0</math>, and the identity on <math>\mathbb{N}</math>, which we will denote by <math>1</math>. The latter is the neutral element for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and which satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
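The inverse monoid structure is concrete enough to code directly. The following sketch (an illustration only) represents a partial permutation as a Python dict whose keys form the domain and whose values form the codomain, and checks the two equations above:

```python
def compose(phi, psi):
    """phi o psi for partial permutations stored as dicts
    (keys = domain, values = codomain)."""
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    return {v: k for k, v in phi.items()}

phi = {0: 2, 1: 3}   # domain {0, 1}, codomain {2, 3}
psi = {2: 0, 5: 1}   # domain {2, 5}, codomain {0, 1}

assert compose(phi, psi) == {2: 2, 5: 3}
# phi^-1 o phi = 1_{D_phi} and phi o phi^-1 = 1_{C_phi}
assert compose(inverse(phi), phi) == {0: 0, 1: 1}
assert compose(phi, inverse(phi)) == {2: 2, 3: 3}
```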
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other words, if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. As a particular case, if <math>\varphi</math> is <math>1_D</math>, the partial identity on <math>D</math>, then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.<br />
<br />
If <math>\psi</math> is another partial permutation then we have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
If <math>\varphi</math> is a partial permutation then the adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the projector on the domain of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>,<br />
<br />
and similarly the projector on the codomain of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u_\varphi^* = u_{1_{C_\varphi}}</math>.<br />
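All four identities can be checked on finite truncations of <math>u_\varphi</math>. In the sketch below (an illustration, not part of the article: we cut <math>\ell^2</math> down to <math>\mathbb{C}^6</math> and keep partial permutations with support below the cut-off), column <math>n</math> of the matrix is <math>e_{\varphi(n)}</math> when defined and zero otherwise:

```python
import numpy as np

def u(phi, dim):
    """Finite truncation of u_phi: column n is e_{phi(n)} for n in the
    domain of phi, and the zero vector otherwise."""
    m = np.zeros((dim, dim))
    for n, k in phi.items():
        m[k, n] = 1.0
    return m

phi = {0: 2, 1: 3}
psi = {2: 0, 5: 1}
comp = {n: phi[psi[n]] for n in psi if psi[n] in phi}  # phi o psi
inv = {v: k for k, v in phi.items()}                   # phi^-1
d = 6

assert np.array_equal(u(phi, d) @ u(psi, d), u(comp, d))  # u_phi u_psi = u_{phi o psi}
assert np.array_equal(u(phi, d).T, u(inv, d))             # adjoint = inverse permutation
# projector on the domain: u*_phi u_phi = u_{1_{D_phi}}
assert np.array_equal(u(phi, d).T @ u(phi, d), u({0: 0, 1: 1}, d))
# projector on the codomain: u_phi u*_phi = u_{1_{C_phi}}
assert np.array_equal(u(phi, d) @ u(phi, d).T, u({2: 2, 3: 3}, d))
```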
<br />
Partial isometries generated by partial permutations have a property that we will use widely: if <math>u_\varphi</math> and <math>u_\psi</math> are two such operators then we have <math>u_\varphi + u_\psi = 0</math> iff <math>u_\varphi = u_\psi = 0</math>, that is iff <math>\varphi</math> and <math>\psi</math> are the nowhere-defined partial permutation. Indeed suppose <math>u_\varphi + u_\psi = 0</math>; then for any <math>n</math> we have <math>u_\varphi(e_n) + u_\psi(e_n) = e_{\varphi(n)} + e_{\psi(n)} = 0</math>, which is possible only if both <math>\varphi(n)</math> and <math>\psi(n)</math> are undefined.<br />
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are operators we denote by <math>u\perp v</math> the fact that <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators so that <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
If <math>X</math> is a set of operators, recall also that <math>X\orth</math> denotes the set of dual operators:<br />
: <math>X\orth = \{v\in \mathcal{B}(H) \text{ such that }\forall u\in X, uv \in\bot\}</math>.<br />
<br />
There are a few properties of this duality that we will use without mention in the sequel; let <math>X</math> and <math>Y</math> be sets of operators:<br />
: <math>X\subset X\biorth</math>;<br />
: if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;<br />
: <math>X\orth = X\triorth</math>.<br />
<br />
In particular <math>X\orth</math> is always a type (equal to its biorthogonal). We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
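These closure properties are easy to observe on a toy model. The sketch below (purely illustrative: the real proof space <math>\mathcal{B}(H)</math> is infinite, here we take a four-element "proof space" of <math>2\times 2</math> matrices) computes orthogonals by brute force and checks <math>X\subset X\biorth</math> and <math>X\orth = X\triorth</math>:

```python
import numpy as np

def nilpotent(m):
    return np.allclose(np.linalg.matrix_power(m, m.shape[0]), 0)

# A deliberately tiny stand-in for the proof space: four 2x2 matrices.
z  = np.zeros((2, 2))
n1 = np.array([[0., 1.], [0., 0.]])
n2 = np.array([[0., 0.], [1., 0.]])
space = [z, n1, n2, np.eye(2)]

def orth(xs):
    """X-orth: operators whose product with every element of xs is nilpotent."""
    return [v for v in space if all(nilpotent(u @ v) for u in xs)]

def subset(xs, ys):
    return all(any(np.array_equal(x, y) for y in ys) for x in xs)

X = [n1]
assert subset(X, orth(orth(X)))      # X is contained in its bidual
Y = orth(X)                          # Y = X-orth is itself a type:
assert subset(Y, orth(orth(Y))) and subset(orth(orth(Y)), Y)
```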
<br />
=== The tensor and the linear implication ===<br />
<br />
Our first step is, given two types <math>A</math> and <math>B</math>, to construct the type <math>A\tens B</math>. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\mapsto p(x)+q(y)</math> where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are orthogonal, thus we have <math>p^* q = q^* p = 0</math>. We also have <math>pp^* + qq^* = 1</math> although this property is not needed in the sequel.<br />
<br />
Note that the choice of <math>p</math> and <math>q</math> is actually arbitrary, any two partial isometries with full domain and orthogonal codomains would do the job.<br />
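The relations satisfied by <math>p</math> and <math>q</math> can be probed on a finite truncation (an illustration only; note the caveat spelled out in the last assertion: full domain, <math>p^*p = q^*q = 1</math>, genuinely requires infinite dimension, since doubling an index eventually leaves any finite cut-off):

```python
import numpy as np

N = 8   # truncate H = l^2 to C^8
p = np.zeros((N, N)); q = np.zeros((N, N))
for n in range(N // 2):
    p[2 * n, n] = 1.0       # p: e_n -> e_{2n}
    q[2 * n + 1, n] = 1.0   # q: e_n -> e_{2n+1}

# codomains are orthogonal (even vs odd basis vectors): p*q = q*p = 0
assert np.allclose(p.T @ q, 0) and np.allclose(q.T @ p, 0)
# pp* + qq* = 1: the two codomains together fill the whole space
assert np.allclose(p @ p.T + q @ q.T, np.eye(N))
# caveat of the truncation: here p*p only projects on the first half
# of the basis; p*p = 1 (full domain) holds in infinite dimension
assert np.allclose(p.T @ p, np.diag([1.0] * (N // 2) + [0.0] * (N // 2)))
```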
<br />
Let <math>U</math> be an operator on <math>H\oplus H</math>. We can write <math>U</math> as a matrix:<br />
: <math>U = \begin{pmatrix}<br />
U_{11} & U_{12}\\<br />
U_{21} & U_{22}<br />
\end{pmatrix}</math><br />
where each <math>U_{ij}</math> operates on <math>H</math>.<br />
<br />
Now through the isomorphism <math>H\oplus H\cong H</math> we may transform <math>U</math> into the operator <math>u</math> on <math>H</math> defined by:<br />
<br />
: <math>u = pU_{11}p^* + pU_{12}q^* + qU_{21}p^* + qU_{22}q^*</math>.<br />
<br />
We call <math>u</math> the ''internalization'' of <math>U</math>. Conversely given an operator <math>u</math> on <math>H</math> we may externalize it obtaining an operator <math>U</math> on <math>H\oplus H</math>:<br />
: <math>U_{11} = p^*up</math>;<br />
: <math>U_{12} = p^*uq</math>;<br />
: <math>U_{21} = q^*up</math>;<br />
: <math>U_{22} = q^*uq</math>.<br />
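Internalization and externalization are mutually inverse, which can be tested numerically. In the finite truncation below (illustrative only) the recovery of the blocks is exact on the half of the basis where the truncated <math>p</math> and <math>q</math> are defined; in infinite dimension it is exact everywhere:

```python
import numpy as np

N = 8
p = np.zeros((N, N)); q = np.zeros((N, N))
for n in range(N // 2):
    p[2 * n, n] = 1.0       # p: e_n -> e_{2n}
    q[2 * n + 1, n] = 1.0   # q: e_n -> e_{2n+1}

rng = np.random.default_rng(0)
U11, U12, U21, U22 = (rng.normal(size=(N, N)) for _ in range(4))

# internalization: u = p U11 p* + p U12 q* + q U21 p* + q U22 q*
u = p @ U11 @ p.T + p @ U12 @ q.T + q @ U21 @ p.T + q @ U22 @ q.T

# externalization recovers each block on the covered part of the basis
half = slice(0, N // 2)
assert np.allclose((p.T @ u @ p)[half, half], U11[half, half])
assert np.allclose((p.T @ u @ q)[half, half], U12[half, half])
assert np.allclose((q.T @ u @ p)[half, half], U21[half, half])
assert np.allclose((q.T @ u @ q)[half, half], U22[half, half])
```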
<br />
Given <math>A</math> and <math>B</math> two types, we define their tensor by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by biorthogonal to make sure that we obtain a type. From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
As with orthogonality we use here the notation <math>\tens</math> in a specific sense: the tensor of two types should not be confused with the tensor of vectors or the tensor of spaces.<br />
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we see that we have:<br />
: <math>A\limp B = \{u\in\mathcal{B}(H)\text{ such that } \forall v\in A, \forall w\in B\orth,\, u(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
=== The identity ===<br />
<br />
As an example of the internalization/externalization procedure, let us give the (interpretation of the) identity. Given a type <math>A</math> we must find an operator <math>\iota</math> in the type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. This is the case since <math>u</math> is in <math>A</math> and <math>v</math> is in <math>A\orth</math>.<br />
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
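The computation above can be replayed on the finite truncation (illustrative only; the toy dual pair <math>u, v</math> below is chosen with support in the covered half of the basis so that the truncated <math>p, q</math> behave like the real ones on it):

```python
import numpy as np

N = 8
p = np.zeros((N, N)); q = np.zeros((N, N))
for n in range(N // 2):
    p[2 * n, n] = 1.0       # p: e_n -> e_{2n}
    q[2 * n + 1, n] = 1.0   # q: e_n -> e_{2n+1}

iota = p @ q.T + q @ p.T    # candidate identity at type A -o A

def nilpotent(m):
    return np.allclose(np.linalg.matrix_power(m, m.shape[0]), 0)

# a toy dual pair: uv is nilpotent (u: e_0 -> e_1, v: e_2 -> e_0)
u = np.zeros((N, N)); u[1, 0] = 1.0
v = np.zeros((N, N)); v[0, 2] = 1.0
assert nilpotent(u @ v)

w = p @ u @ p.T + q @ v @ q.T
# iota applied to w gives qup* + pvq*, and it is indeed nilpotent
assert np.allclose(iota @ w, q @ u @ p.T + p @ v @ q.T)
assert nilpotent(iota @ w)
```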
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
<br />
= The Geometry of Interaction as an abstract machine =</div>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operators algebra: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math><ref>to be precise one should say from ''the space interpreting'' <math>A</math> to the space interpreting'' <math>B</math></ref>, and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math> to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'' that bares some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditionnal [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (Multiplicative and Exponential fragment) which is enough to encode lambda-calculus. Since then Girard proposed several improvements: firstly the extension to the additive connectives known as ''Geometry of Interaction 3'' and more recently a complete reformulation using Von Neumann algebras that allows to deal with some aspects of [[Light linear logics|implicit complexity]]<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata that interact one with the other through their common interface. Also the execution formula has rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentionned in [[coherent semantics]] under the name ''symmetric reducibility''. First set a general space called the ''proof space'' because this is where the interpretations of proofs will live. In the case of GoI, the proof space is the space of bounded operators on <math>\ell^2</math>. Note that the proof space generally contains much more objects than interpretations of proofs; in the GoI case we will see that interpretations of proofs happen to be some very peculiar kind of partial isometries.<br />
<br />
Second define a duality on this space that will be denoted as <math>u\perp v</math>. For the GoI, two dualities have proved to work, the first one being nilpotency: two operators <math>u</math> and <math>v</math> are dual if <math>uv</math> is nilpotent, that is, if there is a nonegative integer <math>n</math> such that <math>(uv)^n = 0</math>. We will denote by <math>\bot</math> the set of nilpotent operators so that the duality reads:<br />
: <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
This duality applies to operators and shouldn't be confused with orthogonality of vectors. In this article we will use the notation <math>\perp</math> exclusively for the duality of operators.<br />
<br />
Last define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for all operator <math>v</math>, if <math>v\in T\orth</math>, that is if <math>u'v\in\bot</math> for all <math>u'\in T</math>, then <math>uv\in\bot</math>.<br />
<br />
It remains now to interpret logical operations, that is associate a type to each formula, an object to each proof and show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
We begin by a brief tour of the operations in Hilbert spaces that will be used in the sequel. In this article <math>H</math> will stand for the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> we denote by <math>\langle x,y\rangle</math> their scalar product:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''othogonal'' if their scalar product is nul. This notion is not to be confused with the orthogonality of operators defined above. The ''norm'' of a vector is the square root of the scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical hilbertian basis of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kroenecker symbol. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
In this article we call ''operator'' on <math>H</math> a ''continuous'' linear map from <math>H</math> to <math>H</math>. Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.<br />
<br />
The set of (bounded) operators is denoted <math>\mathcal{B}(H)</math>. This is our proof space.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are anihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel:<br />
<br />
: <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
: <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
: <math>\mathrm{Dom}(u) = \mathrm{Ker}(u)\orth = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>1</math>, that is an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 1</math>. A projector is auto-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^* u = u</math>; as a consequence <math>uu^*</math> is a projector the range of which is the range of <math>u</math>. Similarly <math>u^* u</math> is also a projector the range of which is the domain of <math>u</math>. The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math> that is if <math>u^* u = 1</math> we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries, the equation <math>uu^* + vv^* = 1</math> means that the codomains of <math>u</math> and <math>v</math> are orthogonal and that their direct sum is <math>H</math>.<br />
<br />
=== Partial permutations and partial isometries ===<br />
<br />
It turns out that most of the operators needed to interpret logical operations are generated by ''partial permutations'' on the basis, which in particular entails that they are partial isometries.<br />
<br />
More precisely a partial permutation <math>\varphi</math> on <math>\mathbb{N}</math> is a function defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> which is one-to-one onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
: <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
: if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
: the codomain of <math>\varphi\circ\psi</math> is the image of the domain.<br />
<br />
Partial permutations are well known to form a structure of ''inverse monoid'' that we detail now.<br />
<br />
A ''partial identitie'' is a partial permutation <math>1_D</math> whose domain and codomain are both equal to a subset <math>D</math> on which <math>1_D</math> is the identity function. Among partial identities one finds the identity on the empty subset, that is the empty map, that we will denote as <math>0</math> and the identity on <math>\mathbb{N}</math> that we will denote by <math>1</math>. This latter permutation is the neutral for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and who satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other terms if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spaned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spaned by <math>(e_n)_{n\in C_\varphi}</math>. As a particular case if <math>\varphi</math> is <math>1_D</math> the partial identity on <math>D</math> then <math>u_\varphi</math> is the projector on the subspace spaned by <math>(e_n)_{n\in D}</math>.<br />
<br />
If <math>\psi</math> is another partial permutation then we have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
If <math>\varphi</math> is a partial permutation then the adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the projector on the domain of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>.<br />
<br />
and similarly the projector on the codomain of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u_\varphi^* = u_{1_{C_\varphi}}</math>.<br />
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are operators we denote by <math>u\perp v</math> the fact that <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators so that <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
If <math>X</math> is set of operators also recall that <math>X\orth</math> denotes the set of dual operators:<br />
: <math>X\orth = \{v\in \mathcal{B}(H) \text{ such that }\forall u\in X, uv \in\bot\}</math>.<br />
<br />
There are a few properties of this duality that we will use without mention in the sequel; let <math>X</math> and <math>Y</math> be sets of operators:<br />
: <math>X\subset X\biorth</math>;<br />
: <math>X\orth = X\triorth</math>.<br />
: if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;<br />
<br />
In particular <math>X\orth</math> is always a type (equal to its biorthogonal). We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
<br />
=== The tensor and the linear application ===<br />
<br />
Our first step is, given two types <math>A</math> and <math>B</math>, to construct the type <math>A\tens B</math>. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\rightsquigarrow p(x)+q(y)</math> where <math>p:H\mapsto H</math> and <math>q:H\mapsto H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are orthogonal, thus we have <math>p^* q = q^* p = 0</math>. We also have <math>pp^* + qq^* = 1</math> although this property is not needed in the sequel.<br />
<br />
Note that the choice of <math>p</math> and <math>q</math> is actually arbitrary, any two partial isometries with full domain and orthogonal codomains would do the job.<br />
<br />
Let <math>U</math> be an operator on <math>H\oplus H</math>. We can write <math>U</math> as a matrix:<br />
: <math>U = \begin{pmatrix}<br />
U_{11} & U_{12}\\<br />
U_{21} & U_{22}<br />
\end{pmatrix}</math><br />
where each <math>U_{ij}</math> operates on <math>H</math>.<br />
<br />
Now through the isomorphism <math>H\oplus H\cong H</math> we may transform <math>U</math> into the operator <math>u</math> on <math>H</math> defined by:<br />
<br />
: <math>u = pU_{11}p^* + pU_{12}q^* + qU_{21}p^* + qU_{22}q^*</math>.<br />
<br />
We call <math>u</math> the ''internalization'' of <math>U</math>. Conversely given an operator <math>u</math> on <math>H</math> we may externalize it obtaining an operator <math>U</math> on <math>H\oplus H</math>:<br />
: <math>U_{11} = p^*up</math>;<br />
: <math>U_{12} = p^*uq</math>;<br />
: <math>U_{21} = q^*up</math>;<br />
: <math>U_{22} = q^*uq</math>.<br />
<br />
Given <math>A</math> and <math>B</math> two types, we define their tensor by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by biorthogonal to make sure that we obtain a type. From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
As with orthogonality we use here the notation <math>\tens</math> in a specific sense: the tensor of two types should not be confused with the tensor of vectors or the tensor of spaces.<br />
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we see that we have:<br />
: <math>A\limp B = \{u\in\mathcal{B}(H)\text{ such that } \forall v\in A, \forall w\in B\orth,\, u(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
=== The idendity ===<br />
<br />
As an example of the internalization/externalization procedure, let us give the example of the (interpretation of the) identity. Given a type <math>A</math> we are to find an operator <math>\iota</math> in type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. It is the case since <math>u</math> is in <math>A</math> and <math>v</math> in <math>A\orth</math>.<br />
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
<br />
= The Geometry of Interaction as an abstract machine =</div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interactionGeometry of interaction2010-03-31T08:12:21Z<p>Laurent Regnier: /* The Geometry of Interaction as operators */ notation \bot for the set of nilpotent operators, definition of the identity</p>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operators algebra: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' (the space interpreting) <math>A</math> ''to'' (the space interpreting) <math>B</math> and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' (the space interpreting) <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math> to construct a new operator on <math>A\limp C</math>. This problem was originally expressed as a feedback equation solved by the ''execution formula''. The execution formula has some formal analogies with Kleene's formula for recursive functions, which allowed to claim that GoI was an ''operational semantics'', as opposed to traditionnal [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (the multiplicative and exponential fragment), which is enough to encode lambda-calculus. Since then Girard has proposed several improvements: firstly the extension to the additive connectives, known as ''Geometry of Interaction 3'', and more recently a complete reformulation using von Neumann algebras that allows one to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model, exhibiting its combinatorial nature through a theory of reduction of paths in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata interacting with each other through their common interface. Also the execution formula was rapidly understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories, following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentioned in [[coherent semantics]] under the name ''symmetric reducibility''. First, fix a general space called the ''proof space'', because this is where the interpretations of proofs will live. In the case of GoI, the proof space is the space of bounded operators on <math>\ell^2</math>. Note that the proof space generally contains many more objects than interpretations of proofs; in the GoI case we will see that interpretations of proofs happen to be a very peculiar kind of partial isometries.<br />
<br />
Second, define a duality on this space, denoted by <math>u\perp v</math>. For the GoI, two dualities have proved to work, the first one being nilpotency: two operators <math>u</math> and <math>v</math> are dual if <math>uv</math> is nilpotent, that is, if there is a nonnegative integer <math>n</math> such that <math>(uv)^n = 0</math>. We will denote by <math>\bot</math> the set of nilpotent operators, so that the duality reads:<br />
: <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
This duality applies to operators and shouldn't be confused with orthogonality of vectors. In this article we will use the notation <math>\perp</math> exclusively for the duality of operators.<br />
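As a concrete (and entirely illustrative) sanity check of this duality, one can test nilpotency on finite matrix truncations of operators; the code below is our own sketch in Python, not part of the original construction:<br />

```python
# Illustration (ours): finite matrices stand in for operators on l^2.

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_nilpotent(m):
    """In dimension n, m is nilpotent iff m^n = 0."""
    n = len(m)
    p = m
    for _ in range(n):
        if all(x == 0 for row in p for x in row):
            return True
        p = mat_mul(p, m)
    return all(x == 0 for row in p for x in row)

# u and v are dual: uv is strictly upper triangular, hence nilpotent.
u = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
v = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]
assert is_nilpotent(mat_mul(u, v))   # u is dual to v
assert not is_nilpotent(v)           # v itself is not nilpotent
```

Note that dual operators need not themselves be nilpotent: only their product must be.<br />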
<br />
Last, define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for every operator <math>v</math>, if <math>v\in T\orth</math>, that is if <math>u'v\in\bot</math> for all <math>u'\in T</math>, then <math>uv\in\bot</math>.<br />
<br />
It remains now to interpret logical operations, that is, to associate a type to each formula and an operator to each proof, and to show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
We begin with a brief tour of the operations on Hilbert spaces that will be used in the sequel. In this article <math>H</math> will stand for the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> we denote by <math>\langle x,y\rangle</math> their scalar product:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''orthogonal'' if their scalar product is zero. This notion is not to be confused with the orthogonality of operators defined above. The ''norm'' of a vector is the square root of its scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical Hilbert basis of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kronecker symbol. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
In this article we call ''operator'' on <math>H</math> a ''continuous'' linear map from <math>H</math> to <math>H</math>. Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.<br />
<br />
The set of (bounded) operators is denoted <math>\mathcal{B}(H)</math>. This is our proof space.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors annihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel:<br />
<br />
: <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
: <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
: <math>\mathrm{Dom}(u) = \mathrm{Ker}(u)\orth = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>1</math>, that is an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 1</math>. A projector is self-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^* u = u</math>; as a consequence <math>uu^*</math> is a projector the range of which is the range of <math>u</math>. Similarly <math>u^* u</math> is also a projector the range of which is the domain of <math>u</math>. The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math> that is if <math>u^* u = 1</math> we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries, the equation <math>uu^* + vv^* = 1</math> means that the codomains of <math>u</math> and <math>v</math> are orthogonal and that their direct sum is <math>H</math>.<br />
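These identities are easy to check on finite truncations. The following sketch is ours (a <math>3\times 3</math> truncation of the shift stands in for a partial isometry); it verifies <math>uu^*u = u</math> and computes the two projectors:<br />

```python
# Illustration (ours): the truncated shift u e_0 = e_1, u e_1 = e_2,
# u e_2 = 0 is a partial isometry; u*u and uu* are projectors.

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def adjoint(a):  # adjoint = transpose for real matrices
    n = len(a)
    return [[a[j][i] for j in range(n)] for i in range(n)]

u = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]   # column j is u(e_j)
u_star = adjoint(u)

assert mat_mul(mat_mul(u, u_star), u) == u        # u u* u = u
dom = mat_mul(u_star, u)   # projector on the domain span(e_0, e_1)
cod = mat_mul(u, u_star)   # projector on the codomain span(e_1, e_2)
assert dom == [[1, 0, 0], [0, 1, 0], [0, 0, 0]]
assert cod == [[0, 0, 0], [0, 1, 0], [0, 0, 1]]
```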
<br />
=== Partial permutations and partial isometries ===<br />
<br />
It turns out that most of the operators needed to interpret logical operations are generated by ''partial permutations'' on the basis, which in particular entails that they are partial isometries.<br />
<br />
More precisely a partial permutation <math>\varphi</math> on <math>\mathbb{N}</math> is a function defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> which is one-to-one onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
: <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
: if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
: the codomain of <math>\varphi\circ\psi</math> is the image of its domain under <math>\varphi\circ\psi</math>.<br />
<br />
Partial permutations are well known to form an ''inverse monoid'', a structure that we now detail.<br />
<br />
A ''partial identity'' is a partial permutation <math>1_D</math> whose domain and codomain are both equal to a subset <math>D</math>, on which <math>1_D</math> is the identity function. Among partial identities one finds the identity on the empty subset, that is the empty map, which we will denote by <math>0</math>, and the identity on <math>\mathbb{N}</math>, which we will denote by <math>1</math>. This latter partial permutation is the neutral element for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and which satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
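This inverse monoid structure can be sketched directly, representing a partial permutation as a Python dict mapping each element of its domain to its image (an illustration of ours, not part of the article's formalism):<br />

```python
# Illustration (ours): partial permutations as dicts {n: phi(n)}.

def compose(phi, psi):
    """phi o psi: defined where psi is defined and psi(n) lies in dom(phi)."""
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    return {v: k for k, v in phi.items()}

def partial_identity(domain):
    return {n: n for n in domain}

phi = {0: 3, 1: 4, 2: 5}   # domain {0,1,2}, codomain {3,4,5}
assert compose(inverse(phi), phi) == partial_identity({0, 1, 2})  # 1_{D_phi}
assert compose(phi, inverse(phi)) == partial_identity({3, 4, 5})  # 1_{C_phi}
assert compose(phi, partial_identity(range(6))) == phi            # 1 is neutral
```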
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:<br />
: <math>u_\varphi(e_n) = <br />
\begin{cases}<br />
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\<br />
0 & \text{ otherwise.}<br />
\end{cases}<br />
</math><br />
In other terms if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. As a particular case, if <math>\varphi</math> is <math>1_D</math>, the partial identity on <math>D</math>, then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.<br />
<br />
If <math>\psi</math> is another partial permutation then we have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
If <math>\varphi</math> is a partial permutation then the adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the projector on the domain of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>.<br />
<br />
and similarly the projector on the codomain of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u_\varphi^* = u_{1_{C_\varphi}}</math>.<br />
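On finite truncations these equations can be checked mechanically. In the sketch below (ours; indices are truncated to <math>0,\dots,5</math>) a partial permutation is a dict and <math>u_\varphi</math> is the matrix whose column <math>j</math> is <math>e_{\varphi(j)}</math>:<br />

```python
# Illustration (ours): u_phi as a matrix; u_phi u_psi = u_{phi o psi}
# and the adjoint of u_phi is u_{phi^{-1}}.

N = 6  # truncation: phi and psi below only use indices 0..5

def matrix_of(phi):
    """Column j of u_phi is e_{phi(j)} if j is in dom(phi), 0 otherwise."""
    return [[1 if j in phi and phi[j] == i else 0 for j in range(N)]
            for i in range(N)]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def adjoint(a):  # adjoint = transpose for these 0/1 matrices
    return [[a[j][i] for j in range(N)] for i in range(N)]

phi = {0: 2, 1: 3}
psi = {2: 4, 3: 5, 0: 1}
assert mat_mul(matrix_of(phi), matrix_of(psi)) == matrix_of(compose(phi, psi))
assert adjoint(matrix_of(phi)) == matrix_of({v: k for k, v in phi.items()})
```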
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are operators we denote by <math>u\perp v</math> the fact that <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators so that <math>u\perp v</math> iff <math>uv\in\bot</math>.<br />
<br />
If <math>X</math> is a set of operators, recall also that <math>X\orth</math> denotes the set of dual operators:<br />
: <math>X\orth = \{v\in \mathcal{B}(H) \text{ such that }\forall u\in X, uv \in\bot\}</math>.<br />
<br />
There are a few properties of this duality that we will use without mention in the sequel; let <math>X</math> and <math>Y</math> be sets of operators:<br />
: <math>X\subset X\biorth</math>;<br />
: if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;<br />
: <math>X\orth = X\triorth</math>.<br />
<br />
In particular <math>X\orth</math> is always a type (equal to its biorthogonal). We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
<br />
=== The tensor and the linear implication ===<br />
<br />
Our first step is, given two types <math>A</math> and <math>B</math>, to construct the type <math>A\tens B</math>. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\rightsquigarrow p(x)+q(y)</math> where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are orthogonal, thus we have <math>p^* q = q^* p = 0</math>. We also have <math>pp^* + qq^* = 1</math> although this property is not needed in the sequel.<br />
<br />
Note that the choice of <math>p</math> and <math>q</math> is actually arbitrary: any two partial isometries with full domain and orthogonal codomains would do the job.<br />
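At the level of basis indices, <math>p</math> and <math>q</math> are just the partial permutations <math>n\mapsto 2n</math> and <math>n\mapsto 2n+1</math>, so the relations above can be read off combinatorially. A small sketch of ours, with indices truncated to a finite range:<br />

```python
# Illustration (ours): p and q as index maps n -> 2n and n -> 2n + 1.

N = 8
p = {n: 2 * n for n in range(N)}        # codomain: even indices
q = {n: 2 * n + 1 for n in range(N)}    # codomain: odd indices

def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    return {v: k for k, v in phi.items()}

identity = {n: n for n in range(N)}
assert compose(inverse(p), p) == identity   # p* p = 1 (full domain)
assert compose(inverse(q), q) == identity   # q* q = 1
assert compose(inverse(p), q) == {}         # p* q = 0 (orthogonal codomains)
assert compose(inverse(q), p) == {}         # q* p = 0
```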
<br />
Let <math>U</math> be an operator on <math>H\oplus H</math>. We can write <math>U</math> as a matrix:<br />
: <math>U = \begin{pmatrix}<br />
U_{11} & U_{12}\\<br />
U_{21} & U_{22}<br />
\end{pmatrix}</math><br />
where each <math>U_{ij}</math> operates on <math>H</math>.<br />
<br />
Now through the isomorphism <math>H\oplus H\cong H</math> we may transform <math>U</math> into the operator <math>u</math> on <math>H</math> defined by:<br />
<br />
: <math>u = pU_{11}p^* + pU_{12}q^* + qU_{21}p^* + qU_{22}q^*</math>.<br />
<br />
We call <math>u</math> the ''internalization'' of <math>U</math>. Conversely, given an operator <math>u</math> on <math>H</math>, we may externalize it, obtaining an operator <math>U</math> on <math>H\oplus H</math>:<br />
: <math>U_{11} = p^*up</math>;<br />
: <math>U_{12} = p^*uq</math>;<br />
: <math>U_{21} = q^*up</math>;<br />
: <math>U_{22} = q^*uq</math>.<br />
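For operators given by finite matrix truncations, internalization amounts to interleaving the four blocks on even/odd indices, and externalization recovers them. The following round-trip check is our own illustration (with <math>N\times N</math> blocks):<br />

```python
# Illustration (ours): internalization of a 2x2 block operator on H + H
# as a single matrix on H, via p: n -> 2n and q: n -> 2n + 1.

N = 3  # each block is N x N; the internalized matrix is 2N x 2N

def internalize(U11, U12, U21, U22):
    u = [[0] * (2 * N) for _ in range(2 * N)]
    for i in range(N):
        for j in range(N):
            u[2 * i][2 * j] = U11[i][j]          # p U11 p*
            u[2 * i][2 * j + 1] = U12[i][j]      # p U12 q*
            u[2 * i + 1][2 * j] = U21[i][j]      # q U21 p*
            u[2 * i + 1][2 * j + 1] = U22[i][j]  # q U22 q*
    return u

def externalize(u):
    """Recover [U11, U12, U21, U22] as p*up, p*uq, q*up, q*uq."""
    return [[[u[2 * i + r][2 * j + c] for j in range(N)] for i in range(N)]
            for (r, c) in [(0, 0), (0, 1), (1, 0), (1, 1)]]

U11 = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
U12 = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
U21 = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]
U22 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert externalize(internalize(U11, U12, U21, U22)) == [U11, U12, U21, U22]
```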
<br />
Given <math>A</math> and <math>B</math> two types, we define their tensor by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by biorthogonal to make sure that we obtain a type. From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
As with orthogonality we use here the notation <math>\tens</math> in a specific sense: the tensor of two types should not be confused with the tensor of vectors or the tensor of spaces.<br />
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we see that we have:<br />
: <math>A\limp B = \{u\in\mathcal{B}(H)\text{ such that } \forall v\in A, \forall w\in B\orth,\, u(pvp^* + qwq^*) \in\bot\}</math>.<br />
<br />
=== The identity ===<br />
<br />
As an example of the internalization/externalization procedure, let us compute the interpretation of the identity. Given a type <math>A</math> we are to find an operator <math>\iota</math> in the type <math>A\limp A</math>, thus satisfying:<br />
: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.<br />
<br />
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. This is indeed the case since <math>u</math> is in <math>A</math> and <math>v</math> in <math>A\orth</math>.<br />
<br />
It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:<br />
: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.<br />
<br />
We will see once the composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.<br />
<br />
= The Geometry of Interaction as an abstract machine =</div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interactionGeometry of interaction2010-03-29T12:43:49Z<p>Laurent Regnier: /* Interpreting the multiplicative connectives */</p>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operators algebra: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' (the space interpreting) <math>A</math> ''to'' (the space interpreting) <math>B</math> and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' (the space interpreting) <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math> to construct a new operator on <math>A\limp C</math>. This problem was originally expressed as a feedback equation solved by the ''execution formula''. The execution formula has some formal analogies with Kleene's formula for recursive functions, which allowed to claim that GoI was an ''operational semantics'', as opposed to traditionnal [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (Multiplicative and Exponential fragment) which is enough to encode lambda-calculus. Since then Girard proposed several improvements: firstly the extension to the additive connectives known as ''Geometry of Interaction 3'' and more recently a complete reformulation using Von Neumann algebras that allows to deal with some aspects of [[Light linear logics|implicit complexity]]<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata that interact one with the other through their common interface. Also the execution formula has rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentionned in [[coherent semantics]] under the name ''symmetric reducibility''. First set a general space in which the interpretations of proofs will live; here, in the case of GoI, the space is the space of bounded operators on <math>\ell^2</math>.<br />
<br />
Second define a suitable duality on this space that will be denoted as <math>u\perp v</math>. For the GoI, two dualities have proved to work, the first one being nilpotency: two operators <math>u</math> and <math>v</math> are dual if <math>uv</math> is nilpotent, that is, if there is an nonegative integer <math>n</math> such that <math>(uv)^n = 0</math>.<br />
<br />
Last define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. In the case of GoI this means that <math>u\in T</math> iff for all operator <math>v</math>, if <math>v\in T\orth</math>, that is if <math>u'v</math> is nilpotent for all <math>u'\in T</math>, then <math>u\perp v</math>, that is <math>uv</math> is nilpotent.<br />
<br />
It remains now to interpret logical operations, that is associate a type to each formula, an object to each proof and show the adequacy lemma, if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
We begin by a brief tour of the operations in Hilbert spaces that will be used in the sequel. In this article <math>H</math> will stand for the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> we denote by <math>\langle x,y\rangle</math> their scalar product:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''othogonal'' if their scalar product is nul. This notion is not to be confused with the orthogonality of operators defined above. The ''norm'' of a vector is the square root of the scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical hilbertian basis of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kroenecker symbol. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
In this article we call ''operator'' on <math>H</math> a ''continuous'' linear map from <math>H</math> to <math>H</math>. The continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.<br />
<br />
The set of (bounded) operators is denoted <math>\mathcal{B}(H)</math>. This is our proof space.<br />
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are anihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel:<br />
<br />
: <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
: <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
: <math>\mathrm{Dom}(u) = \mathrm{Ker}(u)\orth = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
These three sets are closed subspaces of <math>H</math>.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>1</math>, that is an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 1</math>. A projector is auto-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^* u = u</math>; as a consequence <math>uu^*</math> is a projector the range of which is the range of <math>u</math>. Similarly <math>u^* u</math> is also a projector the range of which is the domain of <math>u</math>. The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math> that is if <math>u^* u = 1</math> we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries, the equation <math>uu^* + vv^* = 1</math> means that the codomains of <math>u</math> and <math>v</math> are orthogonal and that their direct sum is <math>H</math>.<br />
<br />
=== Partial permutations and partial isometries ===<br />
<br />
It turns out that most of the operators needed to interpret logical operations are generated by ''partial permutations'' on the basis, which in particular entails that they are partial isometries.<br />
<br />
More precisely a partial permutation <math>\varphi</math> on <math>\mathbb{N}</math> is a function defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> which is one-to-one onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
: <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
: if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
: the codomain of <math>\varphi\circ\psi</math> is the image of the domain.<br />
<br />
Partial permutations are well known to form a structure of ''inverse monoid that we detail now.<br />
<br />
A ''partial identitie'' is a partial permutation <math>1_D</math> whose domain and codomain are both equal to a subset <math>D</math> on which <math>1_D</math> is the identity function. Among partial identities one finds the identity on the empty subset, that is the empty map, that we will denote as <math>0</math> and the identity on <math>\mathbb{N}</math> that we will denote <math>1</math>. This latter permutation is the neutral for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and who satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by <math>u_\varphi(e_n) = e_{\varphi(n)}</math> if <math>n\in D_\varphi</math>, <math>0</math> otherwise. In other terms if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spaned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spaned by <math>(e_n)_{n\in C_\varphi}</math>. As a particular case if <math>\varphi</math> is <math>1_D</math> the partial identity on <math>D</math> then <math>u_\varphi</math> is the projector on the subspace spaned by <math>(e_n)_{n\in D}</math>.<br />
<br />
If <math>\psi</math> is another partial permutation then we have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
If <math>\varphi</math> is a partial permutation then the adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the projector on the domain of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>.<br />
<br />
and similarly the projector on the codomain of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u_\varphi^* = u_{1_{C_\varphi}}</math>.<br />
<br />
== Interpreting the multiplicative connectives ==<br />
<br />
Recall that when <math>u</math> and <math>v</math> are operators we denote by <math>u\perp v</math> the fact that <math>uv</math> is nilpotent. This duality applies to operators and shouldn't be confused with orthogonality of vectors in <math>H</math>. In the sequel we will only use the notation <math>\perp</math> for the duality of operators.<br />
<br />
If <math>X</math> is set of operators also recall that <math>X\orth</math> denotes the set of dual operators:<br />
: <math>X\orth = \{v\in \mathcal{B}(H) \text{ such that }\forall u\in X, uv \text{ is nilpotent}\}</math>.<br />
<br />
There are a few properties of this duality that we will use without mention in the sequel; let <math>X</math> and <math>Y</math> be sets of operators:<br />
: <math>X\subset X\biorth</math>;<br />
: <math>X\orth = X\triorth</math>.<br />
: if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;<br />
<br />
In particular <math>X\orth</math> is always a type (equal to its biorthogonal). We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.<br />
<br />
=== The tensor and the linear application ===<br />
<br />
Our first step is, given two types <math>A</math> and <math>B</math>, to construct the type <math>A\tens B</math>. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\rightsquigarrow p(x)+q(y)</math> where <math>p:H\mapsto H</math> and <math>q:H\mapsto H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
From the definition <math>p</math> and <math>q</math> have full domain, that is satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are orthogonal, thus we have <math>p^* q = q^* p = 0</math>. We also have <math>pp^* + qq^* = 1</math> although this property is not needed in the sequel.<br />
<br />
Note that the choice of <math>p</math> and <math>q</math> is actually arbitrary, any two partial isometries with full domain and orthogonal codomains would do the job.<br />
<br />
Let <math>U</math> be an operator on <math>H\oplus H</math>. We can write <math>U</math> as a matrix:<br />
: <math>U = \begin{pmatrix}<br />
U_{11} & U_{12}\\<br />
U_{21} & U_{22}<br />
\end{pmatrix}</math><br />
where each <math>U_{ij}</math> operates on <math>H</math>.<br />
<br />
Now through the isomorphism <math>H\oplus H\cong H</math> we may transform <math>U</math> into the operator <math>u</math> on <math>H</math> defined by:<br />
<br />
: <math>u = pU_{11}p^* + pU_{12}q^* + qU_{21}p^* + qu_{22}q^*</math>.<br />
<br />
We call <math>u</math> the ''internalization'' of <math>U</math>. Conversely given an operator <math>u</math> on <math>H</math> we may externalize it obtaining an operator <math>U</math> on <math>H\oplus H</math>:<br />
: <math>U_{11} = p^*up</math>;<br />
: <math>U_{12} = p^*uq</math>;<br />
: <math>U_{21} = q^*up</math>;<br />
: <math>U_{22} = q^*uq</math>.<br />
<br />
Given <math>A</math> and <math>B</math> two types, we define their tensor by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math><br />
<br />
Note the closure by biorthogonal to make sure that we obtain a type. From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:<br />
: <math>A\limp B = (A\tens B\orth)\orth</math>.<br />
<br />
Unfolding this definition we see that we have:<br />
: <math>A\limp B = \{u\in\mathcal{B}(H)\text{ such that } \forall v\in A, \forall w\in B\orth,\, u(pvp^* + qwq^*) \text{ is nilpotent}\}</math>.<br />
<br />
= The Geometry of Interaction as an abstract machine =</div>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operator algebras: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) did not interpret a proof of <math>A\limp B</math> as a morphism ''from'' (the space interpreting) <math>A</math> ''to'' (the space interpreting) <math>B</math> and proof composition (cut rule) as the composition of morphisms. Rather, the proof was interpreted as an operator acting ''on'' (the space interpreting) <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was originally expressed as a feedback equation solved by the ''execution formula''. The execution formula has some formal analogies with Kleene's formula for recursive functions, which made it possible to claim that GoI was an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (the multiplicative and exponential fragment), which is enough to encode lambda-calculus. Since then Girard has proposed several improvements: firstly the extension to the additive connectives, known as ''Geometry of Interaction 3'', and more recently a complete reformulation using von Neumann algebras that allows one to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model, exhibiting its combinatorial nature through a theory of reduction of paths in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata that interact with each other through their common interface. The execution formula has also rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories, following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentioned in [[coherent semantics]] under the name ''symmetric reducibility''. First, set a general space in which the interpretations of proofs will live; in the case of GoI, this is the space of bounded operators on <math>\ell^2</math>.<br />
<br />
Second, define a suitable duality on this space, denoted <math>u\perp v</math>. For the GoI two dualities have proved to work; the first one is nilpotency: two operators <math>u</math> and <math>v</math> are dual if <math>uv</math> is nilpotent, that is, if there is a positive integer <math>n</math> such that <math>(uv)^n = 0</math>.<br />
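The nilpotency duality can be illustrated in finite dimension, with matrices standing in for bounded operators on <math>\ell^2</math> (a sketch only; the names <code>is_nilpotent</code> and <code>dual</code> are ours, and the Cayley-Hamilton bound <math>n = N</math> is a convenience valid for <math>N \times N</math> matrices):

```python
import numpy as np

def is_nilpotent(m):
    """A finite N x N matrix m is nilpotent iff m^N = 0 (by Cayley-Hamilton,
    testing the power n = N suffices)."""
    n = m.shape[0]
    return bool(np.allclose(np.linalg.matrix_power(m, n), 0))

def dual(u, v):
    """The GoI duality, transposed to finite matrices: u ⊥ v iff uv is nilpotent."""
    return is_nilpotent(u @ v)

u = np.array([[0., 1.], [0., 0.]])   # u^2 = 0, so u is dual to itself
v = np.array([[0., 0.], [1., 0.]])   # uv is a nonzero projector, hence not nilpotent
```

Here <code>dual(u, u)</code> holds while <code>dual(u, v)</code> fails, showing that the duality is a genuine constraint rather than a triviality.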
<br />
Last, define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. In the case of GoI this means that <math>u\in T</math> iff for every operator <math>v</math> such that <math>v\in T\orth</math>, that is, such that <math>u'v</math> is nilpotent for all <math>u'\in T</math>, we have <math>u\perp v</math>, that is, <math>uv</math> is nilpotent.<br />
<br />
It now remains to interpret logical operations, that is, to associate a type with each formula and an operator with each proof, and to prove the adequacy lemma: if <math>u</math> is the interpretation of a proof of the formula <math>A</math>, then <math>u</math> belongs to the type associated with <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
We begin with a brief tour of the operations on Hilbert spaces that will be used in the sequel. In this article <math>H</math> will stand for the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math>, we denote by <math>\langle x,y\rangle</math> their scalar product:<br />
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.<br />
<br />
Two vectors of <math>H</math> are ''orthogonal'' if their scalar product is zero. This notion is not to be confused with the orthogonality of operators defined above. The ''norm'' of a vector is the square root of its scalar product with itself:<br />
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.<br />
<br />
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical hilbertian basis of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kronecker symbol. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:<br />
: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.<br />
<br />
In this article we call ''operator'' on <math>H</math> a ''continuous'' linear map from <math>H</math> to <math>H</math>. Continuity allows us to define the ''norm'' of an operator <math>u</math> as the sup of the norms of its values on the unit ball:<br />
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.<br />
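In finite dimension this sup is attained and coincides with the largest singular value of the matrix, which is how one computes it in practice (a numerical sketch, not part of the original text):

```python
import numpy as np

# A diagonal operator on C^2 stretching e_1 by 3 and e_2 by 4:
# its operator norm (sup of ||u(x)|| over the unit ball) is 4.
u = np.array([[3., 0.],
              [0., 4.]])
op_norm = np.linalg.norm(u, 2)   # spectral norm = largest singular value

# Sampling the unit sphere never exceeds the operator norm:
rng = np.random.default_rng(0)
xs = rng.standard_normal((1000, 2))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)   # normalize each sample
samples = np.linalg.norm(xs @ u.T, axis=1)        # ||u(x)|| for each unit vector x
```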
<br />
The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors annihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel:<br />
<br />
: <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;<br />
: <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;<br />
: <math>\mathrm{Dom}(u) = \mathrm{Ker}(u)\orth = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.<br />
<br />
The kernel and the domain are closed subspaces of <math>H</math>; the range of a bounded operator need not be closed in general, but it is closed for the projectors and partial isometries considered below.<br />
<br />
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>.<br />
<br />
A ''projector'' is an idempotent operator of norm <math>1</math>, that is, an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 1</math>. A projector is self-adjoint and its domain is equal to its codomain.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^* u = u</math>; as a consequence <math>uu^*</math> is a projector whose range is the range of <math>u</math>. Similarly <math>u^* u</math> is a projector whose range is the domain of <math>u</math>. The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.<br />
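A concrete example is a truncated shift. The sketch below (finite matrices in place of operators on <math>\ell^2</math>; real entries, so the adjoint is just the transpose) checks <math>uu^*u = u</math> and exhibits <math>u^*u</math> and <math>uu^*</math> as the projectors on the domain and codomain:

```python
import numpy as np

# Truncated shift on C^4: s(e_0)=e_1, s(e_1)=e_2, s(e_2)=e_3, s(e_3)=0.
# Column n of the matrix holds the image of e_n.
s = np.zeros((4, 4))
for n in range(3):
    s[n + 1, n] = 1.0

p_dom = s.T @ s    # projector on the domain:   span(e_0, e_1, e_2)
p_cod = s @ s.T    # projector on the codomain: span(e_1, e_2, e_3)
```

On its domain the shift preserves norms, and it annihilates <math>e_3</math>, so it is an isometry only after restriction to the domain.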
<br />
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If the domain of <math>u</math> is <math>H</math>, that is, if <math>u^* u = 1</math>, we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries, the equation <math>uu^* + vv^* = 1</math> means that the codomains of <math>u</math> and <math>v</math> are orthogonal and that their direct sum is <math>H</math>.<br />
<br />
=== Partial permutations and partial isometries ===<br />
<br />
It turns out that most of the operators needed to interpret logical operations are generated by ''partial permutations'' on the basis, which in particular entails that they are partial isometries.<br />
<br />
More precisely a partial permutation <math>\varphi</math> on <math>\mathbb{N}</math> is a function defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> which is one-to-one onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:<br />
<br />
: <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;<br />
: if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;<br />
: the codomain of <math>\varphi\circ\psi</math> is the image of its domain under <math>\varphi\circ\psi</math>.<br />
<br />
Partial permutations are well known to form an ''inverse monoid'', a structure that we now detail.<br />
<br />
A ''partial identity'' is a partial permutation <math>1_D</math> whose domain and codomain are both equal to a subset <math>D</math>, on which <math>1_D</math> is the identity function. Among partial identities one finds the identity on the empty subset, that is the empty map, which we will denote by <math>0</math>, and the identity on <math>\mathbb{N}</math>, which we will denote by <math>1</math>. This latter partial identity is the neutral element for composition.<br />
<br />
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and which satisfies:<br />
<br />
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math><br />
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math><br />
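The inverse-monoid structure is small enough to code directly. In the sketch below (our own encoding, not from the original text) a partial permutation is a Python dict mapping each point of its domain to its image; composition, inverse, and the partial identities then follow the definitions above:

```python
def compose(phi, psi):
    """(phi ∘ psi)(n) = phi(psi(n)), defined exactly when n ∈ dom(psi)
    and psi(n) ∈ dom(phi)."""
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    """phi^{-1}: swap domain and codomain."""
    return {m: n for n, m in phi.items()}

def identity_on(d):
    """The partial identity 1_D on the subset d."""
    return {n: n for n in d}

phi = {0: 2, 1: 3}          # domain {0,1}, codomain {2,3}
psi = {2: 0, 5: 1, 7: 9}    # 7 is dropped by phi∘psi, since psi(7)=9 ∉ dom(phi)
```

One checks that <code>compose(inverse(phi), phi)</code> is the partial identity on the domain of <code>phi</code> and <code>compose(phi, inverse(phi))</code> the partial identity on its codomain, exactly the two equations displayed above.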
<br />
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by <math>u_\varphi(e_n) = e_{\varphi(n)}</math> if <math>n\in D_\varphi</math>, <math>0</math> otherwise. In other words, if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:<br />
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.<br />
<br />
The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. As a particular case, if <math>\varphi</math> is <math>1_D</math>, the partial identity on <math>D</math>, then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.<br />
<br />
If <math>\psi</math> is another partial permutation then we have:<br />
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.<br />
<br />
If <math>\varphi</math> is a partial permutation then the adjoint of <math>u_\varphi</math> is:<br />
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.<br />
<br />
In particular the projector on the domain of <math>u_{\varphi}</math> is given by:<br />
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>.<br />
<br />
and similarly the projector on the codomain of <math>u_\varphi</math> is:<br />
: <math>u_\varphi u_\varphi^* = u_{1_{C_\varphi}}</math>.<br />
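All four identities can be checked numerically on a finite truncation of the basis (a sketch with our own helper <code>iso</code>; entries are real, so the adjoint is the transpose):

```python
import numpy as np

def iso(phi, N):
    """Truncated matrix of u_phi on e_0..e_{N-1}: column n holds e_{phi(n)}
    when n is in the domain of phi, and 0 otherwise."""
    u = np.zeros((N, N))
    for n, m in phi.items():
        u[m, n] = 1.0
    return u

phi = {0: 2, 1: 3}   # domain {0,1}, codomain {2,3}
psi = {2: 0, 3: 1}   # domain {2,3}, codomain {0,1}
N = 4
u_phi, u_psi = iso(phi, N), iso(psi, N)

# phi ∘ psi as a dict, following the composition rule given earlier:
comp = {n: phi[psi[n]] for n in psi if psi[n] in phi}
```

The assertions below verify <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>, <math>u_\varphi^* = u_{\varphi^{-1}}</math>, and the two projector identities on the domain and the codomain.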
<br />
== Interpreting the tensor ==<br />
<br />
The first step is, given two types <math>A</math> and <math>B</math>, to construct the type <math>A\tens B</math>. For this purpose we will define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\mapsto p(x)+q(y)</math> where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
This choice is actually arbitrary: any two partial isometries <math>p,q</math> with full domains and such that the direct sum of their codomains is <math>H</math> would do the job.<br />
<br />
From the definition, <math>p</math> and <math>q</math> have full domain, that is, they satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are orthogonal, thus we have <math>p^* q = q^* p = 0</math>. Note that we also have <math>pp^* + qq^* = 1</math>, although this property is not needed in the sequel.<br />
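On a finite truncation these relations hold exactly if one takes <math>p, q</math> rectangular, mapping <math>\mathbb{C}^N</math> into <math>\mathbb{C}^{2N}</math> (a sketch of ours; in the real case the adjoint is the transpose):

```python
import numpy as np

N = 4  # p, q : C^N -> C^{2N}, truncating p(e_n) = e_{2n} and q(e_n) = e_{2n+1}
p = np.zeros((2 * N, N))
q = np.zeros((2 * N, N))
for n in range(N):
    p[2 * n, n] = 1.0        # p sends e_n to the even slot e_{2n}
    q[2 * n + 1, n] = 1.0    # q sends e_n to the odd slot  e_{2n+1}
```

The even and odd slots partition the basis of <math>\mathbb{C}^{2N}</math>, which is exactly why <math>p^*p = q^*q = 1</math>, <math>p^*q = q^*p = 0</math>, and <math>pp^* + qq^* = 1</math> all hold.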
<br />
Let <math>U</math> be an operator on <math>H\oplus H</math>. We can write <math>U</math> as a matrix:<br />
: <math>U = \begin{pmatrix}<br />
U_{11} & U_{12}\\<br />
U_{21} & U_{22}<br />
\end{pmatrix}</math><br />
where each <math>U_{ij}</math> operates on <math>H</math>.<br />
<br />
Now through the isomorphism <math>H\oplus H\cong H</math> we may transform <math>U</math> into the operator <math>\bar U</math> on <math>H</math> defined by:<br />
<br />
: <math>\bar U = pU_{11}p^* + pU_{12}q^* + qU_{21}p^* + qU_{22}q^*</math>.<br />
<br />
We call <math>\bar U</math> the ''internalization'' of <math>U</math>.<br />
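Internalization is easy to test numerically: <math>\bar U</math> must act on <math>p(x)+q(y)</math> exactly as the block matrix <math>U</math> acts on <math>x\oplus y</math>. The sketch below (finite truncation, random blocks of our choosing) checks this:

```python
import numpy as np

N = 2
p = np.zeros((2 * N, N))
q = np.zeros((2 * N, N))
for n in range(N):
    p[2 * n, n] = 1.0
    q[2 * n + 1, n] = 1.0

rng = np.random.default_rng(1)
U11, U12, U21, U22 = (rng.standard_normal((N, N)) for _ in range(4))

# Internalization of the block operator U through H ⊕ H ≅ H:
U_bar = p @ U11 @ p.T + p @ U12 @ q.T + q @ U21 @ p.T + q @ U22 @ q.T

x, y = rng.standard_normal(N), rng.standard_normal(N)
lhs = U_bar @ (p @ x + q @ y)                              # U_bar on the image of x ⊕ y
rhs = p @ (U11 @ x + U12 @ y) + q @ (U21 @ x + U22 @ y)    # image of U(x ⊕ y)
```

The agreement of <code>lhs</code> and <code>rhs</code> is a direct consequence of the relations <math>p^*p = q^*q = 1</math> and <math>p^*q = q^*p = 0</math> established above.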
<br />
Given <math>A</math> and <math>B</math> two types, we define their tensor by:<br />
<br />
: <math>A\tens B = \{pup^* + qvq^*,\, u\in A,\, v\in B\}\biorth</math>.<br />
<br />
From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:<br />
: <math>\begin{pmatrix}<br />
u & 0\\<br />
0 & v<br />
\end{pmatrix}</math><br />
<br />
= The Geometry of Interaction as an abstract machine =</div>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operators algebra: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) did not interpret a proof of <math>A\limp B</math> as a morphism ''from'' (the space interpreting) <math>A</math> ''to'' (the space interpreting) <math>B</math> and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' (the space interpreting) <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was originally expressed as a feedback equation solved by the ''execution formula''. The execution formula has some formal analogies with Kleene's formula for recursive functions, which made it possible to claim that GoI was an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (the multiplicative and exponential fragment), which is enough to encode lambda-calculus. Since then Girard proposed several improvements: firstly the extension to the additive connectives, known as ''Geometry of Interaction 3'', and more recently a complete reformulation using von Neumann algebras that allows one to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier reformulated the original model, exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata that interact with each other through their common interface. The execution formula was also rapidly understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories following an idea originally proposed by Joyal.<br />
<br />
= The Geometry of Interaction as an abstract machine =<br />
<br />
= The Geometry of Interaction as operators =<br />
<br />
The original construction of GoI by Girard follows a general pattern already mentioned in [[coherent semantics]] under the name ''symmetric reducibility''. First set a general space; here, in the case of GoI, the space will be the space of bounded operators on <math>H=\ell^2(\mathbb{C})</math>. This is where the interpretations of proof objects will live. Second define a suitable duality on the space, denoted <math>u\perp v</math>. For the GoI, two dualities have proved to work, the first one being nilpotency: two operators <math>u</math> and <math>v</math> are dual if <math>uv</math> is nilpotent, that is, if there is a nonnegative integer <math>n</math> such that <math>(uv)^n = 0</math>. Last define a ''type'' as a set <math>T</math> of objects that is equal to its bidual: <math>T = T\biorth</math>. Concretely this means that <math>u\in T</math> iff <math>u\perp v</math>, that is <math>uv</math> is nilpotent, for every operator <math>v\in T\orth</math>, i.e. for every <math>v</math> such that <math>u'v</math> is nilpotent for all <math>u'\in T</math>.<br />
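The nilpotency duality can be illustrated on a toy example (ours, with small matrices in place of bounded operators on <math>\ell^2</math>): two partial permutations of the basis that both shift indices upward have a nilpotent product, hence are dual.<br />

```python
import numpy as np

# Toy 4-dimensional stand-in for the duality u ⊥ v iff (uv)^n = 0 for some n.
N = 4
# u and v are partial permutations of the basis: e_i -> e_{i+1},
# with the last basis vector sent to 0 (the permutation is partial).
u = np.eye(N, k=-1)  # ones on the subdiagonal
v = np.eye(N, k=-1)

uv = u @ v  # shifts every basis vector up by 2, so it vanishes eventually
assert np.array_equal(np.linalg.matrix_power(uv, N), np.zeros((N, N)))
```

Here <math>uv</math> already vanishes at the second power; the assertion uses the <math>N</math>-th power, which suffices for any nilpotent <math>N\times N</math> matrix.<br />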
<br />
It remains to interpret logical operations in this framework, that is to associate a type to each formula, an object to each proof and to show the adequacy lemma: if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.<br />
<br />
== Preliminaries ==<br />
<br />
We begin with a brief tour of the operations on <math>H</math> that will be used in the sequel.<br />
<br />
Let us denote by <math>(e_n)_{n\in\mathbb{N}}</math> the canonical Hilbert basis of <math>H = \ell^2</math>; <math>e_n</math> is the sequence whose terms are all <math>0</math> except at position <math>n</math>, where its value is <math>1</math>: <math>e_n = (\delta_{in})_{i\in\mathbb{N}}</math> where <math>\delta_{in}</math> is the standard Kronecker symbol. Recall that the adjoint of an operator <math>u</math> is the operator <math>u^\ast</math> defined by <math>\langle u(x), y\rangle = \langle x, u^\ast(y)\rangle</math> for any <math>x,y\in H</math>.<br />
<br />
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^\ast u = u</math>; as a consequence <math>uu^\ast</math> is a projector whose range is the range of <math>u</math>; we will call this range the ''codomain'' of <math>u</math>. Similarly <math>u^\ast u</math> is also a projector, whose range is the ''domain'' of <math>u</math> (which is orthogonal to the kernel of <math>u</math>). The domain and the codomain of <math>u</math> are both closed subspaces of <math>H</math> and <math>u</math> restricted to its domain is an isometry. If the domain of <math>u</math> is <math>H</math>, that is if <math>u^\ast u = 1</math>, we say that <math>u</math> has ''full domain'', and similarly for the codomain.<br />
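These properties are easy to check on a small example (a sketch of ours, with a finite matrix standing in for an operator on <math>\ell^2</math>): a partial permutation of the basis gives a partial isometry, and <math>u^\ast u</math>, <math>uu^\ast</math> project onto its domain and codomain.<br />

```python
import numpy as np

# A partial permutation of the basis of C^4: e_0 -> e_2, e_1 -> e_3,
# undefined (sent to 0) on e_2 and e_3.
N = 4
u = np.zeros((N, N))
u[2, 0] = 1.0
u[3, 1] = 1.0

# u u* u = u: u is a partial isometry.
assert np.array_equal(u @ u.T @ u, u)

dom = u.T @ u  # projector onto the domain  (span of e_0, e_1)
cod = u @ u.T  # projector onto the codomain (span of e_2, e_3)
assert np.array_equal(dom, np.diag([1.0, 1.0, 0.0, 0.0]))
assert np.array_equal(cod, np.diag([0.0, 0.0, 1.0, 1.0]))
assert np.array_equal(dom @ dom, dom)  # projectors are idempotent
assert np.array_equal(cod @ cod, cod)
```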
<br />
If <math>u</math> is a partial isometry then <math>u^\ast</math> is also a partial isometry the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.<br />
<br />
If <math>u</math> and <math>v</math> are two partial isometries, the equation <math>uu^\ast + vv^\ast = 1</math> means that the codomains of <math>u</math> and <math>v</math> are orthogonal and that their direct sum is <math>H</math>.<br />
<br />
We shall define a number of operators on <math>H</math> by describing their action on the basis <math>(e_n)</math>; actually most of the operators that are used to interpret logical operations will turn out to be defined as partial permutations on the basis, which in particular entails that they are partial isometries.<br />
<br />
== Interpreting the tensor ==<br />
<br />
The first step is, given two types <math>A</math> and <math>B</math>, to construct the type <math>A\tens B</math>. For this purpose we will define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\mapsto p(x)+q(y)</math> where <math>p:H\to H</math> and <math>q:H\to H</math> are given by:<br />
<br />
: <math>p(e_n) = e_{2n}</math>,<br />
: <math>q(e_n) = e_{2n+1}</math>.<br />
<br />
This choice is actually arbitrary: any two partial isometries <math>p,q</math> with full domain and such that their codomains sum to <math>H</math> would do the job.</div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interactionGeometry of interaction2010-03-16T11:38:04Z<p>Laurent Regnier: typos, style</p>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operator algebras: formulae were interpreted by Hilbert spaces and proofs by partial isometries.<br />
<br />
This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) did not interpret a proof of <math>A\limp B</math> as a morphism ''from'' (the space interpreting) <math>A</math> ''to'' (the space interpreting) <math>B</math> and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' (the space interpreting) <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was originally expressed as a feedback equation solved by the ''execution formula''. The execution formula has some formal analogies with Kleene's formula for recursive functions, which made it possible to claim that GoI was an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (the multiplicative and exponential fragment), which is enough to encode lambda-calculus. Since then Girard proposed several improvements: firstly the extension to the additive connectives, known as ''Geometry of Interaction 3'', and more recently a complete reformulation using von Neumann algebras that allows one to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier reformulated the original model, exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata that interact with each other through their common interface. The execution formula was also rapidly understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories following an idea originally proposed by Joyal.</div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interactionGeometry of interaction2010-03-16T11:34:35Z<p>Laurent Regnier: reprise de l'intro précisant la différence GoI/sémantique dénotationnelle</p>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operator algebras: formulae were interpreted by Hilbert spaces and proofs by partial isometries. This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) did not interpret a proof of <math>A\limp B</math> as a morphism ''from'' (the space interpreting) <math>A</math> ''to'' (the space interpreting) <math>B</math> and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' (the space interpreting) <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was originally expressed as a feedback equation solved by the ''execution formula''. The fact that the execution formula has some formal analogies with Kleene's formula for recursive functions made it possible to claim that GoI was an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].<br />
<br />
The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (the multiplicative and exponential fragment), which is enough to encode lambda-calculus. Since then Girard proposed several improvements: firstly the extension to the additive connectives, known as ''Geometry of Interaction 3'', and more recently a complete reformulation using von Neumann algebras that allows one to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier reformulated the original model, exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata that interact with each other through their common interface. The execution formula was also rapidly understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories following an idea originally proposed by Joyal.</div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/User:Laurent_RegnierUser:Laurent Regnier2009-04-02T13:59:17Z<p>Laurent Regnier: New page: [http://iml.univ-mrs.fr/ Institut de Mathématiques de Luminy] - [http://www.univmed.fr/ Université de la Méditerranée] My [http://iml.univ-mrs.fr/~regnier/ home page].</p>
<hr />
<div>[http://iml.univ-mrs.fr/ Institut de Mathématiques de Luminy] - [http://www.univmed.fr/ Université de la Méditerranée]<br />
<br />
My [http://iml.univ-mrs.fr/~regnier/ home page].</div>Laurent Regnierhttp://llwiki.ens-lyon.fr/mediawiki/index.php/Geometry_of_interactionGeometry of interaction2009-03-28T15:47:37Z<p>Laurent Regnier: style corrections</p>
<hr />
<div>The ''geometry of interaction'', GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operator algebras: formulae were interpreted by Hilbert spaces and proofs by partial isometries. This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) did not interpret proofs by morphisms and the composition of proofs (allowed by the cut rule) by the composition of morphisms, as is the case in [[Coherent semantics|denotational]] or [[Categorical semantics|categorical]] semantics. Instead the GoI was claimed to be an ''operational semantics'' as it interpreted cut elimination as a mathematical process, namely the computation of a series, the ''execution formula'', which is the solution of a ''feedback'' equation.<br />
<br />
The first instance of the GoI, which will be described in detail in this article, was restricted to the <math>MELL</math> fragment of linear logic (the multiplicative and exponential fragment), which is enough to encode lambda-calculus. Since then Girard proposed several improvements: firstly the extension to the additive connectives, known as ''Geometry of Interaction 3'', and more recently a complete reformulation using von Neumann algebras that allows one to deal with some aspects of [[Light linear logics|implicit complexity]].<br />
<br />
The GoI has been a source of inspiration for various authors. Danos and Regnier reformulated the original model, exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. It was shown to be related to game semantics and more precisely to the Abramsky-Jagadeesan-Malacaria model of PCF. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories following an idea originally proposed by Joyal.</div>Laurent Regnier