# Geometry of interaction


The ''geometry of interaction'', GoI for short, was defined in the early nineties by Girard as an interpretation of linear logic into operator algebras: formulae were interpreted by Hilbert spaces and proofs by partial isometries.

This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\limp B</math> as a morphism ''from'' <math>A</math> ''to'' <math>B</math> and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting ''on'' <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'', which bears some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an ''operational semantics'', as opposed to traditional [[Semantics|denotational semantics]].

The first instance of the GoI was restricted to the <math>MELL</math> fragment of linear logic (the multiplicative and exponential fragment), which is enough to encode lambda-calculus. Since then Girard has proposed several improvements: firstly the extension to the additive connectives, known as ''Geometry of Interaction 3'', and more recently a complete reformulation using von Neumann algebras that allows one to deal with some aspects of [[Light linear logics|implicit complexity]].

The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines; the execution formula appears as the composition of two automata interacting through a common interface. Also the execution formula has rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the <math>MELL</math> fragment has been reformulated in the framework of traced monoidal categories following an idea originally proposed by Joyal.

= The Geometry of Interaction as operators =

The original construction of GoI by Girard follows a general pattern already mentioned in the section on [[coherent semantics]] under the name ''symmetric reducibility'', and that was first put to use in [[phase semantics]]. First set a general space <math>P</math> called the ''proof space'' because this is where the interpretations of proofs will live. Make sure that <math>P</math> is a (not necessarily commutative) monoid. In the case of GoI, the proof space is a subset of the space of bounded operators on <math>\ell^2</math>.

Second define a particular subset of <math>P</math> that will be denoted by <math>\bot</math>; then derive a duality on <math>P</math>: for <math>u,v\in P</math>, <math>u</math> and <math>v</math> are dual<ref>In modern terms one says that <math>u</math> and <math>v</math> are ''polar''.</ref> iff <math>uv\in\bot</math>.

For the GoI, two dualities have proved to work; we will consider the first one: nilpotency, ''ie'', <math>\bot</math> is the set of nilpotent operators in <math>P</math>. Let us spell this out: two operators <math>u</math> and <math>v</math> are dual if there is a nonnegative integer <math>n</math> such that <math>(uv)^n = 0</math>. This duality is symmetric: if <math>uv</math> is nilpotent then <math>vu</math> is nilpotent also.

When <math>X</math> is a subset of <math>P</math> define <math>X\orth</math> as the set of elements of <math>P</math> that are dual to all elements of <math>X</math>:

: <math>X\orth = \{u\in P, \forall v\in X, uv\in\bot\}</math>.

This construction has a few properties that we will use without mention in the sequel. Given two subsets <math>X</math> and <math>Y</math> of <math>P</math> we have:

* if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;
* <math>X\subset X\biorth</math>;
* <math>X\triorth = X\orth</math>.

Last define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for every operator <math>v\in T\orth</math>, that is such that <math>u'v\in\bot</math> for all <math>u'\in T</math>, we have <math>uv\in\bot</math>.

The real work<ref>The difficulty is to find the right duality that will make logical operations interpretable. General conditions that allow one to achieve this have been formulated by Hyland and Schalk thanks to their theory of ''[[double glueing]]''.</ref> is now to interpret logical operations, that is to associate a type to each formula and an object to each proof, and to show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.

== Preliminaries ==

We will denote by <math>H</math> the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> their ''scalar product'' is:

: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.

Two vectors of <math>H</math> are ''orthogonal'' if their scalar product is null. We will say that two subspaces are ''disjoint'' when any two vectors taken in each subspace are orthogonal. Note that this notion is different from the set-theoretic one; in particular two disjoint subspaces always have exactly one vector in common: <math>0</math>.

The ''norm'' of a vector is the square root of its scalar product with itself:

: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.

Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical ''hilbertian basis'' of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kronecker symbol: <math>\delta_{kn}=1</math> if <math>k=n</math>, <math>0</math> otherwise. Thus if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:

: <math> x = \sum_{n\in\mathbb{N}} x_ne_n</math>.

An ''operator'' on <math>H</math> is a ''continuous'' linear map from <math>H</math> to <math>H</math><ref>Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.</ref>. The set of (bounded) operators is denoted by <math>\mathcal{B}(H)</math>.

The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are annihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel, ''ie'', the maximal subspace disjoint from the kernel:

* <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;
* <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;
* <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}</math>.

These three sets are closed subspaces of <math>H</math>.

The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x,y\in H</math>. Adjointness is well behaved w.r.t. composition of operators:

: <math>(uv)^* = v^*u^*</math>.

A ''projector'' is an idempotent operator of norm <math>0</math> (the projector on the null subspace) or <math>1</math>, that is an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is self-adjoint and its domain is equal to its codomain.

A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^* u = u</math>; this condition entails that we also have <math>u^*uu^* = u^*</math>. As a consequence <math>u^*u</math> and <math>uu^*</math> are both projectors, called respectively the ''initial'' and the ''final'' projector of <math>u</math> because their (co)domains are respectively the domain and the codomain of <math>u</math>:

* <math>\mathrm{Dom}(u^*u) = \mathrm{Codom}(u^*u) = \mathrm{Dom}(u)</math>;
* <math>\mathrm{Dom}(uu^*) = \mathrm{Codom}(uu^*) = \mathrm{Codom}(u)</math>.

The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.

If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry, the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.

If the domain of <math>u</math> is <math>H</math>, that is if <math>u^* u = 1</math>, we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries, the equation <math>uu^* + vv^* = 1</math> means that the codomains of <math>u</math> and <math>v</math> are disjoint and that their direct sum is <math>H</math>.

=== Partial permutations and partial isometries ===

We will now define our proof space, which turns out to be the set of partial isometries acting as permutations on the canonical basis <math>(e_n)_{n\in\mathbb{N}}</math>.

More precisely a ''partial permutation'' <math>\varphi</math> on <math>\mathbb{N}</math> is a one-to-one map defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:

* <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;
* if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;
* the codomain of <math>\varphi\circ\psi</math> is the image of the domain: <math>C_{\varphi\circ\psi} = \{\varphi(\psi(n)), n\in D_{\varphi\circ\psi}\}</math>.
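As an illustration (not part of the original development), a partial permutation restricted to a finite domain can be encoded as an injective Python dictionary mapping its domain to its codomain; the three bullet points above then become a one-line comprehension (the encoding and the name <code>compose</code> are ours):

```python
def compose(phi, psi):
    """Composition of partial permutations encoded as injective dicts.
    n is in the domain of phi∘psi iff n is in the domain of psi and
    psi(n) is in the domain of phi; then (phi∘psi)(n) = phi(psi(n))."""
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

phi = {0: 1, 1: 0}   # swaps 0 and 1
psi = {0: 1, 1: 2}   # partial shift defined on {0, 1}

# only n = 0 survives: psi(0) = 1 lies in the domain of phi
print(compose(phi, psi))   # {0: 0}
```

The codomain <math>C_{\varphi\circ\psi}</math> is then simply the set of values of the resulting dictionary.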

Partial permutations are well known to form a structure of ''inverse monoid'', which we detail now.

A ''partial identity'' is a partial permutation <math>1_D</math> whose domain and codomain are both equal to a subset <math>D</math>, on which <math>1_D</math> is the identity function. Partial identities are idempotent for composition.

Among partial identities one finds the identity on the empty subset, that is the empty map, which we will denote by <math>0</math>, and the identity on <math>\mathbb{N}</math>, which we will denote by <math>1</math>. This latter partial permutation is the neutral element for composition.

If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>D_{\varphi^{-1}} = C_{\varphi}</math> and which satisfies:

: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math>
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math>
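Continuing the illustrative dict encoding, the two equations above can be checked mechanically (a sketch, with helper names of our choosing):

```python
def compose(phi, psi):
    """(phi∘psi)(n) = phi(psi(n)), defined where both steps are defined."""
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    """The inverse partial permutation: exchanges domain and codomain."""
    return {phi[n]: n for n in phi}

def identity_on(D):
    """The partial identity 1_D."""
    return {n: n for n in D}

phi = {0: 3, 1: 4, 2: 5}
assert compose(inverse(phi), phi) == identity_on({0, 1, 2})  # 1 on D_phi
assert compose(phi, inverse(phi)) == identity_on({3, 4, 5})  # 1 on C_phi
```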

Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:

: <math>u_\varphi(e_n) =
\begin{cases}
e_{\varphi(n)} & \text{ if }n\in D_\varphi,\\
0 & \text{ otherwise.}
\end{cases}
</math>

In other terms if <math>x=(x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:

: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.

We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined, so that the definition of <math>u_\varphi</math> reads:

: <math>u_\varphi(e_n) = e_{\varphi(n)}</math>.

The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. In particular if <math>\varphi</math> is <math>1_D</math> then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.

{{Proposition|
Let <math>\varphi</math> and <math>\psi</math> be two partial permutations. We have:

: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.

The adjoint of <math>u_\varphi</math> is:

: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.

In particular the initial projector of <math>u_{\varphi}</math> is given by:

: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>

and the final projector of <math>u_\varphi</math> is:

: <math>u_\varphi u^*_\varphi = u_{1_{C_\varphi}}</math>.

Projectors generated by partial identities commute:

: <math>u_\varphi u_\varphi^*u_\psi u_\psi^* = u_\psi u_\psi^*u_\varphi u_\varphi^*</math>.
}}

Note that this entails all the other commutations of projectors: <math>u^*_\varphi u_\varphi u_\psi u^*_\psi = u_\psi u^*_\psi u^*_\varphi u_\varphi</math> and <math>u^*_\varphi u_\varphi u^*_\psi u_\psi = u^*_\psi u_\psi u^*_\varphi u_\varphi</math>.
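Because <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math> and <math>u_\varphi^* = u_{\varphi^{-1}}</math>, every identity of the proposition can be checked at the level of partial permutations; a small sketch in the illustrative dict encoding introduced earlier:

```python
def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    return {phi[n]: n for n in phi}

phi = {0: 2, 1: 0}
psi = {0: 1, 2: 3}

# initial projector u*_phi u_phi and final projector u_phi u*_phi
assert compose(inverse(phi), phi) == {0: 0, 1: 1}  # identity on D_phi
assert compose(phi, inverse(phi)) == {0: 0, 2: 2}  # identity on C_phi

# projectors generated by partial identities commute
p1 = compose(phi, inverse(phi))
p2 = compose(inverse(psi), psi)
assert compose(p1, p2) == compose(p2, p1)
```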

{{Definition|
We call ''<math>p</math>-isometry'' a partial isometry of the form <math>u_\varphi</math> where <math>\varphi</math> is a partial permutation on <math>\mathbb{N}</math>. The ''proof space'' <math>\mathcal{P}</math> is the set of all <math>p</math>-isometries.
}}

In particular note that <math>0</math> is a <math>p</math>-isometry. The set <math>\mathcal{P}</math> is a submonoid of <math>\mathcal{B}(H)</math> but it is not a subalgebra<ref><math>\mathcal{P}</math> is the normalizing groupoid of the maximal commutative subalgebra of <math>\mathcal{B}(H)</math> consisting of all operators ''diagonalizable'' in the canonical basis.</ref>. In general given <math>u,v\in\mathcal{P}</math> we don't necessarily have <math>u+v\in\mathcal{P}</math>. However we have:

{{Proposition|
Let <math>u, v\in\mathcal{P}</math>. Then <math>u+v\in\mathcal{P}</math> iff <math>u</math> and <math>v</math> have disjoint domains and disjoint codomains, that is:

: <math>u+v\in\mathcal{P}</math> iff <math>uu^*vv^* = u^*uv^*v = 0</math>.
}}

{{Proof|
Suppose for contradiction that <math>e_n</math> is in the domains of both <math>u</math> and <math>v</math>. There are integers <math>p</math> and <math>q</math> such that <math>u(e_n) = e_p</math> and <math>v(e_n) = e_q</math>, thus <math>(u+v)(e_n) = e_p + e_q</math>, which is not a basis vector; therefore <math>u+v</math> is not a <math>p</math>-isometry. The argument for codomains is symmetric, considering adjoints. Conversely if the domains and the codomains are disjoint then <math>u+v</math> is the <math>p</math>-isometry induced by the union of the two partial permutations.
}}

As a corollary note that if <math>u+v=0</math> then <math>u=v=0</math>.
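The proposition can also be illustrated in the dict encoding: at the level of partial permutations the sum of two <math>p</math>-isometries, when it is one, is just the union of the two dictionaries (a sketch; the name <code>add</code> is ours):

```python
def add(phi, psi):
    """Sum of two p-isometries at the permutation level: it is a p-isometry
    exactly when domains (keys) and codomains (values) are disjoint."""
    if set(phi) & set(psi) or set(phi.values()) & set(psi.values()):
        return None  # u + v is not a p-isometry
    return {**phi, **psi}

assert add({0: 1}, {1: 0}) == {0: 1, 1: 0}  # disjoint: sum is u_{phi ∪ psi}
assert add({0: 1}, {0: 2}) is None          # domains both contain 0
assert add({0: 2}, {1: 2}) is None          # codomains both contain 2
```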

=== From operators to matrices: internalization/externalization ===

It will be convenient to view operators on <math>H</math> as acting on <math>H\oplus H</math>, and conversely. For this purpose we define an isomorphism <math>H\oplus H \cong H</math> by <math>x\oplus y\rightsquigarrow p(x)+q(y)</math> where <math>p:H\to H</math> and <math>q:H\to H</math> are partial isometries given by:

: <math>p(e_n) = e_{2n}</math>,
: <math>q(e_n) = e_{2n+1}</math>.

From the definition <math>p</math> and <math>q</math> have full domain, that is, they satisfy <math>p^* p = q^* q = 1</math>. On the other hand their codomains are disjoint, thus we have <math>p^*q = q^*p = 0</math>. As the sum of their codomains is the full space <math>H</math> we also have <math>pp^* + qq^* = 1</math>.

Note that we have chosen <math>p</math> and <math>q</math> in <math>\mathcal{P}</math>. However the choice is arbitrary: any two <math>p</math>-isometries with full domain and disjoint codomains would do the job.

Given an operator <math>u</math> on <math>H</math> we may ''externalize'' it, obtaining an operator <math>U</math> on <math>H\oplus H</math> defined by the matrix:

: <math>U = \begin{pmatrix}
u_{11} & u_{12}\\
u_{21} & u_{22}
\end{pmatrix}</math>

where the <math>u_{ij}</math>'s are given by:

: <math>u_{11} = p^*up</math>;
: <math>u_{12} = p^*uq</math>;
: <math>u_{21} = q^*up</math>;
: <math>u_{22} = q^*uq</math>.

The <math>u_{ij}</math>'s are called the ''external components'' of <math>u</math>. The externalization is functorial in the sense that if <math>v</math> is another operator externalized as:

: <math>V = \begin{pmatrix}
v_{11} & v_{12}\\
v_{21} & v_{22}
\end{pmatrix}
= \begin{pmatrix}
p^*vp & p^*vq\\
q^*vp & q^*vq
\end{pmatrix}
</math>

then the externalization of <math>uv</math> is <math>UV</math>.

As <math>pp^* + qq^* = 1</math> we have:

: <math>u = (pp^*+qq^*)u(pp^*+qq^*) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math>

which entails that externalization is reversible, its converse being called ''internalization''.
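In the dict encoding used in the previous illustrations, externalization and internalization become concrete index manipulations through <math>p(e_n) = e_{2n}</math> and <math>q(e_n) = e_{2n+1}</math>, and the round trip can be tested (a sketch with function names of our choosing):

```python
def externalize(phi):
    """External components of u_phi: component (i, j) maps n to m whenever
    phi sends the source index 2n + (j-1) to the target index 2m + (i-1),
    i.e. copy j of the basis into copy i (1 = even copy, 2 = odd copy)."""
    return {(i, j): {s // 2: phi[s] // 2 for s in phi
                     if s % 2 == j - 1 and phi[s] % 2 == i - 1}
            for i in (1, 2) for j in (1, 2)}

def internalize(comp):
    """Converse operation: u = p u11 p* + p u12 q* + q u21 p* + q u22 q*."""
    return {2 * n + j - 1: 2 * m + i - 1
            for (i, j), c in comp.items() for n, m in c.items()}

phi = {0: 3, 1: 2, 2: 5, 3: 0}
assert internalize(externalize(phi)) == phi
```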

If we suppose that <math>u</math> is a <math>p</math>-isometry then so are its components <math>u_{ij}</math>. Thus the formula above entails that the four terms of the sum have pairwise disjoint domains and pairwise disjoint codomains, from which we deduce:

{{Proposition|
If <math>u</math> is a <math>p</math>-isometry and the <math>u_{ij}</math> are its external components then:

* <math>u_{1j}</math> and <math>u_{2j}</math> have disjoint domains, that is <math>u_{1j}^*u_{1j}u_{2j}^*u_{2j} = 0</math> for <math>j=1,2</math>;
* <math>u_{i1}</math> and <math>u_{i2}</math> have disjoint codomains, that is <math>u_{i1}u_{i1}^*u_{i2}u_{i2}^* = 0</math> for <math>i=1,2</math>.
}}
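The disjointness stated by the proposition can be observed on any example in the dict encoding (an illustrative sketch, self-contained even though it repeats the `externalize` helper of the previous snippet):

```python
def externalize(phi):
    """External components of u_phi through p(n) = 2n, q(n) = 2n + 1."""
    return {(i, j): {s // 2: phi[s] // 2 for s in phi
                     if s % 2 == j - 1 and phi[s] % 2 == i - 1}
            for i in (1, 2) for j in (1, 2)}

c = externalize({0: 2, 2: 1, 1: 4, 3: 3})
for j in (1, 2):  # u_{1j} and u_{2j} have disjoint domains
    assert not set(c[1, j]) & set(c[2, j])
for i in (1, 2):  # u_{i1} and u_{i2} have disjoint codomains
    assert not set(c[i, 1].values()) & set(c[i, 2].values())
```

Disjointness here boils down to the injectivity of the underlying partial permutation together with the parity split of sources and targets.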

As an example of computation in <math>\mathcal{P}</math> let us check that the product of the final projectors of <math>pu_{11}p^*</math> and <math>pu_{12}q^*</math> is null:

: <math>\begin{align}
(pu_{11}p^*)(pu^*_{11}p^*)(pu_{12}q^*)(qu_{12}^*p^*)
&= pu_{11}u_{11}^*u_{12}u_{12}^*p^*\\
&= pp^*upp^*u^*pp^*uqq^*u^*pp^*\\
&= pp^*u(pp^*)(u^*pp^*u)qq^*u^*pp^*\\
&= pp^*u(u^*pp^*u)(pp^*)qq^*u^*pp^*\\
&= pp^*uu^*pp^*u(pp^*)(qq^*)u^*pp^*\\
&= 0
\end{align}</math>

where we used the fact that all projectors in <math>\mathcal{P}</math> commute, which is in particular the case for <math>pp^*</math> and <math>u^*pp^*u</math>.
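The commutation invoked in the last step can also be seen in the dict encoding: both <math>pp^*</math> and <math>u^*pp^*u</math> are partial identities there, and partial identities always commute (an illustrative sketch):

```python
def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    return {phi[n]: n for n in phi}

phi = {0: 3, 1: 2, 2: 4, 3: 1}
even = {n: n for n in range(8) if n % 2 == 0}          # pp*: identity on evens
pullback = compose(inverse(phi), compose(even, phi))   # u* pp* u
assert compose(even, pullback) == compose(pullback, even)
```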

== Interpreting the multiplicative connectives ==

Recall that when <math>u</math> and <math>v</math> are <math>p</math>-isometries we say they are dual when <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators. A ''type'' is a subset of <math>\mathcal{P}</math> that is equal to its bidual. In particular <math>X\orth</math> is a type for any <math>X\subset\mathcal{P}</math>. We say that <math>X</math> ''generates'' the type <math>X\biorth</math>.

=== The tensor and the linear application ===

If <math>u</math> and <math>v</math> are two <math>p</math>-isometries, summing them doesn't in general produce a <math>p</math>-isometry. However as <math>pup^*</math> and <math>qvq^*</math> have disjoint domains and disjoint codomains, it is true that <math>pup^* + qvq^*</math> is a <math>p</math>-isometry. Given two types <math>A</math> and <math>B</math>, we thus define their ''tensor'' by:

: <math>A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth</math>

Note the closure by bidual to make sure that we obtain a type.
||

− | |||

− | From what precedes we see that <math>A\tens B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form: |
||

− | : <math>\begin{pmatrix} |
||

− | u & 0\\ |
||

− | 0 & v |
||

− | \end{pmatrix}</math> |
||

{{Remark|
This so-called tensor resembles a sum rather than a product. We will stick to this terminology though, because it defines the interpretation of the tensor connective of linear logic.
}}

The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\limp B</math> is defined by:

: <math>A\limp B = (A\tens B\orth)\orth</math>.

Unfolding this definition we get:

: <math>A\limp B = \{u\in\mathcal{P}\text{ s.t. } \forall v\in A, \forall w\in B\orth,\, u.(pvp^* + qwq^*) \in\bot\}</math>.
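In the running dict encoding, the generator <math>pup^* + qvq^*</math> of the tensor is the partial permutation that lets <math>u</math> act on even indices and <math>v</math> on odd indices (a sketch; <code>tensor</code> is our name for this generator, not for the full bidual-closed type):

```python
def tensor(u, v):
    """p u p* + q v q*: u acts on the even copy of the basis, v on the odd
    one; the two summands automatically have disjoint domains and codomains."""
    return {**{2 * n: 2 * u[n] for n in u},
            **{2 * n + 1: 2 * v[n] + 1 for n in v}}

u = {0: 1, 1: 0}
v = {0: 0}
assert tensor(u, v) == {0: 2, 2: 0, 1: 1}
```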

=== The identity ===

Given a type <math>A</math> we are to find an operator <math>\iota</math> in type <math>A\limp A</math>, thus satisfying:

: <math>\forall u\in A, v\in A\orth,\, \iota(pup^* + qvq^*)\in\bot</math>.

An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. This is the case since <math>u</math> is in <math>A</math> and <math>v</math> in <math>A\orth</math>.

It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:

: <math>\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix}</math>.

We will see once composition is defined that the <math>\iota</math> operator is the interpretation of the identity proof, as expected.
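The nilpotency argument can be replayed concretely: in the dict encoding <math>\iota = pq^* + qp^*</math> swaps each even index <math>2n</math> with <math>2n+1</math>, and <math>\iota(pup^* + qvq^*)</math> is nilpotent exactly when <math>uv</math> is. The sketch below (with helpers of our choosing) uses the fact that a finite partial permutation is nilpotent iff it has no cycle, which shows up within <code>len(phi)</code> iterations:

```python
def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def nilpotent(phi):
    """A finite partial permutation is nilpotent iff iterating it
    empties the dictionary; a cycle would survive forever."""
    power = dict(phi)
    for _ in range(len(phi) + 1):
        if not power:
            return True
        power = compose(phi, power)
    return False

def tensor(u, v):  # p u p* + q v q*
    return {**{2 * n: 2 * u[n] for n in u},
            **{2 * n + 1: 2 * v[n] + 1 for n in v}}

def iota(size):    # p q* + q p*: swaps 2n and 2n + 1
    return {k: k ^ 1 for k in range(2 * size)}

# here uv is nilpotent, so iota applied to the tensor is nilpotent too
assert nilpotent(compose(iota(3), tensor({0: 1}, {1: 2})))
# here uv = {1: 1} is not nilpotent: the compound contains a cycle
assert not nilpotent(compose(iota(3), tensor({0: 1}, {1: 0})))
```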

=== The execution formula, version 1: application ===

{{Definition|
Let <math>u</math> and <math>v</math> be two operators; as above denote by <math>u_{ij}</math> the external components of <math>u</math>. If <math>u_{11}v</math> is nilpotent we define the ''application of <math>u</math> to <math>v</math>'' by:

: <math>\mathrm{App}(u,v) = u_{22} + u_{21}v\sum_k(u_{11}v)^ku_{12}</math>.
}}
||

− | |||

− | Note that the hypothesis that <math>u_{11}v</math> is nilpotent entails that the sum <math>\sum_k(u_{11}v)^k</math> is actually finite. It would be enough to assume that this sum converges. For simplicity we stick to the nilpotency condition, but we should mention that weak nilpotency would do as well. |
||
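Under the illustrative dict encoding of the previous snippets, the execution formula becomes a finite loop, terminating precisely because <math>u_{11}v</math> is nilpotent. This sketch takes the four external components directly as arguments (a presentation choice of ours; the article works with the internalized operator):

```python
def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def app(u11, u12, u21, u22, v):
    """App(u, v) = u22 + u21 v (sum over k of (u11 v)^k) u12, computed on
    partial permutations; the loop stops because u11∘v is assumed nilpotent."""
    result = dict(u22)
    step = dict(u12)               # (u11 v)^k u12, starting at k = 0
    while step:
        result.update(compose(compose(u21, v), step))
        step = compose(compose(u11, v), step)
    return result

# external components of iota = internalization of [[0, 1], [1, 0]]:
# u11 = u22 = 0 and u12 = u21 = 1, so applying iota should give back v
ident = {n: n for n in range(4)}
v = {0: 2, 1: 3}
assert app({}, ident, ident, {}, v) == v
```

The final assertion previews the role of <math>\iota</math> as the identity proof: its application to any <math>v</math> returns <math>v</math>.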

{{Theorem|
If <math>u</math> and <math>v</math> are <math>p</math>-isometries such that <math>u_{11}v</math> is nilpotent, then <math>\mathrm{App}(u,v)</math> is also a <math>p</math>-isometry.
}}

{{Proof|
Let us note <math>E_k = u_{21}v(u_{11}v)^ku_{12}</math>. Recall that <math>u_{22}</math> and <math>u_{12}</math>, being external components of the <math>p</math>-isometry <math>u</math>, have disjoint domains. Thus it is also the case of <math>u_{22}</math> and <math>E_k</math>. Similarly <math>u_{22}</math> and <math>E_k</math> have disjoint codomains because <math>u_{22}</math> and <math>u_{21}</math> have disjoint codomains.

Let now <math>k</math> and <math>l</math> be two integers such that <math>k>l</math> and let us compute for example the intersection of the codomains of <math>E_k</math> and <math>E_l</math>:

: <math>
E_kE^*_kE_lE^*_l = (u_{21}v(u_{11}v)^ku_{12})(u^*_{12}(v^*u^*_{11})^kv^*u^*_{21})(u_{21}v(u_{11}v)^lu_{12})(u^*_{12}(v^*u^*_{11})^lv^*u_{21}^*)
</math>

As <math>k>l</math> we may write <math>(v^*u_{11}^*)^k = (v^*u^*_{11})^{k-l-1}v^*u^*_{11}(v^*u^*_{11})^l</math>. Let us note <math>E = u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}</math> so that <math>E_kE^*_kE_lE^*_l = u_{21}v(u_{11}v)^ku_{12}u^*_{12}(v^*u^*_{11})^{k-l-1}v^*Eu^*_{12}(v^*u^*_{11})^lv^*u_{21}^*</math>. We have:

: <math>\begin{align}
E &= u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}\\
&= (u^*_{11}u_{11}u^*_{11})(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{12}\\
&= u^*_{11}(u_{11}u^*_{11})\bigl((v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^l\bigr)u_{12}\\
&= u^*_{11}\bigl((v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^l\bigr)(u_{11}u^*_{11})u_{12}\\
&= u^*_{11}(v^*u^*_{11})^lv^*u_{21}^*u_{21}v(u_{11}v)^lu_{11}u^*_{11}u_{12}\\
&= 0
\end{align}</math>

because <math>u_{11}</math> and <math>u_{12}</math> have disjoint codomains, thus <math>u^*_{11}u_{12} = 0</math>.

Similarly we can show that <math>E_k</math> and <math>E_l</math> have disjoint domains. Therefore we have proved that all terms of the sum <math>\mathrm{App}(u,v)</math> have disjoint domains and disjoint codomains. Consequently <math>\mathrm{App}(u,v)</math> is a <math>p</math>-isometry.
}}

{{Theorem|
Let <math>A</math> and <math>B</math> be two types and <math>u</math> a <math>p</math>-isometry. Then the two following conditions are equivalent:
# <math>u\in A\limp B</math>;
# for any <math>v\in A</math> we have:
#* <math>u_{11}v</math> is nilpotent and
#* <math>\mathrm{App}(u, v)\in B</math>.
}}

{{Proof|
Let <math>v</math> and <math>w</math> be two <math>p</math>-isometries. If we compute

: <math>(u.(pvp^* + qwq^*))^n = \bigl((pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*)(pvp^* + qwq^*)\bigr)^n</math>

we get a finite sum of monomial operators of the form:
# <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math>,
# <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{12}w(u_{22}w)^{i_m}q^*</math>,
# <math>q(u_{22}w)^{i_0}u_{21}v(u_{11}v)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math> or
# <math>q(u_{22}w)^{i_0}u_{21}v(u_{11}v)^{i_1}\dots u_{12}w(u_{22}w)^{i_m}q^*</math>,
for all tuples of (nonnegative) integers <math>(i_0,\dots, i_m)</math> such that <math>i_0+\cdots+i_m+m = n</math>.

Each of these monomials is a <math>p</math>-isometry. Furthermore they have disjoint domains and disjoint codomains because their sum is the <math>p</math>-isometry <math>(u.(pvp^* + qwq^*))^n</math>. This entails that <math>(u.(pvp^* + qwq^*))^n = 0</math> iff all these monomials are null.

Suppose <math>u_{11}v</math> is nilpotent and consider:

: <math>\bigl(\mathrm{App}(u,v)w\bigr)^n = \biggl(\bigl(u_{22} + u_{21}v\sum_k(u_{11}v)^k u_{12}\bigr)w\biggr)^n</math>.

Developing we get a finite sum of monomials of the form:

: 5. <math>(u_{22}w)^{l_0}u_{21}v(u_{11}v)^{k_1}u_{12}w(u_{22}w)^{l_1}\dots u_{21}v(u_{11}v)^{k_m}u_{12}w(u_{22}w)^{l_m}</math>

for all tuples <math>(l_0, k_1, l_1,\dots, k_m, l_m)</math> such that <math>l_0+\cdots+l_m + m = n</math> and each <math>k_i</math> is less than the degree of nilpotency of <math>u_{11}v</math>.

Again as these monomials are <math>p</math>-isometries and their sum is the <math>p</math>-isometry <math>(\mathrm{App}(u,v)w)^n</math>, they have pairwise disjoint domains and pairwise disjoint codomains. Note that each of these monomials is equal to <math>q^*Mq</math> where <math>M</math> is a monomial of type 4 above.

As before we thus have that <math>\bigl(\mathrm{App}(u,v)w\bigr)^n = 0</math> iff all monomials of type 5 are null.

Suppose now that <math>u\in A\limp B</math> and <math>v\in A</math>. Then, since <math>0\in B\orth</math> (<math>0</math> belongs to any type), <math>u.(pvp^*) = pu_{11}vp^*</math> is nilpotent, thus <math>u_{11}v</math> is nilpotent.

Suppose further that <math>w\in B\orth</math>. Then <math>u.(pvp^*+qwq^*)</math> is nilpotent, thus there is an <math>N</math> such that <math>(u.(pvp^* + qwq^*))^n=0</math> for any <math>n\geq N</math>. This entails that all monomials of type 1 to 4 are null. Therefore all monomials appearing in the development of <math>(\mathrm{App}(u,v)w)^N</math> are null, which proves that <math>\mathrm{App}(u,v)w</math> is nilpotent. Thus <math>\mathrm{App}(u,v)\in B</math>.

Conversely suppose that for any <math>v\in A</math> and <math>w\in B\orth</math>, <math>u_{11}v</math> and <math>\mathrm{App}(u,v)w</math> are nilpotent. Let <math>P</math> and <math>N</math> be their respective degrees of nilpotency and put <math>n=N(P+1)+N</math>. Then we claim that all monomials of type 1 to 4 appearing in the development of <math>(u.(pvp^*+qwq^*))^n</math> are null.

Consider for example a monomial of type 1:

: <math>p(u_{11}v)^{i_0}u_{12}w(u_{22}w)^{i_1}\dots u_{21}v(u_{11}v)^{i_m}p^*</math>

with <math>i_0+\cdots+i_m + m = n</math>. Note that <math>m</math> must be even.

If <math>i_{2k}\geq P</math> for some <math>0\leq k\leq m/2</math> then <math>(u_{11}v)^{i_{2k}}=0</math>, thus our monomial is null. Otherwise if <math>i_{2k}<P</math> for all <math>k</math> we have:

: <math>i_1+i_3+\cdots +i_{m-1} + m/2 = n - m/2 - (i_0+i_2+\cdots +i_m)</math>

thus:

: <math>i_1+i_3+\cdots +i_{m-1} + m/2\geq n - m/2 - (1+m/2)P</math>.

Now if <math>m/2\geq N</math> then <math>i_1+\cdots+i_{m-1}+m/2 \geq N</math>. Otherwise <math>1+m/2\leq N</math>, thus:

: <math>i_1+i_3+\cdots +i_{m-1} + m/2\geq n - N - NP = N</math>.

Since <math>N</math> is the degree of nilpotency of <math>\mathrm{App}(u,v)w</math>, the monomial:

: <math>(u_{22}w)^{i_1}u_{21}v(u_{11}v)^{i_2}u_{12}w\dots(u_{11}v)^{i_{m-2}}u_{12}w(u_{22}w)^{i_{m-1}}</math>

is null, and thus so is the monomial of type 1 we started with.
}}

{{Corollary|
If <math>A</math> and <math>B</math> are types then we have:

: <math>A\limp B = \{u\in\mathcal{P} \text{ such that }\forall v\in A: u_{11}v\in\bot\text{ and } \mathrm{App}(u, v)\in B\}</math>.
}}

As an example, if we compute the application of the interpretation of the identity <math>\iota</math> in type <math>A\limp A</math> to an operator <math>v\in A</math> then we have:

: <math>\mathrm{App}(\iota, v) = \iota_{22} + \iota_{21}v\sum_k(\iota_{11}v)^k\iota_{12}</math>.

Now recall that <math>\iota = pq^* + qp^*</math>, so that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math>, and we thus get:

: <math>\mathrm{App}(\iota, v) = v</math>

as expected.
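This identity can also be checked in a toy model (an illustrative sketch, assuming p-isometries are represented as injective partial maps on a truncated basis; the choice of <code>v</code> is arbitrary):

```python
# Toy model (assumption): p-isometries as injective partial maps.
DIM = 32

def compose(f, g):
    return {n: f[g[n]] for n in g if g[n] in f}

def adjoint(f):
    return {v: k for k, v in f.items()}

def add(*fs):
    out = {}
    for f in fs:
        assert not (out.keys() & f.keys()) and not (set(out.values()) & set(f.values()))
        out.update(f)
    return out

def is_nilpotent(f):
    g = dict(f)
    for _ in range(DIM):
        if not g:
            return True
        g = compose(g, f)
    return False

p = {n: 2 * n for n in range(DIM // 2)}
q = {n: 2 * n + 1 for n in range(DIM // 2)}

def externalize(u):
    ps, qs = adjoint(p), adjoint(q)
    return {(1, 1): compose(ps, compose(u, p)), (1, 2): compose(ps, compose(u, q)),
            (2, 1): compose(qs, compose(u, p)), (2, 2): compose(qs, compose(u, q))}

def App(u, v):
    """App(u, v) = u22 + u21 v sum_k (u11 v)^k u12."""
    c = externalize(u)
    assert is_nilpotent(compose(c[1, 1], v))
    terms, acc = [c[2, 2]], c[1, 2]
    while acc:
        terms.append(compose(c[2, 1], compose(v, acc)))
        acc = compose(c[1, 1], compose(v, acc))
    return add(*terms)

iota = add(compose(p, adjoint(q)), compose(q, adjoint(p)))  # iota = pq* + qp*
v = {0: 5, 3: 2}                                            # arbitrary operator
assert App(iota, v) == v     # the identity proof acts as the identity
```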

=== The tensor rule ===

Let now <math>A, A', B</math> and <math>B'</math> be types and consider two operators <math>u</math> and <math>u'</math> respectively in <math>A\limp B</math> and <math>A'\limp B'</math>. We define an operator denoted by <math>u\tens u'</math> by:

: <math>\begin{align}
u\tens u' &= ppp^*upp^*p^* + qpq^*upp^*p^* + ppp^*uqp^*q^* + qpq^*uqp^*q^*\\
&+ pqp^*u'pq^*p^* + qqq^*u'pq^*p^* + pqp^*u'qq^*q^* + qqq^*u'qq^*q^*
\end{align}</math>

Once again the notation is motivated by linear logic syntax and conflicts with linear algebra practice, since what we denote by <math>u\tens u'</math> actually is the internalization of the direct sum <math>u\oplus u'</math>.

Indeed if we think of <math>u</math> and <math>u'</math> as the internalizations of the matrices:

: <math>\begin{pmatrix}u_{11} & u_{12}\\
u_{21} & u_{22}
\end{pmatrix}</math> and <math>\begin{pmatrix}u'_{11} & u'_{12}\\
u'_{21} & u'_{22}
\end{pmatrix}</math>

then we may write:

: <math>\begin{align}
u\tens u' &= ppu_{11}p^*p^* + qpu_{21}p^*p^* + ppu_{12}p^*q^* + qpu_{22}p^*q^*\\
&+ pqu'_{11}q^*p^* + qqu'_{21}q^*p^* + pqu'_{12}q^*q^* + qqu'_{22}q^*q^*
\end{align}</math>

Thus the components of <math>u\tens u'</math> are given by:

: <math>(u\tens u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math>

and we see that <math>u\tens u'</math> is actually the internalization of the matrix:

: <math>\begin{pmatrix}
u_{11} & 0 & u_{12} & 0 \\
0 & u'_{11} & 0 & u'_{12} \\
u_{21} & 0 & u_{22} & 0 \\
0 & u'_{21} & 0 & u'_{22} \\
\end{pmatrix}</math>

We now show that if <math>u</math> and <math>u'</math> are in types <math>A\limp B</math> and <math>A'\limp B'</math>, then <math>u\tens u'</math> is in <math>A\tens A'\limp B\tens B'</math>. For this we consider <math>v</math> and <math>v'</math> respectively in <math>A</math> and <math>A'</math>, so that <math>pvp^* + qv'q^*</math> is in <math>A\tens A'</math>, and we show that <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)\in B\tens B'</math>.

Since <math>u</math> and <math>u'</math> are in <math>A\limp B</math> and <math>A'\limp B'</math> we have that <math>\mathrm{App}(u, v)</math> and <math>\mathrm{App}(u', v')</math> are respectively in <math>B</math> and <math>B'</math>, thus:

: <math>p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^* \in B\tens B'</math>.

We know that both <math>u_{11}v</math> and <math>u'_{11}v'</math> are nilpotent. But we have:

: <math>\begin{align}
\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^n
&= \bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^n\\
&= (pu_{11}vp^* + qu'_{11}v'q^*)^n\\
&= p(u_{11}v)^np^* + q(u'_{11}v')^nq^*
\end{align}</math>

Therefore <math>(u\tens u')_{11}(pvp^* + qv'q^*)</math> is nilpotent. So we can compute <math>\mathrm{App}(u\tens u', pvp^* + qv'q^*)</math>:

: <math>\begin{align}
&\mathrm{App}(u\tens u', pvp^* + qv'q^*)\\
&= (u\tens u')_{22} + (u\tens u')_{21}(pvp^* + qv'q^*)\sum_k\bigl((u\tens u')_{11}(pvp^* + qv'q^*)\bigr)^k(u\tens u')_{12}\\
&= pu_{22}p^* + qu'_{22}q^* + (pu_{21}p^* + qu'_{21}q^*)(pvp^* + qv'q^*)\sum_k\bigl((pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*)\bigr)^k(pu_{12}p^* + qu'_{12}q^*)\\
&= p\bigl(u_{22} + u_{21}v\sum_k(u_{11}v)^ku_{12}\bigr)p^* + q\bigl(u'_{22} + u'_{21}v'\sum_k(u'_{11}v')^ku'_{12}\bigr)q^*\\
&= p\mathrm{App}(u, v)p^* + q\mathrm{App}(u', v')q^*
\end{align}</math>

which thus lives in <math>B\tens B'</math>.
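The componentwise behaviour of application under the tensor can be tested on small examples. The sketch below is a toy model (an assumption of the example: p-isometries as injective partial maps on a truncated basis; all concrete operators are arbitrary choices):

```python
# Toy model (assumption): p-isometries as injective partial maps.
DIM = 32

def compose(f, g):
    return {n: f[g[n]] for n in g if g[n] in f}

def adjoint(f):
    return {v: k for k, v in f.items()}

def add(*fs):
    out = {}
    for f in fs:
        assert not (out.keys() & f.keys()) and not (set(out.values()) & set(f.values()))
        out.update(f)
    return out

def is_nilpotent(f):
    g = dict(f)
    for _ in range(DIM):
        if not g:
            return True
        g = compose(g, f)
    return False

p = {n: 2 * n for n in range(DIM // 2)}
q = {n: 2 * n + 1 for n in range(DIM // 2)}

def externalize(u):
    ps, qs = adjoint(p), adjoint(q)
    return {(1, 1): compose(ps, compose(u, p)), (1, 2): compose(ps, compose(u, q)),
            (2, 1): compose(qs, compose(u, p)), (2, 2): compose(qs, compose(u, q))}

def internalize(c11, c12, c21, c22):
    """p c11 p* + p c12 q* + q c21 p* + q c22 q*."""
    ps, qs = adjoint(p), adjoint(q)
    return add(compose(p, compose(c11, ps)), compose(p, compose(c12, qs)),
               compose(q, compose(c21, ps)), compose(q, compose(c22, qs)))

def App(u, v):
    c = externalize(u)
    assert is_nilpotent(compose(c[1, 1], v))
    terms, acc = [c[2, 2]], c[1, 2]
    while acc:
        terms.append(compose(c[2, 1], compose(v, acc)))
        acc = compose(c[1, 1], compose(v, acc))
    return add(*terms)

def tensor(u1, u2):
    """(u1 tens u2)_ij = p (u1)_ij p* + q (u2)_ij q*, then internalize."""
    c1, c2 = externalize(u1), externalize(u2)
    b = {ij: add(compose(p, compose(c1[ij], adjoint(p))),
                 compose(q, compose(c2[ij], adjoint(q)))) for ij in c1}
    return internalize(b[1, 1], b[1, 2], b[2, 1], b[2, 2])

iota = add(compose(p, adjoint(q)), compose(q, adjoint(p)))

v1, v2 = {0: 1}, {0: 2}                       # arbitrary elements of A and A'
m = add(compose(compose(p, v1), adjoint(p)),  # p v1 p* + q v2 q*
        compose(compose(q, v2), adjoint(q)))
assert App(tensor(iota, iota), m) == m        # identity tensor identity: identity

w1, w2 = {0: 3}, {0: 4}
u1 = compose(q, compose(w1, adjoint(q)))      # q w1 q*, so App(u1, x) = w1
u2 = compose(q, compose(w2, adjoint(q)))
lhs = App(tensor(u1, u2), m)
rhs = add(compose(compose(p, App(u1, v1)), adjoint(p)),
          compose(compose(q, App(u2, v2)), adjoint(q)))
assert lhs == rhs                             # componentwise application
```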

=== Other monoidal constructions ===

==== Contraposition ====

Let <math>A</math> and <math>B</math> be some types; we have:

: <math>A\limp B = A\orth\limpinv B\orth</math>

Indeed, <math>u\in A\limp B</math> means that for any <math>v</math> and <math>w</math> in respectively <math>A</math> and <math>B\orth</math> we have <math>u.(pvp^* + qwq^*)\in\bot</math>, which is exactly the definition of <math>A\orth\limpinv B\orth</math>.

We will denote by <math>u\orth</math> the operator:

: <math>u\orth = pu_{22}p^* + pu_{21}q^* + qu_{12}p^* + qu_{11}q^*</math>

where <math>u_{ij}</math> is given by externalization. Therefore the externalization of <math>u\orth</math> is:

: <math>(u\orth)_{ij} = u_{\bar i\,\bar j}</math> where <math>\bar .</math> is defined by <math>\bar1 = 2, \bar2 = 1</math>.

From this we deduce that <math>u\orth\in B\orth\limp A\orth</math> and that <math>(u\orth)\orth = u</math>.
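In the toy partial-permutation model (an illustrative assumption, with p-isometries as injective partial maps on a truncated basis), contraposition is just the exchange of the two external components, i.e. conjugation by the swap <math>pq^*+qp^*</math>, and it is involutive:

```python
DIM = 32

def compose(f, g):
    return {n: f[g[n]] for n in g if g[n] in f}

def adjoint(f):
    return {v: k for k, v in f.items()}

def add(*fs):
    out = {}
    for f in fs:
        assert not (out.keys() & f.keys()) and not (set(out.values()) & set(f.values()))
        out.update(f)
    return out

p = {n: 2 * n for n in range(DIM // 2)}
q = {n: 2 * n + 1 for n in range(DIM // 2)}

def externalize(u):
    ps, qs = adjoint(p), adjoint(q)
    return {(1, 1): compose(ps, compose(u, p)), (1, 2): compose(ps, compose(u, q)),
            (2, 1): compose(qs, compose(u, p)), (2, 2): compose(qs, compose(u, q))}

def internalize(c11, c12, c21, c22):
    ps, qs = adjoint(p), adjoint(q)
    return add(compose(p, compose(c11, ps)), compose(p, compose(c12, qs)),
               compose(q, compose(c21, ps)), compose(q, compose(c22, qs)))

def perp(u):
    """u-perp: (u-perp)_ij = u_{bar i, bar j}."""
    c = externalize(u)
    return internalize(c[2, 2], c[2, 1], c[1, 2], c[1, 1])

iota = add(compose(p, adjoint(q)), compose(q, adjoint(p)))  # the swap pq* + qp*

u = {0: 1, 1: 4, 4: 2}          # an arbitrary p-isometry
assert perp(u) == compose(iota, compose(u, iota))   # conjugation by the swap
assert perp(perp(u)) == u                           # involutivity
```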

==== Commutativity ====

Let <math>\sigma</math> be the operator:

: <math>\sigma = ppq^*q^* + pqp^*q^* + qpq^*p^* + qqp^*p^*</math>.

One can check that <math>\sigma</math> is the internalization of the operator <math>S</math> on <math>H\oplus H\oplus H\oplus H</math> defined by: <math>S(x_1\oplus x_2\oplus x_3\oplus x_4) = x_4\oplus x_3\oplus x_2\oplus x_1</math>. In particular the components of <math>\sigma</math> are:

: <math>\sigma_{11} = \sigma_{22} = 0</math>;
: <math>\sigma_{12} = \sigma_{21} = pq^* + qp^*</math>.

Let <math>A</math> and <math>B</math> be types and <math>u</math> and <math>v</math> be operators in <math>A</math> and <math>B</math>. Then <math>pup^* + qvq^*</math> is in <math>A\tens B</math> and as <math>\sigma_{11}(pup^* + qvq^*) = 0</math> we may compute:

: <math>\begin{align}
\mathrm{App}(\sigma, pup^* + qvq^*)
&= \sigma_{22} + \sigma_{21}(pup^* + qvq^*)\sum_k(\sigma_{11}(pup^* + qvq^*))^k\sigma_{12}\\
&= (pq^* + qp^*)(pup^* + qvq^*)(pq^* + qp^*)\\
&= pvp^* + quq^*
\end{align}</math>

But <math>pvp^* + quq^*\in B\tens A</math>, thus we have shown that:

: <math>\sigma\in (A\tens B) \limp (B\tens A)</math>.
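The action of <math>\sigma</math> can be replayed in the toy partial-permutation model (an illustrative assumption: p-isometries as injective partial maps on a truncated basis; <code>u</code> and <code>v</code> are arbitrary):

```python
DIM = 32

def compose(f, g):
    return {n: f[g[n]] for n in g if g[n] in f}

def adjoint(f):
    return {v: k for k, v in f.items()}

def add(*fs):
    out = {}
    for f in fs:
        assert not (out.keys() & f.keys()) and not (set(out.values()) & set(f.values()))
        out.update(f)
    return out

def is_nilpotent(f):
    g = dict(f)
    for _ in range(DIM):
        if not g:
            return True
        g = compose(g, f)
    return False

p = {n: 2 * n for n in range(DIM // 2)}
q = {n: 2 * n + 1 for n in range(DIM // 2)}

def externalize(u):
    ps, qs = adjoint(p), adjoint(q)
    return {(1, 1): compose(ps, compose(u, p)), (1, 2): compose(ps, compose(u, q)),
            (2, 1): compose(qs, compose(u, p)), (2, 2): compose(qs, compose(u, q))}

def App(u, v):
    c = externalize(u)
    assert is_nilpotent(compose(c[1, 1], v))
    terms, acc = [c[2, 2]], c[1, 2]
    while acc:
        terms.append(compose(c[2, 1], compose(v, acc)))
        acc = compose(c[1, 1], compose(v, acc))
    return add(*terms)

pp, pq = compose(p, p), compose(p, q)   # the four address words
qp, qq = compose(q, p), compose(q, q)
sigma = add(compose(pp, adjoint(qq)), compose(pq, adjoint(qp)),
            compose(qp, adjoint(pq)), compose(qq, adjoint(pp)))

u, v = {0: 1}, {0: 3}                   # arbitrary elements of A and B
m = add(compose(compose(p, u), adjoint(p)),   # p u p* + q v q*
        compose(compose(q, v), adjoint(q)))
expected = add(compose(compose(p, v), adjoint(p)),   # p v p* + q u q*
               compose(compose(q, u), adjoint(q)))
assert App(sigma, m) == expected        # sigma exchanges the two components
```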

==== Distributivity ====

We get distributivity by considering the operator:

: <math>\delta = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math>

which is similarly shown to be of type <math>A\tens(B\tens C)\limp(A\tens B)\tens C</math> for any types <math>A</math>, <math>B</math> and <math>C</math>.
==== Weak distributivity ====

We can finally get weak distributivity thanks to the operators:

: <math>\delta_1 = pppp^*q^* + ppqp^*q^*q^* + pqq^*q^*q^* + qpp^*p^*p^* + qqpq^*p^*p^* + qqqq^*p^*</math> and
: <math>\delta_2 = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math>.

Given three types <math>A</math>, <math>B</math> and <math>C</math>, one can show that:

: <math>\delta_1</math> has type <math>((A\limp B)\tens C)\limp A\limp (B\tens C)</math> and
: <math>\delta_2</math> has type <math>(A\tens(B\limp C))\limp (A\limp B)\limp C</math>.

=== Execution formula, version 2: composition ===

Let <math>A</math>, <math>B</math> and <math>C</math> be types and <math>u</math> and <math>v</math> be operators respectively in types <math>A\limp B</math> and <math>B\limp C</math>.

As usual we will denote by <math>u_{ij}</math> and <math>v_{ij}</math> the operators obtained by externalization of <math>u</math> and <math>v</math>, e.g., <math>u_{11} = p^*up</math>, ...

As <math>u</math> is in <math>A\limp B</math> we have that <math>\mathrm{App}(u, 0)=u_{22}\in B</math>; similarly as <math>v\in B\limp C</math>, thus <math>v\orth\in C\orth\limp B\orth</math>, we have <math>\mathrm{App}(v\orth, 0) = v_{11}\in B\orth</math>. Thus <math>u_{22}v_{11}</math> is nilpotent.

We define the operator <math>\mathrm{Comp}(u, v)</math> by:

: <math>\begin{align}
\mathrm{Comp}(u, v) &= p(u_{11} + u_{12}\sum_k(v_{11}u_{22})^k\,v_{11}u_{21})p^*\\
&+ p(u_{12}\sum_k(v_{11}u_{22})^k\,v_{12})q^*\\
&+ q(v_{21}\sum_k(u_{22}v_{11})^k\,u_{21})p^*\\
&+ q(v_{22} + v_{21}\sum_k(u_{22}v_{11})^k\,u_{22}v_{12})q^*
\end{align}</math>

This is well defined since <math>u_{22}v_{11}</math> is nilpotent. As an example let us compute the composition of <math>u</math> and <math>\iota</math> in type <math>B\limp B</math>; recall that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math>, so we get:

: <math>\mathrm{Comp}(u, \iota) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^* = u</math>

A similar computation shows that <math>\mathrm{Comp}(\iota, v) = v</math> (we use <math>pp^* + qq^* = 1</math> here).

Coming back to the general case we claim that <math>\mathrm{Comp}(u, v)</math> is in <math>A\limp C</math>: let <math>a</math> be an operator in <math>A</math>. By computation we can check that:

: <math>\mathrm{App}(\mathrm{Comp}(u, v), a) = \mathrm{App}(v, \mathrm{App}(u, a))</math>.

Now since <math>u</math> is in <math>A\limp B</math>, <math>\mathrm{App}(u, a)</math> is in <math>B</math>, and since <math>v</math> is in <math>B\limp C</math>, <math>\mathrm{App}(v, \mathrm{App}(u, a))</math> is in <math>C</math>.

If we now consider a type <math>D</math> and an operator <math>w</math> in <math>C\limp D</math> then we have:

: <math>\mathrm{Comp}(\mathrm{Comp}(u, v), w) = \mathrm{Comp}(u, \mathrm{Comp}(v, w))</math>.

Putting together the results of this section we finally have:

{{Theorem|
Let GoI(H) be defined by:
* objects are types, ''i.e.'' sets <math>A</math> of operators satisfying <math>A\biorth = A</math>;
* morphisms from <math>A</math> to <math>B</math> are operators in type <math>A\limp B</math>;
* composition is given by the formula above.

Then GoI(H) is a star-autonomous category.
}}
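Composition too can be sketched in the toy partial-permutation model (an illustrative assumption: p-isometries as injective partial maps on a truncated basis; the sums are computed iteratively, which is legitimate by nilpotency):

```python
DIM = 32

def compose(f, g):
    return {n: f[g[n]] for n in g if g[n] in f}

def adjoint(f):
    return {v: k for k, v in f.items()}

def add(*fs):
    out = {}
    for f in fs:
        assert not (out.keys() & f.keys()) and not (set(out.values()) & set(f.values()))
        out.update(f)
    return out

def is_nilpotent(f):
    g = dict(f)
    for _ in range(DIM):
        if not g:
            return True
        g = compose(g, f)
    return False

p = {n: 2 * n for n in range(DIM // 2)}
q = {n: 2 * n + 1 for n in range(DIM // 2)}

def externalize(u):
    ps, qs = adjoint(p), adjoint(q)
    return {(1, 1): compose(ps, compose(u, p)), (1, 2): compose(ps, compose(u, q)),
            (2, 1): compose(qs, compose(u, p)), (2, 2): compose(qs, compose(u, q))}

def internalize(c11, c12, c21, c22):
    ps, qs = adjoint(p), adjoint(q)
    return add(compose(p, compose(c11, ps)), compose(p, compose(c12, qs)),
               compose(q, compose(c21, ps)), compose(q, compose(c22, qs)))

def App(u, v):
    c = externalize(u)
    assert is_nilpotent(compose(c[1, 1], v))
    terms, acc = [c[2, 2]], c[1, 2]
    while acc:
        terms.append(compose(c[2, 1], compose(v, acc)))
        acc = compose(c[1, 1], compose(v, acc))
    return add(*terms)

def Comp(u, v):
    """Comp(u, v): u and v interact through the shared interface B."""
    cu, cv = externalize(u), externalize(v)
    uv = compose(cu[2, 2], cv[1, 1])   # u22 v11
    vu = compose(cv[1, 1], cu[2, 2])   # v11 u22
    assert is_nilpotent(uv)

    def series(left, step, start):
        """All terms left (step)^k start, k = 0, 1, ... (finite by nilpotency)."""
        terms, acc = [], start
        while acc:
            terms.append(compose(left, acc))
            acc = compose(step, acc)
        return terms

    c11 = add(cu[1, 1], *series(cu[1, 2], vu, compose(cv[1, 1], cu[2, 1])))
    c12 = add(*series(cu[1, 2], vu, cv[1, 2]))
    c21 = add(*series(cv[2, 1], uv, cu[2, 1]))
    c22 = add(cv[2, 2], *series(cv[2, 1], uv, compose(cu[2, 2], cv[1, 2])))
    return internalize(c11, c12, c21, c22)

iota = add(compose(p, adjoint(q)), compose(q, adjoint(p)))

u = {0: 1, 1: 2, 2: 0}                         # an arbitrary p-isometry
assert Comp(u, iota) == u and Comp(iota, u) == u   # iota is the identity
a = {1: 0}
assert App(Comp(u, iota), a) == App(iota, App(u, a))   # App/Comp compatibility
```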


## Latest revision as of 16:39, 30 September 2011

The *geometry of interaction*, GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operators algebra: formulae were interpreted by Hilbert spaces and proofs by partial isometries.

This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of *A* ⊸ *B* as a morphism *from* *A* *to* *B* and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting *on* *A* ⊸ *B*, that is a morphism from *A* ⊸ *B* to *A* ⊸ *B*. For proof composition the problem was then, given an operator on *A* ⊸ *B* and another one on *B* ⊸ *C*, to construct a new operator on *A* ⊸ *C*. This problem was solved by the *execution formula*, which bears some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an *operational semantics*, as opposed to traditional denotational semantics.

The first instance of the GoI was restricted to the *MELL* fragment of linear logic (the multiplicative and exponential fragment), which is enough to encode lambda-calculus. Since then Girard has proposed several improvements: firstly the extension to the additive connectives, known as *Geometry of Interaction 3*, and more recently a complete reformulation using von Neumann algebras that makes it possible to deal with some aspects of implicit complexity.

The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model, exhibiting its combinatorial nature through a theory of reduction of paths in proof-nets and showing the link with abstract machines; the execution formula appears as the composition of two automata interacting through a common interface. The execution formula has also rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the *MELL* fragment has been reformulated in the framework of traced monoidal categories, following an idea originally proposed by Joyal.


# The Geometry of Interaction as operators

The original construction of GoI by Girard follows a general pattern already mentioned in the section on coherent semantics under the name *symmetric reducibility*, and that was first put to use in phase semantics. First set a general space *P* called the *proof space*, because this is where the interpretations of proofs will live. Make sure that *P* is a (not necessarily commutative) monoid. In the case of GoI, the proof space is a subset of the space of bounded operators on a separable Hilbert space.

Second define a particular subset of *P* that will be denoted by ⊥; then derive a duality on *P*: for *u*, *v* ∈ *P*, *u* and *v* are dual^{[1]} iff *uv* ∈ ⊥.

Such a duality defines an orthogonality relation, with the usual derived definitions and properties.

For the GoI, two dualities have proved to work; we will consider the first one: nilpotency, *ie*, ⊥ is the set of nilpotent operators in *P*. Let us make this explicit: two operators *u* and *v* are dual if there is a nonnegative integer *n* such that (*uv*)^{n} = 0. This duality is symmetric: if *uv* is nilpotent then *vu* is nilpotent also.

Last define a *type* as a subset *T* of the proof space that is equal to its bidual: *T*^{⊥⊥} = *T*. This means that *u* ∈ *T*^{⊥⊥} = *T* iff for all operators *v* ∈ *T*^{⊥}, that is such that *u*′*v* ∈ ⊥ for all *u*′ ∈ *T*, we have *uv* ∈ ⊥.

The real work^{[2]} is now to interpret logical operations, that is to associate a type to each formula, an operator to each proof, and show the *adequacy lemma*: if *u* is the interpretation of a proof of the formula *A* then *u* belongs to the type associated to *A*.

## Partial isometries

The first step is to build the proof space. This is constructed as a special set of partial isometries on a separable Hilbert space *H* which turns out to be generated by partial permutations on the canonical basis of *H*.

These so-called *p-isometries* enjoy some nice properties, the most important one being that a *p*-isometry is a sum of *p*-isometries iff all the terms of the sum have disjoint domains and disjoint codomains. As a consequence a sum of *p*-isometries is null iff each term of the sum is null.
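In concrete terms, a small partial-permutation model can be set up in a few lines. The sketch below is an assumption of this example (a basis truncated to DIM vectors, with *p* and *q* realized as the even/odd injections); it checks the expected algebraic identities, and that the sum *p* + *q*, whose terms share their domain, is rejected:

```python
# Minimal sketch (assumption): partial permutations of a truncated basis
# {0, ..., DIM-1}, with p, q the even/odd injections.
DIM = 16

def compose(f, g):
    """Operator product fg: apply g first, then f."""
    return {n: f[g[n]] for n in g if g[n] in f}

def adjoint(f):
    """Adjoint of a partial permutation = its inverse map."""
    return {v: k for k, v in f.items()}

def add(*fs):
    """Sum of p-isometries: defined only for disjoint domains and codomains."""
    out = {}
    for f in fs:
        assert not (out.keys() & f.keys()) and not (set(out.values()) & set(f.values()))
        out.update(f)
    return out

p = {n: 2 * n for n in range(DIM // 2)}
q = {n: 2 * n + 1 for n in range(DIM // 2)}

one = {n: n for n in range(DIM // 2)}
assert compose(adjoint(p), p) == one          # p*p = 1
assert compose(adjoint(q), q) == one          # q*q = 1
assert compose(adjoint(p), q) == {}           # p*q = 0
assert add(compose(p, adjoint(p)),            # pp* + qq* = 1 (disjoint ranges)
           compose(q, adjoint(q))) == {n: n for n in range(DIM)}

# p and q share their domain, so p + q is NOT a sum of this kind:
try:
    add(p, q)
    disjoint = True
except AssertionError:
    disjoint = False
assert not disjoint
```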

A second important property is that operators on *H* can be *externalized* using *p*-isometries into operators acting on *H* ⊕ *H*, and conversely operators on *H* ⊕ *H* may be *internalized* into operators on *H*. This is widely used in the sequel.
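Externalization and internalization are inverse to each other, which can be checked in the partial-permutation sketch (an illustrative assumption: operators as injective partial maps on a truncated basis):

```python
DIM = 16

def compose(f, g):
    return {n: f[g[n]] for n in g if g[n] in f}

def adjoint(f):
    return {v: k for k, v in f.items()}

def add(*fs):
    out = {}
    for f in fs:
        assert not (out.keys() & f.keys()) and not (set(out.values()) & set(f.values()))
        out.update(f)
    return out

p = {n: 2 * n for n in range(DIM // 2)}
q = {n: 2 * n + 1 for n in range(DIM // 2)}

def externalize(u):
    """Components of u seen as an operator on H + H: u11 = p*up, etc."""
    ps, qs = adjoint(p), adjoint(q)
    return {(1, 1): compose(ps, compose(u, p)), (1, 2): compose(ps, compose(u, q)),
            (2, 1): compose(qs, compose(u, p)), (2, 2): compose(qs, compose(u, q))}

def internalize(c11, c12, c21, c22):
    """p c11 p* + p c12 q* + q c21 p* + q c22 q*."""
    ps, qs = adjoint(p), adjoint(q)
    return add(compose(p, compose(c11, ps)), compose(p, compose(c12, qs)),
               compose(q, compose(c21, ps)), compose(q, compose(c22, qs)))

u = {0: 1, 1: 4, 4: 2}     # an arbitrary p-isometry on H
c = externalize(u)         # four components of an operator on H + H
assert internalize(c[1, 1], c[1, 2], c[2, 1], c[2, 2]) == u
```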

## The *-autonomous structure

The second step is to interpret the linear logic multiplicative operations, most importantly the cut rule.

Internalization/externalization is the key for this: typically the type *A* ⊸ *B* is interpreted by a set of *p*-isometries which are internalizations of operators acting on *H* ⊕ *H*.

The (interpretation of the) cut rule is defined in two steps: first we use nilpotency to define an operation corresponding to lambda-calculus application which, given two *p*-isometries respectively in *A* ⊸ *B* and *A*, produces an operator in *B*. From this we deduce the composition and finally obtain a structure of *-autonomous category, that is a model of multiplicative linear logic.

## The exponentials

Finally we turn to the exponentials, that is the connectives managing duplication. To do this we introduce an isomorphism (induced by a *p*-isometry) between *H* and *H* ⊗ *H*: the first component of the tensor is intended to hold the address of the copy whereas the second component contains the content of the copy.
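On basis indices, such an isomorphism amounts to a bijection ℕ × ℕ → ℕ pairing an address with a content. The Cantor pairing used below is an illustrative choice (an assumption of this sketch; any bijection would do):

```python
# Sketch (assumption): the exponential isomorphism between H and H tensor H,
# seen on basis indices as a bijection N x N -> N (here: Cantor pairing).
def pair(address, content):
    """Encode the pair (address of the copy, content of the copy)."""
    return (address + content) * (address + content + 1) // 2 + content

# Injectivity on a finite grid: distinct (address, content) pairs receive
# distinct basis indices.
codes = {pair(a, c) for a in range(32) for c in range(32)}
assert len(codes) == 32 * 32
```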

We eventually get a quasi-model of full MELL: quasi in the sense that, although we can construct *p*-isometries for the usual structural operations in MELL (contraction, dereliction, digging), the interpretation of linear logic proofs is not invariant w.r.t. cut elimination in general. It is however invariant in some good cases, which are enough to get a correctness theorem for the interpretation.

# The Geometry of Interaction as an abstract machine

# Notes and references

- ↑ In modern terms one says that *u* and *v* are *polar*.
- ↑ The difficulty is to find the right duality that will make logical operations interpretable. General conditions that allow one to achieve this have been formulated by Hyland and Schalk thanks to their theory of *double glueing*.