Geometry of interaction

Revision as of 17:36, 23 April 2010
The geometry of interaction, GoI in short, was defined in the early nineties by Girard as an interpretation of linear logic into operator algebras: formulae were interpreted by Hilbert spaces and proofs by partial isometries.

This was a striking novelty as it was the first time that a mathematical model of logic (lambda-calculus) didn't interpret a proof of <math>A\vdash B</math> as a morphism from <math>A</math> to <math>B</math>[1] and proof composition (cut rule) as the composition of morphisms. Rather the proof was interpreted as an operator acting on <math>A\limp B</math>, that is a morphism from <math>A\limp B</math> to <math>A\limp B</math>. For proof composition the problem was then, given an operator on <math>A\limp B</math> and another one on <math>B\limp C</math>, to construct a new operator on <math>A\limp C</math>. This problem was solved by the ''execution formula'', which bears some formal analogies with Kleene's formula for recursive functions. For this reason GoI was claimed to be an operational semantics, as opposed to traditional denotational semantics.

The first instance of the GoI was restricted to the MELL fragment of linear logic (the multiplicative and exponential fragment), which is enough to encode lambda-calculus. Since then Girard proposed several improvements: firstly the extension to the additive connectives, known as ''Geometry of Interaction 3'', and more recently a complete reformulation using von Neumann algebras that allows one to deal with some aspects of implicit complexity.

The GoI has been a source of inspiration for various authors. Danos and Regnier have reformulated the original model, exhibiting its combinatorial nature using a theory of reduction of paths in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata that interact through their common interface. Also the execution formula has rapidly been understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally the original GoI for the MELL fragment has been reformulated in the framework of traced monoidal categories, following an idea originally proposed by Joyal.
= The Geometry of Interaction as operators =
The original construction of GoI by Girard follows a general pattern already mentioned in [[coherent semantics]] under the name ''symmetric reducibility'', and that was first put to use in [[phase semantics]]. First set a general space <math>P</math> called the ''proof space'' because this is where the interpretations of proofs will live. Make sure that <math>P</math> is a (not necessarily commutative) monoid. In the case of GoI, the proof space is a subset of the space of bounded operators on <math>\ell^2</math>.
Second define a particular subset of <math>P</math> that will be denoted by <math>\bot</math>; then derive a duality on <math>P</math>: for <math>u,v\in P</math>, <math>u</math> and <math>v</math> are ''dual''<ref>In modern terms one says that <math>u</math> and <math>v</math> are ''polar''.</ref> iff <math>uv\in\bot</math>.
For the GoI, two dualities have proved to work; we will consider the first one: nilpotency, ''i.e.'', <math>\bot</math> is the set of nilpotent operators in <math>P</math>. Let us spell this out: two operators <math>u</math> and <math>v</math> are dual if there is a nonnegative integer <math>n</math> such that <math>(uv)^n = 0</math>. Note in particular that <math>uv\in\bot</math> iff <math>vu\in\bot</math>, since <math>(vu)^{n+1} = v(uv)^nu</math>.
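To make the duality concrete, here is a small finite-dimensional sketch (our own illustration, truncating <math>\ell^2</math> to <math>\mathbb{C}^3</math>; the article itself works on the full space): two partial isometries whose product is nilpotent, hence dual.

```python
import numpy as np

# u maps e0 -> e1 and e1 -> e2; v keeps e1 and kills e0, e2.
# Both are 0/1 matrices of partial injections on the basis, hence partial isometries.
u = np.zeros((3, 3)); u[1, 0] = 1; u[2, 1] = 1
v = np.zeros((3, 3)); v[1, 1] = 1

assert np.array_equal(u @ u.T @ u, u)              # u u* u = u: partial isometry
uv = u @ v
assert np.count_nonzero(uv) == 1                   # uv is not zero ...
assert not np.linalg.matrix_power(uv, 2).any()     # ... but (uv)^2 = 0: u, v are dual
assert not np.linalg.matrix_power(v @ u, 2).any()  # and vu is nilpotent as well
```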
When <math>X</math> is a subset of <math>P</math> define <math>X\orth</math> as the set of elements of <math>P</math> that are dual to all elements of <math>X</math>:
: <math>X\orth = \{u\in P,\, \forall v\in X,\, uv\in\bot\}</math>.
This construction has a few properties that we will use without mention in the sequel. Given two subsets <math>X</math> and <math>Y</math> of <math>P</math> we have:
* if <math>X\subset Y</math> then <math>Y\orth\subset X\orth</math>;
* <math>X\subset X\biorth</math>;
* <math>X\triorth = X\orth</math>.
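The third property follows from the first two; the derivation is the standard one for any orthogonality-style duality and can be spelled out as:

```latex
% Applying X \subset X^{\bot\bot} to the set X^\bot gives one inclusion:
%   X^\bot \subset (X^\bot)^{\bot\bot} = X^{\bot\bot\bot}.
% For the other, start from X \subset X^{\bot\bot} and use antitonicity
% (X \subset Y implies Y^\bot \subset X^\bot):
%   X^{\bot\bot\bot} = (X^{\bot\bot})^\bot \subset X^\bot.
\[
  X^{\bot} \subset X^{\bot\bot\bot}
  \quad\text{and}\quad
  X^{\bot\bot\bot} \subset X^{\bot}
  \quad\Longrightarrow\quad
  X^{\bot\bot\bot} = X^{\bot}.
\]
```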
Last define a ''type'' as a subset <math>T</math> of the proof space that is equal to its bidual: <math>T = T\biorth</math>. This means that <math>u\in T</math> iff for all operators <math>v\in T\orth</math>, that is such that <math>u'v\in\bot</math> for all <math>u'\in T</math>, we have <math>uv\in\bot</math>.
The real work<ref>The difficulty is to find the right duality that will make logical operations interpretable. General conditions that allow one to achieve this have been formulated by Hyland and Schalk thanks to their theory of ''double gluing''.</ref> is now to interpret logical operations, that is to associate a type to each formula, an object to each proof and show the ''adequacy lemma'': if <math>u</math> is the interpretation of a proof of the formula <math>A</math> then <math>u</math> belongs to the type associated to <math>A</math>.
== Preliminaries ==
We begin by a brief tour of the operations in Hilbert spaces that we use. In this article <math>H</math> will stand for the Hilbert space <math>\ell^2(\mathbb{N})</math> of sequences <math>(x_n)_{n\in\mathbb{N}}</math> of complex numbers such that the series <math>\sum_{n\in\mathbb{N}}|x_n|^2</math> converges. If <math>x = (x_n)_{n\in\mathbb{N}}</math> and <math>y = (y_n)_{n\in\mathbb{N}}</math> are two vectors of <math>H</math> we denote by <math>\langle x,y\rangle</math> their scalar product:
: <math>\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n</math>.
Two vectors of <math>H</math> are ''orthogonal'' if their scalar product is null. This notion is not to be confused with the duality of operators defined above. The ''norm'' of a vector is the square root of its scalar product with itself:
: <math>\|x\| = \sqrt{\langle x, x\rangle}</math>.
Let us denote by <math>(e_k)_{k\in\mathbb{N}}</math> the canonical Hilbertian basis of <math>H</math>: <math>e_k = (\delta_{kn})_{n\in\mathbb{N}}</math> where <math>\delta_{kn}</math> is the Kronecker symbol: <math>1</math> if <math>k=n</math>, <math>0</math> otherwise. Thus if <math>x = (x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>H</math> we have:
: <math>x = \sum_{n\in\mathbb{N}} x_ne_n</math>.
An ''operator'' on <math>H</math> is a ''continuous'' linear map from <math>H</math> to <math>H</math>. Continuity is equivalent to the fact that operators are ''bounded'', which means that one may define the ''norm'' of an operator <math>u</math> as the sup on the unit ball of the norms of its values:
: <math>\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|</math>.
The set of (bounded) operators is denoted by <math>\mathcal{B}(H)</math>.

The ''range'' or ''codomain'' of the operator <math>u</math> is the set of images of vectors; the ''kernel'' of <math>u</math> is the set of vectors that are annihilated by <math>u</math>; the ''domain'' of <math>u</math> is the set of vectors orthogonal to the kernel:
* <math>\mathrm{Codom}(u) = \{u(x),\, x\in H\}</math>;
* <math>\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}</math>;
* <math>\mathrm{Dom}(u) = \{x\in H,\, \forall y\in\mathrm{Ker}(u),\, \langle x, y\rangle = 0\}</math>.
These three sets are closed subspaces of <math>H</math>.
The ''adjoint'' of an operator <math>u</math> is the operator <math>u^*</math> defined by <math>\langle u(x), y\rangle = \langle x, u^*(y)\rangle</math> for any <math>x, y\in H</math>.

A ''projector'' is an idempotent operator of norm <math>0</math> (the projector on the null subspace) or <math>1</math>, that is an operator <math>p</math> such that <math>p^2 = p</math> and <math>\|p\| = 0</math> or <math>1</math>. A projector is self-adjoint and its domain is equal to its codomain.
A ''partial isometry'' is an operator <math>u</math> satisfying <math>uu^*u = u</math>; as a consequence <math>uu^*</math> is a projector, the ''final projector'' of <math>u</math>, the range of which is the range of <math>u</math>. Similarly <math>u^*u</math> is also a projector, the ''initial projector'' of <math>u</math>, the range of which is the domain of <math>u</math>. The restriction of <math>u</math> to its domain is an isometry. Projectors are particular examples of partial isometries.
If <math>u</math> is a partial isometry then <math>u^*</math> is also a partial isometry, the domain of which is the codomain of <math>u</math> and the codomain of which is the domain of <math>u</math>.
If the domain of <math>u</math> is <math>H</math>, that is if <math>u^*u = 1</math>, we say that <math>u</math> has ''full domain'', and similarly for codomain. If <math>u</math> and <math>v</math> are two partial isometries, the equation <math>uu^* + vv^* = 1</math> means that the codomains of <math>u</math> and <math>v</math> are orthogonal and that their direct sum is <math>H</math>.
=== Partial permutations and partial isometries ===
We will now define our proof space, which turns out to be the set of partial isometries acting as permutations on a fixed basis of <math>H</math>.
More precisely a ''partial permutation'' <math>\varphi</math> on <math>\mathbb{N}</math> is a function defined on a subset <math>D_\varphi</math> of <math>\mathbb{N}</math> which is one-to-one onto a subset <math>C_\varphi</math> of <math>\mathbb{N}</math>. <math>D_\varphi</math> is called the ''domain'' of <math>\varphi</math> and <math>C_\varphi</math> its ''codomain''. Partial permutations may be composed: if <math>\psi</math> is another partial permutation on <math>\mathbb{N}</math> then <math>\varphi\circ\psi</math> is defined by:
* <math>n\in D_{\varphi\circ\psi}</math> iff <math>n\in D_\psi</math> and <math>\psi(n)\in D_\varphi</math>;
* if <math>n\in D_{\varphi\circ\psi}</math> then <math>\varphi\circ\psi(n) = \varphi(\psi(n))</math>;
* the codomain of <math>\varphi\circ\psi</math> is the image of the domain.
Partial permutations are well known to form a structure of ''inverse monoid'' that we detail now.
A ''partial identity'' is a partial permutation <math>1_D</math> whose domain and codomain are both equal to a subset <math>D</math>, on which <math>1_D</math> is the identity function. Partial identities are idempotent for composition.
Among partial identities one finds the identity on the empty subset, that is the empty map, which we will denote by <math>0</math>, and the identity on <math>\mathbb{N}</math>, which we will denote by <math>1</math>. This latter partial permutation is neutral for composition.
If <math>\varphi</math> is a partial permutation there is an inverse partial permutation <math>\varphi^{-1}</math> whose domain is <math>C_\varphi</math> and codomain is <math>D_\varphi</math>, and which satisfies:
: <math>\varphi^{-1}\circ\varphi = 1_{D_\varphi}</math>;
: <math>\varphi\circ\varphi^{-1} = 1_{C_\varphi}</math>.
Given a partial permutation <math>\varphi</math> one defines a partial isometry <math>u_\varphi</math> by:
: <math>u_\varphi(e_n) = e_{\varphi(n)}</math> if <math>n\in D_\varphi</math>, <math>0</math> otherwise.
In other terms if <math>x = (x_n)_{n\in\mathbb{N}}</math> is a sequence in <math>\ell^2</math> then <math>u_\varphi(x)</math> is the sequence <math>(y_n)_{n\in\mathbb{N}}</math> defined by:
: <math>y_n = x_{\varphi^{-1}(n)}</math> if <math>n\in C_\varphi</math>, <math>0</math> otherwise.
We will (not so abusively) write <math>e_{\varphi(n)} = 0</math> when <math>\varphi(n)</math> is undefined.
The domain of <math>u_\varphi</math> is the subspace spanned by the family <math>(e_n)_{n\in D_\varphi}</math> and the codomain of <math>u_\varphi</math> is the subspace spanned by <math>(e_n)_{n\in C_\varphi}</math>. As a particular case, if <math>\varphi</math> is <math>1_D</math>, the partial identity on <math>D</math>, then <math>u_\varphi</math> is the projector on the subspace spanned by <math>(e_n)_{n\in D}</math>.
{{Proposition
Let <math>\varphi</math> and <math>\psi</math> be two partial permutations. We have:
: <math>u_\varphi u_\psi = u_{\varphi\circ\psi}</math>.
The adjoint of <math>u_\varphi</math> is:
: <math>u_\varphi^* = u_{\varphi^{-1}}</math>.
In particular the initial projector of <math>u_\varphi</math> is given by:
: <math>u^*_\varphi u_\varphi = u_{1_{D_\varphi}}</math>,
and the final projector of <math>u_\varphi</math> is:
: <math>u_\varphi u_\varphi^* = u_{1_{C_\varphi}}</math>.
Projectors generated by partial identities commute; in particular we have:
: <math>u_\varphi u_\varphi^*u_\psi u_\psi^* = u_\psi u_\psi^*u_\varphi u_\varphi^*</math>.
}}
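On a finite truncation of the basis (an illustration of ours: `mat` builds the matrix of <math>u_\varphi</math> on <math>\mathbb{C}^N</math>, with partial permutations encoded as dictionaries) the proposition can be checked numerically:

```python
import numpy as np

N = 4  # truncation size, our choice

def mat(phi):
    """Matrix of u_phi on C^N: u_phi(e_n) = e_{phi(n)} for n in the domain."""
    u = np.zeros((N, N))
    for n, m in phi.items():
        u[m, n] = 1
    return u

def compose(phi, psi):
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

phi = {0: 2, 1: 0}
psi = {2: 1, 3: 0}

assert np.array_equal(mat(phi) @ mat(psi), mat(compose(phi, psi)))        # u_phi u_psi = u_{phi o psi}
assert np.array_equal(mat(phi).T, mat({m: n for n, m in phi.items()}))    # adjoint = inverse permutation
assert np.array_equal(mat(phi).T @ mat(phi), mat({n: n for n in phi}))    # initial projector = u_{1_D}
assert np.array_equal(mat(phi) @ mat(phi).T,
                      mat({m: m for m in phi.values()}))                  # final projector = u_{1_C}
```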
{{Definition
The ''proof space'' <math>\mathcal{P}</math> is the set of partial isometries of the form <math>u_\varphi</math> for partial permutations <math>\varphi</math> on <math>\mathbb{N}</math>.
}}
In particular note that <math>0\in\mathcal{P}</math>. The set <math>\mathcal{P}</math> is a submonoid of <math>\mathcal{B}(H)</math> but it is not a subalgebra: in general, given <math>u,v\in\mathcal{P}</math>, we don't necessarily have <math>u+v\in\mathcal{P}</math>. However we have:
{{Proposition
Let <math>u, v\in\mathcal{P}</math>. Then <math>u+v\in\mathcal{P}</math> iff <math>u</math> and <math>v</math> have orthogonal domains and codomains, that is:
: <math>u+v\in\mathcal{P}</math> iff <math>uu^*vv^* = u^*uv^*v = 0</math>.
}}

Also note that if <math>u+v=0</math> then <math>u=v=0</math>.
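In the same finite-dimensional spirit (our own encoding of partial permutations as 0/1 matrices, not the article's), the criterion can be checked on small examples:

```python
import numpy as np

N = 4

def mat(phi):
    """Matrix on C^N of the partial isometry induced by the partial permutation phi."""
    u = np.zeros((N, N))
    for n, m in phi.items():
        u[m, n] = 1
    return u

u, v = mat({0: 1}), mat({2: 3})   # disjoint domains and codomains
w = mat({0: 3})                   # domain {0} overlaps with u's domain

# orthogonality of domains and codomains: u*u v*v = 0 and uu* vv* = 0
assert not (u.T @ u @ v.T @ v).any()
assert not (u @ u.T @ v @ v.T).any()
assert np.array_equal(u + v, mat({0: 1, 2: 3}))   # the sum is again in the proof space

assert (u.T @ u @ w.T @ w).any()                  # overlapping domains, and indeed
assert np.count_nonzero((u + w)[:, 0]) == 2       # u + w is not a partial permutation matrix
```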
=== From operators to matrices: internalization/externalization ===
It will be convenient to view operators on <math>H</math> as acting on <math>H\oplus H</math>, and conversely. For this purpose we define an isomorphism <math>H\oplus H\to H</math> by <math>x\oplus y\mapsto p(x) + q(y)</math> where <math>p</math> and <math>q</math> are partial isometries given by:
: <math>p(e_n) = e_{2n}</math>,
: <math>q(e_n) = e_{2n+1}</math>.
From the definition <math>p</math> and <math>q</math> have full domain, that is they satisfy <math>p^*p = q^*q = 1</math>. On the other hand their codomains are orthogonal, thus we have <math>p^*q = q^*p = 0</math>. Note that we also have <math>pp^* + qq^* = 1</math>.

The choice of <math>p</math> and <math>q</math> is actually arbitrary; any two partial isometries with full domain and orthogonal codomains would do the job.
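On a finite truncation one can model <math>p</math> and <math>q</math> as <math>2N\times N</math> matrices (an assumption of ours, so that <math>p^*p</math> acts on <math>\mathbb{C}^N</math> and <math>pp^*</math> on <math>\mathbb{C}^{2N}</math>) and check these identities:

```python
import numpy as np

N = 4
p = np.zeros((2 * N, N)); q = np.zeros((2 * N, N))
for n in range(N):
    p[2 * n, n] = 1        # p(e_n) = e_{2n}
    q[2 * n + 1, n] = 1    # q(e_n) = e_{2n+1}

assert np.array_equal(p.T @ p, np.eye(N))               # p has full domain: p*p = 1
assert np.array_equal(q.T @ q, np.eye(N))               # q has full domain: q*q = 1
assert not (p.T @ q).any() and not (q.T @ p).any()      # orthogonal codomains
assert np.array_equal(p @ p.T + q @ q.T, np.eye(2 * N)) # pp* + qq* = 1
```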
Let <math>U</math> be an operator on <math>H\oplus H</math>. We can write <math>U</math> as a matrix:
: <math>U = \begin{pmatrix} u_{11} & u_{12}\\ u_{21} & u_{22}\end{pmatrix}</math>
where each <math>u_{ij}</math> operates on <math>H</math>.
Now through the isomorphism we may transform <math>U</math> into the operator <math>u</math> on <math>H</math> defined by:
: <math>u = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math>.
We call <math>u</math> the ''internalization'' of <math>U</math>. Internalization is compatible with composition (functorial, so to speak): if <math>V</math> is another operator on <math>H\oplus H</math> then the internalization of the matrix product <math>UV</math> is the product <math>uv</math>.
Conversely given an operator <math>u</math> on <math>H</math> we may ''externalize'' it, obtaining an operator <math>U</math> on <math>H\oplus H</math> defined by:
: <math>u_{11} = p^*up</math>;
: <math>u_{12} = p^*uq</math>;
: <math>u_{21} = q^*up</math>;
: <math>u_{22} = q^*uq</math>.
The <math>u_{ij}</math>'s are called the ''components'' of <math>u</math>. Note that if <math>u</math> is generated by a partial permutation, that is if <math>u\in\mathcal{P}</math>, then so are the <math>u_{ij}</math>'s. Moreover we have:
: <math>u = (pp^* + qq^*)u(pp^* + qq^*) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^*</math>
which entails that the four terms of the sum have pairwise disjoint domains and pairwise disjoint codomains. This can be verified for example by computing the product of the final projectors of <math>pu_{11}p^*</math> and <math>pu_{12}q^*</math>:
: <math>pp^*u\,pp^*u^*pp^*u\,qq^*u^*pp^* = pp^*u\,u^*pp^*u\,pp^*qq^*u^*pp^* = 0</math>
where we used the fact that all projectors in <math>\mathcal{P}</math> commute, which is in particular the case of <math>pp^*</math> and <math>u^*pp^*u</math>.
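The internalization/externalization round trip can be sketched in the same finite dict model of partial permutations used above (the helper names are ours; we work on a finite prefix of the basis, so <math>p^*p = 1</math> only holds within the truncation):

```python
from functools import reduce

N = 16  # truncation: model only the basis vectors needed by the example

def compose(*fs):
    """u_phi u_psi = u_{phi o psi} on partial permutations (dicts)."""
    return reduce(lambda f, g: {n: f[g[n]] for n in g if g[n] in f}, fs)

def adjoint(f):
    return {m: n for n, m in f.items()}

def union(*fs):
    """Sum of operators of P with disjoint domains and codomains."""
    out = {}
    for f in fs:
        assert not set(f) & set(out) and not set(f.values()) & set(out.values())
        out.update(f)
    return out

p = {n: 2 * n for n in range(N)}      # p(e_n) = e_{2n}
q = {n: 2 * n + 1 for n in range(N)}  # q(e_n) = e_{2n+1}

def internalize(u11, u12, u21, u22):
    """u = p u11 p* + p u12 q* + q u21 p* + q u22 q*."""
    return union(compose(p, u11, adjoint(p)), compose(p, u12, adjoint(q)),
                 compose(q, u21, adjoint(p)), compose(q, u22, adjoint(q)))

def externalize(u):
    """u11 = p* u p, u12 = p* u q, u21 = q* u p, u22 = q* u q."""
    return (compose(adjoint(p), u, p), compose(adjoint(p), u, q),
            compose(adjoint(q), u, p), compose(adjoint(q), u, q))

# within the truncation p has full domain and p, q orthogonal codomains
print(compose(adjoint(p), p) == {n: n for n in range(N)})  # p*p = 1
print(compose(adjoint(p), q))                              # p*q = 0, i.e. {}

# externalizing an internalization recovers the components
print(externalize(internalize({0: 1}, {}, {}, {2: 2})))
```

The last line prints back the four components <code>({0: 1}, {}, {}, {2: 2})</code>, illustrating that the four terms of the sum have disjoint domains and codomains.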
== Interpreting the multiplicative connectives ==

Recall that when <math>u</math> and <math>v</math> are partial isometries in <math>\mathcal{P}</math> we say they are dual when <math>uv</math> is nilpotent, and that <math>\bot</math> denotes the set of nilpotent operators. A ''type'' is a subset <math>A</math> of <math>\mathcal{P}</math> that is equal to its bidual: <math>A = A^{\perp\perp}</math>. In particular <math>X^{\perp\perp}</math> is a type for any <math>X\subset\mathcal{P}</math>. We say that <math>X</math> ''generates'' the type <math>X^{\perp\perp}</math>.
=== The tensor and the linear application ===

Given two types <math>A</math> and <math>B</math>, we define their tensor by:
: <math>A\otimes B = \{pup^* + qvq^*,\ u\in A,\ v\in B\}^{\perp\perp}</math>
Note the closure by bidual to make sure that we obtain a type. From what precedes we see that <math>A\otimes B</math> is generated by the internalizations of operators on <math>H\oplus H</math> of the form:
: <math>\begin{pmatrix}u & 0\\ 0 & v\end{pmatrix}</math>
This is an abuse of notation as this operation is more like a direct sum than a tensor. We will stick to this notation though because it defines the interpretation of the tensor connective of linear logic.

The linear implication is derived from the tensor by duality: given two types <math>A</math> and <math>B</math> the type <math>A\multimap B</math> is defined by:
: <math>A\multimap B = (A\otimes B^\perp)^\perp</math>.
Unfolding this definition we see that we have:
: <math>A\multimap B = \{u\in\mathcal{P}\text{ such that }\forall v\in A, \forall w\in B^\perp,\ u(pvp^* + qwq^*)\in\bot\}</math>.
=== The identity ===

The interpretation of the identity is an example of the internalization/externalization procedure. Given a type <math>A</math> we are to find an operator <math>\iota</math> in type <math>A\multimap A</math>, thus satisfying:
: for any <math>u\in A</math> and <math>v\in A^\perp</math>, <math>\iota(pup^* + qvq^*)</math> is nilpotent.
An easy solution is to take <math>\iota = pq^* + qp^*</math>. In this way we get <math>\iota(pup^* + qvq^*) = qup^* + pvq^*</math>. Therefore <math>(\iota(pup^* + qvq^*))^2 = quvq^* + pvup^*</math>, from which one deduces that this operator is nilpotent iff <math>uv</math> is nilpotent. This is the case since <math>u</math> is in <math>A</math> and <math>v</math> in <math>A^\perp</math>.

It is interesting to note that the <math>\iota</math> thus defined is actually the internalization of the operator on <math>H\oplus H</math> given by the matrix:
: <math>\begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}</math>.
We will see once the composition is defined that the ι operator is the interpretation of the identity proof, as expected.
=== The execution formula, version 1: application ===

Let <math>A</math> and <math>B</math> be two types and <math>u</math> an operator in <math>A\multimap B</math>. By definition this means that given <math>v</math> in <math>A</math> and <math>w</math> in <math>B^\perp</math>, the operator <math>u(pvp^* + qwq^*)</math> is nilpotent.
Let us define <math>u_{11}</math> to <math>u_{22}</math> by externalization as above. If we compute <math>(u(pvp^* + qwq^*))^n</math> we see that this is a finite sum of operators of the form:
# <math>qmq^*</math>,
# <math>qmp^*</math>,
# <math>pmq^*</math>, or
# <math>pmp^*</math>
where each of these monomials <math>m</math> is a product of exactly <math>n</math> factors of the form <math>u_{i1}v</math> or <math>u_{i2}w</math>.
From the nilpotency of <math>u(pvp^* + qwq^*)</math> we deduce that <math>u_{11}v</math> is nilpotent by considering the particular case where <math>w = 0</math>. We also have that <math>q^*(u(pvp^* + qwq^*))^nq</math> is null for <math>n</math> big enough, which means that monomials of type 1 above are null as soon as their length (the number of factors of the form <math>u_{i1}v</math> or <math>u_{i2}w</math>) is bigger than <math>n</math>.
This implies that the two following operators are nilpotent:
# <math>u_{11}v</math> and
# <math>\left(u_{22} + u_{21}v\sum_k(u_{11}v)^ku_{12}\right)w</math>.
Conversely if these two operators are nilpotent then one can show that so is <math>u(pvp^* + qwq^*)</math>. Moreover we have:
: <math>\sum_n q^*\left(u(pvp^* + qwq^*)\right)^nq = \sum_n\left(\left(u_{22} + u_{21}v\sum_k(u_{11}v)^ku_{12}\right)w\right)^n</math>.
We define the ''application'' of <math>u</math> to <math>v</math> as:
: <math>\mathrm{App}(u, v) = u_{22} + u_{21}v\sum_k(u_{11}v)^ku_{12}</math>.
Note that this is well defined as soon as <math>u_{11}v</math> is nilpotent, since the sum over <math>k</math> is then finite.
We summarize what has just been shown in the following theorem:
Theorem
Let <math>u</math> be an operator, <math>A</math> and <math>B</math> be two types; the following conditions are equivalent:
# <math>u\in A\multimap B</math>;
# for any <math>v\in A</math> and any <math>w\in B^\perp</math>, we both have:
#* <math>u_{11}v</math> is nilpotent and
#* <math>\mathrm{App}(u, v)w</math> is nilpotent.
Corollary
Under the hypothesis of the theorem we have:
: <math>\mathrm{App}(u, v)\in B</math> for every <math>u\in A\multimap B</math> and every <math>v\in A</math>.
As an example if we compute the application of the interpretation of the identity <math>\iota</math> in type <math>A\multimap A</math> to an operator <math>v\in A</math> then we have:
: <math>\mathrm{App}(\iota, v) = \iota_{22} + \iota_{21}v\sum_k(\iota_{11}v)^k\iota_{12}</math>.
Now recall that <math>\iota = pq^* + qp^*</math> so that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math>, and we thus get:
: <math>\mathrm{App}(\iota, v) = v</math>
as expected.
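The execution formula can be run concretely in the finite dict model of partial permutations (an illustrative sketch only: helper names are ours, and the <code>while</code> loop replaces the sum over <math>k</math>, terminating because <math>u_{11}v</math> is nilpotent):

```python
from functools import reduce

N = 16  # truncation: finite prefix of the basis

def compose(*fs):  # u_phi u_psi = u_{phi o psi}
    return reduce(lambda f, g: {n: f[g[n]] for n in g if g[n] in f}, fs)

def adjoint(f):
    return {m: n for n, m in f.items()}

p = {n: 2 * n for n in range(N)}      # p(e_n) = e_{2n}
q = {n: 2 * n + 1 for n in range(N)}  # q(e_n) = e_{2n+1}

def externalize(u):
    return (compose(adjoint(p), u, p), compose(adjoint(p), u, q),
            compose(adjoint(q), u, p), compose(adjoint(q), u, q))

def App(u, v):
    """Execution formula: App(u,v) = u22 + sum_k u21 v (u11 v)^k u12."""
    u11, u12, u21, u22 = externalize(u)
    out, t = dict(u22), u12
    while t:                          # terminates since u11 v is nilpotent
        out.update(compose(u21, v, t))
        t = compose(u11, v, t)
    return out

# the identity iota = pq* + qp*
iota = {**compose(p, adjoint(q)), **compose(q, adjoint(p))}

v = {0: 1, 1: 0}                      # some operator of P
print(App(iota, v))                   # {0: 1, 1: 0}: App(iota, v) = v
```

As in the computation above, <math>\iota_{11} = \iota_{22} = 0</math> (here empty dicts), so the loop runs once and returns <math>v</math> unchanged.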
=== The tensor rule ===

Let now <math>A, A', B</math> and <math>B'</math> be types and consider two operators <math>u</math> and <math>u'</math> respectively in <math>A\multimap B</math> and <math>A'\multimap B'</math>. We define an operator denoted by <math>u\otimes u'</math> by:
: <math>u\otimes u' = \sum_{i,j\in\{1,2\}}x_i(pu_{ij}p^* + qu'_{ij}q^*)x_j^*</math> where <math>x_1 = p</math> and <math>x_2 = q</math>.
Once again the notation is motivated by linear logic syntax and is contradictory with linear algebra practice since what we denote by <math>u\otimes u'</math> actually is the internalization of the direct sum <math>u\oplus u'</math>.

Indeed if we think of <math>u</math> and <math>u'</math> as the internalizations of the matrices:
: <math>\begin{pmatrix}u_{11} & u_{12}\\ u_{21} & u_{22}\end{pmatrix}</math> and <math>\begin{pmatrix}u'_{11} & u'_{12}\\ u'_{21} & u'_{22}\end{pmatrix}</math>
then we may write:
: <math>u = \sum_{i,j}x_iu_{ij}x_j^*</math> and <math>u' = \sum_{i,j}x_iu'_{ij}x_j^*</math>.
Thus the components of <math>u\otimes u'</math> are given by:
: <math>(u\otimes u')_{ij} = pu_{ij}p^* + qu'_{ij}q^*</math>
and we see that <math>u\otimes u'</math> is actually the internalization of the matrix:
: <math>\begin{pmatrix}pu_{11}p^* + qu'_{11}q^* & pu_{12}p^* + qu'_{12}q^*\\ pu_{21}p^* + qu'_{21}q^* & pu_{22}p^* + qu'_{22}q^*\end{pmatrix}</math>
We are now to show that if we suppose <math>u</math> and <math>u'</math> are in types <math>A\multimap B</math> and <math>A'\multimap B'</math>, then <math>u\otimes u'</math> is in <math>(A\otimes A')\multimap(B\otimes B')</math>. For this we consider <math>v</math> and <math>v'</math> respectively in <math>A</math> and <math>A'</math>, so that <math>pvp^* + qv'q^*</math> is in <math>A\otimes A'</math>, and we show that <math>\mathrm{App}(u\otimes u', pvp^* + qv'q^*) = p\,\mathrm{App}(u, v)\,p^* + q\,\mathrm{App}(u', v')\,q^*</math>.
Since <math>u</math> and <math>u'</math> are in <math>A\multimap B</math> and <math>A'\multimap B'</math> we have that <math>\mathrm{App}(u, v)</math> and <math>\mathrm{App}(u', v')</math> are respectively in <math>B</math> and <math>B'</math>, thus:
: <math>p\,\mathrm{App}(u, v)\,p^* + q\,\mathrm{App}(u', v')\,q^*\in B\otimes B'</math>.
We know that both <math>u_{11}v</math> and <math>u'_{11}v'</math> are nilpotent. But we have:
: <math>(u\otimes u')_{11}(pvp^* + qv'q^*) = (pu_{11}p^* + qu'_{11}q^*)(pvp^* + qv'q^*) = pu_{11}vp^* + qu'_{11}v'q^*</math>
Therefore <math>(u\otimes u')_{11}(pvp^* + qv'q^*)</math> is nilpotent. So we can compute <math>\mathrm{App}(u\otimes u', pvp^* + qv'q^*)</math>:
: <math>\mathrm{App}(u\otimes u', pvp^* + qv'q^*) = p\,\mathrm{App}(u, v)\,p^* + q\,\mathrm{App}(u', v')\,q^*</math>
thus <math>\mathrm{App}(u\otimes u', pvp^* + qv'q^*)</math> lives in <math>B\otimes B'</math>.
=== Other monoidal constructions ===

==== Contraposition ====
Let <math>A</math> and <math>B</math> be some types; we have:
: <math>u\in A\multimap B</math> iff <math>u^\perp\in B^\perp\multimap A^\perp</math>
where <math>u^\perp</math> denotes the operator:
: <math>u^\perp = pu_{22}p^* + pu_{21}q^* + qu_{12}p^* + qu_{11}q^*</math>
(the <math>u_{ij}</math>'s being given by externalization). Indeed, <math>u\in A\multimap B</math> means that for any <math>v</math> and <math>w</math> respectively in <math>A</math> and <math>B^\perp</math> we have <math>u(pvp^* + qwq^*)\in\bot</math>, which is exactly the condition for <math>u^\perp</math> to be in <math>B^\perp\multimap A^\perp</math>, since conjugating by <math>\iota</math> exchanges the two components. Therefore the externalization of <math>u^\perp</math> is:
: <math>(u^\perp)_{ij} = u_{\bar\imath\bar\jmath}</math> where <math>\bar{\cdot}</math> is defined by <math>\bar 1 = 2</math>, <math>\bar 2 = 1</math>.
From this we deduce that <math>u^{\perp\perp} = u</math> and that <math>u\mapsto u^\perp</math> is an involution from <math>A\multimap B</math> onto <math>B^\perp\multimap A^\perp</math>.
==== Commutativity ====
Let <math>\sigma</math> be the operator:
: <math>\sigma = ppq^*q^* + pqp^*q^* + qpq^*p^* + qqp^*p^*</math>.
One can check that <math>\sigma</math> is the internalization of the operator <math>S</math> on <math>H\oplus H</math> defined by <math>S(x\oplus y) = y\oplus x</math>. In particular the components of <math>\sigma</math> are:
: <math>\sigma_{11} = \sigma_{22} = 0</math>;
: <math>\sigma_{12} = \sigma_{21} = pq^* + qp^*</math>.
Let <math>A</math> and <math>B</math> be types and <math>u</math> and <math>v</math> be operators in <math>A</math> and <math>B</math>. Then <math>pup^* + qvq^*</math> is in <math>A\otimes B</math> and as <math>\sigma_{11}(pup^* + qvq^*) = 0</math> we may compute:
: <math>\mathrm{App}(\sigma, pup^* + qvq^*) = \sigma_{22} + \sigma_{21}(pup^* + qvq^*)\sigma_{12} = pvp^* + quq^*</math>
But <math>pvp^* + quq^*\in B\otimes A</math>, thus we have shown that:
: <math>\sigma\in(A\otimes B)\multimap(B\otimes A)</math>.
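In the finite dict model of partial permutations used earlier (an illustrative sketch, with helper names of our own choosing), one can watch <math>\sigma</math> exchange the two components of a tensor:

```python
from functools import reduce

N = 16

def compose(*fs):  # u_phi u_psi = u_{phi o psi}
    return reduce(lambda f, g: {n: f[g[n]] for n in g if g[n] in f}, fs)

def adjoint(f):
    return {m: n for n, m in f.items()}

p = {n: 2 * n for n in range(N)}      # p(e_n) = e_{2n}
q = {n: 2 * n + 1 for n in range(N)}  # q(e_n) = e_{2n+1}

# sigma = pp q*q* + pq p*q* + qp q*p* + qq p*p*
sigma = {**compose(p, p, adjoint(q), adjoint(q)),
         **compose(p, q, adjoint(p), adjoint(q)),
         **compose(q, p, adjoint(q), adjoint(p)),
         **compose(q, q, adjoint(p), adjoint(p))}

def App(u, v):
    """Execution formula in the dict model."""
    u11 = compose(adjoint(p), u, p); u12 = compose(adjoint(p), u, q)
    u21 = compose(adjoint(q), u, p); u22 = compose(adjoint(q), u, q)
    out, t = dict(u22), u12
    while t:
        out.update(compose(u21, v, t))
        t = compose(u11, v, t)
    return out

u, v = {0: 0}, {1: 1}
x = {**compose(p, u, adjoint(p)), **compose(q, v, adjoint(q))}  # p u p* + q v q*
y = {**compose(p, v, adjoint(p)), **compose(q, u, adjoint(q))}  # p v p* + q u q*
print(App(sigma, x) == y)  # True: sigma swaps the two components
```

Since <math>\sigma_{11} = 0</math>, the loop runs exactly once and computes <math>\sigma_{21}(pup^* + qvq^*)\sigma_{12}</math>, as in the derivation above.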
==== Distributivity ====
We get distributivity by considering the operator:
: <math>\delta = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math>
that is similarly shown to be in type <math>(A\otimes(B\otimes C))\multimap((A\otimes B)\otimes C)</math> for any types <math>A</math>, <math>B</math> and <math>C</math>.
==== Weak distributivity ====
We can finally get weak distributivity thanks to the operators:
: <math>\delta_1 = pppp^*q^* + ppqp^*q^*q^* + pqq^*q^*q^* + qpp^*p^*p^* + qqpq^*p^*p^* + qqqq^*p^*</math> and
: <math>\delta_2 = ppp^*p^*q^* + pqpq^*p^*q^* + pqqq^*q^* + qppp^*p^* + qpqp^*q^*p^* + qqq^*q^*p^*</math>.
Given three types <math>A</math>, <math>B</math> and <math>C</math> then one can show that:
: <math>\delta_1</math> has type <math>((A\parr B)\otimes C)\multimap(A\parr(B\otimes C))</math> and
: <math>\delta_2</math> has type <math>(A\otimes(B\parr C))\multimap((A\otimes B)\parr C)</math>.
== Execution formula, version 2: composition ==

Let <math>A</math>, <math>B</math> and <math>C</math> be types and <math>u</math> and <math>v</math> be operators respectively in types <math>A\multimap B</math> and <math>B\multimap C</math>.

As usual we will denote <math>u_{ij}</math> and <math>v_{ij}</math> the operators obtained by externalization of <math>u</math> and <math>v</math>, eg, <math>u_{11} = p^*up</math>, ...

As <math>u</math> is in <math>A\multimap B</math> we have that <math>u_{22} = \mathrm{App}(u, 0)\in B</math> (note that <math>0</math> belongs to every type); similarly as <math>v\in B\multimap C</math>, thus <math>v^\perp\in C^\perp\multimap B^\perp</math>, we have <math>v_{11} = \mathrm{App}(v^\perp, 0)\in B^\perp</math>. Thus <math>u_{22}v_{11}</math> is nilpotent.
We define the operator <math>\mathrm{Comp}(u, v)</math> by its components:
: <math>\mathrm{Comp}(u, v)_{11} = u_{11} + u_{12}v_{11}\sum_k(u_{22}v_{11})^ku_{21}</math>;
: <math>\mathrm{Comp}(u, v)_{12} = u_{12}\sum_k(v_{11}u_{22})^kv_{12}</math>;
: <math>\mathrm{Comp}(u, v)_{21} = v_{21}\sum_k(u_{22}v_{11})^ku_{21}</math>;
: <math>\mathrm{Comp}(u, v)_{22} = v_{22} + v_{21}u_{22}\sum_k(v_{11}u_{22})^kv_{12}</math>.
This is well defined since <math>u_{22}v_{11}</math> is nilpotent. As an example let us compute the composition of <math>u</math> and <math>\iota</math> in type <math>B\multimap B</math>; recall that <math>\iota_{11} = \iota_{22} = 0</math> and <math>\iota_{12} = \iota_{21} = 1</math>, so we get:
: <math>\mathrm{Comp}(u, \iota) = pu_{11}p^* + pu_{12}q^* + qu_{21}p^* + qu_{22}q^* = u</math>
A similar computation shows that <math>\mathrm{Comp}(\iota, v) = v</math> (we use <math>pp^* + qq^* = 1</math> here).
Coming back to the general case we claim that <math>\mathrm{Comp}(u, v)</math> is in <math>A\multimap C</math>: let <math>a</math> be an operator in <math>A</math>. By computation we can check that:
: <math>\mathrm{App}(\mathrm{Comp}(u, v), a) = \mathrm{App}(v, \mathrm{App}(u, a))</math>.
Now since <math>u</math> is in <math>A\multimap B</math>, <math>\mathrm{App}(u, a)</math> is in <math>B</math> and since <math>v</math> is in <math>B\multimap C</math>, <math>\mathrm{App}(v, \mathrm{App}(u, a))</math> is in <math>C</math>.
If we now consider a type <math>D</math> and an operator <math>w</math> in <math>C\multimap D</math> then we have:
: <math>\mathrm{Comp}(\mathrm{Comp}(u, v), w) = \mathrm{Comp}(u, \mathrm{Comp}(v, w))</math>.
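The component formulas for <math>\mathrm{Comp}</math> can be exercised in the finite dict model of partial permutations (a sketch with helper names of our own; the <code>while</code> loops replace the sums over <math>k</math>, terminating by nilpotency of <math>u_{22}v_{11}</math>):

```python
from functools import reduce

N = 16

def compose(*fs):  # u_phi u_psi = u_{phi o psi}
    return reduce(lambda f, g: {n: f[g[n]] for n in g if g[n] in f}, fs)

def adjoint(f):
    return {m: n for n, m in f.items()}

p = {n: 2 * n for n in range(N)}      # p(e_n) = e_{2n}
q = {n: 2 * n + 1 for n in range(N)}  # q(e_n) = e_{2n+1}

def internalize(u11, u12, u21, u22):
    out = {}
    for w in (compose(p, u11, adjoint(p)), compose(p, u12, adjoint(q)),
              compose(q, u21, adjoint(p)), compose(q, u22, adjoint(q))):
        out.update(w)
    return out

def externalize(u):
    return (compose(adjoint(p), u, p), compose(adjoint(p), u, q),
            compose(adjoint(q), u, p), compose(adjoint(q), u, q))

def Comp(u, v):
    """Comp(u,v)_11 = u11 + sum_k u12 v11 (u22 v11)^k u21, etc."""
    u11, u12, u21, u22 = externalize(u)
    v11, v12, v21, v22 = externalize(v)
    w11, t = dict(u11), u21
    while t:
        w11.update(compose(u12, v11, t)); t = compose(u22, v11, t)
    w12, t = {}, v12
    while t:
        w12.update(compose(u12, t)); t = compose(v11, u22, t)
    w21, t = {}, u21
    while t:
        w21.update(compose(v21, t)); t = compose(u22, v11, t)
    w22, t = dict(v22), v12
    while t:
        w22.update(compose(v21, u22, t)); t = compose(v11, u22, t)
    return internalize(w11, w12, w21, w22)

iota = {**compose(p, adjoint(q)), **compose(q, adjoint(p))}  # the identity
u = internalize({0: 1}, {2: 3}, {4: 5}, {6: 7})
print(Comp(u, iota) == u, Comp(iota, u) == u)  # True True
```

The identity laws checked in the last line are exactly the computations <math>\mathrm{Comp}(u, \iota) = u</math> and <math>\mathrm{Comp}(\iota, v) = v</math> above.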
Putting together the results of this section we finally have:
Theorem
Let <math>\mathrm{GoI}(H)</math> be defined by:
* objects are types, i.e., sets <math>A</math> of operators satisfying <math>A = A^{\perp\perp}</math>;
* morphisms from <math>A</math> to <math>B</math> are operators in type <math>A\multimap B</math>;
* composition is given by the formula above.
Then <math>\mathrm{GoI}(H)</math> is a star-autonomous category.
= The Geometry of Interaction as an abstract machine =