# Geometry of interaction


The geometry of interaction, GoI for short, was defined in the early nineties by Girard as an interpretation of linear logic into operator algebras: formulae were interpreted by Hilbert spaces and proofs by partial isometries.

This was a striking novelty, as it was the first time that a mathematical model of logic (here the lambda-calculus) did not interpret a proof of $A\limp B$ as a morphism from (the space interpreting) $A$ to (the space interpreting) $B$, and proof composition (the cut rule) as the composition of morphisms. Rather, the proof was interpreted as an operator acting on (the space interpreting) $A\limp B$, that is, a morphism from $A\limp B$ to $A\limp B$. The problem of proof composition was then, given an operator on $A\limp B$ and another one on $B\limp C$, to construct a new operator on $A\limp C$. This problem was originally expressed as a feedback equation solved by the execution formula. The execution formula has some formal analogies with Kleene's formula for recursive functions, which made it possible to claim that GoI was an operational semantics, as opposed to traditional denotational semantics.

The first instance of the GoI was restricted to the MELL fragment of linear logic (the multiplicative and exponential fragment), which is enough to encode the lambda-calculus. Since then Girard has proposed several improvements: first the extension to the additive connectives, known as Geometry of Interaction 3, and more recently a complete reformulation using von Neumann algebras that makes it possible to deal with some aspects of implicit complexity.

The GoI has been a source of inspiration for various authors. Danos and Regnier reformulated the original model, exhibiting its combinatorial nature through a theory of reduction of paths in proof-nets and showing the link with abstract machines; in particular the execution formula appears as the composition of two automata interacting through their common interface. The execution formula was also quickly understood as expressing the composition of strategies in game semantics. It has been used in the theory of sharing reduction for the lambda-calculus in the Abadi-Gonthier-Lévy reformulation and simplification of Lamping's representation of sharing. Finally, the original GoI for the MELL fragment has been reformulated in the framework of traced monoidal categories, following an idea originally proposed by Joyal.

# The Geometry of Interaction as operators

The original construction of GoI by Girard follows a general pattern already mentioned in coherent semantics under the name symmetric reducibility. First, one sets a general space in which the interpretations of proofs will live; here, in the case of GoI, it is the space of bounded operators on $\ell^2$.

Second, one defines a suitable duality on this space, denoted $u\perp v$. For the GoI, two dualities have proved to work; the first one is nilpotency: two operators $u$ and $v$ are dual if $uv$ is nilpotent, that is, if there is a nonnegative integer $n$ such that $(uv)^n = 0$.

Last, one defines a type as a subset $T$ of the proof space that is equal to its bidual: $T = T\biorth$. In the case of GoI this means that $u\in T$ iff for every operator $v$, if $v\in T\orth$, that is, if $u'v$ is nilpotent for all $u'\in T$, then $u\perp v$, that is, $uv$ is nilpotent.
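To fix intuitions, the nilpotency duality can be tested on finite-dimensional truncations. The following Python sketch is our own illustration (the helper `is_nilpotent` and the matrices `u`, `v` are not part of the construction); in finite dimension the nilpotency index never needs to exceed the matrix size.

```python
import numpy as np

def is_nilpotent(m):
    """Check whether the square matrix m is nilpotent, i.e. m^n = 0
    for some n; in dimension d it suffices to test up to n = d."""
    power = np.eye(m.shape[0])
    for _ in range(m.shape[0]):
        power = power @ m
        if np.allclose(power, 0):
            return True
    return False

# Two partial isometries on a 3-dimensional truncation:
# u sends e_0 -> e_1, e_1 -> e_2 and kills e_2;
# v sends e_2 -> e_0 and kills e_0, e_1.
u = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)
v = np.array([[0, 0, 1],
              [0, 0, 0],
              [0, 0, 0]], dtype=float)

# uv sends e_2 -> e_1 and kills everything else, so (uv)^2 = 0:
assert is_nilpotent(u @ v)
```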

It then remains to interpret the logical operations, that is, to associate a type to each formula and an operator to each proof, and to show the adequacy lemma: if $u$ is the interpretation of a proof of the formula $A$, then $u$ belongs to the type associated to $A$.

## Preliminaries

We begin with a brief tour of the operations on Hilbert spaces that will be used in the sequel. In this article $H$ will stand for the Hilbert space $\ell^2(\mathbb{N})$ of sequences $(x_n)_{n\in\mathbb{N}}$ of complex numbers such that the series $\sum_{n\in\mathbb{N}}|x_n|^2$ converges. If $x = (x_n)_{n\in\mathbb{N}}$ and $y = (y_n)_{n\in\mathbb{N}}$ are two vectors of $H$ we denote by $\langle x,y\rangle$ their scalar product: $\langle x, y\rangle = \sum_{n\in\mathbb{N}} x_n\bar y_n$.

Two vectors of $H$ are orthogonal if their scalar product is null. This notion is not to be confused with the orthogonality of operators defined above. The norm of a vector is the square root of its scalar product with itself: $\|x\| = \sqrt{\langle x, x\rangle}$.

Let us denote by $(e_k)_{k\in\mathbb{N}}$ the canonical hilbertian basis of $H$: $e_k = (\delta_{kn})_{n\in\mathbb{N}}$, where $\delta_{kn}$ is the Kronecker symbol. Thus if $x=(x_n)_{n\in\mathbb{N}}$ is a sequence in $H$ we have: $x = \sum_{n\in\mathbb{N}} x_ne_n$.

In this article we call an operator on $H$ a continuous linear map from $H$ to $H$. Continuity allows us to define the norm of an operator $u$ as the supremum over the unit ball of the norms of its values: $\|u\| = \sup_{\{x\in H,\, \|x\| = 1\}}\|u(x)\|$.

The range or codomain of the operator $u$ is the set of images of vectors; the kernel of $u$ is the set of vectors that are annihilated by $u$; the domain of $u$ is the set of vectors orthogonal to the kernel: $\mathrm{Codom}(u) = \{u(x),\, x\in H\}$; $\mathrm{Ker}(u) = \{x\in H,\, u(x) = 0\}$; $\mathrm{Dom}(u) = \mathrm{Ker}(u)\orth = \{x\in H,\, \forall y\in\mathrm{Ker}(u), \langle x, y\rangle = 0\}$.

These three sets are closed subspaces of H.

The adjoint of an operator $u$ is the operator $u^*$ defined by $\langle u(x), y\rangle = \langle x, u^*(y)\rangle$ for any $x,y\in H$.

A projector is an idempotent operator of norm 1, that is, an operator $p$ such that $p^2 = p$ and $\|p\| = 1$. A projector is self-adjoint and its domain is equal to its codomain.

A partial isometry is an operator $u$ satisfying $uu^*u = u$; as a consequence $uu^*$ is a projector whose range is the range of $u$. Similarly $u^*u$ is also a projector whose range is the domain of $u$. The restriction of $u$ to its domain is an isometry. Projectors are particular examples of partial isometries.
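These identities are easy to check on a finite-dimensional example. Below is a small Python sketch (our own illustration, with a truncated shift standing in for a partial isometry on $\ell^2$; for real matrices the adjoint is the transpose):

```python
import numpy as np

# A finite-dimensional partial isometry: it maps e_0 -> e_1, e_1 -> e_2,
# and annihilates e_2 (a truncated shift on C^3).
u = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)

assert np.allclose(u @ u.T @ u, u)   # u u* u = u

p_range = u @ u.T   # projector onto the range of u: span{e_1, e_2}
p_dom   = u.T @ u   # projector onto the domain of u: span{e_0, e_1}
assert np.allclose(p_range @ p_range, p_range)   # idempotent
assert np.allclose(p_dom @ p_dom, p_dom)         # idempotent
```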

If $u$ is a partial isometry then $u^*$ is also a partial isometry, whose domain is the codomain of $u$ and whose codomain is the domain of $u$.

If the domain of $u$ is all of $H$, that is, if $u^*u = 1$, we say that $u$ has full domain, and similarly for the codomain. If $u$ and $v$ are two partial isometries, the equation $uu^* + vv^* = 1$ means that the codomains of $u$ and $v$ are orthogonal and that their direct sum is $H$.

### Partial permutations and partial isometries

It turns out that most of the operators needed to interpret logical operations are generated by partial permutations on the basis, which in particular entails that they are partial isometries.

More precisely, a partial permutation $\varphi$ on $\mathbb{N}$ is a function defined on a subset $D_\varphi$ of $\mathbb{N}$ which is one-to-one onto a subset $C_\varphi$ of $\mathbb{N}$. $D_\varphi$ is called the domain of $\varphi$ and $C_\varphi$ its codomain. Partial permutations may be composed: if $\psi$ is another partial permutation on $\mathbb{N}$ then $\varphi\circ\psi$ is defined by:

- $n\in D_{\varphi\circ\psi}$ iff $n\in D_\psi$ and $\psi(n)\in D_\varphi$;
- if $n\in D_{\varphi\circ\psi}$ then $\varphi\circ\psi(n) = \varphi(\psi(n))$;
- the codomain of $\varphi\circ\psi$ is the image of its domain.
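Partial permutations are straightforward to model concretely. Here is a short Python sketch (the dict-based encoding and the helper names are our own choice): a partial permutation is a dict whose keys form the domain and whose values form the codomain, injectivity making it invertible.

```python
def compose(phi, psi):
    """phi ∘ psi: defined on n when psi(n) exists and lies in dom(phi)."""
    return {n: phi[psi[n]] for n in psi if psi[n] in phi}

def inverse(phi):
    """The inverse partial permutation, swapping domain and codomain."""
    return {v: k for k, v in phi.items()}

phi = {0: 2, 1: 0}   # domain {0, 1}, codomain {2, 0}
psi = {2: 1, 3: 0}   # domain {2, 3}, codomain {1, 0}

assert compose(phi, psi) == {2: 0, 3: 2}
assert compose(inverse(phi), phi) == {0: 0, 1: 1}   # the identity 1_{D_phi}
```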

Partial permutations form a well-known structure of inverse monoid, which we now detail.

A partial identity is a partial permutation $1_D$ whose domain and codomain are both equal to a subset $D$, on which $1_D$ is the identity function. Among partial identities one finds the identity on the empty subset, that is, the empty map, which we will denote by $0$, and the identity on $\mathbb{N}$, which we will denote by $1$. This latter partial permutation is the neutral element for composition.

If $\varphi$ is a partial permutation there is an inverse partial permutation $\varphi^{-1}$ whose domain is $D_{\varphi^{-1}} = C_{\varphi}$ and which satisfies: $\varphi^{-1}\circ\varphi = 1_{D_\varphi}$ and $\varphi\circ\varphi^{-1} = 1_{C_\varphi}$.

Given a partial permutation $\varphi$ one defines a partial isometry $u_\varphi$ by $u_\varphi(e_n) = e_{\varphi(n)}$ if $n\in D_\varphi$, 0 otherwise. In other terms if $x=(x_n)_{n\in\mathbb{N}}$ is a sequence in $\ell^2$ then $u_\varphi(x)$ is the sequence $(y_n)_{n\in\mathbb{N}}$ defined by: $y_n = x_{\varphi^{-1}(n)}$ if $n\in C_\varphi$, 0 otherwise.

The domain of $u_\varphi$ is the subspace spanned by the family $(e_n)_{n\in D_\varphi}$ and the codomain of $u_\varphi$ is the subspace spanned by $(e_n)_{n\in C_\varphi}$. As a particular case, if $\varphi$ is $1_D$, the partial identity on $D$, then $u_\varphi$ is the projector on the subspace spanned by $(e_n)_{n\in D}$.

If ψ is another partial permutation then we have: $u_\varphi u_\psi = u_{\varphi\circ\psi}$.

If $\varphi$ is a partial permutation then the adjoint of $u_\varphi$ is: $u_\varphi^* = u_{\varphi^{-1}}$.

In particular the projector on the domain of $u_{\varphi}$ is given by: $u^*_\varphi u_\varphi = u_{1_{D_\varphi}}$.

and similarly the projector on the codomain of $u_\varphi$ is: $u_\varphi u_\varphi^* = u_{1_{C_\varphi}}$.
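The identities above can be checked on finite truncations. In the Python sketch below (the helper `matrix_of` is ours), a partial permutation given as a dict of indices is turned into the matrix of $u_\varphi$, and we verify both $u_\varphi u_\psi = u_{\varphi\circ\psi}$ and $u^*_\varphi u_\varphi = u_{1_{D_\varphi}}$:

```python
import numpy as np

def matrix_of(phi, size):
    """Finite truncation of u_phi: sends e_n to e_{phi(n)} for n in dom(phi)
    and annihilates the other basis vectors (all indices must be < size)."""
    m = np.zeros((size, size))
    for n, pn in phi.items():
        m[pn, n] = 1.0
    return m

phi = {0: 2, 1: 0}
psi = {2: 1, 3: 0}
u_phi, u_psi = matrix_of(phi, 4), matrix_of(psi, 4)

# u_phi u_psi = u_{phi ∘ psi}
comp = {n: phi[psi[n]] for n in psi if psi[n] in phi}
assert np.allclose(u_phi @ u_psi, matrix_of(comp, 4))

# u*_phi u_phi is the projector onto span{e_n : n in dom(phi)}
proj = matrix_of({n: n for n in phi}, 4)
assert np.allclose(u_phi.T @ u_phi, proj)
```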

## Interpreting the tensor

The first step is, given two types $A$ and $B$, to construct the type $A\tens B$. For this purpose we will define an isomorphism $H\oplus H \cong H$ by $x\oplus y\rightsquigarrow p(x)+q(y)$, where $p:H\to H$ and $q:H\to H$ are partial isometries given by:

$p(e_n) = e_{2n}$,
$q(e_n) = e_{2n+1}$.

This choice is actually arbitrary: any two partial isometries $p$ and $q$ with full domain and whose codomains sum to $H$ would do the job.

From the definition, $p$ and $q$ have full domain, that is, they satisfy $p^*p = q^*q = 1$. On the other hand their codomains are orthogonal, thus we have $p^*q = q^*p = 0$. Note that we also have $pp^* + qq^* = 1$, although this property is not needed in the sequel.
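These equations can be verified concretely by truncating: in the Python sketch below (our own illustration), $p$ and $q$ are represented as $2N\times N$ matrices, so that both have full domain $\mathbb{C}^N$ while their codomains partition the even and odd indices of $\mathbb{C}^{2N}$.

```python
import numpy as np

N = 4
# Truncations of p : e_n -> e_{2n} and q : e_n -> e_{2n+1},
# as 2N x N matrices.
P = np.zeros((2 * N, N))
Q = np.zeros((2 * N, N))
for n in range(N):
    P[2 * n, n] = 1.0
    Q[2 * n + 1, n] = 1.0

assert np.allclose(P.T @ P, np.eye(N))               # p*p = 1 (full domain)
assert np.allclose(Q.T @ Q, np.eye(N))               # q*q = 1
assert np.allclose(P.T @ Q, 0)                       # p*q = 0
assert np.allclose(Q.T @ P, 0)                       # q*p = 0
assert np.allclose(P @ P.T + Q @ Q.T, np.eye(2 * N)) # pp* + qq* = 1
```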

Let U be an operator on $H\oplus H$. We can write U as a matrix: $U = \begin{pmatrix} U_{11} & U_{12}\\ U_{21} & U_{22} \end{pmatrix}$

where each Uij operates on H.

Now through the isomorphism $H\oplus H\cong H$ we may transform $U$ into the operator $\bar U$ on $H$ defined by: $\bar U = pU_{11}p^* + pU_{12}q^* + qU_{21}p^* + qU_{22}q^*$.

We call $\bar U$ the internalization of U.
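A quick finite-dimensional sanity check (our own sketch, with truncated $p$ and $q$ built inside the block): through the isomorphism $x\oplus y\rightsquigarrow p(x)+q(y)$, the internalization $\bar U$ acts exactly as $U$ acts blockwise.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
P = np.zeros((2 * N, N))
Q = np.zeros((2 * N, N))
for n in range(N):
    P[2 * n, n] = 1.0       # truncation of p : e_n -> e_{2n}
    Q[2 * n + 1, n] = 1.0   # truncation of q : e_n -> e_{2n+1}

# An arbitrary operator U on H ⊕ H, given by its four blocks.
U11, U12, U21, U22 = (rng.standard_normal((N, N)) for _ in range(4))

# Internalization: U_bar = p U11 p* + p U12 q* + q U21 p* + q U22 q*.
U_bar = P @ U11 @ P.T + P @ U12 @ Q.T + Q @ U21 @ P.T + Q @ U22 @ Q.T

# U_bar(p(x) + q(y)) must equal p(U11 x + U12 y) + q(U21 x + U22 y):
x, y = rng.standard_normal(N), rng.standard_normal(N)
lhs = U_bar @ (P @ x + Q @ y)
rhs = P @ (U11 @ x + U12 @ y) + Q @ (U21 @ x + U22 @ y)
assert np.allclose(lhs, rhs)
```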

Given A and B two types, we define their tensor by: $A\tens B = \{pup^* + qvq^*, u\in A, v\in B\}\biorth$

From what precedes we see that $A\tens B$ is generated by the internalizations of operators on $H\oplus H$ of the form: $\begin{pmatrix} u & 0\\ 0 & v \end{pmatrix}$
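As a sanity check (again a finite-dimensional sketch of our own, using the same truncated $p$ and $q$ representation), a generator $pup^* + qvq^*$ of $A\tens B$ indeed acts through the isomorphism as the block-diagonal operator with blocks $u$ and $v$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
P = np.zeros((2 * N, N))
Q = np.zeros((2 * N, N))
for n in range(N):
    P[2 * n, n] = 1.0
    Q[2 * n + 1, n] = 1.0

u, v = rng.standard_normal((N, N)), rng.standard_normal((N, N))

# Generator p u p* + q v q* of the tensor type ...
t = P @ u @ P.T + Q @ v @ Q.T

# ... acts on p(x) + q(y) as the block-diagonal operator diag(u, v):
x, y = rng.standard_normal(N), rng.standard_normal(N)
assert np.allclose(t @ (P @ x + Q @ y), P @ (u @ x) + Q @ (v @ y))
```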