## Monday, April 11, 2005

### Was von Neumann right after all?

The work with a TGD-inspired model for topological quantum computation led to the realization that von Neumann algebras, in particular hyper-finite factors of type II_1, seem to provide the mathematics needed to develop a more explicit view about the construction of the S-matrix. I have already discussed a vision for how to achieve this. In this chapter I will discuss in a more explicit manner various fascinating aspects of type II_1 factors and their physical interpretation in the TGD framework.

### Philosophical ideas behind von Neumann algebras

The goal of von Neumann was to generalize the algebra of quantum mechanical observables. The basic ideas behind von Neumann algebras are dictated by physics. The algebra elements allow Hermitian conjugation, and observables correspond to Hermitian operators. A measurable function of an operator belongs to the algebra. The predictions of quantum theory are expressible in terms of traces of observables; the density matrix defining the expectations of observables in an ensemble is the basic example.

The highly non-trivial requirement of von Neumann was that identical a priori probabilities for the detection of the states of an infinite-state system must make sense. Since quantum mechanical expectation values are expressible in terms of operator traces, this requires that the unit operator has unit trace. In the finite-dimensional case it is easy to build observables out of minimal projections to 1-dimensional eigenspaces of observables. In the infinite-dimensional case the probability of a projection to a 1-dimensional subspace vanishes if each state is equally probable. The notion of observable must thus be modified by excluding 1-dimensional minimal projections and allowing only projections for which the trace would be infinite using the straightforward generalization of the matrix algebra trace as the dimension of the projection.

A non-trivial implication of the fact that traces of projections are never larger than one is that the eigenspaces of the density matrix must be infinite-dimensional for non-vanishing projection probabilities: quantum measurements can lead with a finite probability only to mixed states whose density matrix is a projection operator to an infinite-dimensional subspace. The simple von Neumann algebras for which the unit operator has unit trace are known as factors of type II_1. The definitions adopted by von Neumann allow, however, more general algebras.
Type I_n algebras correspond to finite-dimensional matrix algebras with finite traces, whereas I_infty does not allow a bounded trace. For algebras of type III traces are always infinite and the notion of trace becomes useless (it might however be possible to assign to the trace a number-theoretic interpretation, say as an infinite prime having unit norm in any finite-p p-adic topology).
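The classification quoted above can be summarized by the possible values of the trace on projections P in each type (this compact form is the standard Murray-von Neumann classification, not anything TGD-specific):

```latex
\begin{align*}
\text{Type } I_n:       &\quad \operatorname{Tr}(P) \in \{0, 1, \dots, n\} \\
\text{Type } I_\infty:  &\quad \operatorname{Tr}(P) \in \{0, 1, 2, \dots\} \cup \{\infty\} \\
\text{Type } II_1:      &\quad \operatorname{Tr}(P) \in [0, 1] \\
\text{Type } II_\infty: &\quad \operatorname{Tr}(P) \in [0, \infty] \\
\text{Type } III:       &\quad \operatorname{Tr}(P) \in \{0, \infty\}
\end{align*}
```

The continuum [0,1] for type II_1 is exactly what allows the unit operator to have unit trace while excluding minimal 1-dimensional projections.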

### von Neumann, Dirac, and Feynman

The association of algebras of type I with standard quantum mechanics made it possible to unify matrix mechanics with wave mechanics. Note however that the assumption of a continuous momentum state basis is in conflict with separability, but the particle-in-a-box idealization allows one to circumvent this problem (the notion of space-time sheet brings the box into physics as something completely real). Because of the finiteness of traces von Neumann regarded the factors of type II_1 as fundamental and the factors of type III as pathological.

The highly pragmatic and successful approach of Dirac based on the notion of the delta function, plus the emergence of Feynman graphs and the functional integral, meant that the von Neumann approach was forgotten to a large extent. Algebras of type II_1 have emerged only much later, in conformal and topological quantum field theories, where they make it possible to deduce invariants of knots, links, and 3-manifolds. Also the algebraic structures known as bi-algebras, Hopf algebras, and ribbon algebras relate closely to type II_1 factors, and they play an especially important role in topological quantum computation based on braids and the corresponding topological S-matrices. In axiomatic quantum field theory defined in Minkowski space the algebras of observables associated with bounded space-time regions correspond quite generally to the hyper-finite factor of type III_1.

One can criticize the idea of identical a priori probabilities, but the assumption could also be justified by the finiteness of quantum theory. Indeed, it is traces which produce the infinities of quantum field theories. The regularization procedures used to eliminate the divergences might actually be a manner of transforming the III_1 type algebra of quantum field theories to a II_1 type algebra.

### Factors of type II_1 and quantum TGD

For me personally the realization that the TGD Universe is tailored for topological quantum computation led also to the realization that hyper-finite (ideal for numerical approximations) von Neumann algebras of type II_1 have a direct relevance for TGD.

1. Equivalence of generalized loop diagrams with tree diagrams

The work with bi-algebras led to the proposal that the generalized Feynman diagrams of TGD at the space-time level satisfy a generalization of the duality of old-fashioned string models. Generalized Feynman diagrams containing loops are equivalent with tree diagrams, so that they could be interpreted as representing computations or analytic continuations. This symmetry can be formulated as a condition on algebraic structures generalizing bi-algebras. The new element is the possibility of vacuum lines having a natural counterpart at the level of bi-algebras and braid diagrams. At the space-time level they correspond to vacuum extremals.

2. Generalized Feynman diagrams and basic properties of hyper-finite II_1 factors

The basic facts about von Neumann factors of type II_1 suggest a more concrete view about the general mathematical framework needed.
• The effective 2-dimensionality of the construction of quantum states and configuration space geometry in the quantum TGD framework makes hyper-finite factors of type II_1 very natural as operator algebras of the state space. Indeed, the elements of conformal algebras are labelled by discrete numbers, and also the modes of the induced spinor fields are labelled by a discrete label, which guarantees that the tangent space of the configuration space is a separable Hilbert space and its Clifford algebra is thus a hyper-finite type II_1 factor. The same holds true also at the level of configuration space degrees of freedom, so that bosonic degrees of freedom correspond to a factor of type I_infty unless super-symmetry reduces it to a factor of type II_1.
• Four-momenta relate to the positions of the tips of the future and past directed light-cones appearing naturally in the construction of the S-matrix. In fact, the configuration space of 3-surfaces can be regarded as a union of big bang/big crunch type configuration spaces obtained as a union of light-cones parameterized by the positions of their tips. The algebras of observables associated with bounded regions of M^4 are hyper-finite and of type III_1. The algebras of observables in the space spanned by the tips of these light-cones are not needed in the construction of the S-matrix, so there are good hopes of avoiding infinities coming from infinite traces.
• The many-sheeted space-time concept forces one to refine the notion of sub-system. Jones inclusions N ⊂ M for factors of type II_1 define in a generic manner the imbedding of interacting sub-systems into a universal II_1 factor, which now corresponds naturally to the infinite Clifford algebra of the tangent space of the configuration space of 3-surfaces and contains the interaction as an M:N-dimensional analog of a tensor factor. Topological condensation of a space-time sheet to a larger space-time sheet, formation of bound states by the generation of join along boundaries bonds, interaction vertices in which the space-time surface branches like a line of a Feynman diagram: all these situations could be described by a Jones inclusion characterized by the Jones index M:N, assigning to the inclusion also a minimal conformal field theory, and a conformal theory with k=1 Kac-Moody symmetry for M:N=4. The M:N=4 option need not be realized physically and might correspond to the fact that dimensional regularization works only in D = 4 - epsilon.
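Since the Jones index values below 4 recur throughout this discussion, a minimal numerical sketch of the discrete series M:N = 4 cos^2(pi/n), n >= 3, may be useful (this is standard subfactor theory; nothing TGD-specific is assumed):

```python
import math

def jones_index(n: int) -> float:
    """Jones index M:N = 4*cos^2(pi/n) of an inclusion of II_1 factors, n >= 3."""
    if n < 3:
        raise ValueError("the discrete series of indices starts at n = 3")
    return 4.0 * math.cos(math.pi / n) ** 2

# The series starts at 1 (n = 3), passes through 2 (n = 4) and the square
# of the golden ratio (n = 5), and approaches the limiting value 4 from below.
for n in (3, 4, 5, 6, 100):
    print(n, jones_index(n))
```

The values below 4 are exactly the Beraha numbers mentioned later in connection with the proposed hbar evolution.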
• The construction of generalized Feynman diagrams requires the identification of the counterparts of propagators as unitary evolutions of single particle systems along 3-D light-like causal determinants representing the lines of generalized Feynman diagrams as orbits of partons. von Neumann algebras allow a universal unitary automorphism Delta^{it} fixed apart from inner automorphisms, and this automorphism is an extremely natural candidate for this unitary evolution. Only the value of the parameter t would remain open.
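For reference, the automorphism meant here is the modular flow of Tomita-Takesaki theory: the modular operator Delta associated with a cyclic and separating vector defines the one-parameter group

```latex
\sigma_t(x) \;=\; \Delta^{it}\, x\, \Delta^{-it}, \qquad x \in M, \quad t \in \mathbb{R},
```

which by Connes' cocycle theorem is independent of the chosen state up to inner automorphisms, which is exactly the residual non-uniqueness noted above.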
• The vertices must be constructed as overlaps of the lines (light-like 3-D CDs) entering the vertex, which is a 2-D partonic surface. Jones inclusions might define universal vertices. A simultaneous imbedding of all lines into a factor M of type II_1 is required, and the vertex can be obtained as a vacuum expectation of the product of the operators defining the state. The only non-uniqueness problem is that the imbeddings are fixed apart from inner automorphisms only. The algebraic hologram idea is realized in the sense that the operator algebras of all other lines are imbeddable into the operator algebra of a given line. The triviality of loops for generalized Feynman diagrams gives strong conditions on the vertices.
3. Is hbar dynamical?

The work with topological quantum computation inspired the hypothesis that hbar might be dynamical, and that its values might relate in a simple manner to the logarithms of the Beraha numbers giving the Jones indices M:N. The model for the evolution of hbar implied that hbar is infinite for the minimal value M:N=1 of the Jones index.

The construction of a model explaining the strange finding that planetary orbits seem to correspond to a gigantic value of the "gravitational" Planck constant led to the hypothesis that when the system becomes non-perturbative, so that the perturbative expansion in terms of the parameter k = alpha*Q_1*Q_2 ceases to converge, a phase transition increasing the value of hbar to hbar_s = k*hbar/v_0 occurs, where v_0 is the ratio of the Planck length to the CP_2 length. This involves also a transition to a macroscopic quantum phase, since Compton lengths and times increase dramatically. Dark matter would correspond to ordinary matter with a large value of hbar, which is conformally confined in the sense that the sum of the complex super-canonical conformal weights (related in a simple manner to the complex zeros of Riemann Zeta) is real for a many-particle state behaving like a single quantum coherent unit.

The value of hbar for M:N=1 is large but not infinite, and thus in conflict with the original proposal. A more refined suggestion is that the evolution of hbar as a function of M:N = 4cos^2(pi/n) can be interpreted as a renormalization group evolution for the phase resolution. The earlier identification is replaced by a linear renormalization group equation for 1/hbar allowing as its solutions the earlier solution plus an arbitrary integration constant. Hence 1/hbar can approach a finite value 1/hbar(n=3) = (v_0/k)*1/hbar(n --> infty) in the limit n --> 3.
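The claimed phase transition itself is simple arithmetic. The sketch below only illustrates the scaling rule hbar -> hbar_s = k*hbar/v_0 stated above; the numerical value of v_0 is left as a free parameter, since the post fixes it only as a ratio of length scales, and the value used in the example comment is a pure assumption:

```python
def scaled_planck_constant(hbar: float, alpha: float, q1: float, q2: float, v0: float) -> float:
    """Hypothetical non-perturbative scaling hbar -> hbar_s = k*hbar/v0,
    with k = alpha*Q1*Q2 the perturbative expansion parameter."""
    k = alpha * q1 * q2
    return k * hbar / v0

def compton_scale_factor(alpha: float, q1: float, q2: float, v0: float) -> float:
    """The Compton length lambda = hbar/(m*c) scales by the same factor k/v0."""
    return (alpha * q1 * q2) / v0

# Illustration only: with k = 1 and an assumed v0 = 2**-11, the Compton
# length would grow by a factor of 2048, i.e. a macroscopic quantum phase.
```

The point of the sketch is merely that any k of order one combined with a small v_0 turns a microscopic Compton length into a macroscopic one.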
The evolution equation gives a concrete view about how the various charges should be imbedded via a Jones inclusion into the larger algebra so that the value of hbar appearing in the commutators evolves in the required manner. The dependence of hbar on the parameters of the interacting systems means that it is associated with the interface of the interacting systems. Instead of being an absolute constant of nature, hbar becomes something characterizing the interaction between two systems, the "position" of the II_1 factor N inside M. The interface could correspond to wormhole contacts, join along boundaries bonds, light-like causal determinants, etc. This property of hbar is consistent with the fact that the vacuum functional, expressible as an exponent of Kähler action, does not depend at all on hbar.

If this vision is correct, and the evidence for its correctness is growing steadily, the conclusion is that the struggle with the infinities of quantum field theories, which started in the days of Dirac and for which M-theory represented the catastrophic grand finale, has been solely due to bad mathematics. If the pragmatic colleagues had believed von Neumann, the landscape of theoretical physics might look quite different now.

Matti Pitkänen