https://matpitka.blogspot.com/2011/06/

Wednesday, June 29, 2011

What about counterparts of T, S, and U dualities in TGD framework?

The natural question is what could be the TGD counterparts of S-, T- and U-dualities. If one accepts the identification of U-duality as the product U=ST and the proposed counterpart of T-duality as the strong form of general coordinate invariance discussed in a previous posting, it remains to understand the TGD counterpart of S-duality - in other words electric-magnetic duality - relating theories with gauge couplings g and 1/g. Quantum criticality selects the preferred value of gK: the Kähler coupling strength is very near to the fine structure constant at the electron length scale and could even be equal to it. Since there is no coupling constant evolution associated with αK, it does not make sense to say that gK becomes strong and is replaced by its inverse at some point. One should instead be able to formulate the counterpart of S-duality as an identity following from the weak form of electric-magnetic duality and the reduction of TGD to almost topological QFT. This indeed seems to be the case.

TGD based view about S-duality

The following arguments suggest that in the TGD framework S-duality is realized for each preferred extremal of Kähler action separately, whereas in the standard view the duality would be realized only at the level of the path integral defining the partition function.

  1. For preferred extremals the interior part of the Kähler action reduces to a boundary term because the term jμAμ vanishes. The weak form of electric-magnetic duality requires that Kähler electric charge is proportional to Kähler magnetic charge, which implies a reduction to an abelian Chern-Simons term: the Kähler coupling strength does not appear at all in the Chern-Simons term. The proportionality constant between the electric and magnetic parts JE and JB of the Kähler form however enters the dynamics through the boundary conditions stating the weak form of electric-magnetic duality. On the Minkowskian side the proportionality constant must be proportional to gK2 in order to guarantee the correct value for the unit of Kähler electric charge - equal to that of the electric charge at the electron length scale - from the assumption that electric charge is proportional to the topologically quantized magnetic charge. It has been assumed that

    JE= αK JB

    holds true on both sides of the wormhole throat, but this is an unnecessarily strong assumption on the Euclidian side. In fact, the self-duality of the CP2 Kähler form, stating

    JE=JB

    favours this boundary condition on the Euclidian side of the wormhole throat. Also the fact that one cannot distinguish between electric and magnetic charges in Euclidian regions, since all charges are magnetic there, can be used to argue in favor of this form. The same constraint arises from the condition that the action for the CP2 type vacuum extremal has the value required by the argument leading to a prediction for the gravitational constant in terms of the square of the CP2 radius and αK: the effective replacement gK2 → 1 would spoil this argument.

  2. Minkowskian and Euclidian regions should correspond to strongly resp. weakly interacting phases in which Kähler magnetic resp. electric charges provide the proper description. In the Euclidian regions associated with CP2 type extremals there is a natural interpretation for the interactions between the magnetic monopoles associated with the light-like throats: for the CP2 type vacuum extremal itself magnetic and electric charges are actually identical and cannot be distinguished from each other. Therefore the duality between strong and weak coupling phases seems to be trivially true in Euclidian regions if one has JB = JE on the Euclidian side of the wormhole throat. This is however an unnecessarily strong condition, as the following argument shows.

  3. In Minkowskian regions the interaction is via Kähler electric charges, and elementary particles have vanishing total Kähler magnetic charge since they consist of pairs of Kähler magnetic monopoles, so that one has the confinement characteristic of a strongly interacting phase. Therefore Minkowskian regions naturally correspond to a weakly interacting phase for Kähler electric charges. One can write the action density at the Minkowskian side of the wormhole throat as

    (JE2 - JB2)/αK = αK JB2 - JB2/αK .

    The exchange JE ↔ JB accompanied by gK2 → -1/gK2 leaves the action density invariant. Since by the almost topological QFT property only the behavior of the vacuum functional infinitesimally near the wormhole throat matters, the duality is realized. Note that the argument goes through also in Euclidian regions, so that it does not allow one to decide which form of the weak form of electric-magnetic duality is the correct one.

  4. S-duality could correspond geometrically to the duality between partonic 2-surfaces, responsible for magnetic fluxes, and string world sheets, responsible for electric fluxes realized as rotations of the Kähler gauge potential around them. It would be very closely related to the counterpart of T-duality implied by the strong form of general coordinate invariance, stating that the space-like 3-surfaces at the ends of space-time sheets are equivalent with the light-like 3-surfaces connecting them.

Comparison with the standard view about dualities

One can compare the proposed realization of T-, S- and U-duality to the more general dualities defined by the modular group SL(2,Z), which in the QFT framework can hold true for the path integral over all possible gauge field configurations. In the present case the dualities hold true for every preferred extremal separately, and the functional integral is only over the space-time projections of the fixed Kähler form of CP2. Modular invariance was discussed by E. Verlinde for the Maxwell action with θ term on a general compact 4-manifold with Euclidian signature of the metric. In this case the path integral gives a sum over an infinite number of extrema characterized by the cohomological equivalence class of the Maxwell field, and these extrema dominate the path integral to a high degree. Modular invariance is broken for CP2: one obtains invariance only under τ → τ+2, whereas S induces a phase factor in the path integral.

  1. In the present case these homology equivalence classes would correspond to homology equivalence classes of holomorphic partonic 2-surfaces associated with the critical points of the Kähler function with respect to zero modes.

  2. If the Euclidian contribution to the Kähler action is expressible solely in terms of wormhole throat Chern-Simons terms, and if one can neglect the measurement interaction terms, the exponent of the Kähler action can be expressed in terms of the Chern-Simons action density as

    L= τ LC-S ,

    LC-S=J∧ A ,

    τ=1/gK2 +ik/4π , k=1 .

    Here the parameter τ transforms under the full SL(2,Z) group as

    τ→ (aτ+b)/(cτ+d) .

    The generators of the SL(2,Z) transformations are T: τ → τ+1 and S: τ → -1/τ (a small symbolic illustration of this action is given below, after this list). The imaginary part in the exponent corresponds to the Kac-Moody central extension k=1.

    This form corresponds also to the general form of the Maxwell action with a CP breaking θ term, given by

    L= 1/gK2 J∧*J +i(θ/8π2) J∧J , θ=2π .

    Hence the Minkowskian part mimics the θ term, but with a value of θ for which the term does not give rise to CP breaking when the action is the full action for the CP2 type vacuum extremal, so that the phase equals 2π and the phase factor is trivial. It would seem that the deviation from the full CP2 action due to the presence of wormhole throats, which reduces the value of the full Kähler action of the CP2 type vacuum extremal, gives rise to CP breaking. One can visualize the excluded volume as homologically non-trivial geodesic spheres with some thickness in the two transverse dimensions. In the limit of infinitely thin geodesic spheres CP breaking would vanish. The effect is exponentially sensitive to the volume deficit.
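
The modular action described above can be checked with a few lines of computer algebra. The following minimal sketch (Python with sympy) only illustrates the standard SL(2,Z) action on τ; the symbol x stands for 1/gK2 and is left free, since no particular numerical value is assumed here.

    # Sketch: the SL(2,Z) action on tau = 1/g_K^2 + i*k/(4*pi), k = 1.
    # x below stands for 1/g_K^2 and is left symbolic (no value assumed).
    from sympy import I, pi, simplify, symbols

    def moebius(tau, a, b, c, d):
        """Apply tau -> (a*tau + b)/(c*tau + d) for integers with a*d - b*c = 1."""
        assert a*d - b*c == 1
        return (a*tau + b)/(c*tau + d)

    tau = symbols('tau')
    T = lambda t: moebius(t, 1, 1, 0, 1)    # T: tau -> tau + 1
    S = lambda t: moebius(t, 0, -1, 1, 0)   # S: tau -> -1/tau

    # S and T generate the modular group; (S o T)^3 acts as the identity:
    print(simplify(S(T(S(T(S(T(tau)))))) - tau))   # -> 0

    x = symbols('x', positive=True)          # placeholder for 1/g_K^2
    tau0 = x + I/(4*pi)                      # k = 1 as in the text
    print(simplify(S(tau0)))                 # the S-transformed coupling -1/tau0
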

CP breaking and ground state degeneracy

Ground state degeneracy, due to the possibility of having both signs for the Minkowskian contribution to the exponent of the vacuum functional, provides a general view about the description of CP breaking in the TGD framework.

  1. In the TGD framework the path integral is replaced by an inner product involving an integral over WCW. The vacuum functional and its conjugate are associated with the states in the inner product, so that the phases of the vacuum functionals would cancel if only one sign for the phase were allowed, and the Minkowskian contribution would have no physical significance. This of course cannot be the case. The ground state is actually degenerate, corresponding to the phase factor and its complex conjugate, since the square root of the metric determinant can have two signs in Minkowskian regions. Therefore the inner products between the states associated with the two ground states define a 2×2 matrix whose non-diagonal elements contain interference terms due to the presence of the phase factor. In the limit of a full CP2 type vacuum extremal the two ground states would reduce to each other and the determinant of the matrix would vanish.

  2. A small mixing of the two ground states would give rise to CP breaking, and the first-principle description of CP breaking in systems like the K-Kbar system, and of the CKM matrix, should reduce to this mixing. K0 mesons would in the first approximation be CP even and odd states corresponding to the sum and difference of the two ground states. A small mixing would be present, with exponential sensitivity to the actions of the CP2 type extremals representing wormhole throats (a toy illustration of this mixing is given below, after the remark). This might allow one to understand qualitatively why the mixing is about 50 times larger than expected for B0 mesons.

  3. There is a strong temptation to assign the two ground states to the two possible arrows of geometric time. At the level of the M-matrix the two arrows would correspond to state preparation at either the upper or the lower boundary of CD. Do the long- and short-lived neutral K mesons correspond to nearly fifty-fifty orthogonal superpositions of the two arrows of geometric time, or almost completely to a fixed arrow of time induced by the environment? Is the dominant part of the arrow the same for both, or is it opposite for the long- and short-lived neutral mesons? The different lifetimes suggest that the arrow must be the same for both, apart from a small leakage induced by the environment. CP breaking would be induced by the fact that in the construction of the states CP is performed only for K0 but not for the environment. One can probably imagine also alternative interpretations.

Remark: The proportionality of the Minkowskian and Euclidian contributions to the same Chern-Simons term implies that the critical points with respect to zero modes appear for both the phase and the modulus of the vacuum functional. The Kähler function property does not allow extrema of the vacuum functional as a function of the complex coordinates of WCW, since this would mean a Kähler metric with non-Euclidian signature. If this were not the case, the stationary values of the phase factor and the extrema of the modulus of the vacuum functional would correspond to different configurations.
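
The 2×2 structure described in items 1 and 2 can be made concrete with a toy numerical model. The sketch below (Python) only illustrates the bookkeeping: the overlap z between the two ground states is an arbitrary illustrative number, not a quantity computed from the Kähler action, and the CP even/odd combinations play the role of the K0 states of the first approximation.

    # Toy model: overlap matrix of the two ground states G+ and G- and the
    # induced mixing of the CP even/odd combinations. The overlap z is an
    # arbitrary illustrative number, not computed from Kähler action.
    import numpy as np

    z = 0.02 * np.exp(1j * 0.3)                    # small complex overlap <G+|G->
    M = np.array([[1.0, z], [np.conj(z), 1.0]])    # matrix of inner products
    print("det M =", np.linalg.det(M))             # 1 - |z|^2; -> 0 in the limit z -> 1

    K_even = np.array([1.0, 1.0]) / np.sqrt(2)     # CP even combination (sum)
    K_odd  = np.array([1.0, -1.0]) / np.sqrt(2)    # CP odd combination (difference)

    # Off-diagonal element in the CP basis: non-zero only if z has a phase,
    # and this is the mixing responsible for CP breaking in this toy model.
    print("CP even-odd mixing:", K_even @ M @ K_odd)   # = -i Im(z)
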

For details see the new chapter Motives and Infinite Primes of "Physics as a Generalized Number Theory" or the article with the same title.

Sunday, June 26, 2011

K-theory, branes, and TGD

K-theory is an essential part of motivic cohomology. Unfortunately, this theory is very abstract and the articles written by mathematicians are usually incomprehensible for a physicist. Hence the best manner to learn K-theory is to learn about its physics applications. The most important applications are brane classification in superstring models and M-theory. The excellent lectures by Jarah Evslin with the title What doesn't K-theory classify? make it possible to learn the basic motivations for the classification, what kind of classifications are possible, and what the failures are. Also the Wikipedia article gives a bird's eye view of the problems. As a by-product one learns something about the basic ideas of K-theory.

In the sequel I will discuss critically the basic assumptions of the brane world scenario, sum up my understanding of the problems related to the topological classification of branes and to the notion itself, ask what goes wrong with branes and demonstrate how the problems are avoided in the TGD framework, and conclude with a proposal for a natural generalization of K-theory to include also the division of bundles, inspired by the generalization of Feynman diagrammatics in quantum TGD, by zero energy ontology, and by the notion of finite measurement resolution.

Brane world scenario

The brane world scenario looks attractive from the mathematical point of view if one is able to get accustomed to the idea that the basic geometric objects have varying dimensions. Even accepting the varying dimensions, the basic physical assumptions behind this scenario are vulnerable to criticism.

  1. Branes are geometric objects of varying dimension in the 10-/11-dimensional space-time - call it M - of superstring theory/M-theory. In M-theory the fundamental strings are replaced with M2-branes, which are 2-D membranes with a 3-dimensional orbit, whose magnetic dual is the M5-brane with a 6-D orbit. Branes are thought to emerge non-perturbatively from the fundamental 2-branes, but what this really means is not understood. One has Dp-branes with Dirichlet boundary conditions fixing a p+1-dimensional surface of M as the brane orbit: one of the dimensions corresponds to time. Also S-branes localized in time have been proposed.

  2. In the description of the classical limit branes interact with the classical fields of the target space by a generalization of the minimal coupling of a charged point-like particle to the electromagnetic gauge potential. The coupling is simply the integral of the gauge potential over the world line - the value of the 1-form for the world line. A point-like particle represents a 0-brane, and in the case of a p-brane the generalization is obtained by replacing the gauge potential represented by a 1-form with a p+1-form (written out explicitly right after this list). The exterior derivative of this p+1-form is a p+2-form representing the analog of the electromagnetic field. Complete dimensional democracy strongly suggests that string world sheets should be regarded as 1-branes.

  3. From the TGD point of view the introduction of branes looks like a rather ad hoc trick. By generalizing the coupling of the electromagnetic gauge potential to the world line of a point-like particle one could introduce extended objects of various dimensions also in ordinary 4-D Maxwell theory, but they would always be interpreted as idealizations for the carriers of 4-currents. Therefore the crucial step leading to branes involves a classical idealization in conflict with the Uncertainty Principle and the genuinely quantal description in terms of fields coupled to gauge potentials.

    My view is that the most natural interpretation for what is behind branes is in terms of currents in the D=10 or D=11 space-time. In this scheme branes have a role only as semi-classical idealizations making sense only above some scale. Both the reduction of string theories to quantum field theories by holography and the dynamical character of the metric of the target space conform with the supergravity interpretation. Internal consistency requires also the identification of strings as branes, so that superstring theories and M-theory would reduce to an idealization of 10-/11-dimensional quantum gravity.
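
For concreteness, the generalization of the minimal coupling mentioned in item 2 can be written out explicitly (a standard textbook formula, nothing specific to TGD):

    Sint = q ∫worldline A   →   Sint = μp ∫worldvolume Cp+1 ,   Gp+2 = dCp+1 ,

so that the p+2-form Gp+2 plays exactly the role that the electromagnetic field strength F = dA plays for a point particle (p = 0).
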

In this framework the brave brane world episode would have been a very useful Odyssey. The possibility of interpreting various geometric objects physically has proved to be an extremely powerful tool for formulating provable conjectures and has produced a lot of immensely beautiful mathematics. As a fundamental theory, however, this kind of approach does not look convincing to me.

The basic challenge: classify the conserved charges associated with branes

One can of course forget these critical arguments and look at whether this general picture works. The first thing that one can do is to classify the branes topologically. I asked the same question about 32 years ago in the TGD framework: I thought that cobordism for 3-manifolds might give highly interesting topological conservation laws. I was disappointed. The results of Thom's classical article about manifold cobordism demonstrated that there is no hope for really interesting conservation laws. The assumption of Lorentz cobordism, meaning the existence of a global time-like vector field, would make the situation more interesting, but this condition looked too strong and I could not see a real justification for it. In generalized Feynman diagrammatics there is no need for this kind of condition.

There are many alternative approaches to the classification problem. One can use homotopy, homology, cohomology and their relative and other variants, topological or algebraic K-theory, twisted K-theory, and variants of K-theory not yet existing but to be proposed within the next few years. The list is probably endless unless something like motivic cohomology brings in enlightenment.

  1. First of all one must decide whether one classifies the p-dimensional time = constant sections of p-branes or their p+1-dimensional orbits. Both approaches have been applied, although the first one is natural in the standard view about spontaneous compactification. For the first option topological invariants could be seen as conserved charges: homotopy invariants and homological and cohomological characteristics of branes provide invariants of this kind. For the latter option the invariants would be analogous to an instanton number characterizing the change of the magnetic charge.

  2. Purely topological invariants come first to mind. Homotopy groups of the brane are invariants inherent to the brane (the brane topology can however change). Homological and cohomological characteristics of branes in singular homology characterize the imbedding into the target space. There are also more delicate differential-topological invariants such as de Rham cohomology, defining invariants analogous to magnetic charges. Dolbeault cohomology emerges naturally for even-dimensional branes with a complex structure.

  3. Gauge theories - both abelian and non-abelian - define a standard approach to the construction of brane charges for the bundle structures assigned to branes. Chern classes are fundamental invariants of this kind. Also more delicate invariants associated with gauge potentials can be considered. Chern-Simons theory, with vanishing field strengths for the solutions of field equations, provides a basic example of this. For instance, SU(2) Chern-Simons theory provides 3-D topological invariants and knot invariants.

  4. More refined approaches involve K-theory - closely related to motivic cohomology - and its twisted version. The idea is to reduce the classification of branes to the classification of the bundle structures associated with them. This approach has had remarkable successes but also has its shortcomings.

The challenge is to find the mathematical classification which best suits the physical intuitions (which might be fatally wrong, as already proposed) but is universal at the same time. This challenge has turned out to be tough. The Ramond-Ramond (RR) p-form fields of type II superstring theory are rather delicate objects and the source of most of the problems. Difficulties arise also from the presence of the Neveu-Schwarz 3-form H = dB defining a classical background field.

K-theory has emerged as a good candidate for the classification of branes. It leaves the confines of homology and uses the bundle structures associated with branes, classifying these instead. There are many K-theories. In topological K-theory bundles form an algebraic structure with sum, difference, and multiplication. The sum is simply the direct sum of the fibers of bundles with a common base space. The product reduces to the tensor product of the fibers. The difference of bundles represents a more abstract notion. It is obtained by replacing bundles with pairs, in much the same way as rationals can be thought of as pairs of integers with the equivalence (m,n) = (km,kn), k an integer: pairs (n,1) represent integers and pairs (1,n) their inverses. In the present case one replaces multiplication with the sum and regards the bundle pairs (E,F) and (E+G,F+G) as equivalent. Although the pair as such remains a formal notion, each pair must also have a real-world representative. Therefore the sign of the bundle must have a meaning and corresponds to the sign of the charges assigned to the bundle. The charges are analogous to the winding of the brane, and one can call a brane with negative winding an antibrane. The interpretation in terms of orientation looks rather natural. Later a TGD-inspired concrete interpretation of bundle sum, difference, product, and also division will be proposed.
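
The pair construction sketched above is the standard Grothendieck completion. A minimal illustration (Python), in which bundles are reduced to nothing but their fiber dimensions so that all finer K-theoretic information is deliberately thrown away:

    # Toy Grothendieck construction: a "virtual bundle" is a formal difference
    # E - F of bundles, modelled here only by fiber dimensions (ranks).
    class VirtualBundle:
        def __init__(self, rank_E, rank_F):
            self.rank_E = rank_E            # "brane" part
            self.rank_F = rank_F            # "antibrane" part (opposite sign/orientation)

        def __add__(self, other):           # direct sum of pairs
            return VirtualBundle(self.rank_E + other.rank_E, self.rank_F + other.rank_F)

        def virtual_rank(self):             # the only invariant kept in this toy model
            return self.rank_E - self.rank_F

        def equivalent(self, other):        # (E, F) ~ (E+G, F+G)
            return self.virtual_rank() == other.virtual_rank()

    E_minus_F = VirtualBundle(3, 1)
    stabilized = VirtualBundle(3 + 5, 1 + 5)                  # add the same G to both slots
    print(E_minus_F.equivalent(stabilized))                   # True
    print((E_minus_F + VirtualBundle(1, 3)).virtual_rank())   # brane + antibrane pair -> 0
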

Problems related to the existence of spinor structure

Many problems in the classification of brane charges relate to the existence of spinor structure. The existence of spinor structure is a problem already in general relativity, since an ordinary spinor structure exists only if the second Stiefel-Whitney class of the manifold vanishes: if the third Stiefel-Whitney class vanishes one can introduce a so-called spinc structure. Problems of this kind are encountered already in lattice QCD, where periodic boundary conditions imply a non-uniqueness having an interpretation in terms of 16 different spinor structures with no obvious physical interpretation. One of the strengths of TGD is that the notion of induced spinor structure eliminates all problems of this kind completely. One can therefore find direct support for the TGD-based notion of spinor structure from the basic inconsistency of QCD lattice calculations!
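
For orientation, the standard existence conditions used in the items below can be summarized as follows (textbook facts; here w3 denotes the integral class obtained from the second Stiefel-Whitney class w2 by the Bockstein homomorphism):

    spin structure exists                         ⟺   w2 = 0 ,
    spinc structure exists                        ⟺   w3 = β(w2) = 0 ,
    twisted spinc structure on a wrapped cycle    ⟺   w3 + [H] = 0 .
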

  1. The Freed-Witten anomaly appearing in type II string theories represents one of the problems. Freed and Witten show that in the case of 2-branes, for which the generalized gauge potential is a 3-form, a so-called spinc structure is needed; it exists if the third Stiefel-Whitney class w3 vanishes. Here w3 is obtained from the second Stiefel-Whitney class, whose vanishing guarantees the existence of an ordinary spin structure. (In the TGD framework the spinc structure of CP2 is absolutely essential for obtaining the standard model symmetries.)

    It can however happen that w3 is non-vanishing. In this case it is still possible to have a modified spinc structure if the condition w3 + [H] = 0 holds true. There can however be an obstruction to having this structure - in other words w3 + [H] does not vanish - known as the Freed-Witten anomaly. In this case the K-theory classification fails. Freed and Witten argue that physically the wrapping of a cycle with non-vanishing w3 + [H] by a Dp-brane requires the presence of a D(p-2)-brane cancelling the anomaly. The D(p-2)-brane can end on an anti-Dp-brane, in which case charge conservation is lost. If there is no place for it to end, one has a semi-infinite brane with infinite mass, which is also problematic physically. Witten calls these branes baryons: these physically very dubious objects are not classified by K-theory.

  2. The non-vanishing of w3 + [H] forces one to generalize K-theory to twisted K-theory. This means a modification of the exterior derivative to obtain twisted de Rham cohomology and twisted K-theory, and the condition of closedness of a certain form in this cohomology becomes the condition guaranteeing the existence of the modified spinc structure (a short consistency check of the twisted differential is given after this list). D-branes act as sources of these fields and the coupling is completely analogous to that in electrodynamics. In the presence of the classical Neveu-Schwarz (NS-NS) 3-form field H associated with the background geometry the field strength Gp+1 = dCp is not gauge invariant anymore. One must replace the exterior derivative with its twisted version to obtain twisted de Rham cohomology:

    d→ d+ H∧ .

    There is a coupling between p- and p+2-forms, and the gauge symmetries must be modified accordingly. The fluxes of the twisted field strengths are not quantized, but one can return to the original p-forms, which are quantized. The coupling to external sources also becomes more complicated, and in the case of magnetic charges one obtains magnetically charged Dp-branes. A Dp-brane serves as a source for D(p-2)-branes.

    This kind of twisted cohomology is known by mathematicians as Deligne cohomology. At the level of homology this means that if branes of dimension p are present, then also branes of dimension p+2 are there and serve as sources of Dp-branes emanating from them, or perhaps identifiable as their sub-manifolds. Ordinary homology fails in this kind of situation, and the proposal is that so-called twisted K-theory could allow one to classify the brane charges.

  3. A Lagrangian formulation of brane dynamics based on the notion of p-brane democracy due to Peter Townsend has been developed by various authors.
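
A short consistency check of the twisted differential of item 2: since H is a 3-form and therefore of odd degree, one has for an arbitrary form ω

    (d + H∧)2 ω = d2ω + d(H∧ω) + H∧dω + H∧H∧ω = (dH)∧ω - H∧dω + H∧dω + 0 = (dH)∧ω ,

so the twisted differential is nilpotent, and the twisted cohomology is well defined, exactly when the NS-NS field strength H is closed.
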

Ashoke Sen has proposed a grand vision for understanding the brane classification in terms of tachyon condensation in the absence of the NS-NS field H. The basic observation is that stacks of space-filling D- and anti-D-branes are unstable against a process called tachyon condensation, which however means the fusion of p+1-dimensional brane orbits rather than of the p-dimensional time slices of branes. These branes are however accompanied by lower-dimensional branes, and the decay process cannot destroy these. Therefore the idea arises that suitable stacks of D9-branes and anti-D9-branes could code for all lower-dimensional brane configurations as the end products of the decay process.

This leads to the creation of lower-dimensional branes. All decay products of branes resulting from the decay cascade would be by definition equivalent. The basic step of the decay process is the fusion of the D-branes in a stack to a single brane. In bundle-theoretic language one can say that the D-branes and anti-D-branes in the stack fuse together to a single brane whose bundle fiber is the direct sum of the fibers in the stack. This fusion process for the branes of the stack would correspond to the sum operation in topological K-theory. The fusion of D-branes and anti-D-branes would give rise to nothing, since the fibers would have opposite signs. The classification would reduce to that for stacks of D9-branes and anti-D9-branes.

Problems with Hodge duality and S-duality

The K-theory classification is plagued by problems, not all of which need be merely technical.

  1. R-R fields are self-dual, and since the metric is involved in the mapping taking forms to their duals, one encounters a problem. The Chern characters appearing in K-theory are rational valued, but the presence of the metric implies that the Chern characters of the duals need not be rational valued. Hence K-theory must be replaced with something less demanding.

    The geometric quantization inspired proposal of Diaconescu, Moore and Witten is based on a polarization using only one half of the forms to get rid of the problem. This is like thinking of the 10-D space-time as a phase space and reducing it effectively to a 5-D space: this brings strongly to mind the identification of space-time surfaces as hyper-quaternionic (associative) sub-manifolds of the imbedding space with octonionic structure, and one can ask whether the basic objects also in M-theory should be taken as 5-dimensional if this line of thought is taken seriously. An alternative approach uses K-theory to classify the intersections of branes with a 9-D space-time slice, as has been proposed by Maldacena, Moore and Seiberg.

  2. There is another problem related to the classification of brane charges. Diaconescu, Moore and Witten have shown that there are also homology cycles which are unstable against decay, and this means that twisted K-theory is inconsistent with the S-duality of type IIB string theory. Also these cycles should be eliminated in an improved classification if one takes charge conservation as the basic condition, and a hitherto unknown modification of cohomology theory is needed.

  3. There is also the problem that K-theory for time slices classifies only the R-R field strengths. Also the R-R gauge potentials carry information, just as ordinary gauge potentials do, and this information is crucial in Chern-Simons type topological QFTs. K-theory for the entire target space classifies D-branes as p+1-dimensional objects, but in this case the classification of R-R field strengths is lost.

The existence of non-representable 7-D homology classes for target space dimension D>9

There is a further nasty problem which destroys the hopes that twisted K-theory could provide a satisfactory classification. Even worse, something might be wrong with superstring theory itself. The problem is that not all homology classes allow a representation as non-singular manifolds. The first dimension in which this happens is D=10, the dimension of superstring models! The situation is of course the same in M-theory. The existence of non-representable classes was demonstrated by Thom - the creator of catastrophe theory and of the cobordism theory of manifolds - a long time ago.

What happens is that there can exist 7-D cycles which allow only singular imbeddings. A good example would be the imbedding of the twistor space CP3, whose orbit would have a conical singularity at which CP3 would contract to a point at the "moment of big bang". Therefore homological classification not only allows but demands branes which are orbifolds. Should orbifolds be excluded as unphysical? If so, then homology gives too many branes, and the singular branes must be excluded by replacing homology with something else. Could twisted K-theory exclude non-representable branes as unstable ones by having non-vanishing w3 + [H]? The answer to the question is negative: there exist D6-branes with w3 + [H] = 0 for which the K-theory charges can be either vanishing or non-vanishing.

One can argue that non-representability is not a problem in superstring models (M-theory) since spontaneous compactification leads to M4× X6 (M4× X7). On the other hand, the Cartesian product topology is an approximation which is expected to fail at a high enough length scale resolution and near the big bang, so that one could encounter the problem. Most importantly, if M-theory is a theory of everything, it cannot contain beauty spots of this kind.

What could go wrong with super string theory and how TGD circumvents the problems?

As a proponent of TGD I cannot avoid the temptation to suggest that at least two things could go wrong in the fundamental physical assumptions of superstrings and M-theory.

  1. The basic failure would be the construction of quantum theory starting from a semiclassical approximation assuming the localization of the currents of the 10- or 11-dimensional theory to lower-dimensional sub-manifolds. What should have been a generalization of QFT obtained by replacing point-like particles with higher-dimensional objects would reduce to an approximation of 10- or 11-dimensional supergravity.

    This argument does not bite in TGD. 4-D space-time surfaces are indeed fundamental objects in TGD, as are also partonic 2-surfaces and braids. This role emerges from purely number-theoretic considerations inspiring the conjecture that space-time surfaces are associative sub-manifolds of the octonionic imbedding space, from the requirement of extended conformal invariance, and from the non-dynamical character of the imbedding space.

  2. The condition that all homology equivalence classes are representable as manifolds excludes all dimensions D > 9 and thus superstrings and M-theory as physical theories. This would be the case since branes are unavoidable in M-theory, as is also the landscape of compactifications. In the semiclassical supergravity interpretation this would not be a catastrophe, but if branes are fundamental objects this shortcoming is serious. If the condition of homological representability is accepted, then the target space must have dimension D < 10, and the sequence of arguments leading to D = 8 and TGD is rather short. The number-theoretical vision provides the mathematical justification for TGD as the unique outcome.

  3. The existence of spin structure is clearly the source of many problems related to the R-R forms. In the TGD framework the induction of the spinc structure of the imbedding space resolves all problems associated with sub-manifold spin structures. For some reason the notion of induced spinor structure has not gained attention in the superstring approach.

  4. A conservative experimental physicist might criticize the emergence of branes of various dimensions as something rather weird. In the TGD framework electric-magnetic duality can be understood in terms of general coordinate invariance and holography, and branes and their duals, having dimensions 2, 3, and 4, organize into sub-manifolds of space-time sheets. The TGD counterpart of the fundamental M2-brane is the light-like 3-surface. Its magnetic dual has a dimension given by the general formula pdual = D-p-4, where D is the dimension of the target space (the dual dimensions are tabulated in a small sketch after this list). In TGD one has D = 8, giving pdual = 2. The first interpretation is in terms of self-duality. A more plausible interpretation relies on the identification of the duals of light-like 3-surfaces as space-like 3-surfaces at the light-like boundaries of CD. General Coordinate Invariance in the strong sense implies this duality. For the partonic 2-surface one would have p = 2 and pdual = 3. The identification of the dual would be as the space-time surface. The crucial distinction from M-theory would be that branes of different dimensions would be sub-manifolds of the space-time surface.

  5. For p = 0 one would have pdual = 4, assigning a five-dimensional surface to the orbits of point-like particles, identifiable most naturally as braid strands. One cannot assign to it any direct physical meaning in the TGD framework, and gauge invariance for the analogs of the brane gauge potentials indeed excludes even-dimensional branes in TGD, since the corresponding forms are proportional to the Kähler gauge potential (so that they would be analogous to the odd-dimensional branes allowed by type IIB superstrings).

    4-branes could however be mathematically useful by allowing one to define a Morse theory for the critical points of the Minkowskian part of the Kähler action. While writing this I learned that Witten has proposed a 4-D gauge theory approach with N=4 SUSY to the classification of knots. Witten also ends up with a Morse theory using 5-D space-times in the category-theoretical formulation of the theory. Some time ago I also proposed that TGD as an almost topological QFT defines a theory of knots, knot braidings, and of 2-knots in terms of string world sheets. Maybe the 4-branes could be useful for understanding the extrema of the Minkowskian part of the Kähler action, which would take the same role as the Hamiltonian in Floer homology: the extrema of the 5-D brane action would connect these extrema.

  6. Light-like 3-surfaces could be seen as the analogs of von Neumann branes, for which the boundary conditions state that the ends of the space-like 3-brane defined by the partonic 2-surfaces move with light velocity. The interpretation of partonic 2-surfaces as space-like branes at the ends of CD would in turn make them D-branes, so that one would have a duality between the D-brane and N-brane interpretations. T-duality exchanges von Neumann and Dirichlet boundary conditions, so that the strong form of general coordinate invariance would correspond to both electric-magnetic duality and T-duality in the TGD framework. Note that T-duality exchanges type IIA and type IIB superstrings with each other.

  7. What about causal diamonds and their 7-D light-like boundaries? Could one regard the light-like boundaries of CDs as analogs of 6-branes, with the light-like direction defining the time-like direction, so that space-time surfaces would be seen as 3-branes connecting them? This brane would not have a magnetic dual, since the formula for the dimensions of a brane and its magnetic dual allows a positive brane dimension p only in the range (1,3).
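
A trivial arithmetic check of the duality formula pdual = D - p - 4 used in items 4-7 (only the formula itself is assumed; the Python snippet below just tabulates it):

    # Magnetic dual of a p-brane in a D-dimensional target space: p_dual = D - p - 4.
    def dual_dimension(p, D):
        return D - p - 4

    for D in (8, 10, 11):
        print(D, [(p, dual_dimension(p, D)) for p in range(0, D - 3)])
    # D = 8 (TGD): p = 2 <-> 2 (self-dual case), p = 0 <-> 4, p = 3 <-> 1
    # D = 11     : p = 2 <-> 5, i.e. the M2-brane is dual to the M5-brane
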

Can one identify the counterparts of R-R and NS-NS fields in TGD?

R-R and NS-NS 3-forms clearly play a fundamental role in M-theory. Since in TGD partonic 2-surfaces define the analogs of the fundamental M2-branes, one can wonder whether these 3-forms could have TGD counterparts.

  1. In the TGD framework the 3-forms G3,A = dC2,A, defined as the exterior derivatives of the two-forms C2,A = HAJ obtained as products of the Hamiltonians HA of δM4+/-× CP2 with the Kähler forms of the factors of δM4+/-× CP2, define an infinite family of closed 3-forms belonging to various irreducible representations of the rotation group and the color group. One can consider also the algebra generated by the products HA A, HAJ, HA A∧ J, HA J∧ J, where A resp. J denotes the Kähler gauge potential resp. Kähler form of either δM4+/- or CP2. Also the sum of the Kähler potentials resp. forms of δM4+/- and CP2 can be considered.

  2. One can define the counterparts of the fluxes ∫ A dx as fluxes of HA A over braid strands, of HAJ over partonic 2-surfaces and string world sheets, of HA A∧ J over 3-surfaces, and of HAJ∧ J over space-time sheets. Gauge invariance however suggests that for non-constant Hamiltonians one must exclude the fluxes assigned to odd-dimensional surfaces, so that only odd-dimensional branes would be allowed. This would exclude 0-branes and the problematic 4-branes. These fluxes should be quantized at the critical values of the Minkowskian contributions and at the maxima with respect to zero modes of the Euclidian contributions to the Kähler action. The interpretation would be in terms of a Morse function and a Kähler function if the proposed conjecture holds true. One could even hope that the charges in the Cartan algebra are quantized for all preferred extremals and define charges in these irreducible representations for the isometry algebra of WCW. The quantization of the electric fluxes for string world sheets would give rise to the familiar quantization of the rotation ∫ E dl of the electric field over a loop in the time direction, taking place in superconductivity.

  3. Should one interpret these fluxes as the analogs of NS-NS fluxes or of R-R fluxes? The exterior derivatives of the forms G3 vanish, which is the analog of the vanishing of magnetic charge densities (it is however possible to have the analogs of homological magnetic charges). The self-duality of the Ramond p-forms could be posed formally (Gp = *G8-p), but it has no implications for p < 4, since the space-time projection of the dual form, having degree larger than four, vanishes identically in this case. For p = 4 the dual of the instanton density J∧ J is proportional to the volume form of M4 and is not of topological interest. The approach of Witten eliminating one half of the self-dual R-R fluxes would mean that only the series of fluxes discussed above needs to be considered, so that one would have no trouble with non-rational values of the fluxes nor with the lack of higher-dimensional objects assignable to them. An interesting question is whether the fluxes could define some kind of K-theory invariants.

  4. In TGD the imbedding space is non-dynamical, and there seems to be no counterpart for the NS 3-form field H = dB. The only natural candidate would correspond to the Hamiltonian B = J, giving H = dB = 0. At the quantum level this might be understood in terms of bosonic emergence, meaning that only Ramond representations for fermions are needed in the theory since bosons correspond to wormhole contacts with a fermion and an antifermion at the opposite throats. Therefore twisted cohomology is not needed, there is no need to introduce the analog of brane democracy, and 4-D space-time surfaces containing the analogs of lower-dimensional branes as sub-manifolds are enough. The fluxes of these forms over partonic 2-surfaces and string world sheets define non-abelian analogs of ordinary gauge fluxes reducing to rotations of vector potentials and are suggested to be crucial for understanding the braidings of knots and 2-knots in the TGD framework. Note also that the unique dimension D=4 for space-time makes 4-D space-time surfaces homologically self-dual, so that only they are needed.

Could one divide bundles?

TGD differs from string models in one important aspect: stringy diagrams do not have an interpretation as analogs of the vertices of Feynman diagrams: the stringy decay of a partonic 2-surface into two pieces does not represent particle decay but the propagation of the incoming particle along different paths. Particle reactions are in turn described by the vertices of generalized Feynman diagrams, in which the ends of the incoming and outgoing particles meet along a partonic 2-surface. This suggests a generalization of K-theory for the bundles assignable to partonic 2-surfaces. It is good to start with a guess for the concrete geometric realization of the sum and product of bundles in the TGD framework.

  1. The analogs of string diagrams could represent the analog of the direct sum. The difference between bundles could be defined geometrically in terms of the trouser vertex A+B→ C: B would by definition represent C-A. The direct sum could make sense for single-particle states and have as its space-time correlate the conservation of braid strands.

  2. A possible concretization of the tensor product in the TGD framework is in terms of the vertices of generalized Feynman diagrams, at which the incoming light-like 3-D orbits of partons meet along their ends. The tensor product of the incoming state spaces defined by fermionic oscillator algebras is formed naturally. Also the tensor product would now have the conservation of braid strands as a space-time correlate. This does not mean that the number of braid strands is conserved in reactions, since also particle exchanges can carry the braid strands of the particles coming to the vertex.

Why not also define the division of bundles in terms of the tensor product? In terms of the 3-vertex of generalized Feynman diagrams representing the tensor product A⊗B = C, B would by definition be C/A. Therefore TGD would extend the K-theory algebra by introducing also division as a natural operation, necessitated by the presence of the join-along-ends vertices not present in string theory. I would be surprised if some mathematician had not already published the idea in some exotic journal. Below I present an argument that this notion could also be applied in the mathematical description of finite measurement resolution in the TGD framework using inclusions of hyper-finite factors. Division could make possible a rigorous definition of non-commutative quantum spaces.
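
A minimal sketch of the proposed multiplicative bookkeeping, directly parallel to the additive pair construction discussed earlier (Python; again only fiber dimensions are tracked, so this illustrates the arithmetic of the proposal rather than actual bundle theory):

    # Formal "fraction" of bundles: the pair (C, A) stands for C/A defined via the
    # 3-vertex A (x) B = C, with (C, A) ~ (C (x) G, A (x) G), just as a rational
    # number is an equivalence class of integer pairs. Only dimensions are tracked.
    from math import gcd

    class BundleFraction:
        def __init__(self, dim_C, dim_A):
            g = gcd(dim_C, dim_A)
            self.num, self.den = dim_C // g, dim_A // g    # reduced representative

        def __mul__(self, other):                          # tensor product of fractions
            return BundleFraction(self.num * other.num, self.den * other.den)

        def __repr__(self):
            return f"{self.num}/{self.den}"

    C_over_A = BundleFraction(6, 4)                        # "quantum dimension" 3/2
    print(C_over_A, C_over_A * BundleFraction(4, 6))       # 3/2 and 1/1 (the unit bundle)
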

Tensor division could also have other natural applications in the TGD framework.

  1. One could assign bundles M+ and M- to the upper and lower light-like boundaries of CD. The bundle M+/M- would be obtained by formally identifying the upper and lower light-like boundaries. More generally, one could assign to the boundaries of CD the positive and negative energy parts of WCW spinor fields and the corresponding bundle structures in "half WCW". Zero energy states could be seen as sections of the unit bundle, just as infinite rationals reducing to real units as real numbers would represent zero energy states.

  2. Finite measurement resolution would encourage tensor division, since finite measurement resolution essentially means the loss of information about everything below the measurement resolution, represented as a tensor product factor. The notion of the coset space formed by a hyper-finite factor and an included factor could be understood in terms of tensor division and would give rise to a quantum group like space with fractional quantum dimension in the case of Jones inclusions. Finite measurement resolution would therefore define an infinite hierarchy of finite-dimensional non-commutative spaces characterized by fractional quantum dimensions. In this case the notion of tensor product would be somewhat more delicate, since complex numbers are effectively replaced by the included algebra, whose action creates states not distinguishable from each other. The action of algebra elements on the state |B> in the inner product <A|B> must be equivalent with the action of their hermitian conjugates on the state <A|. Note that zero energy states are in question, so that the included algebra always generates modifications of a state which keep it a zero energy state.

For details see the new chapter Motives and Infinite Primes of "Physics as a Generalized Number Theory" or the article with the same title.

Wednesday, June 22, 2011

How detailed can the quantum classical correspondence be?

Can the dynamics defined by the preferred extremals of Kähler action be dissipative in some sense? The generation of the arrow of time has a nice realization in zero energy ontology as a choice of well-defined particle numbers and other quantum numbers at the "lower" end of CD. By quantum classical correspondence this should have a space-time correlate. Gradient dynamics is a highly phenomenological realization of dissipative dynamics, and one must try to identify a microscopic variant of dissipation in terms of entropy growth of some kind. If the arrow of time and dissipation have a space-time correlate, there are hopes of identifying it.

Quantum classical correspondence has been perhaps the most useful guiding principle in the construction of quantum TGD. What it says is that not only quantum numbers but also quantum jump sequences should have space-time correlates: the failure of the strict determinism of Kähler action gives good hopes of this. Even quantum superpositions - at least in certain situations - might have space-time correlates.

  1. The measurement interaction term in the modified Dirac action at the upper end of CD indeed defines a coupling to the classical dynamics in a very delicate manner. This kind of measurement interaction is a basic element of quantum TGD. Also the color charges and angular momentum associated with the Hamiltonians at the points of braids could couple to the dynamics via the boundary conditions.

  2. The braid strand with a given Hamiltonian could obey Hamiltonian equations of motion: this would give rise to a skeleton of space-time defined by braid strands, possibly continued to string world sheets, and would provide a different realization of quantum classical correspondence. The symplectic triangulation suggested by the symplectic QFT proposed to describe the physics in zero modes would add to the skeleton edges connecting string ends, continued to 2-D sheets in the interior of space-time.

  3. Quantum TGD can be regarded as a square root of thermodynamics in a well-defined sense. Could it be possible to couple the Hermitian square root of the density matrix, appearing in the M-matrix and characterizing the zero energy state thermally, to the geometry of space-time sheets by coupling it to the classical dynamics via boundary conditions depending on its eigenvalues? The necessity to choose a single eigenvalue spoils the attempt, and one obtains only a representation of a single measurement outcome. It seems that one can achieve only a representation of the ensemble at the space-time level, consisting of space-time sheets representing the various outcomes of the measurement. This ensemble would be realized as an ensemble of sub-CDs for a given CD.

  4. One can pose an even more ambitious question: could the quantum superposition of WCW spinor fields have a space-time correlate in the sense that all space-time surfaces in the superposition would carry information about the superposition itself? Obviously this would mean self-referentiality via quantum-classical feedback.

The following discussion concentrates on possible space-time correlates for the quantum superposition of WCW spinor fields and for the arrow of time.

  1. It seems difficult to imagine a space-time correlate for the quantum superposition of final states with varying quantum numbers, since these states correspond to quantum superpositions of different space-time surfaces. How could one code information about the quantum superposition of space-time surfaces into the space-time surfaces appearing in the superposition? This kind of self-referentiality seems to be necessary if one requires that the various quantum numbers characterizing the superposition (say momentum) couple via boundary conditions to the space-time dynamics.

  2. The non-determinism of quantum dynamics is behind dissipation, and strict determinism fails for Kähler action. This gives hopes that the classical dynamics also induces an arrow of time. Energy non-conservation is of course excluded, and one should be able to identify a measure of entropy and the analog of the second law of thermodynamics telling what happens for preferred extremals when the situation becomes non-deterministic. The vertices of generalized Feynman graphs are natural places where non-determinism emerges, as are also sub-CDs. Naive physical intuition suggests that dissipation means generation of entropy: the vertices would favor the decay of particles rather than their spontaneous assembly. The analog of black hole entropy assignable to partonic 2-surfaces might allow one to characterize this quantitatively. The symplectic area of the partonic 2-surface could be a symplectic invariant of this kind.

  3. Could the mysterious branching of partonic 2-surfaces - obviously analogous to the even more mysterious branching of the quantum state in the many-worlds interpretation of quantum mechanics - assigned to the multivalued character of the correspondence between canonical momentum densities and time derivatives of the imbedding space coordinates allow one to understand how the arrow of time is represented at the space-time level? Recall that this branching is what implies the effective hierarchy of Planck constants as integer multiples of its minimal value, which is absolutely crucial for the applications of TGD to biology and consciousness and to the understanding of dark matter as large hbar phases.

    1. This branching would effectively replace CD with its singular covering, with the number of branches depending on the space-time region. The relative homology with respect to the upper boundary of CD (so that the branches of the trees would effectively meet there) could define the analog of Floer homology, with the various paths defined by the orbits of partonic 2-surfaces along the lines of the generalized Feynman diagram defining the first homology group. Typically tree-like structures would be involved, with the ends of the tree at the upper boundary of CD effectively identified.

    2. This branching could serve as a representation for the branching of the quantum state into a superposition of eigenstates of the measured quantum observables. If this is the case, the various branches into which the partonic 2-surface decays would more or less relate to the quantum superposition of final states in a particle reaction. The number of branches would be finite by finite measurement resolution. For a given choice of the arrow of geometric time the partonic surfaces would not fuse back at the upper end of CD.

    3. Rather paradoxically, the space-time correlate of dissipation would reduce dissipation by increasing the effective value of hbar: the interpretation would however be in terms of dark matter identified as a large hbar phase. In the same manner dissipation would be accompanied by evolution, since the increase of hbar naturally implies the formation of macroscopically quantum coherent states. The space-time representation of dissipation would compensate the increase of entropy at the ensemble level.

    4. The geometric representation of quantum superposition might take place only in the intersection of the real and p-adic worlds and have an interpretation in terms of cognitive representations. In the intersection one can also have a generalization of the second law in which the generation of genuine negentropy in some space-time regions via the build-up of cognitive representations is compensated by the generation of entropy in other space-time regions. The entropy generating behavior of living matter conforms with this modification of the second law. The negentropy measure in question relies on the replacement of the logarithms of probabilities with the logarithms of their p-adic norms; it works for rational probabilities and also for their algebraic variants in finite-dimensional algebraic extensions of rationals (a small numerical illustration is given after this list).

    5. Each state in the superposition of WCW quantum states would contain this representation as its space-time correlate, realizing self-referentiality at the quantum level in the intersection of the real and p-adic worlds. Also the state function reduced members of the ensemble could contain this cognitive representation at the space-time level. Essentially a quantum memory making possible a self-referential linguistic representation of the quantum state in terms of space-time geometry and topology would be in question. The formulas written by mathematicians would define a similar map from the quantum level to the space-time level, making it possible to "see" one's thoughts.
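
The number-theoretic negentropy mentioned in item 4 is easy to illustrate numerically. In the sketch below (Python) the Shannon formula is kept but log(pn) is replaced by the logarithm of the p-adic norm of pn; the probabilities and the prime are arbitrary illustrative choices, not data from any TGD state.

    # p-adic (number-theoretic) entropy for rational probabilities: replace
    # log(p_n) by log(|p_n|_p). Illustrative probabilities and prime only.
    from fractions import Fraction
    from math import log

    def padic_norm(q, p):
        """|q|_p = p^(-k), where q = p^k * m/n with m, n coprime to p (q != 0)."""
        k, num, den = 0, q.numerator, q.denominator
        while num % p == 0:
            num //= p; k += 1
        while den % p == 0:
            den //= p; k -= 1
        return float(p) ** (-k)

    def number_theoretic_entropy(probs, p):
        return -sum(float(q) * log(padic_norm(q, p)) for q in probs)

    probs = [Fraction(1, 2), Fraction(1, 2)]
    print(number_theoretic_entropy(probs, 2))              # -log 2 < 0: genuine negentropy
    print(-sum(float(q) * log(float(q)) for q in probs))   # ordinary Shannon entropy +log 2
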

For more details see the new chapter Infinite Primes and Motives of "Physics as a Generalized Number Theory" or the article with the same title.

Floer homology and TGD

TGD can be seen as an almost topological quantum field theory. This could have served as a motivation for spending most of the last few months on the attempt to learn some of the mathematics related to various kinds of homologies and cohomologies. The decisive stimulus came from the attempt to understand the basic ideas of motivic cohomology. I am not a specialist and do not have any ambition or ability to become one. My goal is to see whether these ideas could be applied in quantum TGD.

Documentation is the best way to develop ideas, and the learning process has materialized as a new chapter entitled Infinite Primes and Motives of "Physics as a Generalized Number Theory". It soon became clear that much of the mathematics needed by TGD has existed for decades and is developing all the time. The difficult task is to understand the essentials of this mathematics and translate them into the language that I speak and understand. Also generalization is unavoidable. Those who think that new physics can be done by taking the math as such are wasting their time.

Among other things I have been learning about various cohomologies and homologies - about quantum cohomology, about Floer homology and topological string theories, about Gromov-Witten invariants, ... It would be very naive to think that these notions would work as such in the TGD framework. It looks however very plausible that their generalizations to TGD exist and could be very useful in the more detailed formulation of quantum TGD. The crucially important notion is finite measurement resolution, which makes everything almost topological and highly number theoretic. In this brain-stormy spirit I have even become the proud father of my own pet homology, which I have christened braided Galois homology. It is based on the correspondence between infinite primes and polynomials of several variables and is formulated in braided group algebras, with the braidings realized as symplectic flows. It generalizes somewhat the usual notion of homology in the sense that the square of the boundary operation gives something in the commutator group, reducing to the unit element of ordinary homology only in the factor group obtained by dividing by the commutator group.

Floer homology in its original form replaces Morse theory in a symplectic manifold M with Morse theory in the loop space LM of M. The loops can be seen as homotopies of Hamiltonians, and the paths in loop space describe cylinders in M. With an appropriate choice of the symplectic action these cylinders can be regarded as (pseudo-)holomorphic surfaces completely analogous to string orbits. By combining Floer's theory with Witten's discovery of the connection between Morse theory and supersymmetry one ends up with topological QFTs as a manner to formulate Floer homology and various variants of this notion - in particular topological QFTs characterizing the topology of three-manifolds.

Learning periods of this kind are as a rule very useful, since they allow one to improve the bird's eye view of TGD and its problems. The understanding of both quantum TGD and its classical counterpart is still far from comprehensive.

For instance, the view about the physical and mathematical roles of the Kähler action in Euclidian and Minkowskian space-time regions is far from clear. Do they provide dual descriptions as suggested, or are both needed? The Kähler action for a preferred extremal in Euclidian regions naturally defines a positive definite Kähler function. But can one regard the Kähler action in Minkowskian regions as an equivalent definition of the Kähler function, or should one regard it as imaginary, as the presence of the square root of the metric determinant would suggest? What could the interpretation be in this case? The basic ideas of Floer homology suggest an answer to these questions.

  1. Since the quantum fluctuating WCW degrees of freedom correspond to a symmetric space assignable to the symplectic group in the TGD framework, symplectic geometry is of special interest from the TGD point of view. Floer homology is indeed about symplectic geometry, as are also Gromov-Witten invariants and the topological string theories developed for the purpose of calculating these invariants. Hence the question whether Floer homology could have a generalization to the TGD framework is highly relevant.

  2. As such Floer homology for M4× CP2 is deadly boring since it reduces to ordinary singular homology. The correspondence between canonical momentum densities of Kähler action and time derivatives of imbedding space coordinates is however one-to-many and inspires the replacement of the imbedding space with its singular covering, with different space-time regions corresponding to different numbers of sheets of the covering. The effective hierarchy of Planck constants emerges as a result. The homology in WCW could be mapped to the homology of this structure just as the homology of the loop space of M is mapped to that of M in Floer theory.

  3. The obvious question is how to generalize Floer homology to the TGD framework, and the obvious guess is that Kähler action for preferred extremals must take the role of the symplectic action, with pseudo-holomorphic surfaces replaced by hyper-quaternionic space-time surfaces containing string world sheets whose ends define braid strands carrying quantum numbers and which intersect partonic 2-surfaces at the future and past light-like boundaries of CDs. This actually suggests an obvious generalization of quantum cohomology based on a quantal notion of intersection: partonic 2-surfaces intersect if there exists a string world sheet connecting them. The fuzzy intersection has an interpretation in terms of causal dependence: by effective 2-dimensionality this causal dependence is along light-like 3-surfaces and along space-like 3-surfaces at the boundaries of CDs. The notion of quantum intersection is so beautiful that one can almost forgive the theoreticians who have begun to take seriously the idea about branes connected by strings.

  4. The question providing the new insight is simple. Could the Kähler function allow one to define a Morse theory? The answer is negative. The Kähler metric must be positive definite, so that the Hessian associated with it in quantum fluctuating degrees of freedom must have positive signature: no saddle points are possible in quantum fluctuating degrees of freedom, although in zero modes they are allowed. A second counter-argument is that quantum Morse theory is based on a path integral rather than a functional integral.

    How could one circumvent this difficulty? Could Kähler action in Minkowskian regions - naturally imaginary due to the negative sign of the metric determinant - give an imaginary contribution to the vacuum functional and define a Morse function, so that both Kähler function and Morse function would find a prominent role in the world order of TGD? Maybe! The presence of both Kähler function and Morse function in the vacuum functional would give a much more direct connection with the path integral approach, and the Kähler function would also make the path integral well-defined, since one integrates only over preferred extremals of Kähler action, for which the Kähler action reduces to a Chern-Simons term coming from the Minkowskian region and a contribution from the Euclidian region (generalized Feynman graph).
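Schematically - and only as a hedged restatement of the proposal just made, not a derivation - the vacuum functional would then take the form

$$ \Omega[X^4] \;\sim\; \exp\!\big(K[X^4] + i\,S_M[X^4]\big), \qquad K = S_K\big|_{\rm Euclidian}, \quad S_M = S_K\big|_{\rm Minkowskian}, $$

with the real Kähler function K coming from the Euclidian regions and the imaginary Morse-like contribution S_M from the Minkowskian regions, both reducing to Chern-Simons terms for preferred extremals.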

Should one assume that the reduction to Chern-Simons terms occurs for the preferred extremals in both Minkowskian and Euclidian regions or only in Minkowskian regions?

  1. All the arguments represented for this so far apply to Minkowskian regions and involve a local light-like momentum direction which does not make sense in the Euclidian regions. This does not however kill the argument: one can have non-trivial solutions of the Laplacian equation in the region of CP2 bounded by wormhole throats: for CP2 itself only the covariantly constant right-handed neutrino represents this kind of solution and at the same time a supersymmetry. In the general case solutions of the Laplacian represent broken supersymmetries and should be in one-to-one correspondence with the solutions of the modified Dirac equation. The interpretation for the counterparts of momentum and polarization would be in terms of a classical representation of color quantum numbers.

    If the reduction occurs in Euclidian regions, it gives in the case of CP2 two 3-D terms corresponding to the two 3-D gluing regions for the three coordinate patches needed to define coordinates and spinor connection for CP2, so that one would have two Chern-Simons terms. Without any other contributions the first term would be identical with that from the Minkowskian region apart from the imaginary unit. The second Chern-Simons term would however be independent of this. For wormhole contacts the two terms could be assigned with the opposite wormhole throats and would be identical with their Minkowskian cousins apart from the imaginary unit. This looks a little bit strange.

  2. There is however a very delicate issue involved. Quantum classical correspondence requires that the quantum numbers of partonic states must be coded into the space-time geometry, and this is achieved by adding to the action a measurement interaction term which reduces to what is almost a gauge term, present only in the Chern-Simons-Dirac equation but not in the space-time interior. This term would represent a coupling to Poincare quantum numbers at the Minkowskian side and to color and electro-weak quantum numbers at the CP2 side. Therefore the net Chern-Simons contributions would be different.

  3. There is also a very beautiful argument stating that the Dirac determinant for Chern-Simons-Dirac action equals the Kähler function, which would be lost if Euclidian regions did not obey holography. The argument obviously generalizes and applies to both the Morse function and the Kähler function.
In any case, it is still too early to give up the possibility that these two parts of Kähler action (the real and positive definite one, and the imaginary one) provide dual descriptions as functional integral and path integral: Wick rotation is what comes to mind. Certainly, the rigorous definition of the path integral would be as difficult - should one say hopeless - as in ordinary QFT.

Floer homology and Gromov-Witten invariants provide also other insights about quantum TGD. For more details see the new chapter Infinite Primes and Motives or the article with the same title.

Thursday, June 16, 2011

Black holes at LHC? Or maybe just scaled up bottonium?

The latest posting by Tommaso Dorigo has a rather provocative title: The Plot Of The Week - A Black Hole Candidate. Some theories inspired by string theory predict micro black holes at LHC. Micro black holes have been proposed as an explanation for certain exotic cosmic ray events such as Centauros, which however seem to have a standard physics explanation.

Without being a specialist one could expect that an evaporating black hole would be in many respects analogous to a quark gluon plasma phase decaying to elementary particles and producing jets. Or to any particle like system which has forgotten all information about the colliding particles which created it - say the information about the scattering plane of the partons leading to the jets in the final state, reflected in the coplanarity of the jets. If the information about the initial state is lost, one would expect a more or less spherical jet distribution. The variable used in the study is the sum of transverse energies of jets emerging from the same point and having at least 50 GeV of transverse energy. QCD predicts that this kind of events should be rather scarce, and if they are present, one can seriously consider the possibility of new physics.
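As a toy illustration of the kind of event variable described above (the precise definition and object selection are those of the CMS paper quoted below; the function and the numbers here are invented purely for illustration):

```python
def s_t(jet_transverse_energies_gev, threshold_gev=50.0):
    """Toy version of the event variable: scalar sum of the transverse
    energies of jets above the quoted 50 GeV threshold."""
    return sum(et for et in jet_transverse_energies_gev if et >= threshold_gev)

# A hypothetical 10-jet event with roughly 1 TeV of total transverse energy.
example_event = [210, 180, 150, 120, 95, 80, 70, 60, 55, 50]  # GeV, invented numbers
print(s_t(example_event))  # 1070 GeV
```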

The LHC document containing the sensational proposal is titled Search for Black Holes in pp collisions at sqrt(s) = 7 TeV and has the following abstract:

An update on a search for microscopic black hole production in pp collisions at a center-of-mass energy of 7 TeV by the CMS experiment at the LHC is presented using a 2011 data sample corresponding to an integrated luminosity of 190 pb−1. This corresponds to a six-fold increase in statistics compared to the original search based on 2010 data. Events with large total transverse energy have been analyzed for the presence of multiple energetic jets, leptons, and photons, typical of a signal from an evaporating black hole. A good agreement with the expected standard model backgrounds, dominated by QCD multijet production, has been observed for various multiplicities of the final state. Stringent model-independent limits on new physics production in high-multiplicity energetic final states have been set, along with model-specific limits on semi-classical black hole masses in the 4-5 TeV range for a variety of model parameters. This update extends substantially the sensitivity of the 2010 analysis.

The abstract would suggest that nothing special has been found, but in sharp contrast with this the article mentions a black hole candidate decaying to 10 jets with total transverse energy ST. The event is illustrated in figure 3 of the article. The large number of jets emanating from a single point would suggest a single object decaying and producing the jets.

Personally I cannot take black holes seriously as an explanation of the event. What can I offer instead? p-Adic mass calculations rely on p-adic thermodynamics, and this inspires obvious questions. What could p-adic cooling and heating processes mean? Can one speak about p-adic hot spots? What could p-adic over-heating and over-cooling mean? Could the octaves of pions and possibly other mesons explaining several anomalous findings, including the CDF bump, correspond to unstable over-heated hadrons for which the p-adic prime near a power of two is smaller than normally, so that the p-adic mass scale is correspondingly scaled up by a power of two?

The best manner to learn is by excluding various alternative explanations for the 10 jet event.

  1. M89 variants of QCD jets are excluded both because their production requires higher energies and because their number would be small. The first QCD three-jets were observed around 1979: a q-qbar-g three-jet was in question, detected in an e+ e- collision with cm energy of about 7 GeV. Naive scaling by a factor 512 would suggest that something like 5.6 TeV of cm energy is needed to observe M89 parton jets. The recent energy is 7 TeV, so that there are hopes of observing M89 three-jets in decays of heavy M89 states. For instance, the decays of charmonium and bottonium of M89 physics to three gluons or to two gluons and a photon would create three-jets.

  2. Ordinary quark gluon plasma is excluded since in a sufficiently large volume of quark gluon plasma so called jet quenching occurs, so that jets have small transverse energies. This would be due to the dissipation of energy in the dense quark gluon plasma. Also ordinary QCD jets are predicted to be rare at these transverse energies: this is of course the very idea of how black hole evaporation might be observed. Creation of a quark gluon plasma of M89 hadron physics cannot be in question since ordinary quark gluon plasma was created in p-anti-p collisions with cm energy of a few TeV, so that something like 512 TeV of cm energy might be needed!

  3. Could the decay correspond to a decay of a blob of M89 hadronic phase to M107 hadrons? How could this process take place? I proposed about 15 years ago (see this) that the transition from M89 hadron physics to M107 hadron physics might take place by p-adic cooling via a cascade-like process through highly unstable intermediate hadron physics. The p-adic temperature is quantized and given by Tp = n/log(p) ≈ n/(k log(2)) for p ≈ 2^k, and the p-adic cooling process would proceed in a step-wise manner as k → k+2 → k+4 → ... Also k → k+1 → k+2 → ..., with the mass scale reduced in powers of the square root of 2, can be considered. If only octaves are allowed, the p-adic mass scale characterizing the hadronic space-time sheets and quarks could decrease in nine steps from the M89 mass scale proportional to 2^(-89/2) octave by octave down to the ordinary hadronic mass scale proportional to 2^(-107/2), as k = 89 → 91 → 93 → ... → 107 (the scaling arithmetic is spelled out in the sketch after this list). At each step the mass in the propagator of the particle would change. In particular, on mass shell particles would become off mass shell particles which could decay.

    At the quark level the cooling process would naturally stop when the value of k corresponds to that characterizing the quark. For instance, for the b quark one has k(b)=103, so that 7 steps would be involved. This would mean the decay of M89 hadrons to highly unstable intermediate states corresponding to k=91,93,...,107. At every step states almost at rest could be produced, and the final decay would produce a large number of jets, so that the outcome would resemble the spectrum of black hole evaporation. Note that for u, d, s quarks one has k=113, characterizing also nuclei and the muon, which would mean that the valence quark space-time sheets of the lightest hadrons would be cooler than the hadronic space-time sheet, which could be heated by sea partons. Note also that a quantum superposition of phases with several p-adic temperatures can be considered in zero energy ontology.

    This is of course just a proposal and might not be the real mechanism. If M89 hadrons are dark in the TGD sense, as the TGD based explanation of the CDF-D0 discrepancy suggests, also a transformation changing the value of Planck constant is involved.

  4. This picture does not make sense in the model explaining the DAMA observations and the DAMA-Xenon100 anomaly, the CDF bump (see this) and the two-and-a-half-year-old CDF anomaly (see this). The model involves the creation of the second octave of the M89 pion decaying in a stepwise manner. A natural interpretation of the p-adic octaves of pions is in terms of the creation of an over-heated unstable hadronic space-time sheet having k=85 instead of k=89, p-adically cooling down to the relatively thermally stable M89 sheet and containing light mesons and electroweak bosons. If so, then the production of the CDF bump would correspond to the creation of a hadronic space-time sheet with p-adic temperature corresponding to k=85, cooling by the decay to k=87 pions in turn decaying to k=89 pions. After this the decay to M107 hadrons and other particles would take place.
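The arithmetic behind the cascade of point 3 is simple enough to spell out. A minimal sketch, assuming only the p-adic rule that the mass scale is proportional to 2^(-k/2), so that each step k → k+2 halves the mass scale:

```python
# Ratio of p-adic mass scales for primes p ~ 2^k_from and p ~ 2^k_to:
# m(k_to)/m(k_from) = 2^((k_from - k_to)/2).
def mass_scale_ratio(k_from, k_to):
    return 2 ** ((k_from - k_to) / 2)

print(mass_scale_ratio(107, 89))        # 512.0: the M_89 scale is 512 times the M_107 scale

# Stepwise cooling k = 89 -> 91 -> ... -> 107: mass scale relative to the M_89 scale.
for k in range(89, 109, 2):
    print(k, mass_scale_ratio(89, k))   # 1, 1/2, 1/4, ..., 1/512
```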

Consider now whether the 10 jet event could be understood as the creation of a p-adic hot spot, perhaps assignable to some heavy meson of M89 physics. For current quarks the p-adic primes can be much larger, so that in the case of u and d quarks the masses can be in the 10 MeV range (which together with the detailed model for light hadrons supports the view that quarks can appear at several p-adic temperatures).

  1. According to p-adic mass calculations the ordinary charmed quark corresponds to k=104=107-3 and the bottom quark to k=103=107-4, which is prime and corresponds to the second octave of the M107 mass scale assignable to the highest state of the pion cascade. By naive scaling, M89 charmonium states (Ψ) would correspond to k=89-3=86 with a mass of about 1.55 TeV by direct scaling. k=89-4=85 would give a mass of about 3.1 TeV, and there is slight evidence for a resonance around 3.3 TeV perhaps identifiable as charmonium. Υ (bottonium), consisting of a bbar pair, corresponds to k=89-4=85 just like the second octave of the M89 pion. The mass of the M89 Υ meson would be about 4.8 TeV for k=85. For k=83 one obtains 9.6 TeV, which exceeds the total cm energy of 7 TeV. (The naive scaling rule behind these estimates is illustrated in the numerical sketch after this list.)

  2. Intriguingly, k=85 for the bottom quark and for the first octave of charmonium would correspond to the second octave of the M89 pion. Could it be that the hadronic space-time sheet of Υ is heated to the p-adic temperature of the bottom quark and then cools down in a stepwise manner? If so, the decay of Υ could proceed via decays to higher octaves of light M89 mesons in a process involving two steps and could produce a large number of jets.

  3. For the decay of the ordinary Υ meson, 81.7 per cent of the decays take place via the ggg state. In the recent case these would create three M89 parton jets producing relativistic M89 hadrons. 2.2 per cent of the decays take place via the γgg state producing a virtual photon plus M89 hadrons. The total energies of the three jets would be about 1.6 TeV each, much higher than the energies of QCD jets, so that this kind of jets would serve as a clear-cut signature of M89 hadron physics and its bottom quark. Note that there already exists slight evidence for the charmonium state. Recall that the total transverse energy of the 10 jet event was about 1 TeV.

    Also direct decays to M89 hadrons take place. η' + anything - presumably favored by the large contribution of the bbar state in η' - corresponds to a 2.9 per cent branching ratio for ordinary hadrons. If second octaves of η' and other hadrons appear in the hadronic state, the decay products could be nearly at rest and a large number of M89 hadrons would result in the p-adic cooling process (the naive scaling of the η' mass gives .5 TeV and the second octave would correspond to 2 TeV).

  4. If two-octave p-adic over-heating is dynamically favored, one must also consider the first octave of the scaled variant of the J/Ψ state with mass 3.1 GeV scaled up to 3.1 TeV for the first octave. The dominating hadronic final state in the decay of J/Ψ is ρ+/- π-/+ with a branching ratio of 1.7 per cent. The branching fractions of ωπ+π+π-π-, ωπ+π-π0, and ωπ+π- are 8.5× 10^-3, 4.0× 10^-3, and 8.6× 10^-3 respectively. The second octaves for the masses of ρ and π would be 1.3 TeV and .6 TeV, giving a net mass of 1.9 TeV, so that these mesons would be relativistic if a charmonium state with mass around 3.3 TeV is in question. If the two mesons decay by cooling, one would obtain two jets each decaying further into jets. Since the original mesons are relativistic, one would probably obtain two wide jets decomposing into sub-jets. This would not give the desired fireball-like outcome.

    The decays to ωπ+π+π-π- (see the Particle Data Tables) would produce five mesons which are second octaves of M89 mesons. The rest masses of the M89 mesons would in this case give a total rest mass of 3.5 TeV. In this kind of decay - if kinematically possible - the hadrons would be nearly at rest. They would decay further to lower octaves almost at rest. These states in turn would decay to ordinary quark pairs and electroweak bosons producing a large number of jets, and black hole like signatures might be obtained. If the process proceeds more slowly from the M89 level, the visible jets would correspond to M89 hadrons decaying to ordinary hadrons. Their transverse energies would be very high.
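A minimal numerical sketch of the naive scaling used in the estimates above. The inputs are ordinary meson masses and only the overall factor 512 = 2^9 between k = 107 and k = 89 is applied; the finer k-assignments discussed in the text are not reproduced:

```python
# Ordinary (M_107) meson masses in GeV and their naively scaled M_89 counterparts.
ordinary_masses_gev = {"J/psi": 3.097, "Upsilon": 9.460, "eta'": 0.958}

scale = 2 ** ((107 - 89) / 2)           # = 512
for name, mass in ordinary_masses_gev.items():
    print(f"{name}: {mass * scale / 1000:.2f} TeV")
# J/psi   -> about 1.59 TeV (text: about 1.55 TeV)
# Upsilon -> about 4.84 TeV (text: about 4.8 TeV)
# eta'    -> about 0.49 TeV (text: about .5 TeV)
```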

To sum up, a possible interpretation for the 10-jet event in the TGD framework would be as a p-adic hot spot produced in the collision by the over-heating of the M89 hadronic space-time sheet due to the presence of a bottonium or possibly charmonium state. The general signature of M89 hadron physics is jets which are much more energetic than QCD jets, and the data indicate their presence.

For more about the new physics predicted by TGD see the chapter New Particle Physics Predicted by TGD: Part I of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy". For the reader's convenience I have added a short pdf article Is the new boson reported by CDF pion of M89 hadron physics? at my homepage.

Wednesday, June 15, 2011

Could Gromov-Witten invariants and braided Galois homology together allow to construct WCW spinor fields?

The challenge of TGD is to understand the structure of WCW spinor fields both in the zero modes, which correspond to symplectically invariant degrees of freedom not contributing to the WCW Kähler metric, and in the quantum fluctuating degrees of freedom parametrized by the symplectic group of δ M4+/-× CP2. Basically the challenge is to understand the symplectic (or more precisely, contact) geometry of δ M4+/-× CP2. It seems that mathematicians and mathematical physicists (Gromov, Witten, Floer, and so on) have developed refined concepts for dealing with this problem.

One can develop good arguments suggesting that an appropriate generalization of Gromov-Witten invariants to covariants, combined with the braided Galois homology discussed in the previous posting, could allow one to construct WCW spinor fields and at the same time the M-matrices defining the rows of the unitary U-matrix between zero energy states. Finite measurement resolution would be the magic notion making everything calculable.

In the proposed framework the view about the construction of WCW spinor fields would be roughly the following.

  1. One can distinguish between WCW "orbital" degrees of freedom and fermionic degrees of freedom, and in the case of WCW degrees of freedom also between zero modes and quantum fluctuating degrees of freedom. Zero modes correspond essentially to the non-local symplectic invariants assignable to the projections of the δ M4+/- and CP2 Kähler forms to the space-time surface. Quantum fluctuating degrees of freedom correspond to the symplectic algebra in the basis defined by Hamiltonians belonging to the irreps of the rotation group and the color group.

  2. At the level of partonic 2-surfaces finite measurement resolution leads to a discretization in terms of braid ends and symplectic triangulation. At the level of WCW the discretization replaces the symplectic group with its discrete subgroup. This discrete subgroup must result as a coset space defined by the subgroup of the symplectic group acting as a Galois group on the set of braid points and its normal subgroup leaving them invariant. The group algebra of this discrete subgroup of the symplectic group would have an interpretation in terms of braided Galois cohomology. This picture provides an elegant realization of finite measurement resolution, and there is also a connection with the realization of finite measurement resolution using categorification.

  3. The generating function for Gromov-Witten invariants would define an excellent candidate for the part of the WCW spinor field defined on zero modes only. The generalizations of Gromov-Witten invariants to n-point functions defined by Hamiltonians of δ M4+/-× CP2 are symplectic invariants if the net δ M4+/-× CP2 quantum numbers vanish. The most general definition assumes that the vanishing of quantum numbers occurs only for zero energy states having disjoint unions of partonic 2-surfaces at the boundaries of CDs as a geometric correlate. A close analogy to topological string theory of type A emerges. This seems puzzling since in the twistorial approach to N=4 SUSY topological string theory of type B emerges. The celebrated mirror symmetry relating Calabi-Yau manifolds means that topological string theory of type A is mapped to that of type B in the mirror transformation. The proposal is that the two formulations of TGD, in terms of M4× CP2 on one hand and CP3× CP3 on the other, are related in a similar manner, so that the analog of topological string theory of type B would apply in the latter representation of quantum TGD. (The standard structures being generalized here are recalled after this list.)

  4. The proposed generalized homology theory involving the braided Galois group and the symplectic group of δ M4+/-× CP2 would realize the "almost" in TGD as almost topological QFT in finite measurement resolution, replacing the symplectic group with its discretized version. This algebra would relate to the quantum fluctuating degrees of freedom. The braids would carry only fermion number and there would be no Hamiltonians attached to them. The braided Galois homology could define in the more general situation invariants of symplectic isotopies.

  5. One should also add four-momenta and twistors to this picture. The separation of dynamical fermionic and super-symplectic degrees of freedom suggests that the Fourier transforms of amplitudes containing the fermionic braid end points as arguments define twistorial amplitudes. The representation of light-like momenta using twistors would lead to a generalization of the twistor formalism. At the zero momentum limit one would obtain a symplectic QFT with states characterized by collections of Hamiltonians and their super-counterparts.
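For orientation, the standard structures that would serve as the starting point of the generalization sketched in point 3 are the genus zero Gromov-Witten invariants GW^X_{0,n,β} of a symplectic manifold X and the small quantum product they define,

$$ \langle \gamma_1 * \gamma_2, \gamma_3\rangle \;=\; \sum_{\beta \in H_2(X;\mathbb{Z})} GW^{X}_{0,3,\beta}(\gamma_1,\gamma_2,\gamma_3)\, q^{\beta}, $$

together with the genus zero generating function

$$ \Phi(t) \;=\; \sum_{n\ge 3}\;\sum_{\beta} \frac{q^{\beta}}{n!}\,\langle t,\dots,t\rangle_{0,n,\beta}. $$

In the proposed generalization the cohomology classes γ_i would be replaced by Hamiltonians of δ M4+/-× CP2 and the pseudo-holomorphic curves by preferred extremals, but this identification is of course only the conjecture formulated above.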

For details see the new chapter Motives and Infinite Primes of "TGD as a Generalized Number Theory" or the article with the same title.

Tuesday, June 14, 2011

Categorification and finite measurement resolution

I read a very stimulating article by John Baez titled Categorification about the basic ideas behind a process called categorification. The process starts from sets consisting of elements. In the following I describe the basic ideas and propose how categorification could be applied to realize the notion of finite measurement resolution in the TGD framework.

What is categorification?

In categorification sets are replaced with categories and elements of sets are replaced with objects. Equations between elements are replaced with isomorphisms between objects: the right and left hand sides of equations are not the same thing but only related by an isomorphism so that they are not tautologies anymore. Functions between sets are replaced with functors between categories taking objects to objects and morphisms to morphisms and respecting the composition of morphisms. Equations between functions are replaced with natural isomorphisms between functors, which must satisfy certain coherence laws representable in terms of commuting diagrams expressing conditions such as commutativity and associativity.

The isomorphism between objects, representing an equation between elements of a set, replaces identity. What about isomorphisms themselves? Should also these be defined only up to an isomorphism of isomorphisms? And what about functors? Should one continue this replacement ad infinitum to obtain a hierarchy of what might be called n-categories, for which the process stops after the n:th level? This rather fuzzy business is what mathematicians like John Baez are actually doing.

Why categorification?

There are good motivations for categorification. Consider natural numbers as an example. A mathematically oriented person would think of the number '3' in terms of an abstract set theoretic axiomatization of natural numbers. One could also identify numbers as series of digits. In real life the representations of three-ness are more concrete and involve many kinds of associations. For a child '3' could correspond to three fingers. For a mystic it could correspond to the holy trinity. For a Christian to "faith, hope, love". All these representations are isomorphic representations of three-ness, but as real life objects three sheep and three cows are not identical.

We have however performed what might be called decategorification: that is, forgotten that the isomorphic objects are not equal. Decategorification was of course a stroke of mathematical genius with enormous practical implications: our information society represents all kinds of things in terms of numbers and successfully simulates the real world using only bit sequences. The dark side is that treating people as mere numbers can lead to a rather cold society.

An equally brilliant stroke of mathematical genius is the realization that isomorphic objects are not equal. Decategorification means a loss of information. Categorification brings back this information by bringing in consistency conditions known as coherence laws, and finding these laws is the hard part of categorification, meaning discovery of new mathematics. For instance, for braid groups commutativity modulo isomorphisms defines a highly non-trivial coherence law leading to the extremely powerful notion of quantum group, which has among other things applications in topological quantum computation.

No-one would have proposed categorification unless it were demanded by the practical needs of mathematics. In many mathematical applications it is obvious that isomorphism does not mean identity. For instance, in homotopy theory all paths deformable to each other in a continuous manner are homotopy equivalent but not identical. Isomorphism is now homotopy. These paths can be connected and form a groupoid. The outcome of the groupoid operation is determined up to homotopy. The deformations of closed paths starting from a given point, modulo homotopies, form the homotopy group, and one can interpret the elements of the homotopy group as copies of the point which are isomorphic. The replacement of the space with its universal covering makes this distinction explicit. One can form homotopies of homotopies and continue this process ad infinitum, obtaining in this manner higher homotopy groups as characterizations of the topology of the space.

Categorification as a manner to describe finite measurement resolution?

In quantum physics gauge equivalence represents a standard example of equivalence modulo isomorphisms, which are now gauge transformations. There is a practical strategy to treat the situation: perform a gauge choice by picking one representative among the infinitely many isomorphic objects. At the level of natural numbers a very convenient gauge fixing would correspond to the representation of a natural number as a sequence of decimal digits rather than as an image of three cows.

In the TGD framework an excellent motivation for categorification is the need to find an elegant mathematical realization for the notion of finite measurement resolution. Finite measurement resolutions (or cognitive resolutions) at various levels of the information transfer hierarchy imply an accumulation of uncertainties. Consider as a concrete example the uncertainty in the determination of the basic parameters of a mathematical model. This uncertainty is reflected in the final outcome via a long sequence of mathematical maps, and additional uncertainties are produced by the approximations at each step of this process.

How could one describe the finite measurement resolution elegantly in the TGD Universe? Categorification suggests a natural method. The points equivalent within the measurement resolution are isomorphic with each other. A natural guess inspired by gauge theories is that one should perform a gauge choice as an analog of decategorification. This also allows one to avoid a continuum of objects connected by arrows: the reader can easily imagine what a mess results when one tries to do this;-)!

  1. At the space-time level the gauge choice means a discretization of partonic 2-surfaces replacing them with a discrete set of points serving as representatives of the equivalence classes of points equivalent under the finite measurement resolution (a toy illustration of this step follows after this list). An especially interesting choice of points is as rational points or algebraic numbers and emerges naturally in the p-adicization process. One can also introduce what I have called a symplectic triangulation of partonic 2-surfaces, with the nodes of the triangulation representing the discretization and carrying quantum numbers of various kinds.

  2. At the level of "world classical worlds" (WCW) this means the replacement of the sub-group if the symplectic group of δ M4× CP2 -call it G- permuting the points of the symplectic triangulation with its discrete subgroup obtained as a factor group G/H, where H is a normal subgroup of G leaving the points of the symplectic triangulation fixed. One can also consider subgroups of the permutation group for the points of the triangulation. One can also consider flows with these properties to get braided variant of G/H. It would seem that one cannot regard the points of triangulation as isomorphic in the category theoretical sense. This because, one can have quantum superpositions of states located at these points and the factor group acts as the analog of isometry group. One can also have many-particle states with quantum numbers at several points. The possibility to assign quantum numbers to a given point becomes the physical counterpart for the axiom of choice. What is so fantastic is that finite measurement resolution leads to a replacement of the infinite-dimensional world of classical points with a discrete structure. Therefore operation like integration over entire "world of classical worlds" is replaced with a discrete sum. This makes things much easier- believe or not - and if not try yourself;-).

  3. What suggests itself strongly is a hierarchy of n-categories as a proper description of the finite measurement resolution. The increase of the measurement resolution means an increase in the number of braid points. One has also a braids of braids of braids structure implied by the possibility to map infinite primes, integers, and rationals to rational functions of several variables and by the conjectured possibility to represent the hierarchy of Galois groups involved as symplectic flows. If so, the hierarchy of n-categories would correspond to the hierarchy of infinite primes, which also has an interpretation in terms of a repeated second quantization of an arithmetic SUSY such that many-particle states of the previous level become single particle states of the next level.
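As a purely illustrative toy model of the gauge choice described in point 1 above (nothing here is TGD-specific; the resolution parameter and the rounding rule are arbitrary choices), finite measurement resolution groups nearby points into equivalence classes and a "gauge choice" picks one representative per class:

```python
from collections import defaultdict

def discretize(points, resolution):
    """Group 2-D points that agree within the given resolution into
    equivalence classes and return one representative per class."""
    classes = defaultdict(list)
    for x, y in points:
        key = (round(x / resolution), round(y / resolution))
        classes[key].append((x, y))
    # The "gauge choice": pick the first member of each class as its representative.
    return [members[0] for members in classes.values()]

points = [(0.12, 0.98), (0.11, 1.02), (2.50, 0.49), (2.48, 0.52)]
print(discretize(points, resolution=0.1))
# Only two representatives survive: the four points fall into two equivalence classes.
```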

The finite measurement resolution has also a representation in terms of inclusions of hyperfinite factors of type II1, of which the Clifford algebra generated by the gamma matrices of WCW is an example.

  1. The included algebra represents finite measurement resolution in the sense that its action generates states which cannot be distinguished from each other within the measurement resolution used. The natural conjecture is that this indistinguishability corresponds to a gauge invariance for some gauge group and that the TGD Universe is analogous to a Turing machine in that almost any gauge group can be represented in terms of finite measurement resolution.

  2. A second natural conjecture, inspired by the fact that symplectic groups have enormous representative power, is that these gauge symmetries allow a representation as subgroups of the symplectic group of δ M4× CP2. A nice article about the universality of symplectic groups is The symplectification of science by Mark J. Gotay.

  3. An interesting question is whether there exists a finite-dimensional space whose symplectomorphisms would allow a representation of any gauge group (or of all possible Galois groups as factor groups) and whether δ M4× CP2 could be a space of this kind with the smallest possible dimension.

Arrows are not all that is needed

There have been proposals that categories could be fundamental and that space-time, symmetries, and particles could emerge in some sense. Personally I do not find this idea sound.

  1. Categories consist of discrete objects and on the basis of the above arguments provide an indispensable tool for the physicist and the consciousness theorist, since both measurement resolution and cognitive resolution are always finite. In fact, finite resolution is not at all a negative thing since it forces an abstraction process by forming equivalence classes from objects not distinguishable from each other. Written language is one of the victories of this abstraction process: just a sequence of letters, "human", becomes a representation of an entire species. It would however be nonsense to assume that the world is actually discrete. Practically all physics would be lost, and the only manner to get it back would be by effectively replacing the discrete structures with a continuum. Mathematics would suffer the same fate and there would be very little use for category theory.

  2. I also find the idea of throwing away group theory very weird. Isomorphisms between objects form groups and are the cornerstone of category theory: why should one throw them away? 90 per cent of present-day quantum physics is group theory, and the above arguments suggest that category theory is natural in the description of finite measurement resolution, reducing the infinite-dimensional groups involved to discrete groups and giving also a profound connection with number theory. Without symmetries we do not have observables, which in quantum theory correspond to Lie algebras of continuous groups. As a matter of fact, in the TGD framework the role of symmetries is taken to the extreme: zero energy states correspond to a Lie algebra for an infinite-dimensional Yangian. The world of quantum worlds is a Lie algebra.

  3. It has also been suggested that the so-called associahedrons emerging in n-category theory could replace space-time and space as fundamental objects. Associahedrons are polygons used to represent geometrically associativity, or its weaker form modulo isomorphism, for the products of n objects bracketed in all possible manners. The polygon defines a hierarchy containing sub-polygons as its edges containing.... Associativity states the isomorphy of these polygons. According to John Baez associahedrons indeed allow a beautiful geometric realization of the coherence laws.

    One must however not forget that the very notion of an associahedron is an auxiliary tool which assumes the notion of Euclidian space, so that the claim about the emergence of space from category theory is an illusion, just as are the claims that continuous space-time can emerge from a discrete lattice at the infrared limit.

    One should also remember that the notion of n-category has its roots in homotopy theory, which describes topological invariants of various spaces. Only nonsense with arrows remains if one throws away all those structures whose description has motivated the development of the category theoretical approach. This kind of emergence is also in conflict with the very idea of categorification, since it would identify the isomorphic points - say points of a continuum equivalent within measurement resolution - to get a discrete structure and then conclude that this discrete structure is all that exists.