Friday, October 31, 2008

Two birthday gifts

October 30 was my birthday. It is nice to have birthdays and also friends. I even got gifts. Thank you! Two of the gifts made me especially happy. As a matter of fact, I am so proud of them that I want to tell all of you about them!

The first gift

During my Friday morning blog walk I found that Peter Woit had reported on my birthday about a possible discovery of a new long-lived particle by the CDF experiment. A detailed paper [1] by the CDF collaboration, titled Study of multi-muon events produced in p-pbar collisions at sqrt(s)=1.96 TeV, was added to the arXiv on October 29 - the eve of my birthday;-). Thank you!

Since I am too excited to type out the details myself, I just copy Peter Woit's brief summary of the finding.

The article originates in studies designed to determine the b-bbar cross-section by looking for events, where a b-bbar pair is produced, each component of the pair decaying into a muon. The b-quark lifetime is of order a picosecond, so b-quarks travel a millimeter or so before decaying. The tracks from these decays can be reconstructed using the inner silicon detectors surrounding the beam-pipe, which has a radius of 1.5 cm. They can be characterized by their “impact parameter”, the closest distance between the extrapolated track and the primary interaction vertex, in the plane transverse to the beam.

If one looks at events where the b-quark vertices are directly reconstructed, fitting a secondary vertex, the cross-section for b-bbar production comes out about as expected. On the other hand, if one just tries to identify b-quarks by their semi-leptonic decays, one gets a value for the b-bbar cross-section that is too large by a factor of two. In the second case, presumably there is some background being misidentified as b-bbar production.

The new result is based on a study of this background using a sample of events containing two muons, varying the tightness of the requirements on observed tracks in the layers of the silicon detector. The background being searched for should appear as the requirements are loosened. It turns out that such events seem to contain an anomalous component with unexpected properties that disagree with those of the known possible sources of background. The number of these anomalous events is large (tens of thousands), so this cannot just be a statistical fluctuation.

One of the anomalous properties of these events is that they contain tracks with large impact parameters, of order a centimeter rather than the hundreds of microns characteristic of b-quark decays. Fitting this tail by an exponential, one gets what one would expect to see from the decay of a new, unknown particle with a lifetime of about 20 picoseconds. These events have further unusual properties, including an anomalously high number of additional muons in small angular cones about the primary ones.

The lifetime is estimated to be considerably longer than the b quark lifetime and below the 89.5 ps lifetime of K0_S mesons. The fit to the tail of "ghost" muons gives an estimate of about 20 picoseconds.

Second gift

As if this gift were not enough, I received also a second one! Thank you, thank you! On October 29 also another remarkable paper [2] had appeared in the arXiv, titled Observation of an anomalous positron abundance in the cosmic radiation. The PAMELA collaboration finds an excess of cosmic ray positrons at energies 10-50 GeV. The PAMELA anomaly is discussed in Resonaances. ATIC in turn sees an excess of electrons and positrons going all the way up to energies of order 500-800 GeV [3].

Peter Woit also refers to these cosmic ray anomalies as well as to the article LHC Signals for a SuperUnified Theory of Dark Matter by Nima Arkani-Hamed and Neal Weiner [4], where a model of dark matter inspired by these anomalies is proposed together with a prediction of lepton jets with invariant masses of order GeV. The model assumes a new gauge interaction for dark matter particles, with Higgs and gauge boson masses around a GeV. The prediction is that the LHC should detect "lepton jets" with small angular separations and GeV-scale invariant masses.

TGD explanation of CDF anomaly

Consider first the CDF anomaly. TGD predicts a fractal hierarchy of QCD type physics. In particular, colored excitations of leptons are predicted to exist. Neutral lepto-pions would have a mass only slightly above two times the charged lepton mass. Also charged leptopions are predicted, and their masses depend on the p-adic mass scale of the colored neutrino. It is not clear whether this scale is much longer than that of the charged colored lepton, as it is in the case of ordinary leptons. If so, then the mass of the charged leptopion is essentially that of the charged lepton.
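As a quick orientation (my own back-of-the-envelope numbers, using standard charged lepton masses), the neutral leptopion masses implied by the "slightly above 2m(L)" rule are roughly:

```python
# Rough numerical sketch (mine, not from the post): neutral leptopion masses
# estimated as ~2*m(L), using standard charged lepton masses.
lepton_masses_gev = {"e": 0.000511, "mu": 0.1057, "tau": 1.777}

for lepton, m in lepton_masses_gev.items():
    print(f"neutral {lepton}-pion mass ~ {2 * m:.4f} GeV")
# -> ~0.001 GeV (electro-pion), ~0.21 GeV (muo-pion), ~3.55 GeV (tau-pion)
```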

  1. There exists considerable evidence for colored electrons dating back to the seventies (see the chapter Recent Status of Leptohadron Hypothesis of the book "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy" and references therein). The anomalous production of electron-positron pairs discovered in heavy ion collisions can be understood in terms of decays of electro-pions produced in the strong non-orthogonal electric and magnetic fields created in these collisions. The action determining the production rate would be proportional to the product of the leptopion field and the highly unique "instanton" action for the electromagnetic field determined by anomaly arguments, so that the model is highly predictive.

  2. Also the .511 MeV emission line [5,6] from the galactic center can be understood in terms of decays of neutral electro-pions to photon pairs. Electro-pions would reside at magnetic flux tubes of strong galactic magnetic fields. It is also possible that these particles are dark in TGD sense.

  3. There is also evidence for colored excitations of the muon and for muo-pions [7,8]. The TGD based model is discussed here. Muo-pions could be produced by the same mechanism as electro-pions in high energy collisions of charged particles when strong non-orthogonal magnetic and electric fields are generated.

Also τ-hadrons are possible, and the CDF anomaly can be understood in terms of the production of τ-hadrons, as the following argument demonstrates.

  1. τ-QCD at high energies would produce "lepton jets" just as ordinary QCD does. In particular, muon pairs with invariant mass below 2m(τ) ≈ 3.6 GeV could be produced by the decays of neutral τ-pions. The production of monochromatic gamma ray pairs is predicted to dominate the decays. Note that the space-time sheets associated with both ordinary hadrons and the τ lepton correspond to the p-adic prime M_107 = 2^107 − 1.

  2. One can imagine several options for the detailed production mechanism in the strong non-orthogonal fields E and B of the colliding proton and antiproton creating τ-pions.
    1. The decay of virtual τ-pions created in these fields to pairs of leptobaryons generates the lepton jets. Since colored leptons correspond to color octets, lepto-baryons could correspond to states of form LLL or LL̄L.
    2. The option inspired by a blog discussion with Ervin Goldfein is that a coherent state of τ-pions is created first and is then heated to a QCD-plasma-like state producing the lepton jets as in QCD.
    3. The option inspired by the CDF model is that a p-adically scaled-up variant of the on-mass-shell neutral τ-pion, having k=103 and a 4 times larger mass than the k=107 τ-pion, is produced and decays to three k=105 τ-pions, with the k=105 neutral τ-pion in turn decaying to three k=107 τ-pions.

  3. The basic characteristics of the anomalous muon pairs seem to fit with what one would expect from a jet generating a cascade of τ-pions. Muons of both charges would be produced democratically from neutral τ-pions; the number of muons would be anomalously high; and the invariant masses of muon pairs would be below 3.6 GeV for neutral τ-pions and below 1.8 GeV for charged τ-pions if colored neutrinos are light.

  4. The prediction for the neutral leptopion mass is 3.6 GeV, the same as in the paper of the CDF collaboration [13], which had appeared in the arXiv on Monday morning as I learned from the blog of Tommaso. The masses for the particles h3, h2 and h1 suggested in the article were 3.6 GeV, 7.3 GeV, and 15 GeV. The p-adic length scale hypothesis predicts that allowed mass scales come as powers of sqrt(2), and these masses come in good approximation as powers of 2. Several p-adic scales appear in low energy hadron physics for quarks, and this replaces the Gell-Mann formula for low-lying hadron masses. Therefore one can ask whether these masses correspond to the neutral τ-pion with p = M_k = 2^k − 1, k=107, and its scaled-up variants with p ≈ 2^k, k=105 and k=103 (also primes). The prediction for the masses would be 3.6 GeV, 7.2 GeV, and 14.4 GeV (cf. the numerical sketch after this list).

    The model however differs from the TGD based model in many respects, and in it the powers of two follow from the assumed production mechanism: h1 is assumed to be pair produced and to decay to an h2 pair decaying in turn to an h3 pair. The decay of a free τ-pion to two τ-pions is forbidden by parity conservation but can take place in the E·B "instanton" background inducing parity breaking, so that the decay cascade is possible also for τ-pions. The lightest state is assumed to be neutral and to decay to a τ pair. In the TGD framework both charged and neutral τ-pions are possible. The correct prediction for the lifetime provides strong support for the identification of the long-lived state as a charged τ-pion with mass near the τ mass, so that the decay to μ and its antineutrino dominates. The neutral τ-pion lifetime is 1.12×10^-17 seconds as will be found below. For its higher excitations the decay rate to two photons would scale as the mass of the τ-pion.

  5. The lifetime of 20 ps can be assigned to the charged τ-pion decaying weakly, essentially only into a muon and a neutrino. This provides a killer test for the hypothesis. In the absence of CKM mixing for colored neutrinos, the decay rate to a lepton and its antineutrino is given by


    $$\Gamma(\pi_\tau \rightarrow L + \bar{\nu}_L) \;=\; \frac{G^2\, m(L)^2\, f(\pi_\tau)^2\, \left(m(\pi_\tau)^2 - m(L)^2\right)^2}{4\pi\, m(\pi_\tau)^3}\;.$$

    The parameter f(π_τ) characterizing the coupling of the pion to the axial current can be written as f(π_τ) = r(π_τ)m(π_τ). For the ordinary pion one has f(π) = 93 MeV and r(π) = .67. The decay rate for the charged τ-pion is obtained by simple scaling, giving


    $$\Gamma(\pi_\tau \rightarrow L + \bar{\nu}_L) \;=\; 8\, x^2 u^2 y^3\,(1 - z^2)\,\frac{1}{\cos^2(\theta_c)}\;\Gamma(\pi \rightarrow \mu + \bar{\nu}_\mu)\;,$$

    $$x = \frac{m(L)}{m(\mu)}\;,\qquad y = \frac{m(\tau)}{m(\pi)}\;,\qquad z = \frac{m(L)}{2m(\tau)}\;,\qquad u = \frac{r(\pi_\tau)}{r(\pi)}\;.$$

    If the p-adic mass scale of the colored neutrino is the same as for ordinary neutrinos, the mass of the charged leptopion is in good approximation equal to the mass of τ, and the decay rates to τ and to electron are much slower than the decay rate to muon, so that muons are produced preferentially.

  6. For m(τ) = 1.8 GeV and m(π) = .14 GeV and the same value of the parameter f for the τ-pion as for the ordinary pion, the lifetime is obtained by scaling from the lifetime of the charged pion, about 2.6×10^-8 s (see the numerical sketch after this list). The prediction is 3.31×10^-12 s, to be compared with the experimental estimate of about 20×10^-12 s. r(π_τ) = .41 r(π) gives the correct prediction. Hence the explanation in terms of τ-pions seems rather convincing, unless one is willing to believe in really nasty miracles.

  7. The neutral τ-pion would decay dominantly to monochromatic gamma ray pairs. The decay rate is dictated by the product of the leptopion field and the "instanton" action (the inner product of E and B, reducing to a total divergence) and is given by


    $$\Gamma(\pi_\tau \rightarrow \gamma + \gamma) \;=\; \frac{\alpha_{em}^2\, m(\pi_\tau)^3}{64\pi^3\, f(\pi_\tau)^2} \;=\; 2\, x^{-2} y \times \Gamma(\pi \rightarrow \gamma + \gamma)\;,$$

    $$x = \frac{f(\pi_\tau)}{m(\pi_\tau)}\;,\qquad y = \frac{m(\tau)}{m(\pi)}\;,$$

    $$\Gamma(\pi \rightarrow \gamma + \gamma) = 7.37\ \mathrm{eV}\;.$$

    The predicted lifetime is 1.17×10^-17 seconds.

  8. The second decay channel is to lepton pairs, with muon pair production dominating for kinematical reasons. The invariant mass of the pairs is 3.6 GeV if no other particles are produced. Whether the mass of the colored neutrino is essentially the same as that of the charged lepton or corresponds to the same p-adic scale as the mass of the ordinary neutrino remains an open question. If the colored neutrino is light, the invariant mass of the muon-neutrino pair is below 1.78 GeV.
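The quantitative claims in items 4-6 are easy to check with a few lines of code. The sketch below is my own back-of-the-envelope check, with standard values of G_F, cos θ_c and the lepton and meson masses as inputs (these inputs are my additions, not taken from the post): it verifies that the quoted leptonic decay formula reproduces the charged pion lifetime of about 2.6×10^-8 s used as the scaling reference, and lists the p-adically scaled τ-pion masses for k = 107, 105, 103.

```python
import math

# Back-of-the-envelope check (mine, not from the post). Inputs are standard values;
# the formulas are the ones quoted in items 4 and 5 above.

GF = 1.166e-5                  # Fermi constant in GeV^-2
hbar = 6.582e-25               # GeV*s
m_pi, m_mu = 0.1396, 0.1057    # charged pion and muon masses in GeV
f_pi = 0.093                   # GeV, the f(pi) = 93 MeV convention used in the post
cos2_thetac = 0.974 ** 2       # Cabibbo factor entering the ordinary pion decay

# Gamma(pi -> mu nu) = G^2 m_mu^2 f_pi^2 (m_pi^2 - m_mu^2)^2 / (4 pi m_pi^3) * cos^2(theta_c)
gamma_pi = (GF ** 2 * m_mu ** 2 * f_pi ** 2 * (m_pi ** 2 - m_mu ** 2) ** 2
            / (4 * math.pi * m_pi ** 3)) * cos2_thetac
print(f"tau(pi -> mu nu) ~ {hbar / gamma_pi:.2e} s")   # ~2.6e-8 s, the scaling reference

# p-adic scaling of item 4: the mass scale goes as 2^(-k/2), so m(k) = 2^((107-k)/2) * m(107).
m_taupion_107 = 3.6  # GeV, neutral tau-pion mass ~ 2 m(tau)
for k in (107, 105, 103):
    print(f"k = {k}: m(pi_tau) ~ {m_taupion_107 * 2 ** ((107 - k) / 2):.1f} GeV")
# -> 3.6, 7.2 and 14.4 GeV, to be compared with the 3.6, 7.3 and 15 GeV of [13]
```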

PAMELA and ATIC anomalies in TGD framework

TGD predicts also a hierarchy of hadron physics assignable to Mersenne primes. The mass scale of M_89 hadron physics is by a factor 512 higher than that of ordinary hadron physics. Therefore a very rough estimate for the mass of the nucleons of this physics is 512 GeV. This suggests that the decays of M_89 hadrons are responsible for the anomalous positrons and electrons up to energies 500-800 GeV reported by the ATIC collaboration. An equally naive scaling for the mass of the pion predicts that the M_89 pion has a mass of about 72 GeV. This could relate to the anomalous cosmic ray positrons in the energy interval 10-50 GeV reported by the PAMELA collaboration. Be that as it may, the prediction is that M_89 hadron physics exists and could make itself visible at the LHC.
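As a rough check (my own arithmetic; the ordinary nucleon and pion masses are my inputs), the scaling factor and the resulting M_89 mass estimates are:

```python
# Rough numerical sketch (mine): the M_89 mass scale is 2^((107-89)/2) = 2^9 = 512
# times the ordinary M_107 hadronic mass scale.
scale = 2 ** ((107 - 89) / 2)
m_nucleon, m_pion = 0.94, 0.14   # ordinary nucleon and pion masses in GeV
print(f"scale factor      : {scale:.0f}")                  # 512
print(f"M_89 nucleon mass : ~{scale * m_nucleon:.0f} GeV") # ~480 GeV, the ATIC energy range
print(f"M_89 pion mass    : ~{scale * m_pion:.0f} GeV")    # ~72 GeV, cf. the PAMELA interval
```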

The surprising finding is that the positron fraction (the ratio of the positron flux to the sum of the electron and positron fluxes) increases above 10 GeV. If positrons emerge from secondary production during the propagation of cosmic ray nuclei, this ratio should decrease if only standard physics is involved in the collisions. This is taken as evidence for the production of electron-positron pairs, possibly in the decays of dark matter particles. In fact, I found that I have already discussed in Recent Status of Leptohadron Hypothesis the production of anomalous electron-positron pairs in hadronic reactions [9,10,11,12] as evidence for the lepto-hadron hypothesis.

The leptohadron hypothesis predicts that in high energy collisions of charged nuclei with charged particles of matter it is possible to produce also charged electro-pions, which decay to electrons or positrons depending on their charge and produce the electronic counterparts of the jets discovered by CDF. This proposal - and more generally the leptohadron hypothesis - could be tested by trying to find whether electronic jets can also be found in proton-proton collisions. They should be present at considerably lower energies than muon jets. The simple-minded guess is that for proton-proton collisions the center of mass energy at which jet formation begins to make itself visible is in a constant ratio to the mass of the charged lepton. From CDF data this ratio is around s^1/2/m(τ) = x < 10^3. For electro-pions the threshold energy would be around 10^-3 x × .5 GeV and for muo-pions around 10^-3 x × 100 GeV.
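A minimal sketch of this threshold scaling (my own numbers: I estimate x from the Tevatron energy of 1.96 TeV and the τ mass; the post only bounds x from above):

```python
# Illustrative estimate (mine): the jet-formation threshold is assumed to scale with
# the charged lepton mass, sqrt(s)_threshold ~ x * m(L), with x estimated from CDF data.
x = 1960.0 / 1.78                  # sqrt(s) = 1.96 TeV over m(tau) ~ 1.78 GeV, so x ~ 10^3
m_e, m_mu = 0.000511, 0.1057       # electron and muon masses in GeV
print(f"x ~ {x:.0f}")
print(f"electro-pion jets: threshold ~ {x * m_e:.2f} GeV")  # ~0.5 GeV scale
print(f"muo-pion jets:     threshold ~ {x * m_mu:.0f} GeV") # ~100 GeV scale
```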

Does a phase transition increasing the value of Planck constant take place in the production of leptopions?

The critical argument of Tommaso Dorigo in his blog inspired an attempt to formulate more precisely the hypothesis √s/m(τ) > x, x < 10^3. This led to the realization that a phase transition increasing the Planck constant might happen in the production process, as the model for the production of electro-pions in heavy ion collisions also requires.

Suppose that the instanton coupling gives rise to virtual neutral leptopions decaying to pairs of leptobaryons producing the jets. E and B could be associated with the colliding proton and antiproton or quarks.

  1. The amplitude for leptopion production is essentially the Fourier transform of E·B, where E and B are the non-orthogonal electric and magnetic fields of the colliding particles. At the level of scales one has t ~ ℏ/E, where t is the time during which E·B is large enough during the collision and E is the energy scale of the virtual leptopion giving rise to the jet.

  2. In order to have jets one must have m(π_τ) << E. If the scaling law E ∝ √s holds true, one indeed has √s/m(π_τ) > x, x < 10^3.

  3. If the proton and antiproton moved freely, t would be of the order of the time for the proton to move through a distance which is 2 times the Lorentz contracted radius of the proton: t_free = 2×(1-v^2)^1/2 R_p/v = 2ℏ/E_p. This would give for the energy scale of the virtual τ-pion the estimate E = ℏ/t_free = √s/4. x = 4 is certainly quite too small a value. Actually t > t_free holds true, but one can argue that without new physics the time for the preservation of E·B cannot be longer than for a free collision by a factor of order 2^8.

  4. For a colliding quark pair one would have t_free = 4ℏ/s_pair(s)^1/2, where s_pair(s)^1/2 would be the typical invariant energy of the pair, which is exponentially smaller than √s. Somewhat paradoxically from the classical physics point of view, the time scale would be much longer for the collision of quarks than for the proton and antiproton.

The possible new physics relates to the possibility that leptopions are dark matter in the sense that they have Planck constant larger than the standard value.

  1. Suppose that the produced leptopions have Planck constant ℏ larger than its standard value ℏ_0. This is actually required by the model for electro-pion production, since otherwise the production cross section is not quite large enough.

  2. Assume that a phase transition increasing the Planck constant occurs during the collision. Hence t is scaled up by a factor y = ℏ/ℏ_0. The inverse of the leptopion mass scale is a natural candidate for the scaled-up dark time scale. Assuming t(ℏ_0) ~ t_free, one obtains y ~ s_min(s)^1/2/4m(π_τ) ≤ 2^8, giving for the proton-antiproton option the first guess √s/m(π_τ) > x, x < 2^10. If the value of y does not depend on the type of leptopion, the proposed estimates for muo- and electro-pions follow.

  3. If the fields E and B are associated with colliding quarks, only colliding quark pairs with s_pair(s)^1/2 > m(π_τ) contribute, giving y_q(s) = (s_pair(s)/s)^1/2 × y.

If the τ-pions produced in the magnetic field are on-mass-shell τ-pions with k=103, the value of ℏ would satisfy ℏ/ℏ_0 < 2^5 and √s/m(π_τ) > x, x < 2^7.
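These powers of two are easy to reproduce numerically. The sketch below is my own order-of-magnitude check, with √s = 1.96 TeV and the τ-pion mass options discussed above as the only inputs:

```python
import math

# Order-of-magnitude check (mine) of the powers-of-two estimates quoted above.
sqrt_s = 1960.0  # Tevatron center of mass energy in GeV
for label, m in [("charged tau-pion, m ~ m(tau)", 1.78),
                 ("neutral tau-pion, k = 107  ", 3.6),
                 ("scaled-up tau-pion, k = 103", 14.4)]:
    ratio = sqrt_s / m       # sqrt(s)/m(pi_tau), to be compared with x
    y = sqrt_s / (4 * m)     # estimate for y = hbar/hbar_0 from t(hbar_0) ~ t_free
    print(f"{label}: sqrt(s)/m ~ 2^{math.log2(ratio):.1f}, y ~ 2^{math.log2(y):.1f}")
# -> roughly 2^10 and 2^8 for m ~ 1.8 GeV, and roughly 2^7 and 2^5 for the k = 103 option.
```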

Summary

To sum up, the probability that a correct prediction for the lifetime of the new particle, obtained using only known lepton masses and standard formulas for weak decay rates, follows by accident is extremely low. Throwing a coin a billion times and getting the same result every time might be something comparable to this. Therefore my sincere hope is that colleagues would finally be mature enough to take TGD seriously. If the TGD based explanation of the anomalous production of electron-positron pairs in heavy ion collisions had been taken seriously fifteen years ago, particle physics might look quite different now.

References

[1] CDF Collaboration (2008), Study of multi-muon events produced in p-pbar collisions at sqrt(s)=1.96 TeV.

[2] PAMELA Collaboration (2008), Observation of an anomalous positron abundance in the cosmic radiation.

[3] J. Chang et al. (ATIC) (2005), prepared for the 29th International Cosmic Ray Conference (ICRC 2005), Pune, India, 3-10 August 2005.

[4] N. Arkani-Hamed and N. Weiner (2008), LHC Signals for a SuperUnified Theory of Dark Matter.

[5] E. Churazov, R. Sunyaev, S. Sazonov, M. Revnivtsev, and D. Varshalovich, Mon. Not. Roy. Astron. Soc. 357, 1377 (2005), astro-ph/0411351.

[6] G. Weidenspointner et al., Astron. Astrophys. 450, 1013 (2006), astro-ph/0601673.

[7] X.-G. He, J. Tandean, G. Valencia (2007), Has HyperCP Observed a Light Higgs Boson?, Phys. Rev. D74. http://arxiv.org/abs/hep-ph/0610274.

[8] X.-G. He, J. Tandean, G. Valencia (2007), Light Higgs Production in Hyperon Decay, Phys. Rev. Lett. 98. http://arxiv.org/abs/hep-ph/0610362.

[9] T. Akesson et al (1987), Phys. Lett. B192, 463,
T. Akesson et al (1987), Phys. Rev. D36, 2615.

[10] A.T. Goshaw et al (1979), Phys. Rev. Lett. 43, 1065.

[11] P.V. Chliapnikov et al (1984), Phys. Lett. B 141, 276.

[12] S. Barshay (1992), Mod. Phys. Lett. A, Vol. 7, No. 20, p. 1843.

[13] P. Giromini, F. Happacher, M. J. Kim, M. Kruse, K. Pitts, F. Ptohos, S. Torre (2008), Phenomenological interpretation of the multi-muon events reported by the CDF collaboration.

For details and background see the chapter Recent Status of Leptohadron Hypothesis of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

Monday, October 27, 2008

Could operads allow the formulation of the generalized Feynman rules?

The previous discussion of symplectic fusion rules leaves open many questions.

  1. How to combine symplectic and conformal fields to what might be called symplecto-conformal fields?

  2. The previous discussion applies only in super-canonical degrees of freedom and the question is how to generalize the discussion to super Kac-Moody degrees of freedom.

  3. How do four-momentum and its conservation within the limits of measurement resolution enter this picture?

  4. At least two operads related to measurement resolution seem to be present: the operads formed by the symplecto-conformal fields and by generalized Feynman diagrams. For generalized Feynman diagrams the causal diamond (CD) is the basic object, whereas disks of S2 are the basic objects in the case of symplecto-conformal QFT with a finite measurement resolution. These two different views about finite measurement resolution should be more or less equivalent, and one should understand this equivalence at the level of details.

  5. Is it possible to formulate generalized Feynman diagrammatics and improved measurement resolution algebraically?

1. How to combine conformal fields with symplectic fields?

The conformal fields of conformal field theory should be somehow combined with symplectic scalar field to form what might be called symplecto-conformal fields.

  1. The simplest thing to do is to multiply ordinary conformal fields by a symplectic scalar field so that the fields would be restricted to a discrete set of points for a given realization of the N-dimensional fusion algebra. The products of these symplecto-conformal fields at different points would define a finite-dimensional algebra, and the products of these fields at the same point could be assumed to vanish.

  2. There is a continuum of geometric realizations of the symplectic fusion algebra since the edges of the symplectic triangles can be selected rather freely. The integrations over the coordinates z_k (most naturally the complex coordinate of S2 transforming linearly under rotations around the quantization axis of angular momentum) restricted to the circle appearing in the definition of the simplest stringy amplitudes would thus correspond to an integration over various geometric realizations of a given N-dimensional symplectic algebra.

The fusion algebra realizes the notion of finite measurement resolution. One implication is that all n-point functions vanish for n > N. A second implication could be that the points appearing in the geometric realizations of the N-dimensional symplectic fusion algebra have some minimal distance. This would imply a cutoff on the multiple integrals over the complex coordinates z_k varying along the circle giving the analogs of stringy amplitudes. This cutoff is not absolutely necessary since the integrals defining stringy amplitudes are well-defined despite the singular behavior of the n-point functions. One can also ask whether it is wise to introduce a cutoff that is not necessary and whether the fusion algebra provides only a justification for the iε prescription used to avoid poles and obtain finite integrals.

The fixed values of the quantities ∫A_μdx^μ along the edges of the symplectic triangles could indeed pose a lower limit on the distance between the vertices of the symplectic triangles. Whether this occurs depends on what one precisely means by a symplectic triangle.

  1. The condition that the angles between the edges at the vertices are smaller than π for the triangle and larger than π for its conjugate is not enough to exclude loopy edges, and one would obtain ordinary stringy amplitudes multiplied by the symplectic phase factors. The outcome would be an integral over the arguments z_1, z_2, ..., z_n of the standard stringy n-point amplitude multiplied by a symplectic phase factor which is piecewise constant in the integration domain.

  2. The condition that the points at different edges of the symplectic triangle can be connected by a geodesic segment belonging to the interior of the triangle is much stronger and would induce a length scale cutoff. How to realize this cutoff at the level of calculations is not clear. One could argue that this problem need not have any nice solution and since finite measurement resolution requires only finite calculational resolution, the approximation allowing loopy edges is acceptable.

Symplecto-conformal fields should form an operad. This means that the improvement of measurement resolution should also correspond to an algebra homomorphism in which super-canonical symplecto-conformal fields in the original resolution are mapped into fields which contain a sum over products of conformal fields at different points: for the symplectic parts of the field the product always reduces to a sum over the values of the field. For instance, if the field at point s is mapped to an average of fields at points s_k, the nilpotency condition x^2 = 0 is satisfied.

2. Symplecto-conformal fields in Super-Kac-Moody sector

The picture described above is an over-simplification since it applies only in super-canonical degrees of freedom. The vertices of generalized Feynman diagrams are absent from the description, and the CP2 Kähler form induced on the space-time surface, which is an absolutely essential part of quantum TGD, is nowhere visible in the treatment.

How should one bring in the Super Kac-Moody (SKM) algebra representing the stringy degrees of freedom in the conventional sense of the word? The condition that the basic building bricks are the same for the treatment of these degrees of freedom is a valuable guideline.

  1. In the transition from super-canonical to SKM degrees of freedom the light-cone boundary is replaced with the light-like 3-surface X3 representing the light-like random orbit of parton and serving as the basic dynamical object of quantum TGD. The sphere S2 of light-cone boundary is in turn replaced with a partonic 2-surface X2. This suggests how to proceed.

  2. In the case of the SKM algebra the symplectic fusion algebra is represented geometrically as points of the partonic 2-surface X2 by replacing the symplectic form of S2 with the induced CP2 symplectic form at the partonic 2-surface, defining a U(1) gauge field. This gives a similar hierarchy of symplecto-conformal fields as in the super-canonical case. This also realizes the crucial aspects of the classical dynamics defined by Kähler action. In particular, for vacuum 2-surfaces the symplectic fusion algebra trivializes since the Kähler magnetic fluxes vanish identically, and 2-surfaces near vacua require a large value of N for the dimension of the fusion algebra since the available Kähler magnetic fluxes are small.

  3. In the super-canonical case the projection along a light-like ray allows one to map the points at the light-cone boundaries of CD to points of the same sphere S2. In the case of light-like 3-surfaces, light-like geodesics representing braid strands allow one to map the points of the partonic two-surfaces at the future and past light-cone boundaries to the partonic 2-surface representing the vertex. The earlier proposal was that the ends of strands meet at the partonic 2-surface so that braids would replicate at vertices. The properties of symplectic fields would however force identical vanishing of the vertices if this were the case. There is actually no reason to assume this condition, and without it vertices involving a total number N of incoming and outgoing strands correspond to symplecto-conformal N-point functions, as is indeed natural. Also now the Kähler magnetic flux induces a cutoff distance.

  4. SKM braids reside at light-like 3-surfaces representing lines of generalized Feynman diagrams. If super-canonical braids are needed at all, they must be assigned to the two light-like boundaries of CD meeting each other at the sphere S2 at which future and past directed light-cones meet.

3. The treatment of four-momentum and other quantum numbers

Four-momentum enjoys a special role in super-canonical and SKM representations in that it does not correspond to a quantum number assignable to the generators of these algebras. It would be nice if the somewhat mysterious phase factors associated with the representation of the symplectic algebra could code for the four-momentum - or rather the analogs of plane waves representing eigenstates of four-momentum at the points associated with the geometric representation of the symplectic fusion algebra. The situation is more complex as the following considerations show.

3.1 The representation of longitudinal momentum in terms of phase factors

  1. The generalized coset representation for super-canonical and SKM algebras implies Equivalence Principle in the generalized sense that the differences of the generators of two super Virasoro algebras annihilate the physical states. In particular, the four-momenta associated with super-canonical resp. SKM degrees of freedom are identified as inertial resp. gravitational four-momenta and are equal by Equivalence Principle. The question is whether four-momentum could be coded in both algebras in terms of non-integrable phase factors appearing in the representations of the symplectic fusion algebras.

  2. Four different phase factors are needed if all components of four-momentum are to be coded. Both the number theoretical vision about quantum TGD and the realization of the hierarchy of Planck constants assign to each point of the space-time surface the same plane M2 ⊂ M4 serving as the plane of non-physical polarizations. This condition allows one to assign to a given light-like partonic 3-surface a unique extremal of Kähler action defining the Kähler function as the value of Kähler action. Also p-adic mass calculations support the view that the physical states correspond to eigenstates for the components of longitudinal momentum only (the parton model for hadrons also assumes this). This encourages one to think that only the M2 part of four-momentum is coded by the phase factors. Transversal momentum squared would be a well defined quantum number determined from the mass shell conditions for the representations of the super-canonical (or equivalently SKM) conformal algebra, much as in string models.

  3. The phase factors associated with the symplectic fusion algebra mean a deviation from conformal n-point functions, and the innocent question is whether these phase factors could be identified as plane-wave phase factors associated with the transversal part of the four-momentum, so that the n-point functions would be strictly analogous to stringy amplitudes. In fact, the identification of the phase factor exp(i∫A_μdx^μ/ℏ) along a path as a phase factor exp(i p_(L,k) Δm^k) defined by the ends of the path and associated with the longitudinal part of the four-momentum would correspond to an integral form of the covariant constancy condition (dx^μ/ds)(∂_μ − iA_μ)Ψ = 0 along the edge of the symplectic triangle or a more general path. A second phase factor would come from the integral along the (most naturally) light-like curve defining the braid strand associated with the point in question. A geometric representation for the two projections of the gravitational four-momentum would thus result in SKM degrees of freedom, and apart from the non-uniqueness related to multiples of 2π the components of M2 momentum could be deduced from the phase factors. If one is satisfied with the projection of the momentum to M2, this is enough.

  4. The phase factors assignable to the CP2 Kähler gauge potential are Lorentz invariant, unlike the phase factors assignable to four-momentum. One can try to resolve the problem by noticing an important delicacy involved with the formulation of quantum TGD as almost topological QFT. In order to have a non-vanishing four-momentum it is necessary to assume that the CP2 Kähler form has a Kähler gauge potential with an M4 projection, which is a Lorentz invariant constant vector in the direction of the vector field defined by the light-cone proper time. One cannot eliminate this part of the Kähler gauge potential by a gauge transformation since the symplectic transformations of CP2 do not induce genuine gauge transformations but only symmetries of vacuum extremals of Kähler action. The presence of the M4 projection is necessary for having a non-vanishing gravitational mass in the fundamental theory relying on Chern-Simons action for light-like 3-surfaces, and the magnitude of this vector brings the gravitational constant into TGD as a fundamental constant whose value is dictated by quantum criticality.

  5. Since the phase of the time-like phase factor is proportional to the increment of the proper time coordinate of the light-cone, it is also Lorentz invariant! Since the selection of S2 fixes a rest frame, one can however argue that the representation in terms of phases is only for the rest energy in the case of a massive particle. Also the number theoretic approach selects a preferred rest frame by assigning a time direction to the hyper-quaternionic real unit. In the case of a massless particle this interpretation does not work, since the vanishing of the rest mass implies that the light-like 3-surface is a piece of the light-cone boundary and thus a vacuum extremal. p-Adic thermodynamics, predicting a small mass even for massless particles, can save the situation. A second possibility is that the phase factor defined by the Kähler gauge potential is proportional to the Kähler charge of the particle and vanishes for massless particles.

  6. This picture would mean that the phase factors assignable to the symplectic triangles have nothing to do with momentum. Because the space-like phase factor exp(i S_z Δφ/ℏ) associated with the edge of the symplectic triangle is completely analogous to that for momentum, one can argue that the symplectic triangulation should define a kind of spin network utilized in discretized approaches to quantum gravity. This interpretation raises the question of the interpretation of the quantum numbers assignable to the Lorentz invariant phase factors defined by the CP2 part of the CP2 Kähler gauge potential.

  7. By the generalized Equivalence Principle one should have two phase factors also in super-canonical degrees of freedom in order to characterize inertial four-momentum and spin. The inclusion of the phase factor defined by the radial integral along the light-like radial direction of the light-cone boundary gives an additional phase factor if the gauge potential of the symplectic form of the light-cone boundary contains a gradient of the radial coordinate r_M varying along light-rays. The gravitational constant would characterize the scale of the "gauge parts" of the Kähler gauge potentials both in M4 and CP2 degrees of freedom. The identity of inertial and gravitational four-momenta means that the super-canonical and SKM algebras represent one and the same symplectic field in S2 and X2.

  8. Equivalence Principle in the generalized form requires that also the super-canonical representation allows two additional Lorentz invariant phase factors. These phase factors are obtained if the Kähler gauge potential of the light-cone boundary has a gauge part also in CP2. The invariance under U(2) ⊂ SU(3) fixes the choice of the gauge part to be proportional to the gradient of the U(2) invariant radial distance from the origin of CP2 characterizing the radii of 3-spheres around the origin. Thus M4×CP2 would deviate from a pure Cartesian product in a very delicate manner, making it possible to talk about almost topological QFT instead of only topological QFT.

3.2 The quantum numbers associated with phase factors for CP2 parts of Kähler gauge potentials

Suppose that it is possible to assign two independent and different phase factors to the same geometric representation, in other words have two independent symplectic fields with the same geometric representation. The product of two symplectic fields indeed makes sense and satisfies the defining conditions. One can define prime symplectic algebras and decompose symplectic algebras to prime factors. Since one can allow permutations of elements in the products it becomes possible to detect the presence of product structure experimentally by detecting different combinations for products of phases caused by permutations realized as different combinations of quantum numbers assigned with the factors. The geometric representation for the product of n symplectic fields would correspond to the assignment of n edges to any pair of points. The question concerns the interpretation of the phase factors assignable to the CP2 parts of Kähler gauge potentials of S2 and CP2 Kähler form.

  1. The only reasonable interpretation for the two additional phase factors would be in terms of two quantum numbers having both gravitational and inertial variants and identical by Equivalence Principle. These quantum numbers should be Lorentz invariant since they are associated with the CP2 projection of the Kähler gauge potential of CP2 Kähler form.

  2. Color hypercharge and isospin are mathematically completely analogous to the components of four-momentum, so that a possible identification of the phase factors is as a representation of these quantum numbers. The representation of plane waves as phase factors exp(i p_k Δm^k/ℏ) generalizes to the representation exp(i Q_A ΔΦ_A/ℏ), where Φ_A are the angle variables conjugate to the Hamiltonians representing color hypercharge and isospin. This representation depends on the end points only, so that the crucial symplectic invariance with respect to the symplectic transformations respecting the end points of the edge is not lost (the U(1) gauge transformation is induced by the scalar j^k A_k, where j^k is the symplectic vector field in question).

  3. One must be cautious with the interpretation of the phase factors as a representation of color hypercharge and isospin, since a breaking of color gauge symmetry would result: the phase factors associated with different values of color isospin and hypercharge would be different and could not correspond to the same edge of the symplectic triangle. This is questionable since the color group itself represents symplectic transformations. The construction of CP2 as a coset space SU(3)/U(2) identifies U(2) as the holonomy group of the spinor connection, which has an interpretation as the electro-weak group. Therefore also the interpretation of the phase factors in terms of em charge and weak charge can be considered. In the TGD framework the electro-weak gauge potentials indeed suffer a non-trivial gauge transformation under color rotations, so that the correlation between electro-weak quantum numbers and non-integrable phase factors in the Cartan algebra of the color group could make sense. Electro-weak symmetry breaking would have a geometric correlate in the sense that different values of weak isospin cannot correspond to paths with the same values of the phase angles ΔΦ_A between the end points.

  4. If the phase factors associated with the M4 and CP2 are assumed to be identical, the existence of geometric representation is guaranteed. This however gives constraints between rest mass, spin, and color (or electro-weak) quantum numbers.

3.3 Some general comments

Some further comments about phase factors are in order.

  1. By number theoretical universality the plane wave factors associated with four-momentum must have values coming as roots of unity (just as for a particle in a box consisting of a discrete lattice of points). At the light-like boundary the quantization conditions reduce to the condition that the value of the light-like coordinate is a rational of the form m/N, if N:th roots of unity are allowed.

  2. In accordance with the finite measurement resolution of four-momentum, four-momentum conservation is replaced by a weaker condition stating that the products of phase factors representing incoming and outgoing four-momenta are identical (a toy illustration follows after this list). This means that positive and negative energy states at opposite boundaries of CD would correspond to complex conjugate representations of the fusion algebra. In particular, the product of phase factors in the decomposition of the conformal field to a product of conformal fields should correspond to the original field value. This would give constraints on the trees physically possible in the operad formed by the fusion algebras. Quite generally, the phases expressible as products of phases exp(inπ/p), where p ≤ N is prime, must be allowed in a given resolution, and this suggests that the hierarchy of p-adic primes is involved. In the limit of very large N exact momentum conservation should emerge.

  3. Super-conformal invariance gives rise to mass shell conditions relating longitudinal and transversal momentum squared. The massivation of massless particles by Higgs mechanism and p-adic thermodynamics pose additional constraints to these phase factors.
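As a toy illustration of item 2 (my own sketch; the labels, the value of N and the momentum assignments are hypothetical and only serve to show the idea), momentum conservation within a finite resolution reduces to the equality of products of roots of unity:

```python
import cmath

# Toy model (mine): each longitudinal momentum label is represented by an N:th root of
# unity, and exact momentum conservation is replaced by equality of the phase products.
N = 16  # finite measurement resolution: only N:th roots of unity are allowed

def phase(n):
    """Plane-wave factor for a discrete momentum label n."""
    return cmath.exp(2j * cmath.pi * n / N)

def phase_product(momenta):
    prod = 1
    for n in momenta:
        prod *= phase(n)
    return prod

incoming = [3, 5]   # hypothetical incoming momentum labels
outgoing = [7, 1]   # 3 + 5 = 7 + 1 modulo N, so the phase products agree
print(abs(phase_product(incoming) - phase_product(outgoing)) < 1e-12)  # True
```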

4. What does the improvement of measurement resolution really mean?

To proceed one must give a more precise meaning to the notion of measurement resolution. Two different views about the improvement of measurement resolution emerge. The first one, relying on the replacement of braid strands with braids, applies in SKM degrees of freedom, and the corresponding homomorphism maps symplectic fields into their products. The homomorphism based on the averaging of symplectic fields over the added points, consistent with the extension of the fusion algebra described in the previous section, is very natural in super-canonical degrees of freedom. The directions of these two algebra homomorphisms are different. The question is whether both can be involved in both the super-canonical and the SKM case. Since the end points of SKM braid strands correspond to both super-canonical and SKM degrees of freedom, it seems that a division of labor is the only reasonable option.

  1. Quantum classical correspondence requires that measurement resolution has a purely geometric meaning. A purely geometric manner to interpret the increase of the measurement resolution is as a replacement of a braid strand with a braid in the improved resolution. If one identifies the phase factor associated with the fusion algebra element with four-momentum, the conservation of the phase factor in the associated homomorphism is a natural constraint. The mapping of a fusion algebra element (strand) to a product of fusion algebra elements (braid) allows one to realize this condition. A similar mapping of a field value to a product of field values should hold true for the conformal parts of the fields. There exists a large number of equivalent geometric representations for a given symplectic field value, so that one obtains automatically an averaging in conformal degrees of freedom. This interpretation of the improvement of measurement resolution looks especially natural for SKM degrees of freedom, for which braids emerge naturally.

  2. One can also consider the replacement of the symplecto-conformal field with an average over the points becoming visible in the improved resolution. In super-canonical degrees of freedom this looks especially natural, since the assignment of a braid with the light-cone boundary is not as natural as with a light-like 3-surface. This map does not conserve the phase factor, but this could be interpreted as reflecting the fact that the values of the light-like radial coordinate are different for the points involved. The proposed extension of the symplectic algebra conforms with this interpretation.

  3. In the super-canonical case the improvement of measurement resolution means an improvement of the angular resolution at the sphere S2. In the SKM sector it means improved resolution for the position at the partonic 2-surface. For the SKM algebra the increase of the measurement resolution related to the braiding takes place inside the light-like 3-surface. This operation corresponds naturally to an addition of a sub-CD inside which braid strands are replaced with braids. This is like looking with a microscope at a particular part of a line of a generalized Feynman graph inside CD, and it corresponds to a genuine physical process inside the parton. In the super-canonical case the replacement of a braid strand with a braid (at the light-cone boundary) is induced by the replacement of the projection of a point of a partonic 2-surface to S2 with a collection of points coming from several partonic 2-surfaces. This replaces the point s of S2 associated with CD with a set of points s_k of S2 associated with the sub-CD. Note that the solid angle spanned by these points can be rather large, so that a zoom-up is in question.

  4. The improved measurement resolution means that a point of S2 (X2) at the boundary of CD is replaced with a point set of S2 (X2) assignable to the sub-CD. The task is to map the point set to a small disk around the point. Light-like geodesics along the light-like X3 define this map naturally in both cases. In the super-canonical case this map means a scaling down of the solid angle spanned by the points of S2 associated with the sub-CD.

5. How do the operads formed by generalized Feynman diagrams and symplecto-conformal fields relate?

The discussion above leads to the following overall view about the situation. The basic operation for both the symplectic and the Feynman graph operads corresponds to an improvement of measurement resolution. In the case of the planar disk operad this means a replacement of a white region of a map with smaller white regions. In the case of the Feynman graph operad this means better space-time resolution, leading to a replacement of a generalized Feynman graph with a new one containing a new sub-CD bringing new vertices into daylight. For the braid operad the basic operation means looking at a braid strand with a microscope so that it resolves into a braid: a braid becomes a braid of braids. The latter two views are equivalent if the sub-CD contains the braid of braids.

The disks D2 of the planar disk operad have natural counterparts in both the super-canonical and the SKM sector.

  1. For the geometric representations of the symplectic algebra the image points vary in continuous regions of S2 (X2), since the symplectic area of the symplectic triangle is a highly flexible constraint. Posing the condition that any point at the edges of the symplectic triangle can be connected to any other edge excludes symplectic triangles with loopy sides, so that the constraint becomes non-trivial. In fact, since two different elements of the symplectic algebra cannot correspond to the same point for a given geometric representation, each element must correspond to a connected region of S2 (X2). This allows a huge number of representations related by the symplectic transformations of S2 in the super-canonical case and by the symplectic transformations of CP2 in the SKM case. In the case of the planar disk operad different representations are related by isotopies of the plane.

    This decomposition into disjoint regions naturally corresponds to the decomposition of the disk into disjoint regions in the case of the planar disk operad and the Feynman graph operad (allowing zero energy insertions). Perhaps one might say that the N-dimensional elementary symplectic algebra defines an N-coloring of S2 (X2), which is however not the same thing as the 2-coloring possible for the planar operad. The TGD based view about the Higgs mechanism leads to a decomposition of the partonic 2-surface X2 (its light-like orbit X3) into conformal patches. Since also these decompositions correspond to effective discretizations of X2 (X3), these two decompositions would naturally correspond to each other.

  2. In the SKM sector the disk D2 of the planar disk operad is replaced with the partonic 2-surface X2, and since measurement resolution is a local notion, the topology of X2 does not matter. The improvement of measurement resolution corresponds to the replacement of a braid strand with a braid, and the homomorphism is in the direction of improved spatial resolution.

  3. In the super-canonical case D2 is replaced with the sphere S2 of the light-cone boundary. The improvement of measurement resolution corresponds to introducing points near the original point, and the homomorphism maps the field to its average. For the operad of generalized Feynman diagrams the CD defined by future and past directed light-cones is the basic object. A given CD can indeed be mapped to the sphere S2 in a natural manner. The light-like boundaries of CDs are metrically spheres S2. The points of the light-cone boundaries can be projected to any sphere at the light-cone boundary. Since the symplectic area of the sphere corresponds to a solid angle, the choice of the representative for S2 does not matter. The sphere defined by the intersection of the future and past light-cones of CD however provides a natural identification of the points associated with the positive and negative energy parts of the state as points of the same sphere. The points of S2 appearing in the n-point function are replaced by point sets in small disks around the n points.

  4. In both the super-canonical and SKM sectors light-like geodesics along X3 mediate the analog of the map gluing a smaller disk to a hole of a disk in the case of the planar disk operad defining the decomposition of planar tangles. In the super-canonical sector the set of points at the sphere corresponding to a sub-CD is mapped by the SKM braid to the larger CD, and for a typical braid it corresponds to a larger angular span at the sub-CD. This corresponds to the gluing of D2 along its boundaries to a hole in D2 in the disk operad. A scaling transformation allowed by the conformal invariance is in question. This scaling can have a non-trivial effect if the conformal fields have anomalous scaling dimensions.

  5. Homomorphisms between the algebraic structures assignable to the basic structures of the operad (say tangles in the case of planar tangle operad) are an essential part of the power of the operad. These homomorphisms associated with super-canonical and SKM sector code for two views about improvement of measurement resolution and might lead to a highly unique construction of M-matrix elements.

The operad picture gives good hopes of understanding how M-matrices corresponding to a hierarchy of measurement resolutions can be constructed using only discrete data.

  1. In this process the n-point function defining the M-matrix element is replaced with a superposition of n-point functions for which the number of points is larger: n → ∑_{k=1,...,m} n_k. The numbers n_k vary in the superposition. The points are also obtained by a downwards scaling from those of the smaller S2. A similar scaling accompanies the composition of tangles in the case of the planar disk operad. The algebra homomorphism property gives constraints on the composition and should govern to a high degree how the improved measurement resolution affects the amplitude. In the lowest order approximation the M-matrix element is just an n-point function for the conformal fields of the positive and negative energy parts of the state at this sphere, and one would obtain an ordinary stringy amplitude in this approximation.

  2. Zero energy ontology means also that each addition in principle brings in a new zero energy insertion as the resolution is improved. Zero energy insertions describe actual physical processes in shorter scales in principle affecting the outcome of the experiment in longer time scales. Since zero energy states can interact with positive (negative) energy particles, zero energy insertions are not completely analogous to vacuum bubbles and cannot be neglected. In an idealized experiment these zero energy states can be assumed to be absent. The homomorphism property must hold true also in the presence of the zero energy insertions. Note that the Feynman graph operad reduces to planar disk operad in absence of zero energy insertions.

The article Category Theory and Quantum TGD gives a summary of the most recent ideas about applications of category theory in TGD framework. See also the new chapter Category Theory and TGD of "Towards S-matrix".

Sunday, October 26, 2008

Category Theory and Quantum TGD: Summary

A brief summary of the recent ideas about the application of category theory to quantum TGD is in order.
  1. The so called 2-plectic structure generalizing the ordinary symplectic structure by replacing symplectic 2-form with 3-form and Hamiltonians with Hamiltonian 1-forms has a natural place in TGD since the dynamics of the light-like 3-surfaces is characterized by Chern-Simons type action.

  2. The notion of planar operad was developed for the classification of hyper-finite factors of type II_1, and its mild generalization allows one to understand the combinatorics of the generalized Feynman diagrams obtained by gluing 3-D light-like surfaces representing the lines of Feynman diagrams along their 2-D ends representing the vertices.

  3. The fusion rules for the symplectic variant of conformal field theory, whose existence is strongly suggested by quantum TGD, allow a rather precise description using the basic notions of category theory, and one can identify a series of finite-dimensional nilpotent algebras as discretized versions of the field algebras defined by the fusion rules. These primitive fusion algebras can be used to construct more complex algebras by replacing any algebra element by a primitive fusion algebra. Trees with arbitrary numbers of branches at any node characterize the resulting collection of fusion algebras forming an operad. One can say that an exact solution of symplectic scalar field theory is obtained.

  4. Conformal fields and the symplectic scalar field can be combined to form symplecto-conformal fields. The combination of the symplectic operad and the Feynman graph operad leads to a construction of Feynman diagrams in terms of n-point functions of conformal field theory. M-matrix elements with a finite measurement resolution are expressed in terms of a hierarchy of symplecto-conformal n-point functions such that the improvement of measurement resolution corresponds to an algebra homomorphism mapping conformal fields in a given resolution to composite conformal fields in the improved resolution. This expresses the idea that composites behave as independent conformal fields.

See the new chapter Category Theory and TGD. See also the article Category Theory and Quantum TGD.

Tuesday, October 21, 2008

Category Theory and Symplectic QFT

Besides the counterpart of the ordinary Kac-Moody invariance, quantum TGD possesses so called super-canonical conformal invariance. This symmetry leads to the proposal that a symplectic variant of conformal field theory should exist. The n-point functions of this theory, defined in S2, should be expressible in terms of symplectic areas of triangles assignable to a set of n points and satisfy the duality rules of conformal field theories guaranteeing associativity. The crucial prediction is that symplectic n-point functions vanish whenever two arguments coincide. This provides a mechanism guaranteeing the finiteness of quantum TGD implied by very general arguments relying on the non-locality of the theory at the level of 3-D surfaces.

The classical picture suggests that the generators of the fusion algebra formed by fields at different points of S2 have this point as a continuous index. Finite quantum measurement resolution and category theoretic thinking in turn suggest that only the points of S2 corresponding to the strands of number theoretic braids are involved. It turns out that the category theoretic option works and leads to an explicit hierarchy of fusion algebras forming a good candidate for a representation of the so called little disk operad, whereas the first option has difficulties.

1. Fusion rules

Symplectic fusion rules are non-local and express the product of fields at two points s_k and s_l of S2 as an integral over fields at a point s_r, where the integral can be taken over the entire S2 or possibly also over a 1-D curve which is symplectic invariant in some sense. Also a discretized version of the fusion rules makes sense and is expected to serve as a correlate for finite measurement resolution.

By using the fusion rules one can reduce n-point functions to convolutions of 3-point functions involving a sequence of triangles such that two subsequent triangles have one vertex in common. For instance, the 4-point function reduces to an expression in which one integrates over the position of the common vertex of two triangles whose other vertices are fixed. For n-point functions one has n-3 freely varying intermediate points in the representation in terms of 3-point functions.

The application of the fusion rules assigns to a line segment connecting the two points s_k and s_l a triangle spanned by s_k, s_l and s_m. This triangle should be symplectic invariant in some sense, and its symplectic area A_klm would define the basic variable in terms of which the fusion rule could be expressed as C_klm = f(A_klm), where f is fixed by some constraints. Note that A_klm also has interpretations as a solid angle and a magnetic flux. A schematic summary of this structure is given right below.
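Schematically, and in condensed notation of my own (the post leaves the precise form of f and of the integration measure open), the discretized fusion rule and the resulting reduction of the 4-point function read

    $$\Phi(s_k)\,\Phi(s_l) \;=\; \sum_m C_{klm}\,\Phi(s_m)\,,\qquad C_{klm} = f(A_{klm})\,,$$

    $$G_4(s_1,s_2,s_3,s_4) \;\sim\; \sum_r f(A_{12r})\,f(A_{r34})\,,$$

where in the continuum version the sum over the intermediate point s_r is replaced by an integral over S2 (or over a symplectically invariant curve).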

2. What conditions could fix the symplectic triangles?

The basic question is how to identify the symplectic triangles. The basic criterion is certainly the symplectic invariance: if one has found N-D symplectic algebra, symplectic transformations of S2 must provide a new one. This is guaranteed if the areas of the symplectic triangles remain invariant under symplectic transformations. The questions are how to realize this condition and whether it might be replaced with a weaker one. There are two approaches to the problem.

2.1 Physics inspired approach

In the first approach, inspired by classical physics, symplectic invariance for the edges is interpreted in the sense that they correspond to the orbits of a charged particle in the magnetic field defined by the Kähler form. A symplectic transformation induces only a U(1) gauge transformation and leaves the orbit of the charged particle invariant if the vertices are not affected, since symplectic transformations are not allowed to act on the orbit directly in this approach. The general functional form of the structure constants Cklm as a function f(Aklm) of the symplectic area should guarantee the fusion rules.

If the action of the symplectic transformations does not affect the areas of the symplectic triangles, the construction is invariant under general symplectic transformations. In the case of an uncharged particle this is not the case since the edges are pieces of geodesics: in this case, however, the fusion algebra trivializes so that one cannot conclude anything. In the case of a charged particle one might hope that the area remains invariant under general symplectic transformations whose action is induced from the action on the vertices. The equations of motion for a charged particle involve the Kähler metric determined by the symplectic structure and one might hope that this is enough to achieve this miracle. If this is not the case - as it might well be - one might hope that although the areas of the triangles are not preserved, the triangles are mapped to each other in such a manner that the fusion algebra rules remain intact with a proper choice of the function f(Aklm). One could also consider the possibility that the function f(Aklm) is dictated by the condition that it remains invariant under symplectic transformations.

2.2 Category theoretical approach

The second realization is guided by the basic idea of category theoretic thinking: the properties of an object are determined by its relationships to other objects. Rather than postulating that the symplectic triangle is something which depends solely on the three points involved via some geometric notion like that of a geodesic line or the orbit of a charged particle in a magnetic field, one assumes that the symplectic triangle reflects the properties of the fusion algebra, that is the relations of the symplectic triangle to other symplectic triangles. Thus one must assign to each triplet (s1,s2,s3) of points of S2 a triangle just from the requirement that braided associativity holds true for the fusion algebra.

Symplectic triangles would not be unique in this approach. All symplectic transformations leaving the N points fixed, and thus generated by Hamiltonians vanishing at these points, would give new gauge equivalent realizations of the fusion algebra and deform the edges of the symplectic triangles without affecting their areas. One could even say that a symplectic triangulation defines a new kind of geometric structure in S2.

The elegant feature of this approach is that one can in principle construct the fusion algebra without any reference to its geometric realization, just from the braided associativity and nilpotency conditions, and only after that search for geometric realizations. The fusion algebra also has a hierarchy of discrete variants in which the integral over intermediate points in the fusion is replaced by a sum over a fixed discrete set of points, and this variant is what finite measurement resolution implies. In this case it is relatively easy to see whether a geometric realization of a given abstract fusion algebra is possible.

The two approaches do not exclude each other if the motion of a charged particle in S2 selects one representative amongst all possible candidates for the edge of the symplectic triangle. A kind of gauge choice would be in question. This aspect encourages one to consider seriously also the first option. It however turns out that the physics based approach does not look plausible.

3. Associativity conditions and braiding

The generalized fusion rules follow from the associativity condition for n-point functions modulo a phase factor if one requires that each factor appearing in the reduction has an interpretation as an n-point function. Without this condition associativity would be trivially satisfied by using a product of various bracketing structures for the n fields appearing in the n-point function. In conformal field theories the phase factor defining the associator is expressible in terms of the phase factors associated with permutations represented as braidings, and the same is expected to be true also now.

  1. Already in the case of the 4-point function there are three different choices corresponding to the three possible ways to connect the four fixed points sk pairwise through the varying point sr by lines. The options are (1-2, 3-4), (1-3, 2-4), and (1-4, 2-3) and graphically they correspond to the s-, t-, and u-channels in string diagrams satisfying also this kind of fusion rules. The basic condition would be that the same amplitude results irrespective of the choice made. The duality conditions guarantee associativity in the formation of the n-point amplitudes without any further assumptions. The reason is that writing out explicitly the expression for a particular bracketing of an n-point function always leads to some bracketing of one particular 4-point function, and if the duality conditions hold true, associativity holds true in general. To be precise, in quantum theory associativity must hold true only in the projective sense, that is only modulo a phase factor.

  2. This framework encourages a category theoretic approach. Besides different bracketings there are different permutations of the vertices of the triangle. These permutations can induce a phase factor to the amplitude so that braid group representations are enough. If one has a representation for the basic braiding operation as a quantum phase q = exp(i2π/N), the phase factors relating different bracketings reduce to a product of these phase factors, since (AB)C is obtained from A(BC) by a cyclic permutation involving two permutations represented as braidings. Yang-Baxter equations express the reduction of the associator to braidings. In the general category theoretical setting associators and braidings correspond to natural isomorphisms leaving the category theoretical structure invariant.

  3. By combining the duality rules with the condition that the 4-point amplitude vanishes when any two points coincide, one obtains from sk=sl and sm=sn the condition stating that the integral of U2(Aklm)f2(xkmr) over the third point sr vanishes. This requires that the phase factor U is non-trivial, so that Q must be non-vanishing if one accepts the identification of the phase factor as a Bohm-Aharonov phase.

  4. The braiding operation naturally gives rise to a quantum phase. It maps Aklm to Aklm - 4π since oriented triangles are in question: braiding changes the orientation of the original triangle and maps the triangle to its complement. If f is proportional to the exponential exp(iQAklm), the braiding operation induces a complex phase factor q = exp(-i4πQ) (see the short numerical illustration after this list).

  5. For half-integer values of Q the algebra is commutative. For Q = M/N, where M and N have no common factors, only braided commutativity holds true for N ≥ 3, just as for the quantum groups characterizing also Jones inclusions of HFFs. For N=4 anti-commutativity and associativity hold true. Charge fractionization would correspond to non-trivial braiding and presumably to non-standard values of Planck constant and coverings of M4 or CP2, depending on whether S2 corresponds to a sphere of the light-cone boundary or to a homologically trivial geodesic sphere of CP2.
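As a short numerical illustration of the last two items, the following sketch assumes the exponential form f proportional to exp(iQAklm) and the rule Aklm → Aklm - 4π under braiding, and evaluates the resulting braiding phase q = exp(-i4πQ) for a few rational values of Q. Half-integer Q gives q = 1 and hence a commutative algebra, while Q = 1/4 or Q = 3/4 gives q = -1 and anti-commutativity.

    import numpy as np
    from math import gcd

    def braiding_phase(Q):
        # Phase picked up when two arguments are permuted, assuming f ~ exp(i*Q*A)
        # and that braiding maps the symplectic area A to A - 4*pi.
        return np.exp(-4j * np.pi * Q)

    for M, N in [(1, 2), (1, 3), (1, 4), (3, 4), (2, 5)]:
        assert gcd(M, N) == 1
        print(f"Q = {M}/{N}: q = {braiding_phase(M / N):.3f}")
    # Q = 1/2 gives q = 1, Q = 1/4 and Q = 3/4 give q = -1, the other values give genuine braiding.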

4. Finite-dimensional version of the fusion algebra

Algebraic discretization due to a finite measurement resolution is an essential part of quantum TGD. In this kind of situation the symplectic fields would be defined at a discrete set of N points of S2: natural candidates are subsets of points of the p-adic variants of S2. The rational variant of S2 has as its points those points for which the trigonometric functions of θ and φ have rational values, and there exists an entire hierarchy of algebraic extensions. The interpretation for the resulting breaking of the rotational symmetry would be as a geometric correlate for the choice of quantization axes in quantum measurement, and the book like structure of the imbedding space would be a direct correlate for this symmetry breaking. This approach gives strong support for the category theory inspired philosophy in which the symplectic triangles are dictated by the fusion rules.

4.1 General observations about the finite-dimensional fusion algebra

  1. In this kind of situation one has an algebraic structure with a finite number of field values, with the integration over intermediate points in the fusion rules replaced by a sum. The most natural option is that the sum is over all points involved. Associativity conditions reduce in this case to conditions for a finite set of structure constants vanishing when two indices are identical. The number M(N) of non-vanishing structure constants is obtained from the recursion formula M(N) = (N-1)M(N-1)+ (N-2)M(N-2)+...+ 3M(3) = NM(N-1), M(3)=1, giving M(4)=4, M(5)=20, M(6)=120,... (see the small computational check after this list). With a proper choice of the set of points associativity might be achieved. The structure constants are necessarily complex so that also the complex conjugate of the algebra makes sense.

  2. These algebras resemble nilpotent algebras (xn=0 for some n) and Grassmann algebras (x2=0 always) in the sense that also the products of the generating elements satisfy x2=0, as one can find by using the duality conditions on the square of a product x=yz of two generating elements. Also the products of more than N generating elements necessarily vanish by braided commutativity so that nilpotency holds true. The interpretation in terms of measurement resolution is that partonic states and vertices can involve at most N fermions in this measurement resolution. The elements anti-commute for q=-1 and commute for q=1, and the possibility to express the product of two generating elements as a sum of generating elements distinguishes these algebras from Grassmann algebras. For q=-1 these algebras resemble Lie algebras, with the difference that associativity holds true in this particular case.

  3. I have not been able to find out whether this kind of hierarchy of algebras corresponds to some well-known algebraic structure with commutativity and associativity possibly replaced with their braided counterparts. Certainly these algebras would be a category theoretical generalization of ordinary algebras for which commutativity and associativity hold true in the strict sense.

  4. One could forget the representation of the structure constants in terms of triangles and think of these algebras as abstract algebras. The defining equations are xi2=0 for the generators plus braided commutativity and associativity. Probably there exist solutions to these conditions. One can also hope that one can construct braided algebras from commutative and associative algebras allowing matrix representations. Note that the solutions of the conditions allow scalings of the form Cklm → λkλlλm Cklm as symmetries.
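As a small computational check of the counting in the first item, the closed form M(N) = N·M(N-1) with M(3) = 1 reproduces the quoted values:

    def nonvanishing_structure_constants(N):
        # Number M(N) of non-vanishing structure constants of the N-dimensional
        # fusion algebra, using the closed recursion M(N) = N*M(N-1), M(3) = 1.
        M = 1
        for n in range(4, N + 1):
            M *= n
        return M

    print([nonvanishing_structure_constants(N) for N in (3, 4, 5, 6)])   # [1, 4, 20, 120]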

4.2 Formulation and explicit solution of duality conditions in terms of inner product

Duality conditions can be formulated in terms of an inner product in the function space associated with the N points, and this allows one to find explicit solutions to the conditions.

  1. The idea is to interpret the structure constants Cklm as wave functions Ckl in a discrete space consisting of N points with the standard inner product


    ⟨Ckl, Cmn⟩ = ∑r Cklr C*mnr .

  2. The associativity conditions for a trivial braiding can be written in terms of the inner product as


    ⟨Ckl, C*mn⟩ = ⟨Ckm, C*ln⟩ = ⟨Ckn, C*ml⟩ .

  3. Irrespective of whether the braiding is trivial or not, one obtains for k=m the orthogonality conditions


    ⟨Ckl, C*kn⟩ = 0 .

    For each k one has a basis of N-1 wave functions labeled by l ≠ k, and the conditions state that the elements of the basis and the conjugate basis are orthogonal so that the conjugate basis is the dual of the basis. The condition that complex conjugation maps the basis to a dual basis is very special and is expected to determine the structure constants highly uniquely.

  4. One can also find explicit solutions to the conditions. The most obvious trial is based on the orthogonality of the function basis of the circle providing a representation for ZN-2 and is the following:


    Cklm = Eklm × exp(i(φk + φl + φm)) ,

    φm = n(m)2π/(N-2) .

    Here Eklm is non-vanishing only if the indices have different values. The ansatz reduces the conditions to the form

    ∑r Eklr Emnr exp(i2φr) = ∑r Ekmr Elnr exp(i2φr) = ∑r Eknr Emlr exp(i2φr) .

    In the case of braiding one can allow overall phase factors. The orthogonality conditions reduce to ∑r Eklr Eknr exp(i2φr) = 0. If the integers n(m), m ≠ k, l, span the range (0,N-3), the orthogonality conditions are satisfied if one has Eklr=1 when the indices are different. This also guarantees the duality conditions since the inner products involving k,l,m,n reduce to the same expression ∑r ≠ k,l,m,n exp(i2φr) (a numerical check of this is sketched after this list).

  5. For a more general choice of phases the coefficients Eklm must have values differing from unity and it is not clear whether the duality conditions can be satisfied in this case.
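The following numerical sketch (an illustration, not part of the derivation) builds the roots-of-unity ansatz above for one hypothetical assignment of the integers n(m) and evaluates the worst violation of the duality conditions over quadruples of distinct indices. As noted above, for distinct k, l, m, n all three inner products reduce to the same sum over r outside {k,l,m,n}, so the violation should be of the order of machine precision; the orthogonality conditions can be tested in the same way.

    import numpy as np
    from itertools import permutations

    def ansatz(N, n):
        # C_klm = E_klm * exp(i(phi_k + phi_l + phi_m)), phi_m = n(m)*2*pi/(N-2),
        # with E_klm = 1 when the three indices are distinct and 0 otherwise.
        phi = np.array([n[m] * 2 * np.pi / (N - 2) for m in range(N)])
        C = np.zeros((N, N, N), dtype=complex)
        for k in range(N):
            for l in range(N):
                for m in range(N):
                    if len({k, l, m}) == 3:
                        C[k, l, m] = np.exp(1j * (phi[k] + phi[l] + phi[m]))
        return C

    def duality_violation(C):
        # Largest violation of <C_kl, C*_mn> = <C_km, C*_ln> = <C_kn, C*_ml>,
        # where <C_kl, C*_mn> = sum_r C_klr C_mnr, over distinct k, l, m, n.
        N = C.shape[0]
        worst = 0.0
        for k, l, m, n in permutations(range(N), 4):
            a = np.sum(C[k, l, :] * C[m, n, :])
            b = np.sum(C[k, m, :] * C[l, n, :])
            c = np.sum(C[k, n, :] * C[m, l, :])
            worst = max(worst, abs(a - b), abs(a - c))
        return worst

    C = ansatz(6, n=[0, 1, 2, 3, 0, 1])   # hypothetical choice of n(m) for N = 6
    print(duality_violation(C))           # of the order of machine precision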

4.3 Do fusion algebras form a little disk operad?

The improvement of measurement resolution means that one adds further points to an existing set of points defining a discrete fusion algebra so that a small disk surrounding a point is replaced with a little disk containing several points. Hence the hierarchy of fusion algebras might be regarded as a realization of a little disk operad and there would be a hierarchy of homomorphisms of fusion algebras induced by the improvements of measurement resolution. The inclusion homomorphism should map the algebra elements of the added points to the algebra element at the center of the little disk.

A more precise prescription goes as follows.

  1. The replacement of a point with a collection of points in the little disk around it replaces the original algebra element fk0 by a number of new algebra elements fK besides the already existing elements fk and brings in new structure constants CKLM, CKLk for k ≠ k0, and CKlm.

  2. The notion of improved measurement resolution allows one to conclude that


    CKLk = 0 ,   k ≠ k0 ,

    CKlm = Ck0lm .

  3. In the homomorphism of the new algebra to the original one the new algebra elements and their products should be mapped as follows:


    fK → fk0 ,

    fKfL → fk02 = 0 ,

    fKfl → fk0fl .

    Expressing the products in terms of structure constants gives the conditions



    ∑M CKLM = 0 ,   ∑r CKlr = ∑r Ck0lr = 0 .

    The general ansatz for the structure constants based on roots of unity guarantees that the conditions hold true.

  4. Note that the resulting algebra is more general than that given by the basic ansatz since the improvement of the measurement resolution at a given point can correspond to a different value of N than that of the original algebra given by the basic ansatz. Therefore the original ansatz gives only the basic building bricks of more general fusion algebras. By repeated local improvements of the measurement resolution one obtains an infinite hierarchy of algebras labeled by trees in which each improvement of measurement resolution means the splitting of a branch into an arbitrary number N of branches. The number of improvements of the measurement resolution, defining the height of the tree, is one invariant of these algebras. The fusion algebra operad has a fractal structure since each point can be replaced by any fusion algebra (a toy sketch of this tree structure follows this list).
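A toy bookkeeping sketch (hypothetical Python, with no algebraic content) of the labeling by trees and of the height invariant mentioned in the last item:

    class ResolutionTree:
        # Each node is a point of the discretization; improving the measurement
        # resolution at a point replaces it by a subtree with an arbitrary number
        # of child points.
        def __init__(self):
            self.children = []

        def refine(self, n_points):
            # Improve the resolution at this point: split it into n_points new points.
            self.children = [ResolutionTree() for _ in range(n_points)]
            return self.children

        def height(self):
            # Number of nested improvements of resolution below this point.
            return 0 if not self.children else 1 + max(c.height() for c in self.children)

        def leaves(self):
            # Current number of points of the discretization.
            return 1 if not self.children else sum(c.leaves() for c in self.children)

    root = ResolutionTree()
    a, b, c = root.refine(3)             # first resolution: 3 points
    a.refine(4)                          # refine one of them into 4 new points
    print(root.leaves(), root.height())  # 6 points, height 2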

4.4 How to construct geometric representations of the discrete fusion algebra?

Assuming that solutions to the fusion conditions are found, one could try to find out whether they allow geometric representations. Here the category theoretical philosophy shows its power.

  1. Geometric representations for Cklm would result as functions f(Aklm) of the symplectic area for the symplectic triangles assignable to a set of N points of S2.

  2. If the symplectic triangles can be chosen freely apart from the area constraint, as the category theoretic philosophy implies, it should be relatively easy to check whether the fusion conditions can be satisfied. The phases of Cklm dictate the areas Aklm rather uniquely if one uses the Bohm-Aharonov ansatz for a fixed value of Q. The selection of the points sk would be rather free for phases near unity since the area of the symplectic triangle associated with a given triplet of points can be made arbitrarily small. Only for phases far from unity the points sk cannot be too close to each other unless Q is very large. The freedom to choose the points rather freely conforms with the general view about finite measurement resolution as the origin of discretization.

  3. The remaining conditions are on the moduli |f(Aklm)|. In the discrete situation it is rather easy to satisfy the conditions just by fixing the values of f for the particular triangles involved: |f(Aklm)| = |Cklm|. For the exact solution to the fusion conditions |f(Aklm)|=1 holds true.

  4. Constraints on the functional form of |f(Aklm)| for a fixed value of Q can be deduced from the correlation between the modulus and phase of Cklm without any reference to geometric representations. For the exact solution of fusion conditions there is no correlation.

  5. If the phase of Cklm has Aklm as its argument, the decomposition of the phase factor into a sum of phase factors means that Aklm is a sum of contributions labeled by the vertices. Also the symplectic area, defined as a magnetic flux over the triangle, is expressible as a sum of the quantities ∫ Aμdxμ associated with the edges of the triangle. These fluxes should correspond to the fluxes assigned to the vertices deduced from the phase factors of Ψ(sk). The fact that the vertices are ordered suggests that the phase of Ψ(sk) fixes the value of ∫ Aμdxμ for the edge of the triangle starting from sk and ending at the next vertex in the ordering. One must find edges giving a closed triangle and this should be possible. The option for which the edges correspond to geodesics or to solutions of the equations of motion for a charged particle in a magnetic field is not flexible enough to achieve this purpose.

  6. The quantization of the phase angles as multiples of 2π/(N-2) in the case of the N-dimensional fusion algebra has a beautiful geometric correlate as a quantization of symplecto-magnetic fluxes identifiable as symplectic areas of triangles defining solid angles as multiples of 2π/(N-2). The generalization of the fusion algebra to the p-adic case exists if one allows algebraic extensions containing the phase factors involved. This requires the allowance of phase factors exp(i2π/p), p a prime dividing N-2. Only the exponents exp(i∫ Aμdxμ) = exp(in2π/(N-2)) exist p-adically. The p-adic counterpart of the curve defining the edge of a triangle exists if the curve can be defined purely algebraically (say as a solution of polynomial equations with rational coefficients) so that the p-adic variant of the curve satisfies the same equations.

4.5 Does a generalization to the continuous case exist?

One can consider an approximate generalization of the explicit construction for the discrete version of the fusion algebra by the effective replacement of the points sk with small disks which are not allowed to intersect. This would mean that the counterpart E(sk,sl,sm) vanishes whenever the distance between two arguments is below a small cutoff radius d. Puncturing corresponds physically to the cutoff implied by the finite measurement resolution.

  1. The ansatz for Cklm is obtained by a direct generalization of the finite-dimensional ansatz:


    Cklm = κ(sk,sl,sm) Ψ(sk)Ψ(sl)Ψ(sm) ,

    where κ(sk,sl,sm) vanishes whenever the distance between any two arguments is below the cutoff distance and equals 1 otherwise.

  2. Orthogonality conditions read as


    Ψ(sk)Ψ(sl) ∫ κ(sk,sl,sr) κ(sk,sn,sr) Ψ2(sr) dμ(sr) = Ψ(sk)Ψ(sl) ∫S2(sk,sl,sn) Ψ2(sr) dμ(sr) = 0 .

    The resulting condition reads as


    ∫S2(sk,sl,sn) Ψ2(sr) dμ(sr) = 0 .

    This condition must hold true for any choice of the points involved and this might lead to difficulties.

  3. The general duality conditions are formally satisfied since the expression for all fusion products reduces to


    Ψ(sk)Ψ(sl)Ψ(sm)Ψ(sn) X ,

    X = ∫S2 κ(sk,sl,sm,sn) Ψ2(sr) dμ(sr) = ∫S2(sk,sl,sm,sn) Ψ2(sr) dμ(sr) = - ∫D2(si) Ψ2(sr) dμ(sr) ,   i = k,l,m,n .

    These conditions state that the integral of Ψ2 over any disk of fixed radius d is the same apart from a phase factor: the same result follows also from the orthogonality condition. This condition might be difficult to satisfy exactly and the notion of finite measurement resolution might be needed. For instance, it might be necessary to restrict the consideration to a discrete lattice of points, which would lead back to a discretized version of the algebra.

The article Category Theory and Quantum TGD gives a summary of the most recent ideas about applications of category theory in the TGD framework. See also the new chapter Category Theory and TGD of "Towards S-matrix".