Wednesday, November 28, 2012

LHC might have produced new matter: are M89 hadrons in question?

Large Hadron Collider May Have Produced New Matter is the title of a popular article explaining briefly the surprising findings of LHC made for the first time in September 2010. A fascinating possibility
is that this new matter consists of hadrons of a brand new hadron physics predicted by TGD! I label this new hadron physics with the attribute M89 to distinguish it from ordinary hadron physics assigned to the Mersenne prime M107 = 2^107-1.

Some background

A quark gluon plasma is expected to be generated in high energy heavy ion collisions if QCD is the theory of strong interactions. This would mean that quarks and gluons are de-confined and form a gas of free partons. Something different was however observed already at RHIC: the surprise was the presence of highly correlated pairs of charged particles. The members of the pairs tended to move in parallel: either in the same or in opposite directions.

This forced one to give up the description in terms of a quark gluon plasma and to introduce what was called the color glass condensate. The proposal was that a color glass condensate is created: a liquid with strong correlations between the velocities of nearby particles, rather than a gas like state in which these correlations are absent. One can imagine that a kind of thin wall of gluons is generated as the highly Lorentz contracted nuclei collide. The liquid like character would explain why the pairs tend to move in a parallel manner. Why they can also move in an antiparallel manner is not obvious to me, although I have considered a TGD based view about the color glass condensate inspired by the fact that the field equations for preferred extremals are hydrodynamical: it might be possible to model this phase of the collision using a scaled version of the critical cosmology, which is unique apart from the scaling of the parameter characterizing the duration of the critical period. Later LHC found a similar behavior in heavy ion collisions. The theoretical understanding of the phenomenon is however far from complete.

The real surprise was the observation of similar events in proton proton collisions at LHC, for the first time already in 2010. Lubos Motl wrote a nice posting about this observation, and I too wrote a short comment about the finding. Now the findings have been published: the preprint can be found in the arXiv. Below is the abstract of the preprint.

Results on two-particle angular correlations for charged particles emitted in pPb collisions at a nucleon-nucleon center-of-mass energy of 5.02 TeV are presented. The analysis uses two million collisions collected with the CMS detector at the LHC. The correlations are studied over a broad range of pseudorapidity η and full azimuth φ, as a function of charged particle multiplicity and particle transverse momentum, pT. In high-multiplicity events, a long-range (2 < |Δη| < 4, near-side Δφ ≈ 0) structure emerges in the two-particle Δη-Δφ correlation functions. This is the first observation of such correlations in proton-nucleus collisions, resembling the ridge-like correlations seen in high-multiplicity pp collisions at √s = 7 TeV and in AA collisions over a broad range of center-of-mass energies. The correlation strength exhibits a pronounced maximum in the range of pT = 1-1.5 GeV and an approximately linear increase with charged particle multiplicity for high-multiplicity events. These observations are qualitatively similar to those in pp collisions when selecting the same observed particle multiplicity, while the overall strength of the correlations is significantly larger in pPb collisions.


Could M89 hadrons give rise to the events?


A second highly attractive explanation discussed by Lubos is in terms of the production of string like objects. In this case the momenta of the decay products tend to be parallel to the strings, since the constituents giving rise to the ultimate decay products are confined inside a 1-dimensional string like object. In this case it is easy to understand the presence of both parallel and antiparallel pairs. If the string is very heavy, a large number of particles would move in a collinear manner in opposite directions. The color glass condensate would explain this in terms of hydrodynamical flow.

In the TGD framework these string like objects would correspond to color magnetic flux tubes. These flux tubes, carrying a quark and an antiquark at their ends, should however make themselves manifest only in low energy hadron physics, where they serve as a model for hadrons, not at ultrahigh collision energies for protons. Could this mean that these flux tubes correspond to hadrons of M89 hadron physics? M89 hadron physics would be a low energy hadron physics in this sense, since the scaled counterpart of the QCD Λ around 200 MeV is about 100 GeV and the scaled counterpart of the proton mass is around .5 TeV (the scaling factor is 512, the ratio of the square roots of M107 = 2^107-1 and M89 = 2^89-1). What would happen in the collision would be the formation of a p-adically hot spot at p-adic temperature T=1 for M89.

For instance, the resulting M89 pion would have a mass around 67.5 GeV if a naive scaling of the ordinary pion mass holds true. The p-adic length scale hypothesis allows a power of 2^(1/2) as a multiplicative factor, and one would obtain something like 135 GeV for the factor 2: the Fermi telescope has provided evidence for this kind of particle, although it might be that a systematic error is involved (see the posting of Resonaances). The signal has also been observed by the Fermi telescope in the Earth limb data, where there should be no signal if dark matter in the galactic center is the source of the events. I have proposed that M89 hadrons - in particular M89 pions - are also produced in the collisions of ultrahigh energy cosmic rays with the nuclei of the atmosphere: maybe this could also explain the Earth limb data. Recall that my first, erroneous, interpretation of the 125 GeV Higgs like state was as the M89 pion, and only later did the interpretation of the Fermi events in terms of the M89 pion emerge.
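
The scalings quoted above are simple enough to spell out. The sketch below (with illustrative input values for the ordinary QCD scale, proton mass, and neutral pion mass) just applies the factor 2^((107-89)/2) = 512; naive scaling of 135 MeV gives the 69 GeV value quoted later in this blog, while the 67.5 GeV figure corresponds to choosing exactly half of 135 GeV.

    # Illustrative p-adic scaling arithmetic for M107 -> M89: the factor is 2^((107-89)/2) = 512.
    # The input masses are ordinary hadron physics values, used only for this rough estimate.
    scale = 2 ** ((107 - 89) / 2)       # 512.0

    Lambda_QCD = 0.2                    # GeV
    m_proton   = 0.938                  # GeV
    m_pi0      = 0.135                  # GeV

    print(Lambda_QCD * scale)           # ~102 GeV: scaled counterpart of the QCD Lambda
    print(m_proton * scale)             # ~480 GeV: scaled counterpart of the proton mass (~.5 TeV)
    print(m_pi0 * scale)                # ~69 GeV: naively scaled M89 pion mass
    print(2 * m_pi0 * scale)            # ~138 GeV: the "factor 2" octave, near the 135 GeV Fermi hint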

What about an explanation in terms of an M89 color glass condensate? It does not make sense. First of all, both the color glass condensate and the quark gluon plasma would be higher energy phenomena in a QCD like theory, whereas here low energy M89 hadron physics would be in question. Secondly, for the color glass condensate of ordinary hadron physics the temperature would be about 1 GeV, in good approximation the mass of the proton. For an M89 color glass condensate the temperature would be higher by a factor 512, that is about .5 TeV: this cannot make sense since the model based on a temperature of 1 GeV works satisfactorily.

How does this picture relate to earlier ideas?

I have made three earlier proposals relating to the unexpected correlations just discussed. The earlier picture is consistent with the recent one.

  1. I have already earlier proposed a realization of the color glass condensate in terms of color magnetic flux tubes confining partons to move along string like objects. This indeed explains why charged particle pairs tend to move in a parallel or antiparallel manner. Amusingly, I did not realize that ordinary hadronic strings (a low energy phenomenon) cannot be in question, and therefore failed to make the obvious conclusion that M89 hadrons could be in question. Direct signals of M89 hadron physics have been in front of our eyes since the findings of RHIC around 2005, but our prejudices - in particular, the stubborn belief that QCD is a final theory of strong interactions - have prevented us from seeing them! Instead we try desperately to see superstrings and standard SUSY!

  2. One basic question is how the hadrons and quarks of M89 hadron physics decay to ordinary hadrons. I proposed the basic idea about fifteen years ago - soon after the discovery of p-adic physics. The idea was that the hadrons of M89 physics are p-adic hot spots created in the collisions of hadrons. Also quarks get heated so that the corresponding p-adic prime decreases and the mass of the quark increases by some power of 2^(1/2), meaning a reduction in size by the same power. The cooling of these hot spots is a sequence of phase transitions increasing the p-adic prime of the appropriate (hadronic or partonic) space-time sheet so that the eventual outcome consists of ordinary hadrons. The p-adic length scale hypothesis suggests that only primes near powers of 2 (or a subset of them) appear in the sequence of phase transitions. For instance, the M89 hadronic space-time sheet would end up as an ordinary hadronic space-time sheet via a sequence of at most 18 steps, since M107/M89 ≈ 2^18. If only powers of 2 are allowed as scalings (the analog of period doubling), there are at most 9 steps.

    Each step scales the size of the space-time sheet in question so that the process is highly analogous to a cosmic expansion leading from a very short and thin M89 flux tube to an M107 flux tube with scaled up dimensions. Since a critical phenomenon is in question and the TGD Universe is fractal, a rough macroscopic description would be in terms of a scaled variant of critical cosmology, which is unique apart from its finite duration and describes accelerated cosmic expansion. The almost uniqueness of the critical cosmology follows from the imbeddability to M4× CP2. Cosmic expansion would take place only during these periods. The cosmic expansion associated with the cooling of both hadronic and partonic space-time sheets would take place via jerks followed by stationary periods with no expansion. The size scale of the hadronic or partonic space-time sheet would increase by a power of 2^(1/2) during a single jerk.

    By the fractality of the TGD Universe this model of cosmic expansion based on p-adic phase transitions should apply in all scales. In particular, it should apply to stars and planetary systems. The fact that various astrophysical objects do not seem to participate in cosmic expansion supports the view that the expansion takes place in jerks identifiable as phase transitions increasing the p-adic prime of a particular space-time sheet, so that in the average sense a continuous smooth expansion is obtained. For instance, I have proposed a variant of the expanding Earth model explaining the strange observation that the continents would nicely cover the entire surface of Earth if the radius of Earth were one half of its present radius. The assumed relatively rapid phase transition doubling the radius of Earth explains several strange findings in the thermal, geological, and biological history of Earth.

    This approach also explains how the magnetic energy of primordial cosmic strings, identifiable as dark energy, has gradually transformed to dark or ordinary matter. In this model the vacuum energy density of the inflaton field is replaced with that of the Kähler magnetic field assignable to the flux tubes originating from primordial cosmic strings with a 2-D M4 projection. The model also explains the magnetic fields filling the Universe in all scales: in standard Big Bang cosmology their origin remains a mystery.

  3. What about the energetics of the process? If the jerk induces an overall scaling, the Kähler magnetic energy of the magnetic flux tubes decreases since - by the conservation of magnetic flux giving B ∝ 1/S - the energy is proportional to L/S, scaling like p^(-1/2) (L and S denote the length and the transversal area of the flux tube). Therefore magnetic energy is liberated in the process, and by the p-adic length scale hypothesis the total rest energy liberated is ΔE = Ei(1 - 2^((ki-kf)/2)), where i and f refer to the initial and final values of the p-adic prime p ≈ 2^k. A similar consideration applies to partons. The natural assumption is that the Kähler magnetic (equivalently color magnetic) energy is liberated as partons. These partons would eventually transform to ordinary partons and materialize to ordinary hadrons. The scaling of the flux tube, preserving its string like shape, would force the observed correlations. A small numerical sketch follows below.
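
A minimal numerical sketch of the statements above, assuming the parametrization p ≈ 2^k with ki = 89 and kf = 107 used in item 3:

    # Cooling of an M89 hot spot to ordinary (M107) hadron physics, with p ~ 2^k.
    k_i, k_f = 89, 107

    steps_sqrt2    = k_f - k_i           # 18 steps if each step scales the size by 2^(1/2)
    steps_doubling = (k_f - k_i) // 2    # 9 steps if only powers of 2 (period doubling) are allowed

    # Liberated fraction of the initial Kahler magnetic rest energy,
    # Delta E / E_i = 1 - 2^((k_i - k_f)/2), following from E proportional to 2^(-k/2).
    liberated_fraction = 1 - 2 ** ((k_i - k_f) / 2)

    print(steps_sqrt2, steps_doubling)   # 18 9
    print(liberated_fraction)            # ~0.998: essentially all of the magnetic energy is liberated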

To conclude, the brave conjecture would be that the production of M89 hadrons could explain the observations. There would be neither quark gluon plasma nor color glass condensate (a highly questionable notion in high energy QCD). Instead, a new hadron physics would emerge via the confinement of quarks (or their scaled up variants) in a shorter length scale as collision energies become high enough, and already RHIC would have observed M89 hadron physics!

For background see the chapter New Particle Physics Predicted by TGD: I of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

Thursday, November 15, 2012

About the basic assumptions behind p-adic mass calculations


The motivation for this piece of text was the basic horror experience of a theoretician, waking him up in the early morning hours. Is there something wrong with the basic assumptions of some particular piece of theory? This time it was p-adic thermodynamics. The theoretician tries to figure this out in a drowsy state between wake-up and sleep, fails repeatedly, and blames the mighty ones of the Universe for his miserable fate as an eternal doubter. Eventually merciful sleep arrives, and in the morning the theoretician recalls the problem and feels that nothing is wrong. But a theoretician knows that it is better to check everything once again.

So this is what I am doing in the sequel: listing and challenging the basic assumptions and philosophy behind the p-adic mass calculations. As always in this kind of situation, I prefer to think it all over again rather than digging up what I have written earlier: the reader can check whether the present me agrees with the earlier me. This list is not the only one that I have made during these years, and other, possibly different, lists can be found in the chapters of various books. Although the results of the calculations are unique and involve only very general assumptions, guessing the detailed physical picture behind them is difficult.

Why p-adic thermodynamics?

p-Adic thermodynamics is a fundamental assumption behind the p-adic mass calculations: the p-adic mass squared is identified as a thermal average over the states of a super-conformal representation, with the mass squared of a given state given essentially by its conformal weight.
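
Schematically, and suppressing the details of the degeneracies, the thermal average in question has the familiar form, with the Boltzmann weight exp(-E/T) replaced by a power of p (this is a compressed sketch of the assumption, not a derivation):

    ⟨M²⟩ ∝ ⟨L₀⟩ = Σ_n n g(n) p^(n/T) / Σ_n g(n) p^(n/T),   T = 1/k, k = 1, 2, ...

Here g(n) denotes the degeneracy of the conformal weight n, and the real valued mass squared is obtained from this p-adic number by canonical identification (discussed below), so that higher conformal weights are suppressed by inverse powers of p.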

Zero energy ontology (ZEO) has gradually gained the status of a second fundamental assumption. In fact, ZEO strongly suggests the replacement of p-adic thermodynamics with its "complex square root" so that one would actually be considering genuine quantum states squaring to thermodynamical states. This idea looks highly satisfactory to anyone used to thinking that elementary particles cannot be thermodynamical objects. The square root of p-adic thermodynamics raises delicate number theoretical issues since the p-adic square root of a conformal weight having value p does not exist without a proper algebraic extension of the p-adic numbers leading to algebraic integers and a generalized notion of primeness.

Q: Why p-adic thermodynamics, which predicts the thermal expectation of p-adic mass squared and requires the mapping of p-adic valued mass squared to real mass squared by some variant of canonical identification?

A: Number theoretical universality requires fusion of real and p-adic number based physics for various primes so that p-adic thermodynamics becomes natural.

  1. The answer inspired by TGD inspired theory of consciousness would be that the interaction of p-adic space-time sheets serving as correlates of cognition with real space-time sheets representing matter makes p-adic topology effective topology in some length scale range also for real space-time sheets (as an effective topology for discretization). One could even speak about cognitive representations of elementary particles using the rational or algebraic intersections of real and p-adic space-time sheets. These cognitive representations are very simple in p-adic topology and it is easy to calculate the masses of the particles using p-adic thermodynamics. Since representation is in question, the result should characterize also real particle.

  2. The pragmatic answer would be that p-adic thermodynamics gives extremely powerful number theoretical constraints leading to the quantization of mass scales and masses with p-adic temperature T=1/n and p-adic prime appearing as free parameters. Also conformal invariance is strongly favored since the counterpart of Hamiltonian must be integer valued as the super-conformal scaling generator indeed is.

  3. By number theoretical universality one can require that the p-adic mass thermodynamics is equivalent with real thermodynamics for the real mass squared. This is the case if the partition function has a cutoff so that only conformal weights up to some maximum value N are allowed. This has no practical consequences since the real-valued contribution from the conformal weight n is proportional to p^(-n+1/2) and for n>2 is completely negligible, the primes involved being so large (p = M127 = 2^127-1 for the electron, for instance).

Q: Is the canonical identification mapping the p-adic mass squared to real mass squared unique?

A: This is not the case. One can imagine a family of identifications for which the integers n < p^N, N = 1, 2, ..., are mapped to themselves. This however has no practical implications for the calculations since the values of the primes involved are so large.
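
For concreteness, a minimal sketch of the canonical identification in its simplest form (mapping x = Σ x_n p^n to Σ x_n p^(-n)) is given below; the function name and example values are merely illustrative. It also shows why, for a prime as large as M127, anything multiplying a positive power of p is numerically negligible on the real side.

    # Canonical identification in its simplest form: sum(x_n p^n) |--> sum(x_n p^(-n)).
    # The name and the example values are illustrative only.
    def canonical_identification(x, p):
        """Map a non-negative integer, written in base p, to its real image."""
        image, weight = 0.0, 1.0
        while x > 0:
            x, digit = divmod(x, p)
            image += digit * weight
            weight /= p
        return image

    p = 2**127 - 1                                   # M127, the p-adic prime assigned to the electron
    print(1.0 / p)                                   # ~5.9e-39: suppression factor per power of p
    print(canonical_identification(3 + 5 * p, p))    # ~3.0: the O(p) piece is invisible on the real side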

The calculations themselves assume only p-adic thermodynamics and super-conformal invariance. The most important thing that matters is the number of tensor factors in the tensor product of representations of conformal algebra, which must be five.

Q: What are the fundamental conformal algebras giving rise to the super conformal symmetries?

A: There are two conformal algebras involved.

  1. The symplectic algebra of δM4+/- × CP2 has the formal structure of a Kac-Moody algebra, with the light-like radial coordinate r of the light-cone boundary δM4+/- taking the role of the complex coordinate z. It has the symplectic algebras of CP2 and of the sphere S2 of the light-cone boundary as building blocks, taking the role of the finite-dimensional Lie group defining a Kac-Moody algebra. This algebra has no counterpart in string models.

  2. There is also the Kac-Moody algebra assignable to the light-like wormhole throats and to the isometries of the imbedding space, having M4 and CP2 isometries as factors. There are also electroweak symmetries acting on the spinor fields. In fact, the construction of the solutions of the modified Dirac equation suggests that the electroweak and color gauge symmetries become Kac-Moody symmetries in the TGD framework. In practice this means that only the generators with positive conformal weight annihilate the physical states. For a gauge symmetry also those with negative conformal weight annihilate the physical states.

    One can of course ask whether also the SU(2) sub-algebra of SL(2,C) acting on spinors should be counted. One could argue that this is not the case since spin does not correspond to a gauge or Kac-Moody symmetry as the electroweak quantum numbers do.

Q: One must have five tensor factors. How should one count the number of tensor factors, in other words, what is the basic building brick that one identifies as a tensor factor of the Super-Virasoro algebra?

A: One can imagine two options.

  1. The most general option is that one takes the CP2 and S2 symplectic algebras as factors in the symplectic sector. In the Kac-Moody sector one has E2 ⊂ M4 isometries (the longitudinal degrees of freedom of the string world sheet carrying the induced spinor fields are not physical) and SU(3). Besides this one has the electroweak algebra U(2), which almost but not quite decomposes to SU(2)L × U(1) (there are correlations between SU(2)L and U(1) quantum numbers, and the existence of the spinor structure of CP2 makes these correlations manifest). This would give 5 tensor factors as required.

  2. I have also considered Cartan algebras as separate tensor factors. I must confess that at this moment I am unable to rediscover what my motivation for this actually was. This would give a larger number of tensor factors: 1+2 factors in the symplectic sector from the Cartan algebras of SO(3) × SU(3) defining a subgroup of the symplectic group, 2+2 for isometries in the Kac-Moody sector from E2 and SU(3), and 1+1 in the electroweak sector, with spin giving a possible further factor. This means 9 (or possibly 10) factors, so that thermalization is not possible for all Cartan algebra factors. The symplectic sectors are certainly a natural candidate in this respect, so that one would have 5 sectors as required (or 6 if spin is allowed to have Kac-Moody structure). A short tally is given below.
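
To make the bookkeeping of the two items above explicit, the tally can be written out as follows (this only summarizes the counting in the text, with spin as the optional extra factor):

    # Tally of Super-Virasoro tensor factors for the two counting options described above.
    option_1 = ["S2 symplectic", "CP2 symplectic", "E2 Kac-Moody", "SU(3) Kac-Moody", "electroweak U(2)"]
    option_2_cartan = {"symplectic SO(3) x SU(3)": 1 + 2,
                       "Kac-Moody E2 and SU(3)":   2 + 2,
                       "electroweak":              1 + 1}

    print(len(option_1))                  # 5 tensor factors, as required
    print(sum(option_2_cartan.values()))  # 9 Cartan factors (10 if spin is counted as well)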

The first option looks more convincing to me.

How to understand the conformal weight of the ground state?

The ground state conformal weight, which is non-positive, can receive various contributions. One contribution is negative and therefore corresponds to a tachyonic mass squared; a second contribution corresponds to the CP2 cm degrees of freedom and together with the momentum squared boils down to an eigenvalue of the square of the spinor d'Alembertian for H = M4 × CP2 (by bosonic emergence). A third one comes from the conformal moduli of the partonic 2-surface at the end of the space-time sheet at the light-like boundary of the causal diamond and distinguishes between different fermion families.

Q: Tachyonic ground state mass does not look physical and is quite generally seen as a serious - if not lethal - problem also in string models. What is the origin of the tachyonic contribution to the mass squared in TGD framework?

A: The recent picture is that elementary particles correspond to lines of generalized Feynman diagrams identified as space-time regions with Euclidian signature of the induced metric. In these regions mass squared is naturally negative, and it is natural to think that the ground state mass squared receives contributions from both Euclidian and Minkowskian regions. If so, the necessary tachyonic contribution would be a direct signal for the presence of the Euclidian regions, which have actually turned out to define a generalization of the blackhole interior and to be assignable to any system as a space-time sheet characterizing the system geometrically. For instance, my own body as I experience it would correspond to my personal Euclidian space-time sheet as a line of a generalized Feynman diagram.

Q: Where does the H=M4× CP2 contribution to the scaling generator L0 assignable to spinor partial waves in H come from?

A: Zero energy ontology (ZEO) allows one to assign to each particle a causal diamond CD, and according to the recent view emerging from the analysis of the relationship between subjective (experienced) time and geometric time, a particle is characterized by a quantum superposition of CDs. Every state function reduction means localization of the upper or lower tip of all CDs in the superposition and delocalization of the other tip. The position of the upper tip has a wave function in H+/- = M4+/- × CP2 and there is a great temptation to identify the wave function as being induced from a partial wave in H = M4 × CP2. As a matter of fact, number theoretic arguments and arguments related to finite measurement resolution strongly suggest a discretization of H+/-: M4+/- would be replaced with a union of hyperboloids whose distance from the tip of M4+/- is quantized as a multiple of the CP2 radius. Furthermore, at each hyperboloid the allowed points would correspond to the orbit of some discrete subgroup of SL(2,C). Also CP2 would be discretized.

What about Lorentz invariance?

The square root of p-adic thermodynamics implies quantum superposition of states with different values of mass squared and hence four-momenta. In ZEO this does not mean obvious breaking of Lorentz invariance since physical states have vanishing total energy. Note that coherent states of Cooper pairs, which in ordinary ontology would have both ill-defined energy and fermion number, have a natural interpretation in ZEO.

  1. A natural assumption is that the state in the rest system involves only a superposition of states with vanishing three-momentum. For Lorentz boosts the state would be a superposition of states with different three-momenta but same velocity. Classically the assumption about same 3-velocity is natural.

    Q: Could Lorentz invariance break down by the presence of the superposition of different momenta?

    A: This is not the case if only the average four-momentum is observable. The reason is that average four-momentum transforms linearly under Lorentz boosts. I have earlier considered the possibility of replacing momentum squared with conformal weight but this option looks somewhat artificial and even wrong to me now.

  2. The decomposition M4 = M2 × E2 is fundamental in the formulation of quantum TGD, in the number theoretical vision about TGD, in the construction of preferred extremals, and for the vision about generalized Feynman diagrams. It is also fundamental in the decomposition of the degrees of freedom of the string to longitudinal and transversal ones. An additional item to the list is that also the states appearing in the thermodynamical ensemble of p-adic thermodynamics correspond to four-momenta in M2, fixed by the direction of the Lorentz boost.

    Q: In the parton model of hadrons it is assumed that the partons have a distribution with respect to longitudinal momentum, which means that the velocities of the partons are the same along the direction of motion of the hadron. Could one have p-adic thermodynamics for hadrons?

    A: For hadronic p-adic thermodynamics the value of the string tension parameter would be much smaller and the thermal contributions from n>0 states would be completely negligible, so that the idea does not look good. In p-adic thermodynamics for elementary particles one would have a distribution coming from different values of the p-adic mass squared, which is integer valued apart from the ground state configuration.

What are the fundamental dynamical objects?

The original assumption was that elementary particles correspond to wormhole throats. With the discovery of the weak form of electric-magnetic duality came the realization that a wormhole throat is a homological magnetic monopole (rather than a Dirac monopole) and must therefore carry a (Kähler) magnetic charge. Magnetic flux lines must however be closed, so that the wormhole throat must be associated with a closed flux loop.

The most natural assumption is that this loop connects two wormhole throats at the first space-time sheet, that the flux goes through a second wormhole contact to another sheet, returns back along a second flux tube, and is eventually transferred to the original throat along the first wormhole contact.

The solutions of the modified Dirac equation assign to this flux tube a string like curve as the boundary of a string world sheet carrying the induced fermion field. This closed string has "short" portions assignable to the wormhole contacts and "long" portions corresponding to the flux tubes connecting the two wormhole contacts. One can assign a string tension defined by the CP2 scale to the "short" portions of the string and a string tension defined by the primary or perhaps secondary p-adic length scale to the "long" portions of the closed string.

Also the "long" portion of the string can contribute to the mass of the elementary particle as a contribution to the vacuum conformal weight. In the case of weak gauge bosons this would be the case and since the contribution is naturally proportional to gauge couplings strength of W/Z boson one could understand Q/Z mass ratio if the p-adic thermodynamics gives a very small contribution from the "short" piece of string (also photon would receive this small contributionin ZEO): this is the case if one must have T=1/2 for gauge bosons. Note that "long" portion of string can contribute also to fermion masses a small shift. Hence no Higgs vacuum expectation value or coherent state of Higgs would be needed. There are two options for the interpretation of recent results about Higgs and Option II in which Higgs mechanism emerges as an ffective description of particle massivation at QFT limit of the theory and both gauge fields and Higgs fields and its vacuum expectation exist only as constructs making sense at QFT limit. Higgs like particles do of course exist. At WCW limit they are replaced by WCW spinor fields as fundamental object.

Q: One can consider several identifications of the fundamental dynamical object of p-adic mass calculations: either as a wormhole throat (in the case of fermions, for which one of the wormhole throats carries the fermion quantum numbers, this looks natural), as the entire wormhole contact, or as the entire flux tube having two wormhole contacts. Which one of these options is correct?

A: The strong analogy with string models implied by the presence of the fermionic string world sheet would support the identification as the entire flux tube, in which case the large masses of the higher conformal excitations could be interpreted in terms of string tension. Note that this is the only possibility in the case of gauge bosons.

Q: What about p-adic thermodynamics or its square root in hadronic scale?

A: As noticed, the contributions from n>0 conformal excitations would be extremely small in p-adic thermodynamics for the "long" portions. It would seem that this contribution is non-thermal and comes from each value of n labelling states on a Regge trajectory separately, just as in the old-fashioned string model. Even weak bosons would have Regge trajectories. The dominant contribution to the hadron mass can be assigned to the magnetic body of the hadron consisting of Kähler magnetic flux tubes. The Kähler magnetic (or equivalently color magnetic) flux tubes connecting the valence quarks can contribute to the mass squared of the hadron. I have also considered the possibility that the symplectic conformal symmetries distinguishing TGD from superstring models could be responsible for a contribution identifiable classically as the color magnetic energy of the hadron.

Wednesday, November 14, 2012

Higgs like state according to TGD after HCP2012


As both Phil Gibbs and Tommaso Dorigo have already reported, ATLAS and CMS presented new Higgs results from the LHC in Kyoto. From the TGD perspective these results are of special interest since - as explained in previous postings (see this and this) - they could allow one to distinguish between the two options suggested by TGD for the interpretation of the Higgs like particle. Before continuing it must be made clear that the road to these options has been long, with many twists and turns (see this). I have christened the basic options Option I and Option II.

The two options

I have described these options here. In a follow-up posting I have described how Option II can be understood in the TGD framework, where gauge fields, the Higgs field, and also the graviton field are only constructs emerging at the QFT limit of the theory rather than having a fundamental ontological status.

  1. Option I assumes that the Higgs like state cannot explain fermion masses so that the couplings of Higgs to fermions can even vanish. Gauge boson masses are however assumed to result from the counterpart of the Higgs mechanism, which would be the formation of a coherent state assignable to the Higgs like particle identified as an M4 scalar formed from a fermion and an antifermion at the opposite throats of a wormhole contact (just like gauge bosons). Note that the Higgs like state is actually a CP2 vector.

    1. One can wonder why not allow coherent states of the Higgs like particle also for Option II at the microscopic level. p-Adic thermodynamics does not tolerate this. The conclusion that these coherent states explain also fermion masses is difficult to avoid. For me it would mean a return 17 years back to the times before the p-adic mass calculations, without the slightest idea why fermion masses are what they are.

    2. Both scalar and pseudoscalar identifications are possible for the Higgs like state in TGD as it is now. Somewhat misleadingly I have referred to the Higgs like state as the Euclidian pion. "Pion" is a misleading terminological mammoth bone from my original identification of the 125 GeV state as the pion of M89 hadron physics. M89 hadron physics is one of the most important new physics (almost-)predictions of TGD. The 135 GeV particle for which the Fermi telescope has provided considerable evidence could correspond to the M89 pion. It is a pity that the experimentalists are testing only the mainstream theories such as standard SUSY, whose state after HCP2012 is critical, as doctors would say.

      The reason for "Euclidian" is that the space-time regions assignable to the (thickened) lines of generalized Feynman diagrams have Euclidian signature of induced metric. The Minkowskian parts of the flux tubes would be much longer, of the order of Compton length of particle, and could be identified as counterparts of hadronic strings if both ends carry fermion number. This means a unification of elementary particles and hadron like states: they are both string like objects but with widely differing typical lengths and string tensions. The string tension assignable to the long strings/flux tubes would give the dominant contribution to hadron masses.

  2. Option II is conservative in the sense that apparently Higgs would make both bosons and fermions massive: aesthetically this is of course a very nice feature. This conservative character is only apparent since p-adic thermodynamics would determine both fermion and boson masses - also the mass of Higgs.

    1. Both the gauge boson fields and the Higgs field would be constructs of the QFT limit for microscopic physical objects not describable as fields, obtained by shrinking the 3-surfaces assigned with particles to point like objects. In an earlier posting I described how a standard model like theory would result as a QFT limit of TGD by using a modification of a standard construction for the effective action.

    2. "Apparent" would mean that Higgs vacuum expectation value is a purely fictive notion for this option. It would apparently explain masses for gauge bosons and fermions if the coupling of fermions to the scalar state mapped to Higgs field corresponds to gradient coupling ΨbarγμμΦΨ/μ, μ the Higgs vacuum expectation value reproducing the fermion mass from this coupling. In the case of gauge bosons the standard gauge coupling to Higgs would reproduce the gauge boson mass in same manner. This is however only a mimicry of the mass spectrum, not its prediction. QFT limit cannot do better. The crucial ratio of W and Z boson masses expressible in terms of Weinberg angle would become a definition of Weinberg angle.

    3. The identification of elementary particles in terms of monopole flux loops also allows one to consider gauge boson masses as contributions to the conformal weight of the gauge boson ground state, so that the mass would not result from p-adic thermodynamics proper. Is this contribution present also in fermionic ground states, and does it give only a small shift to the fermion mass squared from the value determined by p-adic thermodynamics?

      For gauge bosons this contribution is of order O(p^2): the coefficient would be large so that the contribution would not be much below the smallest possible O(p) contribution. Assuming the same for fermions, this contribution would induce only a small upwards shift of the fermion masses, whose relative size would be largest for the lowest fermion families. For this option the parameter μ ≈ 246 GeV actually corresponds to the smallest possible value of p-adic mass squared of order O(p): clearly the W and Z boson masses are below this but not by much, and this would require p-adic temperature T=1/2 in p-adic thermodynamics. The proportionality of the mass of the long string to the square of the appropriate gauge coupling constant appearing in the gauge boson masses would also be natural and would predict the W/Z mass ratio correctly.

Option I or Option II?


What do the data released by the ATLAS and CMS groups allow one to conclude? Option I or Option II?

  1. Perhaps the most important piece of data is the production rate for tau+tau- pairs in Higgs decays. CMS reports an excess of .72 +/- .52 and ATLAS .72 +/- .64. Earlier Tevatron reported evidence for an excess in the bbbar channel. Together these results are quite strong, and if taken at face value (note however the large error margins) then Option II survives in the TGD framework.

  2. The crucial diphoton channel, where a gamma pair excess has been reported hitherto, has not been updated by either group. This is a pity since for Option I the development of a coherent state of the Euclidian scalar serving as a counterpart for the Higgs expectation would be due to a coupling of the pseudoscalar (scalar) to the instanton density (YM action density) - call it just X - sandwiched between the Higgs like state and its conjugate in the QFT description. The addition of a quantized piece to X would give rise to a term producing anomalous decays to photon pairs/gauge boson pairs.

    For the pseudoscalar Higgs the coefficient of the interaction term would be dictated by anomaly considerations. For a scalar Higgs the ad hoc guess would be that the coefficient is the same. A CP2 type vacuum extremal represents the extreme case of a Euclidian space-time region, and for it the induced Kähler form is self dual. Could this be used to justify this ad hoc assumption?

    Many explanations for the diphoton excess have been proposed, and I cannot avoid the temptation to add an additional contribution to the soup. There have been rumors that the state around 125 GeV splits into two: ATLAS and CMS have indeed reported slightly different masses. Could this be a real effect and explain the diphoton excess - and also why nothing was reported in Kyoto? A believer in M89 physics could argue as follows.

    1. The pion like state corresponds to the 3⊕1 representation of the strong isospin group U(2), realized using the sub-algebra SU(2) of SU(3) playing the role of the strong isospin group in TGD. The pion realizes only the "3". Could the "1" correspond to the sigma meson of M89 hadron physics, have a mass around 125 GeV, and thus explain the two-photon anomaly? Unfortunately, the status of sigma even in ordinary hadron physics has turned out to be very problematic.

    2. One can also play with a second idea. There is recent evidence that the ordinary pion has what might be called an infrared Regge trajectory, with a mass splitting of about 20 MeV or 40 MeV between different states (see this). This pion would have satellites also below its usual mass: the first reported one is around 100 MeV. If also the M89 pion has a similar IR Regge trajectory, then scaling by a factor 512 maps the splitting of 20 (40) MeV to a splitting of 10 (20) GeV. This would map the 100 MeV pion to a copy of the 135 GeV M89 pion with mass around 115 GeV (for which ATLAS found evidence a couple of years ago!). This state is unfortunately 10 GeV too low! A 20 MeV splitting would suggest a satellite of the pion around 120 MeV, and its M89 variant would be around 125 GeV! In this case the different parities of the Euclidian scalar and the scaled down copy of the Euclidian pion would allow one to distinguish between them. This copy of the pion would also have charged companions. There have also been rumors about charged companions of Higgs.

  3. Tommaso Dorigo tells that also the first determination of the spin-parity of the state has been made. 0+ is slightly favored so that a scalar Higgs would be in question. TGD indeed allows both options, but for the scalar option the coupling of the Higgs like state to the YM action density remains the ad hoc guess mentioned above.

To sum up, the challenge of understanding Higgs like states in the TGD framework now seems to be accomplished to a high extent. The outcome is a formulation for the QFT limit of TGD which allows one to understand how TGD implies a standard model like theory as its QFT limit, and a rather precise view about the limitations of the QFT approximation.

Monday, November 12, 2012

To deeper waters

The Higgs issue seems to divide theoreticians into two classes: the simple-minded pragmatists and the real thinkers.

For pragmatists the existence of Higgs and the Higgs mechanism is something absolute: Higgs exists or not, and one can make a bet about it. Most bloggers and most phenomenologists applying numerical models belong to this group. In particular, bloggers have had heated discussions and have made bets pro and con, mostly pro.

Thinkers see the situation in a wider perspective. The real issue is the status of quantum field theory as a description of fundamental forces. Is QFT something fundamental or is it only a low energy limit of a more fundamental microscopic theory? Could it even happen that QFT limit fails in some respects and could the description of particle massivation represent such an aspect?

Already string models taught us (or at least should have taught us) to see quantum field theory as an effective description of a microscopic theory working at the low energy limit. Since string theorists have not been able to cook up any convincing answer to the layman's innocent question "How would you describe the atom using these tiny strings which are so awe inspiring?", QFT limits have become what string models actually are at the phenomenological level. The AdS-CFT correspondence actually equates string theory with a conformal quantum field theory in Minkowski space, so that hopes for a genuine microscopic theory are lost. This is disappointing but not surprising since strings are still too simple: they are either open or closed, and there is no interesting internal topology.

In the TGD framework string world sheets are replaced with 4-D space-time surfaces. One ends up with a very concrete vision about matter based on the notion of many-sheeted space-time, and the implications are highly non-trivial in all scales. For instance, the blackhole interior is replaced with a space-time region with Euclidian signature of the induced metric, characterizing any physical system, be it an elementary particle, a condensed matter system, or an astrophysical object. Therefore the key question becomes the following: does TGD have QFT in M4 as a low energy limit - or rather as a limit holding true in a given scale in the infinite length scale hierarchies predicted by the theory (the p-adic length scale hierarchy, the hierarchy of effective Planck constants, and the hierarchy of causal diamonds)?

Deeper question: Does the QFT limit of the fundamental theory exist?

Could the QFT limit defined as QFT in M4 fail to exist? After this question one cannot avoid questions about the character of Higgs and Higgs mechanism.

  1. It is quite possible that in the QFT framework the Higgs mechanism is the only description of particle massivation. But this is just mimicry, not a predictive description. The QFT limit can only reproduce the spectrum of elementary particle masses - or rather, mass ratios. The ratio of the Planck mass (also an ad hoc concept) to the proton mass remains a complete mystery.

    This failure has been convincingly demonstrated by a huge amount of work in particle phenomenology. First came the GUT theorists. They applied every imaginable gauge group with elementary particles put in all imaginable group representations to reproduce the known part of the particle spectrum. They have reproduced the standard model gauge symmetries at the low energy limit. They have also done the necessary fine-tuning to make the proton long-lived enough, to give large enough masses to the exotics, and to make the beta functions sensible.

    The same procedures have been repeated in the SUSY framework, and finally superstring phenomenology has produced QFT limits with Higgs mechanism; intense fine tuning is now going on to save poor SUSY from the aggressive attacks by LHC. During these 40 years of busy modeling practically nothing has been achieved, but the work goes on since theoreticians have their methods and must produce highly technical papers to preserve the illusion of hard science.

  2. The Higgs mechanism is also plagued by profound problems. The hierarchy problem means that the Higgs mechanism with a mass of about 125 GeV is just at the border of stability. The problem is that the sign of the mass squared term in the Higgs potential can change by radiative corrections so that the vacuum with a vanishing Higgs expectation value becomes stable. SUSY was hoped to solve the hierarchy problem, but LHC has made SUSY in the standard sense implausible; even if it exists, it cannot help in this issue. Another problem is that the coefficient of the fourth power in the Higgs potential can become negative so that the vacuum becomes unstable: the bottom of a valley becomes the top of a hill. The value of the Higgs mass is such that also this seems to happen! (See the posting of Resonaances.)

    Quite generally, fine tuning problems are the characteristic issues of the QFT limit. The proton must be long-lived enough, baryon and lepton number violating decay rates cannot be too high, the exotic particles implied by the extension of the standard model gauge group must be massive enough, and so on... This requires a lot of fine tuning. The theory has transformed from a healer to a patient: the efforts of theoreticians reduce to attempts to resuscitate the patient. All this becomes understandable once one realizes that QFT is just mimicry, not the fundamental theory.

    One could also see these two problems of the Higgs mechanism as the last attempt of the frustrated Nature to signal to the busy mainstream career builders something very profound about reality, using paradox as its last means. From the TGD vantage point the intended message of Nature looks quite obvious and was actually taken into account decades ago!

Shut up and calculate

The basic problem in recent theoretical physics is that thinking has not been allowed for more than half a century. Thinking is seen as "philosophy" - something very, very bad. The fathers of quantum theory were philosophers: they realized the deep problems of quantum measurement theory and considered possible conclusions for the world view. For instance, Bohr - whose view became orthodoxy - concluded that objective reality cannot exist at all and that quantum theory is just a collection of calculational recipes, with Ψ having no real existence. Einstein had a totally different view: he believed that quantum theory is somehow fundamentally wrong.

Neither of them was yet mature enough to see that the problem involves the conscious observer in a very intimate manner: in particular, how the subjective time and the geometric time of the physicist - certainly not one and the same thing - relate to each other. Both were also unable to see that objective reality could be replaced by objective realities identified as "solutions of field equations", and that quantum jumps would take place between them and give rise to conscious experience. This would resolve both the problem of time and the basic problem of quantum measurement theory.

Later theoreticians followed the advice which has been put into the mouth of Feynman and decided to just shut up and calculate. This long silence has lasted more than half a century now. I belong to those few who refused to follow the advice, with the consequence that the decision makers of Helsinki University officially gave me the label of a madman and, besides intensive blackmailing, did their best to prevent any support for my work (see the previous posting, motivated by a warning to young readers about the dangers of reading my blog - sent by a presumably Finnish physics authority calling himself Anonymous).

LHC has now demonstrated how catastrophic the consequences can be when the profession of the theoretician reduces to mindless calculation. We have got lost generations of theoreticians who continue to fill hep-th and hep-ph with preprints with a minimal connection to physical reality, mostly trying to solve the problems created by the theory itself rather than those provided by physics. This is however what they are able to do: the collective silence has lasted too long. Even string model gurus have lost their belief in The Only Possible Theory of Everything. Some of them have suffered a regression to surprisingly childish models of gravitation (entropic gravity). Some have begun to see everything as blackholes, without realizing that blackholes as a mathematical failure of general relativity should have been the starting point rather than the end. Some are making bets and having learned debates about paradoxes related to blackholes (the firewall paradox is the latest newcomer, see the blog posting).

Or could thinking be a rewarding activity after all?

There are also some theoreticians who have followed their own star and have not been able to resist the temptation to think and imagine. I have been calling my own star TGD. As described in the previous posting, p-adic thermodynamics can be seen as a - or even the - microscopic mechanism of massivation in the TGD framework. There are two options to consider. According to Option I, p-adic thermodynamics alone explains only fermion masses, and the microscopic counterpart of the Higgs mechanism would give the dominant contribution to gauge boson masses. For Option II, p-adic thermodynamics would produce both gauge boson and Higgs masses, and the Higgs mechanism could appear at the QFT limit as a mere phenomenological description of the massivation.

Option II is the most conservative option and apparently conforms with the standard model view. It also puts all particles in the same position. Note that in the standard model Higgs itself is like an eye which cannot see itself, since its tachyonic bare mass is put in by hand. Option II is also aesthetically more satisfactory if one believes that the QFT limit of TGD indeed exists. For Option I one should invent a new mechanism describing fermion massivation in the QFT framework or give up the idea of a QFT limit altogether. Option I or Option II? This question might find an answer within a few days!

The existence of an M4 QFT limit is not obvious in the TGD framework (what this limit could be, if it exists, has been discussed in the previous posting). This is due to a dramatic simplification in the microscopic description of particles. The only fundamental fields are spinors of H = M4 × CP2 having just spin and electroweak quantum numbers and carrying conserved quark or lepton number depending on H-chirality. Color emerges and corresponds to color partial waves in H. Also bosons emerge, meaning that gauge bosons, Higgs, and graviton have pairs of fermion and anti-fermion at the opposite throats of wormhole contacts as building bricks. Gauge fields, the Higgs field, the gravitational field, and also the Higgs mechanism can emerge in this approach only as a phenomenological description at the M4 QFT limit, assuming that it exists. Fermion families emerge from topology, and also bosons are expected to exhibit the analog of the family replication phenomenon induced from the fermionic one.

Higgs like bosons exist as Euclidian pions or scalar particles, and they might also develop coherent states characterized by the vacuum expectation value of Higgs, but already this possibility must be taken critically since the coherent state is a QFT based notion and it is not quite clear whether it generalizes to the microscopic level (see this).

What is important is that Higgs does not make fermions massive. For Option II this is true also for bosons. Rather, the couplings and vacuum expectation of Higgs are such that Higgs can pretend to achieve this feat. The Higgs mechanism reproduces; p-adic thermodynamics predicts.

The standard model action is only an effective action providing tree diagrams, so that the loop corrections leading to the hierarchy problem are not present unless the counterpart of the fatal radiative corrections appears in the effective action, which must depend on the p-adic length scale (in TGD the discrete p-adic length scale evolution replaces the continuous renormalization group evolution of quantum field theories). Zero energy ontology however dramatically modifies the view about Feynman diagrammatics and can save the situation, since standard SUSY generalizes to super-conformal invariance.

There are of course a lot of critical questions to be answered. I have written an entire book motivated by the challenge of understanding why p-adic thermodynamics should be needed in real number based physics. p-Adic physics for a single prime is definitely not enough: one must fuse the p-adic physics for various primes p and real physics into a single coherent whole, and this requires a lot of not yet existing mathematics such as a generalization of the number concept. The connections of p-adic physics to the description of cognition and intention in quantum consciousness theory are also obvious, and p-adic space-time sheets would correspond to the "mind stuff" of Descartes. These few examples show how profound and totally unexpected new visions a more philosophical and imaginative attitude to physics generates.

Another book is devoted to the physical implications of p-adic physics and of the hierarchy of effective Planck constants, a notion implied by the very special properties of the basic variational principle dictating the space-time dynamics in TGD framework.

For a summary of the evolution of TGD inspired ideas about particle massivation see the chapter Higgs or something else?. See also the short article Is it really Higgs?.

Saturday, November 10, 2012

Two possible views about Higgs like states in TGD

The HCP2012 conference (Hadron Collider Physics Symposium) at Kyoto will provide new data about the Higgs candidate next Wednesday. Resonaances has summarized the basic problem related to the interpretation as standard model Higgs: too high a yield of gamma pairs and too low a yield of tau-taubar and b-bbar pairs. It is of course possible that higher statistics changes the situation.

Two options concerning the interpretation of Higgs like particle in TGD framework

Theoretically the situation is quite intricate. The basic starting point is that the original p-adic mass calculations provided excellent predictions for fermion masses. For the gauge bosons the situation was different: a natural prediction for the W/Z mass ratio in terms of the Weinberg angle is the fundamental prediction of the Higgs mechanism, and this prediction did not follow automatically from the p-adic mass calculations in their original form. The classical Higgs field does not seem to have any natural counterpart in the geometry of the space-time surface (the trace of the second fundamental form does not work since it vanishes for preferred extremals which are also minimal surfaces). This raised the question whether there is any Higgs boson in the TGD Universe, and for some time I took seriously the interpretation of the Higgs like state observed at LHC as a pion of M89 hadron physics. It is fair to say that the evolution of ideas about the TGD counterpart of the Higgs mechanism has been full of twists and turns.

p-Adic mass calculations and the results from LHC leave two options under consideration.

  1. Option I (see also this): Only fermions get the dominating contribution to their masses from p-adic thermodynamics, and in the case of gauge bosons the dominating contribution is due to the standard Higgs mechanism. p-Adic thermodynamics would contribute also to the boson masses, in particular the photon mass, but the contribution would be extremely small and correspond to p-adic temperature T=1/n, n>2. For this option only gauge bosons would have standard model couplings to Higgs whereas the fermionic couplings could be small. Of course, standard model couplings proportional to fermion mass are also possible. One can criticize this option because fermions and bosons are in an asymmetric position. The beautiful feature is that one could get rid of the hierarchy problem due to the couplings of Higgs to heavy fermions.

  2. Option II (see also this and this): p-Adic mass calculations explain also the masses of gauge bosons and of the Higgs like particle. If the Higgs like state develops a coherent state describable in terms of a vacuum expectation value at the M4 QFT limit, this expectation value is determined by the mass spectrum given by the p-adic mass calculations. The mass spectrum of particles determines the Higgs expectation and the couplings of Higgs rather than vice versa! For this option the Weinberg angle would be defined by the ratio of W and Z boson masses as cos^2(θW) = mW^2/mZ^2, and these masses should be given by the p-adic mass calculations (a numerical illustration follows after this list). Therefore the original problem with the Weinberg angle would disappear. One must of course be very cautious here.

    The recent view about particles as Kähler magnetic loops carrying monopole flux is forced by the assumption that the corresponding partonic 2-surfaces are Kähler magnetic monopoles (implied by the weak form of electric-magnetic duality). The loop proceeds from one wormhole throat to another, then traverses along a wormhole contact to another space-time sheet, returns back, and is eventually transferred to the first sheet via a wormhole contact. The mass squared assignable to this flux loop could give the contribution usually assigned to the Higgs vacuum expectation. If this picture is correct, then the reduction of the W/Z mass ratio to the Weinberg angle might be much easier to understand. As a matter of fact, I have proposed that the flux loop gives rise to a stringy spectrum of states with string tension determined by the p-adic length scale associated with M89.

    This option is attractive because fermions and bosons are in exactly the same position. The hierarchy problem is a potential problem for this approach: note however that the considerations in the sequel imply that the standard model action is predicted to be an effective action giving only tree diagrams, so that there are no radiative corrections at the M4 QFT limit.
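
The relation cos2(θW) = mW2/mZ2 used above is just the standard on-shell definition of the weak mixing angle, and one can check it numerically against the measured boson masses. The following sketch (with approximate mass values as inputs; not part of the TGD argument itself) only makes this arithmetic explicit.

```python
# On-shell weak mixing angle from the W/Z mass ratio: cos^2(theta_W) = m_W^2 / m_Z^2.
# The masses below are approximate measured values (GeV); illustration only.
m_W = 80.4
m_Z = 91.19

cos2_thetaW = (m_W / m_Z) ** 2
sin2_thetaW = 1.0 - cos2_thetaW
print(f"cos^2(theta_W) = {cos2_thetaW:.3f}")   # about 0.777
print(f"sin^2(theta_W) = {sin2_thetaW:.3f}")   # about 0.223
```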

A couple of comments about the experimental situation are in order.

  1. The original interpretation of the Higgs like state was as the M89 pion. The recent observations from the Fermi telescope suggest the existence of a boson with mass 135 GeV. It would be a good candidate for the M89 pion. One can test the hypothesis by scaling the mass of the ordinary neutral pion, which corresponds to M107. The scaling gives mass 69.11 GeV. The p-adic length scale hypothesis however allows also octaves of the minimum mass (they appear for leptopions), and scaling by two gives a mass equal to 138.22 GeV, not too far from 135 GeV (the arithmetic is spelled out in the sketch after these comments).

  2. There is also a second encouraging numerical coincidence. It is probably not an accident that the Higgs vacuum expectation value corresponds to the minimum mass for p=M89 if the p-adic counterpart of the Higgs expectation squared is of order O(p): in other words, one has μ2/mCP22 = p, p = M89.

My sincere hope is that the results of HCP2012 would allow one to distinguish between these two options.
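
As a small numerical aside to the first comment above, the p-adic scaling of the pion mass can be spelled out explicitly. The sketch below assumes only the standard p-adic scaling rule (mass scale proportional to 2-k/2 for p ≈ 2k) and the measured neutral pion mass; it merely repeats the arithmetic quoted in the text.

```python
# p-adic scaling of the neutral pion mass from M107 to M89 hadron physics.
# The mass scale is proportional to 2^(-k/2) for p ~ 2^k, so the scaling factor
# from k=107 to k=89 is 2^((107-89)/2) = 2^9 = 512. Illustration only.
m_pi0 = 0.13498                     # ordinary neutral pion mass in GeV
scaling = 2 ** ((107 - 89) // 2)    # = 512

m_pi_M89 = m_pi0 * scaling
print(f"M89 pion mass estimate: {m_pi_M89:.2f} GeV")       # about 69.11 GeV
print(f"first octave:           {2 * m_pi_M89:.2f} GeV")   # about 138.22 GeV, cf. 135 GeV
```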

Microscopic description of gauge bosons and Higgs like and meson like states

Under the pressures from LHC (and rather harsh social pressures from Helsinki University;-)) it has gradually become clear that understanding whether TGD has an M4 QFT limit or not, and how this limit can be defined, is essential also for understanding the role of Higgs. In the following a first attempt to understand this limit is made. I find it somewhat surprising that I am making this attempt only now, but understanding the proper role of the classical gauge potentials has been quite a challenge.

  1. If one believes that M4 QFT is a good approximation to TGD at the low energy limit, then the standard description of the Higgs mechanism seems to be the only possibility: this on purely mathematical grounds. The interpretation would however be that the masses of the particles determine the Higgs vacuum expectation value and the Higgs couplings rather than vice versa. This would of course be nothing unheard of in the history of physics: the emergence of a microscopic theory - in the recent case p-adic thermodynamics - would force one to change the direction of the causal arrow in "Higgs makes particles massive" to that in "Higgs expectation is determined by particle masses".

  2. The existence of the M4 QFT limit is an intricate issue. In the TGD Universe baryon and lepton number correspond to different chiralities of H=M4× CP2 spinors, and this means that the Higgs like state cannot be an H scalar (it would be a lepto-quark in this case). Rather, the Higgs like state must be a vector in CP2 tangent space degrees of freedom. One can indeed construct a candidate for a Higgs like state as an Euclidian pion or its scalar counterpart: both are possible and one can even consider a mixture of them. The H-counterpart of the Higgs like state is therefore a CP2 axial vector or a CP2 vector or a mixture of them.

    The Euclidian pion or scalar carries a fermion and an anti-fermion at the opposite throats of the wormhole contact. It is easy to imagine that a coherent state of Euclidian pseudo-scalars or scalars or their mixture, having the Higgs expectation as its M4 QFT correlate, is formed. This state transforms as 2⊕2bar under U(2)⊂ SU(3) identifiable as the weak gauge group. This representation is natural in Euclidian regions: Higgs as a tangent space vector of CP2 has naturally a 2⊕2bar decomposition in the tangent space of CP2, allowing an interpretation as the Lie algebra complement of u(2)⊂ su(3).

    In Minkowskian regions the CP2 projection is 3-D and a natural counterpart of Higgs would be a pseudo-scalar (or scalar) transforming as 3⊕1 under U(2)⊂ SU(3), identifiable now as strong U(2). The 3-dimensionality of the CP2 projection suggests that one obtains only the triplet state.

  3. By bosonic emergence also gauge bosons correspond at the microscopic level to a fermion and an anti-fermion at the opposite throats of a wormhole contact. Meson like states in turn correspond to a fermion and an anti-fermion at the ends of a flux tube connecting the throats of two different wormhole contacts, so that Higgs, gauge bosons, and meson-like states are all obtained using a similar construction recipe.

  4. The popular statement "gauge bosons eat almost all Higgs components" makes sense at the M4 QFT limit: the transition to the unitary gauge effectively eliminates all but one of the components of the Higgs like state and gauge bosons get a third polarization. This means gauge boson massivation, but for option II it would take place already in p-adic thermodynamics in ZEO (zero energy ontology).

Trying to understand the QFT limit of TGD

The counterparts of gauge potentials and the Higgs field are not needed in the microscopic description if p-adic thermodynamics gives the masses, so that the gauge potentials and the Higgs field should emerge only at the M4 QFT limit. It is not even necessary to speak about Higgs and YM parts of the action at the microscopic level. The functional integral defined by the vacuum functional - the exponent of Kähler action for preferred extremals - to which one adds the couplings of the microscopic expressions of particles, expressed in terms of fermions, to the effective fields describing them at the QFT limit, should define the effective action at the QFT limit.

The basic recipe looks simple.

  1. Start from the vacuum functional, which is the exponent of Kähler action for preferred extremals, with Euclidian regions giving a real exponent and Minkowskian regions an imaginary one.
  2. Add to this action terms which are bilinear in the microscopic expression for the particle state and the corresponding effective field appearing in the effective action.
  3. Perform the functional integration over WCW ("world of classical worlds") and take vacuum expectation value in fermionic degrees of freedom.
  4. This gives an effective field theory in M4× CP2 fields. To get M4 QFT, integrate over the CP2 degrees of freedom in the action. This dimensional reduction is similar to what occurs in Kaluza-Klein theories.

The functional integration over WCW induces also an integration over the induced spinor fields, which apart from the right-handed neutrino are restricted to the string world sheets. In principle the induced spinor fields could be non-vanishing also at partonic 2-surfaces, but simple physical considerations suggest that they are restricted to the intersection points of partonic 2-surfaces and string world sheets defining the ends of braid strands. Therefore the effective spinor fields Ψeff would appear only at braid ends in the integration over WCW, and one has good hopes of performing the functional integral. The following argument tries to sketch what happens.

  1. One can assign to the induced spinor fields Ψ imbedding space spinor fields Ψeff appearing in the effective action. The dimensions of Ψ and Ψeff are 1/L3/2. A dimensionally correct guess is the term ∫ d2x (g2)1/2 Ψbareff(P) D-1Ψ + h.c. Here Γα denotes the induced gamma matrices, P denotes the end point of a braid strand at the wormhole throat, and D denotes the "ordinary" massless Dirac operator ΓαDα for the induced gamma matrices. The propagator contributes dimension L and is well-defined since Ψ is not annihilated by D but by the modified Dirac operator, in which the modified gamma matrices defined by the modified Dirac action appear. Note that internal consistency does not allow the replacement of Kähler action with four-volume. The integral over the second wormhole throat contributes dimension L2. Therefore the outcome is a dimensionless finite quantity, which reduces to the value of the integrand at the intersection of the partonic 2-surface and the string world sheet - that is at the ends of braid strands, since the induced spinors are localized at string world sheets unless right-handed neutrinos are in question. The fact that the induced spinor fields are proportional to a delta function restricting them to string world sheets does not lead to problems, since the modified Dirac action itself vanishes by the modified Dirac equation.

  2. Both Higgs and gauge bosons correspond to bi-local objects consisting of a fermion and an anti-fermion at the opposite throats of a wormhole contact and restricted to braid ends. They are connected by the analog of a non-integrable phase factor defined by the classical gauge potentials. These bilinear fermionic objects should correspond to Higgs and gauge potentials at the QFT limit. The two integrations over the partonic 2-surfaces contribute L2 each, whereas the dimension of the quantity defining the gauge boson or Higgs like state is 1/L3 from the dimensions of the spinor fields, the dimension of the generalized polarization vector being compensated by that of the gamma matrices. Hence the dimensions of the bi-local quantities are L for both gauge bosons and Higgs like particles. They must be coupled to their effective QFT counterparts so that a dimensionless term in the action results. Note that the delta functions associated with the induced spinor fields reduce them to the end points of the braid strand connecting the wormhole throats, and a finite result is obtained.

  3. How should one identify these dimension-carrying bilinear terms defining the QFT limit? The basic problem is that the microscopic representation of the particle is bi-local whereas the effective field at the QFT limit should be local. The only possibility is to consider an average of the effective field over the stringy curve connecting the points at the two throats. The resulting quantities must have dimension 1/L, in accordance with the naive scaling dimensions of gauge bosons and Higgs, to compensate the dimension L of the microscopic representation of the bosons. For gauge bosons, having zero dimension as 1-forms, the average ∫ Aμdxμ/l along a unique stringy curve of length l connecting the wormhole throats defines a quantity with dimension 1/L. For Higgs components, having dimension 1/L, the quantities ∫ HA(g1)1/2dx/l, where g1 corresponds to the induced metric at the stringy curve, have also dimension 1/L. The presence of the induced metric depending on the CP2 metric guarantees that the effective action contains dimensional parameters so that a breaking of scale invariance results. (The dimension counting of these three steps is summarized in a small sketch below.)
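
To keep track of the dimension counting in the three steps above, here is a minimal bookkeeping sketch. The length dimensions assigned to the various factors (spinor fields ~ L-3/2, 2-surface integrals ~ L2, the propagator ~ L, the stringy average of a boson field ~ L-1) are the ones quoted in the text; the code does nothing beyond adding them up.

```python
from fractions import Fraction as F

# Powers of length L assigned in the text (assumptions of this bookkeeping sketch).
PSI       = F(-3, 2)   # induced spinor field Psi
PSI_EFF   = F(-3, 2)   # effective imbedding space spinor field Psi_eff
D2X       = F(2)       # integral over a partonic 2-surface / wormhole throat
PROP      = F(1)       # propagator D^(-1)
AVG_FIELD = F(-1)      # stringy average of the effective boson field

# 1. Fermionic term  int d^2x sqrt(g_2) Psibar_eff D^(-1) Psi : should be dimensionless.
print("fermion term:", D2X + PSI_EFF + PROP + PSI)           # 0

# 2. Bi-local boson operator: two 2-surface integrals and a fermion pair,
#    polarization vector compensated by gamma matrices -> net dimension L.
bilocal = 2 * D2X + PSI + PSI_EFF
print("bi-local boson operator:", bilocal)                   # 1

# 3. Coupling of the bi-local operator to the stringy average of the
#    effective field: should again be dimensionless.
print("boson coupling term:", bilocal + AVG_FIELD)           # 0
```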

To sum up, for option II the parameters of the counterpart of the Higgs action emerging at the QFT limit must be determined by the p-adic mass calculations in the TGD framework, and in the case of gauge bosons the flux tube structure of particles should give the standard contribution to the gauge boson masses. For option I the fermionic masses would emerge as mass parameters of the effective action. The presence of Euclidian regions of space-time, having an interpretation as lines of generalized Feynman diagrams, is absolutely crucial in making Higgs like states possible. One must however emphasize that at this stage both options I and II must be considered.

Wednesday, November 07, 2012

Helsinki University admits my existence!


Hitherto the policy of the powerholders of the University of Helsinki concerning TGD has been complete silence: my existence has been simply denied publicly for 35 years. I am 62 and suffer from serious health problems, so the hopes are good that I die before getting rehabilitation.

But maybe also this attempt at a perfect crime is failing. Too many intelligent people know what has happened during these years. Therefore the fellows at the top are getting really nervous, as the following comment to an earlier blog posting demonstrates. In fact, it is a serious warning to young students: keep your hands off TGD! The comment also tells a lot about the ethical standards prevailing at Helsinki University. Something is badly wrong when a community behaves like Nazi officers towards Jews.


Well,

Something to say to the readers of this blog.
These ideas are a bit unconventional, but still not as mad as most internet physicists. The education shows in commandment of math and understanding the basic concepts.
Here in University of Helsinki the madness of this guy official, however. It is advised for nobody to read these before having at least a M.Sc. in theoretical physics.

Here is also my comment.

To Anonymous and Readers:

Interesting that the powerholders of Helsinki University are not anymore able to hide their fears that young students might find TGD. Hitherto the silence has been complete.

I have lived without basic academic human rights for most of these 35 years because of the jealousy of certain very influential professors, and still the situation is the same. This after having developed not only a successful unification of fundamental interactions but also a quantum theory of consciousness and biology.

To all readers of this blog: I feel deep shame for the University of Helsinki. Not only for the decision makers but also for the personnel who should have had the moral integrity to do something about the situation during these years: intellectual dishonesty and cowardice have however prevented this.

Helsinki University should have been a place for doing fundamental research but has degenerated into a place populated by sillies like this miserable Anonymous, who does not even have the courage to use his own name.

Some considerations relating to the dynamics of quasicrystals


The dynamics of quasicrystals looks very interesting to me because it shares several features with the dynamics of Kähler action, which defines the basic variational principle of classical TGD and the dynamics of space-time surfaces. In the following I will compare the basic features of the dynamics of quasicrystals to the dynamics of preferred extremals of Kähler action.

Magnetic body carrying dark matter is the fundamental intentional agent in TGD inspired quantum biology and the cautious proposal is that magnetic flux sheets could define the grid of 3-planes (or more general 3-surfaces) defining quasi-periodic background fields favoring 4-D quasicrystals in TGD Universe. Also 3-D quasicrystal like structures defined by grids of planes can be considered and 4-D quasicrystal structure would represent their time evolution.

Quite recently it has been reported that grids consisting of 2-D curved orthogonal surfaces characterize the architecture of neural wiring so that this hypothesis might make sense. This structure would be analogous to 2-D quasicrystal and its time evolution to 3-D quasicrystal.

Instead of explaining the ideas in detail here I recommend the pdf article Some considerations relating to the dynamics of quasicrystals. Also the chapter Quantum Theory of Self-Organization contains the details.

Sunday, November 04, 2012

Quantum dynamics for the moduli associated with CDs and the arrow of geometric time


How is the arrow of geometric time at the level of space-time and imbedding space induced from the arrow of subjective time, identified in terms of a sequence of quantum jumps forming a fractal hierarchy of quantum jumps within quantum jumps? This is one of the long lasting puzzles of TGD and of the TGD inspired theory of consciousness. I have been pondering this question quite intensively during the last years. The latest blog posting about the problem had the title Mystery of time again.

In zero energy ontology (ZEO) the geometry of CD (I often use the sloppy notation CD==CD× CP2, where the latter CD is defined as the intersection of future and past directed light-cones) is that of a double light-cone (double pyramid), and this must relate closely to the problem at hand. An easy manner to obtain an absolute arrow of geometric time, at least statistically, is to assume that the imbedding space is M4+× CP2 - that is, the product of a future light-cone with CP2. The problem is however that of finding a convincing quantal mechanism generating the arrow of time, and also explaining why the geometric arrow of time sometimes changes from the standard one (say for phase conjugate laser beams).

The latest vision about the generation of the arrow of geometric time at the level of imbedding space and space-time involves rather radical features but is consistent with the second law if the latter is generalized so that the geometric arrow of time at the level of imbedding space alternates as state function reduction takes place alternately at the opposite light-like boundaries of a fixed CD. If the partially non-deterministic dynamics at the space-time level defines a correlate for the dissipative dynamics of quantum jumps, the arrow of geometric time at the space-time level is constant (the space-time surface assignable to the state function reductions can be seen as a folded surface spanned between the boundaries of CD) and entropy defines a monotonically increasing time coordinate. This is a rather radical revision of the standard view but makes definite predictions: in particular, the syntropic aspects of the physics of living matter could be assigned with the non-standard direction of geometric time at the space-time level.

This approach however still suffers from a defect. CDs are regarded as completely non-dynamical: once a CD is created it remains the same from quantum jump to quantum jump and thus serves as a fixed arena of dynamics. This cannot be the case.

Some questions about CDs and their quantum dynamics

One can raise several questions relating to CDs.

  1. CDs are assumed to form a fractal hierarchy of CDs within CDs. The size scale of CD has been argued to come as an integer multiple of the CP2 size scale on the basis of number theoretic arguments. One can ask whether CDs can overlap and interact, and what interaction means.

  2. What is the proper interpretation of CD? Could CD correspond to a spotlight of consciousness directed to a particular region of space-time surface, so that space-time surface need not end at the boundaries of CD as also generalized Feynman diagrammatics mildly suggests? Or do the space-time surfaces end at the boundaries of CD so that CD defines a sub-Universe?

  3. Should one assign CD to every subsystem - even elementary particles and fermion serving as their building bricks? Can one identify CD as a carrier of topologically quantized classical fields associated with a particle?

As already noticed, the picture based on static CDs is too simplistic. This inspires several questions relating to the possible dynamics of CDs.

  1. In ZEO one can in principle imagine the creation of a CD from vacuum and its disappearance into vacuum. It is still unclear whether the space-time sheets associated with a CD are restricted to the interior of the CD or whether they can continue outside it.

    For the first option the appearance of a CD would be the creation of a sub-Universe contained by the CD. A CD could be assigned with any sub-system. For the latter option the appearance of a CD would be the generation of a spotlight of consciousness directing attention to a particular region of imbedding space and thus to the portions of the space-time surfaces inside it. A quantum superposition of space-time surfaces is actually in question and should be determined by the vacuum functional already before the presence of the CD. How to describe the possible creation and disappearance of CDs quantally is not clear. For instance, what is the amplitude for the appearance of a new CD from vacuum in a given quantum jump?

  2. CDs have various moduli and one could assign to them a quantum dynamics. The position of the center of mass or of either tip of CD in M4 defines moduli, as does the point of CP2 defining the origin of the complex Eguchi-Hanson coordinates in which U(2)⊂ SU(3) acts linearly: these points are in general assumed to be different at the two ends of CD. If either tip of CD is fixed, the Lorentz boosts leaving the tip fixed move the other tip along a constant proper time hyperboloid H3, and the tessellations defined by the factor space H3/Γ, where Γ is a discrete subgroup of SL(2,C), are favored for number theoretical reasons.

    Quantum classical correspondence inspires the question whether the boost is determined completely by the four-momentum assignable to the positive/negative energy part of zero energy states and corresponds to the four-velocity β defined by the ratio P/M of total four-momentum and mass for the CD in question. It seems that this kind of assumption can be justified only in semiclassical approximation.

  3. In ZEO cm degrees of freedom of CD cannot carry Poincare charges. One can however assign the Poincare charges of the positive energy part of zero energy state to a wave function depending on the coordinate differences m12 defining the relative coordinate for the tips of the CD.

    The most general option is that the size scale of CD is continuous. This would allow one to realize momentum eigenstates as analogs of plane waves as a function of the position m12 of the (say) upper tip of CD relative to the lower tip.

    The size scale of CD has however been assumed to be quantized. That is, the temporal distance T between the tips comes as an integer multiple of the CP2 time TCP2: this scale is about 104 Planck times, so that the discretization has no practical consequences. Discretization is suggested by the number theoretical vision, by the finite measurement resolution, and by the general features of the U-matrix expressible as a collection of M-matrices. Indeed, in ZEO one naturally obtains an infinite collection of U-matrices labelled by an integer, which would correspond to the Lorentz invariant temporal distance Tn = nTCP2 between the tips. The scaling up of the temporal distance would represent a scaling of CD in the rest system defined by the fixed tip, translating the second tip by an integer multiple of TCP2 from Tn1 to Tn2.

    A further quantization would relate to the tessellations defined by the subgroups Γ. The counterparts of plane waves for the momentum eigenstates would be defined in a discretized version of Minkowski space obtained by dividing it into a sequence of discretized hyperboloids with proper time distance a = nTCP2 from the lower tip of CD.

  4. There is evidence that one can assign to a given particle a CD with a fixed size scale given by the secondary p-adic length scale: for the electron this size scale would correspond to the Mersenne prime M127 and to the frequency 10 Hz defining a fundamental biorhythm (a rough numerical check is given after this list). This would give a deep connection between elementary particle physics and physics in macroscopic length scales. The integer multiples of the secondary p-adic size scale would correspond to integer values of the effective Planck constant.

    A natural interpretation of this scale would be as an infrared cutoff, so that the wave functions approximating momentum eigenstates and depending on the relative coordinate m12 would be restricted to the region between the light-cone boundary and the hyperboloid a = M127T0. A similar restriction would take place for all elementary particles. For a particle with effective Planck constant hbareff = n×hbar0 the IR cutoff would be an n-multiple of that defined by the secondary p-adic time scale.
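
As a rough numerical check of the 10 Hz figure mentioned in the last item: the secondary p-adic time scale for p = M127 is sqrt(p) times the Compton time of the electron. The sketch below takes the Compton time with the convention h/(mec2) rather than hbar/(mec2); this convention, and the rounded constants, are assumptions of the illustration rather than something fixed by the text.

```python
import math

# Secondary p-adic time scale for the electron: T ~ sqrt(p) * Compton time,
# with p = M127 = 2^127 - 1. Compton time taken as h/(m_e c^2) (convention
# assumed here; using hbar instead gives a value smaller by 2*pi).
h = 6.626e-34                        # Planck constant, J s
m_e_c2 = 0.511e6 * 1.602e-19         # electron rest energy, J
compton_time = h / m_e_c2            # about 8.1e-21 s

p = 2**127 - 1
T_secondary = math.sqrt(p) * compton_time
print(f"secondary p-adic time scale: {T_secondary:.3f} s")        # about 0.1 s
print(f"corresponding frequency:     {1.0 / T_secondary:.1f} Hz") # about 10 Hz
```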

Could CDs allow to understand the simultaneous wave-particle nature of quantum states?

One of the paradoxical features of quantum theory is that we always observe particles - even those with well-defined momentum - to have rather well-defined spatial orbits. It is as if spatial localization always occurred in quantum measurements and were a key element of perception and of the state function reduction process. This raises a heretical question: could it be that the localized particles in some sense have a well-defined momentum? In standard quantum theory this is definitely not possible. The assignment of a CD with a particle - or with any physical system - however suggests that this paradoxical looking assignment is possible. The particle would be localized with respect to (say) the lower tip of CD and delocalized with respect to the upper tip, and localization of the lower tip would imply delocalization of the upper tip.

It is indeed natural to assume that either tip of CD - say the lower one - is localized in M4 in state function reduction. Unless one is willing to make additional assumptions, this implies not only the non-prepared character of the state at the upper tip, but also a delocalization of the upper tip itself by the non-triviality of the M-matrix: one has quantum superpositions of worlds characterized by CDs with a fixed lower tip. The localization at the lower tip would correspond to the fact that we experience the world as classical. Each zero energy state would be prepared at the (say lower) end of CD so that its lower tip would have a fixed position in M4. The unprepared upper tip could have a wave function in the space of all possible CDs with a fixed lower tip.

One could also assign the spinor harmonics of M4× CP2 to the relative coordinates m12 and their analogs in CP2 degrees of freedom. The notion of CD would therefore make it possible to realize simultaneously the particle behavior in position space (localization of the lower tip of CD) and the wave like nature of the state (superposition of momentum eigenstates for the upper tip relative to the lower tip).

This vision is only a heuristic guess. One should demonstrate that the average dynamical behavior for coordinate differences m12 corresponds to that for a free particle with given four-momentum for a given CD and fixed quantum numbers for the positive energy part of the state.

The arrow of geometric time at the level of imbedding space and CDs

In the earlier argument the arrow of geometric time at the imbedding space level was argued to relate to the fact that zero energy states are prepared only at one end of CD but not at both. This is certainly part of the story but something more concrete would be needed. In any case, the experienced flow of time should relate to what happens to CDs, but in the proposed model CDs are not affected in the quantum jump. This would leave only the drifting of sub-CDs as a mechanism generating the arrow of geometric time at the imbedding space level. It is however difficult to concretize this option.

Could one understand the arrow of geometric time at the imbedding space level as an increase of the size of the CDs appearing in the zero energy state? The moduli space of CDs with a fixed upper/lower tip is, without discretization, the future/past light-cone. Therefore there is more room in the future than in the past for a particular CD, and the situation is like diffusion in the future light-cone, meaning that the temporal distance from the tip is bound to increase in a statistical sense (a toy illustration follows below). This means a gradual scaling up of the size of the CD. A natural interpretation would be in terms of cosmological expansion.
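
The "more room in the future" argument can be illustrated with a crude toy simulation; this is only a cartoon under simplifying assumptions (1+1 dimensions, unit random steps, rejection of steps leaving the cone), not part of the actual argument. The upper tip of a CD performs a random walk but must stay inside the future light-cone of the lower tip, so past-directed moves near the boundary are rejected more often and the average temporal distance drifts upward.

```python
import random

# Toy 1+1-dimensional random walk of the upper tip (t, x) of a CD, with the
# lower tip fixed at the origin. Steps taking the tip outside the future
# light-cone (t > |x|) are rejected, which produces an upward drift of the
# temporal distance t - a diffusion-like arrow of geometric time.
random.seed(0)

def evolve(steps=10_000, dt=1.0):
    t, x = 2.0, 0.0                       # initial position of the upper tip
    for _ in range(steps):
        nt = t + random.choice((-dt, dt))
        nx = x + random.choice((-dt, dt))
        if nt > abs(nx):                  # keep the tip inside the cone
            t, x = nt, nx
    return t

samples = [evolve() for _ in range(200)]
print("mean temporal distance after the walk:",
      sum(samples) / len(samples))        # much larger than the initial 2.0
```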

There are two options to consider depending on whether the imbedding space is M4× CP2 or M4+× CP2. The latter option allows local Poincare symmetry and is consistent with standard Robertson-Walker cosmology so that it cannot be excluded. The first option leads to Russian doll cosmology containing cosmologies within cosmologies in ZEO and is aesthetically more pleasing.

  1. Consider first the M4× CP2 option. At each tip of CD one has an arrow of geometric time at the level of imbedding space and these arrows are opposite. What does this mean? Do the tips correspond to separate conscious entities becoming conscious alternately in state function reductions? Or do they correspond to a single conscious entity with memories?

    Could the sleep-wake cycle correspond to a sequence of state function reductions at the opposite ends of a personal CD? It would seem that we are conscious (in the sense we understand consciousness) only after state function reduction. Could we be conscious and have sensory percepts about the other end of CD during the sleep state but have no memories about this period, so that we would be living a double life without knowing it? Does the unprepared and delocalized part (with respect to m12) of the zero energy state contribute to the conscious experience accompanying state function reduction? Holography would suggest that this is not the case.

    If CD corresponds to a spotlight of consciousness, the time span of conscious experience could increase in both time directions for the latter option. The span of human collective consciousness has been increasing in both directions all the time: we are already becoming conscious of what has probably happened immediately after the Big Bang. Could this evolution be completely universal and coded into the fundamental physics?

  2. If the imbedding space is assumed to be M4+× CP2, one obtains only one arrow of time in the long run. The reason is that the lower tip of any CD sooner or later reaches δ M4+× CP2 and further expansion in this direction becomes impossible, so that only the expansion of CD in the future direction remains possible.

Summary

The proposed vision for the dynamics of the moduli of CDs is rather general, allows a concrete understanding of the arrow of geometric time at the imbedding space level, and binds it directly to the expansion of CDs as an analog of cosmic expansion. The previous vision about how the arrow of geometric time could emerge at the space-time level remains essentially unchanged and allows the increase of syntropy to be understood as the increase of entropy but for a non-standard correspondence between the arrow of subjective time and the arrow of imbedding space time.

Imbedding space spinor harmonics characterizing the ground states of the representations of the symplectic group of δ M4+/-× CP2 define the counterparts of single particle wave functions assignable to the relative coordinates of the second tip of CD with respect to the one fixed in state function reduction. The surprising outcome is the possibility to understand the paradoxical aspects of wave-particle duality in terms of the bi-local character of CD: localization of a given tip implies delocalization of the other tip.

For background see the chapter About the Nature of Time of "TGD Inspired Theory of Consciousness".