https://matpitka.blogspot.com/2012/08/

Friday, August 31, 2012

Errata



Feynman has said that the life of a scientist is a state of endless confusion. This is due to the skeptical attitude, which means that every statement is always only more or less plausible, never absolutely true. It would be so easy to just believe, but this luxury is not for scientists.

In my own case the endless confusion is also due to my remarkable ability to make blunders. This ability must be unique to me, since I have not heard a single scientist tell about possessing this gift, and many of them insist that I am not a scientist at all;-). The mechanism leading to the perception of the big blunder is always the same. Some days of depressive mood without any obvious reason. Then a sudden realization - almost as a rule during my daily walk - that I have done something horribly stupid. The discovery is followed by a deep feeling of shame. Probably something similar to what is felt by all those innumerable discoverers of perpetuum mobiles who are told in detail how energy conservation and the second law explain why their construct did not work in this demo and never will. Have I lived in vain? Has anyone read the article or chapter containing the blunder? How many people have discovered how stupid an error I have made? Or could I hope that no one actually reads my scribblings? And if someone does, could it be that this someone understands nothing!

The long and tortuous task of understanding the Higgs mechanism and SUSY in the TGD framework has been a rich source of blunders and erroneous interpretations, and I have collected this Odysseia into two chapters (see this and this), which reveal all the miserable details of these journeys in the theory landscape and at least show how not to do it;-). I can of course console myself by saying that this is not solely due to my stupidity. In TGD all the particles follow as predictions of the theory: no fields are put in by hand as in the usual quantum field theory approach, where one can write Feynman rules immediately. It is obvious that if one starts just from the idea that space-times are 4-D surfaces in a certain 8-D space, it takes some time and some blunders to deduce the particle spectrum and Feynman rules.

The latest electrifying experience during my daily walk was related to the Higgs mechanism. In the previous shock, a month or so ago, I had understood that the 125 GeV particle usually identified as a Higgs like boson very probably is not the M89 pion as I had originally believed, although it could be a pion-like state. I had already realized earlier that TGD predicts two kinds of meson-like states. Minkowskian mesons are counterparts of ordinary mesons, assignable to long flux tubes with Minkowskian signature of the induced metric connecting different wormhole contacts at a distance defined by the hadron size in question. For M89 hadrons this would be the weak length scale. Euclidian mesons correspond to very short flux tubes connecting opposite throats of wormhole contacts. The latter are natural candidates for Higgs like bosons and can in principle be scalars or pseudoscalars. I assumed a pseudoscalar, but the choice does not actually affect the basic predictions.

There is a beautiful connection with color symmetry and with strong u(2) identifiable as weak u(2). The color algebra su(3) decomposes into a u(2) subalgebra and its complement, transforming like 3+1 and 2+2bar respectively under u(2), which has an interpretation as the imbedding of the electroweak u(2) acting as holonomies of CP2. One can construct fermionic bilinears transforming according to both representations: 2+2bar is the natural candidate for the Euclidian pion identified as the Higgs like boson, and 3+1 - as a matter of fact, 3 alone (for a reason which can be understood) - is a natural candidate for the Minkowskian pion.
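
For reference, the dimension counting behind this decomposition can be written out explicitly; this is just the standard branching of the su(3) adjoint under u(2) = su(2)+u(1) and adds nothing beyond what is stated above.

```latex
% Branching of the 8-dimensional su(3) adjoint under the subalgebra u(2) = su(2) + u(1):
% the u(2) part itself gives 3 + 1 (Minkowskian pion candidate),
% its complement gives 2 + 2bar (Euclidian pion, i.e. the Higgs like state).
\underline{8} \;=\; \underbrace{(\underline{3}\oplus\underline{1})}_{u(2)}
\;\oplus\; \underbrace{(\underline{2}\oplus\bar{\underline{2}})}_{\text{complement}},
\qquad 8 = (3+1)+(2+2).
```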

I of course identified the Higgs like boson in terms of the 2+2bar Euclidian pion. I had to construct the TGD counterpart of the Higgs mechanism generating a vacuum expectation of the Euclidian pion, giving the dominant contributions to weak boson masses and providing them with their longitudinal polarizations. Here I made an incredibly stupid mistake: for some mysterious reason I treated the Euclidian pion as 3+1 rather than 2+2bar! Certainly a cognitive remnant from the original identification of the 125 GeV boson as the Minkowskian pion. I did not even notice that 3+1 gives rise to a modification of the Higgs mechanism in which both the photon and the Z boson would remain massless. Amusingly, I proposed years ago that a phase in which both photon and Z are massless in cellular length scales exists and is highly relevant for the physics of the cell membrane. Maybe this 3+1 Higgs is there and characterizes a different phase.

After discovering the error it took one day to develop a correct - or better to say "more correct" - view about the Higgs mechanism. Within a few hours I felt happy. Truth is a marvelous healer.

  1. The final solution (I hope so!) came from the observation that TGD predicts two kinds of meson like states, corresponding to Kähler magnetic flux tubes connecting wormhole throats of separate wormhole contacts and to flux tubes connecting the throats of the same wormhole contact. The first meson like states are "long" and have Minkowskian signature. The second meson like states are very "short" and have Euclidian signature. It would be the Euclidian pions which take the role of the Higgs like state. It is essential that these states transform as 2+2bar under u(2) ⊂ su(3). In the first approximation the Higgs potential is exactly the same as in the standard model and the massivation of gauge bosons proceeds in the same manner as in the standard model. The hierarchy problem due to the instability of the tachyonic mass term for the Higgs like particle is avoided, since direct couplings to fermions proportional to fermion masses are not needed because p-adic thermodynamics makes fermions massive. Also the predictions for the decay rates remain the same as for the standard model Higgs if the tachyonic mass squared is taken as a non-dynamical parameter.

  2. What is new is that one can also find a microscopic description for the tachyonic mass term in terms of a bilinear coupling to a superposition a×YM + b×I of the YM action density and the instanton term, most naturally restricted to the induced Kähler form. This term also predicts that - besides the ordinary decays to electroweak gauge boson pairs mediated by the same action as in the case of the ordinary Higgs - there are also decays mediated by linear perturbative couplings to a×YM + b×I. The instanton density also brings in CP breaking, possibly related to the poorly understood CP breaking in hadron physics. A quantitative estimate gives a result consistent with experimental data for a reasonable magnitude of the parameters. Also a connection with the dark matter searches reporting a signal at 130 GeV and possibly also at 110 GeV suggests itself: maybe these signals correspond to Minkowskian and Euclidian M89 pion-like states.
  3. The Euclidian pion does not explain the massivation of fermions, which has an explanation in terms of p-adic thermodynamics. The couplings to fermions are radiative and cannot be proportional to masses. Besides solving the hierarchy problem, this identification could explain the failure to find the decays to τ pairs and explain the excess of two-gamma decays.

For the details of the Odysseia see the new chapter Higgs or something else? of "p-Adic length scale hypothesis and dark matter hierarchy", and for the latest vision about Higgs the article Is it really Higgs?.

Thursday, August 23, 2012

The decays of Higgs like particle to bbar pairs observed?


As I have explained, in the TGD framework the Higgs is replaced with a pseudoscalar which I call the Euclidian pion.

  1. The model correctly predicts the anomalously high decay rate of the 125 GeV boson to gamma pairs if it corresponds to the Euclidian pion. One obtains a simpler variant of the standard model Higgs mechanism in the gauge boson sector but without the hierarchy problem, since the Euclidian pion need not (but can) have the linear couplings to fermions that explain fermion masses in the standard model framework (p-adic thermodynamics therefore solves the hierarchy problem and in the simplest model also gives the Higgs its non-tachyonic mass).

  2. The Higgs vacuum expectation characterizing the coherent vacuum state results from the instanton density, which is non-vanishing only in the Euclidian regions of the space-time surface representing the lines of generalized Feynman graphs.

  3. In Minkowskian regions the field equations imply that the CP2 projection of the space-time surface is 3-D. Therefore the instanton density vanishes, so that the M89 pion is not eaten by weak bosons. For the same reason the M89 pion, which is analogous to a vector particle in CP2 tangent space, has only three polarizations in CP2 directions, so that the fourth component of the pion disappears and one obtains just the 3-component pion.

  4. What is amazing is that the new vision about the Higgs like particle allows one to say something highly interesting about the new E(38) boson (which of course could be a fake) by simple p-adic scaling arguments (see this). By scaling back to M89 one ends up asking whether weak bosons reside on M89 Regge trajectories with a predictable slope - as one might also expect if one accepts that they correspond to pairs of monopole magnetic flux tubes at parallel space-time sheets. Of course, these are just questions, and it probably does not take too much time to answer them negatively by a simple reductio ad absurdum argument.

The crucial prediction is that the couplings of the Euclidian pion are induced radiatively, need not (but can) depend linearly on the fermion mass, and can be and are expected to be smaller than for the standard model Higgs. The fact that the decays to tau pairs have not been observed at LHC (see this) supports this picture, but much more data is needed before one can make any conclusions.

The following three pictures represent the lowest Feynman graphs contributing to the decays of the standard model Higgs, the branching ratios of the Higgs to various channels, and the decay width of the Higgs as a function of the Higgs mass. As far as decay rates are concerned, the optimal choice in the attempts to observe fermionic couplings of the Higgs like particle is - in the case of the standard model Higgs - to search for decays to b pairs. From the second figure listed above one finds that the branching ratio of the standard model Higgs to a b pair is more than 10 times higher than that to a tau pair. This is mostly due to the proportionality of the branching ratio to the square of the fermion mass. The ratio of the b bbar and τ τbar branching ratios receives from this a factor [m(b)/m(τ)]^2 ≈ (4.2/1.7)^2 ≈ 6.1.
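
As a quick sanity check on that factor, the arithmetic can be spelled out; the quark and lepton masses below are the rough values used above, and the color factor 3 and phase-space corrections that make up the rest of the factor of ten are ignored.

```python
# Rough mass-ratio-squared factor entering the ratio of the H -> b bbar and
# H -> tau tau branching ratios (color factor 3 and phase space are ignored).
m_b, m_tau = 4.2, 1.7          # GeV, rough values used in the text

factor = (m_b / m_tau) ** 2
print(f"(m_b/m_tau)^2 = {factor:.1f}")   # ~6.1; with the color factor 3 this already exceeds 10
```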

The CDF and D0 collaborations indeed claim to have observed decays of the 125 GeV Higgs candidate to b bbar. The Tevatron signal is relatively weak: it has a statistical significance of only about 3 sigma, much below the 5 sigma serving as the standard for discovery in the absence of systematic errors. I attach the abstract of the article below.

We combine searches by the CDF and D0 Collaborations for the associated production of a Higgs boson with a W or Z boson and subsequent decay of the Higgs boson to a bottom-antibottom quark pair. The data, originating from Fermilab Tevatron ppbar collisions at √s = 1.96 TeV, correspond to integrated luminosities of up to 9.7 fb^-1. The searches are conducted for a Higgs boson with mass in the range 100-150 GeV/c^2. We observe an excess of events in the data compared with the background predictions, which is most significant in the mass range between 120 and 135 GeV/c^2. The largest local significance is 3.3 standard deviations, corresponding to a global significance of 3.1 standard deviations. We interpret this as evidence for the presence of a new particle consistent with the standard model Higgs boson, which is produced in association with a weak vector boson and decays to a bottom-antibottom quark pair.

Suppose that the Tevatron observation is real. If the Higgs like particle is the standard model Higgs, the failure to observe tau pair production could be explained as a statistical fluctuation. If the Higgs like particle is the Euclidian pion and its decays to fermion pairs depend only weakly on the fermion mass, the decay rates to lepton pairs and lighter quark pairs should be higher than for the standard model Higgs. One again encounters a problem with tau pairs. My bet is that the Tevatron observation is a fake and involves delicate psychological factors, such as the belief that the new particle is indeed the standard model Higgs.

For obvious reasons I feel like a dancer on a rope! Do I ever get to the other side of the abyss? Or is my fate the same as that of SUSY builders and superstringers?;-)

For more details about TGD vision concerning Higgs like particle see the article.

Realization of large N SUSY in TGD

The generators of large N SUSY algebras are obtained by taking fermionic currents for second quantized fermions and replacing either the fermion field or its conjugate with one of its modes. The resulting super currents are conserved and define super charges. By replacing both the fermion field and its conjugate with modes one obtains c-number valued currents. Therefore N=∞ SUSY - presumably equivalent with super-conformal invariance - or its finite-N cutoff is realized in the TGD framework, and the challenge is to understand the realization in more detail.

Super-space vs. Grassmann algebra valued fields

Standard SUSY introduces super-space, extending space-time by adding anti-commuting coordinates as a formal tool. Many mathematicians are not enthusiastic about this approach because of the purely formal nature of the anti-commuting coordinates. I too regard them as geometric nonsense, and there is actually no need to introduce them, as the following little argument shows.

Grassmann parameters (anti-commuting theta parameters) are generators of Grassmann algebra and the natural object replacing super-space is this Grassmann algebra with coefficients of Grassmann algebra basis appearing as ordinary real or complex coordinates. This is just an ordinary space with additional algebraic structure: the mysterious anti-commuting coordinates are not needed. To me this notion is one of the conceptual monsters created by the over-pragmatic thinking of theoreticians.

This allows one to replace the field space with a super field space, which is a mathematically completely well-defined object, and to leave space-time untouched. A linear field space is simply replaced with its Grassmann algebra. For a non-linear field space this replacement does not work. This allows one to formulate the notion of a linear super-field just in the same manner as it is done usually.

The generators of super-symmetries in the super-space formulation reduce to super translations, which anti-commute to translations. The super generators $Q_\alpha$ and $\bar{Q}_{\dot\beta}$ of the super Poincare algebra are Weyl spinors commuting with momenta and anti-commuting to momenta:

$$\{Q_\alpha, \bar{Q}_{\dot\beta}\} = 2\,\sigma^\mu_{\alpha\dot\beta}\, P_\mu \;.$$

One particular representation of super generators acting on super fields is given by

$$D_\alpha = i\,\frac{\partial}{\partial\theta^\alpha} \;,$$

$$\bar{D}_{\dot\alpha} = i\,\frac{\partial}{\partial\bar\theta^{\dot\alpha}} + \theta^\beta\,\sigma^\mu_{\beta\dot\alpha}\,\partial_\mu \;.$$

Here the index raising for 2-spinors is carried out using the antisymmetric 2-tensor $\epsilon^{\alpha\beta}$. The super-space interpretation is not necessary, since one can interpret this action as an action on a Grassmann algebra valued field mixing components with different fermion numbers.

Chiral superfields are defined as fields annihilated by $\bar{D}_{\dot\alpha}$. Chiral fields are of the form $\Psi(x^\mu + i\bar\theta\sigma^\mu\theta,\, \theta)$. The dependence on $\bar\theta^{\dot\alpha}$ comes only from its presence in the translated Minkowski coordinate, which is annihilated by $\bar{D}_{\dot\alpha}$. A super-space enthusiast would say that by a translation of the M4 coordinates chiral fields reduce to fields which depend on θ only.

The space of fermionic Fock states at partonic 2-surface as TGD counterpart of chiral super field

As already noticed, another manner to realize SUSY is in terms of representations of the super algebra of conserved super-charges. In the TGD framework these super charges are naturally associated with the modified Dirac equation, and anti-commuting coordinates and super-fields do not appear anywhere. One can however ask whether one could identify a mathematical structure replacing the notion of the chiral super field.

I have proposed that generalized chiral super-fields could effectively replace induced spinor fields and that second quantized fermionic oscillator operators define the analog of a SUSY algebra. One would have N=∞ if all the conformal excitations of the induced spinor field restricted to the 2-surface are present. For the right-handed neutrino the modes are labeled by two integers and are delocalized to the interior of the Euclidian or Minkowskian regions of the space-time sheet.

The obvious guess is that the chiral super-field generalizes to a field having as its components many-fermion states at partonic 2-surfaces, with theta parameters and their conjugates in one-one correspondence with fermionic creation operators and their hermitian conjugates.

  1. Fermionic creation operators - in the classical theory the corresponding anti-commuting Grassmann parameters - replace the theta parameters. Theta parameters and their conjugates are not in one-one correspondence with spinor components but with the fermionic creation operators and their hermitian conjugates. One can say that the super-field in question is defined in the "world of classical worlds" (WCW) rather than in space-time. The fermionic Fock state at the partonic 2-surface is the value of the chiral super field at a particular point of WCW.

  2. The matrix defined by $\sigma^\mu\partial_\mu$ is replaced with the matrix defined by the modified Dirac operator D between spinor modes, acting in the solution space of the modified Dirac equation. Since the modified Dirac operator annihilates the modes of the induced spinor field, the super covariant derivatives reduce to ordinary derivatives with respect to the theta parameters labeling the modes. Hence the chiral super field is a field that depends on the $\theta_m$ or on the conjugates $\bar\theta_m$ only. In second quantization the modes of the chiral super-field are many-fermion states assigned to partonic 2-surfaces and string world sheets. Note that this is the only possibility, since the notion of a super-coordinate does not make sense now.

  3. It would seem that the notion of the super-field does not bring in anything new. This is not the case. First of all, the spinor fields are restricted to 2-surfaces. The second point is that one cannot assign to the fermions of the many-fermion states separate non-parallel or even parallel four-momenta. The many-fermion state behaves like an elementary particle. This has non-trivial implications for propagators, and a simple argument leads to the proposal that the propagator for an N-fermion partonic state is proportional to 1/p^N. This would mean that only the states with fermion number equal to 1 or 2 behave like ordinary elementary particles.

How are the fermionic anti-commutation relations determined?

Understanding the fermionic anti-commutation relations is not trivial, since all fermion fields except the right-handed neutrino are assumed to be localized at 2-surfaces. Since the fermionic conserved currents must give rise to well-defined charges as 3-D integrals, the spinor modes must be proportional to a square root of a delta function in the normal directions. Furthermore, the modified Dirac operator must act only in the directions tangential to the 2-surface in order that the modified Dirac equation can be satisfied.

The square root of a delta function can be formally defined by starting from the expansion of the delta function in a discrete basis for a particle in a 1-D box of length L. The product of two functions in x-space is a convolution of Fourier transforms, and the Fourier coefficients of the delta function are, apart from a constant multiplier K, equal to 1: δ(x) = K Σ_n exp(i2πnx/L). Therefore the Fourier transform of the square root of the delta function is obtained by normalizing the Fourier transform of the delta function by N^{1/2}, where N → ∞ is the number of plane waves. In other words: (δ(x))^{1/2} = (K/N)^{1/2} Σ_n exp(i2πnx/L).
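
A quick numerical illustration of this definition (only a sketch, with the box length and the truncation chosen arbitrarily): truncate the sum at N modes with the normalization above and check that the square of the resulting function acts like a delta function on a smooth test function.

```python
import numpy as np

# Square root of the delta function as a truncated Fourier sum on a box x in [-L/2, L/2]:
# sqrt(delta)(x) = (K / n_modes)^(1/2) * sum_{|n| <= N} exp(i 2 pi n x / L),
# written below in real form. Its square should act as an approximate delta function.
L, N = 1.0, 200
x = np.linspace(-L / 2, L / 2, 20001)
n_modes = 2 * N + 1
K = 1.0 / L                              # Fourier coefficient of delta(x) for a box of length L

sqrt_delta = np.sqrt(K / n_modes) * (1 + 2 * sum(np.cos(2 * np.pi * n * x / L)
                                                 for n in range(1, N + 1)))

g = np.exp(-x ** 2)                      # smooth test function
dx = x[1] - x[0]
print(np.sum(sqrt_delta ** 2 * g) * dx)  # close to g(0) = 1, as expected of a delta sequence
```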

Canonical quantization defines the standard approach to the second quantization of the Dirac equation.

  1. One restricts the consideration to time = constant slices of the space-time surface. Now the 3-surfaces at the ends of CD are the natural slices. The intersection of a string world sheet with these surfaces is 1-D, whereas partonic 2-surfaces have a 2-D Euclidian intersection with them.

  2. The canonical momentum density is defined by

    $$\Pi_\alpha = \frac{\partial L}{\partial(\partial_t\bar\Psi^\alpha)} = \left(\Gamma^t\Psi\right)_\alpha \;,$$

    $$\Gamma^t = \frac{\partial L_K}{\partial(\partial_t h^k)}\,\gamma_k \;.$$

    Here L_K denotes the Kähler action density: consistency requires D_μΓ^μ = 0, and this is guaranteed only by using the modified gamma matrices defined by the Kähler action. Note that Γ^t also contains the (g_4)^{1/2} factor. Induced gamma matrices would require the action defined by the four-volume. t is a time coordinate varying in a direction tangential to the 2-surface.

  3. The standard equal time canonical anti-commutation relations state

    $$\{\Psi_\alpha(x),\, \bar\Psi^\beta(y)\} = \delta^3(x,y)\,\delta_\alpha^{\;\beta} \;.$$

Can these conditions be applied both at string world sheets and at partonic 2-surfaces?

  1. String world sheets do not pose problems. The restriction of the modes to string world sheets means that the square root of the delta function in the normal direction of the string world sheet takes care of the normal dimensions, and the dynamical part of the anti-commutation relations is 1-dimensional, just as in the case of strings.

  2. Partonic 2-surfaces are problematic. The (g_4)^{1/2} factor in Γ^t implies that Γ^t approaches zero at partonic 2-surfaces, since they belong to light-like wormhole throats at which the signature of the induced metric changes. On the other hand, the energy momentum tensor appearing in Γ^t involves two index raisings by the induced metric, so that it can grow without limit as one approaches the partonic two-surface. Therefore it is quite possible that the limit is finite, and the boundary conditions defined by the weak form of electric-magnetic duality might guarantee this. The open question is whether one can apply canonical quantization at partonic 2-surfaces. One can also ask whether one can define induced spinor fields at wormhole throats only at the ends of string world sheets, so that the partonic 2-surface would be effectively discretized. This cautious conclusion emerged in the earlier study of the modified Dirac equation.

  3. Suppose that one can assume spinor modes at partonic 2-surfaces. 2-D conformal invariance suggests that the situation reduces to an effectively one-dimensional one also at the partonic two-surfaces. If so, one should pose the anti-commutation relations only at some 1-D curves of the partonic 2-surface. This is in fact the only sensible option: the action of the modified Dirac operator is tangential, so that also the canonical momentum current must be tangential, and one can fix the anti-commutations only at some set of curves of the partonic 2-surface.

One can of course worry about what happens at the limit of vacuum extremals. The problem is that Γ^t vanishes for space-time surfaces reducing to vacuum extremals at the 2-surfaces carrying fermions, so that the anti-commutations become inconsistent. Should one require - as was done earlier - that the anti-commutation relations make sense at this limit and therefore cannot have the standard form but involve the scalar magnetic flux formed from the induced Kähler form by contracting it with the 2-D permutation symbol? The restriction to preferred extremals, which are always non-vacuum extremals, might allow one to avoid this kind of problem automatically.

In the case of the right-handed neutrino the situation is genuinely 3-dimensional, and in this case the non-vacuum extremal property must hold true in the regions where the modes of νR are non-vanishing. The same mechanism would save one from problems also at the partonic 2-surfaces. The dynamics of induced spinor fields must avoid the classical vacuum. Could this relate to color confinement? Could hadrons be surrounded by an insulating layer of Kähler vacuum?

For details and background see the new chapter The recent vision about preferred extremals and solutions of the modified Dirac equation of "Physics as Infinite-dimensional Geometry", or the article with the same title.

Tuesday, August 21, 2012

Low mass exotic mesonic structures as evidence for dark scaled down variants of weak bosons?


During the last few years reports about low mass exotic mesonic structures have appeared. It is interesting to combine these bits of data with the recent view about the TGD analog of the Higgs mechanism and see whether new predictions become possible. The basic idea is to derive an understanding of the low mass exotic structures from LHC data by scaling down, and an understanding of LHC data from the data about mesonic structures by scaling back up.


  1. The article Search for low-mass exotic mesonic structures: II. attempts to understand the experimental results by Taticheff and Tomasi-Gustafsson mentions evidence for exotic mesonic structures. The motivation came from the observation of a narrow range of dimuon masses in Σ+ → p P0, P0 → μ-μ+, in the decays of a P0 with mass 214.3 ± 0.5 MeV: the muon mass is 105.7 MeV, giving 2mμ = 211.4 MeV. Meson-like exotic states with masses M = 62, 80, 100, 181, 198, 215, 227.5, and 235 MeV are reported. This fine structure of states, with mass differences of 20-40 MeV between nearby states, is reported also for some baryons.

  2. The preprint Observation of the E(38) boson by Kh.U. Abraamyan et al reports the observation of what they call the E(38) boson, decaying to gamma pairs, observed in d(2.0 GeV/n)+C, d(3.0 GeV/n)+Cu, and p(4.6 GeV)+C reactions in experiments carried out at the JINR Nuclotron.
If these results can be replicated, they mean a revolution in nuclear and hadron physics. What strongly suggests itself is a fine structure for ordinary hadron states in a much smaller energy scale than the one characterizing hadronic states. Unfortunately the mainstream, in particular the theoreticians interested in beyond standard model physics, regards the physics of strong and weak interactions as a closed chapter of physics, and is not interested in results obtained in nuclear collisions.

In the TGD framework the situation is different. The basic characteristic of the TGD Universe is fractality. This predicts new physics in all scales, although the standard model symmetries remain fundamental - unlike in GUTs - and are reduced to number theory. The p-adic length scale hypothesis characterizes this fractality.

  1. In the TGD Universe the p-adic length scale hypothesis predicts the possibility of scaled versions of both strong and weak interactions. The basic objection against new light bosons is that the decay widths of weak bosons do not allow them. A possible manner to circumvent the objection is that the new light states correspond to dark matter in the sense that the value of Planck constant is not the standard one but its integer multiple.

    The assumption that only particles with the same value of Planck constant can appear in a vertex would explain why weak bosons do not decay directly to light dark particles. One must however allow the transformation of gauge bosons to their dark counterparts. The 2-particle vertex is characterized by a coupling having dimensions of mass squared in the case of bosons, and the p-adic length scale hypothesis suggests that the primary p-adic mass scale characterizes this parameter (the secondary p-adic mass scale is lower by a factor p^{-1/2} and would give an extremely small transformation rate).

  2. Ordinary strong interactions correspond to the Mersenne prime M107 = 2^107 - 1, in the sense that hadronic space-time sheets correspond to this p-adic prime. Light quarks correspond to space-time sheets identifiable as color magnetic flux tubes, which are much larger than the hadron itself. M89 hadron physics has a hadronic mass scale 512 times higher than ordinary hadron physics and should be observed at LHC. There exist some pieces of evidence for the mesons of this hadron physics, but they are masked by the Higgsteria.

    The original proposal that the 125 GeV state could correspond to the pion of M89 physics was wrong. The modified proposal replaces the Minkowskian pion (that is, the counterpart of the ordinary pion) with its Euclidian variant assignable to a flux tube connecting opposite throats of a wormhole contact. The Euclidian pion would provide masses for the intermediate gauge bosons via the analog of the Higgs mechanism involving the instanton density, non-vanishing only in Euclidian regions, but giving a negligible contribution to fermion masses: this would solve the hierarchy problem motivating space-time N=1 SUSY, which is not possible in the TGD Universe. The expectation is that the Minkowskian M89 pion (the real one!) has a mass around 140 GeV, the mass assigned to the CDF bump.

  3. In the leptonic sector there is evidence for leptohadron physics for all charged leptons, labelled by the Mersenne primes M127, MG,113 (Gaussian Mersenne), and M107. One can ask whether the above mentioned resonance P0 decaying to a μ-μ+ pair could correspond to the pion of muon-hadron physics, consisting of a pair of color octet excitations of the muon. Its production would presumably take place via the production of a virtual gluon pair decaying to a pair of color octet muons.

  4. The meson-like exotic states seem to be arranged along Regge trajectories, but with a string tension lower than that for the ordinary Regge trajectories with string tension T = 0.9 GeV^2. The string tension increases slowly with the mass of the meson-like state and has three values T/GeV^2 ∈ {1/390, 1/149.7, 1/32.5} in the piecewise linear fit discussed in the article. The TGD inspired proposal has been that IR Regge trajectories assignable to the color magnetic flux tubes accompanying quarks are in question. For instance, in hadrons u and d quarks - understood as constituent quarks - would correspond to k=113, and the string tension would by naive scaling be lower by a factor 2^{107-113} = 1/64: as a matter of fact, the largest value of the string tension is twice this value. For a current quark with mass scale around 5 MeV the string tension would be lower by a factor of order 2^{107-121} = 2^{-14}.
If one accepts the proposal that the 125 GeV Higgs like state discovered at LHC corresponds to the Euclidian pion, one can ask whether the new states could contain a scaled down counterpart of the Euclidian pion and whether even scaled down dark counterparts of weak bosons might be involved. These "weak" interactions would actually be of the same strength as em interactions below the hadronic length scale, and even above it they would be faster than ordinary weak interactions by a factor of 2^{36} coming from the scaling of the factor 1/m_W^4 in the simplest scattering involving weak boson exchange.
  1. The naive estimate for the mass of the M107 Euclidian pion is r × 125 GeV, r = 2^{(89-107)/2} = 2^{-9}: this would give m(πE,107) = 244 MeV. The highest state on the IR Regge trajectory mentioned in the article has mass 235 MeV. The weak bosons of M107 weak physics would have masses obtained by using the same scaling factor. This would give 156 MeV for W107 and 176 MeV for Z107. It seems that states with these masses do not belong to the reported list M = 62, 80, 100, 181, 198, 215, 227.5, and 235 MeV of masses. For k=109, which is also prime, one obtains states with m(πE,109) = 122 MeV and m(W109) = m(Z109) = 88 MeV for a vanishing value of the Weinberg angle. Also these states seem to be absent from the spectrum listed above.

  2. In the original version of the dark matter hierarchy the scalings hbar → r·hbar of Planck constant were restricted to r = 2^{11}, which is in a reasonable approximation equal to the proton/electron mass ratio. If one replaces k=107 with k=111, which corresponds to a scaling of M89 masses by a factor 2^{-11}, one obtains a scaling of M107 masses downwards by a factor 1/4.

    The Euclidian pion πE,111 would have mass 61 MeV: this is near to the mass 62 MeV reported for the lowest lying meson-like state on the IR Regge trajectory. W111 and Z111 would have masses 39 MeV and 44 MeV for the standard value of the Weinberg angle. Z decays to gamma pairs radiatively via an intermediate W pair: could Z111, with mass near 39 MeV, correspond to E(38)? If the Weinberg angle is near to zero, the masses of W111 and Z111 are degenerate, and one would have a 39 MeV mass for both. The accuracy of the mass determination for E(38) is 3 MeV, so that the mass would be consistent with the identification as Z111. Note that a small Weinberg angle means that the ratio g'/g of the U(1) and SU(2) couplings is small (the U(1) part of the ew gauge potential corresponds to the Kähler potential of CP2 in the TGD framework).

  3. These observations inspire the question whether a k=111=3×37 scaled variant of weak physics could be involved. One can of course ask why the Gaussian Mersenne MG,k, k=113, assigned to the nuclear space-time sheet, would not be realized in dark nuclear physics too. For this option the masses would be scaled down by a further factor of 1/2, to 19.5 MeV for the weak bosons and to 30.5 MeV for the Euclidian pion. Could it be that dark nuclear physics must correspond to a p-adic length scale differing by a factor 2 from that associated with ordinary nuclear physics? What is interesting is that one of the most long-standing interpretational problems of quantum TGD has been the fact that the classical theory predicts long ranged classical weak fields: the proposed solution of the problem was that the space-time sheets carrying these fields correspond to a non-standard value of Planck constant.

  4. The assumption that the Euclidian pion has a Regge trajectory conforms with the picture of an elementary particle as a flux tube pair at parallel space-time sheets, with wormhole contacts at its ends so that a closed monopole flux results. The length of the flux tube connecting different wormhole contacts would be defined by the p-adic length scale defining the weak physics in question. The interpretation of the states in terms of an IR Regge trajectory gives T111 = 1/390 GeV^2 at the lower end of the spectrum of excited states.

    One can estimate the weak string tension from the mass squared difference of the states with masses 60 MeV and 80 MeV as ΔM^2 = T111, giving 2.8 × 10^{-3} GeV^2. The lowest value of the experimental estimate is 2.6 × 10^{-3} GeV^2: the two values are consistent with each other (a small numerical cross-check of these scalings is sketched below).
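
The scalings used in the items above can be collected into a few lines of arithmetic; the inputs (125 GeV for the Euclidian pion, 80.4 and 91.2 GeV for W and Z, and the rule m(k) = m(89) × 2^{(89-k)/2}) are those stated in the text, everything else is just evaluation.

```python
# p-adic scaling of M89 masses down to k = 107, 109, 111: m(k) = m(89) * 2**((89 - k)/2)
m89 = {"pi_E": 125.0, "W": 80.4, "Z": 91.2}          # GeV

for k in (107, 109, 111):
    r = 2.0 ** ((89 - k) / 2)
    print(k, {name: round(1000 * m * r, 1) for name, m in m89.items()})   # MeV
# k=107: pi_E ~ 244 MeV; k=109: pi_E ~ 122 MeV; k=111: pi_E ~ 61, W ~ 39, Z ~ 45 MeV

# Weak string tension estimate from the 60 and 80 MeV states: Delta M^2 = T_111
T_111 = 0.080 ** 2 - 0.060 ** 2                       # GeV^2
print(T_111, 1.0 / 390)                               # 2.8e-3 GeV^2 vs the fitted 1/390 ~ 2.6e-3 GeV^2
```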

What does one obtain if one scales back the indications for a scaled-down variant of weak physics?
  1. Scaling back to M89 would give a string tension T89 = 2^{22} T111 ≈ 10.8 × 10^{-3} TeV^2. This predicts that the first excited state of the Euclidian pion has a mass of about 162 GeV (see the sketch after this list): a bump around this mass value corresponds to one of the many false alarms in Higgs hunting. There are indications for an oscillatory bump-like structure in the LHC data giving the ratio of the observed to predicted production cross section as a function of the Higgs mass. This bump structure could reflect the actual presence of an IR Regge trajectory for the Euclidian pion inducing oscillatory behavior in the production cross section.

  2. The obvious question is whether also the intermediate gauge bosons should have Regge trajectories, so that the TGD counterpart of the Higgs mechanism would take place for each state in the Regge trajectory separately. The flux tube structure made unavoidable by the Kähler magnetic charges of wormhole throats indeed suggests Regge trajectories. For M89 weak physics the first excited state of the W boson would be at 144.5 GeV if one assumes the value of T89 estimated above.
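
For the reverse scaling, the same inputs give the estimate below; the Regge-type mass formula M1^2 = M0^2 + T89 for the first excited state is an assumption used here only to reproduce the ~162 GeV figure quoted above.

```python
# Scaling the weak string tension back up to M89: T_89 = 2**22 * T_111
T_111 = 1.0 / 390                       # GeV^2, lowest fitted value
T_89 = 2 ** 22 * T_111                  # ~1.08e4 GeV^2, i.e. 10.8e-3 TeV^2
print(round(T_89 * 1e-6, 4), "TeV^2")

# First excited state of the Euclidian pion, assuming M1^2 = M0^2 + T_89
m0 = 125.0                              # GeV
m1 = (m0 ** 2 + T_89) ** 0.5
print(round(m1, 1), "GeV")              # ~162 GeV
```
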
Clearly, a lot of new physics is predicted, and it begins to look like fractality - one of the key predictions of TGD - might be realized both in the sense of the hierarchy of Planck constants (scaled variants with the same mass) and of the p-adic length scale hypothesis (scaled variants with varying masses). Both hierarchies would represent dark matter if one assumes that the values of Planck constant and p-adic length scale are the same in a given vertex. The testing of the predictions is not however expected to be easy, since one must understand how ordinary matter transforms to dark matter and vice versa. Consider only the fact that the exotic meson-like states have been observed only recently, and that modern nuclear physics - often regarded as more or less trivial low energy phenomenology - was born about 80 years ago when Chadwick discovered the neutron.

Addition: the 38 MeV particle candidate has raised a lot of attention. Also Lubos Motl and Tommaso Dorigo have commented. I expected Lubos to colorfully debunk the preprint, but this time he did not. Dorigo presents criticism as an experimentalist: I am unable to say anything about this and I am ready to accept that the 38 MeV particle is a statistical fake. Tommaso also ridicules the paper on the basis of some formal problems such as a ridiculously high precision - probably due to taking the numbers directly from a MATLAB analysis. Tommaso even wanted to make a $1000 bet that the claim will never be confirmed and accepted by the particle physics community.

It is only human that the attitude of a researcher at LHC towards a researcher in some small nuclear physics laboratory is like that of a professor towards a first year student. I have myself learned during these decades that this attitude torpedoes all attempts at intelligent communication. I share Tommaso's belief that the particle physics community will under no circumstances take the claim about a 38 MeV particle seriously, if only because it is inconsistent with basic dogmas such as the decay widths of gauge bosons. If the particle is real, one is forced to dramatically modify the basic dogmas of theoretical particle physics, and the community is certainly not mature for this; it prefers to continue to test whether the proton decays, whether standard SUSY might be there after all, whether black holes might be produced at LHC, and so on.

For some reason the claims of Taticheff and Tomasi-Gustafsson about exotic particles in the same mass scale are not commented on at all in blogs. A single claim for an anomaly cannot of course be taken seriously, but if there are two independent claims of this kind, and if some general theoretical framework can relate them, one can ask whether something interesting might be involved. There are of course numerous older anomalies which have simply been put under the rug - say the anomalies to which I assign the common umbrella term "leptohadron physics". Despite all these refined statistical methods we can see only what we want to see: LHC can see a signature only if it searches for it!

For background see the new chapter Higgs or something else? of "p-Adic length scale hypothesis and dark matter hierarchy", and the article Is it really Higgs?.

Friday, August 17, 2012

Emergent Braided Matter of Quantum Geometry


Ulla gave in the comment section of the previous posting a link to an article by Bilson-Thompson, Hackett, Kauffman, and Wan with the title "Emergent Braided Matter of Quantum Geometry". The article summarizes the recent state of an attempt to replace the space-time continuum with a discrete structure involving braids. The article satisfies high technical standards - note that one of the authors is Louis Kauffman, a leading knot theorist. The mathematician inside me is however skeptical for several reasons.

  1. Continuous symmetries - in particular Lorentz and Poincare invariance - are lost. One must however get the continuous space-time which we use to organize our observational data, and this requires ad hoc assumptions to obtain the "long wave length limit" of the theory.

    The notion of space-time dimension becomes questionable. In algebraic topology a continuous space is replaced with a web of simplexes of various dimensions embedded into the space. The simplexes with the maximum number of vertices define the dimension of the manifold. Genuine discretization would not allow the imbedding, and one would lose all information about the manifold. Everything would reduce to combinatorics, and trying to get continuous space-time from mere combinatorics is like squaring the circle.

    The notion of distance realized in terms of Riemann geometry is fundamental for (quantum) physics. Without it one has just a topological quantum field theory. Braids indeed appear naturally in topological quantum field theory. The notion of a metric must be introduced in an ad hoc manner if space-time is discretized.

  2. The proposed approach identifies particles as 3-braids (this alone does not work, and the spin network is essential for the proposed generalization). The introduction of braiding is in conflict with the original idea about discreteness at the fundamental level. Braiding requires the imbedding of the spin network into some continuous space. A continuous imbedding space would of course make it possible to introduce also the notion of length, and also the "long wave length limit" could be more than a trick of a magician. The continuous space-time seems to pop up irresistibly even in these noble attempts to get rid of it!

  3. The identification of braid invariants with standard model quantum numbers might look like an innocent operation. Braid invariants are however discrete topological invariants, whereas standard model quantum numbers are group theoretical invariants. The latter are much more refined, requiring continuum topology, differential structure, and a Riemann metric. These quantum numbers are always defined with respect to some choice of quantization axes, unlike topological quantum numbers. This makes the idea of assigning gauge interactions to topological invariants highly implausible.
These objections do not mean that braids could not be important in fundamental physics, and they indeed are in a central role in the fundamental physics predicted by TGD. In TGD one does not give up the notion of the continuum but introduces finite measurement resolution as a fundamental notion, described in terms of inclusions of hyper-finite factors at the quantum level and by discretization at the space-time level.

Finite measurement resolution is seen as a property of the state rather than a limitation preventing one from knowing everything about it. The solutions of the modified Dirac equation indeed lead to this notion automatically: by conservation of electric charge they are localized to 2-D surfaces (string orbits) of the space-time surface defining orbits of space-like braids, and to partonic 2-surfaces. Their ends at 3-D light-like wormhole throats define light-like braids.

At the quantum level the inclusion of a hyper-finite factor is part of the definition of the state and leads to a "quantum quantum theory" with non-commutative WCW ("world of classical worlds") spinors. The infinite-dimensional space of quantum states is replaced with a space of "quantum quantum states" with finite fractional dimension. More concretely:

  1. Finite measurement resolution (and finite resolution of cognition) replaces partonic 2-surfaces with the collections of ends of light-like orbits of wormhole throats. These correspond to the external particles of generalized Feynman diagrams and also define their vertices as surfaces at which the wormhole throat orbits are glued together along their ends like the lines of a Feynman diagram. Braid strands carry the quantum numbers of a fermion or antifermion (quark or lepton). String world sheets result when the space-like braid strands connecting braid ends at different wormhole throats move.

    The ends of the space-like braid strands move along the 3-D light-like orbit of the wormhole throat, defining what might be called light-like braid strands. Space-like and light-like braidings become part of fundamental physics. This physics is new physics: there is no attempt to interpret it as standard model physics.

    This new physics becomes interesting only when the number of fermions and antifermions at the partonic 2-surface is at least three, as also in the original variant of the Bilson-Thompson model. Standard model particles in the TGD framework contain only a single fermion or antifermion per wormhole throat and thus correspond to 1-braids. Experimental braid physics at the elementary particle level would therefore belong to a - I cannot say how distant - future in the TGD Universe;-).

  2. Partonic 2-surfaces could however have macroscopic size, and one can ask whether anyonic systems could correspond to large partonic 2-surfaces containing many-electron states and having a non-standard value of Planck constant scaling up the size of the wormhole contact. The electrons would move with parallel momenta and it is not clear whether this can make sense. What is interesting is that the fermionic oscillator operator algebra at the partonic 2-surface defines a SUSY algebra with a large value of N, and therefore an anyonic system would represent SUSY with large N.

In my opinion the basic problematic assumption of "Emergent Braided Matter of Quantum Geometry" is that discretization is something fundamental. A mathematician would see spin networks without imbedding as something analogous to an attempt to square the circle. The irony is that by introducing braiding the authors are forced to give up the motivating assumption! Why not try to deduce braids from a theory starting directly from space-time?

Wednesday, August 15, 2012

Twistor revolution, strings, and Blackhat


Matt Strassler has an excellent posting giving what I would call an insider view of the development of particle theory during the last two decades: recall that the twistor revolution began around 1990 (revolutions in theoretical physics do not take place overnight!). Twistor revolutionaries usually talk about N=4 SUSY or N=8 SUGRA, and one might think their work to be something totally unrelated to what the particle physicists at CERN doing the hard computations are doing. One however learns from Strassler's posting that this is not at all the case.

The groundbreaking paper by Witten and the later paper by Britto, Cachazo, Feng, and Witten led to the BCFW recursion formula for tree amplitudes, which has later been generalized by Nima Arkani-Hamed and others to planar amplitudes with loops. Only non-planar amplitudes are still out of reach. These papers had dramatic effects at the level of practical computation, since one could replace the extremely tedious Feynman diagrammatics with twistorial/Grassmannian amplitudes constructible using only on mass shell amplitudes and recursion.

The outcome was a program called BlackHat used to analyze data at LHC, for instance to find the missing energy serving as a signature of a new particle not directly visible at LHC. The theoreticians Bern, Kosower, and Dixon contributed to the development of this program with their insights. To say that these persons wrote just programs is completely misleading. They invented new methods of calculation, and one can encounter these names also in twistor papers: for instance, Bern has worked on the twistorialization of N=8 quantum gravity and suggested that this theory is finite. If this were the case, it would prove that philosophical insights (in this case Einstein's, who was not a master mathematician in the technical sense) are much more important than calculational techniques.

Strassler also expresses clearly his view about superstrings. String theory as a TOE was a failure, but the mathematical insights provided by it have been tremendous. Therefore we should gradually get rid of the division into pro-string and anti-string camps. I agree. Strings can appear in fundamental physics in many manners, and the superstring view is only one vision of how. This vision failed, as did standard SUSY. A creative theoretician can however imagine alternative visions about how string like objects could appear in fundamental physics, about what SUSY could be, and also about what color symmetry really is. In particular, a view about SUSY and an interpretation of quark color not forcing the proton to decay would be extremely interesting, if possible at all.

My personal belief is that the TGD view about strings is nearer to the truth. In this vision the 4-D space-time surface is the fundamental notion and 2-D surfaces - string like objects - emerge. "Emerge" means that they are derived objects rather than fundamental ones. These 2-surfaces are also topologically extremely interesting, since one can assign knotting and linking to them, and in a 4-D context also 2-knots become possible: this requires 4-D space-time, and dimension D=4 for space-time is a basic prediction of TGD. The recent results about preferred extremals and solutions of the modified Dirac equation imply that perturbative TGD can be formulated in terms of these 2-D surfaces, so that one indeed obtains a representation of scattering amplitudes in terms of string like objects - in an excellent approximation at least (the right-handed neutrino is expected to cause very small effects requiring 4-dimensionality).

Tuesday, August 14, 2012

M8-H duality, preferred extremals, criticality, and Mandelbrot fractals


M8-H duality (see this) represents an intriguing connection between number theory and TGD, but the mathematics involved is extremely abstract and difficult, so that I can only present conjectures. In the following the basic duality is used to formulate a general conjecture for the construction of preferred extremals by an iterative procedure. What is remarkable and extremely surprising is that the iteration gives rise to analogs of Mandelbrot fractals, and space-time surfaces can be seen as fractals defined as fixed sets of the iteration. The analogy with the Mandelbrot set can also be seen as a geometric correlate for quantum criticality.

M8-H duality


M8-H duality states the following. Consider a distribution of 2-planes M2(x) integrating to a 2-surface N2 with the property that a fixed 1-plane M1, defining the time axis globally, is contained in each M2(x) and therefore in N2. M1 defines the real axis of the octonionic space M8 and M2(x) a local hyper-complex plane. Quaternionic subspaces with this property can be parameterized by points of CP2. Define quaternionic surfaces in M8 as 4-surfaces whose tangent plane is quaternionic at each point x, contains the local hyper-complex plane M2(x), and is therefore labelled by a point s(x) ∈ CP2. One can write these surfaces as a union over 2-D surfaces associated with the points of N2:

$$X^4 = \bigcup_{x\in N^2} X^2(x)\; , \qquad X^2(x)\subset E^6\; .$$


These surfaces can be mapped to surfaces of M4 × CP2 via the correspondence (m(x), e(x)) → (m(x), s(T(X4(x)))). Also the image surface contains at a given point x the preferred plane M2(x) ⊃ M1. One can again write these surfaces as a union over 2-D surfaces associated with the points of N2:

$$X^4 = \bigcup_{x\in N^2} X^2(x)\; , \qquad X^2(x)\subset E^2\times CP_2\; .$$

One can also ask under what conditions one can map surfaces X4 = ∪x∈N2 X2(x) ⊂ E2 × CP2 to 4-surfaces in M8. The map would be given by (m,s) → (m, T4(s)), and the surface would be of the form already described. The surface X4 must be such that the distribution of 4-D tangent planes defined in M8 is integrable, and this gives complicated integrability conditions. One might hope that the conditions hold true for preferred extremals satisfying some additional conditions.

One must make clear that these conditions do not allow the most general possible surface. The point is that for preferred extremals with Euclidian signature of the metric the M4 projection is 3-dimensional and involves a light-like direction. Here one uses the fact that a light-like line L ⊂ M2 spans M2 in the sense that the complement of its orthogonal complement in M8 is M2. Therefore one could consider also a more general solution ansatz for which one has

$$X^4 = \bigcup_{x\in L_x\subset N^2} X^3(x)\; , \qquad X^3(x)\subset E^2\times CP_2\; .$$

One can also consider co-quaternionic surfaces as surfaces for which the tangent space is the dual of a quaternionic subspace containing the preferred M2(x).



The integrability conditions

The integrability conditions are associated with the expression of the tangent vectors of T(X4) as linear combinations of the coordinate gradients ∇m^k, where the m^k denote the coordinates of M8. Consider the 4 tangent vectors e_(i), i = 1,...,4, of the quaternionic tangent plane (containing M2(x)) regarded as vectors of M8; they have components e_(i)^k, k = 1,...,8. One must be able to express the e_(i) as linear combinations of the coordinate gradients ∇m^k:

$$e_{(i)}^{\;k} = e_{(i)}^{\;\alpha}\,\partial_\alpha m^k\; .$$

Here x^α and m^k denote coordinates for X4 and M8. By forming inner products of the e_(i) one finds that the matrix e_(i)^α represents the components of a vierbein at X4. One can invert this matrix to get e^(i)_α satisfying

$$e_{(i)}^{\;\alpha}\, e^{(i)}_{\;\beta} = \delta^\alpha_{\;\beta}$$

and

$$e_{(i)}^{\;\alpha}\, e^{(j)}_{\;\alpha} = \delta_i^{\;j}\; .$$

One can solve the coordinate gradients ∇m^k from the above equation to get

$$\partial_\alpha m^k = e^{(i)}_{\;\alpha}\, e_{(i)}^{\;k} \equiv E_\alpha^{\;k}\; .$$

The integrability conditions follow from the gradient property and state

$$D_\alpha E^k_{\;\beta} = D_\beta E^k_{\;\alpha}\; .$$

One obtains 8 × 6 = 48 conditions in the general case. The slicing into a union of two-surfaces labeled by M2(x) reduces the number of conditions, since the number of coordinates m^k reduces from 8 to 6 and one has 36 integrability conditions, but their number is still much larger than the number of free variables - essentially the six transversal coordinates m^k.
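
Spelled out, the counting behind these numbers is just the following (nothing beyond what is stated above):

```latex
% Integrability conditions D_alpha E^k_beta = D_beta E^k_alpha:
% antisymmetric pairs (alpha,beta) with alpha < beta on a 4-surface: C(4,2) = 6,
% M^8 coordinate components k: 8, so in the general case
\#\{\text{conditions}\} = 8\times\binom{4}{2} = 48\, ;
% with the slicing by M^2(x) only the 6 transversal coordinates m^k remain:
\#\{\text{conditions}\} = 6\times\binom{4}{2} = 36\, .
```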

For co-quaternionic surfaces one can now formulate the integrability conditions as conditions for the existence of an integrable distribution of orthogonal complements of the tangent planes, and it seems that the conditions are formally similar.

How to solve the integrability conditions and field equations for preferred extremals?


The basic idea has been that the integrability conditions characterize preferred extremals so that they can be said to be quaternionic in a well-defined sense. Could one imagine solving the integrability conditions by some simple ansatz utilizing the core idea of M8-H duality? What comes to mind is that M8 represents the tangent space of M4 × CP2, so that one can assign to any point (m,s) of a 4-surface X4 ⊂ M4 × CP2 a tangent plane T4(x) in its tangent space M8, identifiable as a subspace of complexified octonions in the proposed manner. Assume that s ∈ CP2 corresponds to a fixed tangent plane containing M2(x), and that all planes M2(x) are mapped to the same standard fixed hyper-octonionic plane M2 ⊂ M8, which does not depend on x. This guarantees that s corresponds to a unique quaternionic tangent plane for a given M2(x).

Consider the map T∘s. The map takes the tangent plane T4 at a point (m,e) ∈ M4 × E4 and maps it to (m, s1 = s(T4)) ∈ M4 × CP2. The obvious identification of the quaternionic tangent plane at (m, s1) would be as T4. One would have T∘s = Id. One could do this for all points of a quaternionic surface X4 ⊂ M8 and hope to get a smooth 4-surface X4 ⊂ H as a result. This is the case if the integrability conditions at the various points (m, s(T4)(x)) ∈ H are satisfied. One could equally well start from a quaternionic surface of H and end up with the integrability conditions in M8 discussed above. The geometric meaning would be that the quaternionic surface in H is the image of a quaternionic surface in M8 under this map.

Could one somehow generalize this construction so that one could iterate the map T∘s to get T∘s = Id at the limit? If so, quaternionic space-time surfaces would be obtained as limits of the iteration for a rather arbitrary space-time surface in either M8 or H. One can also consider limit cycles, even limiting manifolds of finite dimension, which would give quaternionic surfaces. This would give a connection with chaos theory.

  1. One could try to proceed by discretizing the situation in M8 and H. One does not fix a quaternionic surface on either side but just considers, for a fixed m2 ∈ M2(x), a discrete collection X = {T4_i ⊃ M2(x)} of quaternionic planes in M8. The points e_{2,i} ∈ E2 ⊂ M2 × E2 = M4 are not fixed. One can also assume that the points s_i = s(T4_i) of CP2 defined by the collection of planes form in a good approximation a cubic lattice in CP2, but this is not absolutely essential. Complex Eguchi-Hanson coordinates ξ_i are a natural choice for the coordinates of CP2. Assume also that the distances between the nearest CP2 points are below some upper limit.

  2. Consider now the iteration. One can map the collection X to H by mapping it to the set s(X) of pairs (m2, s_i). Next one must somehow select candidates for the points e_{2,i} ∈ E2 ⊂ M4. One can define a piecewise linear surface in M4 × CP2 consisting of 4-planes defined by the nearest neighbors of a given point (m2, e_{2,i}, s_i). The coordinates e_{2,i} for E2 ⊂ M4 can be chosen rather freely. The collection (e_{2,i}, s_i) defines a piecewise linear surface in H, consisting of four-cubes in the simplest case. One can hope that for certain choices of the e_{2,i} the four-cubes are quaternionic and that there is some further criterion allowing one to choose the points e_{2,i} uniquely. The tangent planes contain by construction M2(x), so that the product of the remaining two tangent space vectors (e3, e4) spanning the tangent space must give an element of M2 in order to achieve quaternionicity. Another natural condition would be that the resulting tangent planes are not only quaternionic but also as near as possible to the planes T4_i. These conditions allow one to find e_{2,i} giving rise to geometrically determined quaternionic tangent planes as near as possible to those determined by the s_i.

  3. What to do next? Should one replace the quaternionic planes T4_i with the geometrically determined quaternionic planes as near as possible to them, map them to points s_i slightly different from the original ones, and repeat the procedure? This would not add new points to the approximation, and this is an unsatisfactory feature.

  4. A second possibility is based on the addition of the quaternionic tangent planes obtained in this manner to the original collection of quaternionic planes. The number of points in the discretization therefore increases, and the added points of CP2 are as near as possible to the existing ones. One can again determine the points e2,i in such a manner that the resulting geometrically determined quaternionic tangent planes are as near as possible to the original ones. This guarantees that the algorithm converges.

  5. The iteration can be stopped when the desired accuracy is achieved: in other words, when the geometrically determined quaternionic tangent planes are near enough to those determined by the points si. Also limit cycles are possible and would be assignable to transversal coordinates e2,i varying periodically during the iteration. One can quite well allow this kind of cycle, and it would mean that the e2 coordinate, as a function of the CP2 coordinates characterizing the tangent plane, is many-valued. This is certainly very probable for solutions representable locally as graphs M4→ CP2. In this case the tangent planes associated with distant points in E2 would be strongly correlated, which must have non-trivial physical implications. The iteration makes sense also p-adically, and it might be that in some cases only the p-adic iteration converges for some value of p.
It is not obvious whether the proposed procedure gives rise to a smooth or even continuous 4-surface. The conditions for this are geometric analogs of the above-described algebraic integrability conditions for the map assigning to a surface in M4× CP2 a surface in M8. Therefore M8-H duality could express the integrability conditions, and preferred extremals would be 4-surfaces having counterparts also in the tangent space M8 of H.
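To make the logic of the above iteration concrete, here is a deliberately stripped-down sketch. All the actual geometry (quaternionicity of the tangent planes, the map s to CP2, the choice of the points e2,i) is replaced by an abstract map acting on a finite-dimensional state, so the function below only illustrates the generic stopping rule - convergence to a fixed point within a tolerance, or detection of a limit cycle - and is not an implementation of the TGD construction; all names are illustrative.

import numpy as np

def iterate_to_fixed_set(step, x0, tol=1e-9, max_steps=1000):
    """Iterate x -> step(x) until the change is below tol (approximate fixed
    point) or a previously visited state recurs (limit cycle).  'step' is an
    abstract stand-in for the composite map s o T of the text."""
    history = [np.asarray(x0, dtype=float)]
    for n in range(max_steps):
        x_new = step(history[-1])
        if np.linalg.norm(x_new - history[-1]) < tol:
            return "fixed point", x_new, n + 1
        for k, x_old in enumerate(history[:-1]):
            if np.linalg.norm(x_new - x_old) < tol:
                return f"limit cycle of length {len(history) - k}", x_new, n + 1
        history.append(x_new)
    return "no convergence", history[-1], max_steps

# Toy stand-in converging to a fixed point (here x = 2).
contraction = lambda x: 0.5 * x + 1.0
print(iterate_to_fixed_set(contraction, np.array([10.0])))

# Toy stand-in with a limit cycle: rotation by 90 degrees in the plane.
rot = np.array([[0.0, -1.0], [1.0, 0.0]])
print(iterate_to_fixed_set(lambda x: rot @ x, np.array([1.0, 0.0])))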

One might hope that the self-referentiality condition s∘T=Id for the CP2 projection of (m,s), or its fractal generalization, could solve the complicated integrability conditions for the map T. The image of the space-time surface in the tangent space M8 could in turn be interpreted as a description of the space-time surface using coordinates defined by the local tangent space M8. Also an analogy with the duality between position and momentum suggests itself.

Is there any hope that this kind of construction could make sense? Or could one demonstrate that it fails? If s fixed the tangent plane completely, it would probably be easy to kill the conjecture, but this is not the case: the same s corresponds to different tangent planes for different planes M2(x). Presumably they are related by a local G2 or SO(7) rotation. Note that the construction can be formulated without any reference to the representation of the imbedding space gamma matrices in terms of octonions. Complexified octonions are enough in the tangent space of M8.

Connection with Mandelbrot fractal and fractals as fixed sets for iteration

The occurrence of iteration in the construction of preferred extremals suggests a deep connection with the standard construction of 2-D fractals by iteration, of which the Mandelbrot fractal is the canonical example. X2(x) (or X3(x)) could be identified as a union of orbits for the iteration of s∘T. The appearance of the iteration map in the construction of solutions of field equations would give a positive answer to the long-standing question of whether the extremely beautiful mathematics of 2-D fractals could have some application at the level of fundamental physics according to TGD.

X2 (or X3) would be completely analogous to the Mandelbrot set in the sense that it would be a boundary separating points in two different basins of attraction. In the case of the Mandelbrot set, iteration takes points on one side of the boundary to a bounded attractor and points on the other side to infinity. The points of the set itself are permuted by the iteration. In the recent case s∘T maps X2 (or X3) to itself. This map need not be a diffeomorphism or even a continuous map. The criticality of X2 (or X3) could be seen as a geometric correlate for quantum criticality.

In fact, iteration plays a very general role in the construction of fractals. Very general fractals can be defined as fixed sets of iteration, and simple iteration rules produce impressive representations of fractals appearing in Nature. Therefore it would be highly satisfactory if space-time surfaces were in a well-defined sense fixed sets of iteration. This would also be a numerically beautiful aspect, since fixed sets of iteration can be obtained as the limit of iteration applied to an almost arbitrary initial set.
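As a reminder of what "fixed set of iteration" means in the canonical 2-D example, the following few lines test membership in the Mandelbrot set by iterating z → z2 + c and checking whether the orbit stays bounded. This is standard textbook material with no TGD-specific content.

import numpy as np

def in_mandelbrot(c, max_iter=200, bound=2.0):
    """Return True if the orbit of z=0 under z -> z**2 + c stays bounded,
    i.e. if c appears to belong to the Mandelbrot set."""
    z = 0.0 + 0.0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > bound:
            return False   # orbit escapes to infinity
    return True

# Coarse sampling of the parameter plane: points surviving the boundedness
# criterion form the familiar fractal set.
for y in np.linspace(1.2, -1.2, 25):
    print("".join("#" if in_mandelbrot(complex(x, y)) else "."
                  for x in np.linspace(-2.0, 0.8, 70)))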

What is intriguing is that there are several very attractive approaches to the construction of preferred extremals, and the challenge of unifying them still remains.

For details and background see the new chapter The recent vision about preferred extremals and solutions of the modified Dirac equation of "Physics as Infinite-Dimensional Geometry", or the article with the same title.

Monday, August 13, 2012

Instrumentalism and transformative research


Sabine Hossenfelder has a thought-provoking posting about what she calls transformative research. I wrote a comment bringing in a concept that has helped in my attempts to understand what went wrong in the basic philosophy underlying our collapsing techno-civilization, and what might be the basic reason for the deep stagnation in theoretical physics and the lack of progress, even at the level of fundamentals, in some other branches of science such as theoretical biology and theoretical neuroscience. I attach my comment below.


Hi Bee,

thank you for a nice analysis. I have pondered a lot about the possibility of supporting transformative research.

It would be wonderful if transformative research could be somehow identified, supported, and even induced by some actions. I am skeptical about the possibility of this kind of control. Conservatism characterizes the academic environment, but maybe at a deeper level this conservatism derives from a basic character of science that I would call instrumentalism: seeing the external world as an instrument to achieve goals.

Instrumentalism permeates every corner of science. For researchers, specialization is an instrument for building a career with a good social status. Scientific institutions in turn use scientists as instruments and drive them to work like mad in the hope of getting the ultimate recognition. Our technology is manipulation of Nature, seen as a passive storage of various resources to be harnessed. The global economy has become the tool of large-scale venturers to achieve personal profits using the latest information technology for their purposes. Instrumentalism is the key ideology of our technological civilization, born in the industrial revolution and probably living its last decades before the final collapse caused by instrumentalism taken to the extreme.

Even the well-intentioned idea that it might be possible to hasten the progress of science by some mechanisms encouraging transformative research could be seen as one aspect of instrumentalism.

I see transformative research as the spiritual aspect of science. Whatever spiritual is, it is not instrumental and is also beyond control. When I enjoy the beauty of a sunrise I just experience: I am not planning how to make money by arranging guided tours for those who want to pay for experiencing the sunrise and perhaps buy a drink to enhance the experience. Transformative research cannot be controlled. Transformative research is initiated by an unpredictable spark of genius which just occurs. What follows is mostly hard work and is indeed a rather predictable process. I however find it hard to believe that the academic community would identify some-one doing this kind of hard work as a transformative researcher deserving support: labeling as academic village fool is a more probable reaction;-). Our luck is that the support is unnecessary. A transformative idea creates such immense motivation that the only manner in which the academic world can stop the process is by killing the transformative researcher;-).


Addition: An example of transformative research is the work of Vladimir Voevodsky in Motivic Homotopy Theory. Voevodsky has given a wonderful lecture understandable even to a layman like me. He presents the basic ideas of homotopy theory by reducing the construction to zeroth homotopy groups counting the disjoint components of a space - something very concrete - applied to a hierarchy of spaces obtained as the space itself, the loop space associated with it (not quite so concrete a thing;-), the loop space associated with that, and so on... Then he notices that the zeroth homotopy group can be expressed in terms of purely categorical notions, and then applies the resulting formalism to what he calls algebraic systems, where the basic notions have very different content. Beautiful!

Friday, August 10, 2012

Cautious conclusions concerning gauge boson massivation

The discussion of the TGD counterpart of the Higgs mechanism gives support for the following general picture.

  1. p-Adic thermodynamics contributes to the masses of all particles including the photon and gluons: in these cases the contributions are however small. For fermions they dominate. For weak bosons the contribution from the Euclidian Higgs dominates, as the correct group theoretical prediction for the W/Z mass ratio demonstrates. The mere spin 1 character of gauge bosons implies that they are massive in the 4-D sense. The Euclidian pion does not have a tachyonic mass term in the analog of the Higgs potential, and this saves from the radiative instability which standard N=1 SUSY was hoped to solve. Therefore the usual space-time SUSY associated with the imbedding space is not needed in the TGD framework, and there are strong arguments suggesting that it is not present. For space-time regarded as a 4-surface one obtains 2-D super-conformal invariance for fermions localized at 2-surfaces, and for the right-handed neutrino it extends to a 4-D superconformal symmetry generalizing ordinary SUSY to an infinite-D symmetry.


  2. The basic predictions for LHC are the following. The Euclidian pion at 125 GeV has no charged partners and will be found to decay to fermion pairs in a manner inconsistent with the Higgs interpretation, and its pseudoscalar nature will be established. M89 hadron physics will be discovered. The Fermi satellite has produced evidence for a particle with mass around 140 GeV, and this particle could correspond to the pion of M89 physics. This particle should be observed also at LHC, and CDF already reported evidence for it earlier. There have also been indications for other mesons of M89 physics from LHC, discussed here.

  3. The new view about Higgs allows one to see several conjectures related to ZEO in a new light.

    1. The basic conjecture related to the perturbation theory is that wormhole throats are massless on-mass-shell states in the imbedding space sense: this would hold true also for virtual particles and brings to mind what happens in the twistor program. The recent progress in the construction of n-point functions leads to explicit general formulas expressing them in terms of a functional integral over four-surfaces. The deformation of the space-time surface fixes the deformation of the basis for the induced spinor fields, and one obtains a perturbation theory in which correlation functions for imbedding space coordinates and the fermionic propagator defined by the inverse of the modified Dirac operator appear as building bricks, and the electroweak gauge coupling of the modified Dirac operator defines the basic vertex. This operator is indeed 2-D for all fermions other than the right-handed neutrino.

    2. The functional integral gives expressions for amplitudes which resemble twistor amplitudes in the sense that the vertices define polygons and the external fermions are massless, although gauge bosons as their bound states are massive. This suggests a perturbation theory at the imbedding space level in which the fermionic propagator is defined by the longitudinal part of the M4 momentum. Integration over the possible choices M2⊂ M4 for CD would give Lorentz invariance and transform the propagator terms to something else. As a matter of fact, Yangian invariance suggests general expressions very similar to those obtained in N=4 SUSY for amplitudes in the Grassmannian approach.

    3. Another conjecture is that gauge conditions for gauge bosons hold true for the longitudinal (M2-) momentum and automatically allow 3 polarization states. This allows one to consider the possibility that all gauge bosons are massless in the 4-D sense. By the above argument this conjecture must be wrong. Could one do without M2 altogether? A strong argument favoring longitudinal massivation comes from p-adic thermodynamics for fermions. If p-adic thermodynamics determines the longitudinal mass squared as a thermal expectation value such that the 4-D momentum is always light-like (this is important for the twistor approach), one can assume that Super Virasoro conditions hold true for the fermion states. There are also number theoretic arguments supporting the role of a preferred M2. Also the condition that the choice of quantization axes has WCW correlates favors M2, as does the construction of generalized Feynman graphs analogous to non-planar diagrams as a generalization of knot diagrams.
The ZEO conjectures involving M2 remain open. If the conjecture that Yangian invariance is realized in terms of Grassmannians makes sense, it could allow one to deduce the outcome of the functional integral over four-surfaces, and one could hope that TGD can be transformed into a calculable theory.

For details and background see the new chapter Higgs or something else?, and the article Is it really Higgs?.

Thursday, August 09, 2012

Why is standard space-time SUSY not possible in the TGD framework?

The questions whether TGD allows space-time SUSY in the standard sense and whether a Higgs like particle is needed and exists in the TGD framework have been long-standing issues in TGD. My original answer was a strong "No" in both cases, but the answers depend on what one means by space-time SUSY and which functions Higgs is expected to serve: does it give masses to both fermions and bosons, or only to weak bosons, as in TGD, where even the photon gets a very small mass by p-adic thermodynamics.

If space-time means the space-time surface, standard space-time SUSY extends to an infinite-dimensional super-conformal symmetry, which in the case of right-handed neutrinos generalizes 2-D conformal symmetry to the 4-D context. For other fermions, whose states are localized to 2-D surfaces by the conservation of electric charge, one has 2-D super-conformal symmetry. Space-time supersymmetry in the ordinary sense means supersymmetry at the imbedding space level. The progress in the understanding of preferred extremals of Kähler action and of solutions of the modified Dirac equation, together with the results from LHC, were the decisive steps: they suggest that space-time SUSY in the standard sense is absent and that Higgs is not quite what it is expected to be.

LHC suggests that one does not have N=1 SUSY in the standard sense, and the TGD view about gauge boson massivation in terms of a pseudoscalar coupling to instanton density does not need SUSY to stabilize the radiative corrections to the tachyonic mass term, which is now absent from the beginning since it would break conformal invariance. The Mexican hat is not there!


Why can one not have standard space-time SUSY in the TGD framework? Let us begin by listing all arguments that come to mind.

  1. Could covariantly constant νR represent a gauge degree of freedom? This is plausible since the corresponding fermion current is non-vanishing.

  2. The original argument for the absence of space-time SUSY years ago was indirect: M4× CP2 does not allow Majorana spinors, so that N=1 SUSY is excluded.

  3. One can however consider N=2 SUSY by including both helicities possible for covariantly constant νR. For νR the four-momentum vanishes, so that one cannot distinguish the modes assigned to the creation operator and its conjugate via complex conjugation of the spinor. Rather, one oscillator operator and its conjugate correspond to the two different helicities of the right-handed neutrino with respect to the direction determined by the momentum of the particle. The spinors can be chosen to be real in this basis. This indeed gives rise to an irreducible representation of the spin 1/2 SUSY algebra with the right-handed neutrino creation operator acting as a ladder operator. This is however an N=1 algebra, and the right-handed neutrino in this particular basis behaves effectively like a Majorana spinor. One can argue that the system is mathematically inconsistent. By choosing the spin projection axis differently the spinor basis becomes complex. In the new basis one would have N=2, which however reduces to N=1 in the real basis.

  4. Or could it be that fermion and sfermion do exist but cannot be related by SUSY? In standard SUSY, fermions and sfermions forming irreducible representations of the super Poincare algebra are combined into components of a superfield, very much like finite-dimensional representations of the Lorentz group are combined into those of the Poincare group. In the TGD framework νR generates in the space-time interior a generalization of 2-D super-conformal symmetry, but covariantly constant νR cannot give rise to space-time SUSY.

    This would be very natural since right-handed neutrinos do not have any electroweak interactions and are delocalized into the interior of the space-time surface, unlike other particles, which are localized at 2-surfaces. It is difficult to imagine how the fermion and νR could behave as a single coherent unit reflected in the characteristic spin and momentum dependence of the vertices implied by SUSY. Rather, it would seem that fermion and sfermion should behave identically with respect to electroweak interactions.

The third argument looks rather convincing and can be developed into a precise argument.

  1. If the sfermion is to represent an elementary boson, the products of fermionic oscillator operators with the oscillator operators assignable to the covariantly constant right-handed neutrinos must define would-be bosonic oscillator operators bn = an aνR and bn† = an† aνR†. One can calculate the commutator of these product operators. Since fermionic oscillator operators associated with different modes anticommute, the corresponding bilinear operators commute. The commutator [bn,bn†] is however proportional to the occupation number for νR in the N=1 SUSY representation and vanishes for the second state of the representation (a minimal numerical check is sketched after this list). Therefore N=1 SUSY is a pure gauge symmetry.

  2. One can however have both irreducible representations of SUSY: in them either the fermion or the sfermion has a non-vanishing norm. One would have both fermions and sfermions, but they would not belong to the same SUSY multiplet, and one cannot expect SUSY symmetries for 3-particle vertices.

  3. For instance, the γ FF vertex is closely related to γ FsFs in standard SUSY (subscript "s" refers to sfermion). Now one expects this vertex to decompose into a product of the γ FFs vertex and the amplitude for the creation of a νR pair from vacuum, so that the characteristic momentum and spin dependent factors distinguishing between the couplings of the photon to a scalar and to a fermion are absent. Both states behave like fermions. The amplitude for the creation of the νR pair from vacuum is naturally equal to unity, as for an occupation number operator, by crossing symmetry. The presence of right-handed neutrinos would be invisible if this picture is correct. Whether this invisible label can have some consequences is not quite clear: one could argue that the decay rates of weak bosons to fermion pairs are doubled unless one introduces factors of 1/√2 to the couplings.
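The commutator argument of point 1 can be checked numerically in a toy setting with just two fermionic modes, one standing for a generic fermion mode and one for the covariantly constant νR. The Jordan-Wigner matrices and mode names below are my own illustration; the check only shows that in this two-mode model the bilinear b = an aνR and its conjugate fail to act as a genuine bosonic oscillator pair on the would-be fermion-sfermion doublet.

import numpy as np

# Two fermionic modes in a Jordan-Wigner representation:
# mode "f" stands for a generic fermion mode a_n, mode "nu" for nu_R.
sm = np.array([[0., 1.], [0., 0.]])   # single-mode annihilation operator
sz = np.array([[1., 0.], [0., -1.]])  # Jordan-Wigner string factor
I2 = np.eye(2)

a_f  = np.kron(sm, I2)                # annihilates the fermion mode
a_nu = np.kron(sz, sm)                # annihilates the nu_R mode

b     = a_f @ a_nu                    # would-be bosonic operator b = a_n a_nuR
b_dag = b.conj().T

comm = b @ b_dag - b_dag @ b          # [b, b†]
n_f, n_nu = a_f.T @ a_f, a_nu.T @ a_nu

# In this toy model [b, b†] = 1 - n_f - n_nu: it vanishes on the one-particle
# states (the fermion and the would-be sfermion), so b and b† do not behave as
# genuine bosonic oscillators on the SUSY doublet.
print(np.allclose(comm, np.eye(4) - n_f - n_nu))   # True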

Addition: In his postings Lubos tells about the SUSY theoreticians Li, Maxin, Nanopoulos, and Walker. They promise the discovery of proton decay, the stop squark, and the lightest supersymmetric particle from 2012 LHC data, perhaps even within a few months! I cannot promise any of these sensational events. Neither stop nor neutralino has a place in the TGD Universe, and even the proton refuses to decay;-). Even worse, there is no purpose of life for standard SUSY in the TGD Universe: in its fanatic ontological minimalism the TGD Universe does not tolerate free lunchers;-).

For details and background see the new chapter SUSY in TGD Universe of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy", and the article The recent vision about preferred extremals and solutions of the modified Dirac equation.

Two new chapters


I realized that the chapter about TGD inspired new physics had grown to gigantic proportions, and decided to split it into three parts by separating the considerations related to Higgs and SUSY into their own chapters. Below are the abstracts of the new chapters. First the abstract about the evolution of the ideas related to Higgs.

Whether a Higgs like particle is needed in the TGD framework, and whether TGD predicts such a particle, has been one of the long-standing issues of TGD, and my views have fluctuated over the years. What is clear is that fermion massivation is due to p-adic thermodynamics in the TGD framework, but the group theoretic character of gauge boson massivation suggests that a Higgs like particle giving masses only to weak bosons is needed. In this chapter the evolution of the ideas is described, and in various sections often mutually conflicting arguments are presented. The fate of the most recent identification of Higgs as the "Euclidian pion" of M89 hadron physics providing masses for gauge bosons depends on the data provided by LHC during the next years.

For details see the new chapter Higgs or something else?.

Here is the abstract of the new chapter about the evolution of the ideas related to SUSY.

The view about space-time supersymmetry differs from the standard view in many respects. First of all, the supersymmetries are not associated with Majorana spinors. Super generators correspond to the fermionic oscillator operators assignable to leptonic and quark-like induced spinors, and there is in principle an infinite number of them, so that formally one would have N=∞ SUSY.

Quite recent developments in the understanding of the modified Dirac equation (I am writing this in 2012) have led to a considerable understanding of the special role of the right-handed neutrino. Whereas all other fermions are localized to 2-D string world sheets and partonic 2-surfaces by the condition that the electromagnetic charge defined in the spinorial sense is conserved, the right-handed neutrino is delocalized over the entire space-time surface, and there is an unbroken 4-dimensional counterpart of the 2-D super-conformal symmetry associated with it. The rapid experimental progress at LHC during 2011-2012 has more or less eliminated standard SUSY, and this gives a powerful constraint on the attempts to understand what TGD SUSY could be.

As conjectured earlier, TGD indeed has also a 2-D badly broken SUSY generated by all fermion modes of the modified Dirac equation and labelled by conformal weight. This SUSY could also be interpreted as a super-conformal symmetry. It also gives rise to an extension of 2-D super-conformal symmetry to a 4-D super-conformal symmetry much larger than the ordinary super-conformal symmetry: this space-time SUSY applies at the level of space-time surfaces, but what about the TGD counterpart of conventional space-time SUSY at the level of M4 and the imbedding space? Could the covariantly constant right-handed neutrino generate it?

What remains to be understood is the role of the covariantly constant right-handed neutrino spinor carrying no momentum: it behaves like a Majorana spinor, and its helicity is not constrained by the Dirac equation. It is not clear whether the states defined by a 2-D parton and by a parton plus 4-D delocalized right-handed neutrino can be distinguished experimentally if the right-handed neutrino does not carry four-momentum. This would be a trivial explanation for the failure to find evidence for SUSY at LHC.

In fact, this argument can be developed into a more precise one: both fermions and sfermions exist and form representations of SUSY with the second state having zero norm. Therefore fermion and sfermion candidates exist but belong to different representations of SUSY, right-handed neutrinos remain invisible in the dynamics, and the characteristic spin and momentum dependent vertex factors distinguishing between particle and sparticle are absent. The loss of space-time SUSY is not a catastrophe, since it is not needed to stabilize Higgs in the TGD framework: the variant of Higgs mechanism based on a Higgs like pseudo-scalar rests on a Higgs potential containing no tachyonic Higgs mass term and is free of the problems related to the radiative instability of the tachyonic Higgs mass term.

In this chapter I discuss the evolution of the vision about SUSY in the TGD framework. There is no attempt to present a final outcome in a concise form, because I do not have such a final view yet. I present the arguments developed during the years in roughly chronological order so that the reader can see how the development has taken place. The arguments are not necessarily internally consistent and can be inaccurate.

For details see the chapter SUSY in TGD Universe.

Sunday, August 05, 2012

Pseudo-scalar Higgs as Euclidian pion?

The observations of previous postings (see this and this) and earlier work suggest that the pion field in the TGD framework is analogous to the Higgs field.

This raises questions. Assuming that QFT in M4 is a reasonable approximation, does a modification of the standard model Higgs mechanism allow one to approximate the TGD description? What aspects of the Higgs mechanism remain intact when Higgs is replaced with a pseudo-scalar? Those assignable to electroweak bosons? The key idea allowing one to answer these questions is that the "Higgsy" pion and the ordinary M89 pion are not one and the same thing: the first one corresponds to a Euclidian flux tube and the latter one to a Minkowskian flux tube. Hegel would say that one begins with a thesis about Higgs, presents an antithesis replacing Higgs with the pion, and ends up with a synthesis in which Higgs is transformed into a pseudo-scalar Higgs, a "Higgsy" pion, or a Higgs like state if you wish! Higgs certainly loses its key role in the massivation of fermions.

Can one assume that the M4 QFT limit exists?

The above approach assumes implicitly - as do all comparisons of TGD with experiment - that the M4 QFT limit of TGD exists. The analysis of the assumptions involved with this limit helps also to understand what happens in the generation of "Higgsy" pions.

  1. The QFT limit involves the assumption that quantum fields and also classical fields superpose in a linear approximation. This is certainly not true at a given space-time sheet, since the number of field-like degrees of freedom is only four by General Coordinate Invariance. The resolution of the problem is simple: only the effects of the fields carried by different space-time sheets superpose, and this takes place via multiple topological condensation of the particle on several space-time sheets simultaneously. Therefore the M4 QFT limit can make sense only for many-sheeted space-time.

  2. The light-like 3-surfaces representing lines of Feynman graphs effectively reduce to braid strands and are located just at the light-like boundary between Minkowskian and Euclidian regions, so that the fermions at braid strands can experience the presence of the instanton density also in the more fundamental description. The constancy of the instanton density can hold true in a good approximation at braid strands. Certainly the M4 QFT limit treats Euclidian regions as 1-dimensional lines, so that the instanton density is replaced with its average.

  3. In particular, the instanton density can be non-vanishing at the M4 limit, since E and B at different space-time sheets can superpose at the QFT limit, although only their effects superpose in the microscopic theory. At a given space-time sheet the instanton density I can be non-vanishing only in Euclidian regions representing lines of generalized Feynman graphs.

  4. The mechanism leading to the creation of pion like states is assumed to be the presence of strong non-orthogonal electric and magnetic fields accompanying colliding charged particles (see this): this of course in the M4 QFT approximation. Microscopically this corresponds to the presence of separate space-time sheets for the colliding particles. The generation of a "Higgsy" pion condensate or pion like states must involve the formation of wormhole contacts representing the "Higgsy" pions. These wormhole contacts must connect the space-time sheets containing strong electric and magnetic fields.
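For orientation, the abelian instanton density referred to in these points is, up to normalization conventions, just the pseudo-scalar invariant built from the electric and magnetic fields; this is why non-orthogonal E and B are needed for it to be non-vanishing.

% Standard abelian expression, written here only as a reminder of conventions:
\begin{equation}
  I \;\propto\; \epsilon^{\mu\nu\rho\sigma} F_{\mu\nu} F_{\rho\sigma}
  \;\propto\; \mathbf{E}\cdot\mathbf{B} .
\end{equation}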

Higgs like pseudo-scalar as Euclidian pion?

The recent view about the construction of preferred extremals predicts that in Minkowskian space-time regions the CP2 projection is at most 3-D. In Euclidian regions the M4 projection satisfies a similar condition. As a consequence, the instanton density vanishes in Minkowskian regions and the pion can generate a vacuum expectation only in Euclidian regions. Long Minkowskian flux tubes connecting wormhole contacts would correspond to pion like states and short Euclidian flux tubes connecting opposite wormhole throats to "Higgsy" pions.

  1. If the pseudo-scalar pion like state develops a vacuum expectation value at the QFT limit, it provides weak gauge bosons with longitudinal components just as in the case of the ordinary Higgs mechanism. The pseudo-scalar boson vacuum expectation contributes to the masses of weak bosons and predicts correctly the ratio of W and Z masses. If p-adic thermodynamics gives a contribution to weak boson masses, it must be small, as observed already earlier. The Higgs like pion cannot give dominant contributions to fermion masses, but small radiative corrections to fermion masses are possible.

    The photon would be massless in the 4-D sense unlike weak bosons. If the ZEO picture is correct, the photon would have a small longitudinal mass and should have a third polarization. One must of course remain critical concerning the proposal that longitudinal M2 momentum replaces momentum in the gauge conditions. Certainly only longitudinal momentum can appear in propagators.

  2. If 3 components of the Euclidian pion are eaten by weak gauge bosons, only a single neutral pion-like state remains. This is not a problem if the ordinary pion corresponds to a Minkowskian flux tube. Accordingly, the 126 GeV boson would correspond to the remaining component of the Higgs like Euclidian pion, and the boson with mass around 140 GeV, for which CDF has provided some evidence, to the Minkowskian M89 pion (see this), which might have also shown itself in dark matter searches (see this and this).

  3. By the previous construction one can consider two candidates for pion like pseudo-scalars as states whose form, apart from a parallel translation factor, is Ψ̄1jAkγkΨ2. Here jA is a generator of color isometries belonging either to the U(2) subalgebra or to its complement. The state in the U(2) algebra transforms as 3+1 under U(2) and the state in its complement like 2+2* under U(2) (a small numerical check of this decomposition is sketched after this list).

    These states are analogs of CP2 polarizations, whose number can be at most four. One must select one of these polarization bases. 2+2* is the unique candidate for the Higgs like pion and can be naturally assigned to the Euclidian regions having Hermitian structure. 3+1 in turn can be assigned naturally to Minkowskian regions having Hamilton-Jacobi structure.

    The ordinary pion has however only three components. If one takes seriously the construction of preferred extremals, the solution of the problem is simple: the CP2 projection is at most 3-dimensional, so that only 3 polarizations in the CP2 direction are possible and only the triplet remains. This corresponds exactly to what happens in the sigma model describing the pion field as a field having values on a 3-sphere.

  4. Minkowskian and Euclidian signatures correspond naturally to the decompositions 3+1 and 2+2*, which could be assigned to quaternionic and co-quaternionic subspaces of the SU(3) Lie algebra or of the imbedding space with tangent vectors realized in terms of the octonionic representation of gamma matrices.
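The u(2)⊂ su(3) decomposition used in point 3 can be checked concretely with Gell-Mann matrices: λ1, λ2, λ3, λ8 span a u(2) subalgebra (3+1 under u(2)), while λ4,...,λ7 span the complement transforming as 2+2*. The sketch below only verifies the algebraic statement that the subalgebra closes and that its commutator with the complement stays in the complement; the embedding chosen (SU(2) acting on the first two color indices) is a conventional choice for illustration, not something fixed by the text.

import numpy as np

# Gell-Mann matrices lambda_1..lambda_8 (index 0 unused so that indices
# match the conventional numbering).
l = np.zeros((9, 3, 3), dtype=complex)
l[1][0,1] = l[1][1,0] = 1
l[2][0,1], l[2][1,0] = -1j, 1j
l[3][0,0], l[3][1,1] = 1, -1
l[4][0,2] = l[4][2,0] = 1
l[5][0,2], l[5][2,0] = -1j, 1j
l[6][1,2] = l[6][2,1] = 1
l[7][1,2], l[7][2,1] = -1j, 1j
l[8] = np.diag([1, 1, -2]) / np.sqrt(3)

u2   = [l[1], l[2], l[3], l[8]]          # u(2) subalgebra: 3+1 under u(2)
comp = [l[4], l[5], l[6], l[7]]          # complement: 2+2* under u(2)

def in_span(X, basis):
    """Check whether X lies in the complex linear span of the given matrices."""
    A = np.array([b.flatten() for b in basis]).T
    coef, _, _, _ = np.linalg.lstsq(A, X.flatten(), rcond=None)
    return np.allclose(A @ coef, X.flatten())

# [u(2), u(2)] stays in u(2); [u(2), complement] stays in the complement.
print(all(in_span(a @ b - b @ a, u2)   for a in u2 for b in u2))    # True
print(all(in_span(a @ b - b @ a, comp) for a in u2 for b in comp))  # True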
One can proceed further by making objections.
  1. What about the kaon, which has a natural 2+2* composition but can also be understood as a 3+1 state? Is the kaon a Euclidian pion which has not suffered the Higgs mechanism? Kaons consist of u sbar, d sbar and their antiparticles. Could this non-diagonal character of kaon states explain why all four states are possible? Or could the kaon correspond to the Minkowskian triplet plus the singlet remaining from the Euclidian variant of the kaon? If so, then the neutral kaons having very nearly the same mass - the so-called short-lived and long-lived kaons - would correspond to Minkowskian and Euclidian variants of the kaon. Why should the masses of these states be so near to each other? Could this relate closely to CP breaking for non-diagonal mesons involving mixing of Euclidian and Minkowskian neutral kaons? Why would CP symmetry require mass degeneracy?

  2. Are there also M107 electroweak gauge bosons? Could they correspond to a dark variant of electroweak bosons with a non-standard value of Planck constant? This would predict the existence of an additional - possibly dark - pion-like state lighter than the ordinary pion. The Euclidian neutral pion would have mass about (125/140)× 135 MeV ≈ 120 MeV from a scaling argument. Interestingly, there is evidence for satellites of the pion, and they include also states which are lighter than the pion. The reported masses of these states are M = 62, 80, 100, 181, 198, 215, 227.5, and 235 MeV. A state with this mass is not included. The interpretation of these states in the TGD framework is as IR Regge trajectories.

How is the vacuum expectation of the pseudo-scalar pion generated?

Euclidian regions have a 4-D CP2 projection, so that the instanton density is non-vanishing and the Euclidian pion generates a vacuum expectation. In the following an attempt is made to understand the details of this process using the unique Higgs potential consistent with conformal invariance.

  1. One should realize the linear coupling of the Higgs like pion to the instanton density. The problem is that Tr(F∧F π) does not make sense as such, since π is defined in terms of gamma matrices of CP2 and F in terms of sigma matrices. One can however map gamma matrices to sigma matrices in a natural manner by using the quaternionic structure of CP2. γ0 corresponding to e0 is mapped to the unit matrix and γi to the corresponding sigma matrix: γi→ εijkσjk. This map is natural for the quaternionic representation of gamma matrices. What is crucial is the dimension D=4 of CP2 and the fact that it has U(2) holonomy.


  2. The vacuum expectation value derives from the linear coupling of the pion to the instanton density. If the instanton density is purely electromagnetic, one obtains the correct pseudo-scalar Higgs vacuum expectation commuting with the photon.


  3. If the action density contains only the mass term mπ2π2/2 plus the instanton term (1/(32π2fπ)) π I, where I is the instanton density, one obtains the standard PCAC relation between the vacuum expectation of the pion field and the instanton density (a short derivation is sketched after this list):

    π0 = (1/(32π2fπmπ2)) I .

    This relation appears also in the model for leptopion production. In the standard model the mass term must be tachyonic. This leads to the so-called hierarchy problem. The source of the problem is the couplings of Higgs to fermions, which are proportional to the fermion mass. The radiative corrections to the Higgs mass squared are positive and proportional to the fermion mass, so that the top quark gives the dominant contribution. This implies that the sign of the mass squared can become positive, so that the state with vanishing vacuum expectation value of the Higgs field becomes the ground state. In the recent situation this is not a problem, since fermions couple to the pion like state only radiatively.

  4. When one adds to the fields appearing in the classical instanton term the quantum counterparts of the electroweak gauge fields, one obtains an action giving rise to the anomaly term inducing the anomalous decays to gamma pairs and also to other weak boson pairs. The relative phase between the instanton term and the kinetic term of the pion like state is highly relevant to the decay rate. If the relative phase corresponds to the imaginary unit, then the rate is just the sum of the anomalous and non-anomalous rates, since interference is absent.
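For completeness, here is the one-line derivation behind the displayed relation of point 3, written for a constant instanton density I and with the overall sign left open since it depends on the sign convention chosen for the instanton coupling.

% Minimal derivation under the stated assumptions:
\begin{align}
  V(\pi) &= \tfrac{1}{2}\, m_\pi^2\, \pi^2 \;-\; \frac{1}{32\pi^2 f_\pi}\, \pi\, I ,\\
  \left.\frac{dV}{d\pi}\right|_{\pi_0} = 0
  \quad &\Longrightarrow\quad
  \pi_0 \;=\; \frac{I}{32\pi^2\, f_\pi\, m_\pi^2} .
\end{align}
% With a tachyonic mass term (m_\pi^2 < 0) the \pi = 0 state is instead unstable,
% which is the standard-model situation contrasted with in the text.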

What is the window to M89 hadron physics?

Concerning the experimental testing of the theory, one should have a clear answer to the question of what the window to M89 hadron physics is. One can imagine several alternative windows.

  1. Two-gluon states transforming to M89 gluons could be one possibility, proposed earlier. The model contains a dimensional parameter characterizing the amplitude for the transformation of an M107 gluon to an M89 gluon. Dimensional parameters are however not welcome.

  2. Instanton density as the portal to the new hadron physics would be a second option but works only in Euclidian signature. One can however argue that M89 Euclidian pions represent just electroweak physics and cannot act as a portal.

  3. Electroweak gauge bosons correspond to closed flux tubes decomposing to long and short parts. The two short flux tubes associated with the two wormhole contacts connecting the opposite throats define the "Higgsy" pions. The two long flux tubes connect two wormhole contacts at a distance of order the weak length scale and define M89 pions, and mesons in the more general case. In the case of weak bosons the second end of the long flux tube contains a neutrino pair neutralizing the weak isospin, so that the range of weak interactions is given by the length of the long flux tube. For M89 the weak isospins at the ends need not sum up to zero, and also other states than a neutrino pair are allowed, in particular single fermion states. This allows an interpretation as an electroweak "de-confinement" transition producing M89 mesons and possibly also baryons. This kind of transition would be rather natural and would not require any specific mechanisms.

For a TGD based discussion of the general theoretical background for Higgs and a possible TGD inspired interpretation of the new particle as a pionlike state of a scaled variant of hadron physics see Is it really Higgs?. See also the chapter "New particle physics predicted by TGD: Part I" of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".