https://matpitka.blogspot.com/2011/10/

## Sunday, October 30, 2011

### Quantum Arithmetics and the Relationship between Real and p-Adic Physics

p-Adic physics involves two only partially understood questions.

1. Is there a duality between real and p-adic physics? What is its precise mathematical formulation? In particular, what is the concrete map taking p-adic physics at long length scales (in the real sense) to real physics at short length scales? Can one find a rigorous mathematical formulation of the canonical identification induced by the map p→ 1/p in the pinary expansion of a p-adic number such that it is both continuous and respects symmetries?

2. What is the origin of the p-adic length scale hypothesis suggesting that primes near powers of two are physically preferred? Why are Mersenne primes especially important?

A possible answer to these questions relies on the following ideas inspired by the model of the Shnoll effect. The first piece of the puzzle is the notion of quantum arithmetics, formulated in a non-rigorous manner already in the model of the Shnoll effect.

1. Quantum arithmetics is induced by the map of primes to quantum primes by the standard formula. A quantum integer is obtained by mapping the primes in the prime decomposition of an integer to quantum primes. The quantum sum is induced by the ordinary sum by requiring that also summation commutes with the quantization.

2. The construction is especially interesting if the integer defining the quantum phase is prime. One can introduce the notion of quantum rational, defined as a series in powers of the preferred prime defining the quantum phase. The coefficients of the series are quantum rationals for which neither numerator nor denominator is divisible by the preferred prime.

3. The p-adic--real duality can be identified as the analog of the canonical identification induced by the map p→ 1/p in the pinary expansion of a quantum rational. This map maps p-adic and real physics to each other and real long distances to short ones and vice versa. The map is especially interesting as a map defining cognitive representations.
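The "standard formula" for quantum integers mentioned above can be illustrated with a short sketch. The code below uses the conventional quantum integer [n]_q = (q^n - q^(-n))/(q - q^(-1)) with q = exp(iπ/m); the choice m = 5 is an arbitrary illustration, not taken from the text, and the point is only that the quantum integer of n is defined as the product of the quantum primes in its prime decomposition.

```python
import cmath

def quantum_integer_single(k, m):
    """Standard quantum integer [k]_q for q = exp(i*pi/m)."""
    q = cmath.exp(1j * cmath.pi / m)
    return (q**k - q**-k) / (q - q**-1)

def prime_factors(n):
    """Prime factors of n with multiplicity, by trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def quantum_integer(n, m):
    """Quantum integer of n: product of quantum primes of its prime decomposition."""
    result = 1
    for p in prime_factors(n):
        result *= quantum_integer_single(p, m)
    return result
```

The essential feature is that quantum_integer(6, m) equals [2]_q [3]_q, which in general differs from [6]_q: multiplication is required to commute with the quantization prime by prime.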

Quantum arithmetics inspires the notion of a quantum matrix group as a counterpart of a quantum group for which the matrix elements are ordinary numbers. Quantum classical correspondence and the notion of finite measurement resolution, realized at the classical level in terms of discretization, suggest that these two views about quantum groups are closely related. The preferred prime p defining the quantum matrix group is identified as the p-adic prime, and the canonical identification p→ 1/p is a group homomorphism so that symmetries are respected.
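For concreteness, the canonical identification induced by p→ 1/p can be sketched in a few lines: a p-adic number with pinary expansion sum a_n p^n is mapped to the real number sum a_n p^(-n). The sketch below handles only non-negative integers (finite pinary expansions), which suffices to show how long p-adic scales map to short real ones.

```python
def canonical_identification(n, p):
    """Map an integer with pinary digits a_k (n = sum a_k p^k) to sum a_k p^(-k)."""
    real, power = 0.0, 1.0
    while n > 0:
        real += (n % p) * power   # digit a_k times p^(-k)
        power /= p
        n //= p
    return real
```

For example, with p = 5 the number p itself (pinary digits 0, 1) maps to 1/5 = 0.2, and p^2 maps to 1/25: large powers of p, which are p-adically large-scale, map to small real numbers.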

1. The quantum counterparts of the special linear groups SL(n,F) always exist. This is the case for the covering group SL(2,C) of SO(3,1), so that 4-dimensional Minkowski space is in a very special position. For orthogonal and unitary groups the quantum counterpart exists only if quantum arithmetics is characterized by a prime rather than a general integer and when the number of powers of p for the generating elements of the quantum matrix group satisfies an upper bound characterizing the matrix group.

2. For the quantum counterparts of SO(3) (SU(2)/SU(3)) the orthogonality conditions state that at least some multiples of the prime characterizing quantum arithmetics are sums of three (four/six) squares. For SO(3) this condition is strongest and is satisfied for all integers which are not of the form n = 2^(2r)(8k+7). The number r_3(n) of representations as a sum of three squares is known, and r_3(n) is invariant under the scalings n → 2^(2r) n. This means scaling by 2 for the integers appearing in the square sum representation.

3. r_3(n) is proportional to the so-called class number function h(-n) telling how many non-equivalent decompositions algebraic integers have in the quadratic algebraic extension generated by (-n)^(1/2).
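The condition quoted in item 2 is Legendre's three-square theorem, and it is easy to check numerically. The brute-force sketch below counts ordered representations n = a^2 + b^2 + c^2 over (possibly negative) integers and tests the exclusion n = 4^r(8k+7).

```python
from math import isqrt

def r3(n):
    """Number of ordered integer triples (a, b, c) with a^2 + b^2 + c^2 = n."""
    count = 0
    bound = isqrt(n)
    for a in range(-bound, bound + 1):
        for b in range(-bound, bound + 1):
            c2 = n - a * a - b * b
            if c2 < 0:
                continue
            c = isqrt(c2)
            if c * c == c2:
                count += 2 if c > 0 else 1   # (a, b, c) and (a, b, -c)
    return count

def is_excluded(n):
    """True if n is of the form 4^r (8k + 7), i.e. not a sum of three squares."""
    while n % 4 == 0:
        n //= 4
    return n % 8 == 7

# 7 = 8*0 + 7 and 28 = 4*7 are excluded, so r3(7) = r3(28) = 0;
# the invariance r3(4n) = r3(n) can also be checked directly.
```

Note that Mersenne numbers 2^n − 1 with n ≥ 3 are congruent to 7 mod 8, so is_excluded returns True for them, in line with the observation about Mersenne primes made below.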

The findings about quantum SO(3) suggest a possible explanation for p-adic length scale hypothesis and preferred p-adic primes.

1. The basic idea is that the quantum matrix group, which is discrete, is very large for the preferred p-adic primes. If cognitive representations correspond to representations of the quantum matrix group, the representational capacity of cognitive representations is high for these primes, and such primes are survivors in the algebraic evolution leading to algebraic extensions with increasing dimension.

2. The preferred primes correspond to a large value of r_3(n). It is enough that some of their multiples do so (the 2^(2r) multiples of these do so automatically). Indeed, for Mersenne primes and Mersenne integers 2^n - 1 one has r_3(n) = 0, which was in conflict with the original expectations. For the integers n = 2M_m, however, r_3(n) is a local maximum, at least for the small integers studied numerically.

3. The requirement that the notion of quantum integer applies also to algebraic integers in quadratic extensions of rationals requires that the preferred primes (p-adic primes) satisfy p = 8k+7. Quite generally, for the integers n = 2^(2r)(8k+7) not representable as a sum of three squares, the decomposition of ordinary integers into algebraic primes in the quadratic extensions defined by (-n)^(1/2) is unique. Therefore also the corresponding quantum algebraic integers are unique for the preferred ordinary prime if it is prime also in the algebraic extension. If this were not the case, two different decompositions of one and the same integer would be mapped to different quantum integers. Therefore the generalization of quantum arithmetics defined by any preferred ordinary prime which does not split into a product of algebraic primes is well-defined for p = 8k+7.

4. This argument was for quadratic extensions, but also more complex extensions defined by higher polynomials exist. The allowed extensions should allow a unique decomposition of integers into algebraic primes. The prime defining the quantum arithmetics should not decompose into algebraic primes. If the algebraic evolution leads to algebraic extensions of increasing dimension, it gradually selects the preferred primes as survivors.
For details and background see the new chapter "Quantum Arithmetics and the Relationship between Real and p-Adic Physics" of "Physics as Generalized Number Theory".

## Saturday, October 29, 2011

### More about strange charged trilepton events

I already told about indications for strange charged tri-lepton events at CMS. The inspiration came from Lubos's posting CMS sees SUSY-like tri-lepton excesses.

Only a few days later both Tommaso and Lubos discussed a quite recent paper telling about charged tri-lepton events observed at CMS.

1. From Tommaso's posting one learns that three charged leptons with total mass near the Z^0 mass have been observed. Charge conservation of course requires a fourth charged lepton if the particles originate in the decay of Z^0 as assumed, and Tommaso argues that this lepton has so low an energy that it is not detected. Such a lepton could result from an energy-asymmetric decay of a photon. The assumption that Z^0 is the decaying particle might however be unnecessarily strong: it could quite well be W with almost the same mass. In this case charge conservation allows a genuine charged tri-lepton event. The discussion of my earlier posting suggests the decay W → sW + sZ as the source of the charged tri-lepton events.

2. The authors of the paper propose that the reaction could be initiated by the decay of a squark or gluino and necessarily involves R-parity breaking. There are two possible options for R-parity breaking allowed by proton stability, depending on whether it conserves lepton or baryon number. For the lepton number violating option the intermediate particle is a neutralino (the lightest sparticle, which is stable in R-parity conserving scenarios), and for the baryon number violating scenario a bino or higgsino. The R-parity violating decay of the lightest spartner (neutral) would yield a slepton-lepton pair, and the R-parity violating decay of the slepton a lepton pair plus a neutrino. This would produce, instead of a single observed lepton, a charged tri-lepton state. The authors do not give enough details to make it possible for a non-professional to deduce what the detailed model for the process really is.

It is interesting to consider the situation in TGD framework in light of the crucial additional data (the three charged leptons have mass rather near to that of Z0 and therefore to that of W).

1. The decay W → sW + sZ, with the decays of sW and sZ proceeding in either of the two manners discussed in the previous posting, would predict that the total mass of all particles produced is near the W mass (and therefore the Z mass) and would also explain why one obtains genuine charged tri-lepton states. The problem is that missing energy in the form of neutrinos and neutral sparticles is present, and it is not at all clear why this energy should be small.

2. An option not discussed in the previous posting is the decay W → sν + L followed by the decays sν → L + sW and sW → L + sν; this would not break R-parity and would produce sν. The total energy would correspond to the W mass, but it is not clear why the missing energy assigned with sν should be small.

3. The R-parity violation predicted by TGD however allows one to also consider the direct decay sν → L^+ + L^- so that there would be no missing energy. One could say that the decay is the reversal of a process in which L^+ + L^- annihilates to a sν, identifiable as a pair of neutrino and right-handed neutrino at the microscopic level. All standard model quantum numbers would be conserved.

In the TGD framework R-parity violation is a prediction of the theory, and it would not violate either baryon or lepton number conservation. There is no need to assume an undetected charged lepton since charge conservation allows a charged tri-lepton final state as such without any missing energy. Obviously the TGD based model is by several orders of magnitude simpler than the model based on standard SUSY.

For details and background see the chapter New particle physics predicted by TGD: part I of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

## Saturday, October 22, 2011

### 3-jet and 9-jet events as a further evidence for M89 hadron physics?

Lubos Motl told about slight 3-jet and 9-jet excesses seen by the CMS collaboration in LHC data. There is an article about the 3-jet excess titled Search for Three-Jet Resonances in pp Collisions at s^(1/2) = 7 TeV by the CMS collaboration. Figure 3 of the article, and also the figure of Lubos's blog posting, shows what has been found. In the 3-jet case the effect exceeds the 3-sigma level between 350 GeV and 410 GeV, and the center is around 380-390 GeV.

Experimenters see the 3-jets as 1.9 sigma evidence for SUSY. It is probably needless to say that 1.9 sigma evidences come and go and should not be taken seriously. A gluino pair would be produced, and each gluino with mass around 385 GeV would decay to three quarks producing three jets. In the tri-jet case altogether 3+3=6 jets would be produced in the decays of the gluinos. The problem is that there is no missing energy, which the MSSM scenario without R-parity breaking would predict. Therefore the straightforward proposal of the CMS collaboration is that R-parity is broken by a coupling of the gluino to a 3-quark state, so that the gluino would effectively have quark number three and can decay to 3 light quarks, say uds.

The basic objection against this idea is that the distribution of 3-jet masses is very wide, extending from 75 GeV (slightly below 100 GeV for selected events) to about 700 GeV, as one learns from figure 1 of the CMS preprint. The resonance interpretation does not look convincing to me, and in my humble opinion this is a noble but desperate attempt to save the standard view about SUSY. After proposing the explanation which follows I realized to my surprise that I had already earlier tried to explain the 390 GeV bump in terms of an M89 baryon but found that this explanation fails since the mass is too low to allow this interpretation (see this).

There is also an article about nona-jets titled Has SUSY Gone Undetected in 9-jet Events? A Ten-Fold Enhancement in the LHC Signal Efficiency, but I will not discuss this except by noticing that nona-jet events would serve as a unique signature of M89 baryon decays in the TGD framework if the proposed model for tri-jets is correct.

Before continuing I want to make clear my motivations for spending time thinking about this kind of events, which are probably statistical fluctuations. If I were an opportunist I would concentrate all my efforts on making maximum noise about the successes of TGD. I am however an explorer rather than a career builder, and physics is to me a passion, something much more inspiring than personal fame. My urge is to learn what TGD SUSY is and what it predicts, and this kind of activity is the best manner to do it.

1. Could one interpret the 3-jet events in terms of TGD SUSY without R-parity breaking?

I already mentioned the very wide range of 3-jet distribution as a basic objection against gluino pair interpretation. But just for curiosity one can also consider a possible interpretation in the framework provided by TGD SUSY.

As I have explained in the article, one could understand the apparent absence of squarks and gluinos in the TGD framework in terms of shadronization, which would be a faster process than the selectro-weak decays of squarks so that the standard signatures of SUSY (jets plus missing energy) would not be produced. The mass scales and even masses of quark and squark could be identical apart from a splitting caused by mixing. The decay widths of weak bosons do not however allow light exotic fermions coupling to them, and in the case of ordinary hadron physics this requires that squarks are dark, having therefore a non-standard value of Planck constant coming as an integer multiple of the ordinary Planck constant (see this). For M89 hadron physics this constraint is not necessary.

One can indeed imagine an explanation for 3-jets in terms of decays of gluino pair in TGD framework without R-parity breaking.

1. Both gluinos would decay as sg → sq + q* (or the charge conjugate of this), and the squark in turn decays as sq → q + sg. This would give a quark pair and two virtual gluinos. The virtual gluinos would transform to a quark pair by an exchange of a virtual squark: sg → q + sq*. This would give 3 quark jets and 3 antiquark jets.

2. Why is this option, possible also in MSSM, not considered by the CMS collaboration? Do the bounds on squark masses make the rate quite too low? The very strong lower bounds on squark masses in MSSM type SUSY were indeed known towards the end of August when the article was published. In the TGD framework these bounds are not present since squarks could appear with the masses of ordinary quarks if they are dark in the TGD sense. Gluinos would however be dark, and the amplitude for the phase transition transforming a gluon to its dark variant decaying to a gluino pair could make the rate too low.

3. If one takes the estimate for the M89 gluino mass seriously and scales it by a factor 1/512 to a very naive mass estimate for the M107 gluino, one obtains m(sg107) = 752 MeV.

As already noticed, I do not take this explanation too seriously: the tri-jet distribution is quite too wide.

2. Could tri-jets be interpreted in terms of decays of M89 quarks to three ordinary quarks?

3+3 jets are observed, and they correspond to 3 quarks and 3 antiquarks. If one takes the 3-jet excess seriously, it seems that one has to assume a fermion decaying to 3 quarks or two quarks and an antiquark. All these quarks could be light (u, d, s type quarks).

Could M89 quarks decaying to three M107 (ordinary) quarks (q89 → q107 q107 q*107) be in question? If this were the case, the 9-jets might allow an interpretation as decays of the M89 proton or neutron, whose mass from naive scaling would be 512 × 0.94 GeV ≈ 481 GeV, resulting when each quark of the nucleon decays to three ordinary quarks. Nona-jets would serve as a unique signature for the production of M89 baryons!
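The naive scalings quoted in this posting follow from the p-adic mass scale ratio between M107 and M89 physics, 2^((107-89)/2) = 2^9 = 512, and are simple to check:

```python
# Mass scales of M107 (ordinary) and M89 hadron physics differ by the
# p-adic factor 2^((107-89)/2) = 2^9 = 512.
scale = 2 ** ((107 - 89) // 2)
assert scale == 512

m_nucleon = 0.94                      # GeV, ordinary nucleon mass
print(round(scale * m_nucleon))       # M89 nucleon estimate: 481 (GeV)

m_sg89 = 385.0                        # GeV, gluino mass estimate from the 3-jet fit
print(round(m_sg89 / scale * 1000))   # scaled-down M107 gluino: 752 (MeV)
```

This reproduces both the 481 GeV M89 nucleon estimate here and the 752 MeV M107 gluino estimate mentioned above.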

M89 quarks must decay somehow to ordinary quarks.

1. The simplest guess is that the transformation q89 → q107q107q*107 begins with the decay q89→ q107 + g89. Here g89 can be virtual.

2. This would be followed by g89→ q107+q*107. The final state would consist of two quarks and one antiquark giving rise to tri-jet. The decay of M89 gluon could produce all quark families democratically apart from phase space factors larger for light quarks. This would produce 3+3 jets with a slight dominance of light quark 3-jets.

There are two options to consider. The first option corresponds to the production of a pair of on mass shell M89 quarks with mass around 385 GeV (the resonance option), and the second option to the production of a pair of virtual M89 quarks, suggested by the wide distribution of tri-jets.

1. Could the resonance interpretation make sense? Can the average 3-jet mass of about 385 GeV correspond to the mass of an M89 quark? The formula m(π89) = 2^(1/2) m(u89) (mass squared is additive) together with m(π89) = 144 GeV would give m(u89) ≈ 101.8 GeV. Unfortunately the mass proposed for the gluino is almost 4 times higher. The naive scaling by a factor 512 of the charmed quark mass m(c107) = 1.29 GeV would give 660.5 GeV, which is quite too high. It seems very difficult to find any reasonable interpretation in terms of decays of on mass shell M89 quarks with mass around 385 GeV.

2. One can however consider completely different interpretation. From figure 1 of the CMS preprint one learns that the distribution of 3-jet masses is very wide beginning around 75 GeV (certainly consistent with 72 GeV, which is one half of the predicted mass 144 GeV of M89 pion) for all triplets and slightly below 100 GeV for selected triplets.

Could one interpret the situation without selection by assuming that a pair of M89 quarks forming a virtual M89 pion is produced, just as naively expected if the old-fashioned proton-pion picture makes sense at "low" energies (using of course the M89 QCD Λ as the natural mass scale) also for M89 physics? The total mass of the M89 quark pair would be above 144 GeV, and its decay to a virtual M89 quark pair would give a quark pair with quark masses above 72 GeV. Could the selected events with total 3-jet mass above 100 GeV correspond to the production of a virtual M89 quark pair?
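The arithmetic behind the two mass estimates discussed above is a one-liner each and worth verifying:

```python
from math import sqrt

# From additive mass squared, m(pi89) = sqrt(2) * m(u89), with m(pi89) = 144 GeV:
m_u89 = 144.0 / sqrt(2)     # ≈ 101.8 GeV, as quoted in the resonance discussion
# Naive scaling of the charm mass m(c107) = 1.29 GeV by the factor 512:
m_c89 = 512 * 1.29          # ≈ 660.5 GeV, quite too high for the 385 GeV bump
# Half of the predicted M89 pion mass, the lower edge of the 3-jet distribution:
half_pion = 144.0 / 2       # = 72 GeV
```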

To sum up, if one takes the indications for 3-jets seriously, the interpretation in terms of M89 hadron physics is the most plausible TGD option. I am unable to say anything about the 9-jet article but 9-jets would serve as a unique and very dramatic signature of M89 baryons: the naive prediction for the mass of M89 nucleon is 481 GeV.

For details and background see the article Is the new boson reported by CDF pion of scaled up variant of hadron physics? and the chapter New particle physics predicted by TGD: part I of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

## Friday, October 21, 2011

### Strange trilepton events at CMS

Lubos Motl reports that CMS sees SUSY-like trilepton excesses. Also Matt Strassler tells about indications that something curious has been detected at the Large Hadron Collider. Probably a statistical fluctuation is in question, as so many times earlier. The dream to discover SUSY easily leads to misinterpretations. Trilepton events however provide an excellent opportunity to learn about SUSY in the TGD framework.

The recent view about TGD SUSY briefly

Before continuing it is good to say something about what SUSY in TGD Universe might mean and also about expected masses of squarks and sleptons and intermediate gauge bosons in TGD Universe. The picture is of course preliminary and developing all the time in strong interaction with experimental input from LHC so that there is no guarantee that I agree with this view for the rest of my life.

1. In the TGD framework the super-partner of a particle is obtained by adding to the partonic 2-surface a parallelly moving right-handed neutrino or antineutrino so that one has N=1 SUSY. It must be emphasized that one has higher SUSYs, but they are badly broken. Allowing both right-handed neutrino and antineutrino one obtains N=2 SUSY, and interpreting all fermionic oscillator operators as generators of SUSY one obtains a badly broken SUSY with rather large N, which is however rendered finite by the finite measurement resolution inducing a cutoff on the number of fermionic oscillator operators.

2. R-parity is broken in TGD SUSY since a sparticle can decay to a particle and a neutrino. Therefore all neutral sparticles manifesting themselves as missing energy in the TGD framework eventually decay and produce neutrinos as the eventual missing energy. The decay rates to particles and neutrinos can however be so slow that photinos and sneutrinos leave the detector volume before decaying.

3. The basic assumption is that particle and sparticle obey the same mass formula apart from a p-adic mass scale that can be different. For instance, the masses of sleptons are half-octaves of lepton masses. This breaking of SUSY is extremely elegant and is an absolutely essential part of ordinary particle massivation too, explaining the enormous mass scale differences between neutrinos and top quark in a natural manner.

4. I have proposed that the super-partners of M107 quarks (ordinary quarks) and the gluon could have the same mass scale but be dark in the TGD sense, in other words have a Planck constant which is an integer multiple of the ordinary Planck constant. This is required by the fact that intermediate gauge boson decay widths do not allow light exotic particles. This hypothesis could allow one to understand the exotic X and Y mesons, and also the absence of smesons containing light squarks could be understood. Since shadronization is expected to proceed much faster than the selectro-weak decays of squarks, the squarks of M89 hadron physics need not be dark, and M89 shadrons might be there. The fruitless search for squarks would be based on wrong signatures if this is the case, and already now we would have direct evidence for the squarks of M89 hadron physics.

5. Only the decays of electro-weak gauginos and sleptons would produce the standard signatures.

1. Charged sleptons must have large p-adic scales in TGD Universe. Ordinary leptons correspond to Mersenne prime M127, Gaussian Mersenne MG,113, and Mersenne prime M107. If also sleptons obey this rule, they would correspond to the Mersenne primes M89 and Gaussian Mersennes MG,n, n= 79,73. Assuming that particle and sparticle obey the same mass formula apart from different p-adic mass scale, the masses of selectron, smuon, and stau would be about 267 GeV, 13.9 TeV, and 164.6 TeV. Only selectron is expected to be visible at LHC.

2. About the mass scales of sneutrinos it is difficult to say anything definite. A natural guess is that sneutrinos are relatively light so that they would be produced in the decays of sleptons and electro-weak gauginos. Same applies to photino. These particles are good candidates to missing energy unless their decay to particle plus neutrino is fast enough.

3. There seem to be no strong constraints on the mass scales of sW and sZ. The mass scale could even be M89, characterizing W and Z. The p-adic length scale hypothesis predicts that the p-adic mass scale is a half octave of the intermediate boson mass scale, and if the Weinberg angle is the same, the masses are half octaves of the W/Z masses.

6. The most general option inspired by twistorial considerations (absence of IR divergences) and zero energy ontology is that both Higgs like states and Higgsinos and their higher spin generalizations are eaten so that the outcome is a spectrum of massive states. This might have something to do with the phenomenon in which some supersymmetry generators annihilate physical states. In any case the fermions at wormhole throats are always massless; even the virtual particles identified in terms of wormhole contacts consist of massless wormhole throats, which can have also negative energy.
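The slepton mass estimates quoted in item 5 can be reproduced under the stated assumption that sparticle masses are half-octave (factor sqrt(2)) multiples of the lepton masses. The half-octave exponents below are my reading of the Mersenne-prime assignments, chosen to match the quoted values, and are not given explicitly in the text:

```python
# Lepton masses in GeV (PDG values).
m_e, m_mu, m_tau = 0.000511, 0.10566, 1.777

# Half-octave scalings 2^(k/2); the exponents are assumptions reconstructed
# from the quoted slepton masses, not taken verbatim from the posting.
m_selectron = m_e * 2 ** ((127 - 89) / 2)   # ≈ 268 GeV   (quoted: 267 GeV)
m_smuon = m_mu * 2 ** ((113 - 79) / 2)      # ≈ 13.8 TeV  (quoted: 13.9 TeV)
m_stau = m_tau * 2 ** (33 / 2)              # ≈ 164.7 TeV (quoted: 164.6 TeV)
```

The selectron and smuon values follow from the Mersenne assignments M127 → M89 and MG,113 → MG,79 mentioned in the text; the stau value corresponds to a half-odd-integer exponent 33/2, which I list as a numerical fit only.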

It is important to notice that trilepton events as signals for SUSY have nothing to do with squarks and gluinos, for which I have proposed a non-standard interpretation in the previous postings (see this, this, this) and in the article.

How to interpret the trilepton events in TGD framework?

Trilepton events represent the simplest SUSY signal and would be created in the decays W → sW+sZ. The decays Z→ sW++sW- would give rise to dilepton events. Electro-weak gauginos would in turn decay and yield multi-lepton events. Neither W/Z boson nor the gauginos need to be on mass shell.

In the following I will discuss these decays taking seriously the above listed conjectures about SUSY a la TGD.

1. Obviously the situation reduces to the study of the decays of sW and sZ.

1. For sW the decay channels are sW→ W+sγ and sW→ L+sνL*. W would decay to charged lepton-neutrino pair. One charged lepton would result in both cases.

2. For sZ the decay channels are sZ → ν + sν*, sZ → sW+ + W-, and sZ → sL- + L+ and the charge conjugates of these. For the second decay mode the decays of W+ and sW- produce a lepton-antilepton pair. For the third decay mode the selectron is the most plausible slepton candidate and is expected to have a rather large mass in the TGD Universe (about 267 GeV and thus off mass shell). sL → L + sγ is the most natural decay for the slepton.

2. The decay cascade beginning with Z→ sW++sW- would produce 2 charged leptons (more generally even number of charged leptons) plus missing energy. Charged leptons would have opposite charges. No sleptons would be needed as intermediate states and all lepton families would be democratically represented as final states.

3. The decay cascade beginning with W→ sW+sZ would produce 2 or 3 charged leptons plus missing energy.
1. For sZ→ sW++W- option 3 charged leptons would result and there would be a complete family democracy. For this option the rate is expected to be largest.
2. For the option having slepton as intermediate state, the large masses for smuon and stau would favor selectron for 3 lepton events. 3-lepton events would have charge signatures --+ or ++- following from charge conservation alone. The suggested large mass for selectron would however reduce also the rate of 3 lepton events considerably. Note that the reported events have total transversal energy larger than 200 GeV.

4. In MSSM also the decay sZ → sχ_1^0 + Z followed by Z → L+ + L- is possible so that a trilepton state results. Here sχ_1^0 denotes the lightest neutral sboson and is a mixture of sh, sZ, and sγ. If sh is not in the spectrum, then sγ is an excellent candidate for the lightest neutral gaugino. If the Weinberg angle is SUSY invariant, the decay producing three charged leptons in this manner is not possible.

5. Photinos would decay to photons and neutrinos, producing photons and missing energy. It is not clear whether this decay is fast enough to take place in the detector volume.

To sum up, the trilepton events are possible and would be produced in the decays sZ→ sW+W and sW→ e+sγ . The trilepton events involving selectron as intermediate state do not look highly plausible in TGD framework if one takes seriously the guess for the slepton mass scales.

For details and background see the chapter New particle physics predicted by TGD: part I of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

## Thursday, October 20, 2011

### AdS/CFT does not work well for heavy ion collisions at LHC

In the previous posting I told about a pleasant surprise in BackReaction blog. Sabine Hossenfelder told about the heavy ion collision results at LHC which have caused a cold shower for AdS/CFT enthusiasts. Or summarizing it in the words of Sabine Hossenfelder:

As the saying goes, a picture speaks a thousand words, but since links and image sources have a tendency to deteriorate over time, let me spell it out for you: The AdS/CFT scaling does not agree with the data at all.

The results

The basic message is that AdS/CFT fails to explain the heavy ion collision data about jets at LHC. The model should be able to predict how partons lose their momentum in the quark gluon plasma assumed to be formed by the colliding heavy nuclei. The situation is of course not simple. The plasma corresponds to low energy QCD and strong coupling and is characterized by a temperature. Therefore it could allow a description in terms of the AdS/CFT duality, which allows one to treat the strong coupling phase. The quarks themselves have a high transversal momentum, and perturbative QCD applies to them. One has to understand how the plasma affects the behavior of partons. This boils down to a simple question: what is the energy loss of the jet in the plasma before it hadronizes?

The prediction of the AdS/CFT approach is a scaling law for the energy loss, E ∝ L^3 T, where L is the length that the parton travels through the plasma and T is the temperature of the plasma, about 500 MeV (at RHIC it was about 350 MeV). The figure in the posting of Sabine Hossenfelder compares the predicted ratio R_AA of the nuclear cross section for jets in lead-lead collisions to that in proton-proton collisions with experimental data, normalized in such a manner that if the nucleus behaved like a collection of independent nucleons the ratio would be equal to one.

That the prediction for R_AA is too small is not so bad a problem: the real problem is that the curve has quite a different shape than the curve representing the experimental data. In the real situation R_AA as a function of the average transversal momentum p_T of the jets approaches the "nucleus as a collection of independent nucleons" limit faster than predicted by the AdS/CFT approach. Both the perturbative QCD and the AdS/CFT based models fail badly: their predictions do not actually differ much.

An imaginative theoretician can of course invent a lot of excuses. It might be that the number Nc = 3 of quark colors is not large enough, so that the strong coupling expansion and AdS/CFT fail. Supersymmetry and conformal invariance actually fail. Maybe the plasma temperature is too high (higher than at RHIC, where the observed low viscosity of the gluon plasma motivated the AdS/CFT approach). The presence of both a weak coupling regime (high energy partons) and a strong coupling regime (the plasma) might not have been treated correctly. One could also defend AdS/CFT by saying that maybe one should take into account higher stringy corrections for strings moving in 10-dimensional AdS_5 × S^5. Why not branes? Why not black holes? And so on....

Could the space-time be 4-dimensional after all?

What is remarkable is that a model called "Yet another Jet Energy-loss Model" (YaJEM), based on the simple old Lund model treating gluons as strings in 4-D space-time, works best! Also the parameters derived for RHIC do not need large re-adjustment at LHC.

4-D space-time has been out of fashion for decades, and now every well-informed theoretician talks about emergent space-time. Don't ask what this means. Despite my attempts to understand, I do not have (and very probably no-one has) the slightest idea. What I know is that string world sheets are 2-dimensional, and the only hope to get 4-D space-time is by this magic phenomenon of emergence. In other words, a 3-brane is what is wanted, and it should emerge "non-perturbatively" (do not ask what this means!).

Since there are no stringy authorities nearby, I however dare to raise a heretic question. Could it be that string like objects in 4-D space-time are indeed the natural description? Could strings, branes, black holes, etc. in 10-D space-time be completely unnecessary stuff needed to keep several generations of misled theoreticians busy? Why not start by trying to build an abstraction from something which works? Why not start from the Lund model or the hadronic string model and generalize it?

This is what TGD indeed was when it emerged one day in October 1977: a generalization of the hadronic string model by replacing string world sheets with space-time sheets. Another motivation for TGD was as a solution to the energy problem of GRT. In this framework the notion of (color) magnetic flux tubes emerges naturally, and magnetic flux tubes are one of the basic structures of the theory, now applied in all length scales. The improved mathematical understanding of the theory has led to notions like effective 2-dimensionality and string world sheets and partonic 2-surfaces at the 4-D space-time surface of M^4 × CP_2 as basic structures of the theory. About this I have told in a quite recent posting.

What can TGD say about the situation?

In the TGD framework a naive interpretation for the LHC results would be that the colliding nuclei do not form a complete plasma, and this non-ideality becomes stronger as pT increases. It is as if for higher pT the parton would traverse several plasma blobs rather than only a single big one, and the situation would be intermediate between an ideal plasma and one in which the nuclei form collections of independent nucleons. Could a quantum superposition of states be in question, each of them representing a collection of some number of plasma blobs consisting of several nucleons? A single plasma blob would correspond to the ideal situation.

In the TGD framework, where hadrons themselves correspond to space-time sheets, this interpretation is suggestive. The increase of the temperature of the plasma corresponds to a reduction of αs, suggesting that at T=500 MeV at LHC the plasma is more "blobby" than at T=350 MeV at RHIC. This would conform with the fact that at the lower temperature at RHIC the AdS/CFT model works better. Note however that at RHIC the model parameters for AdS/CFT are very different from those at LHC: not a good sign at all.

I have discussed the TGD based explanation of RHIC results for heavy ion collisions and the unexpected behavior of quark-gluon plasma in proton-proton (rather than heavy ion) collisions at LHC here. See also the blog posting about LHC results.

The application of AdS/CFT duality to condensed matter is the most recent fashion. Good luck! This is certainly needed. And a lot of funding too! It is quite a challenge to describe the complexities of condensed matter physics in terms of strings, branes, and black holes in 10 dimensions! But certainly one could invent less clever ways to spend one's time.

### Pleasant surprise

Probably I am not the only one whose habit it is to visit blogs as the first activity after waking up. Maybe it is due to age, but I have become lazy and tend to visit the same blogs again and again, and I usually get irritated since the comment sections tend to be rather dull. This time the visit was stimulating rather than irritating. In Not Even Wrong there were three postings deserving brief comments.

Old men at the Solvay centenary

The first posting by Peter Woit was about the Solvay centenary conference. I learned that the average age of the participants was 61 years. A hundred years ago it was 41 years. Quite a change! But not everything has changed: practically all participants were still males! You might blame me for ageism, but I dare claim that the increase of the average age certainly does not signal vitality in the field of theoretical physics!

Braney explanation of neutrino super-luminality

The second posting was the newest This Week's Hype, telling about Michael Duff's explanation of neutrino super-luminality. Someone in the comment section told that Michael Duff had done this publicly on BBC2: see below. String theorists of course have the first night privilege to good ideas in the feudal community of theoretical physics, and as a humble peasant of the community I can only take a philosophical attitude!;-)

Here in England, tonight BBC2 TV just screened a “Faster than Light” program with Michael Duff giving some string theory hype to explain the alleged 60 ns “faster than light” neutrinos. Duff stated that the results could be explained by neutrinos leaving our 4-d brane, taking a super-fast short-cut through the 11-d bulk, and then appearing again on the 4-d brane nearer the detector.

Replace "brane/short-cut" with "spaced-time sheet" along with particles propagate, add the notions of induced metric and light-like geodesic in induced metric distinguishing it light-like geodesics of the imbedding space and here it is.

Or at least almost! Something is still lacking: why would the maximal signal velocity be higher at neutrino space-time sheets than at photon space-time sheets? Why does it depend on length scale rather than on energy? And so on...? Here one cannot avoid the notion of induced gauge fields, and TGD unavoidably creeps in. See the details at my blog.

AdS/CFT does not work well in heavy ion collisions at LHC

The third posting was titled String Theory Finds a Bench Mate. It mentioned a posting by Sabine Hossenfelder in the BackReaction blog, which states:

As the saying goes, a picture speaks a thousand words, but since links and image sources have a tendency to deteriorate over time, let me spell it out for you: The AdS/CFT scaling does not agree with the data at all.

I went to the BackReaction blog and found an excellent posting and, surprise-surprise, a comment section full of interesting comments from people who know what they are talking about! This saved my morning. The posting was so good that it deserves a separate posting of its own!

## Wednesday, October 19, 2011

### Cold dark matter in difficulties

The cold dark matter scenario assumes that dark matter consists of exotic particles which have extremely weak interactions with ordinary matter and which clump together gravitationally. These concentrations of dark matter would grow and attract ordinary matter, eventually forming the galaxies.

The cold dark matter scenario has several problems.

1. Computer simulations support the view that dark matter should be densely packed in galactic nuclei. This prediction is problematic since the constant velocity spectrum of distant stars rotating around the galactic nucleus requires that the mass of dark matter within a sphere of radius R is proportional to R, so that the density of dark matter would decrease as 1/r2. This holds if one assumes that the distribution of dark matter is spherically symmetric.
2. Observations show that in the inner parts of the galactic disk the velocity spectrum depends linearly on the radial distance (see this). The dark matter density should therefore be constant in good approximation (assuming spherical symmetry), whereas the cold dark matter model predicts a strong peaking of the mass density at the galactic center. This is known as the core/cusp problem.
3. The cold dark matter scenario also predicts a large number of dwarf galaxies with mass about one thousandth of that of the Milky Way. They are not observed. This is known as the missing satellites problem.
4. The cold dark matter scenario predicts significant amounts of low angular momentum material which is not observed.
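The reasoning in item 1 - a flat rotation curve forces M(R) ∝ R and hence a 1/r² density profile under spherical symmetry - is easy to check numerically. A minimal sketch (the velocity value is illustrative, Milky-Way-like):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
v = 220e3       # flat rotation velocity in m/s (illustrative value)

def enclosed_mass(R):
    # Flat rotation curve: v^2 = G M(R) / R, so M(R) = v^2 R / G grows linearly in R.
    return v**2 * R / G

def density(r, dr=1e12):
    # Spherical symmetry: rho(r) = (dM/dr) / (4 pi r^2), falling off as 1/r^2.
    dM = enclosed_mass(r + dr) - enclosed_mass(r - dr)
    return dM / (2 * dr) / (4 * math.pi * r**2)

r = 3.1e20  # roughly 10 kpc in metres
print(density(2 * r) / density(r))  # ≈ 0.25, i.e. rho ∝ 1/r^2
```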

The cold dark matter scenario is indeed in difficulties, as one learns from the Science Daily article Dark Matter Mystery Deepens. Observational data about the structure of dark matter in dwarf galaxies is in conflict with the above picture. New measurements of two dwarf galaxies tell that the dark matter distribution is smooth. Dwarf galaxies are believed to consist of 99 per cent dark matter and are therefore ideal for attempts to understand dark matter. Dwarf galaxies differ from ordinary ones in that the stars inside them move like bees in a beehive instead of moving along nice circular orbits. The distribution of the dark matter was found to be uniform over a region with a diameter of several hundred light years, which corresponds to the size scale of a galactic nucleus. For comparison, note that the Milky Way has at its center a bar-like structure with size between 3300 and 16000 ly. Notice also that a constant density core is highly suggestive also for ordinary galaxies (the core/cusp problem), so that dwarf galaxies and ordinary galaxies need not be so different after all.

In the TGD framework the simplest model for the galactic dark matter assumes that galaxies are like pearls in a necklace. The necklace would be a long magnetic flux tube carrying dark energy identified as magnetic energy, and galaxies would be bubbles inside the flux tube, which would have thickened locally. A similar model would apply to stars. The basic prediction is that the motion of stars along the flux tube is free apart from the gravitational force caused by the visible matter. The constant velocity spectrum for distant stars follows from the logarithmic gravitational potential of the magnetic flux tube; the cylindrical symmetry would be absolutely essential and distinguishes the model from the cold dark matter scenario.
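The statement that a logarithmic potential gives a flat velocity spectrum can be illustrated with the standard Newtonian potential of an infinite straight string: Φ(r) = 2Gμ ln(r/r0) gives v² = r dΦ/dr = 2Gμ, independent of r. A sketch (the linear energy density μ of the flux tube is a hypothetical value, chosen here to give roughly 220 km/s):

```python
import math

G = 6.674e-11
mu = 3.6e20   # linear mass density of the flux tube (kg/m), hypothetical value

def potential(r):
    # Gravitational potential of an infinite straight string: Phi = 2 G mu ln(r/r0).
    return 2 * G * mu * math.log(r / 1e19)

def orbital_velocity(r, dr=1e12):
    # Circular orbit condition: v^2 = r dPhi/dr (numerical derivative).
    dPhi = (potential(r + dr) - potential(r - dr)) / (2 * dr)
    return math.sqrt(r * dPhi)

# The velocity is independent of r: no spherical dark halo profile is needed.
for r in (1e20, 5e20, 2.5e21):
    print(orbital_velocity(r))   # same value (~2.2e5 m/s) at every radius
```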

What can one say about dwarf galaxies in the TGD framework? The thickness of the flux tube is a good guess for the size scale in which the dark matter distribution is approximately constant: this for any galaxy (recall that both dark and ordinary matter would have formed as dark energy transforms to matter). The scale of a few hundred light years is roughly a factor 1/10 smaller than the size of the center of the Milky Way nucleus. The natural question is whether the dark matter distribution could be spherically symmetric and constant in this scale also for ordinary galaxies. If so, the cusp/core problem would disappear, and ordinary galaxies and dwarf galaxies would not differ in an essential manner as far as dark matter is considered. The problem would then be a problem of the cold dark matter scenario only.

For details and background see the chapter Cosmic strings of "Physics in Many-Sheeted Space-time".

## Tuesday, October 18, 2011

### Is Kähler action expressible in terms of areas of minimal surfaces?

The general form of the ansatz for preferred extremals implies that the Coulombic term in the Kähler action vanishes, so that the action reduces to 3-dimensional surface terms, in accordance with general coordinate invariance and holography. The weak form of electric-magnetic duality in turn reduces this term to Chern-Simons terms.

The strong form of General Coordinate Invariance implies effective 2-dimensionality (holding true in finite measurement resolution), so that also a strong form of holography emerges. The expectation is that the Chern-Simons terms in turn reduce to 2-dimensional surface terms.

The only physically interesting possibility is that these 2-D surface terms correspond to areas for minimal surfaces defined by string world sheets and partonic 2-surfaces appearing in the solution ansatz for the preferred extremals. String world sheets would give to Kähler action an imaginary contribution having interpretation as Morse function. This contribution would be proportional to their total area and assignable with the Minkowskian regions of the space-time surface. Similar but real string world sheet contribution defining Kähler function comes from the Euclidian space-time regions and should be equal to the contribution of the partonic 2-surfaces. A natural conjecture is that the absolute values of all three areas are identical: this would realize duality between string world sheets and partonic 2-surfaces and duality between Euclidian and Minkowskian space-time regions.

Zero energy ontology combined with the TGD analog of the large Nc expansion inspires an educated guess about the coefficient of the minimal surface terms, and a beautiful connection with p-adic physics and with the notion of finite measurement resolution emerges. The 't Hooft coupling λ should be proportional to the p-adic prime p characterizing the particle. This means extremely fast convergence of the counterpart of the large Nc expansion in TGD, since it becomes completely analogous to the pinary expansion of the partition function in p-adic thermodynamics. Also the twistor description and its dual have a nice interpretation in terms of zero energy ontology. This duality permutes massive wormhole contacts, which can be off mass shell, with wormhole throats, which are always massless (also for the internal lines of the generalized Feynman graphs).
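The claimed fast convergence can be illustrated with a toy expansion in powers of 1/p: the coefficients below are hypothetical and only the size of the prime matters.

```python
from fractions import Fraction

p = 2**89 - 1                    # Mersenne prime M89 = 2^89 - 1
coeffs = [Fraction(1), Fraction(7, 10), Fraction(-3, 10)]  # hypothetical coefficients

# Partial sums of sum_n c_n / p^n, the analog of a pinary expansion.
partials = []
s = Fraction(0)
for n, c in enumerate(coeffs):
    s += c / p**n
    partials.append(s)

# The first correction is already suppressed by ~1/p, of order 1e-27:
rel_corr = abs(partials[1] - partials[0]) / partials[0]
print(float(rel_corr))           # ≈ 1.1e-27: a single term gives ~27 digits
```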

For details and background see the article Is Kähler action expressible in terms of areas of minimal surfaces? or the chapter Yangian Symmetry, Twistors, and TGD of "Towards M-Matrix".

### ICARUS refutes OPERA: really?

Tommaso Dorigo managed to write the hype of his life about super-luminal neutrinos. This kind of accident is unavoidable, and any blogger sooner or later becomes a victim of one. To my great surprise, Tommaso described in a completely uncritical and hypeish manner a study by the ICARUS group in Gran Sasso and concluded that it definitely refutes the OPERA result. This is of course a wrong conclusion, based on the assumption that special and general relativity hold true as such and that the neutrinos are genuinely superluminal.

Also Sascha Vongehr wrote about ICARUS as a reaction to Tommaso's surprising posting, but this was a purposely half-joking hype claiming that ICARUS proves that neutrinos travel the first 18 meters with a velocity at least 10 times higher than c. Sascha also wrote a strong criticism of the recent science establishment. The continual uncritical hyping is leading to the loss of the respectability of science, and I cannot but share his views. Also I have written several times about the ethical and moral decline of the science community down to something resembling the feudal system of the middle ages, in which the Big Boys have the first night privilege to new ideas: something which I have myself had to experience many times.

What ICARUS did was to measure the energy distribution of the muons detected in Gran Sasso. This result is used to claim that the OPERA result is wrong. The measured energy distribution is compared with the distribution predicted assuming that the Cohen-Glashow interpretation is correct. This is an extremely important ad hoc assumption, without which the ICARUS demonstration fails completely.

1. Cohen and Glashow assume a genuine super-luminality and argue that this leads to the analog of Cherenkov radiation, implying a loss of neutrino energy: the 28.2 GeV at CERN would be reduced to an average of 12.1 GeV at Gran Sasso. From this model one can predict the energy distribution of muons in Gran Sasso.
2. The figure of the ICARUS preprint demonstrates that the distribution assuming no energy loss fits the measured energy distribution of muons rather well. The figure does not show the predicted super-luminal distribution, but the figure caption tells that it would be much "leaner", which one can interpret as a poor fit.
3. From this ICARUS concludes that the neutrinos cannot have exceeded the light velocity. The experimental result of course tells only that the neutrinos did not lose energy: about the neutrino velocity it says nothing without additional assumptions.

At the risk of boring the reader I repeat: the fatal assumption is that a genuine super-luminality is in question. From this assumption the probably correct conclusion indeed follows that the neutrinos would lose their energy during their travel by Cherenkov radiation.

In the TGD framework the situation is different (see this, this, this, and also the article). Neutrinos move without any energy loss, with a velocity which is in excellent approximation equal to (but slightly below) the maximal signal velocity. The maximal signal velocity is however higher at the space-time sheets carrying neutrinos than at those carrying photons: a basic implication of sub-manifold gravity. I have explained this in detail in previous postings and in the article.
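The geometric idea that different space-time sheets carry different maximal signal velocities can be illustrated with a toy picture: a signal moving at the imbedding-space light velocity along a wiggly sheet covers less distance in the M4 projection than a signal on a flatter sheet, so the apparent velocity is reduced by a purely geometric factor. This is only a cartoon of sub-manifold gravity with a hypothetical sinusoidal deformation, not the TGD calculation:

```python
import math

def effective_velocity(amplitude, wavelength, L=1.0, n=100000):
    # Toy space-time sheet: a graph z = A sin(2 pi x / lambda) in the imbedding space.
    # The signal moves with velocity c = 1 along the sheet; the M4 observer sees
    # only the x-projection, so c_eff = straight distance / path length <= 1.
    k = 2 * math.pi / wavelength
    dx = L / n
    path = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        dz = amplitude * k * math.cos(k * x) * dx
        path += math.sqrt(dx * dx + dz * dz)
    return L / path

# A flatter sheet (smaller wiggle) carries a higher maximal signal velocity:
print(effective_velocity(0.001, 0.01))  # "neutrino" sheet, c_eff closer to 1
print(effective_velocity(0.01, 0.01))   # "photon" sheet, smaller c_eff
```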

The conclusion is that the ICARUS experiment supports the TGD based explanation of the OPERA result. Note however that at this stage TGD does not predict effective superluminality but only allows and even slightly suggests it, and also provides a possible explanation for its energy independence and its dependence on length scale and particle species. TGD also suggests new tests using relativistic electrons instead of neutrinos.

It is also important to realize that the apparent neutrino super-luminality - if true - provides only a single isolated piece of evidence for sub-manifold gravity. The view about space-time as a 4-surface permeates the whole of physics from the Planck scale to cosmology, predicting correctly the particle spectrum and providing a unification of the fundamental interactions. It is also in a key role in TGD inspired quantum biology and in the quantum consciousness theory inspired by TGD.

Let us sincerely hope that the conclusion of ICARUS will not be accepted as uncritically as Tommaso accepted it.

For details and background see the article Are neutrinos superluminal and the chapter TGD and GRT of "Physics in Many-Sheeted Space-time".

## Sunday, October 16, 2011

### Mayan calendar and serious astronomers

The Mayan calendar inspires some people and makes many astrophysicists very, very angry. If you want to raise the blood pressure of an astrophysicist, just mention the magic year 2012, when Earth, Sun, and the galactic center are aligned along a single line and weird things start to happen.

About this alignment I learned yesterday evening from some web page as I was searching for data about the notion of galactic alignment, which would serve as a direct indication that galaxies might be like pearls in a necklace or bubbles inside very long magnetic flux tubes and therefore strongly correlated. Similar correlations should hold true for stars along the same flux tube. For instance, angular momenta would tend to be parallel for nearly straight flux tubes.

Now I must raise the blood pressure of serious astronomers whom I deeply respect. Apologies for All Serious Astronomers.

1. When Earth, Moon, and Sun are on the same line, weird things happen to pendulums, as the economics Nobelist Allais demonstrated in heroic experiments during which he had to remain awake for weeks to write down the recordings of the position of the pendulum! The TGD explanation is in terms of large hbar gravitons, which cause large interference effects when Earth, Moon, and Sun are collinearly aligned (see this).

2. But exactly the same alignment occurs for Earth, Sun, and the galactic center in the magic year 2012! Could something very weird indeed happen? Maybe not only pendulums but the whole society would behave in a very crazy manner. Just looking at what has happened in the world economy and in theoretical physics during the last decades makes anyone in his right mind convinced that the world has gone mad. If we believe the Mayan calendar, we have to wait only a year or so to see how mad the world can become, since then Earth, Sun, and the galactic center are exactly collinear and the effects maximal. After this we might hope that things will get gradually better, or that even something totally new emerges.

Recall that the posting Cosmic evolution as transformation of dark energy to matter was basically inspired by the precession of equinoxes, about which I learned from my friend Pertti Kärkkäinen. This phenomenon was well known to Mayan astronomers, and the Mayan calendar relies on it. The situation in which Earth, Sun, and the galactic center are on the same line was of very special importance in the Mayan calendar and in the entire world view behind it. This also explains why the calendar ends at 2012.

The model for the precession of equinoxes that I developed (see this) assumes the precession of the entire solar system rather than only Earth, and it leads to the vision about how galaxies and stars have emerged via the formation of bubbles of ordinary matter inside flux tubes carrying magnetic energy identifiable as dark energy. Amusing that this idea pops up in the head of some crackpot just when we are approaching the year 2012;-)! A kind of cosmic joke?

Dear Respected Astronomers, this is not a conspiracy against good science. I swear that before yesterday evening I didn't have the slightest idea about the role of equinox precession or the collinearity of Earth, Sun, and galactic center in the Mayan calendar: the year 2012 has been for me just as irritating as it is to every astronomer worth his monthly salary.

Maybe something great will happen next year! Yes, this is actually true! I remember just now that the politicians have promised that during 2012 I will get 120 euros minus taxes, about 80 euros, more to my monthly income from Kela and the social office. This makes a considerable fraction of the minimum income thought to be necessary for basic metabolic needs!

## Friday, October 14, 2011

Lubos told about the latest information concerning the Higgs search. It is not clear how much these data reflect the actual situation. Certainly the mass values must correspond to observed bumps. The statistical significances are expected statistical significances, not based on real data. Hence special caution is required. At 4.5/fb of data one has the following bumps together with their expected statistical significances:
• 119 GeV: 3 sigma
• 144 GeV: 6 sigma(!)
• 240 GeV: 4.5 sigma
• 500 GeV: 4 sigma
It is interesting to try to interpret these numbers in TGD framework.

The interpretation of 144 GeV bump

Consider first the 144 GeV state with its 6 sigma expected significance, exceeding the 5 sigma usually regarded as the criterion for discovery. Of course, this is only an expected statistical significance, which cannot be taken seriously.

1. 144 GeV is exactly the predicted mass of the pion of M89 hadron physics, which was first observed by CDF and then declared to be a statistical fluctuation. I found myself rather alone while defending the interpretation as the M89 pion in viXra log and trying to warn that one should not throw the baby out with the bath water.
2. From an earlier posting of Lubos one learns that the 144 GeV state must be CP odd - just like the neutral pion - and should correspond to the A0 Higgs of SUSY. Probably this conclusion, as well as the claimed CP even property of the 119 GeV state, follows from the assumption that these states correspond to SUSY Higgses, so that one must not take them seriously.
3. The next step before TGD is accepted will be the discovery that this state cannot be a Higgs of any kind.

X and Y mesons as meson-like bound states of color excitations of quarks or of squarks?

Could the other bumps correspond to the pseudoscalar mesons of M89 hadron physics? Only a week ago I would have answered 'Definitely not!'. The situation however changed completely as I realized that the weird X and Y mesons, which are charmonium-like states, could be interpreted as dark scharmonia consisting of a c squark and a cbar squark. Another option is that X and Y type mesons correspond to dark mesons formed from color excitations of quarks in the representations 6bar or 15. The structures of the two models are essentially identical and it is not yet possible to distinguish between them. See the related blog postings and also the article.

The dark squark option led to a beautiful model explaining why light mesons do not seem to have smesonic companions, and to an explanation of why squarks and gluinos have not been detected at LHC or before. The reason is simply that shadronization takes place before the decay of squarks to quark and selectro-weak gauge boson. Shadrons in turn decay to hadrons by gluino exchange.

Could the claimed bumps be explained by assuming that also M89 quarks have either color excitations or superpartners with the same mass scale, and that the same mechanism is at work for M89 mesons as for ordinary mesons? The same question can be asked for the option based on color excitations of quarks in 6bar or 15.

Possible identification of bumps

Consider now the possible identification of the remaining Higgs candidates, concentrating for definiteness on the squark option. In the recent case the decay widths of intermediate gauge bosons do not pose any constraints, so that there is no need to assume dark M89 squarks.

1. In the earlier framework there was no identification for meson-like states below 144 GeV. The discovery of this week was however that squarks could have the same p-adic mass scale as quarks, so that one has besides mesons also smesons consisting of squark pairs. Every meson would be accompanied by a smeson. Gluino exchange however mixes mesons and smesons, so that the mass eigenstates are mixtures of these states. At low energies the very large non-diagonal element of the mass squared matrix can make the second mass eigenstate tachyonic. This must happen for mesons consisting of light quarks - this of course for the M107 hadron physics familiar to us.

2. Does the same happen in M89 hadron physics? Or is the non-diagonal element of the mass squared matrix so small that both states remain in the spectrum? Could the 119 GeV state and the 144 GeV state correspond to the two mass eigenstates of supersymmetric M89 hadron physics? If this is the case, one could understand also the 119 GeV state.

3. What about the 240 GeV state? The proposal has been that the selectron corresponds to M89. This would give it the mass 262.14 GeV by direct scaling: m(selectron) = 2^((127-89)/2) × m(electron). This is somewhat larger than 240 GeV.

Could this state correspond to the spartner of ρ89 consisting of M89 squarks? There is already earlier evidence for bumps at 325 GeV interpreted in terms of the ρ and ω of M89 hadron physics. The mass squared difference should be the same for the pionic mass eigenstates and the ρ like mass eigenstates. This would predict that the mass of the second ρ like eigenstate is 259 GeV, which is not too far from 240 GeV.

Addition: I just found Tommaso Dorigo's newest posting The Plot Of The Week - The 327 GeV ZZ Anomaly, which in the TGD framework could be interpreted in terms of decays of the neutral member of the ρ89 isospin triplet or of ω89, which is an isospin singlet. A small splitting in mass found earlier is expected unless this decay corresponds to ω89. Also a WZ anomaly is predicted.

4. What about the interpretation of the 500 GeV state? The η' meson of M107 hadron physics has mass 957.66 MeV. Scaling by 512 gives 490.3 GeV - not too far from 500 GeV!
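The two scalings quoted in items 3 and 4 are simple powers of 2 and can be checked directly (here m(electron) is taken as 0.5 MeV, which reproduces the quoted 262.14 GeV figure):

```python
# Check the quoted p-adic mass scalings (masses in MeV).
m_e = 0.5                                     # electron mass approximated as 0.5 MeV
selectron = 2 ** ((127 - 89) // 2) * m_e      # scaling from k=127 to k=89: 2^19
print(selectron / 1000)                       # 262.144 GeV

m_eta_prime = 957.66                          # eta' mass in MeV
scaled = m_eta_prime * 512                    # 512 = 2^((107-89)/2)
print(scaled / 1000)                          # ≈ 490.32 GeV
```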

The alternative option replaces squarks with their color excitations. The arguments are identical in this case. Many other pseudoscalar meson states are predicted if either of these options is correct. In the case of the squark option one could say that SUSY in the TGD sense has also been discovered - and was discovered in ordinary hadron physics already 8 years ago! SUSY would not reveal itself via the usual signatures, since shadronization would be a faster process than the decay of squarks via emission of selectro-weak bosons.

All this looks too good to be true. I do not know how the expected significances are estimated and how precisely the mass values correspond to experimental data. In any case, if these states turn out to be pseudoscalars, one can say that this is a triumph for TGD, especially when combined with the neutrino super-luminality, which can be explained easily in terms of sub-manifold gravity.

For details and background see the article Do X and Y mesons provide evidence for color excited quarks or squarks? and the chapter New particle physics predicted by TGD of "Dark Matter Hierarchy and p-Adic Length Scale Hypothesis".

## Thursday, October 13, 2011

### Does shadronization explain the failure to discover SUSY?

In an earlier posting I proposed an explanation of the weird X and Y mesons, believed to consist of ccbar pairs, in terms of bound states of either color excited c and cbar quarks or the corresponding squarks. It is necessary to assume that dark matter in the TGD sense is in question, so that hbar does not have the standard value. The mathematical structures of these models are, apart from minor details, exactly the same, and at least for the moment I cannot decide which of them I should prefer.

If dark squarks are in question, the prediction would be that quarks and squarks have the same p-adic mass scale, perhaps even identical masses! This sounds completely nonsensical but could make sense if one believes the arguments of the article Do X and Y mesons provide evidence for color excited quarks or squarks?.

The point is that the usual view about how SUSY manifests itself experimentally could be manifestly wrong. It could quite well be that a strong process which might be called shadronization takes place much faster than the decays of squarks and gluinos to quarks, gluons, and electroweak gauginos (the selectro-weak process). Shadrons would in turn decay to ordinary hadrons by gluino exchange. One could also speak of R-parity confinement. No neutralinos (which in the TGD framework would correspond to neutrinos) would be produced, so that missing energy and jets would not serve as a signature of SUSY. This would explain why no sign of SUSY was found at LHC or in any of the earlier experiments at lower energies. X and Y mesons - actually smesons - would however provide a direct signature of SUSY, discovered already 8 years ago!

This picture would lead to a completely new view about detection of squarks and gluinos.

1. In the standard scenario the basic processes are the production of squark and gluino pairs. The creation of a squark-antisquark pair is followed by the decay of the squark (anti-squark) to quark (antiquark) and neutralino or chargino. If R-parity is conserved, the decay chain eventually gives rise to at least two hadron jets plus lightest neutralinos identifiable as missing energy. Gluinos in turn decay to quark and anti-squark (squark and antiquark), and the squark (anti-squark) in turn to quark (anti-quark) and neutralino or chargino. At least four hadron jets plus missing energy are produced. In the TGD framework neutralinos would decay eventually to zinos or photinos and a right-handed neutrino transforming to an ordinary neutrino (R-parity is not conserved). This process might however be slow.

2. In the recent case a quite different scenario relying on color confinement and "shadronization" suggests itself. By definition, smesons consist of a squark and an antisquark. Sbaryons could consist of two squarks containing a right-handed neutrino and its antineutrino (N=2 SUSY) and one quark, and would thus have the same quantum numbers as baryons.

Also now a dark squark or gluino pair would be produced at the first step. These would shadronize. One can indeed argue that the required emission of winos, zinos, and photinos is too slow a process as compared to shadronization. Shadrons (mostly smesons) would in turn decay to hadrons by the exchange of gluinos between squarks. No neutralinos (missing energy) would be produced. This would explain the failure to detect squarks and gluinos at LHC.

This mechanism does not however apply to sleptons, so that it seems that the p-adic mass scale must be much higher for sleptons than for squarks, as I have indeed proposed.

The identification of X and Y as smesons looks like a viable option, and M89 shadronization could explain the failure to find SUSY at LHC, if shadronization is a fast process as compared to the selectro-weak decays. M89 squarks need not however be dark, since the intermediate gauge boson decay widths pose no constraints. The option certainly deserves experimental testing. One could learn a lot about SUSY in the TGD sense (or maybe in some other sense!) just by carefully scanning the existing data at lower energies. For instance, one could try to answer the following questions by analyzing the already existing experimental data.

1. Are X and Y type mesons indeed in 1-1 correspondence with charmonium states? One could develop numerical models allowing to predict the precise masses of scharmonium states and their decay rates to various final states and test the predictions experimentally.

2. Do bbarb mesons have smesonic counterparts with the same mass scale? What about Bc type smesons containing two heavy squarks?

3. Do the mesons containing one heavy quark and one light quark have smesonic counterparts? My light-hearted guess that this is not the case is based on the assumption that the overall mass scale of the mass squared matrix is defined by the p-adic mass scale of the heavy quark, while the non-diagonal elements are proportional to the color coupling strength at the p-adic length scale associated with the light quark and are therefore very large: as a consequence the second mass eigenstate would be tachyonic.

4. What implications would the strong mixing of light mesons and smesons have for CP breaking? CP breaking amplitudes would be superpositions of diagrams representing CP breaking for mesons resp. smesons. Could the presence of smesonic contributions perhaps shed light on the poorly understood aspects of CP breaking?

For details and background see the article Do X and Y mesons provide evidence for color excited quarks or squarks? and the chapter The Recent Status of Leptohadron Hypothesis of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

It seems that censorship in blogs is real, or that Science2.0 blogs are functioning in a very weird manner. I tell about this strange course of events knowing that I should not do so, since the very purpose of the activity might be to give me the label of a paranoid. In the recent situation, when neutrino superluminality might turn out to give an extremely simple experimental proof for the TGD based view about space-time, this would be a very convenient excuse for the powerholders of science, who have silenced me for 33 years now.

• After two hours I found to my great surprise that my comment had mysteriously appeared in the blog of Tommaso. I removed my blog posting, thinking that this was just one of those mysterious errors which happen sometimes.
• I checked the situation again today: my earlier published comment had disappeared! My question to Tommaso about what might be happening did not appear in the comment section. Something very strange is clearly happening.

"You have asked to receive notification of comments on the article,"Neutrinos CAN Go Faster Than Light Without Violating Relativity". You can view the comment at the following url

http://www.science20.com/alpha_meme/neutrinos_can_go_faster_light_without_violating_relativity-82950#comment-83546"

My attempt failed. Later I found that even the comment which I tried to answer had disappeared! This begins to look really surreal.

It seems that something very strange is happening in Science2.0 blogs. It would be sad if the censorship so familiar to me has been extended to blogging. I do not believe that either Sascha Vongehr or Tommaso has anything to do with the censorship. Science2.0 just functions improperly. The silliest thing for those behind Science2.0 to do would be "Access denied" type censorship, but one must take this possibility into account in the recent heated situation in theoretical particle physics. I would not be surprised if some third party had managed to somehow interfere with the functioning of Science2.0 blogs. After the intense virus attacks that I have suffered over the years I would not be surprised at all if there were individuals ready to carry out also this kind of terror.

I add below the blog posting which I first removed, since these strange events strongly suggest that censorship is at work.

----------------------------------------------

It seems that the attempts to silence me are getting more intensive. I tried to send the message below to Tommaso Dorigo's blog three times, but the only response was "Access denied".

I have been experiencing all kinds of silencing activities during this year. I of course perfectly well understand the motivations of the hegemony: the situation in theoretical particle physics is scandalous. There exists a highly developed, successful theory, which year after year has been brutally censored from journals and arXiv, and too many scientists and laymen have begun to realize what has happened. Despite these strange "Access denied" messages I sincerely hope that Tommaso's blog is not participating in these activities. On the other hand, I am afraid that the particle physics empire might be beating back by trying to minimize information about the existence of me and TGD on the web.

In any case, I glue below the message that I repeatedly tried to send to Tommaso's blog.

-------------------------------------------------

Also my experience from reading Sascha's posting was that there was a critical attitude both towards the OPERA result and towards the easy arguments claiming that the result is due to an error. What has happened during this year should have taught us that Big Science suffers from real problems which are to a high extent ethical: too much hype, arrogant attitudes preventing real communication, and complete silencing of those who are able to represent something original.

To my view the job of a theoretical physicist is to imagine various possibilities rather than passively wait for LHC to tell the truth. It is nice that theoreticians try to demonstrate that the OPERA result contains an error, but most of the claims I have seen do not have much weight. It is so easy to forget the whole damned thing by using some easy pet argument.

Why not pretend for a moment that the OPERA result is real and look at the consequences from various points of view? Could tachyons be real? If not, does the very notion of maximal signal velocity need updating? Is it possible to imagine a generalization of the framework provided by special and general relativity consistent with the Principle of Relativity (Poincare invariance), General Coordinate Invariance, and the Equivalence Principle?

I have been wondering all these years how theoreticians can talk about LHC as the savior of theoretical physics and at the same time sweep under the rug various anomalies, which should be pure gold for a good theoretician.

## Tuesday, October 11, 2011

### Do X and Y mesons provide evidence for color excited quarks or squarks?

Now and then come the days when the head is completely empty of ideas. One just walks around and gets more and more frustrated. One can of course make authoritative appearances in blog groups and express strong opinions, but sooner or later one is forced to search the web for some problem to work on. This time I had good luck. By some kind of divine guidance I found myself immediately in Quantum Diaries and found a blog posting with the title Who ordered that?! An X-traordinary particle?

Not too many unified theorists take meson spectroscopy seriously. Although they are now accepting low energy phenomenology (the physics for the rest of us) as something to be taken seriously, meson physics is for them a totally uninteresting branch of botany. They could not care less. As a crackpot I am however not well informed about what a good theoretician should and shouldn't do, and got interested. Could this give me a problem that my poor crackpot brain is crying for?

The posting told me that the spectroscopy of ccbar type mesons is understood except for some troublesome mesons christened imaginatively with the letters X and Y plus brackets containing their mass in MeV. X(3872) is the first discovered troublemaker, and what is known about it can be found in the blog posting and also in the Particle Data Tables. The problems are the following.

1. First of all, these mesons should not be there.
2. Their decay widths seem to be narrow considering their masses.
3. Their decay characteristics are strange: in particular, the kinematically allowed decay to DDbar, which dominates the decays of Ψ(3770) with a branching ratio of 93 per cent, has not been observed, whereas the decay to DDbarπ0 occurs with a branching fraction >3.2× 10^-3. Why is the pion needed?
4. X(3872) should decay to a photon and a charmonium state in a predictable way but it does not.

One of the basic predictions of TGD is that both leptons and quarks should have color excitations (see this). In the case of leptons there is considerable support in the form of carefully buried anomalies: the first ones come from the seventies. In the case of quarks, however, such anomalies have been lacking. Could these mysterious X:s and Y:s provide the first signatures of the existence of color excited quarks? An alternative proposal is that X and Y are meson-like states formed from superpartners of the charmed quark and antiquark. Consider for definiteness the option based on color excited quarks.

1. The first basic objection is that the decay widths of intermediate gauge bosons do not allow new light particles. This objection was already encountered in the model of leptohadrons. The solution is that the light exotic states are possible only if they are dark in the TGD sense, having therefore a non-standard value of Planck constant and behaving as dark matter. The value of Planck constant is only effective and has a purely geometric interpretation in the TGD framework.
2. The second basic objection is that light quarks do not seem to have such excitations. The answer is that gluon exchange transforms the exotic quark pair to an ordinary one and vice versa, so that considerable mixing of the ordinary and exotic mesons takes place. This kind of coupling between the gluon octet, the color triplet, and a D-dimensional triality one representation is possible for D=6bar and D=15 (note that the standard Lie-algebra coupling of gluons is not in question). At low energies the color coupling strength becomes very large, and this gives rise to a mass squared matrix with a very large non-diagonal component: the second eigenstate of mass squared is tachyonic and therefore drops from the spectrum. For heavy quarks the situation is different, and one expects that charmonium states also have exotic counterparts.

3. The selection rules can also be understood. The decays to DDbar involve at least two gluon emissions decaying to quark pairs and producing an additional pion, unlike the decays of an ordinary charmonium state, which involve only the emission of a single gluon decaying to a quark pair so that DDbar results.

The decay of the lightest X to a photon and charmonium is not possible in the lowest order, since at least one gluon exchange is needed to transform the exotic quark pair to an ordinary one. Exotic charmonia can however transform to other exotic charmonia. Therefore the basic constraints seem to be satisfied.

The above arguments apply with minimal modifications also to the squark option, and at this moment I am not able to distinguish between these options. The SUSY option is however favored by the fact that it would explain why SUSY has not been observed at LHC: thanks to shadronization and subsequent decay to hadrons by gluino exchanges, jets plus missing energy would not serve as a signature of SUSY. Note that the decay of a gluon to a dark squark pair would require a phase transition to a dark gluon first.
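The tachyonic-eigenstate argument invoked above for light smesons can be illustrated with a toy calculation. This is a minimal sketch of the mechanism only, not actual TGD machinery: the numerical values of the diagonal mass squared and of the non-diagonal mixing element are hypothetical placeholders.

```python
# Toy illustration (not actual TGD machinery): a symmetric 2x2 mass squared
# matrix for an ordinary light meson mixing with its exotic/smeson partner.
# The numbers are hypothetical placeholders chosen only to show the mechanism.
m2 = 1.0      # common diagonal mass squared (arbitrary units)
delta = 1.5   # non-diagonal element, large when the color coupling is strong

# Eigenvalues of [[m2, delta], [delta, m2]] are m2 + delta and m2 - delta.
heavy = m2 + delta   # 2.5: the surviving, heavier eigenstate
light = m2 - delta   # -0.5: negative mass squared, i.e. tachyonic, drops out
print(light, heavy)  # -0.5 2.5
```

Whenever the mixing element exceeds the diagonal term, one eigenvalue of mass squared becomes negative, which is the stated reason why the light smeson partner would drop from the spectrum.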

For details and background see the article Do X and Y mesons provide evidence for color excited quarks or squarks? and the chapter The Recent Status of Leptohadron Hypothesis of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

## Sunday, October 09, 2011

### Wikipedia and Finnish physicists

The power holders of Finnish science continue to create classics of involuntary comics. Some examples related to Wikipedia.

A couple of years ago my name had appeared on the list of Finnish scientists. I was astonished: how could this kind of accident have happened? The censors had been sleeping. The list also contained Claus Montonen: his name is familiar to every theoretical physicist from Montonen-Olive duality. In any civilized country a scientist of this caliber would be a professor, but not in Finland.

I decided to check the situation, since the reasonable expectation was that the recent experimental support for TGD must have raised the blood pressure and adrenaline excretion of my colleagues and also woken up the censors.

Gunnar Nordström was still there, with an elaboration felt to be very important: "theoretical physicists, did not invent general relativity" (italics is not mine)! How movingly Finnish! Some years ago there had been a proposal to name a street in Kumpula after Nordström, for reasons obvious to anyone with an intelligence quotient above 100. The attempt failed: the justification of the professors was that Nordström had not been a professor!

Not surprisingly, I also learned that the decision makers of Finnish science had reacted to the Wikipedia article appropriately and removed my name, as well as the name of Claus Montonen! The latter removal astonished me but tells a lot about the intelligence quotient of the Wikipedia activists involved.

Another descriptive event. Years ago an article about TGD was added to Wikipedia. It created an immense furor, and very probably many of the anonymous and extremely aggressive commenters were Finnish colleagues. It was amusing to find what creative fiction these otherwise rather unimaginative souls had managed to produce. Was this extremely perverse and in all imaginable manners mad and evil character that they had created in a burst of godly (or devilish?) inspiration really me?!

Later Lubos Motl added a short page about me as a physicist to the category "Physicists" in Wikipedia. Lubos was cautious and took good care that an ambivalent interpretation, as a crackpot or a genius, remained possible. Even this was too much, since this page disappeared in a mysterious manner. It was impossible to find the usual commentary explaining what had happened. Maybe Finnish colleagues had managed to somehow short circuit the usual procedures meant to prevent Wikipedia vandalism.

The laughing audience should however not forget that the comic appears only when you look from a large enough distance. Improving the resolution reveals tragic suffering, and the best comics transform their personal suffering into universal art. We should feel deep compassion: these poor fellows are really suffering! Really! Jealousy is a horrible and tragic disease.

Addition: Sascha Vongehr has a nicely written article Superluminal Knee Jerk A Symptom But Transparent Science Maybe Cure about the recent situation in big science. The motivation for the article comes from neutrino superluminality. The mainstream is ready to forget the discovery without further comment as an unidentified measurement error, just like many earlier anomalies. I cannot but agree with the message of Sascha. In the media we see only pop science producing hype after hype, and ruthless censorship has been accepted as an essential part of science. The degeneration of science ethics has already led to a loss of respectability: laymen are not idiots ready to swallow any kind of hype. My hope is that the openness forced by the web could cure the situation. Maybe the situation is becoming mature for a profound change: enough is simply enough.

## Friday, October 07, 2011

### Does Witten prefer knots over strings?

Peter Woit tells in his blog the news from Dublin that Witten will be in town soon to give the Hamilton Lecture, with the Irish Times reporting that

Witten’s Hamilton Lecture will abandon string theory, however, in favour of knots, with a talk entitled: The Quantum Theory of Knots.

Suppose that this statement is true and not only a journalistic exaggeration. What does it mean if taken at face value?

1. Strings can get knotted (and linked and braided) in 3-D space: Witten's paper which led to a Fields medal was about a topological theory based on the Chern-Simons action, which he used to classify knots and 3-topologies in terms of an invariant known as the Jones polynomial.

2. In 4-D space one has 2-knots and their knotting. Linking is replaced with the intersection of surfaces in the 4-D case. The so called intersection form is a topological invariant used to classify four-manifolds, so that intersections of string world sheets could be used to classify four-topologies. Note that linking and knotting are not the same thing in dimensions higher than D=3.

D=4 is the dimension of the space-time surface containing string world sheets in the TGD framework, and I have of course done my best to transform, in my own humble physicist's style, Witten's horribly technical approach to the description of knots and also 2-knots into the TGD framework (see this).

In the TGD Universe knots, links, and braids can be assigned to two kinds of 3-surfaces.

1. 3-surfaces can be light-like 3-surfaces at which the signature of the induced metric changes: they are identified as basic building bricks of elementary particles. One could speak of light-like braids (somewhat illogically I have talked about time-like braids).
2. The second kind of 3-surfaces are space-like 3-surfaces at the ends of space-time sheets at boundaries of causal diamonds. These could be called space-like braids.
Also two-knots emerge in D=4. These light-like resp. space-like braids corresponds to the space-like and light-like boundaries of 2-knots in the interior of space-time defining the counterparts of string world sheets. The basic operations (admittedly somewhat violent;-)) that Alexander the Great used to open knots have interpretation in terms of basic reaction vertices for strings.

This suggests that string diagrams can be used to describe the Alexandrian method of opening knots, whose generalization is widely used in Finnish academic life to resolve more abstract problems related to funding issues. If so, a careful recording of the steps of this rather unpleasant procedure (from the point of view of the knot and, in a more abstract context, of the Finnish scientific dissident) would define a knot invariant.

In the TGD framework the topology of the imbedding of the string world sheet has a deep physical meaning. The DNA as topological quantum computer vision provides a more concrete application of these ideas in quantum biology. One beautiful application is a concrete mechanism of memory coding based on the braiding of magnetic flux tubes (amusingly, knots have been used as a manner of coding memories!).

So: the innocent question is whether Witten is beginning, or has begun, to realize that TGD exists (should I add ";-)" or not?).

## Wednesday, October 05, 2011
### p-Adic Numbers and Cosmology

In Not-Even-Wrong there is a posting with the title p-Adic Numbers and Cosmology. Peter Woit tells that Leonard Susskind has suddenly discovered p-adic numbers and their applicability to cosmology. I glue below my comment on the topic of the posting.

--------------------------------------------

Nice to learn that p-adic numbers are finally receiving attention. I have been talking about them for two decades now and have written numerous articles and 15 books, with practically every one of them containing applications of p-adic physics and of a more general vision fusing real and p-adic physics for various primes p into a single coherent whole.

There are 7 books about TGD proper, and they all contain applications of p-adic physics in various length scales. There are 8 books about quantum biology and consciousness according to TGD, and here too p-adic physics plays a central role.

Here is a sample of applications ranging from Planck length scale to cosmology: in particular, the chapters in the book p-Adic Length Scale Hypothesis and Dark Matter Hierarchy about p-adic mass calculations and other particle physics related applications. The basic results have also been published in Prespacetime Journal - actually quite recently - so that I have at least formal priority to these 15 year old results, whose publication was completely out of the question in so-called respected journals and in arXiv.

At the age of 61 years, having lived my whole life as an academic outlaw, I would of course be really happy if half of my life work, which has taken 35 years of my life, were at least mentioned in the reference lists of future articles applying p-adic notions. I however know that we are living in the era of Big Science, and on the basis of my experiences it seems that research ethics does not belong to Big Science.

## Monday, October 03, 2011

### Why neutrinos travel faster in short length scales?

Sascha Vongehr has written several interesting blog postings about superluminal neutrinos. The latest one is titled A million times the speed of light. I glue below my comment explaining qualitatively why the maximal signal velocity at the space-time sheet along which a relativistic particle propagates is lower in long length scales.

The explanation involves besides the induced metric also the notion of induced gauge field (induced spinor connection): here brane theorists reproducing TGD predictions are bound to meet difficulties, and an independent discovery of the notion of induced gauge field and spinor structure is needed in order to proceed;-). Here is my comment in somewhat extended form.

-------------------------------------------------------------------------------

Dear Sascha,

I would be critical about two points.

1. I would take Poincare invariance and general coordinate invariance as a starting point. I am not sure whether your arguments are consistent with these requirements.
2. The assumption that neutrinos slow down and initially have gigantic maximal signal velocities does not seem plausible to me. Just the dependence of the maximal signal velocity on length scale is enough to understand the difference between SN1987A and OPERA. What this means in the standard physics framework is however not easy to understand.

If one is ready to accept sub-manifold gravity a la TGD, this boils down to the identification of the space-time sheets carrying the neutrinos (or any relativistic particles) from point A to point B. This TGD prediction is about 25 years old: from the comment section of Peter Woit's blog I learned that brane people are now proposing something similar. My prediction at viXra log and my own blog was that this would happen within about a week: nice to learn that my blog has readers!

This predicts that the really maximal signal velocity (that for M4) is probably not very much higher than the light velocity in cosmic scales; Robertson-Walker cosmology predicts that the light velocity in cosmic scales is about 73 percent of the really maximal one.

The challenge for the sub-manifold gravity approach is to understand the SN1987A-OPERA difference qualitatively. Why does a neutrino (or any relativistic particle) travel faster in short length scales?

1. Suppose that this space-time sheet is a massless extremal topologically condensed on a magnetic flux tube thickened from a string like object X2× Y2 subset of M4× CP2 to a tube of finite thickness. The longer and less straight the tube, the slower the maximal signal velocity, since the light-like geodesic along it is longer in the induced metric (a time-like curve in M4× CP2). There is also rotation around the flux lines increasing the path length: see below.
2. For a planar cosmic string (X2 just a plane of M4) the maximal signal velocity would be as large as it can be, but it is expected to be reduced as the flux tube develops a 4-D M4 projection. In the thickening process flux is conserved, so that B scales as 1/S, S being the transversal area of the flux tube. Magnetic energy per unit length scales as 1/S, and energy conservation requires that the length of the flux tube scales up like S during cosmic expansion. Flux tubes become longer and thicker as time passes.
3. The particle - even a neutrino!! - can rotate along the flux lines of the electroweak fields inside the flux tube, and this makes the path longer. The thicker and longer the flux tube, the longer the path and the lower the maximal signal velocity. I emphasize that classical Z0 and W fields (and also gluon fields!) are a basic prediction of TGD distinguishing it from the standard model: again the notion of induced gauge field pops up!
4. Classically the cyclotron radius is proportional to the cyclotron energy. For a straight flux tube there is free relativistic motion in longitudinal degrees of freedom and cyclotron motion in transversal degrees of freedom, and one obtains essentially harmonic oscillator like states with a degeneracy due to the presence of rotation giving rise to angular momentum as an additional quantum number. If the transversal motion is non-relativistic, the radii of the cyclotron orbits are proportional to the square root of an integer. In Bohr orbitology one has a quantization of the neutrino speeds; wave mechanically the same result is obtained in the average sense. Fermi statistics implies that the states are filled up to the Fermi energy, so that several discrete effective light velocities are obtained. In the case of a relativistic electron the velocity spectrum would be of the form

ceff = L/T = [1 + n× (hbar eB/m)]^(-1/2) × c

Here L denotes the length of the flux tube and T the time taken by the motion along a helical orbit when the longitudinal motion is relativistic and the transversal motion non-relativistic. In this case the spectrum for ceff is quasi-continuous. Note that for large values of hbar = n× hbar0 (in the TGD Universe) quasicontinuity is lost, and in principle the spectrum might allow the determination of the value of hbar.

5. The neutrino is a mixture of right-handed and left-handed components, and the right-handed neutrino feels only gravitation whereas the left-handed neutrino feels the long range classical Z0 field. In any case, the neutrino, as the particle with the weakest interactions, should travel faster than the photon, and a relativistic electron should move slower than the photon. One must however be very cautious here. Also the energy of the relativistic particle matters.

Here brane theorists trying to reproduce TGD predictions are in difficulties, since the notion of induced gauge field is required besides that of the induced metric. Also the geometrization of classical electroweak gauge fields in terms of the spinor structure of the imbedding space is needed. It is almost impossible to avoid M4× CP2 and TGD.

To sum up, this would be the qualitative mechanism explaining why neutrinos travel faster in short scales. The model can also be made quantitative, since the cyclotron motion can be understood quantitatively once the field strength is known.
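The quoted spectrum can be sketched numerically. This is a minimal illustration of the formula only: the value of the dimensionless cyclotron combination hbar·eB/m used below is a hypothetical placeholder, not a value derived from any actual flux tube field strength.

```python
# Sketch of the spectrum ceff = [1 + n*(hbar eB/m)]^(-1/2) * c for a particle
# spiralling along a flux tube.  The cyclotron parameter x is an assumed
# placeholder, and c is set to 1 (units where the maximal velocity is 1).
x = 1e-5   # assumed dimensionless cyclotron parameter hbar*e*B/m
c = 1.0
spectrum = [(1.0 + n * x) ** -0.5 * c for n in range(4)]
# n = 0 gives ceff = c exactly; larger n gives slightly slower effective speeds
print(spectrum)
```

The example shows the claimed qualitative behavior: a discrete, monotonically decreasing set of effective velocities just below the maximal one, with the spacing controlled by the field strength (and, in the text's scenario, by the value of hbar).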

-------------------------------------------------------------------------------------

For background see the chapter TGD and GRT of the online book "Physics in Many-Sheeted Space-time" or the article Are neutrinos superluminal?.