### Large parity breaking in heavy ion collisions?

Ulla Matfolk reminded me of an old ScienceDaily article (see this) about the discovery of large parity breaking effects at RHIC in collisions of relativistic heavy ions at energies at which QCD suggests the formation of quark gluon plasma. Something exotic is observed, but it seems to be different from quark gluon plasma: long range correlations not characteristic of a plasma phase are present, and the particle production does not look like black body radiation. Similar findings have been made also at LHC, and also for proton-proton collisions. This suggests new physics, and M_{89} hadron physics is the TGD inspired candidate for it. In any case, I took the article as hype when I read it four years ago.

Now I read the article again and started to wonder on what grounds the authors claim large parity violation. What they claim to have observed are magnetic fields in which u and d quarks, with charges 2/3 and -1/3, move in opposite directions along the magnetic field lines (flux tubes in TGD). They assign these motions to the presence of strong parity breaking, much stronger than predicted by the standard model.

**1. Instanton density as origin of parity breaking**

What does TGD say? In TGD magnetic fields would form flux tubes; even flux tubes carrying monopole flux are possible. The findings suggest that the magnetic field was accompanied by an electric field and that both were, in an average sense, parallel to the flux tubes and to each other. Helical magnetic and electric fields, parallel in this average sense, could be associated with flux tubes in TGD.

The helical classical field patterns would break the parity of the ground state. The instanton density for the Kähler field, essentially E·B, measuring the non-orthogonality of E and B, would serve as a measure for the strength of the parity breaking occurring at the level of the ground state, and would thus be totally different from weak parity breaking. u and d quarks, with opposite signs of em charge, would move in opposite directions under the electric force.
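To make the parity-odd character of this quantity explicit (this is standard electrodynamics notation, not specific to TGD):

```latex
% Instanton density of an abelian field tensor F_{\mu\nu}:
I \;=\; \tfrac{1}{4}\,\epsilon^{\mu\nu\rho\sigma}\, F_{\mu\nu} F_{\rho\sigma}
  \;\propto\; \mathbf{E}\cdot\mathbf{B}
% Under parity: \mathbf{E} \to -\mathbf{E}, \ \mathbf{B} \to \mathbf{B},
% hence I \to -I : a pseudoscalar.
```

The overall sign of the proportionality depends on the metric and ε conventions; the point is only that a ground state with a non-vanishing average E·B is not parity invariant.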

**2. The origin of instanton density in TGD Universe**

What is the origin of these non-orthogonal magnetic and electric fields? Here I must dig down to a twenty year old archeological layer of TGD. Already in the seventies an anomalous creation of e^{+}e^{-} pairs with axion-like properties was observed in heavy ion collisions near the Coulomb wall. The effect was forgotten since it was not consistent with the standard model. The TGD explanation is in terms of pairs resulting from the decay of a lepto-pion, formed as a bound state of color excited electron and positron and created in the strong non-orthogonal electric and magnetic fields of the colliding nuclei.

**Objection**: Color excited leptons do not conform with the standard model view about color. In TGD this is not a problem, since colored states correspond to partial waves in CP_{2}, and both leptons and quarks can move in higher color partial waves, but usually with much higher mass.

A non-vanishing instanton density would mean that the non-orthogonal E and B created by the colliding nuclei appear at the *same* space-time sheet, so that a coherent instanton density E·B is created and gives rise to the generation of pairs. A large value of E·B means large parity breaking at the level of the ground state. One expects that in most collisions the fields of the colliding nuclei stay at different space-time sheets and therefore do not interfere directly (only their effects on charged particles sum up), but that with some probability the fields can enter the same space-time sheet and generate physics not allowed by the standard model.

**Objection**: The standard model predicts extremely weak parity breaking effects: this is due to the massivation of weak bosons; for massless weak bosons the parity breaking would be large. Indeed, if the non-orthogonal E and B are at different space-time sheets, no instanton density is generated.

**Objection**: The existence of a new particle in the MeV scale would change dramatically the decay widths of weak bosons. The TGD solution is that the colored leptons are dark in the TGD sense (h_{eff}=n×h, n>1). Large h_{eff} would make weak bosons effectively massless below the scaled up Compton length of weak bosons, proportional to h_{eff}, and large parity breaking could be understood also in the "conventional" manner.

**3. Strong parity breaking as signature of dark variant of M_{89} hadron physics**

This picture would apply also now, and it also leads to an increased understanding of M_{89} hadron physics, about which I have been talking for years and which is a TGD prediction for LHC. Very strong non-orthogonal E and B fields would be most naturally associated with colliding protons rather than nuclei. The energy scale is of course much, much higher than in the heavy ion experiment. Instanton-like space-time sheets, where the E and B of the colliding protons appear at the same sheet, could be formed as magneto-electric flux tubes (a priori this of course need not occur, since the fields can remain at different space-time sheets).

The formation of axion-like states as pairs of color excited quarks is expected to be possible. M_{89} hadron physics is a scaled up copy of the ordinary M_{107} hadron physics, with a mass scale higher by a factor 512. The natural possibility is pions of M_{89} hadron physics, but with large h_{eff}/h ≈ 512, so that the size of M_{89} pions could increase to the size scale of ordinary hadrons! This would explain why the heavy ion collisions involve energies in the TeV range appropriate for M_{89} hadrons, and thus Compton scales of order the weak scale, whereas the size scales are those of the QCD plasma of M_{107} hadron physics, even though the naive M_{89} Compton scale would be smaller by a factor 1/512. This brings to mind a line from a biblical story: the hands are Esau's hands but the voice is Jacob's voice! Quite generally, the failure of estimates based on the Uncertainty Principle could serve as a signature for non-standard values of h_{eff}: too large an energy scale for the effect as compared to its length scale.
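A minimal sketch of the scaling arithmetic behind the factor 512, assuming the p-adic length scale hypothesis (mass scale proportional to 1/√p ≈ 2^(−n/2) for a Mersenne prime p = M_n = 2^n − 1); the function name is mine, not from the text:

```python
def mass_scale_ratio(n_a: int, n_b: int) -> float:
    """Ratio of p-adic mass scales m(M_{n_a}) / m(M_{n_b}),
    assuming m is proportional to 2**(-n/2) for Mersenne prime M_n = 2**n - 1."""
    return 2.0 ** ((n_b - n_a) / 2)

# M_89 mass scale relative to ordinary M_107 hadron physics:
print(mass_scale_ratio(89, 107))  # 512.0 = 2**9

# With h_eff/h = 512 the Compton size, proportional to h_eff/m,
# is scaled back up to the ordinary M_107 hadronic size: 512 / 512 = 1.
print(mass_scale_ratio(89, 107) / 512)  # 1.0
```

This also makes the "Esau's hands, Jacob's voice" point quantitative: M_89 energies combined with M_107 size scales are consistent exactly when h_eff/h equals the mass scale ratio.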

To sum up, the strange findings about heavy ion and proton-proton collisions at LHC, for which I suggested M_{89} physics as an explanation, would indeed make sense, and one also ends up with a concrete mechanism for the emergence of dark variants of weak physics. The magnetic flux tubes playing a key role in TGD inspired quantum biology would also carry electric fields non-orthogonal to the magnetic fields, and the two fields would be twisted. As a matter of fact, the observed strong parity breaking would be very analogous to that observed in biology, if one accepts the TGD based explanation of chiral selection in living matter.

**4. Could this relate to non-observed SUSY somehow?**

Dark matter and spartners have something in common: it is very difficult to observe them! I cannot resist typing a fleeting crazy idea, which I have managed to fend off several times but which keeps popping up from the murky depths of the subconscious to tease me. TGD also predicts SUSY, albeit different from the standard one: for instance, separate conservation of lepton and baryon numbers is predicted, and fermions are not Majorana fermions. Whether the covariantly constant right-handed neutrino mode, which carries no quantum numbers except spin, could be seen as a Majorana lepton is an open question.

One can however assume that the covariantly constant right-handed neutrino, call it ν_{R}, and its antineutrino span an N=2 SUSY representation. Particles would appear as SUSY 4-plets: particle, particle + ν_{R}, particle + anti-ν_{R}, particle + ν_{R} + anti-ν_{R}. Covariantly constant right-handed neutrinos and antineutrinos would generate the least broken sub-SUSY. Sparticles should obey the same mass formula as particles, but with a possibly different p-adic mass scale.

But how can the mass scales of a particle and its spartners be so different if the right-handed neutrino does not have any weak interactions? Could it be that sparticles have the same p-adic mass scale as particles but are dark, having h_{eff}=n×h, so that the observation of a sparticle would mean observation of dark matter!?;-) A particle cannot of course transform to its spartner directly: already angular momentum conservation prevents this. For N=2 SUSY one can however consider the transformation of a particle to the state particle + X, where X is ν_{R} + anti-ν_{R}, representing a dark variant of the particle with the same quantum numbers. It would have a non-standard value h_{eff}=n×h of Planck constant. The resulting dark particles could interact and generate also the other states in the dark SUSY 4-plet. Dark photons could be spartners of photons and decay to biophotons. SUSY would be essential for living matter!

A critical reader asks whether leptopions could actually be pairs of (possibly color excited) N=2 SUSY partners of electron and positron, that is selectron and spositron. The masses of the (color) excitations making up the electropion must indeed be identical with the electron and positron masses. Should one give up the assumption that color octet excitations of leptons are in question? But if the color force is not present, what would bind the spartners together to form the electropion? Coulomb attraction, so that a dark SUSY analog of positronium would be in question? But why not positronium? If the spartner of the electron is color excited, one can argue that its mass need not be the same as that of the electron and could be of order the CP_{2} mass! The answer comes out only by calculating, and I am too old to start this business again;-). But what happens to the leptohadron model if a color excitation is not in question? Nothing dramatic: the mathematical structure of the leptohadron model is not affected, since the calculations involve only the assumption that the electropion couples to the electromagnetic "instanton" term fixed by anomaly considerations.

If this makes sense, the answers to four questions: *What is behind chiral selection in biology?*, *What is dark matter?*, *What are spartners and why are they seemingly not observed?*, *What is behind the various forgotten axion/pion-like states?* would have a lot in common!

For the new physics predicted by TGD see the chapter "New Particle Physics Predicted by TGD: Part I" of "TGD and p-Adic numbers".

## 9 Comments:

Matti, can you comment on this? http://arxiv.org/abs/1211.4848 Scrutinizing the Cosmological Constant Problem and a possible resolution

Denis Bernard, André LeClair

(Submitted on 20 Nov 2012 (v1), last revised 6 Mar 2013 (this version, v3))

We suggest a new perspective on the Cosmological Constant Problem by scrutinizing its standard formulation. In classical and quantum mechanics without gravity, there is no definition of the zero point of energy. Furthermore, the Casimir effect only measures how the vacuum energy *changes* as one varies a geometric modulus. This leads us to propose that the physical vacuum energy in a Friedmann-Lemaître-Robertson-Walker expanding universe only depends on the time variation of the scale factor a(t). Equivalently, requiring that empty Minkowski space is gravitationally stable is a principle that fixes the ambiguity in the zero point energy. On the other hand, if there is a meaningful bare cosmological constant, this prescription should be viewed as a fine-tuning.

We describe two different choices of vacuum, one of which is consistent with the current universe consisting only of matter and vacuum energy. The resulting vacuum energy density ρ_vac is constant in time and approximately k_c^2 H_0^2, where k_c is a momentum cut-off and H_0 is the current Hubble constant; for a cut-off close to the Planck scale, values of ρ_vac in agreement with astrophysical measurements are obtained. Another choice of vacuum is more relevant to the early universe consisting of only radiation and vacuum energy, and we suggest it as a possible model of inflation.

Comments: 25 pages, 2 figures. Published version, Phys. Rev. D. Many additional clarifying remarks and some additional results: (favorable) comparison with low z supernova observations, heuristic explanation and derivation of how gravity perhaps arises from quantum vacuum fluctuations
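A rough back-of-the-envelope check of the quoted estimate ρ_vac ≈ k_c² H_0² in natural units, with k_c set to the Planck mass; the numerical values below are standard approximate figures, not taken from the paper:

```python
# All quantities in natural units (GeV); values are approximate.
M_PLANCK = 1.22e19   # GeV, Planck mass; assumed cut-off k_c ~ M_PLANCK
H0 = 1.44e-42        # GeV, Hubble constant (~67 km/s/Mpc converted via hbar)

rho_vac = (M_PLANCK * H0) ** 2   # the paper's estimate k_c^2 * H_0^2, in GeV^4
rho_obs = 2.5e-47                # GeV^4, observed dark energy density (approximate)

print(rho_vac)            # ~3e-46 GeV^4
print(rho_vac / rho_obs)  # ~10: same order of magnitude for k_c at the Planck scale
```

So for a cut-off near (or slightly below) the Planck scale the estimate indeed lands within an order of magnitude of the observed value, as the abstract states.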

interesting indeed

"Let us begin by ignoring gravity and considering only quantum mechanics in

Minkowski space. Wheeler and Feynman once estimated that there is enough zero

point energy in a teacup to boil all the Earth’s oceans. This has led to the fantasy

of tapping this energy for useful purposes, however most physicists do not take such

proposals very seriously, and in light of the purported seriousness of the CCP, one

should wonder why. In fact, there is no principle in quantum mechanics that allows a

proper definition of the zero of energy: as in classical mechanics, one can only measure

changes in energy, i.e. all energies can be shifted by a constant with no measurable

consequences. Similarly, the rules of statistical mechanics tell us that probabilities of

configurations are ratios of (conditioned) partition functions, and these are invariant

2

if the partition functions are multiplied by a common factor as induced by a global

shift of the energies. Based on his understanding of quantum electrodynamics and

his own treatment of the Casimir effect, Schwinger once said [7], “...the vacuum is

not only the state of minimum energy, it is the state of zero energy, zero momentum,

zero angular momentum, zero charge, zero whatever.” One should not confuse zero

point energy with “vacuum fluctuations” which refer to loop corrections to physical

processes: photons do not scatter off the vacuum energy, otherwise they would be

unable to traverse the universe. All of this strongly suggests that it is impossible to

harness vacuum energy in order to do work, which in turn calls into question whether

it could be a source of gravitation."

Comment on the first post.

a) The physical effect is accelerated expansion. "Cosmological constant" is a notion of a model trying to explain it in a general relativistic framework.

One can introduce to the matter part of Einstein's equations an energy density with negative pressure, proportional to the metric. Or one can modify Einstein's equations by a cosmological term.

The proposal obviously assumes the first option, since one cannot modify Einstein's equations with a term depending explicitly on time. With this interpretation the proposal is sensible, and one can say that Einstein's greatest blunder was indeed a blunder!

The resulting physical picture resembles that of TGD.

a) First, first principles: in the TGD framework I assume that GRT space-time is obtained from many-sheeted space-time by lumping together the sheets of the many-sheeted space-time to a single sheet, regarded as a region of empty Minkowski space, and by modifying the flat Minkowski metric to the sum of this metric with the deviations of the metrics of the sheets from the Minkowski metric. The same holds for gauge potentials. This GRT space-time need not be representable as a surface in M^4×CP_2.

b) A further assumption, which is not absolutely necessary but is very natural, is that RW cosmologies are embeddable as vacuum extremals. The flatness of 3-space (Lorentz invariant light-cone proper time constant) implying criticality fixes the cosmology apart from a single constant characterising its duration. Pressure is negative, as in the presence of a cosmological constant. The expansion is accelerating, the mass density approaches infinity in finite time, and a transition to radiation dominated cosmology must take place.

c) In TGD this cosmology describes critical transition periods in various size scales (sizes of causal diamonds). This cosmology provides the TGD counterpart of inflationary cosmology as the critical transition period from a gas of cosmic strings to the radiation dominated phase.

It describes also the recent accelerating expansion. Accelerated expansion and inflation are mathematically scaled variants of the same phenomenon: criticality.

Summary: the TGD analog of cosmological constant indeed depends on cosmic time as suggested in the article.

Comment on second comment.

I agree with the authors. The idea of vacuum energy is one of those strange evergreens which should have been dead for aeons but still hangs around.

If defined quantum field theoretically, it certainly is only a signature of a mathematical anomaly of the theory, since even its density is infinite. For instance, it breaks Poincare invariance. It vanishes for supersymmetric theories, and this is one good reason to believe that SUSY in some form, certainly not the simplest N=1, is correct.

One can consider also other identifications, such as the vacuum energy density of a Higgs-like field, and these definitions are mathematically sensible.

In my opinion, Higgs and its variants used to describe massivation are only reparametrizations. The existence of the Higgs particle does not imply a Higgs vacuum expectation! This is routinely forgotten by every Higgs hypeist and blog and, I am afraid, by most colleagues too;-). Deep thinking is not in fashion nowadays; it is very slow and steals time from building social networks;-).

Wow, I can hardly believe how this makes sense. It seemed so bewildering just a few years ago...

I think you might be mistaken about the author choosing the first option; it looks like the second option is chosen, and they explicitly introduce time dependence into the equations. This is why this author and/or his colleagues have also stumbled upon the Lambert W function in their work on the approximate solution of the transcendental equations enumerating the Riemann zeros. This function also shows up as a fundamental constant in a deep theorem of optimal control theory about optimal feedback in the presence of disturbances...!

Quote from the same article: "As explained in the Introduction, we are interested in the vacuum energy of a free quantum field in the non-static FLRW background spacetime geometry. For simplicity we consider a single scalar field, with action [31] (13). In order to simplify the explicit time dependence of the action, and thereby simplify the quantization procedure, define a new field χ via φ = χ/a^{3/2}. Then the action (13), after an integration by parts, becomes (14), where (15). The advantage of quantizing χ rather than φ is that most of the time dependence is now in A, so that there is no spurious time dependence in the canonical momenta, etc." [The displayed equations (13)-(15) are not reproduced here.]

Hi,

By the first option I meant explicit time dependence: introducing an additional "cosmological constant" term to the energy momentum tensor. This can of course depend on cosmic time, unlike Lambda in the modification of Einstein's equations.

Ah hah, thanks for the clarification, I didn't mean to imply you were wrong.. I chose the wrong words!

http://arxiv.org/abs/0911.0084

The long-range interactions between branes in diverse dimensions D0 - D8. Can this be seen in terms of octonions?

I cannot answer this. Branes of varying dimension do not appear in TGD. If one necessarily wants this kind of objects (say, to gain acceptance from string theorists;-), one could call space-time surfaces 3-branes, light-like orbits of partons 2-branes, partonic 2-surfaces and string world sheets 1-branes, and braid strands bounding string world sheets 0-branes. These objects are however very different from branes physically.

I must confess that I lack motivation for following these brane world fantasies: they are so hopelessly far from reality.
