https://matpitka.blogspot.com/2019/03/

Monday, March 25, 2019

Why is the interstellar gas ionized?

I became aware of a new-to-me cosmological anomaly. FB really tests my tolerance threshold, but it is also extremely useful.

The news is that the sparsely distributed hot gas in the space between galaxies is ionized. This is difficult to understand: as the universe cooled below the temperature at which hydrogen atoms became stable, the gas should have become neutral in standard cosmology.

In biosystems there is a similar problem. Why are biologically important ions indeed ions at physiological temperatures? Even the understanding of electrolytes is plagued by a similar problem. It sounds like sacrilege to even mention to a fashionable, deeply reductionistic popular physicist - talking fluently about Planck scale physics, multiverses, and the landscape - the scandalous possibility that electrolytes might involve new physics! The so-called cold fusion is however now more or less an empirical fact (see this) and takes place in electrolytes - also living matter is an electrolyte.

The TGD explanation is based on the hierarchy of Planck constants h_eff = n×h_0 predicted by adelic physics; n serves as a kind of IQ of the system.

  1. The energy of radiation with very low frequencies - such as EEG frequencies - can be in the range of atomic ionization energies, typically in the UV range, by E = h_eff×f. Hence interaction between long and short length scales characterized by different values of h_eff becomes possible, and in TGD the magnetic body (MB) in long scales would indeed control bio-matter at short scales in this manner. Cyclotron radiation from the magnetic flux tubes of MB carrying dark ions would serve as a control tool, and Josephson radiation from the cell membrane would be utilized to transfer sensory input to MB.

  2. The TGD variant of Nottale's hypothesis predicts really large values of h_eff. One would have h_eff = h_gr = GMm/v_0 at the magnetic flux tubes connecting masses M and m and carrying gravitons (v_0 < c is a parameter with dimensions of velocity). What is important is that at gravitational flux tubes the cyclotron energies would not depend on m, being thus universal. For instance, biophotons with energies in the UV and visible range would result from dark photons with large h_eff = h_gr for frequencies even in the EEG range and below.

The ordinary photons resulting from dark photons would ionize biologically important atoms and molecules. In interstellar space the situation would be the same: dark photons transforming to ordinary higher energy photons would ionize the interstellar gas.
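To get a feel for the orders of magnitude, here is a minimal back-of-the-envelope sketch in Python (my own check, assuming the hydrogen ionization energy 13.6 eV as a representative UV-range target):

```python
# Rough check: how large must h_eff/h be for an EEG-frequency dark photon
# to carry an atomic ionization energy? (Toy numbers; hydrogen assumed.)
h = 6.626e-34         # Planck constant, J*s
eV = 1.602e-19        # J per eV

f_eeg = 10.0          # Hz, alpha-band EEG frequency
E_target = 13.6 * eV  # hydrogen ionization energy, UV range

E_ordinary = h * f_eeg           # energy of an ordinary 10 Hz photon
ratio = E_target / E_ordinary    # required h_eff/h

print(f"ordinary 10 Hz photon: {E_ordinary / eV:.2e} eV")
print(f"required h_eff/h for 13.6 eV at 10 Hz: {ratio:.2e}")
# -> roughly 3e14, illustrating why very large values of h_eff are needed.
```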

This relates closely to another cosmological mystery.

  1. Standard model based cosmology cannot explain the origin of magnetic fields appearing in all scales. In Maxwell's theory magnetic fields require currents, and in cosmology thermal equilibrium does not allow currents in long length scales. In TGD, however, magnetic flux tubes carrying monopole fluxes are possible due to the topology of CP2. They would have a closed 2-surface rather than a disk as cross section. They are stable and do not require a current to generate the magnetic field. These flux tubes would be carriers of dark matter generating the dark cyclotron radiation ionizing the interstellar gas in the scale of the wavelength, which would be astrophysical.

  2. There is also a second kind of magnetic flux tube, for which the cross section is a sphere but the flux vanishes since the sphere is contractible. These flux tubes are not stable against splitting. There would be no magnetic field in the scale of the flux tube. The magnetic field is however non-vanishing locally, and ions in it generate dark cyclotron radiation. These flux tubes would naturally carry gravitons and photons and could mediate gravitational and electromagnetic interactions: gravitons and photons (also dark ones) would propagate along them.

  3. This picture leads to a model for the formation of galaxies as tangles of long, monopole flux carrying cosmic strings looking like a dipole field in the region of the galaxy (for the TGD based model of quasars see this): the energy of these tangles would transform to ordinary matter as the cosmic strings gradually thicken - this corresponds to cosmic expansion. The process would be the TGD analog of inflation. Also stars and even planets could be formed in this manner, and the thickened cosmic strings would be carriers of dark matter in the TGD sense. The model explains the flat galactic rotation curves trivially.

  4. Dark ions responsible for the intergalactic ionization could reside at these monopole flux tubes or at the flux tubes with vanishing magnetic flux mediating gravitational interactions. Which option is correct? Or can one consider both options?

    I learned some time ago that a period of T = 160 minutes appears in astrophysics in many scales from stars to quasars (see this). Its origin is not known. The observation is that dark cyclotron photons created by Fe2+ ions in an interstellar magnetic field of about .2 nT have a period of 160 minutes.

    1. In TGD inspired biology the endogenous magnetic field is about .2 Gauss and now the time scale is t = .1 seconds, which corresponds to the alpha rhythm, the fundamental biorhythm. 160 minutes would correspond to a cosmic alpha rhythm! Also cyclotron photons with this frequency could induce ionization in interstellar scales. This would require an h_gr which is higher by a factor T/t ≈ 10^5. The mass M would now be 10^5 times larger than for the ordinary alpha frequency, for which M is naturally proportional to the mass of Earth: M = k_E M_E. The solar mass is 3.33×10^5 times M_E. Could the dark matter in question be associated with the flux tubes connecting the Sun to smaller masses m and mediating gravitational interaction? (A numerical check of these ratios is sketched after this list.) The ratio of Planck constants would be

      h_{gr,S}/h_{gr,E} = (k_S/k_E) × (v_{0,E}/v_{0,S}) × (M_S/M_E).

      This would demand

      (k_S/k_E) × (v_{0,E}/v_{0,S}) = 1/3.33 ≈ .3.

    2. Note that the 160 minute period was discovered in the dynamics of the Sun: no mechanism is known for an oscillation coherent over so long a length scale. Could this mean that the MB of the Sun controls the dynamics of the Sun just as the MB of Earth controls the dynamics of the biosphere? Is the Sun a conscious, intelligent entity?
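Just to check the ratios in item 1 above, a minimal Python sketch (my own arithmetic; k_S, k_E and v_0 are left symbolic since the text does not fix them):

```python
# Check the factor T/t and the resulting constraint on (k_S/k_E)*(v_0E/v_0S).
T = 160 * 60                 # cosmic "alpha rhythm" period, seconds
t = 0.1                      # biological alpha rhythm period, seconds
ratio_T = T / t              # required increase of h_gr
print(f"T/t = {ratio_T:.2g}")                        # ~1e5

M_ratio = 3.33e5             # M_Sun / M_Earth
k_v_factor = ratio_T / M_ratio                       # demanded (k_S/k_E)*(v_0E/v_0S)
print(f"(k_S/k_E)*(v_0E/v_0S) = {k_v_factor:.2f}")   # ~0.29, close to 1/3.33
```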


See the article Could 160 minute oscillation affecting Galaxies and the Solar System correspond to cosmic "alpha rhythm"? or the chapter About the Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are identical.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Sunday, March 24, 2019

Evidence for omega meson of M89 hadron physics from CMS?

Lubos Motl tells that CMS has reported evidence for a bump at 400 GeV decaying to top quark pairs. The local significance is 3.5 sigma; the look-elsewhere effect reduces it to 1.5 sigma. What was searched for was a new neutral scalar or pseudoscalar Higgs particle predicted by SUSY scenarios. The largest deviation from the standard model background was observed for the pseudoscalar Higgs.

Lubos wants to interpret this as evidence for the CP odd Higgs called "A" (C even, P odd). The article with the title "Search for heavy Higgs bosons decaying to a top quark pair in proton-proton collisions at s^(1/2) = 13 TeV" tells that the search is sensitive to the spin of the resonance. I do not however know how well the spin and CP of the decaying resonance candidate are known.

It is assumed that the resonance candidate is produced as two gluons annihilate dominantly to a top quark pair, which couples resonantly to the Higgs candidate; the resonance in turn decays dominantly to a top quark pair. There are two effects involved: a resonance-like contribution and, for the pseudoscalar Higgs, interference with the contribution of the ordinary Higgs. The parity of the pseudoscalar Higgs shows itself in the angular distribution. The CP = -1 character in principle shows itself too, since it introduces to the amplitude the sign -1. The CP transformation of the final state, consisting of superpositions of RR or LL fermion pairs, is induced by (RR,LL) → -(LL,-RR). If the initial state consists of two gluons, one expects that CP acts trivially.

TGD almost-predicts a scaled variant of hadron physics at LHC. Mersenne prime M89 characterizes this hadron physics whereas ordinary hadron physics corresponds to Mersenne prime M107. Since there exists a handful of bumps with masses differing by a factor 512 from the masses of ordinary mesons, I have the habit of scaling down by this factor the masses of the bumps (usually identified as candidates for SUSY Higgses) reported from LHC. This habit also means killing all the desperate attempts of Lubos to interpret them in terms of SUSY Higgses.

And indeed. Now the scaling down of 400 GeV gives 781 MeV, which is very precisely the mass 782 MeV of the ω meson having C = P = -1 and spin 1. I am terribly sorry, Lubos. I am only a messenger, do not kill me.
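As a sanity check of the arithmetic, the following small Python sketch (my own check) scales the 400 GeV bump down by the factor 2^((107-89)/2) = 512 relating the M89 and M107 mass scales:

```python
# p-Adic mass scales are proportional to 2^(-k/2); going from M89 to M107
# hadron physics the masses therefore scale down by 2^((107-89)/2) = 2^9 = 512.
scale = 2 ** ((107 - 89) // 2)
m_bump_GeV = 400.0
m_scaled_MeV = m_bump_GeV * 1000 / scale
print(scale, f"{m_scaled_MeV:.1f} MeV")   # 512, 781.2 MeV vs m(omega) = 782 MeV
```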

Could the zero-helicity state of this spin 1 meson behave like a pseudoscalar and explain the finding? By looking at the article Production of CP-even and CP-odd Higgs bosons at Muon colliders one gets some idea about the symmetries of the amplitudes involved also in the recent case.

  1. If the resonance is scalar or pseudoscalar, the initial state helicities must be opposite. In the spin 1 case there is also a contribution proportional to a matrix element of the spin 1 rotation matrix corresponding to a rotation transforming to each other the axes defined by the initial and final state cm momenta of the gluons and top quarks.

  2. For the pseudovector ω the transformation of the propagator part of the amplitude (the resonance) under P is the same as for the pseudoscalar Higgs (change of sign), so that ω is consistent with A in this respect.

  3. The coupling of a (pseudo)vector particle to a ttbar pair is of the form LL+RR. For a pseudoscalar it is of the form LR. The massivation of fermions, mixing L and R, allows the coupling to the longitudinal zero helicity component of the spin 1 particle to mimic the coupling to a pseudoscalar. For massive fermions the gradient coupling of a (pseudo)scalar to fermions is equivalent with the ordinary (pseudo)scalar coupling.

    Remark: Note that the longitudinal components of weak bosons are proportional to the gradient of the weakly charged part of the Higgs field.

    Remark: Higgs mechanism can be argued to be a pseudo solution to the massivation problem, which only reproduces fermion masses but does not predict them (Higgs couplings must be chosen proportional to fermion masses). If fermions get masses by some other genuine massivation mechanism, Higgs couplings proportional to mass follow automatically from gradient coupling. Fermion masses in turn follow in TGD from p-adic thermodynamics.

  4. For the Higgs the decay width is about 10^-5 of the mass, and one expects that the decay width should also now be of the same order of magnitude. The actual decay width of the bump is 5 per cent of the mass, and it is not clear to me how kinematics could cause so large a difference. To me this strongly suggests that strong rather than electroweak interactions are involved, as TGD indeed predicts. (A small numerical comparison follows below.)
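For concreteness, a minimal comparison of the two width-to-mass ratios (my own numbers; the value 4.1 MeV for the standard model Higgs width is assumed here only for illustration):

```python
# Width-to-mass ratios: standard model Higgs vs the 400 GeV bump.
gamma_h, m_h = 4.1e-3, 125.0    # GeV; SM Higgs width assumed for illustration
m_bump = 400.0                  # GeV
gamma_bump = 0.05 * m_bump      # 5 per cent of the mass, as reported

print(f"Higgs: Gamma/m = {gamma_h / m_h:.1e}")                     # ~3e-5
print(f"bump:  Gamma = {gamma_bump:.0f} GeV, Gamma/m = {gamma_bump / m_bump:.2f}")
```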

See the article Three anomalies of hadron physics from TGD perspective or the chapter New physics predicted by TGD: part I.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, March 20, 2019

The masses of hadrons, weak bosons, and Higgs in p-adic mass calculations

The TGD based model for the new features of the spin puzzle of the proton led to a model of the baryon in which the quarks and antiquarks at the ends of the flux tubes connecting the valence quarks to a triangular structure replace the sea quarks, which represent a rather ugly feature of perturbative QCD. This also led to a model allowing to understand the successful predictions for baryon masses and magnetic moments in the old Gell-Mann quark model and also how constituent quark masses and current quark masses relate. It also turned out to be possible to understand the masses of baryons and mesons in the p-adic framework.

This led to the question about the weak boson and Higgs masses, which had remained poorly understood p-adically: the success story continued. The secret of the success is that p-adic arithmetics combined with some very mild physical assumptions is an extremely powerful constraint and leads to predictions with 1 per cent accuracy. These calculations are contained in the previous blog post but due to their importance I decided to post them separately.

Nucleon mass

This model could also allow to understand how the old-fashioned Gell-Mann quark model emerges, with constituent quarks having masses of order m_p/3, about 310 MeV, much larger than the current quark masses of u and d quarks, which are of order 10 MeV.

  1. I have proposed that current quark + color flux tube would correspond to a constituent quark, with the mass of the color flux tube giving the dominating contribution in the case of u and d quarks. If the sea quarks at the ends of the flux tubes are light, as perturbative QCD suggests, the color magnetic energy of the flux tube would give the dominating contribution.

  2. One can indeed understand, using p-adic mass calculations, why the Gell-Mann quark model predicts the masses of baryons so well. What is special in p-adic calculations is that it is mass squared which is additive, being essentially the eigenvalue of the scaling generator L_0 of the super-conformal algebra:

    m_p^2 = ∑_n m_{p,n}^2

    This is due to the fact that energy is replaced by mass squared, which is a Lorentz invariant quantity and a conformal charge. Mass squared contributions with different p-adic primes cannot be added and must be mapped to their real counterparts first. On the real side it is the masses rather than the mass squared values which are additive.

  3. The baryon mass receives contributions from valence quarks and from flux tubes. The flux tubes have the same p-adic prime characterizing the hadron but the quarks have a different p-adic prime, so that the total flux tube contribution m^2(tube)_p, mapped by canonical identification to m_R(tubes) = (m^2_R(tubes))^(1/2), and the analogous valence quark contributions to the mass add up. (A small numerical illustration of the canonical identification follows after this list.)

    m_B = m_R(tube) + ∑_q m_R(valence,q).

    The map m_p^2 → m_R^2 is given by the canonical identification

    x_p = ∑_n x_n p^n → x_R = ∑_n x_n p^(-n),

    which maps p-adic numbers to reals in a continuous manner.

  4. The valence quark contribution is very small for baryons containing only u and d quarks, but for baryons containing strange quarks it is roughly 100 MeV per strange quark. If the dominating constant contribution from the flux tubes adds to the contribution of the valence quarks, one obtains the Gell-Mann formula.
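As a small illustration of the canonical identification used above, here is a toy Python sketch (my own example, not from the calculations themselves):

```python
def canonical_identification(digits, p):
    """Map sum_n x_n p^n (digits x_0, x_1, ... of a p-adic expansion) to the
    real number sum_n x_n p^(-n)."""
    return sum(x * p ** (-n) for n, x in enumerate(digits))

# Example: the p-adic number 2 + 3*5 + 1*5^2 (p = 5) maps to 2 + 3/5 + 1/25.
print(canonical_identification([2, 3, 1], 5))   # 2.64
```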

A detailed estimate for nucleon mass using p-adic mass calculations shows the power of p-adic arithmetics even in the case that one cannot perform a complete calculation.
  1. The flux tube contribution can be assumed to be independent of the flux tube in the first approximation. Its scale is determined by the Mersenne prime M_k = 2^k - 1, k = 107, characterizing hadronic space-time sheets (flux tubes). The electron corresponds to the Mersenne prime M_127 and the mass scales are therefore related by the factor 2^((127-107)/2) = 2^10: scaling the electron mass m_e,127 = .5 MeV gives the mass m_e,107 ≈ .5 GeV, the mass the electron would have if it corresponded to the hadronic p-adic length scale.

    p-Adic mass calculations give for the electron mass the expression

    m_e ≈ [1/(k_e + X)^(1/2)] × 2^(-127/2) × m(CP2).

    k_e = 5 corresponds to the lowest order contribution. X < 1 corresponds to the higher order contributions.

  2. By additivity of mass squared for the flux tubes one has m^2(tubes) = 3m^2(tube,p) and m_R(tubes) = 3^(1/2) m(tube,R): one has the factor 3^(1/2) rather than 3. Irrespective of whether m_R(tubes) can be calculated from p-adic thermodynamics or not, it has the general form m^2(tube,p) = kp in the lowest order - the higher orders are very small and contribute to m^2_R at most 1/p. k is a small integer, so that even if one cannot calculate its precise value, one has only a few integers from which to choose.
    The real mass from flux tubes is given by

    m_R = (3k_p/M_107)^(1/2) × m(CP2) = (3k_p/5)^(1/2) × m(e,107).

    For k_p = 6 (for the electron one has k_e = 5) one has m_R(tubes) = 949 MeV, to be compared with the proton mass m_p = 938 MeV. The prediction is too large by about 1 per cent.


  3. Besides being 1 per cent too large, the mass would leave no room for the valence quark contributions, which are about 1 per cent too (see this). The error would naturally be due to the fact that the formula for the electron mass is approximate, since higher order contributions have been neglected. Taking this into account means replacing k_e^(1/2) = 5^(1/2) with (5+X)^(1/2), X < 1, in the formula for m_R. This implies the replacement m_e,107 → (5/(5+X))^(1/2) m_e,107. The correct mass consistent with the valence quark contribution is obtained for X = .2. The model would therefore also fix the precise value of m(CP2) and the CP2 radius. (The numbers are collected in the sketch below.)
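The following short Python sketch (my own check, using the rounded value m(e,107) ≈ 500 MeV from item 1) reproduces the numbers quoted above for the flux tube contribution and the effect of the higher order correction X:

```python
# Flux tube contribution to the nucleon mass in the lowest p-adic order.
m_e107 = 500.0                 # MeV, electron mass scaled to k = 107 (rounded)
k_p, k_e = 6, 5                # lowest-order integers for flux tubes and electron

m_tubes = (3 * k_p / k_e) ** 0.5 * m_e107
print(f"m_R(tubes) = {m_tubes:.0f} MeV")        # ~949 MeV vs m_p = 938 MeV

# The higher order correction X to the electron mass formula lowers the estimate.
X = 0.2
m_tubes_corr = (k_e / (k_e + X)) ** 0.5 * m_tubes
print(f"with X = 0.2: {m_tubes_corr:.0f} MeV")  # ~930 MeV, leaving about
                                                # 1 per cent for the valence quarks
```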

What about the masses of Higgs and weak bosons?

p-Adic mass calculations give excellent predictions for the fermion masses, but the situation for the weak boson masses is less clear, although it seems that the elementary fermion contribution to the p-adic mass squared should be the sum of the mass squared for the fermion and antifermion forming the building bricks of the gauge bosons. For W the mass should be smaller, as it indeed is, since the neutrino contribution to mass squared is expected to be smaller. Besides this there can also be a flux tube contribution, and a priori it is not clear which contribution dominates. Assume in the following that the fermion contributions dominate over the flux tube contribution in the mass squared: this is the case if the second order contributions are p-adically O(p^2).

Just for fun one can ask how strong conclusions p-adic arithmetics allows one to draw about the W and Z masses m_W = 80.4 GeV and m_Z = 91.2 GeV. The mass ratio m_W/m_Z allows a group theoretical interpretation. The standard model mass formulas in terms of the Higgs vacuum expectation v = 246.22 GeV read as m_Z = (g^2 + g'^2)^(1/2) v/2 and m_W = g v/2 = cos(θ_W) m_Z, cos(θ_W) = g/(g^2 + g'^2)^(1/2).

  1. A natural guess is that the Higgs expectation v = 246.22 GeV corresponds to a fundamental mass scale. The simplest guess for v would be the analog of the electron mass k_e^(1/2) m_127, k_e = 5, in the p-adic scale M_89 assigned to weak bosons: this would give v = 2^19 × m_e ≈ 262.1 GeV: the error is 6 per cent. For k_e = 4 one would obtain v = 2^19 × (4/5)^(1/2) m_e ≈ 234.5 GeV: the error is now 5 per cent. (These estimates are checked numerically in the sketch after this list.)

    For k_e = 1 the mass scale would correspond to the lower bound m_min = 117.1 GeV, considerably higher than the Z mass. The Higgs mass is consistent with this bound. k_h = 1 is the only possible identification, and the second order contribution to mass squared in m_h^2 ∝ k_h + X_h must explain the discrepancy. This gives X_h = (m_h/m_min)^2 - 1 ≈ .141.

    The Higgs mass can be understood but the gauge boson masses are a real problem. Could the integer characterizing the p-adic prime of W and Z be smaller than k = 89, just as k(π) = 111 = k(p) - 4 differs from k_p?


  2. Could one understand cos(θ_W) = m_W/m_Z ≈ .8923 as a ratio (k_W/k_Z)^(1/2) obtained using first order p-adic mass formulas for m_W and m_Z, characterizing the masses in the lowest order by an integer k? For k_W = 4 and k_Z = 5 one would obtain cos(θ_W) = (k_W/k_Z)^(1/2) = .8944..: the error is .1 per cent. For k_Z = 89 one would however have m_Z = v = m_e,89, which is quite too high. k = 86 would give m_Z = 92.7 GeV: the error is 1.6 per cent. For m_e ∝ (5 + X_e)^(1/2), X_e ≈ .2 deduced from the proton mass, the mass is scaled down by (5/(5 + X_e))^(1/2), giving 90.0 GeV, which is smaller than 91.2 GeV: the mass is now too small. Higher order corrections via X_Z = .05 give the correct mass.

    k = 86 is however not consistent with the octave rule, so that one must have k_Z = k_W = 85 with (k_W, k_Z) = (8, 10). This strongly suggests that the p-adic mass squared is a sum of two identical contributions labelled by k_W = 4 and k_Z = 5: this is what one indeed expects from p-adic thermodynamics and the representation of gauge bosons as fermion-antifermion bound states. Recall that also for hadrons the proton and the baryonic space-time sheet correspond to M_107 and the pion to k(π) = k(p) - 4 = 111.

  3. There can also be corrections characterized by a different p-adic prime: the electromagnetic binding energy between the fermion and antifermion forming the Z boson could be such a correction and would reduce the Z mass and therefore increase the Weinberg angle, since the W boson does not receive this correction. Higher order corrections to m_W and m_Z however replace the expression for the Weinberg angle with cos(θ_W) = ((k_W + X_W)/(k_Z + X_Z))^(1/2) and allow one to obtain the correct Weinberg angle. Note that the canonical identification allows this if the second order correction is of the form r p^2/s, with s a small integer.
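A quick numerical check of the estimates in items 1 and 2 (my own sketch; the rounded electron mass .5 MeV is used as in the text):

```python
# p-Adic estimates for the Higgs vacuum expectation and the W/Z mass ratio.
m_e = 0.5e-3                        # GeV, rounded electron mass as in the text
scale_89 = 2 ** ((127 - 89) // 2)   # = 2^19, scaling from k = 127 to k = 89

v_5 = scale_89 * m_e                          # k_e = 5 analog of the electron mass
v_4 = scale_89 * (4 / 5) ** 0.5 * m_e         # k_e = 4
v_1 = scale_89 * (1 / 5) ** 0.5 * m_e         # k_e = 1, the lower bound m_min
print(f"v(k=5) = {v_5:.1f} GeV, v(k=4) = {v_4:.1f} GeV, m_min(k=1) = {v_1:.1f} GeV")
# -> about 262, 234 and 117 GeV, vs v = 246.2 GeV and m_H = 125 GeV

cos_thetaW = (4 / 5) ** 0.5                   # (k_W/k_Z)^(1/2) for (k_W, k_Z) = (4, 5)
print(f"(k_W/k_Z)^(1/2) = {cos_thetaW:.4f}")  # 0.8944
```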
To sum up, it is fair to say that p-adic mass calculations now allow one to understand both elementary particle masses and hadron masses. One cannot calculate everything, but p-adic arithmetics with mild empirical constraints fixes the masses with 1 per cent accuracy.

See the article Two anomalies of hadron physics from TGD perspective or the chapter New Physics predicted by TGD: Part I.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Sunday, March 17, 2019

Does 160 minute period define a universal "alpha rhythm"?


Sometimes there is a flood of interesting links in FB. This happened also now, after a dry period lasting for months. First came new results related to the Aleph anomaly, then new discoveries related to the spin puzzle of the proton, and now a finding suggesting an analog of the fundamental 10 Hz biorhythm in cosmic scales.

The posting in Tallbloke's talkshop titled Evidence for a 160 minute oscillation affecting Galaxies and the Solar System tells about the finding by Valery Kotov that many celestial objects have parameters which correspond to a fundamental period of 160.0101 minutes. There is overwhelming evidence that a non-local phenomenon is in question. For instance, the Earth day is 9 times 160 minutes.

The article gives a long list of links to works demonstrating the presence of this period. See for instance Kotov S.V, Kotov V.A., 1997, Astron. Nachr. 318, 121-128.

This period occurs in many contexts.

  1. Infrasonic oscillations on the surface of the Sun, measured by the Doppler effect, correspond to a period of 160.01 minutes. These oscillations were discovered by Kotov and have been confirmed by several other laboratories.
  2. Variations of the luminosity of the Sun and some other stars.
  3. The period of variations of Delta Scuti stars has been found to be 162 +/- 4 min and that of RR Lyrae stars 161.4 +/- 1.6 minutes.
This finding relates in an interesting manner to the TGD based model of living systems, in which cyclotron frequencies in the endogenous magnetic field of .2 Gauss = .2×10^-4 Tesla play a key role.
  1. For iron the cyclotron frequency in this field is around 10 Hz, which is the fundamental biorhythm - the alpha rhythm.

  2. A 160 min cyclotron period for Fe would correspond to a magnetic field of .2 nT.

  3. Interstellar and galactic magnetic field strengths are not far from this value.

    • A value of 1 nT for the galactic magnetic field is claimed here. This would give a 32 min period.

    • The value 0.1 nT for the interstellar magnetic field is claimed here.

    • The value .3 nT for the interstellar magnetic field is claimed here.

The proposed value .2 nT is half-way between these two values. Maybe there is a fundamental biorhythm in cosmic scales! This is more or less predicted by the TGD based vision about quantum coherence in all length scales, made possible by the hierarchy of Planck constants h_eff = n×h_0 predicted to define phases of ordinary matter identifiable as dark matter.
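The cyclotron numbers above are easy to check. The following Python sketch (my own check, assuming Fe2+ with charge 2e and mass of about 56 u, as in the earlier posting) computes the cyclotron frequency in the two fields:

```python
import math

# Cyclotron frequency f_c = q*B/(2*pi*m) for Fe2+ (charge 2e, mass ~56 u assumed).
e, u = 1.602e-19, 1.6605e-27     # elementary charge (C), atomic mass unit (kg)
q, m = 2 * e, 56 * u

def f_cyclotron(B):
    return q * B / (2 * math.pi * m)

f_bio = f_cyclotron(0.2e-4)      # endogenous field, .2 Gauss
f_cosmic = f_cyclotron(0.2e-9)   # interstellar field, .2 nT
print(f"f_c in .2 Gauss: {f_bio:.1f} Hz")               # ~11 Hz, alpha-rhythm range
print(f"period in .2 nT: {1 / f_cosmic / 60:.0f} min")  # ~150 min, near 160 min
```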

For the large values of h_eff predicted by TGD the energies of the dark cyclotron photons can be above the thermal threshold in living matter. This implies that the dark cyclotron radiation can have non-trivial effects on living matter: this kind of effects actually led to the idea about the hierarchy of Planck constants. Now it can be deduced from what I call adelic physics. The proposal is that bio-photons covering at least the visible and UV range result as dark photons with, say, EEG frequencies transform to ordinary photons.

In TGD inspired biology the cyclotron frequencies define coordinating rhythms, and the recent proposal is that sensory perception, motor actions, and long term memory all rely on a universal mechanism based on the formation of holograms and their reading using a dark cyclotron photon beam as a reference beam. Could this mean that this mechanism is used even in galactic and cosmic scales, so that life would be everywhere, as the TGD based theory of consciousness predicts?

See the article Could 160 minute oscillation affecting Galaxies and the Solar System correspond to cosmic "alpha rhythm"? or the chapter About the Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are identical.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

A new twist in the spin puzzle of proton


A new twist has appeared in the proton spin crisis.

  1. u and d sea antiquarks contribute differently to the proton spin, which looks very strange if the sea quarks originate from the decays of gluons as perturbative QCD predicts.

  2. The amount of dbar type sea quarks is larger than that of ubar type sea quarks. But the amount of proton spin assignable to dbar quarks is smaller!

In TGD framework these findings give very valuable hints concerning the detailed structure of proton and also the proper interpretation of what are called sea quarks.

First of all, the notion of a sea parton is a rather fuzzy statistical notion tailored to the needs of perturbative QCD. Could there be a much more structured description, analogous to that of the atom or the nucleus? In the TGD framework the nuclear string model describes nuclei as collections of nucleons connected by flux tubes having a quark and an antiquark at their ends.

What does one obtain if one applies this picture to the earlier model, in which valence quark space-time sheets are assumed to be connected by color flux tubes having a quark and an antiquark at their ends and forming meson-like states? Consider the following picture.

  1. uud with the standard wave function describes the valence quarks, which are almost point-like entities assignable to partonic 2-surfaces.

  2. There are 3 color bonds in the triangle-like structure formed by the valence quarks. Assign to these
    • a dbar-d spin singlet analogous to the pion with spin 0,
    • a dbar-u spin singlet analogous to the pion with spin 0,
    • a ubar-d vector analogous to the ρ meson with spin 1.
    Identify the quarks and antiquarks of the color bonds with the TGD counterpart of the sea.
  3. The bonds would carry total spin 1. As one forms a spin 1/2 state from the spin 1 bonds and the valence quarks, the valence quarks carry vanishing average spin in the resulting state: this solves the core part of the proton spin puzzle. A given valence quark has a vanishing average spin due to the entanglement with the bonds.

  4. Also the observations can be understood qualitatively.
    • The amount of dbar in the sea is two times larger than the amount of ubar.
    • The average contribution of dbar to the spin vanishes in the spin singlet bonds, and the spin 1 bond does not even contain dbar. Hence the average dbar contribution to the sea quark spin vanishes.
    • The contribution of ubar in the ubar-d spin 1 bond is non-vanishing and experimentally known to be larger than that of the dbar sea quarks.
This model could also allow to understand how the old-fashioned Gell-Mann quark model emerges, with constituent quarks having masses of order m_p/3, about 310 MeV, much larger than the current quark masses of u and d quarks, which are of order 10 MeV.
  1. I have proposed that current quark + color flux tube would correspond to a constituent quark, with the mass of the color flux tube giving the dominating contribution in the case of u and d quarks. If the sea quarks at the ends of the flux tubes are light, as perturbative QCD suggests, the color magnetic energy of the flux tube would give the dominating contribution.

  2. One can indeed understand, using p-adic mass calculations, why the Gell-Mann quark model predicts the masses of baryons so well. What is special in p-adic calculations is that it is mass squared which is additive, being essentially the eigenvalue of the scaling generator L_0 of the super-conformal algebra:

    m_p^2 = ∑_n m_{p,n}^2

    This is due to the fact that energy is replaced by mass squared, which is a Lorentz invariant quantity and a conformal charge. Mass squared contributions with different p-adic primes cannot be added and must be mapped to their real counterparts first. On the real side it is the masses rather than the mass squared values which are additive.

  3. The baryon mass receives contributions from valence quarks and from flux tubes. The flux tubes have the same p-adic prime characterizing the hadron but the quarks have a different p-adic prime, so that the total flux tube contribution m^2(tube)_p, mapped by canonical identification to m_R(tubes) = (m^2_R(tubes))^(1/2), and the analogous valence quark contributions to the mass add up.

    m_B = m_R(tube) + ∑_q m_R(valence,q).

    The map m_p^2 → m_R^2 is given by the canonical identification

    x_p = ∑_n x_n p^n → x_R = ∑_n x_n p^(-n),

    which maps p-adic numbers to reals in a continuous manner.

  4. The valence quark contribution is very small for baryons containing only u and d quarks, but for baryons containing strange quarks it is roughly 100 MeV per strange quark. If the dominating constant contribution from the flux tubes adds to the contribution of the valence quarks, one obtains the Gell-Mann formula.

A detailed estimate for nucleon mass using p-adic mass calculations shows the power of p-adic arithmetics even in the case that one cannot perform a complete calculation.
  1. The flux tube contribution can be assumed to be independent of the flux tube in the first approximation. Its scale is determined by the Mersenne prime M_k = 2^k - 1, k = 107, characterizing hadronic space-time sheets (flux tubes). The electron corresponds to the Mersenne prime M_127 and the mass scales are therefore related by the factor 2^((127-107)/2) = 2^10: scaling the electron mass m_e,127 = .5 MeV gives the mass m_e,107 ≈ .5 GeV, the mass the electron would have if it corresponded to the hadronic p-adic length scale.

    p-Adic mass calculations give for the electron mass the expression

    m_e ≈ [1/(k_e + X)^(1/2)] × 2^(-127/2) × m(CP2).

    k_e = 5 corresponds to the lowest order contribution. X < 1 corresponds to the higher order contributions.

  2. By additivity of mass squared for the flux tubes one has m^2(tubes) = 3m^2(tube,p) and m_R(tubes) = 3^(1/2) m(tube,R): one has the factor 3^(1/2) rather than 3. Irrespective of whether m_R(tubes) can be calculated from p-adic thermodynamics or not, it has the general form m^2(tube,p) = kp in the lowest order - the higher orders are very small and contribute to m^2_R at most 1/p. k is a small integer, so that even if one cannot calculate its precise value, one has only a few integers from which to choose.
    The real mass from flux tubes is given by

    m_R = (3k_p/M_107)^(1/2) × m(CP2) = (3k_p/5)^(1/2) × m(e,107).

    For k_p = 6 (for the electron one has k_e = 5) one has m_R(tubes) = 949 MeV, to be compared with the proton mass m_p = 938 MeV. The prediction is too large by about 1 per cent.


  3. Besides being 1 per cent too large, the mass would leave no room for the valence quark contributions, which are about 1 per cent too (see this). The error would naturally be due to the fact that the formula for the electron mass is approximate, since higher order contributions have been neglected. Taking this into account means replacing k_e^(1/2) = 5^(1/2) with (5+X)^(1/2), X < 1, in the formula for m_R. This implies the replacement m_e,107 → (5/(5+X))^(1/2) m_e,107. The correct mass consistent with the valence quark contribution is obtained for X = .2. The model would therefore also fix the precise value of m(CP2) and the CP2 radius.

These observations give good hopes that this model replacing the quark sea with color bonds solves the proton spin crisis.

What about the masses of Higgs and weak bosons?

p-Adic mass calculations give excellent predictions for the fermion masses, but the situation for the weak boson masses is less clear, although it seems that the elementary fermion contribution to the p-adic mass squared should be the sum of the mass squared for the fermion and antifermion forming the building bricks of the gauge bosons. For W the mass should be smaller, as it indeed is, since the neutrino contribution to mass squared is expected to be smaller. Besides this there can also be a flux tube contribution, and a priori it is not clear which contribution dominates. Assume in the following that the fermion contributions dominate over the flux tube contribution in the mass squared: this is the case if the second order contributions are p-adically O(p^2).

Just for fun one can ask how strong conclusions p-adic arithmetics allows one to draw about the W and Z masses m_W = 80.4 GeV and m_Z = 91.2 GeV. The mass ratio m_W/m_Z allows a group theoretical interpretation. The standard model mass formulas in terms of the Higgs vacuum expectation v = 246.22 GeV read as m_Z = (g^2 + g'^2)^(1/2) v/2 and m_W = g v/2 = cos(θ_W) m_Z, cos(θ_W) = g/(g^2 + g'^2)^(1/2).

  1. A natural guess is that the Higgs expectation v = 246.22 GeV corresponds to a fundamental mass scale. The simplest guess for v would be the analog of the electron mass k_e^(1/2) m_127, k_e = 5, in the p-adic scale M_89 assigned to weak bosons: this would give v = 2^19 × m_e ≈ 262.1 GeV: the error is 6 per cent. For k_e = 4 one would obtain v = 2^19 × (4/5)^(1/2) m_e ≈ 234.5 GeV: the error is now 5 per cent.

    For k_e = 1 the mass scale would correspond to the lower bound m_min = 117.1 GeV, considerably higher than the Z mass. The Higgs mass is consistent with this bound. k_h = 1 is the only possible identification, and the second order contribution to mass squared in m_h^2 ∝ k_h + X_h must explain the discrepancy. This gives X_h = (m_h/m_min)^2 - 1 ≈ .141.

    The Higgs mass can be understood but the gauge boson masses are a real problem. Could the integer characterizing the p-adic prime of W and Z be smaller than k = 89, just as k(π) = 111 = k(p) - 4 differs from k_p?

  2. Could one understand cos(θ_W) = m_W/m_Z ≈ .8923 as a ratio (k_W/k_Z)^(1/2) obtained using first order p-adic mass formulas for m_W and m_Z, characterizing the masses in the lowest order by an integer k? For k_W = 4 and k_Z = 5 one would obtain cos(θ_W) = (k_W/k_Z)^(1/2) = .8944..: the error is .1 per cent. For k_Z = 89 one would however have m_Z = v = m_e,89, which is quite too high. k = 86 would give m_Z = 92.7 GeV: the error is 1.6 per cent. For m_e ∝ (5 + X_e)^(1/2), X_e ≈ .2 deduced from the proton mass, the mass is scaled down by (5/(5 + X_e))^(1/2), giving 90.0 GeV, which is smaller than 91.2 GeV: the mass is now too small. Higher order corrections via X_Z = .05 give the correct mass.

    k = 86 is however not consistent with the octave rule, so that one must have k_Z = k_W = 85 with (k_W, k_Z) = (8, 10). This strongly suggests that the p-adic mass squared is a sum of two identical contributions labelled by k_W = 4 and k_Z = 5: this is what one indeed expects from p-adic thermodynamics and the representation of gauge bosons as fermion-antifermion bound states. Recall that also for hadrons the proton and the baryonic space-time sheet correspond to M_107 and the pion to k(π) = k(p) - 4 = 111.

  3. There can also be corrections characterized by a different p-adic prime: the electromagnetic binding energy between the fermion and antifermion forming the Z boson could be such a correction and would reduce the Z mass and therefore increase the Weinberg angle, since the W boson does not receive this correction. Higher order corrections to m_W and m_Z however replace the expression for the Weinberg angle with cos(θ_W) = ((k_W + X_W)/(k_Z + X_Z))^(1/2) and allow one to obtain the correct Weinberg angle. Note that the canonical identification allows this if the second order correction is of the form r p^2/s, with s a small integer.

To sum up, it is fair to say that p-adic mass calculations now allow one to understand both elementary particle masses and hadron masses. One cannot calculate everything, but p-adic arithmetics with mild empirical constraints fixes the masses with 1 per cent accuracy.

See the article Two anomalies of hadron physics from TGD perspective or the chapter New Physics predicted by TGD: Part I.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Aleph anomaly just refuses to disappear

From FB I learned about evidence for a bump around 28 GeV. The title of the preprint is "Search for resonances in the mass spectrum of muon pairs produced in association with b quark jets in proton-proton collisions at s^(1/2) = 8 and 13 TeV". Thanks to Ulla for the link.

An excess of events above the background near a dimuon mass of 28 GeV is observed in the 8 TeV data, corresponding to local significances of 4.2 and 2.9 standard deviations for the first and second event categories, respectively. In the 13 TeV data the excess is milder. This induced two dejavu experiences.

1. First dejavu

Last year (2018) came a report from Aleph titled "Observation of an excess at 30 GeV in the opposite sign di-muon spectra of Z → bbbar+X events recorded by the ALEPH experiment at LEP". The preprint represents a re-analysis of data from 1991-1992. The energy brings strongly to mind the 28 GeV bump.

TGD - or more precisely p-adic fractality - suggests the existence of p-adically scaled variants of quarks and leptons with masses coming as powers of 2 (or perhaps even of 2^(1/2)). They would be like octaves of a fundamental tone represented by the particle. Neutrino physics is plagued by anomalies, and octaves of the neutrino could resolve these problems.

Could one understand the 30 GeV bump - possibly the same as the 28 GeV bump - in the TGD framework? The b quark has mass 4.12 GeV or 4.65 GeV depending on the scheme used to estimate it. The b quark could correspond to the p-adic length scale L(k) for k = 103, but the identification of the p-adic scale is not quite clear. p-Adically scaling the b-quark mass, taken to be 4.12 GeV, by a factor 4 gives about 16.5 GeV (k = 103-4 = 99), one half of roughly 32 GeV: could this correspond to the proposed 30 GeV resonance or even the 28 GeV resonance? One must remember that these estimates are rough, since already the QCD estimates for the b quark mass vary by about 10 per cent.

The 28 GeV bump could correspond to a p-adically scaled variant of b with k = 99. The b quark would indeed appear in octaves. But how to understand the discrepancy between 28 GeV and 30 GeV: could one imagine that there are actually two mesons involved, analogous to the pion and the rho meson?

2. Second dejavu

Concerning quarks, I remember an old anomaly reported by Aleph at 56 GeV. This anomaly is mentioned in a preprint published last year, and there is a reference to an old paper by the ALEPH Collaboration, D. Buskulic et al., CERN preprint PPE/96-052. What was observed were 4-jet events consisting of dijets with invariant mass around 55 GeV. What makes this interesting is that the mass of the 28 GeV particle candidate would be one half of the mass of a 56 GeV particle, quite near to 55 GeV.

My proposal for the identification of the 55 GeV bump was as a meson formed from scaled variants of b and bbar corresponding to the p-adic prime p ≈ 2^k, k = 96. The above argument suggests k = 99-2 = 97. Note that the production of the 28 GeV bump decaying to a muon pair is associated with the production of a b quark and a second jet. (The scalings are summarized numerically in the sketch below.)
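As a quick look at the numbers, the following sketch (my own check of the scaling rule m(k) = m_b × 2^((103-k)/2), with the b quark assigned to k = 103 and m_b = 4.12 GeV as above) lists the scaled b quark masses for the k values mentioned:

```python
# p-Adically scaled b-quark masses, m(k) = m_b * 2^((103 - k)/2), m_b = 4.12 GeV.
m_b, k_b = 4.12, 103
for k in (99, 98, 97, 96):
    print(f"k = {k}: m = {m_b * 2 ** ((k_b - k) / 2):.1f} GeV")
# -> 16.5, 23.3, 33.0, 46.6 GeV; k = 99 gives the 16.5 GeV value quoted above.
```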

3. What are the resonances and how could they be produced?

The troubling question is why there are two masses, around 28 GeV and 30 GeV. Even worse: for the 30 GeV candidate a dip is reported at 28 GeV! Could the two candidates correspond to π(28) and ρ(30), having slightly different masses due to color-magnetic spin-spin splitting?

The production mechanism should explain why the resonance is associated with a b quark and a jet, and also why two different mass values suggest themselves.

  1. If one has a 56 GeV pseudoscalar resonance consisting mostly of bbbar - call it π(56) - it could couple to Z0 by the standard instanton density coupling, and one could have the decay Z → Z + π(56). The final state virtual Z would produce the b-tag in its decay.

  2. π(56) in turn would decay strongly to π(28) + ρ(30), where ρ(30) has spin 1 and is analogous to the rho meson partner of the ordinary pion. The masses would naturally be different for π and ρ.

It is easy to check that the observed spin-spin splitting is consistent with the simplest model for the splitting, obtained by extrapolating from the ordinary π-ρ system.
  1. At these mass scales the spin-spin splitting, proportional to the color magnetic moments and thus to the inverses of the b quark masses, should be small, and indeed it is.

  2. Consider first the ordinary π-ρ system. The masses including the spin-spin splitting are m(π) = m - Δ/2 and m(ρ) = m + 3Δ/2, where one has m = (3m(π) + m(ρ))/4 and Δ = (m(ρ) - m(π))/2. For the π-ρ system one has r_1 = Δm/m ≈ .5.

    Δm/m is due to the interaction of the color magnetic moments and is proportional to r_2 = α_s^2 m^2(π)/m^2(d). The small masses of the u and d quarks - m(d) ≈ 4.8 MeV (the Wikipedia value; the estimates vary widely) - imply that m(π)/m(d) ≈ 28.2 is rather large. The value of α_s is larger than the α_s = .1 reached at higher energies, which gives r_2 = α_s^2 m^2(π)/m^2(d) > .28. One has r_1/r_2 ≈ .57.

  3. For the π(28)-ρ(30) system the values of the parameters are m ≈ 29 GeV and Δm = 2 GeV, so that r_1 = Δm/m ≈ .07. The mass ratio m(π)/m(b) is roughly 2 for heavy mesons, for which the quark mass dominates in the meson mass. For α_s = .1 the order of magnitude of r_2 = α_s^2 m^2(π(28))/m^2(b) is r_2 ≈ .04 and one has r_1/r_2 = .57, to be compared with r_1/r_2 = .56 for the ordinary π-ρ system, so that the model looks realistic.

    Interestingly, the same value of α_s works in both cases: does this provide support for the TGD view about renormalization group invariance of coupling strengths? This invariance is not global but implies a discrete coupling constant evolution.

See the chapter New Physics predicted by TGD: part I.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Friday, March 15, 2019

Minimal surfaces: comparison of the perspectives of mathematician and physicist

The popular article Math Duo Maps the Infinite Terrain of Minimal Surfaces was an exceptional representative of its species. It did not irritate the reader with nonsense hype but gave a very elegant and thought provoking presentation of very abstract ideas in mathematics.

The article told about the work of the mathematicians Fernando Coda Marques and Andre Neves, based on the thesis of John Pitts about minimal surfaces, which had been forgotten by the mathematics community. Minimal surfaces are also central in TGD - in the TGD Universe space-times are minimal surfaces with lower-dimensional singularities - and this motivated an article presenting minimal surfaces from the points of view of a mathematician and a physicist. The view is highly subjective since the physicist is me. In the article I discuss the basic ideas about minimal surfaces, summarize the basic mathematical results of Marques and Neves, and discuss minimal surfaces from the TGD point of view.

See the article Minimal surfaces: comparison of the perspectives of mathematician and physicist or the chapter The Recent View about Twistorialization in TGD Framework.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, March 11, 2019

Could one distinguish experimentally between standard and TGD views about time?

I received a link to an interesting popular article "Neuroscientists read unconscious brain activity to predict decisions". The article tells about the work of Koenig-Robert and Pearson published in Scientific Reports as an article with the title "Decoding the contents and strength of imagery before volitional engagement".

I also received two other highly interesting articles, which have helped in the construction of a more detailed model for memory recall in the TGD framework. The second link was to a popular article in Science News with the title "Ripples race in the brain as memories are recalled", telling about the findings about memory recall by the neuroscientists Vaz et al, published in Science as an article with the title "Coupled ripple oscillations between the medial temporal lobe and neocortex retrieve human memory".

The third link was to a popular article "The human brain works backwards to retrieve memories". The article tells about the work of Linde-Domingo and Wimber et al published in Nature Communications as article titled " Evidence that neural information flow is reversed between object perception and object reconstruction from memory" .

In the sequel I will consider only the first article and mention the important results of the two latter articles only in passing.

1. The experiment

Consider first the experiment described in "Neuroscientists read unconscious brain activity to predict decisions".

  1. The situation was the following. The subject person looked for at most T = 20 seconds at two different pictures, decided to imagine one of them, and immediately pushed the button. Then she tried to imagine the chosen picture.

    Neural activity was detected in the brain and it was found to emerge t = 11 seconds before the decision. From the pattern of activity it was possible to predict the picture. Also the subjectively experienced intensity of the imagination, reported by the subject person, can be predicted. One could say that the sensory experience was re-created by imagination in the brain of the past.

  2. The imagination involved could also be regarded as an active memory recall. This interpretation suggests that the time t at which the neural activity appears must be within the T = 20 second interval during which the decision was made.

  3. The authors leave open whether their finding excludes free will. The first interpretation is that the choice really occurred at an unconscious level and for some reason the subject person experienced an illusion of choice. A real choice combined with an illusion about a real choice looks like a rather weird idea, and only shifts the problem of free will to a level unconscious to us. If there is no free will, then all experiments involving choice are pseudo experiments: this would throw a large portion of neuroscience into the trash bin.

2. TGD based model for what happens in imagination as active memory recall

This picture does not say much about what really happens in imagination as active memory recall. To develop this model some background ideas about TGD are needed.

  1. I have developed a model for motor action as a time reversal of sensory perception based on ZEO in an earlier article. This also leads to a model for memory recall as sending a signal to the geometric past giving rise to a time reflected signal serving as the memory recall. Episodal memory as re-experience in the geometric past would correspond to the generation of a mental image in the geometric past at the opposite boundary of the causal diamond (CD), which is the basic geometric correlate of a conscious entity in the ZEO based theory of consciousness provided by TGD.

  2. There are several words to which one must give meaning: what do "re-experience in geometric past", "time reflection", "imagination as active memory recall" mean? Who is the intentional agent which imagines? The above experiment inspired an attempt to give a more precise meaning for these words.

    The idea is to combine the model of memory with a decades old model of living matter as a conscious hologram (one more imprecisely defined word!).

    The magnetic body (MB) is the basic notion. MB acts as an intentional agent using the biological body (BB) as a motor instrument and sensory receptor. In the recent case MB imagines and performs an active memory recall by selecting the picture and directing its attention to it (still more words!).

    The dark matter hierarchy as a hierarchy of phases of ordinary matter (also photons), assignable to the MB and labelled by the value of the effective Planck constant h_eff = n×h_0, is a further central element of the general picture. In particular, EEG photons are dark photons with a very large value of Planck constant guaranteeing that their energies are above the thermal threshold. Bio-photons with energies in the visible and UV range would result as dark EEG photons with a very large value of h_eff transform to ordinary photons.

  3. The brain as a hologram is an old idea originally due to Karl Pribram. The formation of a hologram involves two waves with the same frequency: a reference wave and the wave representing the target - typically reflected from the target. The reference wave is typically a simple plane wave with some wavelength and thus frequency and defines a kind of scale parameter. These waves must interfere, so that coherence is required. The interference pattern is stored as a modification of the hologram substrate.

    If one illuminates the resulting hologram with the reference wave, the image of the target is formed. If one illuminates the hologram with the phase conjugate of the reference wave - its time reversal - the phase conjugate of the image is formed. In ZEO time reversal has a precise meaning, as does the time reversal of self and mental image. (A toy simulation of this recording and reading is sketched after this list.)

    This requires coherence in the length scale of the hot and wet brain, and a non-standard value of h_eff makes this possible. The coherence of ordinary photons need not be quantum coherence but can be induced by the quantum coherence of dark photons transforming to ordinary photons. Quite generally, the coherence of living matter would be induced in this manner from quantum coherence.
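The following toy simulation (my own minimal Python sketch, not part of the TGD model; the wavelength and tilt angles are arbitrary) illustrates the standard holography steps referred to above: recording an interference pattern and reading it with the reference wave or with its phase conjugate:

```python
import numpy as np

# 1D toy hologram: record the interference of a plane reference wave and an
# "object" wave, then read the recorded intensity pattern with the reference
# wave (ordinary image) or with its complex conjugate (phase conjugate image).
x = np.linspace(0.0, 100e-6, 4096)             # detector coordinate (m), arbitrary
k = 2 * np.pi / 500e-9                         # wavenumber for 500 nm light

ref = np.exp(1j * k * np.sin(0.02) * x)        # reference plane wave, small tilt
obj = 0.3 * np.exp(1j * k * np.sin(0.10) * x)  # toy object wave, larger tilt

hologram = np.abs(ref + obj) ** 2              # recorded intensity (the "substrate")

readout = hologram * ref                       # contains a term |ref|^2 * obj:
                                               # the reconstructed object wave
conjugate_readout = hologram * np.conj(ref)    # contains a term |ref|^2 * conj(obj):
                                               # the phase conjugate, "time reversed" image
```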

With these ingredients one can build a rather simple model for memory.
  1. Memory and sensory mental images are generated as MB creates a reference wave in the formation of a hologram as an interference pattern of an incoming ordinary light beam and a dark reference beam. This induces the pattern of neural activity. The coherence is not quantum coherence but is inherited from the quantum coherence of the dark photon beam from MB. Also the phase conjugate beam in active memory recall comes from MB. The reported ripples associated with the formation of a sensory percept would correspond to the formation of the conscious hologram.

  2. The phase conjugate wave corresponds to the time reversal of the wave and would be created in ZEO in a "big" state function reduction reversing the arrow of time for the self involved. The phase conjugate of the reference wave, generated by the magnetic body (MB) acting as an intentional agent trying to imagine, would propagate to the geometric past, scatter from the brain substrate acting as a hologram, and generate the memory mental image in the geometric past at the opposite boundary - the "re-experience", which need not be conscious-to-us. The ripples reported to accompany memory recall (see this and this) would correspond to the scattering of the phase conjugate wave from the hologram.

    This phase conjugate mental image need not be conscious-to-us: the assumption has indeed been that time reversed mental images are not conscious to us. The assumption will be kept also now.

    The next "big" quantum jumps would mean the "death" of the memory mental image and rebirth as a mental image in standard time direction. This would correspond to the "time reflection" generating a signal to the geometric future defining in the recent situation declarative, verbal memory of the mental image. This would be the outcome of imagination experienced by the subject person.

    Why these "normal" mental images are not usually genuine sensory mental images at our level of self hierarchy? A good reason for this is that they would interfere with the ordinary sensory perceptions. We can indeed have this kind of mental images during dreaming and hallucinations. During dreaming it is not a threat for survival as it is during hallucinations. I have discussed a detailed model for imagination as almost sensory mental images (see this). They would be created by feedback signals from MB via cortex to a level above sensory organs in the hierarchy so that no actual sensory percepts is obtained. Also imagined motor actions would be similar.

    An essential element of the model is that the sensory input is transformed to dark photon beams propagating along flux tubes parallel to axons and being responsible for the communications. The function of nerve pulses would be the creation of communication channels by connecting the flux tubes associated with axons to longer structures: neurotransmitters and various information molecules would do this connecting. The situation would be very much analogous to that in mobile phone communications. It should be noticed that flux tubes also serve as correlates of directed attention.

    The notion of re-incarnation is certainly the most controversial aspect of the proposed vision. TGD predicts a self hierarchy, and sub-selves are identified as mental images, so that one can check whether the re-incarnation hypothesis makes sense for them. After-images appearing periodically would be examples of this kind of mental images: they would be conscious to us and correspond to the level of the self hierarchy immediately below us. Since they are typically of a different color than the original image, we know that they do not represent a real object. The periods without an after-image would correspond to the phase conjugates of these mental images and would be unconscious to us. Essentially a sequence of re-incarnations of a mental image would be in question.

  3. How can the subject person (identifiable as her MB!) actively choose the target of the memory recall? In the experiment considered, the two pictures were seen by the subject person for a time not longer than T = 20 seconds. Both generate a hologram-like structure in the visual cortex; in a good approximation these are disjoint patterns of neural activity - presumably regions of coherence induced by the quantum coherence of the dark reference beam.

    A conscious choice associated with the memory recall requires that the two areas are labelled by some control parameter which MB can vary. Fixing this parameter directs the attention of MB to either picture. The frequency of the laser beam is the only parameter available. The incoming beam of light corresponds to the energies of visible light, and for the ordinary value of Planck constant one cannot vary the frequency. There is however the EEG frequency, which can be varied, but its ratio to the frequency of visible light is of order 10^-14 for 10 Hz! The energy E = hf of EEG photons is extremely small, and EEG photons should have absolutely no effects on the brain or correlate with the contents of consciousness. We however know that they do!

    In the TGD framework this fact was the original motivation for the hierarchy of Planck constants. The choice of the picture to be imagined/attended to by MB would mean that the value of h_eff associated with it changes. The chosen picture naturally corresponds to a larger value of Planck constant, since the maximal conscious information content of a system increases as h_eff increases. The increase of h_eff requires metabolic energy, as directed attention certainly does.

  4. A more refined view about memory recall involves a hierarchical structure in which the memory recall is built up so that first the "gist" of the pattern is recalled and then come the details. This is the opposite of what happens in sensory perception, in which features are identified first and the holistic view emerges later.

    TGD predicts self hierarchies labelled by the values of h_eff and by p-adic length scales. The higher the level of the self hierarchy, the longer the corresponding length scale. The "gist" corresponds to large values of h_eff and low EEG frequencies, whereas the details correspond to smaller values of h_eff, higher EEG frequencies, and smaller wavelengths for ordinary photons. The construction of the memory mental images would correspond to a cascade of state function reductions proceeding from long to short length scales and beginning from the largest value of h_eff involved. The model for what happens in state function reduction in the TGD framework assumes this cascade (see this and this).

Some remarks are in order.
  1. This mechanism generalizes to the case of motor action and sensory perception as its time reversal. MB as an intentional agent would be sending reference beams and their phase conjugates at various frequencies f and values of h_eff serving as control knobs!

  2. The most general picture is that the "reading" of the sensory percept resp. memory uses the reference beam resp. its phase conjugate, whereas the formation of the sensory percept resp. motor action involves both the object beam and the reference beam resp. their phase conjugates. Therefore memory recall would involve re-experiencing in the reversed time direction, assumed to be unconscious-to-us.

  3. It is essential that the sensory input is transformed at the sensory organs to dark photons propagating to the brain: this also makes the processing of sensory information fast, and sensory mental images can be built as standardized mental images - pattern recognition - by back-and-forth signalling between the brain and the sensory organ, combining artificial sensory input from the brain with genuine sensory input. It is hard to imagine anything simpler!

  4. The neural activity associated with the neural percept preserves the topography of the visual percept, so that the shape of the firing pattern in the cortex is the same as that of the object. This cannot however be used as an objection against holography, since it is the reading of the neural hologram which generates the image of the object. The topography of the hologram has nothing to do with the shape of the object.

3. Could one demonstrate experimentally that the standard view about time is wrong?

The prevailing view in neuroscience and physics identifies experienced time with geometric time despite the fact that these two times have very different properties. In the TGD framework these times are not identified but are closely correlated. The TGD inspired theory of consciousness based on zero energy ontology (ZEO) allows one to understand the relationship between the two times and leads to rather dramatic predictions.

The TGD interpretation says that in the act of free will MB sends a phase conjugate signal to the brain of the geometric past or, stated otherwise, replaces the deterministic time evolution of the brain (and also of its past) with a new one (strictly speaking, replaces their quantum superposition with a new one). This should happen also in the choice of which picture is to be imagined.

Could a modification of the experiment, replacing imagination with an activity not requiring memory recall, allow one to demonstrate that the standard interpretation is wrong?

  1. Consider a thought experiment in which the subject person receives a stimulus and makes a decision to do something - not to imagine, but something else - during some time interval T after it. Suppose that the decision is found to be preceded by neural activity appearing before the stimulus.

    The standard view about time does not allow this, since it would mean that the person had decided about the reaction to the stimulus before it came (precognition would be the only explanation - there is some evidence for this too).

    The TGD view about the relationship between subjective and geometric time allows this, since the decision sends a signal to the brain of the past and there is no reason why that moment in the past could not precede the stimulus.

  2. The modification of the above experiment in this manner could mean the reduction of T= 20 seconds to - say - 9 seconds. If the neural activity appeared 11 seconds before the decision, it would emerge before the person has seen the pictures, since 11 seconds exceeds the 9 second interval from stimulus to decision, and one would have a paradox for the standard view about time. However, if the imagined picture relies on memory, this does not happen.
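
The arithmetic of the modified experiment in a couple of lines (illustrative numbers taken from the text):

    T = 9.0            # seconds from the stimulus (t = 0) to the decision
    lead = 11.0        # the neural precursor appears this long before the decision
    print(T - lead)    # -2.0: the precursor would appear 2 seconds before the stimulus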

See the article Three findings about memory recall and TGD based view about memory retrieval or the chapter Sensory Perception and Motor Action as Time Reversals of Each Other: a Royal Road to the Understanding of Other Minds?

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, March 07, 2019

Hachimoji DNA from TGD perspective

The popular article "Freaky Eight-Letter DNA Could Be the Stuff Aliens Are Made Of" (see this) tells about a very interesting discovery related to astrobiology, where the possible existence of variants of DNA and other bio-molecules is of considerable interest. The article "Hachimoji DNA and RNA: A genetic system with eight building blocks" (see this) published in Science tells about the discovery of a variant of DNA with 8 letters instead of 4, made by Hoshika et al (see this). By using an engineered T7 RNA polymerase this expanded DNA alphabet could be transcribed into a hachimoji variant of RNA. The double strand structure of hachimoji DNA is similar to that of ordinary DNA and it is thermodynamically stable.

No amino-acid counterparts assigned to the hachimoji RNA were engineered: this would require the existence of a translation machinery. The possible existence of additional amino-acids leads to the speculation that alien life forms utilizing this kind of extended code could have evolved. One can also ask whether mere synthetic hachimoji RNA could be enough for synthetic life.

The abstract of the article gives a more technical description of what has been achieved.


" We report DNA- and RNA-like systems built from eight nucleotide "letters" (hence the name "hachimoji") that form four orthogonal pairs. These synthetic systems meet the structural requirements needed to support Darwinian evolution, including a polyelectrolyte backbone, predictable thermodynamic stability, and stereoregular building blocks that fit a Schrödinger aperiodic crystal. Measured thermodynamic parameters predict the stability of hachimoji duplexes, allowing hachimoji DNA to increase the information density of natural terran DNA. Three crystal structures show that the synthetic building blocks do not perturb the aperiodic crystal seen in the DNA double helix. Hachimoji DNA was then transcribed to give hachimojii RNA in the form of a functioning fluorescent hachimoji aptamer. These results expand the scope of molecular structures that might support life, including life throughout the cosmos."

If the additional code letters of DNA (8 code letters instead of 4) really carry information, the number of code words is extended by a factor 2^3=8, giving 2^9=512 code words. What the number of amino-acids would be can only be guessed: the simplest guess is that also their number is scaled up by a factor of 8, but this is only a guess.
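
The codon counting itself is a one-liner (assuming, as above, that all 8 letters carry information and that codons remain 3 letters long):

    # Number of 3-letter codons for the ordinary and the hachimoji alphabet.
    print(4 ** 3, 8 ** 3, 8 ** 3 // 4 ** 3)   # 64 512 8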

In the sequel I consider the hachimoji code from the TGD perspective. The natural guess is that the hachimoji code corresponds to 8 copies of the ordinary genetic code in some sense. TGD predicts two basic realizations of the genetic code corresponding to the dark genetic code and bio-harmony.

  1. In the case of the dark code it is possible to imagine an extension of the code based on the notion of dark nucleus, in which the number of codons is multiplied by 8. In the case of bio-harmony a fusion of 8 copies of bio-harmony allows one to realize the hachimoji code.

  2. I have considered two basic realizations of bio-harmony giving also a realization of the genetic code (see this). The first realization is as a fusion of 3 icosahedral harmonies and a tetrahedral harmony. The second realization is as a fusion of 2 icosahedral harmonies and 1 toric harmony. These constructions do not however allow any elegant geometric interpretation, since two different geometries are involved in both cases.

    During the writing I was forced to reconsider this problem and realized that a fusion of 2 icosahedral harmonies with 20 chords each and 2 dodecahedral harmonies with 12 chords each produces a genetic code with 20+20+12+12=64 codons. Icosahedral and dodecahedral harmonies correspond to dual tessellations of the sphere, so that bio-harmony can be represented as a bundle over the sphere with the two notes represented as points of the fiber. The hachimoji harmony is obtained by replacing the 2-point fiber with an 8×2-point fiber. The presence of the dual tessellations conforms with the fact that Eastern music uses micro-intervals, which rather naturally correspond to the 20-note dodecahedral scale.

  3. The reason for the hachimoji code could be the basic problem of the music scale realized in terms of rational frequency ratios. Already Pythagoras was aware of this problem. The construction of the scale as powers of the quint (3/2-fold scalings of the basic frequency) using octave equivalence produces with 12 iterations 7 octaves, but only approximately: the 12th iterate does not quite correspond to the basic note under octave equivalence. Performing the 12-fold iteration 8 times therefore gives a refined scale in which each note is replaced with 8 almost identical copies, identifiable as the hachimoji scale.
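
The mismatch referred to above is the Pythagorean comma; its size is easy to check (standard music-theory arithmetic, not specific to TGD):

    # Twelve quints (3/2 scalings) versus seven octaves.
    quints = (3 / 2) ** 12     # ~129.75
    octaves = 2 ** 7           # 128
    print(quints / octaves)    # ~1.0136: the 12th iterate overshoots the basic note

Repeating the 12-quint cycle 8 times shifts the notes by successive powers of this comma, which is one way to see how each note acquires 8 nearly coinciding copies.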

For details see the article Hachimoji DNA from TGD perspective or the chapter About the Correspondence of Dark Nuclear Genetic Code and Ordinary Genetic Code of "Genes and Memes".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, March 05, 2019

What went wrong with SUSY?

As we now know, SUSY was not found at the LHC and the basic motivation for SUSY at LHC energies has disappeared. The popular article Where Are All the 'Sparticles' That Could Explain What's Wrong with the Universe? tells about the situation. The title is however strange: there is nothing wrong with the Universe. Theoreticians stubbornly sticking to a wrong theory are the problem.

Could it be that the interpretation of SUSY has been wrong? For instance, the minimal N=1 SUSY typically predicts Majorana neutrinos and non-conservation of fermion number. This does not conform with my own physical intuition. Perhaps one should seriously reconsider the notion of supersymmetry itself and ask what goes wrong with it.

Can TGD framework provide any new insight?

  1. TGD can be seen as a generalization of superstring models, and it emerged years before superstring models came into fashion. In superstring models supersymmetry is extended to super-conformal invariance and could give a badly broken SUSY as a space-time symmetry. SUSY in the standard QFT framework requires massless particles, and massivation requires a generalization of the Higgs mechanism. The proposals are not beautiful - this is the most diplomatic manner to state it.

    In the TGD framework super-conformal symmetries generalize dramatically, since light-like 3-D surfaces - in particular the light-cone boundary and the boundaries of the causal diamond (CD) - have one light-like direction and are metrically 2-D albeit topologically 3-D. One outcome is a modification of the AdS/CFT duality - which turned out to be a disappointment - to a more realistic duality in which the basic objects are 2-D surfaces of the space-time surface, itself regarded as a surface in M4×CP2. The holography in question is very much like a strong form of ordinary holography and is akin to the holography assigned with blackhole horizons.

  2. The generators of the supersymmetries are fermionic oscillator operators, and the Fock states can be regarded as members of SUSY multiplets but with a totally different physical interpretation. At the elementary particle level these many-fermion states are realized at partonic 2-surfaces carrying point-like fermions assignable to lepton and quark like spinors associated with single fermion generations. There is an infinite number of modes and most of them are massive.
    This gives rise to infinite super-conformal multiplets in the TGD sense. Ordinary light elementary particles could correspond to partonic 2-surfaces carrying a fermion number of at most +/-1.

  3. When looked at from the perspective of the 8-D imbedding space M4×CP2, the situation gets really elegant and simple.

    8-D twistorialization requires states which are massless in the 8-D sense, and these can be massive in the 4-D sense. Super-conformal invariance for 8-D masslessness is an infinite-D variant of SUSY: all modes of the fundamental fermions generate supersymmetries. The counterpart of the SUSY algebra is generated by the fermionic oscillator operators for the induced spinor fields. All modes, independently of their 4-D mass, are generators of supersymmetries. The M4 chirality conservation of 4-D SUSY requiring 4-D masslessness is replaced by 8-D chirality conservation, implying a separate conservation of baryon and lepton numbers. Quark-lepton symmetry is possible since color quantum numbers are not spin-like but are realized as color partial waves in the cm degrees of freedom of the particle-like geometric object.

    No breaking of super-conformal symmetry in the sense of ordinary SUSYs is needed. p-Adic thermodynamics causes the massivation of the massless (in the 4-D sense) states of the spectrum via mixing with very heavy excitations having a mass scale determined by the CP2 mass.

    One could say that the basic mistake of the colleagues - who have been receiving prizes for impressively many breakthroughs during the last years - is the failure to realize that 4-D spinors must be replaced with 8-D ones. This however requires an 8-D imbedding space and space-time surfaces, and one ends up with TGD by requiring the standard model symmetries or just the existence of the twistor lift of TGD. All attempts to overcome the problems lead to TGD. Colleagues do not seem to like this at all, so they prefer to continue as hitherto. And certainly this strategy has been an amazing professional success;-).

What about space-time supersymmetry - SUSY - in the TGD framework?
  1. The analog of SUSY would be generated by the massless or light modes of the induced spinor fields. Space-time SUSY would correspond to the lightest constant components of the induced spinor fields, in 1-1 correspondence with the components of H-spinors. The number N associated with SUSY is quite large: it equals the number of components of the H-spinors. The corresponding fermionic oscillator operators generate representations of a Clifford algebra, and SUSY multiplets are indeed such representations.

    If the space-time surface is canonically imbedded Minkowski space M4, no SUSY breaking occurs. This is however an unrealistic situation. For a general preferred extremal the right- and left-handed components of the spinors mix, which in turn causes massivation and breaking of SUSY in the 4-D sense.

    Could the right-handed neutrino be an exception? It does not couple to the electroweak and color gauge potentials. Does this mean that νR and its antiparticle generate an exact N=2 SUSY? No: νR has a small coupling to the CP2 parts of the induced gamma matrices mixing the neutrino chiralities, and this coupling also causes SUSY breaking. This coupling is completely new and not present in standard QFTs, since they do not introduce the induced spinor structure forced by the notion of sub-manifold geometry.

    Even worse, one can argue that the right-handed neutrino is "eaten" as the right- and left-handed massless neutrinos combine to form a massive neutrino, unless one has canonically imbedded M4. Their fate resembles that of the charged Higgs components. One could however still say that one has an analog of broken SUSY generated by the massive lepton and quark modes. But it would be better to talk about 8-D supersymmetry.

  2. The situation is however not quite as simple as this. TGD space-time is many-sheeted, and one has a hierarchy of space-time sheets in various scales labelled by p-adic primes - labelling also particles - and by the value of Planck constant heff= n×h0.

    Furthermore, spinors can be assigned to the 4-D space-time interior, to 2-D string world sheets, to their light-like 1-D boundaries at the 3-D light-like orbits of partonic 2-surfaces, or even to the partonic orbits themselves. 2-D string world sheets are analogous to edges of a 3-D object, and the action receives a singular "stringy" contribution from them because of the edge property. The same applies to the boundaries of string world sheets located at the light-like orbits of partonic 2-surfaces. Think of a cloth with folds that move along it as an analog. The space-time interior is a minimal surface in the 4-D sense except at the 2-D folds, and the string world sheets and their boundaries are also minimal surfaces.

    Therefore one has many kinds of fermions: 4-D space-time fermions; 2-D string world sheet fermions possibly associated with hadrons (their presence might provide new insights into the spin puzzle of the proton); and 1-D boundary fermions, which are point-like particles naturally identifiable as the basic building bricks of ordinary elementary particles. Perhaps even 3-D fermions associated with the light-like partonic orbits can be considered. All of these belong to the spectrum, and the situation is very much like that in condensed matter physics, where people talk fluently about edge states.

  3. In the TGD framework ordinary elementary particles are assigned with the light-like boundaries of string world sheets. The right-handed neutrino and antineutrino generate an N=2 SUSY for massless states assignable to light-like curves at the light-like orbits of partonic 2-surfaces. These states are however massless in the 8-D sense, not in the 4-D sense!
    This means a badly broken SUSY, and it seems that one cannot talk about SUSY at all in the conventional sense.

    One can however consider in TGD an analog of SUSY for which massless νR modes in the 4-D space-time interior - rather than at the orbits of partonic 2-surfaces - generate the supersymmetry. One could say that the many-particle state, rather than the particle, has a spartner. Think of any system - it can contain a large number of ordinary particles forming a single quantum coherent entity to which one can assign a space-time sheet. One can assign to this space-time sheet a right-handed neutrino, an antineutrino, or both. This gives the superpartner of the system. The presence of νR is not seen in interactions in the same manner as in SUSY theories.

This picture is an outcome of work that has lasted for decades, not an ad hoc model. One can say that the classical aspects of TGD (the exact part of quantum theory in the TGD framework) are now well understood.

To sum up, the simplest realizations of SUSY in the TGD sense are the following.

  1. Massless 4-D supersymmetry generated by νR. The other fermions are massive because of their electroweak and color interactions, which νR does not possess; also νR acquires a small mass. These spartners are however not visible in elementary particle physics but belong to condensed matter physics.

  2. A massive neutrino and other fermions but no supersymmetry-generating νR anymore, since it is "eaten". This would be realized as a very badly broken SUSY in the 4-D sense and the spartners would be very massive. At the partonic 2-surfaces this option is forced by the Uncertainty Principle. The notion of 8-D SUSY is a more appropriate manner to talk about the situation.

See the article "SUSY after LHC: the TGD perspective" or the chapter Does the QFT Limit of TGD Have Space-Time Super-Symmetry?

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, March 04, 2019

Books split in pieces and homepage re-organized

I have used more than a month for the updating of the books and the homepage. This is not rocket science but gives a feeling of doing something useful when ideas do not flow.

I decided to divide 7 books into two pieces, since their page count was around a thousand and quite too high for any reader. The number of shortened books is now 24, the magic number of mathematics and mathematical physics. Amusingly, also my name day happens to be February 24! I decided to keep also the full books so that the reader can choose. Besides these there are the 7 longer versions, giving 31 books: 31 is a Mersenne prime and one of the numbers in the Combinatorial Hierarchy. It seems that I cannot write books anymore!

The updating required rewriting the introductions. This led to critical questions and new understanding of old problems. Returning repeatedly to old writings can be painful but is also an extremely powerful manner to generate ideas. Recommended. As a result, I have written some blog posts and articles, and also added contributions to the books, so that even this period was not totally devoid of ideas. As a consequence, I can claim that classical TGD is now very well understood. I also threw out the eternal ugly ducklings. Most importantly, I try to remember the chapters which make me blush and require critical re-reading.

I decided to reorganize the homepage: there was multiple storage of the same data, and the addition of new information could be much simpler and less time-consuming. These changes are not actually visible to a visitor who just searches for an article or a book chapter using a link given in some publication.

It turned out that quite many links to old file names did not work although they should have. I found that a change of the filename helped and decided to make the changes as a mass operation using Python. I knew from experience that Python really deserves its name. Using it for a mass operation on files is suicidal. Files disappear and their contents change: a file can be replaced with its older version.

The reason is probably that Python is a memory thief: it steals memory resources reserved for the addresses of files. The operating system has addresses to several copies of the files, and when this happens an older file effectively replaces the file or the file effectively disappears. This caused a real nightmare. Now I know it for the rest of my life: never ever use Python for a mass operation unless it is done on a separate computer.
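
For what it is worth, a dry-run approach makes this kind of mass operation much less dangerous: first list what would be renamed, inspect the list, and only then rename (preferably on a backup copy). A minimal sketch; the directory name and the old/new filename patterns below are hypothetical placeholders:

    import os
    import shutil

    ROOT = "public_html"              # hypothetical directory containing the homepage files
    OLD, NEW = "oldname", "newname"   # hypothetical substrings to replace in filenames

    # Collect (source, target) pairs without touching anything.
    plan = []
    for dirpath, _dirs, filenames in os.walk(ROOT):
        for name in filenames:
            if OLD in name:
                plan.append((os.path.join(dirpath, name),
                             os.path.join(dirpath, name.replace(OLD, NEW))))

    for src, dst in plan:             # dry run: only print the planned renames
        print(src, "->", dst)

    # After checking the printout (and taking a backup), uncomment to actually rename:
    # for src, dst in plan:
    #     shutil.move(src, dst)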

You can have a look at the re-organized homepage here. There are some links which still fail, but I hope that they begin to function within a few days.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.