https://matpitka.blogspot.com/2016/07/

Sunday, July 31, 2016

Leptonic CKM mixing and CP breaking?

The Cabibbo-Kobayashi-Maskawa (CKM) matrix is a 3×3 unitary matrix describing the mixing of D type quarks in the couplings of W bosons to a pair of U and D type quarks. For 3 quark generations it can involve phase factors implying CP breaking. The origin of the CKM matrix is a mystery in the standard model.

In the TGD framework CKM mixing is induced by the mixing of the topologies of 2-D partonic surfaces characterized by genus g=0,1,2 (the number of handles added to a sphere to obtain the topology of the partonic 2-surface) assignable to quarks and also to leptons (see this and this). The first three genera are special since they always allow a global conformal symmetry, whereas higher genera allow it only for special values of the conformal moduli. This suggests that for higher genera the handles behave like free particles in a many-particle state, whereas for the three lowest genera the analog of a bound state is in question.

The mixing is in general different for different charge states of the quark or lepton, so that for quarks the unitary mixing matrices for U and D type quarks - call them simply U and D - are different. The same applies in the leptonic sector. The CKM mixing matrix is determined by the topological mixing and is of the form CKM=UD for quarks and of a similar form for charged leptons and neutrinos.

The usual time-dependent neutrino mixing would correspond to the topological mixing. The time constancy assumed for the CKM matrix of quarks must be consistent with the time dependence of U and D. Therefore one should have U= U1X(t) and D= D1X(t), where U1 and D1 are time independent unitary matrices.

In the adelic approach to TGD (see this and this), fusing real and various p-adic physics (correlates for cognition), the matrix elements would belong to some algebraic extension of rationals inducing extensions of the various p-adic number fields. The number theoretical universality of the U1 and D1 matrices is a very powerful constraint. U1 and D1 would be expressible in terms of roots of unity and e (e^p is an ordinary p-adic number so that the p-adic extension is finite-dimensional) and would not allow an exponential representation. These matrices would be constant for a given algebraic extension of rationals.

It must be emphasized that the model for quark mixing developed about two decades ago treats quarks as constituent quarks with rather large masses determining the hadron mass (the constituent quark is identified as the valence current quark plus its color magnetic body carrying most of the mass). The number theoretic assumptions about the mixing matrices are not consistent with the recent view: instead of roots of unity, trigonometric functions reducing to rational numbers (Pythagorean triangles) were taken as the number theoretic ideal.

X(t) would be a matrix with real/p-adic valued coefficients, and in the p-adic context it would be an imaginary exponential exp(itH) of a Hermitian generator H with the p-adic norm of t smaller than 1 to guarantee the existence of the p-adic exponential. CKM would be time independent for XU=XD. The TGD view about what happens in state function reduction (see this, this, and this) implies that the time parameter t in the time evolution operator is discretized, and this would allow also X(tn) to belong to the algebraic extension.
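The following is a minimal numerical sketch, not part of the TGD formalism itself: it only illustrates that a factor X(t)=exp(itH) generated by a Hermitian H is unitary, so that a mixing matrix of the form U = U1X(t) stays unitary for a time independent unitary U1. The generator H, the matrix U1, and the discretized times tn are arbitrary illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Arbitrary Hermitian generator H and arbitrary unitary U1 (via QR).
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (A + A.conj().T) / 2
U1, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

for t in (0.1, 0.3, 0.7):          # discretized times t_n (illustrative)
    X = expm(1j * t * H)           # time evolution factor X(t_n)
    U = U1 @ X                     # mixing matrix of the form U1 X(t_n)
    # Deviation from unitarity should be at machine precision.
    print(t, np.max(np.abs(U.conj().T @ U - np.eye(3))))
```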

For quarks XU= XD=Id is consistent with what is known experimentally: of course, a time dependent topological mixing of U or D type quarks would be seen in the behavior of the proton. One also expects that the time dependent mixing is very small for charged leptons, whereas the non-triviality of Xν(t) is suggested by neutrino mixing. Therefore the assumption XL=Xν is not consistent with the experimental facts, and XL(t)=Id seems to be a good approximation so that only Xν(t) would be non-trivial. Could the vanishing em charge of neutrinos and/or the vanishing weak couplings of right-handed neutrinos have something to do with this? If the μ-e anomaly in the decays of the Higgs persists (this), it could be seen as direct evidence for CKM mixing in the leptonic sector.

CP breaking is also possible. As a matter of fact, one day after mentioning CP breaking in the leptonic sector I learned about indications for leptonic CP breaking emerging from the T2K experiment performed in Japan: the rate for muon-to-electron neutrino conversions is found to be higher than that for antineutrinos. Also the NOvA experiment in the USA reports similar results. The statistical significance of the findings is rather low and the findings might suffer the usual fate. The topological breaking of CP symmetry would in turn induce the CP breaking of the CKM matrix in both the leptonic and quark sectors. Amusingly, it had never occurred to me that topological mixing could provide the first principle explanation for CP breaking!

For background and details see the article Some comments about τ-μ anomaly of Higgs decays and anomalies of B meson decays.

For a summary of earlier postings see Latest progress in TGD.

Thursday, July 28, 2016

Dark matter is absorbed by blackholes more slowly than ordinary matter

A few days ago I encountered a link to a highly interesting popular article telling about the claim of astronomers that blackholes do not absorb dark matter as fast as they should. The claim is based on a model for dark matter: if the absorption rate were what one would expect by identifying dark matter as some exotic particle, the rate would be far too fast and the Universe would look very different.

How could this relate to the vision that dark matter is ordinary matter in a large Planck constant phase with heff=n× h= hgr= GMm/v0 generated at quantum criticality? The gravitational Planck constant hgr was originally introduced by Nottale. In this formula M is some mass, say that of a black hole or an astrophysical object, m is a much smaller mass, say that of an elementary particle, and v0 is a velocity parameter, which is assumed to be in constant ratio to the spinning velocity of M in the model for quantum biology explaining biophotons as decay products of dark cyclotron photons.

Could the large value of Planck constant force dark matter to be delocalized in a much longer scale than the blackhole size, so that the absorption of dark matter by the blackhole is not a sensible notion unless dark matter is first transformed to ordinary matter? Could it be that the transformation does not occur at all or occurs very slowly and is therefore the slow bottleneck step in the process leading to absorption into the interior of the blackhole? This could be the case! The dark Compton length would be Λgr= hgr/m= GM/v0 = rS/2v0, and for v0/c <<1 this would give a dark Compton wavelength considerably larger than the radius rS=2GM of the blackhole. Note that the dark Compton length would not depend on m, in accordance with the Equivalence Principle and natural if one accepts gravitational quantum coherence in astrophysical scales. The observation would thus suggest that dark matter around the blackhole is stable against the phase transition to ordinary matter, or that the transition takes place very slowly. This in turn would reflect the Negentropy Maximization Principle favoring the generation of entanglement negentropy assignable to dark matter.
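A rough numerical sketch of the scales involved, reading Λgr = GM/v0 in SI units as λgr = GM/(v0c). The black hole mass and the velocity parameter v0 below are illustrative guesses, not values taken from the text.

```python
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

M = 10 * M_sun       # assumed stellar-mass blackhole
v0 = 1e-3 * c        # assumed velocity parameter, v0/c << 1

r_S = 2 * G * M / c**2            # Schwarzschild radius 2GM
lambda_gr = G * M / (v0 * c)      # dark Compton length GM/v0 in SI form

print(f"r_S       = {r_S:.3e} m")
print(f"lambda_gr = {lambda_gr:.3e} m")
print(f"ratio     = {lambda_gr / r_S:.0f}   (= c/(2*v0))")
```

For v0/c = 10^-3 the dark Compton length comes out 500 times the Schwarzschild radius, illustrating the delocalization in a scale much longer than the blackhole size.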

For details see the chapter Quantum Astrophysics of "Physics in Many-Sheeted Space-time".

For a summary of earlier postings see Latest progress in TGD.

Wednesday, July 27, 2016

The problem of two Hubble constants

The rate of cosmic expansion manifests itself as cosmic redshift and is proportional to the distance r of the object: the expansion velocity satisfies v=Hr. The proportionality coefficient H is known as the Hubble constant and has dimensions of 1/s. A more convenient parameter is the Hubble length defined as lH= c/H, whose nominal value is 14.4 billion light years and corresponds to the limit at which a distant object recedes from the observer with light velocity.

  1. The measurement of the Hubble constant requires a determination of the distance of an astrophysical object. For instance, the distance can be determined using so-called standard candles - type Ia supernovae, which always have the same intrinsic brightness, decreasing like the inverse square of distance (cosmic redshift also reduces the total intensity by shifting the frequencies). This method works for not too large distances (a few hundred million light years, the size scale of the large voids): therefore this method gives the value of the local Hubble constant.

  2. The rate can also be deduced from the cosmic redshift of the CMB radiation. This method gives the Hubble constant in cosmic scales considerably longer than the size of the large voids: one speaks of a global determination of the Hubble constant.

The problem has been that the local and global methods give different values for H. One might hope that the discrepancy would disappear as measurements become more precise. The recent determination of the local value of the Hubble constant however demonstrates that the problem persists. The global value is roughly 9 per cent smaller than the local value. For popular articles about the finding see this and this.
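A back-of-the-envelope sketch of the numbers quoted above, using representative 2016-era values of roughly 73 km/s/Mpc for the local determination and 67 km/s/Mpc for the global (CMB-based) one; the exact published values differ slightly.

```python
c_km_s = 2.998e5          # km/s
Mpc_in_ly = 3.262e6       # light years per megaparsec

H_values = {"local": 73.0, "global": 67.0}   # km/s/Mpc, approximate

for name, H in H_values.items():
    l_H = c_km_s / H * Mpc_in_ly             # Hubble length in light years
    print(f"{name:6s}: l_H = {l_H / 1e9:.1f} billion light years")

diff = (H_values["local"] - H_values["global"]) / H_values["local"]
print(f"relative difference of H: {diff:.1%}")
```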

The explanation of the discrepancy in terms of many-sheeted space-time was one of the first applications of TGD inspired cosmology. The local value of the Hubble constant would correspond to space-time sheets of size at most that of a large void. The global value would correspond to space-time sheets with size scales up to ten billion light years, assignable to the entire observed cosmos. The smaller value of the Hubble constant for space-time sheets of cosmic size would reflect the fact that their metric corresponds to a smaller average density. The mass density would be fractal, in accordance with the fractality of the TGD Universe implied by many-sheetedness.

The reader has perhaps noticed that I have been talking about space-time sheets in the plural. The space-time of TGD is indeed a many-sheeted 4-D surface in 8-D M4×CP2. It corresponds approximately to the GRT space-time in the sense that the gauge potentials and gravitational fields (deviation of the induced metric from the Minkowski metric) for the sheets sum up to the gauge potential and gravitational field of the GRT space-time characterized by the metric and gauge potentials of the standard model. Many-sheetedness leads to predictions allowing one to distinguish between GRT and TGD. For instance, the propagation velocities of particles along different space-time sheets can differ, since the light-velocity along a space-time sheet is typically smaller than the maximal signal velocity in empty Minkowski space M4. Evidence for this effect was observed for the first time for supernova 1987A: neutrinos arrived in two bursts, and also the gamma ray burst arrived at a different time than the neutrinos, as if the propagation had taken place along different space-time sheets (see this). Evidence for this effect has also been observed for neutrinos arriving from the galactic blackhole Sagittarius A. Two pulses were detected and the difference in arrival time was a few hours (see this).

For details see the chapter More about TGD cosmology of "Physics in Many-Sheeted Space-time".

For a summary of earlier postings see Latest progress in TGD.

Langlands program and TGD

Langlands correspondence is for mathematics what unified theories are for physics. The number theoretic vision about TGD has intriguing resemblances to the number theoretic Langlands program. There is also a geometric variant of the Langlands program. I am of course an amateur and do not have a grasp of the mathematical technicalities; I can only try to understand the general ideas and relate them to those behind TGD. Physics as geometry of WCW ("world of classical worlds") and physics as generalized number theory are the two visions about quantum TGD: this division brings to mind the geometric and number theoretic Langlands programs. This motivates a re-consideration of the Langlands program from the TGD point of view. I wrote a chapter about this years ago, but TGD has evolved considerably since then, so it is time for a second attempt to understand what Langlands is about.

By Langlands correspondence the representations of the semi-direct product of G and the Galois group Gal and the representations of G should correspond to each other. This suggests that the representations of G should have a G-spin such that the dimension of this representation is the same as that of the representation of the non-commutative Galois group. This would conform with the vision about physics as generalized number theory. Could this be the really deep physical content of Langlands correspondence?

See the chapter Langlands correspondence and TGD: years later of "Physics as generalized number theory" or the article with the same title.

For a summary of earlier postings see Latest progress in TGD.

Monday, July 25, 2016

Biophotons and evolution of intelligence

It is gradually becoming clear that bio-photons have a role in brain function. An interesting claim is that the biophoton spectrum is shifted towards the infrared as the intelligence of the species develops (see the article published in the Proceedings of the National Academy of Sciences of the USA). The idea is that biophotons are involved with the communications between parts of the brain and that biophotons with lower frequencies are favored: one reason could be metabolic economy, since biophotons have energies mostly in the visible and UV range and in humans the spectrum extends to the near infrared. The observation is that glutamate-induced biophotonic activities and transmission in brain slices show a spectral redshift feature from animals to humans.

Could the TGD based model for biophotons as decay products of dark cyclotron photons help to understand this? In the TGD framework dark photons would be involved with the communications of the biological body with the personal magnetic body (MB). Bio-photons would result from dark cyclotron photons with heff=n× h= hgr= GMm/v0 (see the explanation of the formula above) in an energy conserving transformation to ordinary photons reducing the value of heff to h. Both dark cyclotron photons from the MB to the brain and analogs of Josephson photons from cell membranes to the MB would be involved. When dark photons transform to ordinary photons they can induce molecular transitions. The MB would control biomatter by inducing these molecular transitions. This explains the range of biophoton energies. Also EEG would consist of dark photons in this energy range but with frequencies in the EEG range and wavelengths of astrophysical size (7.8 Hz corresponds to the circumference of the Earth).

Dark cyclotron photons have cyclotron energy heff× eBend/m = (GM/v0)× eBend, independent of the mass of the charged particle, which is essential for the universality of the bio-photon spectrum. The value Bend of the "endogenous" magnetic field introduced by Blackman should vary by say two orders of magnitude to explain the range of biophoton energies. The value of heff should be rather high.
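A small sketch of the mass-independence claim: with heff = hgr = GMm/v0 the dark cyclotron energy heff×eBend/m reduces to GMeBend/v0 and the particle mass m drops out. The values of M and v0 below are illustrative placeholders (they are not tuned to reproduce the biophoton energy range); the point is only that the electron and proton give the same energy.

```python
G = 6.674e-11        # m^3 kg^-1 s^-2
e = 1.602e-19        # C
M = 5.97e24          # kg, of the order of the Earth mass (assumption)
v0 = 1.0e3           # m/s, illustrative velocity parameter (assumption)
B_end = 2e-5         # T, i.e. 0.2 Gauss, Blackman's "endogenous" field

for name, m in (("electron", 9.109e-31), ("proton", 1.673e-27)):
    hbar_gr = G * M * m / v0          # gravitational Planck constant (SI)
    omega_c = e * B_end / m           # cyclotron angular frequency
    E_eV = hbar_gr * omega_c / 1.602e-19
    print(f"{name:8s}: E = {E_eV:.3e} eV")
# Both lines print the same number: E = G*M*e*B_end/v0, independent of m.
```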

The redshift of the biophoton energy spectrum for humans as compared to lower animals could mean that the spectrum of values of Bend extends to lower values. Cyclotron periods would also be longer at the lower end of the spectrum. Could the higher intelligence be achieved by better metabolic energy economy? Or could the presence of flux tubes with a lower value of Bend extend the spectrum of biophoton energies and bring in molecules with lower transition energies (down to near infrared)? It should be possible to identify the molecules in question. They should be involved with the "glutamate-induced biophotonic activities". The communications between brain slices could also be indirect: first a sensory signal is sent to the MB and the response comes as a control signal to another part of the brain.

Bend in Blackman's experiments (I have identified it as the lower end of the spectrum of values of Bend) for vertebrates was 2/5 of the Earth's magnetic field BE. Why 2/5 rather than 1? Could this reflect a gradual reduction of Bend from BE during evolution? Should one repeat the experiments of Blackman and other pioneers for non-vertebrates to find out whether Bend is higher for them?

See the chapter Are dark photons behind biophotons? of "TGD based view about consciousness, living matter, and remote mental interactions".

For a summary of earlier postings see Latest progress in TGD.

Wednesday, July 20, 2016

Pear-shaped Barium nucleus as evidence for large parity breaking effects in nuclear scales?

Pieces of evidence for nuclear physics anomalies continue to accumulate. Now there is a popular article telling about the discovery of large parity breaking in the nuclear physics scale. What has been observed is a pear-shaped 144Ba nucleus, not invariant under spatial reflection. The arXiv article speaks only about an octupole moment of the Barium nucleus that is difficult to explain using existing models. Therefore one must take with some caution the popular article managing to associate the impossibility of time travel with the unexpectedly large octupole moment. As a matter of fact, pear-shapedness has been reported earlier for Radon-220 and Radium-224 nuclei by the ISOLDE collaboration working at CERN (see this and this).

The popular article could have been formulated without any reference to time travel: the finding could be spectacular even without mentioning time travel. There are three basic discrete symmetries, C, P, and T, and their combinations. CPT is believed to be unbroken, but C, P, CP and T are known to be broken in particle physics. In hadron and nuclear physics scales the breaking of parity symmetry P should be very small, since it is the weak bosons that break it and they define an interaction of very short range: this breaking has been observed.

The possible big news is the following: the pear-shaped state of a heavy nucleus suggests that the breaking of P in nuclear physics is (much?) stronger than expected. Without parity breaking one would expect an ellipsoid with vanishing octupole moment but with non-vanishing quadrupole moment. This suggests parity breaking in an unexpectedly long length scale. This is not possible in the standard model, where parity breaking is large only in the weak scale, which is roughly 1/1000 of the nuclear scale, and the fourth power of this factor suppresses weak parity breaking effects in the nuclear scale.

Does this finding force us to forget the plans for next summer's time travel? If parity breaking is large, one expects from the conservation of CPT also a large compensating breaking of CT. This might relate to the matter-antimatter asymmetry of the observed Universe, and I cannot relate it to time travel, since the very idea of time travel in its standard form does not make much sense to me.

In the TGD framework one can imagine two explanations involving large parity breaking in unexpectedly long scales. In fact, in living matter chiral selection represents a mysteriously large parity breaking effect and the proposed mechanisms could be behind it.

  1. In terms of p-adically scaled down variants of weak bosons having much smaller masses and thus a longer Compton length - of the order of the nuclear size scale - than the ordinary weak bosons have. After this phase transition the weak interaction in the nuclear scale would not be weak anymore.

  2. In terms of a dark state of the nucleus involving magnetic flux tubes with large hbar carrying ordinary weak bosons but with a scaled up Compton length (proportional to heff/h=n) of the order of the nuclear size. Also this phase transition would make weak interactions in the nuclear scale much stronger.

There is a connection with the TGD based explanation of the X boson anomaly. The model for the recently reported X boson involves both options, but option 1) is perhaps more elegant and suggests that weak bosons have scaled down variants even in hadronic scales: the prediction is unexpectedly large parity breaking. This is amusing: large parity breaking in nuclear scales was three decades ago one of the big problems of TGD, and now it might have been verified!
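An order-of-magnitude sketch for option 1 above: the Compton length of the ordinary W boson compared with a variant whose mass is scaled down by the p-adic factor 512 (the factor is used here only as an illustrative choice).

```python
hbar_c = 197.327     # MeV * fm
m_W = 80379.0        # MeV, ordinary W boson mass

for label, m in (("ordinary W", m_W), ("scaled-down W, m/512", m_W / 512)):
    print(f"{label:22s}: Compton length = {hbar_c / m:.3f} fm")
# The scaled-down variant has a Compton length of roughly 1.3 fm,
# i.e. of the order of the nuclear size, as option 1 requires.
```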

See the chapter Nuclear string hypothesis of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy".

For a summary of earlier postings see Latest progress in TGD.

Monday, July 18, 2016

CMS provides evidence for two new spin 2 mesons of M89 hadron physics

I summarized a few days ago the recent evidence for M89 hadron physics (see this). Today Lubos told about very interesting new bumps reported by CMS in the ZZ channel. There is 3-4 sigma evidence in favor of a 650 GeV boson. Lubos suggests an interpretation as the bulk graviton of the Randall-Sundrum model. Lubos also mentions evidence for a gamma-gamma resonance with mass 975 GeV.

M89 hadron physics explains the masses of a variety of bumps observed hitherto. The first guess is therefore that mesons of M89 hadron physics are in question. By performing the now boringly familiar scaling down of the masses by a factor 1/512 one obtains the masses of the corresponding mesons of ordinary hadron physics: 1270 MeV and 1904 MeV corresponding to 650 GeV and 975 GeV. Do ordinary mesons with these masses exist?

To see that this is the case, one can go to the table of exotic mesons. There indeed is the exotic graviton like meson f2++(1270). Complete success! There is also the exotic meson f2++(1910): the mass differs from the predicted 1904 MeV by .15 per cent. Graviton like states understandable as tetraquark states, not allowed by the original quark model, would be in question. The interested reader can scale up the masses of other exotic mesons identifiable as candidates for tetraquarks to produce predictions for new bumps to be detected at LHC.
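A quick check of the 1/512 scaling used above; the bump masses are the values quoted in this post.

```python
bumps_GeV = {"ZZ bump": 650.0, "gamma-gamma bump": 975.0}

for name, m in bumps_GeV.items():
    print(f"{name}: {m:.0f} GeV / 512 = {1000 * m / 512:.0f} MeV")
# -> 650 GeV / 512 = 1270 MeV and 975 GeV / 512 = 1904 MeV,
#    close to the masses of f2(1270) and f2(1910).
```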

Both states have spin 2, as do the Randall-Sundrum bulk gravitons. What distinguishes the explanations is that TGD predicts the masses of these states with excellent accuracy and predicts a lot more: just take the table of mesons, multiply by 512, and you can tell your grandchildren that you predicted the entire spectroscopy correctly!

In the TGD framework these states are indeed possible. All elementary particles and also meson like states correspond to pairs of wormhole contacts. There is a closed monopole flux tube with the shape of a highly flattened square, with long sides of the order of the Compton length in question and short sides of the order of the CP2 size. The wormhole throats of both wormhole contacts carry a quark and an antiquark, and one can see the structure either as a pair of gauge boson like states associated with the contacts or as a pair of mesonlike states at the two space-time sheets involved.

For background see the article Indications for the new physics predicted by TGD and the chapter New Particle Physics Predicted by TGD: Part I of "p-Adic physics".

For a summary of earlier postings see Latest progress in TGD.

Thursday, July 14, 2016

What if 750 GeV bump disappears?

I have been working for years with the M89 hadron physics hypothesis, inspired originally by the p-adic length scale hypothesis around 1995 and also by strange cosmic ray events (see this). Later I realized that the strange and unexpected findings about the properties of the quark gluon plasma could perhaps be understood in terms of M89 hadron physics. This inspired me to consider also the possibility that the candidates for M89 mesons are produced as dark particles having a Compton length of the same order of magnitude as the proton Compton length.

If dark variants of particles are produced only at quantum criticality, it might happen that the production of M89 mesons occurs appreciably only around the critical collision energy for the proton beams at LHC, and the bumps could disappear at higher LHC energies. Unfortunately, quantum criticality does not belong to the vocabulary of particle physicists, so I must be ready to tolerate merciless ridicule also in the future! This seems to be the universal fate of all who see farther than others.

I collect here what looks like the quintessence of the comments about M89 hadron physics. I have not edited the old comments, so I cannot exclude the possibility of some small internal inconsistencies.

Some background

Large Hadron Collider May Have Produced New Matter is the title of a popular article explaining briefly the surprising findings of the LHC made for the first time in September 2010. A fascinating possibility is that these events could be seen as a direct signature of a brand new hadron physics. I use the attribute M89 for this new hadron physics to distinguish it from ordinary hadron physics assigned to the Mersenne prime M107 = 2^107 - 1.

A quark gluon plasma is expected to be generated in high energy heavy ion collisions if QCD is the theory of strong interactions. This would mean that quarks and gluons are de-confined and form a gas of free partons. Something different was however observed already at RHIC: the surprise was the presence of highly correlated pairs of charged particles. The members of the pairs tended to move in parallel: either in the same or in opposite directions.

This forced giving up the description in terms of a quark gluon plasma and introducing what was called a color glass condensate: a liquid with strong correlations between the velocities of nearby particles, rather than a gas like state in which these correlations are absent. One can imagine that a kind of thin wall of gluons is generated as the highly Lorentz contracted nuclei collide. The liquid like character would explain why the pairs tend to move in a parallel manner. Why they can also move in an antiparallel manner is not obvious to me, although I have considered a TGD based view about the color glass condensate inspired by the fact that the field equations for preferred extremals are hydrodynamical; it might be possible to model this phase of the collision using a scaled version of critical cosmology, which is unique apart from the scaling of the parameter characterizing the duration of the critical period. Later LHC found a similar behavior in heavy ion collisions. The theoretical understanding of the phenomenon is however far from complete.

The real surprise was the observation of similar events in proton proton collisions at LHC, for the first time already in 2010. Lubos Motl wrote a nice posting about this observation. Also I wrote a short comment about the finding. Now the findings have been published: the preprint can be found in arXiv. Below is the abstract of the preprint.

Results on two-particle angular correlations for charged particles emitted in pPb collisions at a nucleon-nucleon center-of-mass energy of 5.02 TeV are presented. The analysis uses two million collisions collected with the CMS detector at the LHC. The correlations are studied over a broad range of pseudorapidity η and full azimuth φ, as a function of charged particle multiplicity and particle transverse momentum, pT. In high-multiplicity events, a long-range (2 < |Δη| < 4), near-side (Δφ ≈ 0) structure emerges in the two-particle Δη-Δφ correlation functions. This is the first observation of such correlations in proton-nucleus collisions, resembling the ridge-like correlations seen in high-multiplicity pp collisions at s^1/2 = 7 TeV and in AA collisions over a broad range of center-of-mass energies. The correlation strength exhibits a pronounced maximum in the range of pT = 1-1.5 GeV and an approximately linear increase with charged particle multiplicity for high-multiplicity events. These observations are qualitatively similar to those in pp collisions when selecting the same observed particle multiplicity, while the overall strength of the correlations is significantly larger in pPb collisions.

A second highly attractive explanation discussed by Lubos Motl is in terms of the production of string like objects. In this case the momenta of the decay products tend to be parallel to the strings, since the constituents giving rise to the ultimate decay products are confined inside a 1-dimensional string like object. In this case it is easy to understand the presence of both parallel and antiparallel pairs. If the string is very heavy, a large number of particles would move in a collinear manner in opposite directions. The color glass condensate would explain this in terms of hydrodynamical flow.

In the TGD framework these string like objects would correspond to color magnetic flux tubes. These flux tubes, carrying a quark and an antiquark at their ends, should however manifest themselves only in low energy hadron physics serving as a model for hadrons, not at ultrahigh collision energies for protons. Could this mean that these flux tubes correspond to hadrons of M89 hadron physics? M89 hadron physics would be low energy hadron physics, since the scaled counterpart of the QCD Λ around 200 MeV is about 100 GeV and the scaled counterpart of the proton mass is around .5 TeV (the scaling factor 512 is the ratio of the square roots of M107 = 2^107 - 1 and M89 = 2^89 - 1). What would happen in the collision would be the formation of a p-adically hot spot at p-adic temperature T=1 for M89.
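The scaling factor 512 quoted above can be checked directly as the ratio of the p-adic length scales, i.e. of the square roots of the Mersenne primes:

```python
from math import sqrt

M89 = 2**89 - 1
M107 = 2**107 - 1
print(sqrt(M107 / M89))   # ~512.0, i.e. 2**9 up to a negligible correction
```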

For instance, the resulting M89 pion would have a mass around 67.5 GeV if a naive scaling of the ordinary pion mass holds true. The p-adic length scale hypothesis allows a power of 2^(1/2) as a multiplicative factor, and one would obtain something like 135 GeV for the factor 2: the Fermi telescope has provided evidence for this kind of particle, although it might be that a systematic error is involved (see the nice posting of Resonaance). The signal has also been observed by the Fermi telescope in the Earth limb data, where there should be none if dark matter in the galactic center is the source of the events. I have proposed that M89 hadrons - in particular M89 pions - are also produced in the collisions of ultrahigh energy cosmic rays with the nuclei of the atmosphere: maybe this could explain also the Earth limb data. Recall that my first erratic interpretation of the 125 GeV Higgs like state was as the M89 pion, and only later did the interpretation of the Fermi events in terms of the M89 pion emerge.

Could M89 hadrons give rise to the events?

One can consider a more concrete model for the situation.

  1. The first picture is that M89 color magnetic flux tubes are created between the colliding protons and have a length and thickness which is 512 times shorter than that of ordinary hadronic color flux tubes and therefore also a 512 times higher energy. The energy of the colliding protons would be partially transformed to that of M89 mesons. This process should occur above the critical collision energy Ecr(p)=512 mp∼ .5 TeV and perhaps already above Ecr(p)= m(pi89)=67.5 GeV. One can worry about the small geometric size of M89 mesons: is it really possible to transfer the energy of protons consisting of quarks to a scale shorter by a factor 1/512, or does this process occur at the quark level, and doesn't one encounter the same problem there? This problem leads to the second picture.

  2. M89 mesons could be dark so that their size is the same as the size of protons: this could make possible a collective transfer of collision energy in the scale of the entire proton to that of dark M89 mesons transforming later to much smaller ordinary M89 mesons. If this is the case, the value heff/h=512 is favourable.

  3. The proposal (see this) is that dark phases of matter are generated at quantum criticality: does quantum criticality mean now that dark M89 mesons are created only near the threshold for the process but not at higher collision energies? If so, the production of M89 mesons would be observed only near energies Ecr assignable to the proton-proton cm and quark-quark cm. For constituent quarks identifiable as current quarks plus their magnetic bodies, the masses would be roughly mp/3 and one would have Ecr(p)=3 Ecr(q) (note that the masses of u and d current quarks are in the scale of 5-20 MeV so that the color magnetic energy dominates the baryon mass).

  4. This brings to mind the leptohadron model (see this) explaining the reported production of mesonlike states in heavy ion collisions. These states had a mass slightly larger than twice the mass of the electron and they decayed to electron-positron pairs. The production was observed only in the vicinity of the Coulomb wall of order MeV, the mass of the electro-pion. The explanation is in terms of color excited electrons forming a pion like bound state. If color excited leptons are light, the decay widths of weak bosons are predicted to be too large. If the produced states are dark, one circumvents this problem. Quantum criticality corresponds to the Coulomb wall and explains why the production occurs around it.

    In the recent case quantum criticality could mean the threshold for the production of M89 mesons. The bad news is that quantum criticality could mean that M89 mesons are not produced at higher LHC energies, so that the observed bumps assignable to M89 would suffer the usual fate of the bump. Since quantum criticality does not belong to the conceptual repertoire of particle physicists, one cannot expect that the notion of M89 hadrons will be accepted easily by the community.

Further indications for M89 hadron physics

During the last years several indications for the new physics suggested by TGD have emerged. Recently the first LHC Run 2 results were announced and there was a live webcast (see this).

  1. The great news was the evidence for a two photon bump at 750 GeV, about which there had been rumors. Lubos told earlier about indications for a diphoton bump around 700 GeV. If the scaling factor is the naive 512, so that the M89 pion would have a mass of about 70 GeV, there are several meson candidates. Inspection of the experimental meson spectrum (see this) shows that there are quite many resonances with the desired quantum numbers. The scaled up variants of the neutral scalar mesons η(1405) and η(1475) consisting of a quark pair would have masses 719.4 GeV and 755.2 GeV and could explain both the 700 GeV and the 750 GeV bump. There are also neutral exotic mesons which cannot be quark pairs but pairs of quark pairs (see this): f0(400), f0(980), f2(1270), f0(1370), f0(1500), f2(1430), f2(1565), f2(1640), f?(1710) (the subscript tells the total spin and the number inside brackets gives the mass in MeV). They would have naively scaled up masses 204.8, 501.8, 650.2, 701.4, 768.0, 732.2, 801.3, 840.0, 875.5 GeV. Thus an f0 meson consisting of two quark pairs would also be a marginal candidate. The charged exotic meson a0(1450) scales up to a 742.4 GeV state.

  2. There is a further mystery involved. Matt Strassler (see this) emphasizes the mysterious fact that the possible particle behind the bump does not seem to decay to jets: only the 2-photon state is observed. The situation might of course change when the data are analyzed. Jester (see this) in fact reports that 1 sigma evidence for Zγ decays has been observed around 730 GeV. The best fit to the bump has a rather large width, which means that there must be many other decay channels than the digamma channel. If they are strong, as in the TGD model, one can argue that they should have been observed.

    It is as if the particle did not have any direct decay modes to quarks, gluons, and other elementary particles. If the particle consists of quarks of M89 hadron physics, it could decay to mesons of M89 hadron physics, but we cannot directly observe them. Is this enough to explain the absence of ordinary hadron jets: are M89 jets somehow smoothed out as they decay to ordinary hadrons? Or is something more required? Could they decay to M89 hadrons leaking out from the reaction volume before a transition to ordinary hadrons?

    Or could a more mundane explanation work? Could the 750 GeV states be dark M89 eta mesons decaying to ordinary particles only via digamma annihilation? For the ordinary pion the decays to gamma pairs dominate over the decays to electron pairs. Decays of ordinary pions to lepton or quark pairs must occur either by coupling to the axial weak current or via the electromagnetic instanton term coupling the pseudo-scalar state to a two photon state. The axial current channel is extremely slow due to the large mass of the ordinary weak bosons, but I have proposed that variants of weak bosons with p-adically scaled down masses - recently called X bosons (see this) - are involved with these decays and perhaps also with the decays of the ordinary pion to lepton pairs. A pseudoscalar can also decay to a virtual gamma pair decaying to a fermion pair, and for this the rate is much lower than for the decay to a gamma pair. This would be the case also for M89 mesons if the decays to lepton or quark pairs occur via these channels. This might be enough to explain why the decay products are mostly gamma pairs.


  3. The above arguments suggest the production of dark M89 hadrons with heff/h=512 at quantum criticality. The TGD inspired idea is that M89 hadrons are produced at RHIC in heavy ion collisions and in proton-heavy ion collisions at LHC as dark variants with a large value of heff= n× h and a scaled up Compton length of the order of hadron size or even nuclear size. This conforms with the finding that the decay of string like objects, identifiable as M89 hadrons in the TGD framework, explains the unexpected properties of what was expected to be a simple quark gluon plasma analogous to blackbody radiation.

    Quantum criticality (see this) suggests that the production of dark M89 mesons (responsible for quantal long range correlations) is significant only near the threshold for their production (the energy transfer would take place in the scale of the proton to a dark M89 meson with the size of the proton). Note that in TGD inspired biology dark EEG photons would have energies in the bio-photon energy range (visible and UV) and would be exactly analogous to dark M89 hadrons. The criticality could correspond to the phase transition from the confined to the de-confined phase (at criticality confinement with a much larger mass but with a scaled up Compton wavelength!).

    The bad news is that the rate for the production of M89 mesons with the standard value of Planck constant at higher LHC energies could be undetectably small. If this is the case, there is no other way than to tolerate the ridicule and patiently wait until quantum criticality finds its place in the conceptual repertoire of particle physicists. New results about the 750 GeV bump will be released at the beginning of August and there are "reliable" rumors that the bump is disappearing. The group led by my Finnish colleague Risto Orava (we started as enthusiastic physics students in the same year and were coffee table friends) is scanning old LHC data for possible evidence for the 750 GeV state. If the bump is there but disappears at higher energies, it would provide support for quantum criticality.

  4. Lubos mentions in his posting several excesses, which could be assigned to the above mentioned states. The bump at 750 GeV could correspond to a scaled up copy of η(1475) or - less probably - f0(1500). Also the bump structure around 700 GeV, for which there are indications (see this), could be explained as a scaled up copy of η(1405) or f0(1370) with mass around 685 GeV. Lubos mentions also a 662 GeV bump (see this). If it turns out that there are several resonances in the 700 GeV region (and also elsewhere), then the only reasonable explanation relies on hadron like states, since one cannot expect a large number of Higgs like elementary particles. One can of course ask why the exotic states should be seen first.

  5. Remarkably, for the somewhat ad hoc scaling factor 2× 512∼ 10^3 one does not have any candidates, so that the M89 neutral pion should have the naively predicted mass around 67.5 GeV. The old Aleph anomaly had mass 55 GeV. This anomaly did not survive. I found from my old writings that Delphi and L3 have also observed a 4-jet anomaly with dijet invariant mass about 68 GeV: the M89 pion? There is indeed an article about a search for charged Higgs bosons in L3 (see this) telling about an excess in csbarτ-νbarτ production identified in terms of H+H- annihilation suggesting a charged Higgs mass of 68 GeV. The TGD based interpretation would be in terms of the annihilation of charged M89 pions.

    The gammas in the 130-140 GeV range detected by the Fermi telescope (see this) were the motivation for assuming that the M89 pion has a mass twice the naively scaled up mass. The digammas could have been produced in the annihilation of a state with mass 260 GeV. The particle would be the counterpart of the ordinary η meson η(548) with scaled up mass 274 GeV, thus decaying to two gammas with energies of 137 GeV. An alternative identification of the galactic gamma rays would be in terms of gamma ray pairs resulting from the annihilation of two dark matter particles nearly at rest. It has been found that this interpretation cannot be correct (see this).

    The scaled up eta prime should also be there. An excess in the production of two jets above 500 GeV dijet mass has also been reported (see this) and could relate to the decays of η'(958) with a scaled up mass of 479 GeV! A digamma bump should also be detected.

  6. What about the M89 kaon? It would have a scaled up mass of 250 GeV and could also decay to a digamma. There are indications for a Higgs like state with mass of 250 GeV from ATLAS (see this)! It would decay to 125 GeV photons - the energy happens to be equal to the Higgs mass. There are thus indications for the pion, the kaon, all three scaled up η mesons, and η', with the predicted masses! The low lying M89 meson spectroscopy could already have been seen!

  7. Lubos mentions (see this) also indications for a 285 GeV bump decaying to a gamma pair. The mass of the eta meson of ordinary hadron physics is .547 GeV and the scaling of the eta mass by the factor 512 gives 280.5 GeV: the error is less than 2 per cent.

  8. Lubos tells (see this) about a 3 sigma bump at 1.650 TeV assigned to a Kaluza-Klein graviton in the search for Higgs pairs hh decaying to bbbar + bbbar. Kaluza-Klein gravitons are rather exotic creatures, and in the absence of any other support for the superstring model they are not the first candidate coming into my mind. I do not know how strong the evidence for spin 2 is, but I dare to consider the possibility of spin 1 and ask whether M89 hadron physics could allow an identification of this bump.

    1. Very naively, the scaled up J/Psi of the ordinary M107 hadron physics, having spin J=1 and mass equal to 3.1 GeV, would have a 512 times higher mass of 1.585 TeV: the error is about 4 per cent. The effective action would be based on a gradient coupling similar in form to the Zhh coupling. The decays of the scaled up J/Psi could take place via hh → bbbar+bbbar also now.

    2. This scaling might be too naive: the quarks of M89 hadron physics might be the same as those of ordinary hadron physics, so that only the color magnetic energy would be scaled up by the factor 512. The c quark mass is equal to 1.29 GeV, so that the magnetic energy of the ordinary J/Psi would be equal to .52 GeV. If so, the M89 version of J/Psi would have a mass of only 269 GeV. Lubos tells also about evidence for a 2 sigma bump at 280 GeV identified as a CP odd Higgs - this identification of course reflects the dream of Lubos about standard SUSY at LHC energies. However, the scaling of the η meson mass 547.8 MeV by 512 gives 280.4 GeV, so that the interpretation as the η meson proposed already earlier is convincing. The naive scaling might be the correct thing to do also for mesons containing heavier quarks.

  9. Lubos (see this) also tells about an excess (I am grateful to Lubos for keeping book of the bumps: this helps enormously), which could have an interpretation as the lightest M89 vector meson - ρ89 or ω89. The mass is predicted correctly with 5 per cent accuracy by the familiar p-adic scaling argument: multiply the mass of the ordinary meson by 512.

    This 375 GeV excess might indeed represent the lightest vector meson of M89 hadron physics. The ρ and ω of standard hadron physics have mass 775 MeV and the scaled up mass is about 397 GeV, which is about 5 per cent heavier than the mass of the Zγ excess.

    The decay ρ→ Z+γ is describable at the quark level via a quark exchange diagram involving the emission of Z and γ. The effective action would be proportional to Tr(ρ*γ*Z), where the product and trace are for antisymmetric field tensors. This kind of effective action should describe also the decay to a gamma pair. By angular momentum conservation the photons of the gamma pairs should be in a relative L=1 state. Since Z is relativistic, L=1 is expected to be favored also for the Z+γ final state. A professional could immediately tell whether this is the correct view. A similar argument applies to the decay of ω, which is an isospin singlet. For the charged ρ also decays to Wγ and WZ are possible. Note that the next lightest vector meson would be K* with mass 892 MeV. K*89 should have a mass of 457 GeV.

  10. Lubos (see this) also reports that ATLAS sees a charged boson excess manifesting via decay to tb in the range 200-600 GeV. Here Lubos takes the artistic freedom to talk about a charged Higgs boson excess, since Lubos still believes in standard SUSY predicting several copies of Higgs doublets. TGD does not allow them. In the TGD framework the excess could be due to the presence of charged M89 mesons: pion, kaon, ρ, ω.

  11. Smoking gun evidence would be the detection of the production of pairs of M89 nucleons with masses predicted by naive scaling to be around 470 GeV. This would give rise to dijets above 940 GeV cm energy with jets having the total quantum numbers of ordinary nucleons. Each M89 nucleon consisting of 3 quarks of M89 hadron physics could also transform to ordinary quarks producing 3 ordinary hadron jets. A look-up table of the naive 512-scalings quoted in this list is sketched below.
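The look-up table referred to in item 11 is sketched below. The ordinary meson and nucleon masses are approximate PDG values and the scaling is simply multiplication by 512; some of the numbers quoted in the text above differ from this naive scaling by a few per cent.

```python
ordinary_MeV = {
    "pi0": 135.0, "K0": 497.6, "eta(548)": 547.8, "rho/omega": 775.0,
    "K*(892)": 892.0, "eta'(958)": 957.8, "J/psi": 3096.9, "proton": 938.3,
}

for name, m in ordinary_MeV.items():
    print(f"{name:>10s}: {m:7.1f} MeV -> M89 counterpart ~ {0.512 * m:7.1f} GeV")
```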

For background see the article Indications for the new physics predicted by TGD and the chapter New Particle Physics Predicted by TGD: Part I of "p-Adic physics".

For a summary of earlier postings see Latest progress in TGD.

Tuesday, July 12, 2016

Lightnings, dark matter, and lepto-pion hypothesis again

Lightnings have been found to involve phenomena difficult to understand in the framework of standard physics. Very high energy photons, even gamma rays, and electrons and positrons with energies in the gamma ray energy range have been observed.

I learned recently about an even more mysterious looking discovery (see this). Physicist Joseph Dwyer from the University of New Hampshire and lightning scientists from the University of California at Santa Cruz and Florida Tech describe this discovery in a paper to be published in the Journal of Plasma Physics. In August 2009, Dwyer and colleagues were aboard a National Center for Atmospheric Research Gulfstream V when it inadvertently flew into an extremely violent thunderstorm - and, it turned out, through a large cloud of positrons, the antimatter opposite of electrons, that should not have been there. One would have expected the positrons to have been produced in pair creation by highly energetic gamma rays with energies above about 1 MeV, but no gamma rays were detected.

This looks rather mysterious from standard physics point of view. There are also earlier strange discoveries related to lightnings.

  1. Lightning strikes release powerful X-ray bursts (see "Lightning strikes release powerful X-ray bursts").

  2. Also high energy gamma rays and electrons accompany lightnings (see "Earth creates powerful gamma-ray flashes"). The problem is that electrons should lose their energy while traversing the atmosphere, so that energies even in the X ray range would be impossible.

  3. The third strange discovery was made with the Fermi telescope (see "Antimatter from lightning flashes the Fermi space telescope"): gamma rays with energies of .511 MeV (the electron mass) accompany lightnings, as if something with a mass of 2 electron masses were decaying to gamma pairs.

Could TGD explain these findings?
  1. A possible explanation for the finding of the Fermi telescope is that in the strong magnetic fields of colliding very high energy electrons, assignable to the dark magnetic flux tubes of the Earth, particles that I call electropions, suggested by TGD, are created (see this). Also evidence for mu-pions and tau-pions exists. They would have a mass rather precisely 2 times the mass of the electron and would be bound states of a color excited electron and positron. Evidence for this kind of states was found already in the seventies in heavy ion collisions around the Coulomb wall, producing electron positron pairs with total energy 2 times the electron mass, but since they do not fit at all into the standard physics picture (too large decay widths for weak bosons would be predicted), they have been swept under the rug, so to say. The paradox is solved if these particles are dark in the TGD sense.

  2. If the annihilations of electropions give rise to dark electron-positron pairs and dark gamma rays, which then transform to ordinary particles, one could understand the absence of gamma rays in the situation described by Dwyer et al in terms of a too slow transformation to ordinary particles. For instance, the strong electric fields created by a positively charged region of a cloud could accelerate electrons towards this region both from below and from above, and lepto-pions would be generated in the strong magnetic fields: a strong electromagnetic instanton density E•B generates a lepto-pion coherent state.

  3. But how is it possible to observe gamma rays and ultrahigh energy electrons at the surface of the Earth? The problem is that the atmosphere is not empty and dissipation would restrict the energies to much lower values than gamma ray energies, which are in the MeV range. Note that the temperatures in lightning are about 3× 10^4 K and correspond to an electron energy of 2.6 eV, which is by a factor of about 10^5 smaller than the electron mass and the gamma ray energy scale, as the estimate below illustrates! And how are electrons with energies above the MeV range created in a thunder cloud? Years ago I proposed a model for the high energy gamma rays and electrons associated with lightnings in terms of dark matter identified as heff=n× h phases. This model could provide an answer to these questions.
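The estimate mentioned in item 3: a one-line comparison of the thermal energy at the lightning temperature with the electron rest energy.

```python
k_B = 8.617e-5       # eV/K
T = 3e4              # K, typical lightning channel temperature
m_e_c2 = 0.511e6     # eV, electron rest energy

E_thermal = k_B * T
print(f"thermal energy   ~ {E_thermal:.1f} eV")
print(f"ratio to m_e c^2 ~ {E_thermal / m_e_c2:.1e}")   # ~5e-6, i.e. ~10^-5
```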

First some background is needed.
  1. I ended up with the heff=n× h hypothesis from the observations of Blackman and other pioneers of bio-electromagnetism about quantal effects of ELF em fields on the vertebrate brain, which Blackman explained in terms of cyclotron frequencies of the Ca++ ion in the endogenous magnetic field Bend=0.2 Gauss (2/5 of the nominal value BE=.5 Gauss of the Earth's magnetic field). The cyclotron energy E= h× f is however extremely low, much below the thermal energy at physiological temperature, so that no quantal effects should be possible (see the numerical sketch after this list). This inspired the hypothesis heff=n× h scaling up the energy.

  2. Nottale introduced originally the notion of the gravitational Planck constant hgr= GMm/v0 to explain the orbital radii of planets in the solar system as Bohr orbits. The velocity parameter v0 is different for inner and outer planets. Quite recently I proposed that v0 is in constant ratio to the rotation velocity of the large mass M. The interpretation in the TGD framework is that magnetic flux tubes mediate the gravitational interaction between M and m and the value of the Planck constant is hgr at them. The proposal heff=hgr at flux tubes is a very natural sharpening of the original hypothesis. The predictions of the model do not depend on whether m is taken to be the mass of the planet or of any elementary particle associated with it, and the gravitational Compton length λgr= GM/v0 does not depend on the mass of the particle, as it is proportional to the Schwarzschild radius 2GM of the Sun.

  3. This hypothesis can be generalized to apply also to the Earth (see this). For the strength Bgal∼ 1 nT of the galactic magnetic field, assumed to mediate the Earth's gravitational interaction, the cyclotron frequency of 10 Hz in the alpha band is mapped to a cyclotron time scale of 72 minutes. The scaled EEG range corresponds to cyclotron periods varying up to 12 hours for Bgal. For M= ME and Bgal the cyclotron energy corresponds to about 1 eV at the lower end of visible photon energies.

  4. What about the interpretation of the ordinary EEG in terms of cyclotron frequencies, assuming that the corresponding energies are in the visible and UV range corresponding to the variation of Bend? This range was suggested by Blackman to explain the findings about quantal effects of ELF radiation on the brain, which are not possible in standard quantum theory because the energy is much below the thermal threshold. ME is certainly too large to give a spectrum of cyclotron energies in this range: MD= .5 × 10^-4 ME would be needed. I have proposed that MD corresponds to a mass assignable to a spherical layer at the distance of the Moon's orbital radius, and there are independent pieces of evidence for the existence of this layer. Bend would represent the lower bound for the value range of the magnetic field: a variation by at least 7 octaves would give the highest UV energies around 124 eV. The transformation of dark photons to ordinary photons would yield biophotons with energies in the visible and UV range. Also Bgal would have some variation range.

  5. This has a connection to quantum biology and neuroscience. The proposal is that dark cyclotron photons with energies in the visible and UV range, associated with flux tubes of magnetic fields of appropriate strength, serve as a communication tool making it possible for the biological body (BB) to communicate sensory data to the magnetic body (MB) and allowing the MB to control the BB.
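The numerical sketch referred to in item 1 of the list above: the cyclotron frequency of a Ca++ ion in Bend = 0.2 Gauss and the corresponding ordinary-quantum energy E = hf, compared with the thermal energy at physiological temperature.

```python
from math import pi

e = 1.602e-19        # C
u = 1.661e-27        # kg, atomic mass unit
h = 6.626e-34        # J s
k_B = 1.381e-23      # J/K

q = 2 * e            # Ca++ carries charge 2e
m = 40.08 * u        # calcium ion mass
B_end = 2e-5         # T (0.2 Gauss)

f_c = q * B_end / (2 * pi * m)      # cyclotron frequency
E_cyc = h * f_c                     # energy for the ordinary Planck constant
E_thermal = k_B * 310.0             # thermal energy at ~37 C

print(f"f_c       = {f_c:.1f} Hz")                   # ~15 Hz, ELF range
print(f"E = h*f_c = {E_cyc / 1.602e-19:.2e} eV")     # ~6e-14 eV
print(f"k_B*T     = {E_thermal / 1.602e-19:.3f} eV") # ~0.027 eV
# E is more than ten orders of magnitude below the thermal energy, which is
# why heff = n*h with a very large n is invoked to scale the quantum up.
```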

Consider now the model for how the electrons and gamma rays accompanying lightnings can travel to the surface of the Earth without dissipating their energies, and how the collisions of electrons with gamma ray energies generating electropions are possible.
  1. What happens if one replaces MD with ME, meaning that the Earth's gravitons could reside also at the flux tubes of Bend rather than only at those of Bgal? The energies get scaled up by a factor ME/MD= 2× 10^4, and this scales up the 1-100 eV range to .02-2 MeV, so that also gamma ray energies would be obtained (see the sketch after this list).

  2. The earlier proposal was that the electrons and gamma rays associated with lightning arrive at the surface of the Earth along dark magnetic flux tubes, so that by macroscopic quantum coherence in the scale of λgr they do not dissipate their energy.
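A trivial sketch of the rescaling in item 1 above: replacing MD = 0.5×10^-4 ME by ME multiplies hgr, and hence the dark cyclotron energies, by ME/MD = 2×10^4.

```python
scale = 1 / 0.5e-4                 # ME / MD = 2e4
for E_eV in (1.0, 100.0):          # the 1-100 eV range discussed above
    print(f"{E_eV:6.1f} eV -> {scale * E_eV / 1e6:.2f} MeV")
# 1 eV -> 0.02 MeV and 100 eV -> 2 MeV, i.e. the gamma ray range quoted above.
```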

See the chapter Recent status of leptopion hypothesis of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy".

For a summary of earlier postings see Latest progress in TGD.

Sunday, July 10, 2016

Details related to adelic NMP

What happens in state function reduction and what NMP really says is still far from being completely clear. The basic condition is that standard measurement theory emerges as a special case and is forced by NMP. This does not however fix NMP completely.

1. Adelic NMP as the only reasonable option

Over the years I have considered two options for NMP.

  1. In the original approach to NMP it was assumed that both generic entanglement with real entanglement probabilities and entanglement with algebraic entanglement probabilities are possible. Real entanglement is entropic and demands standard measurement theory leading to a 1-D eigen-space of the density matrix. Algebraic entanglement can be negentropic in the number theoretic sense for some p-adic primes, and in this case state function reduction occurs only if it increases negentropy. It takes place to an N-dimensional eigen-space of the density matrix. The basic objection is that real entanglement is transcendental in the generic case, reducing to algebraic entanglement only as a special case. Algebraic entanglement is also extremely rare without additional physical assumptions.

  2. In the adelic approach entanglement coefficients and therefore also entanglement probabilities are always algebraic numbers, by the condition that the notion of p-adic Hilbert space makes sense. Also extensions of rationals defining finite-dimensional extensions of p-adic numbers (roots of e can appear in the extension) must be allowed. The same entanglement can be seen from both the real (sensory) and the p-adic (cognitive) perspectives. The entanglement is always entropic in the real sector but can be negentropic in some p-adic sectors. It is now clear that the adelic option is the only sensible one.

2. Variants of the adelic NMP

The adelic option allows one to consider several variants.

  1. Negentropy could correspond a) to the sum N= NR+∑p Np of the real and various p-adic negentropies or b) to the sum N=∑p Np of only the p-adic negentropies. Np is non-vanishing only for a finite number of p-adic primes, as is easy to find. In both cases ∑p Np could be interpreted as negentropy assignable to cognition. NR might have an interpretation as a measure of the ignorance of one of the entangled systems about the state of the other.

  2. NMP implies that state function reduction (measurement of the density matrix leading to its eigen-space) occurs if negentropy 1) is not reduced or 2) increases. This means that negentropic entanglement is stable against NMP.

Can one select between these options?
  1. For option a) NMP becomes trivial for rational entanglement probabilities, as is easy to find: one has N= NR+∑p Np=0. NMP does not force state function reduction to occur, but it could occur and imply ordinary state function reduction as a special case for option 1) (when the eigen-spaces are 1-dimensional). Therefore one would have option 1a).

  2. If option 1a) is unrealistic, only the options 1b) and 2b) with N= ∑p Np are left. For option 2b) state function reduction necessarily occurs for N=∑p Np<0 but not for N=0 - not even in the rational case. For option 1b) the state function reduction could occur also for N=0. However, since Np is proportional to log(p) and the numbers log(p) are algebraically independent, N=0 is not actually possible so that 1b) and 2b) are equivalent. Therefore NMP states that N=∑p Np must increase for N<0: this forces state function reduction to an eigen-space of the density matrix.

    But is it really possible to have ∑p Np<0, making ordinary state function reduction possible? For rational entanglement probabilities this is not possible since SR= ∑p Np (as the sketch below illustrates for a simple example), and one might even speculate that for algebraic extensions one has ∑p Np≥ SR. A mathematician could probably check the situation. If ∑p Np≥ SR holds true, entanglement is stable against NMP and ordinary state function reduction is not possible. This would leave only the option 1a) and negentropic entanglement with N>0 would be stable also now. N=0 entanglement (possibly always rational) would allow ordinary state function reduction.
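
The identity SR= ∑p Np for rational entanglement probabilities is easy to check numerically. Below is a minimal sketch, assuming the standard definitions NR= -SR (SR the Shannon entropy) and Np= ∑k Pk log|Pk|p with |x|p the p-adic norm; the probabilities 1/2, 1/3, 1/6 are only an illustrative choice.

    # Check, for rational probabilities, the claims used above:
    #   S_R = -sum_k P_k log P_k         (real Shannon entropy)
    #   N_p =  sum_k P_k log |P_k|_p     (p-adic negentropy, |x|_p = p-adic norm)
    #   sum_p N_p = S_R, hence N = N_R + sum_p N_p = 0 for option a).
    from fractions import Fraction
    from math import log

    def v_p(x: Fraction, p: int) -> int:
        """Exponent of the prime p in the rational x (negative if p divides the denominator)."""
        v, num, den = 0, x.numerator, x.denominator
        while num % p == 0: num //= p; v += 1
        while den % p == 0: den //= p; v -= 1
        return v

    def primes_of(x: Fraction):
        """Primes dividing the numerator or the denominator of x."""
        n, p, ps = x.numerator * x.denominator, 2, set()
        while p * p <= n:
            while n % p == 0: ps.add(p); n //= p
            p += 1
        if n > 1: ps.add(n)
        return ps

    P = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)]   # rational probabilities, sum = 1
    S_R = -sum(float(Pk) * log(float(Pk)) for Pk in P)
    primes = set().union(*(primes_of(Pk) for Pk in P))
    # |P_k|_p = p^(-v_p(P_k)), so N_p = -sum_k P_k v_p(P_k) log p
    N = {p: -sum(float(Pk) * v_p(Pk, p) for Pk in P) * log(p) for p in primes}

    print("S_R       =", S_R)               # real entropy
    print("N_p       =", N)                 # nonzero only for the finitely many primes 2 and 3
    print("sum_p N_p =", sum(N.values()))   # equals S_R, so N_R + sum_p N_p = 0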

This still leaves two options: the negentropy gain is A) maximal or B) non-negative but not necessarily maximal. I have considered the latter option earlier.
  1. For option 1a) reduction is possible only for N=0, and in this case the negentropy gain is zero for all possible eigen-spaces of the density matrix so that A) and B) are equivalent. One obtains ordinary state function reductions.

  2. Consider next the equivalent options 1b) and 2b), which make sense if ∑p Np<0 is possible. For option A) the negentropy gain is maximal and the reduction occurs to an eigen-space with maximum dimension N=Nmax (see the schematic sketch after this list). There can be several eigen-spaces with the same maximal dimension. As a special case one obtains ordinary state function reduction. The reduction probability is the same as in standard quantum measurement theory.

    For option B) the reduction could occur also to any N-dimensional eigen-space or its sub-space. The idea would be that NMP allows something analogous to a choice between good and evil: the negentropy gain could in this case also be smaller than the maximal one corresponding to log(Nmax). This would conform with the intuition that we do not seem to live in the best possible world. On the other hand, negentropy transfer between systems could also be seen as stealing in some situations, and metabolism identified as negentropy transfer could be seen as the fundamental "crime" to which all other forms of crime reduce.
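
To make the bookkeeping of option A) concrete, here is a schematic sketch (a toy illustration of the eigen-space structure only, not of TGD dynamics): the eigenvalues of a density matrix are grouped into degenerate eigen-spaces, and the reduction is restricted to the eigen-spaces of maximal dimension, with Born probabilities as in standard measurement theory.

    # Schematic illustration of option A): group the eigenvalues of a density
    # matrix into degenerate eigen-spaces and restrict the reduction to the
    # eigen-spaces of maximal dimension N_max.  Toy example only.
    import numpy as np

    def eigenspaces(rho, tol=1e-9):
        """Return (eigenvalue, dimension) pairs for the density matrix rho."""
        vals = np.sort(np.linalg.eigvalsh(rho))[::-1]
        groups = []
        for v in vals:
            if groups and abs(groups[-1][0] - v) < tol:
                groups[-1][1] += 1
            else:
                groups.append([v, 1])
        return groups

    # toy density matrix: eigenvalue 1/4 with multiplicity 2, eigenvalue 1/8 with multiplicity 4
    rho = np.diag([1/4, 1/4, 1/8, 1/8, 1/8, 1/8])
    spaces = eigenspaces(rho)
    N_max = max(dim for _, dim in spaces)
    for eigenvalue, dim in spaces:
        prob = dim * eigenvalue          # standard Born probability for this eigen-space
        status = "allowed by A)" if dim == N_max else "excluded by A)"
        print(f"dim={dim}  P={prob:.3f}  negentropy gain={np.log2(dim):.2f} bits  ({status})")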

3. Could quantum measurement involve also adelic localization?

For option B) there is still one possible refinement involved. p-Adic mass calculations lead to the conclusion that elementary particles are characterized by p-adic primes and that the p-adic length scale hypothesis p≈ 2^k holds true: a more general form of the hypothesis allows one to consider also primes near powers q^n of some small prime such as q=3.

Could state function reduction imply also adelic/cognitive localization in the sense that the negentropy is non-zero and positive for only a single p-adic prime in the final state? The reduction would occur to a p^k-dimensional eigen-space with p^k dividing N: any such divisor would be allowed (a simple enumeration is sketched below). Note that Hilbert spaces with prime dimension are prime with respect to the decomposition into tensor products so that the reduction would select a prime power factor of the eigen-space.
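
A minimal sketch of the divisor counting just mentioned: given the dimension N of the eigen-space, the allowed final-state dimensions would be the prime power divisors p^k of N. The value N=360 is only an illustrative example.

    # Prime power divisors p^k of the eigen-space dimension N: the adelic
    # localization proposed above would select one of these as the dimension
    # of the final-state eigen-space (here N = 360 = 2^3 * 3^2 * 5 as a toy case).
    def prime_power_divisors(N: int):
        divisors, p, n = [], 2, N
        while p * p <= n:
            if n % p == 0:
                k = 0
                while n % p == 0:
                    n //= p; k += 1
                divisors += [(p, e, p ** e) for e in range(1, k + 1)]
            p += 1
        if n > 1:
            divisors.append((n, 1, n))
        return divisors

    print(prime_power_divisors(360))
    # -> [(2, 1, 2), (2, 2, 4), (2, 3, 8), (3, 1, 3), (3, 2, 9), (5, 1, 5)]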

The information theoretic meaning would be that prime-dimensional Hilbert spaces are stable against decomposition into tensor products, so that the notion of entanglement would not make sense and therefore also the change of the state by the reduction of entanglement would be impossible. I have considered the possibility that prime-dimensional state spaces could make stable storage of quantum information possible. The prime-dimensional state, when embedded in a higher-dimensional space, could be seen as an entangled state.

This hypothesis would provide considerable insight into the origin of the p-adic length scale hypothesis. To make contact with physics, consider the electron as an example.

  1. In the case of the electron one would have p=M127=2^127-1∼ 10^38. Could the electron decompose into two entangled subsystems with density matrix proportional to the p× p identity matrix? The dimension of the eigen-space would be huge and the electron would carry a negentropy of 127 bits: also p-adic mass calculations combined with a generalization of the Hawking-Bekenstein formula suggest that the electron carries an entropy of 127 bits, and in the adelic picture these views are mutually consistent (the numbers are checked in the sketch after this list).

    The recent view indeed is that all elementary particles correspond to closed monopole magnetic flux tubes with the shape of a highly flattened rectangle, with short sides identifiable as extremely short wormhole contacts (of CP2 size) and long sides with length of the order of the Compton length. The magnetic monopole flux traverses along the first space-time sheet between the wormhole throats, goes through the wormhole contact, and returns back along the second space-time sheet. Many-fermion states are assigned to the throats and are located at the ends of strings traversing along the flux tubes.

    Could this structure be, in the case of the electron, a 127-sheeted structure such that the two wormhole contacts carry a superposition of pairs of states containing n ∈{1,...,127} fermions at the first contact and n antifermions with opposite charges at the second contact, so that a (2^127-1)-dimensional eigen-space would be obtained for a fermion with given spin and isospin? For instance, the n=0 state with no fermion pairs could be excluded.

  2. Right-handed neutrinos and antineutrinos are candidates for the generators of N=2 supersymmetry in the TGD framework. It however seems that SUSY is not manifested at LHC energies, and one can wonder whether right-handed neutrinos might be realized in some other manner. Also the mathematics involved still remains somewhat unclear. For right-handed neutrinos, which are not covariantly constant, the transformation to left-handed neutrinos is possible and leads to the mixing and massivation of neutrinos. For covariantly constant right-handed neutrino spinors this does not happen, but they can be included in the spectrum only if they have a non-vanishing norm.

    This might be the case with a proper definition of the norm, with Ψbar p^kγkΨ replaced by Ψbar n^kγkΨ: here n^k defines the normal of the light-like boundary of CD. Covariantly constant right-handed neutrinos have neither electro-weak, color, nor gravitational interactions, so that their negentropic entanglement would be highly stable. Unfortunately, the situation is still unclear, and this leaves open the idea that right-handed neutrinos might play a fundamental role in cognition and negentropy storage. Amusingly, I proposed the notion of cognitive neutrino a long time ago, but based on arguments which turned out to be wrong.

    One could indeed consider the possibility that each sheet of the 127-sheeted structure contains at most one νR at the neutrino end of the flux tube accompanied by a νbarR at the anti-neutrino end. One would have a superposition of p=2^127-1 states formed by many-neutrino states and their CP conjugates at opposite "ends" of the flux tube. It is also possible that the νbarRνR pairs are spin singlets so that one has a superposition over many-particle states formed from these, analogous to a coherent state.

    This is not the only possibility. The proposal for how the finite range of weak interactions emerges suggests a possible realization for how the number of states in the superposition reduces from 2^127 to 2^127-1. The left weak isospin of the fermion at the wormhole throat is compensated by the opposite weak isospin of a neutrino/antineutrino, plus a νbarR or νR cancelling its fermion number: therefore the weak charges vanish in scales longer than the flux tube length, which is of the order of the Compton length. The physical picture is that massless weak boson exchanges occur inside the flux tube, which therefore defines the range of weak interactions. The same mechanism could be at work for both wormhole throat pairs and therefore also for the fermion and anti-fermion at opposite wormhole throats defining the building bricks of bosons. The state νbarRνR would be excluded from the superposition of pairs of many-particle states and the superposition would contain p=2^127-1 states.

  3. Could this relate to the heff=n× h hypothesis? It has been assumed that heff/h=n corresponds to space-time surfaces representable as n-fold singular coverings, whose sheets coincide at the 3-D ends of the space-time surface at the opposite boundaries of CD. There is of course no need to assume that the covering considered above corresponds to a singular covering, and the vision that only particles with the same value of n appear in the same vertices suggests that n=1 holds true for visible matter.

    One can still ask whether the elementary particle characterized by p ≈ 2^k could correspond to a k-fold singular covering and to heff/h= k. This would require that phase transitions changing the value of k take place at the lines of scattering diagrams to guarantee that all particles have the same value of k in a given vertex. These phase transitions are a key element of TGD inspired quantum biology.

    In the first order of perturbation theory this would not mean any deviations from standard quantum theory for a given k, and the general vision that loop corrections from the functional integration over WCW vanish suggests that there are no such effects in perturbation theory for a given k at all. p-Adic coupling constant evolution would be discrete and would make itself visible through the phase transitions at the lines of scattering diagrams (not identifiable as Feynman diagrams). The different values of heff/h=n would also be seen through non-perturbative effects assignable to bound states and via the proportionality of the p-adic mass scales to p^(-1/2)≈ 2^(-k/2) predicted by p-adic mass calculations.
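
The numbers used in the electron (M127) example of the first item are easy to check explicitly. The sketch below verifies with the Lucas-Lehmer test that M127= 2^127-1 is prime, counts the states with n ∈{1,...,127} occupied sheets, and evaluates the negentropy log2(p) ≈ 127 bits of a maximally entangled p-dimensional state (probabilities 1/p, so that Np= log p) together with the mass-scale factor p^(-1/2)≈ 2^(-63.5).

    # Check the numbers appearing in the M_127 (electron) discussion above.
    from math import comb, log2

    def lucas_lehmer(k: int) -> bool:
        """Lucas-Lehmer test: M_k = 2^k - 1 is prime iff s_(k-2) = 0 mod M_k (k an odd prime)."""
        M, s = 2 ** k - 1, 4
        for _ in range(k - 2):
            s = (s * s - 2) % M
        return s == 0

    k = 127
    p = 2 ** k - 1                        # M_127
    print(lucas_lehmer(k))                # True: M_127 is a Mersenne prime
    print(sum(comb(k, n) for n in range(1, k + 1)) == p)   # True: 2^127 - 1 states with n >= 1
    print(f"p        = {float(p):.3e}")   # ~1.7e38, the '10^38' quoted above
    print(f"log2(p)  = {log2(p):.4f}")    # ~127 bits: negentropy of a p-dimensional
                                          # maximally entangled state (P_k = 1/p, N_p = log p)
    print(f"p^(-1/2) = {p ** -0.5:.3e}")  # ~2^(-63.5): p-adic mass scale factor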

See the chapter Non-locality in quantum theory, in biology and neuroscience, and in remote mental interactions: TGD perspective of "TGD Based View About Living Matter And Remote Mental Interactions" or article with the same title.

For a summary of earlier postings see Latest progress in TGD.