Tuesday, October 16, 2018

Anomalously strong 21-cm absorption line of hydrogen in cosmology as an indication for the TGD based view about dark matter

The so-called 21-cm anomaly, meaning that there is unexpected absorption of this line, could be due to a transfer of energy from the gas to dark matter leading to a cooling of the gas. This requires an em interaction of ordinary matter with dark matter, but the allowed value of the electric charge of the dark matter particle must be much smaller than elementary particle charges. In the TGD Universe the interaction would be mediated by an ordinary photon transforming to a dark photon having effective value heff/h0 = n larger than the standard value, implying that the em charge of the dark matter particle is effectively reduced. Interaction vertices would involve only particles with the same value of heff/h0 = n.

In this article a simple model for the mixing of the ordinary photon and its dark variants is proposed. Due to the transformations between different values of heff/h0 = n during propagation, mass squared eigenstates are mixtures of photons with various values of n. An analog of the CKM matrix describing the mixing is proposed. Also the model for neutrino oscillations is generalized so that it applies not only to photons but to all elementary particles. The condition that the "ordinary" photon is essentially massless during propagation forces one to assume that during propagation the photon is a mixture of ordinary and dark photons, which would both be massive in the absence of mixing. A reduction to the ordinary photon would take place in the interaction vertices and therefore also in absorption. The mixing provides a new contribution to the particle mass besides those coming from p-adic thermodynamics and from the Kähler magnetic fields assignable to the string like object associated with the particle.
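
The following is a minimal numeric sketch of the simplest case: two-state mixing between the ordinary photon and a single dark variant, with the standard two-state oscillation formula standing in for the full CKM-like description. The mixing angle and mass squared splitting are hypothetical placeholders, not values derived from TGD.

    import numpy as np

    # Hypothetical parameters for ordinary-dark photon mixing.
    theta = 0.1        # mixing angle (radians), placeholder
    dm2   = 1e-12      # mass squared splitting of the eigenstates (eV^2), placeholder
    E     = 5.9e-6     # energy of a 21-cm photon (eV)

    def prob_dark(L):
        """Probability that a photon created as 'ordinary' is dark after
        propagating a distance L, given in natural units 1/eV (hbar = c = 1)."""
        return np.sin(2 * theta) ** 2 * np.sin(dm2 * L / (4 * E)) ** 2

    # Averaged over many oscillation lengths the probability tends to
    # sin^2(2*theta)/2, independently of the splitting.
    L = np.linspace(0.0, 1e13, 100001)
    print(prob_dark(L).mean(), 0.5 * np.sin(2 * theta) ** 2)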

See the article 21-cm anomaly and analogs of CKM mixing and neutrino oscillations for photon and its dark variants or the chapter
Quantum criticality and dark matter.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, October 15, 2018

Increase of the dimension of extension of rationals as the emergence of a reflective level of consciousness

In the TGD framework the hierarchy of extensions of rationals defines a hierarchy of adeles and an evolutionary hierarchy.
What could be the interpretation of the events in which the dimension of the extension of rationals increases? The new extension is an extension of the old extension, with relative Galois group Gal(rel) = Gal(new)/Gal(old). Here Gal(old) is a normal subgroup of Gal(new). A highly attractive possibility is that evolutionary sequences quite generally (not only in biology) correspond to this kind of sequences of Galois extensions. The relative Galois groups in the sequence would be analogous to conserved genes, and genes could indeed correspond to Galois groups (see this). To my best understanding this corresponds to a situation in which the new polynomial Pm+n defining the new extension is a polynomial Pm having as its argument the old polynomial Pn(x): Pm+n(x) = Pm(Pn(x)).
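
The following sketch (assuming sympy) illustrates the composition Pm+n(x) = Pm(Pn(x)) with toy polynomials of my own choosing: the degrees multiply, so the composite polynomial defines an extension of the extension defined by the old one.

    from sympy import symbols, expand, Poly

    x = symbols('x')
    Pn = x**2 - 2                   # old polynomial: defines a degree-2 extension
    Pm = x**2 - 3                   # new polynomial applied on top of the old one
    Pmn = expand(Pm.subs(x, Pn))    # P_{m+n}(x) = P_m(P_n(x))

    print(Pmn)                          # x**4 - 4*x**2 + 1
    print(Poly(Pmn, x).degree())        # 4 = 2*2: the degrees multiply
    print(Poly(Pmn, x).is_irreducible)  # True: a genuine degree-4 extension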

What about the interpretation at the level of conscious experience? A possible interpretation is that the quantum jump leading to an extension of an extension corresponds to the emergence of a reflective level of consciousness giving rise to a conscious experience about experience. The abstraction level of the system becomes higher, as is natural, since number theoretic evolution as an increase of algebraic complexity is in question.

This picture could have a counterpart also in terms of the hierarchy of inclusions of hyperfinite factors of type II1 (HFFs). The included factor M and the including factor N would correspond to extensions of rationals labelled by Galois groups Gal(M) and Gal(N) having Gal(M) ⊂ Gal(N) as a normal subgroup, so that the factor group Gal(N)/Gal(M) would be the relative Galois group for the larger extension as an extension of the smaller extension. I have indeed proposed (see this) that the inclusions for which the included and including factors consist of operators invariant under a discrete subgroup of SU(2) generalize so that all Galois groups are possible. One would have Galois confinement analogous to color confinement: the operators generating physical states could have Galois quantum numbers, but the physical states would be Galois singlets.
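
As a concrete toy illustration of the group theory involved (plain Python, with S3 standing in for Gal(new) and its alternating subgroup A3 for Gal(old)): normality, g H g^-1 = H for all g, is exactly what makes the coset space Gal(new)/Gal(old) a group, the relative Galois group.

    from itertools import permutations

    def compose(p, q):
        """(p*q)(i) = p(q(i)) for permutations represented as tuples."""
        return tuple(p[q[i]] for i in range(len(q)))

    def inverse(p):
        inv = [0] * len(p)
        for i, pi in enumerate(p):
            inv[pi] = i
        return tuple(inv)

    def sign(p):
        """+1 for even permutations, -1 for odd ones (count inversions)."""
        n = len(p)
        return (-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))

    S3 = set(permutations(range(3)))        # plays the role of Gal(new), order 6
    A3 = {p for p in S3 if sign(p) == 1}    # plays the role of Gal(old), order 3

    # A3 is normal in S3: conjugation maps it to itself for every g.
    print(all({compose(compose(g, h), inverse(g)) for h in A3} == A3 for g in S3))

    # The cosets form the quotient group of order 6/3 = 2: the relative Galois group.
    cosets = {frozenset(compose(g, h) for h in A3) for g in S3}
    print(len(cosets))   # 2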

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

See the article Re-examination of the basic notions of TGD inspired theory of consciousness or the article Does M8-H duality reduce classical TGD to octonionic algebraic geometry?.

Thursday, October 11, 2018

Learning by conditioning and learning by discovery

I had an "entertaining" discussion with two fellows - I call them A and B - which taught me a lot, I hope also A and B, and actually gave a good example of two kinds of learning: learning by conditioning and learning by discovery. It also led to a possible understanding of what goes wrong in what I would call the ultra-skeptic cognitive syndrome.

[This discussion by the way gave me good laughs. Of A and B, the first summarized his academic background by "studied strings" and the second was a Bachelor in computer science but pretended to be an M-theorist. They tried to demonstrate that I am a crackpot. They carried out an "investigation" following the principles of the investigations made for witch candidates in the Middle Ages. The victim had two options: she drowned, or she did not, in which case she was burned at the stake.]

The highly emotional discussion was initiated by a totally nonsensical hype about transferring the consciousness of C. elegans to a computer program (see this). I told that the news was hype, and this raised the rage of A and B. The following considerations have very little to do with this article. Note however that I have done some work on AI in general and even with the basic ideas of deep learning. For instance, two years ago we had a collaboration about AI, the IIT approach to consciousness, and a possible connection with remote mental interactions together with Lian Sidorov and Ben Goertzel, who is behind the Sophia robot. There are two chapters related to this (see this and this). I think that the latter chapter is published in a book by Goertzel. There is also a critical article inspired by the Sophia robot, about which Ben Goertzel wrote an enthusiastic article and sent it to Lian Sidorov and me (this).

1. Learning by conditioning

Returning to learning: the first kind of learning is learning by conditioning, which deep learning algorithms try to mechanize. The second kind of learning is learning by discovery. The latter is impossible for computers because they obey a deterministic algorithm and are unable to do anything creative.

Emotions play a strong role in learning by conditioning in the case of living systems, and in its simplest form it is the learning of X-good and X-bad type associations helping C. elegans to survive in the cruel world. In the case of humans this kind of associations can be extremely dangerous, as for instance the course of events in the USA has shown.

A very large part of our learning is just the forming of associations: this is what Pavlov's dogs did. In school we learn to associate to the symbol "2×3=" the symbol "6". In our youth we also learned algorithms for addition, subtraction, multiplication, and division, and even for finding the roots of a second order polynomial. Often this is called learning of mathematics. Later some mathematically gifted ones however discovered that this is just simple conditioning to an algorithm and has very little to do with genuine mathematical thinking. The discovery of the algorithm itself would be mathematical thinking. The skill to code an algorithm - usually a given one - is also an algorithm, and it can also be coded in AI.

If we are good enough at getting conditioned, we get a studentship in a university and learn science. This involves also the learning of simple conditionings of type X-good and X-bad. In this learning social feedback from others reinforces learning: who would not like to earn the respect of the others!

For X-bad conditionings X can be homeopathy, water memory, cold fusion, telepathy, remote viewing, non-reductionistic/non-physicalistic world view, quantum theories of consciousness, TOEs other than M-theory, etc... For X-good conditionings X can be physicalism, reductionism, strong AI, superstrings, Witten, etc...

The student learns also to utter simple sentences demonstrating that he has learned the desired conditionings. This is important for a career. Proud parents who hear the baby say their first word encourage the child. In the same manner the environment reinforces the learning of "correct" opinions by positive feedback. The discussion with A and B gave quite a collection of these simple sentences. "I guessed that he is a crank" from A is a good example, intended to express the long life experience and wisdom of the youngster.

These conditionings make it also easy to "recognize" whether someone is a crank/crackpot/etc... and even to carry out personal investigations - analogous to the witchcraft investigations of the Middle Ages - of whether someone is a crank or not. This is what A and B in their young and foolish arrogance indeed decided to carry out.

2. Learning by Eureka experience

There is also a second kind of learning: learning by discovery. Computers are not able to do this. I mentioned in the discussion what happens when you look at a certain kind of image consisting of mere random looking spots in a plane. After enough staring a beautiful 3-D pattern suddenly emerges. This is a miracle-like phenomenon, a Eureka experience. The quantum consciousness based explanation is the emergence of quantum coherence in the scale of the neuronal cognitive representation, in the visual cortex at least. A new 3-D mental image emerges from a purely 2-D one. One goes outside of the context.

The increase of dimension might provide an important hint about what happens more generally: this would indeed occur for the dimension of the extension of rationals in the Eureka quantum jump in the TGD based model of what could occur. Physically this would correspond to the increase of the effective Planck constant heff = n×h0, h = 6×h0, assignable to the mental image created by the picture. n is indeed the dimension of the extension of rationals and would increase, and also the scale of quantum coherence would increase from that of a single spot to that of the entire picture.

This kind of learning by Eureka is probably very common for children: they are said to be geniuses. Later the increasing dominance of learning by conditioning often eliminates this mode of learning, and the worst outcome is a mainstream scientist who is a hard-nosed skeptic. Solving genuine problems is the manner to gain these learning experiences, but they come only now and then. Some of them are really big: during my professional career there have been - I would guess - about 10 really big experiences of this kind involving the discovery of a new principle or a totally new physical idea.

3. How to understand what is wrong with vulgar skeptics?

The discussion was very interesting since it led me to ponder why it is so hopeless to explain something extremely simple to skeptics. There is a beautiful connection with the learning based on the Eureka experience. Physically this corresponds in TGD to a phase transition increasing the scale of quantum coherence and the algebraic complexity: more technically, the effective Planck constant heff increases at some levels. More intelligent mental images become possible, and the Eureka experience happens, as in the situation when a chaotic 2-D set of points becomes a beautiful 3-D object.

Biological evolution at the level of species is based on this: we humans are more intelligent than banana flies. This evolution occurs at all levels - also at the level of individuals, but it is not politically correct to say this aloud. Some of us are in their intellectual evolution at a higher level than others, either congenitally or by our own efforts or both. This of course creates bitter feelings. Intellectual superiority irritates and induces hatred. This is why so many intellectuals spend most of their life in jail.

Take seeing as an example. If a person has become blind at an adult age, he understands that he is blind and also what it feels like to see. Also a congenitally blind person believes that he is blind: this is because most people in his environment tell that it is possible to see and that he is blind. He does not however feel what it is to see. Suppose now that most of us were blind and then someone came and told that he sees. How many would believe him? They cannot feel what it is to see. Very probably they would conclude that this fellow is a miserable crank.

Suppose now that a certain person - call him MP - has used 4 decades to develop a TOE based on a generalization of the superstring model made 5 years before the first superstring revolution and explaining also consciousness. MP tries his best to explain his TOE to a couple of skeptics but finds it hopeless. They even arrange an "investigation" following the best traditions of witch hunts to demonstrate his crackpotness. And indeed, they conclude that they were correct: all that this person writes is totally incoherent nonsense, just like the 2-D set of random points.

These two young fellows are simply intellectually blind since their personal hierarchy of Planck constants does not contain the required higher values. A Eureka experience would be required. MP could of course cheat and tell that he believes in superstrings and give a hint that he is a good friend of Witten. This would help but would only lead to pretended understanding. The fellows would take MP seriously only because MP agrees with Witten and claims to be a friend of Witten, but still they would not have the slightest idea what TGD is. They cannot feel what it is to understand TGD.

The only hope is personal intellectual evolution increasing the needed Planck constants in the personal hierarchy. This is possible only if these fellows admit that they are intellectually blind in some respects, but being young arrogant skeptics they furiously deny this and therefore also the possibility of personal intellectual evolution.

See the article Two manners to learn and what goes wrong with vulgar skeptics?.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Sunday, October 07, 2018

TGD view about ANITA anomalous events

I read an article (see this) telling about 2 anomalous cosmic ray events detected by the ANITA (Antarctic Impulsive Transient Antenna) collaboration. Also the IceCube collaboration has observed 3 events of this kind. What makes the events anomalous is that the cosmic ray shower emanates from the Earth: the standard model does not allow the generation of this kind of showers. The article proposes the super-partner of the tau lepton, known as stau, as a possible solution of the puzzle.

Before continuing it is good to summarize the basic differences between TGD and the standard model at the level of elementary particle physics. TGD differs from the standard model by three basic new elements: the p-adic length scale hypothesis predicting a fractal hierarchy of hadron and electroweak physics; the topological explanation of the family replication phenomenon; and the TGD view about dark matter.

  1. The p-adic length scale hypothesis states that Mersenne primes Mn and Gaussian Mersennes MG,n give rise to scaled variants of ordinary hadron and electroweak physics with mass scale proportional to Mn^(1/2) = 2^(n/2).

    M127 would correspond to the electron and possibly also to what I have called lepto-hadron physics. The muon and nuclear physics would correspond to MG,113, and τ and hadron physics would correspond to M107. Electroweak gauge bosons would correspond to M89. nG = 73, 47, 29, 19, 11, 7, 5, 3, 2 would correspond to Gaussian Mersennes and n = 61, 31, 19, 17, 13, 7, 5, 3, 2 to ordinary Mersennes. There are four Gaussian Mersennes corresponding to nG ∈ {151, 157, 163, 167} in the biologically relevant length scale range 10 nm - 2.5 μm (from cell membrane thickness to nucleus size): this can be said to be a number theoretical miracle.

  2. The basic assumption is that the family replication phenomenon reduces to the topology of the partonic 2-surfaces serving as geometric correlates of particles. An orientable topology is characterized by its genus - the number of handles attached to a sphere to obtain the topology. The 3 lowest genera are assumed to give rise to elementary particles. This would be due to the Z2 global conformal symmetry possible only for g = 0, 1, 2. By this symmetry a single handle behaves like a particle and two handles like a bound state of 2 particles. The sphere corresponds to a ground state without particles. For the higher genera handles and handle pairs would behave like many-particle states with a mass continuum.

  3. The model of family replication is based on U(3) as a dynamical "generation color" acting as a combinatorial dynamical symmetry assignable to the 3 generations, so that fermions correspond to an SU(3) multiplet and gauge bosons to a U(3) octet, with the lowest generation associated with U(1). The Cartan algebra of U(2) would correspond to the two light generations with masses above the intermediate boson mass scale.

    3 "generation neutral" (g-neutral) weak bosons (Cartan algebra) are assigned with n=89 (ordinary weak bosons), nG= 79 and nG=73 correspond to mass scales m(79) = 2.6 TeV and m(73) =20.8 TeV. I have earlier assigned third generation with n=61. The reason is that the predicted mass scale is same as for a bump detected at LHC and allowing interpretation as g-neutral weak boson with m(61)=1.3 PeV.

    The 3+3 g-charged weak bosons could correspond to n = 61 with m(61) = 1.3 PeV (or to the nG = 73 boson with m(73) = 20.8 TeV) and to nG = 47, 29, 19 and n = 31, 19. The masses are m(47) = .16 EeV, m(31) = 256×m(47) = 40 EeV, m(29) = 80 EeV, m(19) = 256 EeV, m(17) = .5×10^3 EeV, and m(13) = 2×10^3 EeV. This corresponds to the upper limit for the energies of cosmic rays detected at ANITA. (The scaling rule behind these numbers is checked numerically in the sketch after this list.)

    In the TGD framework the most natural identification of the Planck length would be as the CP2 length R, which is about 10^3.5 times the Planck length as it is usually identified. Newton's constant would have a spectrum, and its ordinary value would correspond to G = R^2/hbar_eff with hbar_eff ∼ 10^7×hbar. UHE cosmic rays would allow us to get information about physics near the Planck length scale in the TGD sense!

  4. TGD predicts also a hierarchy of Planck constants heff = n×h0, h = 6h0, labelling phases of ordinary matter identified as dark matter. The phases with different values of n are dark matter relative to each other, but phase transitions changing the value of n are possible. The hypothesis would realize quantum criticality with long length scale quantum fluctuations, and it follows from what I call adelic physics.

    n corresponds to the dimension of the extension of rationals defining one level in the hierarchy of adelic physics defined by extensions of rationals inducing extensions of p-adic number fields serving as correlates for cognition in the TGD inspired theory of consciousness. p-Adic physics would provide extremely simple but information rich cognitive representations of the real number based physics, and the understanding of p-adic physics would be an easy manner to understand the real physics. This idea was inspired by the amazing success of the p-adic mass calculations, which initiated the progress leading to adelic physics.
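
A small numeric check (my own sketch) of the scaling rules used above and in what follows: the weak boson mass scale obeys m(n) = 2^((89-n)/2)×m(89) with m(89) ≈ 80 GeV, and the scaled proton mass obeys mp(n) = 2^((107-n)/2)×1 GeV.

    # Numeric check of the p-adic mass scalings quoted in the text.
    def weak_scale_GeV(n, m89=80.0):
        """Weak boson mass scale for (Gaussian) Mersenne index n."""
        return 2 ** ((89 - n) / 2) * m89

    def proton_scale_GeV(n, m107=1.0):
        """Scaled proton mass for the hadron physics labelled by index n."""
        return 2 ** ((107 - n) / 2) * m107

    for n in (79, 73, 61, 47):
        print(n, weak_scale_GeV(n))
    # 79 -> 2.6e3 GeV = 2.6 TeV,  73 -> 2.0e4 GeV = 20 TeV,
    # 61 -> 1.3e6 GeV = 1.3 PeV,  47 -> 1.7e8 GeV = 0.17 EeV

    print(proton_scale_GeV(47))  # 2^30 GeV = 1.07e9 GeV, about 1 EeV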

It is natural to ask what TGD could say about the ANITA anomaly serving as very strong (5 sigma) evidence for new physics beyond the standard model. Consider first the basic empirical constraints on the model.
  1. According to the article, there are 2 anomalous events detected by the ANITA collaboration and 3 such events detected by the IceCube collaboration. For these events the cosmic ray shower comes from the Earth's interior. The standard model does not allow this kind of events since the incoming particle - even a neutrino - would dissipate its energy and never reach the detector.

    This serves as a motivation for the SUSY inspired model of the article proposing that a stau, the super-partner of the tau lepton, is created and could have so weak interactions with ordinary matter that it is able to propagate through the Earth. There must however be a sufficiently strong interaction to make the detection possible. The mass of the stau is restricted to the range .5-1.0 TeV by the constraints posed by LHC data on SUSY.

  2. The incoming cosmic rays associated with the anomalous events have energies around εcr = .5×10^18 eV. A reasonable assumption is that the rest system of the source is at rest with respect to the Earth within an energy resolution corresponding to the EeV energy scale. No astrophysical mechanism based on standard physics producing cosmic rays with energies of order 10^11 GeV is known, and here the p-adic hierarchy of hadron and electroweak physics suggests mechanisms.

In the TGD framework the natural question is whether the energy scale corresponds to some Mersenne or Gaussian Mersenne, so that the neutrino and the corresponding lepton could have been produced in the decay of a W boson labelled by this prime. By the scaling of the weak boson mass scale, the Gaussian Mersenne MG,47 = (1+i)^47 - 1 would correspond to a weak boson mass scale m(47) = 2^((89-47)/2)×80 GeV = .16 EeV. This mass scale is roughly a factor 1/3 below the energy scale of the incoming cosmic ray. This would require that the temperature at the source is at least 6×m(47) if the neutrino is produced in the decay of an MG,47 W boson. This option does not look attractive to me.

Could the cosmic rays be (possibly dark) protons of MG,47 hadron physics?

  1. The scaling of the mass of the ordinary proton, about mp(107) ≈ 1 GeV, gives mp(47) = 2^((107-47)/2) GeV ≈ 1 EeV! This is encouraging! Darkness in the TGD sense could make it possible for them to propagate through matter. In the interactions with matter neutrinos and leptons would be generated.

    The article tells that the energy εcr of the cosmic ray showers is εcr ∼ .6 EeV, roughly 60 per cent of the rest energy of the cosmic ray proton. I do not know how precise the determination of the energy of the shower is. The production of dark particles during the generation of the shower could explain the discrepancy.

  2. What could one say about the interactions of the dark MG,47 proton with ordinary matter? Does p(47) transform to an ordinary proton in a stepwise manner, as the Mersenne prime is gradually reduced, or in a single step? What is the rate for the transformation to an ordinary proton? The free path should be a considerable fraction of the Earth's radius by the argument of the article.

    The transformation to an ordinary proton would generate a shower containing also tau leptons and tau neutrinos, with pion decays producing muons and electrons and their neutrinos. Neutrino oscillations would produce tau neutrinos: the standard model predicts a flavor ratio of about 1:1:1.

  3. What could happen in the strong interactions of the dark proton with nuclei? Suppose that the dark proton is relativistic with Ep = x×Mp = x EeV, x > 1, say x ∼ 2. The total cm energy Ecm in the rest system of the ordinary proton is for a relativistic EeV dark proton + ordinary proton about Ecm = (3/2)x^(1/2)×(mp×Mp)^(1/2) = x^(1/2)×5 TeV, considerably above the rest energy mp(89) = 512×mp = .48 TeV of the M89 dark proton. The kinetic energy is transformed to the rest energy of particles emanating from the collision of the dark and the ordinary proton.

    If the collision takes place with a quark of the ordinary proton with mass mq = 5 MeV, Ecm is reduced by a factor of 5^(1/2)×10^(-3/2), giving E = x^(1/2)×1.3 TeV, which is still above the threshold for transforming the cosmic ray dark proton to an M89 dark proton.

    This suggests that the interaction produces first dark relativistic M89 protons, which in further interactions transform to ordinary protons producing the shower and the neutrinos. I have proposed already more than two decades ago that strange cosmic ray events such as Centauros generate hot spots involving M89 hadrons. At LHC quite a number of bumps with masses obtained by scaling from the masses of the mesons of ordinary hadron physics have been observed. I have proposed that they are associated with quantum criticality assignable to a phase transition analogous to the generation of quark gluon plasma, and are dark in the TGD sense, having heff/h = 512 so that their Compton wavelengths are the same as for ordinary hadrons.

  4. The free path of the (possibly) dark MG,47 proton in ordinary matter should be a considerable fraction of the Earth's radius, since the process of tau regeneration based on standard physics cannot explain the findings. The interaction with ordinary matter, possibly involving the transformation of the dark proton to an ordinary one (or vice versa!), must be induced by the presence of ordinary matter rather than being spontaneous.

    Also the flux of cosmic ray protons at EeV energies must be high enough. It is known that UHE cosmic rays very probably are not gamma rays. Besides neutrinos, dark MG,47 protons would be a natural candidate for them.

See the article Topological description of family replication and evidence for higher gauge boson generations, the shorter article TGD based explanation of two new neutrino anomalies, or the chapter New Particle Physics Predicted by TGD: Part I.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Friday, October 05, 2018

New indications for the third generation weak bosons

There are indications (see this) that electron neutrinos are observed by IceCube more often than other neutrinos. In particular, there seems to be a deficit of τ neutrinos. The results are very preliminary. In any case, there seems to be an inconsistency between two methods of observing the neutrinos. The discrepancy seems to come from the higher energy end of the energy range [13 TeV, 7.9 PeV], from energies above 1 PeV.

The article "Invisible Neutrino Decay Could Resolve IceCube's Track and Cascade Tension" by Peter Denton and Irene Tamborra tries to explain this problem by assuming that τ and μ neutrinos can decay to a superparticle called majoron (see this).

The standard model for the production of neutrinos is based on the decays of pions producing e+νe and μ+νμ. Also μ+ can travel in the direction of the Earth and decay to e+νeνμ, doubling the electron neutrino fraction. The flavor ratio would be 2:1:0.

Remark: The article (see this) claims that the flavor ratio in pion decays is 1:2:0, which is wrong: finding the reason for the slip is left as an exercise for the reader.

Calculations taking into account also the neutrino oscillations during the travel to Earth, to be discussed below, lead in good approximation to a predicted flavor ratio 1:1:1. The measurement teams suggest that the measurements are consistent with this flavor ratio.

There are however big uncertainties involved. For instance, the energy range is rather wide [13 TeV, 7.9 PeV], and if the neutrinos are produced in the decay of a third generation weak boson with mass about 1.5 PeV, as TGD predicts, the averaging can destroy the information about the branching fractions.

In the TGD based model (see this) third generation weak bosons - something new predicted by TGD - with mass around 1.5 PeV, corresponding to the mass scale assignable to the Mersenne prime M61 (they can also have energies above this energy), would produce neutrinos in decays to antilepton neutrino pairs.

  1. The mass scale predicted by TGD for the third generation weak bosons is correct: it would differ by a factor 2^((89-61)/2) = 2^14 from the weak boson mass scale. LHC gives evidence also for the second generation corresponding to the Gaussian Mersenne MG,79: also in this case the mass scale comes out correctly. Note that ordinary weak bosons would correspond to M89.

  2. The charge matrices of the 3 generations must be orthogonal, and this breaks the universality of weak interactions. The lowest generation has a generation charge matrix proportional to (1,1,1) - the generation charge matrix describes the couplings to the different generations. The unit matrix codes for the universality of the ordinary electroweak and also color interactions. For the higher generations of electroweak bosons and also gluons universality is lost, and the flavor ratio for the neutrinos produced in the decays of higher generation weak bosons differs from 1:1:1.

    One example of charge matrices would be (3/2)^(1/2)×(0,1,-1) for the second generation and (2,-1,-1)/2^(1/2) for the third generation (see the small orthogonality check after this list). In this case electron neutrinos would be produced 2 times more often than muon and tau neutrinos altogether. The flavor ratio would be 0:1:1 for the second generation and 4:1:1 for the third generation in this particular case.

  3. This changes the predictions of the pion decay mechanism. The neutrino energies are above about 1.5 PeV, in the range defined by the spectrum of energies of the decaying weak bosons. If the bosons are nearly at rest, the energies peak around the rest mass of the third generation weak boson. The experiments detect neutrinos in the energy range [13 TeV, 7.9 PeV], with the energy of the neutrinos produced in the decays of third generation weak bosons in a range starting from 1.5 PeV and probably ending below 7.9 PeV. Therefore their experimental signature tends to be washed out if pion decays are responsible for the background.
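
A small check (plain Python) that the example charge matrices above are mutually orthogonal and reproduce the quoted flavor ratios, taking the neutrino fractions to be proportional to the squared diagonal couplings:

    import numpy as np

    g1 = np.array([1, 1, 1])                    # lowest generation: universality
    g2 = np.sqrt(3 / 2) * np.array([0, 1, -1])  # second generation example
    g3 = np.array([2, -1, -1]) / np.sqrt(2)     # third generation example

    # Mutual orthogonality of the three generation charge vectors.
    print(np.dot(g1, g2), np.dot(g1, g3), np.dot(g2, g3))  # all zero

    # Squared couplings reproduce the flavor ratios quoted in the text.
    print(g2 ** 2 / g2[1] ** 2)   # [0. 1. 1.] -> ratio 0:1:1
    print(g3 ** 2 / g3[1] ** 2)   # [4. 1. 1.] -> ratio 4:1:1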

These fractions are however not what is observed at Earth.
  1. Suppose that an L+νL pair is produced. It can also happen that L+, say μ+, travels in the direction of the Earth. It can decay to e+νeνμ. Therefore one obtains both νμ and νe. From the decay to τ+ντ one obtains all three neutrinos. If the fractions of the neutrinos from the generation charge matrix are (xe, xμ, xτ), the fractions travelling to Earth are proportional to

    xα ↔ Xα = (Xe, Xμ, Xτ) = (xe + xμ + xτ, xμ + xτ, xτ) ,

    and the flavor ratio in the decays would be

    Xe : Xμ : Xτ = (xe + xμ + xτ) : (xμ + xτ) : xτ .

    The decays to lower neutrino generations tend to increase the fraction of electron and muon neutrinos in the beam.

  2. Also neutrino oscillations due to the different masses of neutrinos (see this) affect the situation. The analog of the CKM matrix describing the mixing of the neutrinos, the mass squared differences, and the distance to Earth determine the oscillation dynamics.

    One can deduce the mixing probabilities from the analog of the Schrödinger equation by using the approximation E = p + m^2/2p, which is true for energies much larger than the rest mass of the neutrinos. The masses of the mass eigenstates, which are superpositions of the flavor eigenstates, are different.

    The leptonic analog of the CKM matrix Uαi (having in TGD an interpretation in terms of the different mixings of the topologies of the partonic 2-surfaces associated with the different charge states of the various lepton families) allows one to express the flavor eigenstates να as superpositions of the mass eigenstates νi. As a consequence, one obtains the probabilities that the flavor eigenstate να transforms to the flavor eigenstate νβ during the travel. In the recent case the distance is very large, and the dependence on the mass squared differences and the distance disappears in the averaging over the source region.

    The matrix Pαβ telling the transformation probabilities α→β is given in the Wikipedia article (see this) in the general case. It is easy to deduce the matrix at the limit of very long distances by taking an average over the source region to get expressions having no dependence on the distance or the mass squared differences:

    Pαβ = δαβ - 2 ∑i>j Re[Uβi U*αi Uαj U*βj] .

    Note that ∑β Pαβ = 1 holds true, since in the summation the second term vanishes due to the unitarity condition U†U = 1 and the i>j condition in the formula.

  3. The observed flavor fractions are Ye : Yμ : Yτ, where one has

    Yα = ∑β Pαβ Xβ .

    It is clear that if the generation charge matrix is of the above form, the fraction of electron neutrinos increases both in the decays of τ and μ and by this mechanism. Of course, the third generation could have a different charge matrix, say (3/2)^(1/2)×(0,1,-1). In this case the effects would tend to cancel. (A numeric sketch of the averaged formula is given after this list.)
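
The following is a minimal numeric sketch of the long-distance averaged formula: averaging kills the oscillating terms, leaving Pαβ = ∑i |Uαi|^2 |Uβi|^2, which is equivalent to the expression above. The toy mixing matrix is built from two rotations with illustrative placeholder angles, not fitted values.

    import numpy as np

    def averaged_P(U):
        """Long-baseline averaged oscillation probabilities:
        P[a, b] = sum_i |U[a, i]|^2 |U[b, i]|^2; rows sum to 1 by unitarity."""
        W = np.abs(U) ** 2
        return W @ W.T

    # Toy real mixing matrix from two rotations; the angles are placeholders.
    th12, th23 = 0.58, 0.79
    c12, s12 = np.cos(th12), np.sin(th12)
    c23, s23 = np.cos(th23), np.sin(th23)
    R12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]])
    R23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]])
    U = R23 @ R12

    P = averaged_P(U)
    print(P.sum(axis=1))         # each row sums to 1

    X = np.array([2, 1, 0]) / 3  # source fractions, the 2:1:0 ratio of the text
    print(P @ X)                 # flavor fractions Y observed at Earth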

See the article Topological description of family replication and evidence for higher gauge boson generations.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.


Wednesday, October 03, 2018

Atiyah, fine structure constant, and TGD based view about coupling constant evolution

Atiyah has recently proposed, besides a proof of the Riemann Hypothesis, also an argument claiming to derive the value of the fine structure constant (see this). The mathematically elegant arguments of Atiyah involve a lot of refined mathematics, including the notions of the Todd exponential and hyper-finite factors of type II (HFFs) assignable naturally to quaternions. The idea that 1/α could result by coupling constant evolution from π looks however rather weird to a physicist.

What makes this interesting from the TGD point of view is that in the TGD framework coupling constant evolution can be interpreted in terms of inclusions of HFFs with the included factor defining the measurement resolution. An alternative interpretation is in terms of the hierarchy of extensions of rationals with the coupling parameters determined by quantum criticality as algebraic numbers in the extension.

In the following I will explain what I understood about Atiyah's approach. My criticism includes the arguments represented also in the blogs of Lubos Motl (see this) and Sean Carroll (see this). I will also relate Atiyah's approach to the TGD view about coupling constant evolution. The hasty reader can skip this part, although for me it served as an inspiration forcing me to think more precisely about the TGD vision.

There are two TGD based formulations of scattering amplitudes.

  1. The first formulation is at the level of the infinite-dimensional "world of classical worlds" (WCW) and uses tools like the functional integral. The huge super-symplectic symmetries generalizing conformal symmetries raise hopes that this formulation exists mathematically and that it might even allow practical calculations some day. TGD would be an analog of an integrable QFT.

  2. The second - surprisingly simple - formulation is based on the analog of a micro-canonical ensemble in thermodynamics (quantum TGD can be seen as a complex square root of thermodynamics). It relates very closely to the TGD analogs of twistorialization and twistor amplitudes.

    During writing I realized that this formulation can be regarded as a generalization of the cognitive representations of space-time surfaces, based on algebraic discretization making sense for all extensions of rationals, to the level of scattering amplitudes. This formulation allows a continuation to the p-adic sectors and adelization, and adelizability is what leads to a concrete formula - something new - for the evolution of the Kähler coupling strength αK forced by the adelizability condition.

    The condition is childishly simple: the exponent of the complex action S (more general than that of the Kähler function) equals unity, exp(S) = 1, and is thus common to all number fields. This condition allows one to avoid the grave mathematical difficulties caused by the requirement that exp(S) exists as a number in the extension of rationals considered. A second necessary condition is the reduction of the twistorial scattering amplitudes to tree diagrams, implied by quantum criticality.

  3. One can also understand the relationship of the two formulations in terms of M8-H duality. This view allows one also to answer a longstanding question concerning the interpretation of the surprisingly successful p-adic mass calculations: as anticipated, the p-adic mass calculations are carried out for a cognitive representation rather than for real world particles, and the huge simplification explains their success for the preferred p-adic prime characterizing the particle as a so called ramified prime of the extension of rationals defining the adeles.

  4. I consider also the relationship to a second TGD based formulation of coupling constant evolution in terms of inclusion hierarchies of hyper-finite factors of type II1 (HFFs). I suggest that this hierarchy is generalized so that the finite subgroups of SU(2) are replaced with the Galois groups associated with the extensions of rationals. An inclusion of HFFs in which the Galois group acts trivially on the elements of the HFFs appearing in the inclusion would be in question: a kind of Galois confinement.

See the article TGD view about coupling constant evolution or the chapter of "Towards M-matrix" with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.