Tuesday, October 30, 2018

Still about quark gluon plasma and M89 physics

QCD predicts that quark gluon plasma (QGP) is created in p-p, p-A, and A-A high energy collisions. Here p denotes proton and A heavy nucleus. In the first approximation the nuclei are expected to go through each other, but for high enough collision energies the kinetic energy of the incoming beams is expected to materialize to quarks and gluons giving rise to QGP. Various signatures of QGP such as high density, strangeness production, and the failure of quark jets to propagate have been observed.

Unexpected phenomena have also been observed: a very small shear viscosity to entropy ratio η/s, meaning that QGP behaves like an ideal liquid, and a double ridge structure, detected first in p-Pb collisions, implying long range correlations and suggesting emission of particles in opposite directions from a linear string like object. Also the predicted suppression of charmonium production seems to be absent for heavy nuclei.

I have earlier proposed an explanation in terms of the creation of dark pions (and possibly also heavier mesons) of M89 hadron physics with Planck constant heff=512× h. M89 pions would be flux tube like structures having mass 512 times that of the ordinary pion but having the same Compton length as the ordinary pion and being of the same size as heavy nuclei. The unexpected features of QGP, in particular long range correlations, would reflect quantum criticality. The double ridge structure would reflect the decay of dark mesons to ordinary hadrons. In this article this proposal is discussed in more detail.
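The scaling claim above is easy to check numerically: with heff = 512×h and mass 512 times the ordinary pion mass, the Compton length λ = ℏeff/(mc) is unchanged. A minimal sketch (the numerical values and variable names are mine, not from the text):

```python
# Check that a dark M89 pion with heff = 512*h and m = 512*m_pi
# has the same Compton length as the ordinary pion: lambda = hbar_eff/(m*c).
hbar = 1.054571817e-34   # J*s, reduced Planck constant
c = 2.99792458e8         # m/s, speed of light
m_pi = 2.488e-28         # kg, ordinary charged pion mass (~139.6 MeV)

n = 512
lambda_ordinary = hbar / (m_pi * c)
lambda_dark = (n * hbar) / (n * m_pi * c)  # the factors of n cancel

print(lambda_ordinary)   # ~1.41e-15 m, of the order of nuclear size
assert abs(lambda_dark - lambda_ordinary) < 1e-30
```

The cancellation is exact by construction; the point of the printout is that the common Compton length is of femtometer order, i.e. comparable to the size of a heavy nucleus, as the text states.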

See the article Still about quark gluon plasma and M89 physics or the chapter New Physics predicted by TGD: part I.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, October 23, 2018

Cosmological constant in TGD and in superstring models

Cosmological constant Λ is one of the biggest problems of modern physics.

  1. Einstein proposed a non-vanishing value of Λ in the Einstein action as a volume term in order to get what could be regarded as a static Universe. It turned out that the Universe expanded, and Einstein concluded that this proposal was the greatest blunder of his life. Two decades ago it was observed that the expansion of the Universe accelerates, and the cosmological constant emerged again. Λ must be extremely small and have the correct sign in order to give accelerating rather than decelerating expansion in Robertson-Walker coordinates. Here one must however notice that the time slicing used by Einstein was different, and for this slicing the Universe looked static.

  2. Λ can however be understood in an alternative sense as characterizing the dynamics in the matter sector. Λ could characterize the vacuum energy density of some scalar field, call it quintessence, with energy proportional to 3-volume in the quintessence scenario. This Λ would have sign opposite to that in the first scenario since it would appear at the opposite side of Einstein's equations.

Remark: This is an updated version of an earlier posting, which represented a slightly wrong view about the interpretation and evolution of the cosmological constant in the TGD framework. This was due to my laziness in checking the details of the earlier version, which is quite near to the version represented here.

Cosmological constant in string models and in TGD

It has turned out that Λ could be the final nail to the coffin of superstring theory.

  1. The most natural prediction of M-theory and superstring models is Λ in the Einsteinian sense but with wrong sign and huge value: for instance, in AdS/CFT correspondence this would be the case. There has however been a complex argument suggesting that one could have a cosmological constant with the correct sign and even a small enough size.

    This option however predicts a landscape and a loss of predictivity, which has led to a total turn of the philosophical coat: the original joy about discovering the unique theory of everything has changed to joy about the discovery that there are no laws of physics. A cynic would say that this is a lottery win for theoreticians since theory building reduces to mere artistic activity.

  2. Now however Cumrun Vafa - one of the leading superstring theorists - has proposed that the landscape actually does not exist at all (see this). Λ would have wrong sign in Einsteinian sense but the hope is that quintessence scenario might save the day. Λ should also decrease with time, which as such is not a catastrophe in quintessence scenario.

  3. Theorists D. Wrase et al have in turn published an article (see this) claiming that also Vafa's quintessential scenario fails: it would not be consistent with the Higgs mechanism. The conclusion suggesting itself is that, according to the no-laws-of-physics vision, something catastrophic has happened: string theory has made a prediction! Even worse, it is wrong.

    Remark: In the TGD framework Higgs is present as a particle, but p-adic thermodynamics rather than Higgs mechanism describes at least fermion massivation. The couplings of Higgs to fermions are naturally proportional to their masses, and the fermionic part of Higgs mechanism is seen only as a manner to reproduce the masses at the QFT limit.

  4. This has led to a new kind of string war: now inside the superstring hegemony, dividing it into two camps. An optimistic outsider dares to hope that this leads to a kind of auto-biopsy and that the gloomy period of superstring hegemony in theoretical physics, which has now lasted for 34 years, would finally be over.

String era need not be over even now! One could propose that both variants of Λ are present, are large, and compensate each other almost totally! First I took this as a mere nasty joke, but then I realized that TGD indeed suggests something analogous to this!

The picture in which Λeff parametrizes the total action of the dimensionally reduced 6-D twistor lift of Kähler action could indeed be interpreted formally as a sum of a genuine cosmological term identified as volume action and Kähler action identified as an analog of quintessence. This picture is summarized below.

The picture emerging from the twistor lift of TGD

Consider first the picture emerging from the twistor lift of TGD.

  1. Twistor lift of TGD leads, via the analog of dimensional reduction necessary for the induction of the 8-D generalization of twistor structure in M4× CP2, to a 4-D action determining space-time surfaces as its preferred extremals. Space-time surface as a preferred extremal defines a unique section of the induced twistor bundle. The dimensionally reduced Kähler action is a sum of two terms: Kähler action proportional to the inverse of the Kähler coupling strength and a volume term proportional to the cosmological constant Λ.

    Remark: The sign of the volume action is negative as the analog of the magnetic part of Maxwell action and opposite to the sign of the area action in string models.

    Kähler and volume actions should have opposite signs. At the M4 limit Kähler action is proportional to E2-B2 in Minkowskian regions and to -E2-B2 in Euclidian regions.

  2. Twistor lift forces the introduction of also the M4 Kähler form so that the twistor lift of Kähler action contains an M4 contribution and gives in dimensional reduction rise to M4 contributions to the 4-D Kähler action and volume term.

    It is of crucial importance that the Cartesian decomposition H=M4× CP2 allows the scale of the M4 contribution to the 6-D Kähler action to be different from the CP2 contribution. The size of the M4 contribution as compared to the CP2 contribution must be very small by the smallness of CP breaking (see this and this).

    For canonically imbedded M4 the action density vanishes. For string like objects the electric part of this action dominates and corresponding contribution to 4-D Kähler action of flux tube extremals is positive unlike the standard contribution so that an almost cancellation of the action is in principle possible.

  3. What about energy? One must consider both Minkowskian and Euclidian space-time regions and be very careful with the signs. Assume that Minkowskian and Euclidian regions have the same time orientation.

    1. Since a dimensionally reduced 6-D Kähler action is in question, the sign of the energy density is positive in Minkowskian space-time regions and of the form (E2+B2)/2. The volume energy density proportional to Λ is positive.

    2. In Euclidian regions the sign of g00 is negative and energy density is of form (E2-B2)/2 and is negative when magnetic field dominates. For string like objects the M4 contribution to Kähler action however gives a contribution in which the electric part of Kähler action dominates so that M4 and CP2 contributions to energy have opposite signs. One can even consider the possibility that energies cancel in a good approximation and that the total energy is parameterized by effective cosmological constant Λeff.

The identification of the observed value of the cosmological constant is not straightforward and I have considered several options without making their differences explicit even to myself. For the Einsteinian option the cosmological constant could correspond to the coefficient Λ of the volume term in analogy with Einstein's action. For what I call the quintessence option the cosmological constant Λeff would approximately parameterize the total action density or energy density.

  1. Cosmological constant - irrespective of whether it is identified as Λ or Λeff - is extremely small in recent cosmology. The natural looking assumption would be that as a coupling parameter Λ or Λeff depends on the p-adic length scale like 1/Lp2 and therefore decreases in the average sense as 1/a2, where a is cosmic time identified as light-cone proper time assignable to either tip of CD. This suggests the following rough vision.

    The increase of the thickness of magnetic flux tubes carrying monopole flux liberates energy and this energy can make possible the increase of the volume so that one obtains cosmic expansion. As the space-time surface expands, its cosmological constant is eventually reduced in a phase transition changing the p-adic length scale. This phase transition liberates volume energy and leads to an accelerated expansion. The space-time surface would expand by jerks in a stepwise manner. This process is analogous to breathing. This process would replace the continuous cosmic expansion of GRT. One application is the TGD variant of the Expanding Earth model explaining the Cambrian Explosion, which is a really weird event (see this).

    One can however raise a serious objection: since the volume term is part of 6-D Kähler action, the length scale evolution of Λ should be dictated by that for 1/αK and be very slow: therefore cosmological constant identified as Einsteinian Λ seems to be excluded.

  2. This leaves only Λeff option. Λeff would parameterize the value of the total action or energy of the space-time surface. Λeff would be analogous to the sum of Einsteinian and quintessential cosmological constants.

    The gradual reduction of Λeff could be interpreted in terms of the reduction of the total action or energy or both. The reduction of the total action would be by the cancellation of M4 and CP2 parts of Kähler action for string like objects. The reduction of the total energy would be by the cancellation of the contribution of Minkowskian and Euclidian regions. The p-adic length scale evolution of Λ would be slow and induced by that of αK.
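The length scale dependence assumed in item 1 above can be written compactly; here $L_p$ denotes the p-adic length scale and $a$ the light-cone proper time, as in the text (the notation is mine):

```latex
\Lambda \;\propto\; \frac{1}{L_p^{2}} , \qquad
\langle \Lambda \rangle \;\sim\; \frac{1}{a^{2}} ,
```

so that the cosmological constant would stay constant between the phase transitions changing the p-adic length scale and drop stepwise in each of them, the smooth 1/a² behavior holding only in the average sense.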

This picture still leaves the question whether one should assign Λeff to action or energy or both in which case the assignments should be equivalent and action and energy should be proportional to each other. This identification is however the most detailed one developed hitherto.

Second manner to increase 3-volume

Besides the increase of 3-volume of M4 projection, there is also a second manner to increase volume energy: many-sheetedness. The negative sign of Λ could in fact force many-sheetedness.

  1. Superconductors of type II (see this) provide a helpful analogy. In superconductors of type II the Meissner effect is not complete, unlike for those of type I. Above the critical value Hc,1 the external magnetic field penetrates as flux quanta, which can be cylindrical flux tubes and can also form complex, effectively 2-D thin 3-surfaces maximizing their area near the critical field Hc,2 for which the magnetic field penetrates the entire superconductor. The reason is that the surface energy for the boundary of the non-super-conducting and super-conducting phase is negative for superconductors of type II so that the area in question is maximized. Note that near criticality also volume energy is minimized.

  2. In TGD the negative volume energy associated with Λ is analogous to the surface energy in superconductors of type II. The thin 3-surfaces in superconductors could have similar 3-surface analogs in TGD since their volume is proportional to surface area - note that TGD Universe can be said to be quantum critical.

    This is not the only possibility. The sheets of many-sheeted space-time having overlapping M4 projections provide a second mechanism. The emergence of many-sheetedness could also be caused by the increase of n=heff/h0 as the number of sheets of the Galois covering.

  3. Could the 3-volume increase during deterministic classical time evolution? If the minimal surface property assumed for the preferred extremals as a realization of quantum criticality is true everywhere, the conservation of volume energy prevents the increase of the volume. Minimal surface property is however assumed to fail at a discrete set of points due to the transfer of conserved charges between Kähler and volume degrees of freedom. Could this make possible the increase of volume during classical time evolution so that volume and Kähler energy could increase?

  4. ZEO allows the increase of average 3-volume by quantum jumps. There is no reason why each "big" state function reduction changing the roles of the light-like boundaries of CD could not decrease the average volume energy of space-time surface for the time evolutions in the superposition. This can occur in all scales, and could be achieved also by the increase of heff/h0=n.

  5. The geometry of CD suggests strongly an analogy with Big Bang followed by Big Crunch. The increase of the volume as increase of the volume of M4 projection does not however seem to be consistent with Big Crunch. One must be very cautious here. The point is that the size of CD itself increases during the sequence of small state function reductions leaving the members of state pairs at the passive boundary of CD unaffected. The size of the 3-surface at the active boundary of CD therefore increases, as does its 3-volume.

    The increase of the volume during the Big Crunch period could be also due to the emergence of the many-sheetedness, in particular due to the increase of the value of n for space-time sheets for sub-CDs. In this case, this period could be seen as a transition to quantum criticality accompanied by an emergence of complexity.

  6. In type II superconductivity magnetic energy and negative surface energy for flux quanta compete. Now one has Kähler magnetic energy and negative volume energy. By energy conservation they compete. Could this analogy be helpful in TGD? Could the penetration of Kähler magnetic flux tubes to a system give rise to generation of space-time sheets and perhaps increase of n in order to reduce total energy?
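The type II behavior invoked in the analogy above has a standard quantitative criterion: in Ginzburg-Landau theory the surface energy of a normal/superconducting boundary is negative, and flux penetrates as quanta, when κ = λ/ξ exceeds 1/√2. A small illustration (the material values are rough literature numbers, not from the text):

```python
import math

def superconductor_type(penetration_depth_nm, coherence_length_nm):
    """Ginzburg-Landau classification via kappa = lambda/xi.
    Surface energy of the phase boundary is negative (type II,
    flux penetrates as quanta) when kappa > 1/sqrt(2)."""
    kappa = penetration_depth_nm / coherence_length_nm
    return "type II" if kappa > 1 / math.sqrt(2) else "type I"

print(superconductor_type(39, 38))      # niobium-like values: type II
print(superconductor_type(16, 1600))    # aluminium-like values: type I
```

The sign of the boundary energy, not the mere presence of magnetic energy, is what drives the area maximization described in item 1 above.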

See the article TGD View about Coupling Constant Evolution or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, October 22, 2018

Brief summary of TGD

Topological Geometrodynamics (TGD) is an attempt to unify fundamental interactions by assuming that physical space-times can be regarded as sub-manifolds of certain 8-dimensional space H, which is the product H= M4× CP2 of Minkowski space and 4-dimensional complex projective space CP2. One could end up with TGD as a generalization of string model obtained by replacing 1-dimensional strings with 3-dimensional light-like surfaces, or as an attempt to construct a Poincare invariant theory of gravitation (Poincare group acts in the imbedding space rather than on the space-time surface). One outcome is the notion of many-sheeted space-time involving notions like topological field quantization and field body. The huge conformal symmetries of the theory are essentially due to the light-likeness of 3-surfaces. The empirically motivated generalization of quantum theory by introducing a hierarchy of Planck constants forces a generalization of the imbedding space to a book like structure whose different pages correspond to macroscopic quantum phases which behave relative to each other like dark matter.

Quantum TGD relies on two parallel approaches.

  1. The first approach starts from a generalization of Einstein's program proposing geometrization of classical physics so that entire quantum physics is geometrized in terms of the Kähler geometry of "World of Classical Worlds" (WCW) consisting of space-time surfaces which are preferred extremals of the action principle involved. Classical physics becomes exact part of quantum theory. The Kähler geometry of infinite-D WCW and thus physics is unique from its mere existence (this was shown for loop spaces by Dan Freed).

    Generalization of the 4-D twistor approach to its 8-D variant based on the lift of Kähler action exists only for H=M4× CP2, forced also by standard model symmetries, so that TGD becomes completely unique.

  2. The second approach is number theoretic. The starting point was the amazing success of p-adic mass calculations. The interpretation is that p-adic number fields and p-adic physics provide the physical (in a generalized sense) correlates of cognition and imagination. This leads to what I call adelic physics fusing real number based "ordinary" physics for matter and various p-adic physics for cognition to a single coherent whole. Extensions of rationals induce an infinite hierarchy of extensions of p-adic number fields and therefore also of adeles.

    This hierarchy can be seen as an evolutionary hierarchy for cognition. The higher the dimension of extension, the higher the complexity and the higher the "IQ". Cognitive representation is the basic notion. At space-time level it consists of points common to real space-time surfaces and various p-adic space-time surfaces: the common points have preferred imbedding space coordinates in the extension of rationals so that they make sense in all number fields. This generalizes to the level of WCW: the discrete set of points serves as coordinates of a WCW point.

    This also leads to a hierarchy of cognitive representations of scattering amplitudes. Cognitive representation serves as an approximation of the actual scattering amplitude and there is a hierarchy of improving approximations. One concrete prediction is the hierarchy heff/h0=n of effective values of Planck constant assumed to label phases of ordinary matter behaving like dark matter. The number theoretic interpretation of n is as the dimension of extension of rationals.

    Also quaternions and octonions play a key role in TGD. So-called M8-H duality maps algebraic associative surfaces in complexified octonionic M8 to space-time surfaces in H appearing as preferred extremals of the action principle. The identification of preferred extremals as minimal surfaces apart from a discrete set of points is very promising since minimal surface property extremizes also Kähler action and the dynamics involves no couplings, as quantum criticality requires. This would reduce TGD to octonionic algebraic geometry at the level of M8, allowing one to understand the hierarchy of Planck constants geometrically.

TGD forces one to give up the naive length scale reductionism characterizing competing theories, in particular string models, and to replace it with fractality: this has far-reaching implications such as the predicted existence of scaled variants of electroweak and hadronic physics. The notion of many-sheeted space-time, p-adic length scale hypothesis, and the identification of dark matter as heff=n× h phases of ordinary matter are corner stones of this picture.

TGD inspired theory of consciousness and quantum biology are also essential parts of TGD.

  1. Around 1995 I started to work with what I call TGD inspired theory of consciousness. It can be seen as a generalization of quantum measurement theory so that the observer becomes part of the physical system rather than remaining an outsider as in the usual approaches.

    A generalization of the standard ontology to what I call zero energy ontology (ZEO) solves the basic problem of quantum measurement theory due to the conflict between determinism of unitary evolution and non-determinism of state function reduction, and leads to a new view about the relationship between subjective time and geometric time: there are two causalities, the causality of free will and that of field equations and they are consistent with each other in ZEO.

  2. Also quantum biology as seen from the TGD perspective became an application of TGD. An essential role is played by the notions of many-sheeted space-time and field body, emerging naturally from the identification of space-time as a 4-surface. The effects due to many-sheetedness are not seen at the QFT limit but play a pivotal role in living matter. One can assign to any system a field identity - field body, in particular magnetic body. This completes the pair bio-system--environment to a triplet field-body--bio-system--environment. Magnetic body can be said to act as an intentional agent using the biological body as a sensory receptor and motor instrument. For instance, EEG would serve this purpose.

Material about TGD

The articles at following address give summary about basic ideas of TGD and TGD inspired theory of consciousness and quantum biology.

  • "Why TGD and What TGD is?": see this.


  • "Can one apply Occam’s razor as a general purpose debunking argument to TGD?": see this.

  • "Getting philosophical: some comments about the problems of physics, neuroscience, and biology": see this.

  • "TGD": see this.

There are several sources about TGD.

  • Besides thesis there are three published books about TGD: see this, this, and this.

  • Homepage contains both online books (17) and articles related to TGD: see this.

  • I have published most of the articles as versions in the journals published by Huping Hu: see this, this, and this.

    The links to up-to-date versions of the articles can be found at my homepage: see this.

  • TGD can be found also in Research Gate: see this.

  • "TGD diary" is a blog telling about progress in TGD: see this. For a list of blog postings arranged according to topic see "The latest progress in TGD": see this.

  • My FB timeline contains (mostly) links to a progress in TGD: see this.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, October 16, 2018

Anomalously strong 21-cm absorption line of hydrogen in cosmology as indication for TGD based view about dark matter

The so-called 21-cm anomaly, meaning that there is unexpected absorption of this line, could be due to the transfer of energy from gas to dark matter leading to a cooling of the gas. This requires em interaction of the ordinary matter with dark matter, but the allowed value of electric charge must be much smaller than elementary particle charges. In the TGD Universe the interaction would be mediated by an ordinary photon transforming to a dark photon having effective value heff/h0=n larger than the standard value, implying that the em charge of the dark matter particle is effectively reduced. Interaction vertices would involve only particles with the same value of heff/h0=n.

In this article a simple model for the mixing of the ordinary photon and its dark variants is proposed. Due to the transformations between different values of heff/h0=n during propagation, mass squared eigenstates are mixtures of photons with various values of n. An analog of the CKM matrix describing the mixing is proposed. Also the model for neutrino oscillations is generalized so that it applies not only to photons but to all elementary particles. The condition that the "ordinary" photon is essentially massless during propagation forces one to assume that during propagation the photon is a mixture of ordinary and dark photons, which would both be massive in the absence of mixing. A reduction to an ordinary photon would take place in the interaction vertices and therefore also in absorption. The mixing provides a new contribution to particle mass besides that coming from p-adic thermodynamics and from the Kähler magnetic fields assignable to the string like object associated with the particle.
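The generalized mixing model itself is developed in the article; as a point of comparison only, the standard two-flavor oscillation formula that it generalizes gives the survival probability P = 1 − sin²(2θ) sin²(1.267 Δm²[eV²] L[km]/E[GeV]). A toy sketch with two states (parameter values are illustrative, not from the text):

```python
import math

def survival_probability(theta, dm2_eV2, L_km, E_GeV):
    """Standard two-state survival probability; the factor 1.267
    converts eV^2 * km / GeV into the dimensionless oscillation phase."""
    phase = 1.267 * dm2_eV2 * L_km / E_GeV
    return 1.0 - math.sin(2.0 * theta) ** 2 * math.sin(phase) ** 2

# No mixing: the "ordinary" state propagates unchanged.
print(survival_probability(0.0, 2.5e-3, 500.0, 1.0))   # 1.0
# Maximal mixing near the first oscillation maximum: almost full conversion.
print(survival_probability(math.pi / 4, 2.5e-3, 495.9, 1.0))
```

The article's multi-n case would replace the single angle θ by a unitary CKM-like matrix over the tower of heff/h0=n states, but the two-state formula already shows the key point: a state produced as "ordinary" is massless only to the extent that mixing and mass splittings are small.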

See the article The analogs of CKM mixing and neutrino oscillations for particle and its dark variants or the chapter
Quantum criticality and dark matter.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, October 15, 2018

Increase of the dimension of extension of rationals as the emergence of a reflective level of consciousness

In the TGD framework the hierarchy of extensions of rationals defines a hierarchy of adeles and an evolutionary hierarchy.
What could be the interpretation of the events in which the dimension of the extension of rationals increases? A Galois extension is an extension of an extension with relative Galois group Gal(rel)= Gal(new)/Gal(old). Here Gal(old) is a normal subgroup of Gal(new). A highly attractive possibility is that evolutionary sequences quite generally (not only in biology) correspond to this kind of sequences of Galois extensions. The relative Galois groups in the sequence would be analogous to conserved genes, and genes could indeed correspond to Galois groups (see this). To my best understanding this corresponds to a situation in which the new polynomial Pm+n defining the new extension is a polynomial Pm having as argument the old polynomial Pn(x): Pm+n(x)=Pm(Pn(x)).
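The composition rule Pm+n(x)=Pm(Pn(x)) quoted above has a simple bookkeeping consequence: the degree of the composite polynomial, and hence the dimension of the extension it defines when it is irreducible, is the product of the two degrees. A minimal sketch with plain coefficient lists (function names and the example polynomials are mine):

```python
def poly_mul(a, b):
    """Multiply coefficient lists [a0, a1, ...] (lowest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def poly_compose(p, q):
    """Return coefficients of p(q(x)) via Horner's scheme."""
    result = [p[-1]]
    for coeff in reversed(p[:-1]):
        result = poly_mul(result, q)
        result[0] += coeff
    return result

p = [1, 0, 1]       # P_m(x) = x^2 + 1, degree 2
q = [0, 1, 0, 1]    # P_n(x) = x^3 + x, degree 3
pq = poly_compose(p, q)
print(pq)                                            # [1, 0, 1, 0, 2, 0, 1]
assert len(pq) - 1 == (len(p) - 1) * (len(q) - 1)    # deg = 2 * 3 = 6
```

Note that the additive index in the text's Pm+n notation labels the step in the sequence; the degrees themselves multiply under composition, which is what makes each step an extension of the previous extension.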

What about the interpretation at the level of conscious experience? A possible interpretation is that the quantum jump leading to an extension of an extension corresponds to an emergence of a reflective level of consciousness giving rise to a conscious experience about experience. The abstraction level of the system becomes higher as is natural since number theoretic evolution as an increase of algebraic complexity is in question.

This picture could have a counterpart also in terms of the hierarchy of inclusions of hyperfinite factors of type II1 (HFFs). The included factor M and including factor N would correspond to extensions of rationals labelled by Galois groups Gal(M) and Gal(N) having Gal(M)⊂ Gal(N) as a normal subgroup so that the factor group Gal(N)/Gal(M) would be the relative Galois group for the larger extension as an extension of the smaller extension. I have indeed proposed (see this) that the inclusions for which included and including factor consist of operators which are invariant under a discrete subgroup of SU(2) generalize so that all Galois groups are possible. One would have Galois confinement analogous to color confinement: the operators generating physical states could have Galois quantum numbers but the physical states would be Galois singlets.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

See the article Re-examination of the basic notions of TGD inspired theory of consciousness or the article Does M8-H duality reduce classical TGD to octonionic algebraic geometry?.

Thursday, October 11, 2018

Learning by conditioning and learning by discovery

I had an "entertaining" discussion with two fellows - I call them A and B - which taught me a lot, I hope also A and B, and actually gave a good example of two kinds of learning: learning by conditioning and learning by discovery. It also led to a possible understanding of what goes wrong in what I would call the ultra-skeptic cognitive syndrome.

[This discussion by the way gave me good laughs. Of A and B, the first summarized his academic background by "studied strings" and the second was a Bachelor in computer science but pretending to be an M-theorist. They tried to demonstrate that I am a crackpot. They carried out an "investigation" following the principles of investigations made on witch candidates in the Middle Ages. The victim had two options: she drowns, or she does not, in which case she is burned at the stake.]

The highly emotional discussion was initiated by a totally nonsense hype about transferring the consciousness of C. elegans to a computer program (see this). I told that the news was hype and this raised the rage of A and B. The following considerations have very little to do with this article. Note however that I have done some work on AI in general and even with the basic ideas of deep learning. For instance, we had two years ago a collaboration about AI, the IIT approach to consciousness, and a possible connection with remote mental interactions together with Lian Sidorov and Ben Goertzel, who is behind the Sophia robot. There are two chapters related to this (see this and this). I think that the latter chapter is published in a book by Goertzel. There is also a critical article inspired by the Sophia robot, about which Ben Goertzel wrote an enthusiastic article and sent it to Lian Sidorov and me (this).

1. Learning by conditioning

Returning to learning. The first kind of learning is learning by conditioning, which deep learning algorithms try to mechanize. The second kind of learning is learning by discovery. The latter is impossible for computers because they obey a deterministic algorithm and are unable to do anything creative.

Emotions play a strong role in learning by conditioning in the case of living systems, and in its simplest form it is learning of associations of type X-good and X-bad helping C. elegans to survive in the cruel world. In the case of humans this kind of associations can be extremely dangerous, as for instance the course of events in the USA has shown.

A very large part of our learning is just forming of associations: this is what Pavlov's dogs did. In school we learn to associate the symbol "6" to "2×3=". In our youth we also learned the algorithms for addition, subtraction, multiplication, and division, and even for finding the roots of a second order polynomial. Often this is called learning of mathematics. Later some mathematically gifted ones however discovered that this is just simple conditioning of an algorithm, and has very little to do with genuine mathematical thinking. The discovery of the algorithm itself would be mathematical thinking. The skill to code an algorithm - usually given - is also an algorithm and it can also be coded in AI.

If we are good enough in getting conditioned we get a studentship in a University and learn science. This involves also learning of simple conditionings of type X-good and X-bad. In this learning social feedback from others reinforces learning: who would not like to earn the respect of others!

For X-bad conditionings X can be homeopathy, water memory, cold fusion, telepathy, remote viewing, non-reductionistic/non-physicalistic world view, quantum theories of consciousness, TOEs other than M-theory, etc... For X-good conditionings X can be physicalism, reductionism, strong AI, superstrings, Witten, etc...

The student also learns to utter simple sentences demonstrating that he has learned the desired conditionings. This is important for a career. Proud parents who hear the baby say their first word encourage the child. In the same manner the environment reinforces the learning of "correct" opinions by positive feedback. The discussion with A and B gave quite a collection of these simple sentences. "I guessed that he is a crank" from A is a good example intended to express the long life experience and wisdom of the youngster.

These conditionings make it also easy to "recognize" whether someone is a crank/crackpot/etc... and even to carry out personal investigations - analogous to witchcraft investigations in the Middle Ages - of whether someone is a crank or not. This is what A and B in their young and foolish arrogance indeed decided to carry out.

2. Learning by Eureka experience

There is also a second kind of learning: learning by discovery. Computers are not able to do this. I mentioned in the discussion what happens when you look at a certain kind of image consisting of mere random-looking spots in a plane. After enough staring, a beautiful 3-D pattern suddenly emerges. This is a miracle-like phenomenon, a Eureka experience. The quantum consciousness based explanation is the emergence of quantum coherence at least in the scale of the neuronal cognitive representation in the visual cortex. A new 3-D mental image emerges from a purely 2-D one. One goes outside the context.

The increase of dimension might provide an important hint about what happens more generally: this is indeed what would occur for the dimension of the extension of rationals in the Eureka quantum jump in the TGD based model of what could happen. Physically this would correspond to an increase of the effective Planck constant heff = n×h0, h = 6×h0, assignable to the mental image created by the image. n is the dimension of the extension of rationals and would increase, and also the scale of quantum coherence would increase from that of a single spot to that of the entire picture.

This kind of learning by Eureka is probably very common for children: they are said to be geniuses. Later the increasing dominance of learning by conditioning often eliminates this mode of learning, and the worst outcome is a mainstream scientist who is a hard-nosed skeptic. Solving genuine problems is the way to gain these learning experiences, but they come only now and then. Some of them are really big: during my professional career there have been - I would guess - about 10 really big experiences of this kind, involving the discovery of a new principle or a totally new physical idea.

3. How to understand what is wrong with vulgar skeptics?

The discussion was very interesting since it led me to ponder why it is so hopeless to explain something extremely simple to skeptics. There is a beautiful connection with learning based on the Eureka experience. Physically this corresponds in TGD to a phase transition increasing the scale of quantum coherence and algebraic complexity: more technically, the effective Planck constant heff increases at some levels. More intelligent mental images become possible, and the Eureka experience happens as in the situation when a chaotic 2-D set of points becomes a beautiful 3-D object.

Biological evolution at the level of species is based on this: we humans are more intelligent than banana flies. This evolution occurs at all levels - also at the level of individuals, but it is not politically correct to say this aloud. Some of us are at a higher level in their intellectual evolution than others, either congenitally, by our own efforts, or both. This of course creates bitter feelings. Intellectual superiority irritates and induces hatred. This is why so many intellectuals spend most of their life in jail.

Take seeing as an example. If a person has become blind at adult age, he understands that he is blind and also what it feels like to see. A congenitally blind person also believes that he is blind: this is because most people in his environment tell him that it is possible to see and that he is blind. He does not however feel what it is like to see. Suppose now that most of us were blind and then someone came and told us that he sees. How many would believe him? They cannot feel what it is like to see. Very probably they would conclude that this fellow is a miserable crank.

Suppose now that a certain person - call him MP - has used 4 decades to develop a TOE based on a generalization of the superstring model, made 5 years before the first superstring revolution and explaining also consciousness. MP tries his best to explain his TOE to a couple of skeptics but finds it hopeless. They even arrange an "investigation" following the best traditions of witch hunts to demonstrate his crackpotness. And indeed, they conclude that they were correct: all that this person writes is totally incoherent nonsense, just like the 2-D set of random points.

These two young fellows are simply intellectually blind, since their personal hierarchy of Planck constants does not contain the required higher values. A Eureka experience would be required. MP could of course cheat and tell that he believes in superstrings and hint that he is a good friend of Witten. This would help but would only lead to pretended understanding. The fellows would take MP seriously only because MP agrees with Witten and claims to be a friend of Witten, but they would still not have the slightest idea what TGD is. They cannot feel what it is to understand TGD.

The only hope is personal intellectual evolution increasing the needed Planck constants in the personal hierarchy. This is possible only if these fellows admit that they are intellectually blind in some respects, but if they are young arrogant skeptics they furiously deny this and therefore also the possibility of personal intellectual evolution.

See the article Two manners to learn and what goes wrong with vulgar skeptics?.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Sunday, October 07, 2018

TGD view about ANITA anomalous events

I read an article (see this) telling about 2 anomalous cosmic ray events detected by the ANITA (Antarctic Impulsive Transient Antenna) collaboration. Also the ICECUBE collaboration has observed 3 events of this kind. What makes the events anomalous is that the cosmic ray shower emanates from the Earth: the standard model does not allow the generation of this kind of showers. The article proposes the super-partner of the tau lepton, known as the stau, as a possible solution to the puzzle.

Before continuing, it is good to summarize the basic differences between TGD and the standard model at the level of elementary particle physics. TGD differs from the standard model by three basic new elements: the p-adic length scale hypothesis predicting a fractal hierarchy of hadron physics and electroweak physics; the topological explanation of the family replication phenomenon; and the TGD view about dark matter.

  1. The p-adic length scale hypothesis states that Mersenne primes Mn = 2^n - 1 and Gaussian Mersennes MG,n give rise to scaled variants of the ordinary hadron and electroweak physics with mass scale proportional to 1/Mn^(1/2) ≈ 2^(-n/2).

    M127 would correspond to the electron and possibly also to what I have called lepto-hadron physics. Muon and nuclear physics would correspond to MG,113, and τ and hadron physics would correspond to M107. Electroweak gauge bosons would correspond to M89. nG = 73, 47, 29, 19, 11, 7, 5, 3, 2 would correspond to Gaussian Mersennes and n = 61, 31, 19, 17, 13, 7, 5, 3, 2 to ordinary Mersennes. There are four Gaussian Mersennes corresponding to nG ∈ {151, 157, 163, 167} in the biologically relevant length scale range 10 nm - 2.5 μm (from cell membrane thickness to nucleus size): this can be said to be a number theoretical miracle.

  2. The basic assumption is that the family replication phenomenon reduces to the topology of the partonic 2-surfaces serving as geometric correlates of particles. Orientable topology is characterized by genus - the number of handles attached to a sphere to obtain the topology. The 3 lowest genera are assumed to give rise to elementary particles. This would be due to the Z2 global conformal symmetry possible only for g=0,1,2. By this symmetry, a single handle behaves like a particle and two handles like a bound state of 2 particles. The sphere corresponds to a ground state without particles. For the higher genera, handles and handle pairs would behave like many-particle states with a mass continuum.

  3. The model of family replication is based on U(3) as a dynamical "generation color" acting as a combinatorial dynamical symmetry assignable to the 3 generations, so that fermions correspond to an SU(3) multiplet and gauge bosons to a U(3) octet, with the lowest generation associated with U(1). The Cartan algebra of U(2) would correspond to the two light generations with masses above the intermediate boson mass scale.

    The 3 "generation neutral" (g-neutral) weak bosons (Cartan algebra) are assigned with n=89 (ordinary weak bosons), nG=79, and nG=73, corresponding to mass scales m(79) = 2.6 TeV and m(73) = 20.8 TeV. I have earlier assigned the third generation with n=61. The reason is that the predicted mass scale is the same as for a bump detected at LHC, allowing an interpretation as a g-neutral weak boson with m(61) = 1.3 PeV.

    The 3+3 g-charged weak bosons could correspond to n=61 with m(61) = 1.3 PeV (or to the nG=73 boson with m(73) = 20.8 TeV) and to nG = 47, 29, 19 and n = 31, 19. The masses are m(47) = .16 EeV, m(31) = 256×m(47) = 40 EeV, m(29) = 80 EeV, m(19) = 256 EeV, m(17) = .5×10^3 EeV, and m(13) = 2×10^3 EeV. This corresponds to the upper limit for the energies of the cosmic rays detected at ANITA.

    In the TGD framework the most natural identification of the Planck length would be as the CP2 length R, which is about 10^3.5 times the Planck length as it is usually identified. Newton's constant would have a spectrum, and its ordinary value would correspond to G = R^2/ℏeff with ℏeff ∼ 10^7 ℏ. UHE cosmic rays would allow us to get information about physics near the Planck length scale in the TGD sense!

  4. TGD also predicts a hierarchy of Planck constants heff = n×h0, h = 6h0, labelling phases of ordinary matter identified as dark matter. The phases with different values of n are dark matter relative to each other, but phase transitions changing the value of n are possible. The hypothesis would realize quantum criticality with long length scale quantum fluctuations, and it follows from what I call adelic physics.

    n corresponds to the dimension of the extension of rationals defining one level in the hierarchy of adelic physics, defined by extensions of rationals inducing extensions of p-adic number fields serving as correlates for cognition in the TGD inspired theory of consciousness. p-Adic physics would provide extremely simple but information-rich cognitive representations of real number based physics, and understanding p-adic physics would be an easy manner to understand the real physics. This idea was inspired by the amazing success of the p-adic mass calculations, which initiated the progress leading to adelic physics.
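As a numerical check of the number theoretical miracle mentioned in item 1, the four biological p-adic length scales can be computed from the scaling L(n) ∝ 2^(n/2). This is a minimal sketch assuming the normalization L(151) ≃ 10 nm (cell membrane thickness), which is my rounding for illustration:

```python
# p-adic length scales L(n) = 2^((n-151)/2) * L(151),
# normalized to L(151) = 10 nm (cell membrane thickness).
L151_NM = 10.0

def length_scale_nm(n: int) -> float:
    """p-adic length scale in nanometers for prime exponent n."""
    return L151_NM * 2 ** ((n - 151) / 2)

for n in (151, 157, 163, 167):
    print(f"L({n}) = {length_scale_nm(n):7.1f} nm")
# L(167) = 2560 nm = 2.56 micrometers
```

The four Gaussian Mersenne scales come out as 10 nm, 80 nm, 640 nm, and 2.56 μm, spanning the cell membrane to cell nucleus range quoted above.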

It is natural to ask what TGD could say about the ANITA anomaly, which serves as very strong (5 sigma) evidence for new physics beyond the standard model. Consider first the basic empirical constraints on the model.
  1. According to the article, there are 2 anomalous events detected by the ANITA collaboration and 3 such events detected by the ICECUBE collaboration. For these events the cosmic ray shower comes from the Earth's interior. The standard model does not allow this kind of events, since the incoming particle - also a neutrino - would dissipate its energy and never reach the detector.

    This serves as a motivation for the SUSY inspired model of the article, proposing that the stau, the super-partner of the tau lepton, is created and could have so weak interactions with ordinary matter that it is able to propagate through the Earth. There must however be a sufficiently strong interaction to make the detection possible. The mass of the stau is restricted to the range .5-1.0 TeV by the constraints posed by LHC data on SUSY.

  2. The incoming cosmic rays associated with the anomalous events have energies around εcr = .5×10^18 eV. A reasonable assumption is that the rest system of the source is at rest with respect to the Earth within the energy resolution, which corresponds to a small fraction of the EeV scale. No astrophysical mechanism based on standard physics is known to produce higher energy cosmic rays of about 10^11 GeV, and here the p-adic hierarchy of hadron physics and electroweak physics suggests mechanisms.

In the TGD framework the natural question is whether the energy scale corresponds to some Mersenne or Gaussian Mersenne, so that the neutrino and the corresponding lepton could have been produced in the decay of a W boson labelled by this prime. By scaling of the weak boson mass scale, the Gaussian Mersenne MG,47 = (1+i)^47 - 1 would correspond to a weak boson mass scale m(47) = 2^((89-47)/2) × 80 GeV = .16 EeV. This mass scale is roughly a factor 1/3 below the energy scale of the incoming cosmic ray. This would require that the temperature of the source is at least 6×m(47) if the neutrino is produced in the decay of an MG,47 W boson. This option does not look attractive to me.
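The mass scale quoted above follows from the scaling rule m(n) = 2^((89-n)/2) × m(89). A quick numerical check, taking m(89) ≃ 80 GeV as the ordinary weak boson mass scale (the exact reference value is my rounding):

```python
# Scaled weak boson mass scales m(n) = 2^((89-n)/2) * m(89),
# with m(89) ~ 80 GeV (ordinary W boson mass scale).
M89_GEV = 80.0

def weak_mass_scale_gev(n: int) -> float:
    """Weak boson mass scale in GeV for (Gaussian) Mersenne exponent n."""
    return M89_GEV * 2 ** ((89 - n) / 2)

for n in (89, 79, 73, 61, 47):
    print(f"m({n}) = {weak_mass_scale_gev(n):.3e} GeV")
# m(79) ~ 2.6 TeV, m(73) ~ 20.5 TeV, m(61) ~ 1.3 PeV, m(47) ~ 0.17 EeV
```

With this rounding the scales reproduce the numbers used in the text: m(79) ≈ 2.6 TeV, m(61) ≈ 1.3 PeV, and m(47) ≈ .16 EeV.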

Could the cosmic rays be (possibly dark) protons of MG,47 hadron physics?

  1. The scaling of the ordinary proton mass mp(107) ≈ 1 GeV gives mp(47) = 2^((107-47)/2) GeV ≈ 1 EeV! This is encouraging! Darkness in the TGD sense could make it possible for them to propagate through matter. In the interactions with matter, neutrinos and leptons would be generated.

    The article tells that the energy εcr of the cosmic ray showers is εcr ∼ .6 EeV, roughly 60 per cent of the rest mass of the cosmic ray proton. I do not know how precise the determination of the energy of the shower is. The production of dark particles during the generation of the shower could explain the discrepancy.

  2. What could one say about the interactions of a dark MG,47 proton with ordinary matter? Does p(47) transform to an ordinary proton in a stepwise manner, as the Mersenne prime is gradually reduced, or in a single step? What is the rate for the transformation to an ordinary proton? The free path should be a considerable fraction of the Earth's radius by the argument of the article.

    The transformation to an ordinary proton would generate a shower containing also tau leptons and tau neutrinos, with pion decays producing muons and electrons and their neutrinos. Neutrino oscillations would produce tau neutrinos: the standard model predicts a flavor ratio of about 1:1:1.

  3. What could happen in the strong interactions of a dark proton with nuclei? Suppose that the dark proton is relativistic with Ep = x Mp = x EeV, x>1, say x ∼ 2. The total cm energy Ecm in the rest system of the ordinary proton is, for a relativistic (!) EeV dark proton + ordinary proton, about Ecm = (3/2)x^(1/2)(mp Mp)^(1/2) = x^(1/2) × 5 TeV, considerably above the rest energy mp(89) = 512 mp = .48 TeV of the M89 dark proton. The kinetic energy is transformed to the rest energy of the particles emanating from the collision of the dark and ordinary proton.

    If the collision takes place with a quark of the ordinary proton with mass mq = 5 MeV, Ecm is reduced by a factor 5^(1/2)×10^(-3/2), giving E = x^(1/2) × 1.3 TeV, which is still above the threshold for transforming the cosmic ray dark proton to an M89 dark proton.

    This suggests that the interactions first produce dark relativistic M89 protons, which in further interactions transform to ordinary protons producing the shower and the neutrinos. I proposed already more than two decades ago that strange cosmic ray events such as Centauros generate a hot spot involving M89 hadrons. At LHC quite a number of bumps with masses obtained by scaling from the masses of the mesons of the ordinary hadron physics have been observed. I have proposed that they are associated with quantum criticality assignable to a phase transition analogous to the generation of quark gluon plasma, and that they are dark in the TGD sense, having heff/h = 512, so that their Compton wavelengths are the same as for ordinary hadrons.

  4. The free path of a (possibly) dark MG,47 proton in ordinary matter should be a considerable fraction of the Earth's radius, since the process of tau regeneration based on standard physics cannot explain the findings. The interaction with ordinary matter, possibly involving the transformation of the dark proton to an ordinary one (or vice versa!), must be induced by the presence of ordinary matter rather than being spontaneous.

    Also the flux of cosmic ray protons at EeV energies must be high enough. It is known that UHE cosmic rays very probably are not gamma rays. Besides neutrinos, dark MG,47 protons would be a natural candidate for them.
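The proton mass scaling invoked in item 1 can be sketched the same way, assuming mp(107) ≃ 0.94 GeV for the ordinary proton (my choice of reference value):

```python
# Scaled proton masses m_p(n) = 2^((107-n)/2) * m_p(107),
# with m_p(107) ~ 0.94 GeV for the ordinary proton.
MP107_GEV = 0.94

def proton_mass_gev(n: int) -> float:
    """Proton mass in GeV for the hadron physics labelled by exponent n."""
    return MP107_GEV * 2 ** ((107 - n) / 2)

print(f"m_p(89) = {proton_mass_gev(89):.0f} GeV")   # 481 GeV ~ .48 TeV (M89 proton)
print(f"m_p(47) = {proton_mass_gev(47):.2e} GeV")   # ~1.0e9 GeV ~ 1 EeV (MG,47 proton)
```

Both numbers used above come out: the M89 dark proton at about .48 TeV and the MG,47 proton at about 1 EeV.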

See the article Topological description of family replication and evidence for higher gauge boson generations, the shorter article TGD based explanation of two new neutrino anomalies, or the chapter New Particle Physics Predicted by TGD: Part I.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Friday, October 05, 2018

New indications for the third generation weak bosons

There are indications (see this) that electron neutrinos are observed by ICECUBE more often than other neutrinos. In particular, there seems to be a deficit of τ neutrinos. The results are very preliminary. In any case, there seems to be an inconsistency between two methods of observing the neutrinos. The discrepancy seems to come from the higher energy end of the energy range [13 TeV, 7.9 PeV], from energies above 1 PeV.

The article "Invisible Neutrino Decay Could Resolve IceCube's Track and Cascade Tension" by Peter Denton and Irene Tamborra tries to explain this problem by assuming that τ and μ neutrinos can decay to a hypothetical particle called the majoron (see this).

The standard model for the production of neutrinos is based on the decays of pions producing e+νe and μ+νμ. Also, μ+ can travel in the direction of the Earth and decay to e+νeνμ, doubling the electron neutrino fraction. The flavor ratio would be 2:1:0.

Remark: The article (see this) claims that the flavor ratio is 1:2:0 in pion decays, which is wrong: the reason for the lapsus is left as an exercise for the reader.

Calculations taking into account also neutrino oscillations during the travel to Earth, to be discussed below, lead in good approximation to a predicted flavor ratio 1:1:1. The measurement teams suggest that the measurements are consistent with this flavor ratio.

There are however big uncertainties involved. For instance, the energy range is rather wide, [13 TeV, 7.9 PeV], and if the neutrinos are produced in the decay of a third generation weak boson with mass of about 1.5 PeV, as TGD predicts, the averaging can destroy the information about the branching fractions.

In the TGD based model (see this), third generation weak bosons - something new predicted by TGD - with mass around 1.5 PeV, corresponding to the mass scale assignable to Mersenne prime M61 (they can also have energies above this), would produce neutrinos in decays to antilepton-neutrino pairs.

  1. The mass scale predicted by TGD for the third generation weak bosons is correct: it would differ by a factor 2^((89-61)/2) = 2^14 from the ordinary weak boson mass scale. LHC gives evidence also for the second generation, corresponding to Mersenne prime M79: also in this case the mass scale comes out correctly. Note that ordinary weak bosons would correspond to M89.

  2. The charge matrices of the 3 generations must be orthogonal, and this breaks the universality of weak interactions. The lowest generation has a generation charge matrix proportional to (1,1,1) - the generation charge matrix describes the couplings to the different generations. The unit matrix codes for the universality of ordinary electroweak and also color interactions. For higher generations of electroweak bosons and also gluons, universality is lost, and the flavor ratio for the neutrinos produced in decays of higher generation weak bosons differs from 1:1:1.

    One example of charge matrices would be (3/2)^(1/2)×(0,1,-1) for the second generation and (2,-1,-1)/2^(1/2) for the third generation. In this case electron neutrinos would be produced 2 times more than muon and tau neutrinos altogether. The flavor ratio would be 0:1:1 for the second generation and 4:1:1 for the third generation in this particular case.

  3. This changes the predictions of the pion decay mechanism. The neutrino energies are above about 1.5 PeV, in the range defined by the energy spectrum of the decaying weak bosons. If the bosons are nearly at rest, the energies peak around the rest mass of the third generation weak boson. The experiments detect neutrinos in the energy range [13 TeV, 7.9 PeV], with the energies of the neutrinos produced in the decays of third generation weak bosons in a range starting from 1.5 PeV and probably ending below 7.9 PeV. Therefore their experimental signature tends to be washed out if pion decays are responsible for the background.
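The orthogonality of the generation charge matrices and the resulting flavor ratios can be verified directly: the branching fractions to the lepton generations are proportional to the squared entries. The specific matrices below are the illustrative examples quoted above, not unique predictions:

```python
import math

# Example generation charge matrices (diagonal elements) for the 3 boson generations,
# all normalized to the same norm as (1,1,1).
g1 = (1.0, 1.0, 1.0)                                  # lowest generation: universal
g2 = tuple(math.sqrt(3 / 2) * x for x in (0, 1, -1))  # second generation (example)
g3 = tuple(x / math.sqrt(2) for x in (2, -1, -1))     # third generation (example)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Orthogonality of the charge matrices breaks weak universality for higher generations.
for pair in ((g1, g2), (g1, g3), (g2, g3)):
    assert abs(dot(*pair)) < 1e-12

def flavor_weights(g):
    """Relative production rates of (nu_e, nu_mu, nu_tau) in the boson's decays."""
    return tuple(x * x for x in g)

print(flavor_weights(g1))   # proportional to 1:1:1
print(flavor_weights(g2))   # proportional to 0:1:1
print(flavor_weights(g3))   # proportional to 4:1:1
```

The third generation example indeed yields 2 times more electron neutrinos than muon and tau neutrinos together, i.e. the 4:1:1 ratio.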

These fractions are however not what is observed at Earth.
  1. Suppose that an L+νL pair is produced. It can also happen that L+, say μ+, travels in the direction of the Earth. It can decay to e+νμνe. Therefore one obtains both νμ and νe. From the decay to τ+ντ one obtains all three neutrinos. If the fractions of the neutrinos determined by the generation charge matrix are (xe, xμ, xτ), the fractions travelling to Earth are proportional to

    (Xe, Xμ, Xτ) = (xe + xμ + xτ, xμ + xτ, xτ)

    and the flavor ratio in the decays would be

    Xe : Xμ : Xτ = (xe + xμ + xτ) : (xμ + xτ) : xτ .

    The decays to lower neutrino generations tend to increase the fraction of electronic and muonic neutrinos in the beam.

  2. Also neutrino oscillations, due to the different masses of the neutrinos (see this), affect the situation. The analog of the CKM matrix describing the mixing of neutrinos, the mass squared differences, and the distance to Earth determine the oscillation dynamics.

    One can deduce the mixing probabilities from the analog of the Schrödinger equation by using the approximation E = p + m^2/2p, which is valid for energies much larger than the rest masses of the neutrinos. The masses of the mass eigenstates, which are superpositions of the flavour eigenstates, are different.

    The leptonic analog of the CKM matrix Uαi (having in TGD an interpretation in terms of the different mixings of the topologies of the partonic 2-surfaces associated with the different charge states of the various lepton families) allows one to express the flavor eigenstates να as superpositions of the mass eigenstates νi. As a consequence, one obtains the probabilities that the flavor eigenstate να transforms to the flavour eigenstate νβ during the travel. In the recent case the distance is very large, and the dependence on the mass squared differences and the distance disappears in the averaging over the source region.

    The matrix Pαβ giving the transformation probabilities α→β is given in the general case in the Wikipedia article (see this). It is easy to deduce the matrix in the limit of very long distances by taking an average over the source region, which gives expressions with no dependence on the mass squared differences or the distance:

    Pαβ = δαβ - 2 ∑i>j Re[Uβi U*αi Uαj U*βj] .

    Note that ∑β Pαβ = 1 holds true, since in the summation the second term vanishes due to the unitarity condition U†U = 1 and the i>j condition in the formula.

  3. The observed flavor fraction is Ye:Yμ:Yτ, where one has

    Yα = ∑β Pαβ Xβ .

    It is clear that if the generation charge matrix is of the above form, the fraction of electron neutrinos increases both by the decays of τ and μ and by this mechanism. Of course, the third generation could have a different charge matrix, say (3/2)^(1/2)×(0,1,-1). In this case the effects would tend to cancel.
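The two steps described above - the decay cascade Xα = (xe+xμ+xτ, xμ+xτ, xτ) and the distance-averaged oscillation, whose averaged formula is equivalent to Pαβ = ∑i |Uαi|²|Uβi|² - can be sketched numerically. The real tri-bimaximal matrix is used here only as a rough stand-in for the measured PMNS matrix, with the 4:1:1 third generation ratio as input:

```python
import math

# Tri-bimaximal mixing as a simple real-valued stand-in for the PMNS matrix U[alpha][i].
s2, s3, s6 = math.sqrt(2), math.sqrt(3), math.sqrt(6)
U = [[ 2 / s6, 1 / s3,  0.0   ],
     [-1 / s6, 1 / s3, -1 / s2],
     [-1 / s6, 1 / s3,  1 / s2]]

def cascade(x):
    """Decay cascade: weights (x_e, x_mu, x_tau) -> neutrino fractions leaving the source."""
    xe, xmu, xtau = x
    return (xe + xmu + xtau, xmu + xtau, xtau)

def averaged_oscillation(X):
    """Distance-averaged oscillation: Y_a = sum_b P_ab X_b, P_ab = sum_i |U_ai|^2 |U_bi|^2."""
    P = [[sum(U[a][i] ** 2 * U[b][i] ** 2 for i in range(3)) for b in range(3)]
         for a in range(3)]
    return tuple(sum(P[a][b] * X[b] for b in range(3)) for a in range(3))

X = cascade((4, 1, 1))       # third generation example: decay weights 4:1:1
print("after cascade:", X)   # (6, 2, 1)
print("at Earth:", averaged_oscillation(X))
```

With these inputs the electron neutrino fraction stays enhanced at Earth, (4.0, 2.5, 2.5) up to normalization, illustrating how a non-universal generation charge matrix could leave an imprint on the observed flavor ratio.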

See the article Topological description of family replication and evidence for higher gauge boson generations.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.


Wednesday, October 03, 2018

Atiyah, fine structure constant, and TGD view about coupling constant evolution

Atiyah has recently proposed, besides a proof of the Riemann Hypothesis, also an argument claiming to derive the value of the fine structure constant (see this). The mathematically elegant arguments of Atiyah involve a lot of refined mathematics, including the notions of the Todd exponential and hyper-finite factors of type II (HFFs) assignable naturally to quaternions. The idea that 1/α could result by coupling constant evolution from π looks however rather weird to a physicist.

What makes this interesting from the TGD point of view is that in the TGD framework coupling constant evolution can be interpreted in terms of inclusions of HFFs, with the included factor defining the measurement resolution. An alternative interpretation is in terms of a hierarchy of extensions of rationals, with the coupling parameters determined by quantum criticality as algebraic numbers in the extension.

In the following I explain what I understood about Atiyah's approach. My criticism includes the arguments represented also in the blogs of Lubos Motl (see this) and Sean Carroll (see this). I also relate Atiyah's approach to the TGD view about coupling constant evolution. The hasty reader can skip this part, although for me it served as an inspiration forcing me to think the TGD vision through more precisely.

There are two TGD based formulations of scattering amplitudes.

  1. The first formulation is at the level of the infinite-D "world of classical worlds" (WCW) and uses tools like the functional integral. The huge super-symplectic symmetries generalizing conformal symmetries raise hopes that this formulation exists mathematically and that it might even allow practical calculations some day. TGD would be an analog of an integrable QFT.

  2. The second - surprisingly simple - formulation is based on the analog of the micro-canonical ensemble in thermodynamics (quantum TGD can be seen as a complex square root of thermodynamics). It relates very closely to the TGD analogs of twistorialization and twistor amplitudes.

    During writing I realized that this formulation can be regarded as a generalization of the cognitive representations of space-time surfaces, based on an algebraic discretization making sense for all extensions of rationals, to the level of scattering amplitudes. This formulation allows a continuation to the p-adic sectors and adelization, and adelizability is what leads to the concrete formula - something new - for the evolution of the Kähler coupling strength αK forced by the adelizability condition.

    The condition is childishly simple: the exponent of the complex action S (more general than that of the Kähler function) equals unity, exp(S) = 1, and is thus common to all number fields. This condition allows one to avoid the grave mathematical difficulties caused by the requirement that exp(S) exists as a number in the extension of rationals considered. A second necessary condition is the reduction of the twistorial scattering amplitudes to tree diagrams, implied by quantum criticality.

  3. One can also understand the relationship of the two formulations in terms of M8-H duality. This view also allows one to answer a longstanding question concerning the interpretation of the surprisingly successful p-adic mass calculations: as anticipated, the p-adic mass calculations are carried out for a cognitive representation rather than for real world particles, and the huge simplification explains their success for the preferred p-adic prime characterizing the particle as a so-called ramified prime for the extension of rationals defining the adeles.

  4. I also consider the relationship to a second TGD based formulation of coupling constant evolution in terms of inclusion hierarchies of hyper-finite factors of type II1 (HFFs). I suggest that this hierarchy is generalized so that the finite subgroups of SU(2) are replaced with the Galois groups associated with the extensions of rationals. In question would be an inclusion of HFFs in which the Galois group acts trivially on the elements of the HFFs appearing in the inclusion: a kind of Galois confinement.

See the article TGD view about coupling constant evolution or the chapter of "Towards M-matrix" with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.