https://matpitka.blogspot.com/

Friday, July 04, 2025

Could M4 Kähler force have observable effects?

The M4 Kähler potential should be felt also by the covariantly constant right-handed neutrino, so that the right-handed neutrino would not completely decouple from gauge interactions. TGD predicts that both quarks and leptons, in particular right- and left-handed neutrinos, have an infinite number of color partial waves with CP2 mass scale. As found, there is a mechanism that neutralizes the color partial waves of leptons and gives rise to massless neutrinos, which become massive by p-adic thermodynamics. Covariantly constant right-handed neutrinos would already be color singlets and massless, so this mechanism would not be needed. There would however be a coupling to the induced M4 gauge potential. Could this coupling relate to the poorly understood massivation of neutrinos, which involves the mixing of right- and left-handed neutrinos?

The following simple model makes it possible to estimate the size of the effect of the M4 Kähler force on elementary fermions at the space-time level. The induced Dirac equation is assumed.

  1. Both nucleons and leptons create a classical induced M4 Kähler potential, which contributes to the U(1) part of the induced electroweak gauge potentials in the background space-time assignable to the nucleus.
  2. The gauge forces are felt at the light-like fermion lines at the 3-D light-like partonic orbits. A string world sheet connecting, say, an electron and a nucleon could mediate the interaction.
  3. Consider a 1-D light-like fermion line at the partonic orbit of a fermion. Idealize the fermion line as a light-like geodesic in M4 × S^1, where S^1 ⊂ CP2 is a geodesic circle. 8-D masslessness implies p^2 - R^2ω^2 = 0 (see this), where ω is expected to be of the order of the particle mass and characterizes the rotation velocity associated with S^1. A physically motivated guess is that ω is a geometric correlate for the Compton time of the fermion, so that the fermion can be said to have an internal clock.
  4. Consider the M4 and CP2 contributions to the Kähler potential. Denote by u the CP2 coordinate serving as a coordinate for the fermion line at the partonic orbit, the interface between the Euclidean region of CP2 type (identifiable as a wormhole contact connecting two Minkowskian space-time sheets) and the Minkowskian region. The CP2 part of the induced Kähler potential is of order A^{CP2}_u ∼ 1/R, where R is the CP2 radius. The M4 part of the induced Kähler potential is A^{M4}_k ∂m^k/∂u ∼ ω ∼ m. For the electron, the ratio of the two contributions is ωR ∼ m_e/m(CP2) ∼ 10^-17 and therefore extremely small. This guarantees that the induced M4 Kähler form has negligible effects.
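The order-of-magnitude estimate above is easy to reproduce numerically. The sketch below assumes a CP2 mass scale of about 10^-4 Planck masses (the value quoted later in these postings); the exact power of ten in the result depends on the convention chosen for m(CP2), so this is a rough check rather than a precise prediction:

```python
# Rough numerical sketch of the estimate omega*R ~ m_e/m(CP2).
# The CP2 mass scale below is an assumed value (~1e-4 Planck masses);
# a different convention shifts the result by a couple of orders of magnitude.
m_e = 0.511e-3              # electron mass in GeV
m_planck = 1.22e19          # Planck mass in GeV
m_cp2 = 1e-4 * m_planck     # assumed CP2 mass scale in GeV

ratio = m_e / m_cp2         # ratio of M4 to CP2 Kahler contributions
print(f"omega*R ~ {ratio:.0e}")  # extremely small in any convention
```

With these inputs the ratio is of order 10^-19; the 10^-17 quoted above corresponds to a somewhat different convention for m(CP2). Either way the M4 contribution is utterly negligible compared to the CP2 one.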
See the article About Dirac equation in H= M4 × CP2 assuming Kähler structure for M4 or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Calcium anomaly as evidence for a new boson with mass in the range 10 eV to 10 MeV

I learned of findings supporting the view that there is a new interaction implying that the energies of electrons depend on the neutron number of the atom in a way which is not explainable in the standard model (see this). A new interaction mediated by a scalar boson with mass in the range 10 eV to 10 MeV is proposed as an explanation of the findings. There are many other anomalies which a boson with a mass of ∼ 17 MeV could explain.

The following is the abstract of the article published in Physical Review Letters.

Nonlinearities in King plots (KP) of isotope shifts (IS) can reveal the existence of beyond-standard-model (BSM) interactions that couple electrons and neutrons. However, it is crucial to distinguish higher-order standard model (SM) effects from BSM physics. We measure the IS of the transitions ^3P_0 → ^3P_1 in Ca^14+ and ^2S_1/2 → ^2D_5/2 in Ca^+ with sub-Hz precision as well as the nuclear mass ratios with relative uncertainties below 4×10^-11 for the five stable, even isotopes of calcium (^{40,42,44,46,48}Ca).
Combined, these measurements yield a calcium KP nonlinearity with a significance of ∼ 10^3 σ. Precision calculations show that the nonlinearity cannot be fully accounted for by the expected largest higher-order SM effect, the second-order mass shift, and identify the little-studied nuclear polarization as the only remaining SM contribution that may be large enough to explain it. Despite the observed nonlinearity, we improve existing KP-based constraints on a hypothetical Yukawa interaction for most of the new boson masses between 10 eV/c^2 and 10^7 eV/c^2.

My understanding of what has been done is as follows.

  1. A nonlinear isotope shift (IS) in the KP was observed for the Ca isotopes A = 42, 44, 46, 48 relative to the isotope A = 40. Note that for the transition ^3P_0 → ^3P_1 in Ca^14+ the ionization is 14-fold, so that the electronic configuration is [He] 2s^2 2p^2.
  2. What is measured are the shifts δν^570_A and δν^729_A of the frequencies (equivalently, energies) between the initial and final electronic configurations for these two transitions as functions of A ∈ {40,42,44,46,48}. From these shifts the differences δν^570_{A,40} = δν^570_A - δν^570_40 and δν^729_{A,40} = δν^729_A - δν^729_40 with A ∈ {42,44,46,48} are deduced.
  3. If the effect of the neutron number on the electron energies equals that predicted by the standard model, δν^729_{A,40} should be a linear function of δν^570_{A,40}. In the graphical representation the shifts are replaced with X == δν^570_{A,40} - 452 [GHz amu] and Y == δν^729_{A,40} - 2327 [GHz amu]. This gives the King plot representing Y as a function of X.
  4. Fig. 1 of the article gives the shifts for the Ca isotopes A ∈ {42,44,46,48}, and the 4 boxes magnifying the graph for these values of A show a small nonlinearity: the red ellipses are not located on the blue vertical lines. For A = 42, 44, 48 the red ellipse is shifted to the right, but for A = 46 it is shifted to the left.
A Yukawa scalar with mass in the range 10 eV to 10^7 eV is proposed as an explanation. I am not able to conclude whether the scalar property is essential or whether a pseudoscalar is also possible. The coupling to the boson affects the binding energies of electrons so that they have an additional dependence on the neutron number. If the King plot were linear, the difference would be proportional to the neutron number and hence to A-40, and the slope of the curve would be 45 degrees. Note that from Table I the differences are in the range .1 meV to 1 meV, about δE/E ∼ 10^-4. One neutron pair corresponds to an energy difference of order .1 meV.
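The linearity test behind a King plot amounts to fitting a straight line to the modified shifts and inspecting the residuals. The sketch below uses made-up numbers for illustration, not the measured calcium data:

```python
# Illustrative King-plot linearity check with invented values.
import numpy as np

# Hypothetical modified shifts X (570 nm line) and Y (729 nm line) in GHz amu
# for the isotopes A = 42, 44, 46, 48.
X = np.array([1.00, 2.10, 3.05, 4.20])
Y = np.array([2.00, 4.18, 6.13, 8.35])

slope, intercept = np.polyfit(X, Y, 1)   # best-fit straight line
residuals = Y - (slope * X + intercept)  # deviations from linearity

# Residuals that are large compared to the measurement uncertainties would
# signal higher-order SM effects or a new electron-neutron interaction.
print(residuals)
```

In the actual experiment the analogous residuals deviate from zero at the ∼ 10^3 σ level, which is the reported nonlinearity.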

One can consider two options in the TGD framework.

  1. The upper bound for the boson mass is about 10 MeV, and this suggests the 17 MeV pseudoscalar, which could explain several earlier nuclear physics anomalies (see this and this) and for which I have proposed a TGD inspired model (see this). In particular, the X boson explains the Yb anomaly, for which a nonlinearity of the King plot was also observed. This anomaly was assigned to the deformations of nuclei caused by adding neutrons.
  2. In the TGD framework one can also consider a second option: the M4 Kähler force as a new interaction. The M4 Kähler potential contributes to the electroweak U(1) force if the total Kähler potential replaces the CP2 Kähler potential in the classical U(1) gauge potential. Could the M4 Kähler potential give a contribution of the required size to the neutron-electron interaction? I have discussed this contribution elsewhere (see this). A simple model shows that the effects are extremely small, so that the new interaction does not imply any obvious anomalies. At the level of the embedding space Dirac equation, the effects are however dramatic. The basic implication is that colored states of fermions have masses of order CP2 mass and only color singlets can be light. One implication is that the g-2 anomaly is real, since the calculation using hadronic data as input rather than lattice QCD gives the anomaly (see this).
See the article X boson as evidence for nuclear string model or the chapter Nuclear String Hypothesis.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, July 02, 2025

How to avoid Babelian confusion in theoretical physics?

Avril Emil wondered in the discussion group of The Finnish Society for Natural Philosophy how it is possible to deduce explanations from so different premises (see this). The discussion was related to the deepening crisis of cosmology caused by the findings of the James Webb Space Telescope (JWST), which suggest that the big cosmic narrative is entirely wrong: even the origin of the CMB is challenged by JWST. I glue my response below.

Principles are needed, a mathematician would talk about an axiomatic approach. Otherwise, the result is a confusion of languages at Babel.

  1. If we demand that the description of gravity and also of the other interactions be geometrized and that the classical conservation laws hold true as a consequence of Noether's theorem, we end up with H=M4×CP2 if we demand the symmetries of the standard model.
  2. If we also demand a number-theoretic description complementary to the geometric one (the equivalent of Langlands duality), we end up with M8-H duality, and the classical number systems become an essential part of the theory. M8 corresponds to octonions. The dynamics in M8 is also fixed by the quaternionicity/associativity requirement. The symmetries of the standard model correspond to the number-theoretic symmetries of octonions.
  3. If we demand a generalization of 2-D conformal symmetry to 4-dimensionality, we end up with holography= holomorphism vision. The dynamics of spacetime surfaces is unique everywhere except at singularities, regardless of the action principle, if it is general coordinate invariant and constructible using induced geometry. Spacetime surfaces are minimal surfaces (analogous to solutions of massless field equations) and the field equations reduce to purely algebraic local conditions. The theory is classically exactly solvable.
  4. One can claim that the theory is uniquely determined simply because it exists mathematically. The requirement for the existence of a twistor lift of the theory leads to H=M4×CP2. Only these two 4-spaces have twistor spaces with the Kähler structure needed to define the classical theory.

    The 4-dimensionality of spacetime surfaces follows in several ways: as an extension of conformal invariance from 2 to 4 dimensions and also from the requirement that, assuming free fermion fields in H, one obtains a vertex on the spacetime surface geometrically corresponding to the creation of a fermion pair. The requirement that a free theory gives interactions sounds impossible to implement, but a special feature of 4-D spacetimes is the exotic diff structures, which are standard diff structures with defects corresponding to vertices. The creation of a fermion pair intuitively corresponds to the turning of a fermion line back in time and the edge associated with this turning corresponds to the defect, the vertex.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Tuesday, July 01, 2025

Can one understand the transparency of the early Universe without a thorough revolution in cosmology?

There is a highly interesting post by Ethan Siegel in Big Think about the somewhat mysterious transparency of the early Universe to radiation (see this). The views are especially interesting also from the point of view of the TGD narrative about cosmology (see this and this).

If the primordial plasma is neutralized and forms atoms, the radiation from the very early Universe in the energy range of atomic transitions would not reach us. The neutralization process would have meant the generation of the cosmic microwave background (CMB), since the photons would have decoupled from thermal equilibrium.

This raises a question: How does the radiation from the early Universe reach us? The findings of JWST have made the problem even worse than it was before JWST.

One can imagine several explanations.

  1. Reionization took place by some mechanism. This is not a well-understood process. The formation of stars and galaxies should have made it possible somehow. Ions did not absorb, and propagation became possible.
  2. A more radical explanation of how JWST was able to observe very early galaxies, considered by Ethan Siegel in Big Think (see this), is that there was no dust consisting of neutral atoms, so that no absorption occurred! But what about the CMB? If there was no neutral matter, no CMB should have been generated in its formation!

    Very interestingly, the recent findings of JWST suggest that the CMB might have an origin different from what has been assumed in the standard cosmology. JWST has identified very early large galaxies and stars, whose formation is not understood in the standard cosmology. They could have generated dust and radiation in thermal equilibrium with it. The radiation would have decoupled from the thermal equilibrium and given rise to the observed CMB.

    This option forces us to completely reconsider the cosmological narrative before this event. Was there any primordial plasma? Was there any formation of hadrons from quarks? Were nuclei produced by cosmic nucleosynthesis? Were they able to form a considerable amount of atoms? Was there any need for reionization? Could the narrative of the standard cosmology be completely wrong?

What is the TGD based narrative? The newest piece in this story is the observation that the origin of the cosmic microwave background need not be what it is thought to be, that is, the decoupling of radiation from matter as ions formed neutral atoms (see this).

A very brief summary of the TGD view of cosmology (see this) is first in order.

  1. The primordial phase would have been dominated by cosmic strings, which are 4-surfaces with 2-D M4 and CP2 projections. This phase could have been at the Hagedorn temperature of the order of the CP2 mass, which defines the mass scale of the color partial waves of quarks and leptons.
  2. Galaxies and stars would have been produced as cosmic strings collided, thickened and liberated matter giving rise to the ordinary matter. This process would have served as the TGD counterpart of inflation and would have lasted much longer than 10^-32 seconds (see this and this). Therefore it might be better to talk about the TGD counterpart of eternal inflation. Cosmic string tangles decaying to ordinary matter would correspond to the bubbles of the inflationary scenario.
This view allows us to consider two basic options explaining the transparency.
  1. The formation of very early large galaxies and stars would have produced ions as solar wind. Was this enough to preserve the charged plasma state and prevent the neutralization leading to a loss of transparency, so that the radiation could propagate freely in this plasma?

    Note however that if there was no neutral matter (dust) present before this event, the Universe was transparent also before it.

  2. The second option is that the propagation took place along a network of monopole flux tubes as dark photons. The network would have acted like a communication network, making possible precisely targeted propagation without dissipative losses and without the 1/r^2 weakening of the signal associated with 3-D propagation.
See the article About the recent TGD based view concerning cosmology and astrophysics or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, June 30, 2025

Nothing new under the Sun, really? Comments on a video by Sabine Hossenfelder

Sabine Hossenfelder was ranting in the YouTube video What is "gravitic propulsion" and could the US government hide it? (see this). Sabine has an excellent sense of humour and she tries to get the facts right, but I don't like her aggressive attitude.

This time, one theme was various exotic crackpot theories such as antigravity, lacking mathematical foundations and physical motivation apart from some effects which need not correspond to genuine antigravity. I share Sabine's non-enthusiastic attitude here. Neither can I believe that existing scientific understanding would be hidden by the government, except when the hiding serves to maintain a technological competitive advantage.

The third theme was "Nothing new under the Sun". Particle physicists think like this, and have been thinking so for half a century, that is, the time during which theoretical particle physics has not produced anything genuinely new. Sabine, who usually is not very empathetic towards particle theorists, shares this belief. The justification is that if there were some new physics, it would have already been observed. The problem is that Sabine, as a good reductionist, believes that all new physics must emerge in particle physics.

A couple of counterexamples. In my youth, fractals and solitons were invented. Suddenly, they were seen everywhere. A century ago, theorists told lay people that classical physics allowed us to understand everything. Then atomic physics and quantum physics came along and everything changed. The basic lesson here is that we only see what we believe is possible to see.

The enormous perspective bias of particle physicists is born out of reductionism: they believe that particle physics is the fundamental level from which everything follows. For a while, they even believed that all physics emerges from the physics of awe-inspiring tiny strings at the Planck scale, but this led to a catastrophe: the theory was unable to predict anything, and string theory is now perceived as an embarrassing topic of conversation.

At least Sabine admits that solving the fundamental problem of quantum measurement theory requires new physics. Or actually not new: Sabine believes that the world is deterministic, as it was believed to be before the quantum revolution a century ago. Such a return to the past is radical but not very inspiring.

At this point I have nothing to lose and I can safely predict that TGD is the next revolution and in fact will continue where the previous revolution left off when general relativity and quantum theory solidified into dogmas. The great narrative will change dramatically without being in conflict with what we observe now with our recent instruments and armed with the reductionistic belief system.

The belief system of particle physicists, astrophysicists, and cosmologists will experience the same fate as the belief system of classical physics more than a hundred years ago. Fractality will replace reductionism, and already this changes everything. Even the standard model will experience a revolution in the color sector, leading to revolutions in hadron physics, nuclear physics, and even the physics of the Sun. At the level of observed particle physics the changes will be small. Cosmology and astrophysics are already in the grip of a revolution. Also biologists' and neuroscientists' belief systems share the same fate, as a TGD-based view of consciousness and life will inevitably replace them. This is because logically inconsistent belief systems cannot survive the fight for memetic survival.

See for instance the articles at About the structure of Dirac propagator in TGD, About Dirac equation in H=M4×CP2 assuming Kähler structure for M4, and Holography= holomorphy vision and a more precise view of partonic orbits.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Could inflation really explain the very early supermassive blackholes?

I visited Räsänen's blog (see this) and found a posting related to inflation theory.

The posting told about Jacopo Fumagalli's talk at the cosmology seminar of the Department of Physics of the University of Helsinki. The topic of the talk was the controversy between inflationary theorists that has lasted for years. I have already previously clarified for myself the analogies between inflationary theory and TGD (see this, this, and this) and I will try to clarify my thoughts again.

First, a short, slightly edited summary of inflationary theory as given by Google. Cosmic inflation, producing a huge bubble containing the observable universe, lasted for a cosmic time of about 10^-32 seconds, which is about 10^11 Planck times (one Planck time is about 10^-43 seconds). At the end of cosmic inflation, the "bubbles" that form are incredibly large, with the observable universe contained within a single bubble. The universe's size is estimated to have increased by a factor of 10^50 during inflation. This means that a region the size of a proton expanded to a size of 10^19 light-years. The huge expansion meant that all gradients disappeared, and this explains why the CMB temperature is constant to a precision of 10^-5.
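The numbers in this summary can be cross-checked with a few lines of arithmetic (standard values are assumed for the Planck time, the proton size and the light-year):

```python
# Cross-check of the quoted inflationary numbers (standard assumed values).
planck_time = 1e-43          # seconds
duration = 1e-32             # seconds, quoted duration of inflation
print(duration / planck_time)            # ~1e11 Planck times, as quoted

proton_size = 1e-15          # metres
expansion_factor = 1e50
metres_per_ly = 9.46e15      # metres per light-year
final_size_ly = proton_size * expansion_factor / metres_per_ly
print(f"{final_size_ly:.1e} ly")         # ~1e19 light-years, as quoted
```

So the duration, expansion factor and final bubble size quoted in the summary are mutually consistent.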

Here is a list of the basic assumptions.

  1. Cosmic inflation is characterized by an extremely rapid, exponential expansion of the very early universe.
  2. Inflation doesn't end everywhere at once. Instead, it ends in patches, or "bubbles," within the larger inflating space.
  3. The size of these bubbles at the end of inflation is enormous. According to the CMS Experiment, the universe went from the size of a proton to a vast expanse of 10^19 light-years. UCLA Astronomy notes that even with this expansion, the observable universe is still relatively small compared to the overall size of the bubble.
  4. The observable universe, which is what we can see and study, is just a tiny fraction of a single bubble.
  5. The theory of eternal inflation suggests that inflation continues forever throughout much of the universe, creating an infinite multiverse of these bubbles, each potentially with its own physical laws.
Within the framework of inflationary theory, attempts have also been made to understand dark matter by identifying it as primordial black holes (see this), whose upper mass limit from empirical facts is of the order of 10^-18 solar masses and corresponds to a Schwarzschild radius of about 10^-15 meters, the Compton wavelength of a proton. This idea does not conform with the assumption that the fluctuations of the mass density are Gaussian and approach zero at an exponential rate during inflation.

When JWST found evidence for supermassive black holes in the very early universe, it was natural to attempt to identify them as an outcome of inflation (see this). Now the blackhole masses can be of the order of the Milky Way mass, or about 1.5×10^12 solar masses. The Schwarzschild radius would be of the order of 4.5×10^12 km, which is about half a light-year (1 ly = 9.46×10^12 km).
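As a sanity check, both Schwarzschild radii discussed in this post follow from the standard formula r_s = 2GM/c^2; the constants and masses below are standard values, not taken from the article:

```python
# Schwarzschild radii for the two black hole masses mentioned in the text.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg
metres_per_ly = 9.46e15

def schwarzschild_radius(mass_in_solar_masses):
    """r_s = 2GM/c^2 in metres."""
    return 2 * G * mass_in_solar_masses * M_sun / c**2

r_primordial = schwarzschild_radius(1e-18)   # upper-bound primordial blackhole
r_milky_way = schwarzschild_radius(1.5e12)   # Milky-Way-mass blackhole

print(f"{r_primordial:.1e} m")                  # ~3e-15 m, proton Compton scale
print(f"{r_milky_way / metres_per_ly:.2f} ly")  # ~0.5 light-years
```

The small radius indeed comes out at the proton Compton scale, and the Milky-Way-mass radius at about half a light-year.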

Such a black hole would be created by a quantum fluctuation in the energy density of the inflaton field due to a curvature fluctuation. The spatial size scale of these curvature fluctuations would be about 10^30 times the size of the fluctuations producing the primordial blackholes proposed to explain dark matter.

But can we talk about quantum coherence and quantum fluctuations at this huge scale when even the theory of quantum gravitation is missing? It has been argued that fluctuations on a smaller scale destroy quantum coherence at much longer scales, which are truly enormous in relation to the size of primordial black holes (the size of a proton as the upper limit). Inflation theorists have argued about this for years, and according to Räsänen's posting, the view has now been reached that it does indeed work. The fact that these black hole-like objects seem to be real, certainly makes it easier to accept this view. The unpleasant alternative would be to abandon the entire inflation theory. In terms of career development this option is not attractive.

It is instructive to compare the inflation theoretic narrative with the TGD view. The TGD view of cosmic evolution relies on the new view of space-time: space-times are 4-D surfaces in H=M4× CP2. The holography= holomorphy principle makes it possible to reduce the field equations to completely local algebraic equations (see this), and also the Dirac equation for fermions in H=M4× CP2 can be solved exactly. If M4 is assumed to have a generalized Kähler structure, the field equations predict that colored fermions have masses of the order of the CP2 mass, which is of order 10^-4 Planck masses. Also a mechanism emerges which allows the construction of massless color singlets, which get their small masses from p-adic thermodynamics (see this).

The highly non-trivial prediction, in conflict with the QCD picture, is that massless (and light) quarks and gluons are impossible. This means that the model for the g-2 of the muon must be based on data about hadrons rather than lattice QCD, so that the g-2 anomaly is real (see this). TGD predicts new physics which might explain the anomaly. It is clear that this picture challenges the views about cosmic evolution.

  1. Also for TGD based cosmology the starting point could have been the approximate constancy of the CMB temperature. In inflationary cosmology the exponential expansion is believed to solve this problem.

    In TGD the solution is the possibility of quantum coherence in arbitrarily long scales. This follows from the hierarchy of effective Planck constants predicted by the number-theoretic vision of physics, complementary to the geometric vision. Exponential expansion is not needed in the TGD framework. The implications of this hierarchy are central also for the TGD view of consciousness and quantum biology.

  2. Zero energy ontology (ZEO) (see this) makes it possible to solve the basic mystery of quantum measurement theory without resorting to interpretations. ZEO predicts that in the ordinary state function reduction the arrow of time changes. This could have dramatic implications also for the evolution of galaxies. Living back and forth in geometric time could have given rise to a very rapid galactic evolution and could explain stars and galaxies seemingly older than the Universe.
  3. The TGD view of space-time predicts that cosmic strings, 4-D surfaces with a string world sheet as M4 projection and a complex 2-surface as CP2 projection, are possible. The very early cosmology would be cosmic string dominated. Einsteinian space-time with a 4-D M4 projection would have emerged rather early, maybe at the time when inflation would have ended.

    The cosmic strings could explain how it is possible to see the objects in the very early Universe. Cosmic strings/flux tubes would form an analog of a communication network along which photons with large heff behaving like dark photons would propagate in a precisely targeted way and without dissipation.

  4. The transition to the radiation dominated cosmology would have been due to the instability of the 2-dimensional M4 projection: the collisions of cosmic strings would have led to the liberation of their dark energy as ordinary matter as the cosmic strings thickened to monopole flux tubes and formed tangles along long cosmic strings, identifiable as galaxies. This does not require an exponential increase of the thickness of the cosmic strings. The magnetic fields of monopole flux tubes are possible without currents and could explain the stability of magnetic fields in cosmic scales and also that of the Earth's magnetic field. Importantly, gravitational condensation of ordinary matter would not have created galaxies and stars. The process would have proceeded from long to short scales and eventually generated ordinary matter.
  5. Eventually this process would have led to the formation of quasars, galaxies, and blackhole and whitehole like objects with an opposite arrow of time, which could be naturally associated with the galactic nuclei. These objects would be tangles of monopole flux tubes filling the entire volume rather than singularities of the theory. This would have occurred much later. Cosmic strings, the predecessors of the galaxies, would have been present already in the primordial cosmology.

    The recent finding that the origin of the CMB might relate to the rapid formation of the very early large galaxies forces us to reconsider the standard narrative about cosmic evolution. I have discussed this from the TGD perspective elsewhere (see this and this). TGD also challenges the standard view of QCD, hadron physics, nuclear physics and the physics of the Sun (see this and this).

See the article About the TGD counterpart of the inflationary cosmology or the chapter About the recent TGD based view concerning cosmology and astrophysics.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, June 28, 2025

Strange correlation between the strength of the Earth's magnetic field and the atmospheric oxygen content

Sabine Hossenfelder (see this) talked about the recently discovered strong correlation between the strength of the Earth's magnetic field and the O2 content of the atmosphere (see this).

This is an extremely interesting finding from the TGD point of view. TGD is a theory of everything but, unlike its competitors, it inspires a quantum theory of consciousness and of biology. The new view of magnetic fields, differing from the Maxwellian view, leads to the notion of a field body/magnetic body accompanying any system, also the Earth. It predicts that the biosphere very probably behaves quantum coherently at the level of the magnetic body in the scale of the Earth and even in longer scales.

In the TGD framework, space-time is identified as a 4-surface in H=M4× CP2. This leads to a new view of classical, in particular electromagnetic, fields differing in many respects from the Maxwellian view. One can assign a field body to any given physical system. Field bodies are carriers of phases of ordinary matter with a non-standard value heff of the effective Planck constant, which can have very large values, implying quantum coherence even in astrophysical scales. The field body is a key notion in the TGD based view of living matter.

The homology of CP2 is non-trivial and this implies the existence of closed monopole flux tubes carrying monopole magnetic flux although no monopole charges are predicted. The monopole magnetic fields need no current as a source and are therefore stable unlike ordinary magnetic fields which disappear when the current generating them dissipates. This explains the stability of the Earth's magnetic field (see this) and also the existence of magnetic fields in cosmological scales. The monopole magnetic fields are ideal from the point of view of biology (see this, this, this and this).

It is known that the changes in the magnetic field strength and its orientation have had strong effects on biology and consciousness. The local weakening of the magnetic field strength and Schumann resonance have been assigned also with disorders of societies by Callahan (see this and this). TGD suggests a model for the reversal of the magnetic field (see this and this).

About 500 million years ago the Cambrian Explosion occurred, meaning the mysterious appearance of highly evolved multicellular organisms. Where did they emerge from? In the TGD based view of cosmology (see this and this), cosmic expansion takes place as rapid jerks for astrophysical objects, including planets and the Earth. This suggests that a rapid increase of the radius of the Earth by a factor of 2 took place about 500 million years ago and led to a burst of highly evolved multicellular life from the underground oceans to the surface. Oceans were formed, and the photosynthesizing multicellular life forms oxygenated the oceans and also the atmosphere. There is indeed evidence for life in underground oceans able to perform photosynthesis, and TGD leads to an explanation of how this is possible (see for instance this and this).

As figure 1 of the article (see this) about the correlation between magnetic field strength and the O2 percentage of the atmosphere shows, both the oxygen percentage of the atmosphere and the strength of the magnetic field, characterized by the virtual geomagnetic axial dipole moment, started to grow about 500 My ago and reached a maximum about 300 My ago. The TGD view of the Cambrian Explosion suggests that the strengthening of the magnetic field, perhaps by the emergence of monopole flux tubes, implied a rapid evolution of the oxygen based life forms in the oceans created by the rapid expansion and caused the oxygenation of the atmosphere.

See the article Empirical support for the Expanding Earth hypothesis or the chapter Expanding Earth Hypothesis and Pre-Cambrian Earth.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Friday, June 27, 2025

Confession of a very stupid mistake

The most painful events in the life of a theorist are the sudden realizations that one made a stupid mistake a year or two ago and must start charting the consequences. How to correct them, and how to preserve some respectability? Should I edit my history after the error? Can anyone take seriously the mess that I have produced during almost half a century?

This time I noticed a really stupid mistake in my attempts to understand the generalized complex structure of M4 combining the complex structure for the Euclidian factor E2 and the hypercomplex structure for the Minkowskian factor M2. This generalizes to the Hamilton-Jacobi structure, which corresponds to an integrable local decomposition M2× E2.

  1. I had for some mysterious reason interpreted hypercomplex conjugation completely wrongly. The correct form is of course (u,v)→ (u,-v), as in the complex case, and it commutes with the arithmetic operations. Half of me had however interpreted hypercomplex conjugation as u↔ v, although the other half of me had of course understood that this swap corresponds to a multiplication by the hypercomplex unit e, e2=1 (by the imaginary unit for the ordinary complex numbers).

    This led to a wrong identification of u and v as a hypercomplex number h and its conjugate. The correct identification is h=(u,v) with conjugate h=(u,-v). The important implication is that hypercomplex analytic functions involving powers of (u,v) must be defined using hypercomplex arithmetics.

  2. This implies that the identification of the functions f=(f1,f2)(u,w,ξ1, ξ2) resp. f=(f1,f2)(v,w,ξ1, ξ2) as generalized analytic resp. antianalytic functions is wrong. The same applies to their conjugates in which v appears.
  3. This challenges the holography= holomorphy vision developed hitherto. The general solution ansatz is not lost, but the surfaces constructed as roots of f=(f1,f2)(u,w,ξ1, ξ2) are not 4-dimensional surfaces with v as a passive degree of freedom, invisible in the dynamics and therefore effectively 3-dimensional: they are genuinely 3-dimensional. The only sensible looking interpretation is in terms of 3-D surfaces serving as holographic data.

    The identification of these 3-surfaces as roots of (f1,f2) makes sense since, if h=(u,v) is real, h=(u,0), or imaginary, h=(0,v), the hypercomplex arithmetics reduces to ordinary arithmetics. The earlier solutions therefore describe only the 3-D holographic data, and the intersection of the 3-D roots of f=(f1,f2)(u,w,ξ1, ξ2) and its conjugate f=(f1,f2)(v,w,ξ1, ξ2) has an interpretation as a 2-D surface identifiable as a partonic 2-surface.

  4. This interpretation is consistent with ordinary complex analysis. One can construct ordinary complex analytic functions from real analytic functions defined by the data provided by their poles. These data serve as holographic data. The continuation need not be unique, and cuts can be chosen in several ways (consider only the function z1/2), but the real axis is number-theoretically preferred since complex arithmetics reduces to real arithmetics on it. In the recent situation both u and v are real, and this means that they both serve as analogs of the real axis. The different choices of the H-J structure would correspond to different continuations. One could interpret the partonic orbits as counterparts of the cuts of an analytic function, whereas partonic 2-surfaces might serve as analogs of poles.

    The physical interpretation is that the 3-D partonic orbits as bundles of v=0 and u=0 light-like curves intersect at the partonic 2-surface representing the vertex. At the vertex u=v=0, analogous to the tip of a light-cone, the branching has an interpretation in terms of a defect of the standard smooth structure, that is as an exotic smooth structure. The two branches emerging in the u and v directions correspond physically to the creation of a fermion-antifermion pair in classical fields having an interpretation as induced fields.
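The corrected rules are easy to check numerically. Here is a minimal sketch (my own Python illustration; the class and method names are not from the text) of hypercomplex arithmetic with e2=1, showing that conjugation is (u,v)→ (u,-v) and commutes with multiplication, whereas the swap u↔ v is multiplication by the unit e:

```python
# Minimal sketch of hypercomplex (split-complex) arithmetic: numbers u + v e
# with e^2 = +1. Illustrates that conjugation is (u,v) -> (u,-v), as in the
# complex case, and that the swap u <-> v is multiplication by e instead.
from dataclasses import dataclass

@dataclass(frozen=True)
class Hyper:
    u: float  # coefficient of 1
    v: float  # coefficient of e, with e^2 = +1

    def __add__(self, other):
        return Hyper(self.u + other.u, self.v + other.v)

    def __mul__(self, other):
        # (u1 + v1 e)(u2 + v2 e) = (u1 u2 + v1 v2) + (u1 v2 + v1 u2) e
        return Hyper(self.u * other.u + self.v * other.v,
                     self.u * other.v + self.v * other.u)

    def conj(self):
        # hypercomplex conjugation, analogous to complex conjugation
        return Hyper(self.u, -self.v)

e = Hyper(0.0, 1.0)
h1, h2 = Hyper(2.0, 3.0), Hyper(-1.0, 4.0)

assert e * e == Hyper(1.0, 0.0)            # e^2 = 1
assert e * h1 == Hyper(3.0, 2.0)           # u <-> v swap = multiplication by e
assert (h1 * h2).conj() == h1.conj() * h2.conj()  # conjugation commutes with *
```

The light-like directions are also visible as zero divisors: (1+e)(1-e)=0, in accordance with the special role of the u=0 and v=0 curves.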

The question is how to construct the full 4-D space-time surfaces determined by the generalized analytic functions. Wick rotation comes to the rescue here.
  1. The analytic functions of h=(u,v) can be defined by using hypercomplex arithmetics to define the powers of (u,v). The problem is how to define the products of hypercomplex numbers with the complex numbers appearing as arguments of the functions fi. This is not a problem for the restrictions to the 3-D holographic data.
  2. Wick rotation is an obvious guess for how to construct the space-time surfaces. The Wick rotation (u,v)→ x+iy = u+iv transforms f((u,v),w,ξ1, ξ2) to an analytic function of 4 complex coordinates. One can find the roots of f=(f1,f2) and map the roots to space-time surfaces by the inverse map x+iy→ (u,v).
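A schematic numerical illustration of this recipe (my own toy example, not the actual TGD construction): solve f=0 using ordinary complex analyticity in the rotated coordinate and map the roots back by the inverse rotation.

```python
# Toy version of the Wick-rotation recipe: find roots of an analytic function
# in the rotated complex coordinate z = u + iv, then read off (u, v) by the
# inverse map (x + iy) -> (u, v). The polynomial below is illustrative.
import numpy as np

coeffs = [1.0, 0.0, -2.0j]         # f(z) = z^2 - 2i, roots z = +-(1 + i)
roots = np.roots(coeffs)           # roots in the complex z-plane

# inverse Wick rotation: z = u + iv  ->  (u, v)
uv_pairs = [(z.real, z.imag) for z in roots]
for u, v in sorted(uv_pairs):
    print(f"u = {u:+.4f}, v = {v:+.4f}")
```

The point is only that the root-finding happens in the complex domain, where ordinary analyticity is available, and the hypercomplex coordinates are recovered afterwards.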
The corrected view challenges the suggestions made on the basis of the earlier picture.
  1. Neither u nor v is a passive dynamical variable anymore. The hypercomplex conjugate h=(u,-v) of h=(u,v) can be regarded as a passive variable only in the sense that the space-time surface is determined by f: the condition f=0 implies the vanishing of its conjugate.
  2. This means that some suggestions following from the passive role of the v-variable are wrong. One such suggestion was that the intersection of the surfaces X4 and Y4 with the same H-J structure is 2-D since the intersection is effectively an intersection of two 3-D surfaces. The proposed interpretation was as a string world sheet.

    If one believes in a generic topological argument holding true in the absence of symmetries, the intersection of the surfaces X4 and Y4, and the self-intersection of X4 with its infinitesimal deformation, is discrete rather than 2-D. It would correspond to the intersection form for the two surfaces, which plays a key role in 4-dimensional topology and in knot theory.

    Here one must be very cautious. String world sheets are physically very attractive, and holography is a symmetry reducing space-time surfaces to effectively 3-dimensional objects. Could this imply that the intersection is actually a 2-D string world sheet as a kind of dual for the partonic 2-surface?

  3. The form of the metric deduced earlier for the space-time surface does not hold true. The induced metric has the general form dictated by the H-J structure and reduces to the proposed simple form only at the 3-D holographic data, possibly having an interpretation as partonic orbits.
  4. What happens to the 3-surfaces with det(g4)=0 serving as candidates for the interfaces between Minkowskian and Euclidean space-time regions? Could they correspond to the partonic orbits?

    Why would the condition det(g4)=0 be necessary at the intersection of the two branches with u=v=0 as the analog of the tip of a light-cone? For the ordinary light-cone, det(g4) vanishes at the tip in Robertson-Walker coordinates. If det(g4) is non-vanishing and differs for the two branches, the definition of the 4-D volume element becomes problematic. If the volume elements are identical, this problem disappears, but in the case of the Robertson-Walker metric the tip would correspond to a genuine hole for g4 ≠ 0. Note that one can consider an H-J structure with coordinates analogous to Robertson-Walker coordinates. The complex coordinate w would correspond to that of the sphere rM= constant, and u and v would correspond to t-r and t+r.

    The analogy with the Robertson-Walker metric suggests an interpretation for the condition guv=0 when the partial derivatives of the complex coordinates correspond to the two holographic continuations.
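As a flat-space check of the light-cone analogy (my own illustration, in the coordinates u=t-r, v=t+r mentioned above), the M4 metric reads

```latex
ds^2 = dt^2 - dr^2 - r^2\, d\Omega^2
     = du\, dv - \left(\frac{v-u}{2}\right)^2 d\Omega^2 ,
\qquad u = t-r,\quad v = t+r ,
```

so that guv = 1/2 while det(g4) = -(1/4) r4 sin2θ vanishes at the tip u=v=0, where r=0. This makes explicit how det(g4)=0 can hold at the analog of the tip of the light-cone without any fine-tuning of the branches.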

CP2 type extremals having a 1-D light-like curve as their M4 projection provide a test for the holography= holomorphy vision.
  1. For the wrong proposal one obtains only 3-D submanifolds of CP2 rather than the full CP2 or its deformation. The correct view of generalized holomorphy explains this: the 3-D section obtained represents only the holographic data.
  2. But is it possible to obtain CP2 type extremals and their deformations from the correct formulation? Wick rotation of a solution should give the CP2 type extremal from its Wick rotated version. One should understand the light-like CP2 geodesic emerging at the 3-D throats of the CP2 type extremal as a wormhole contact. The M4 metric contributes to the induced metric of CP2, and it should be possible to choose (u,v) pairs as one half of the coordinates. The induced Euclidean metric would reduce to a metrically 2-D form as the throat is approached. This could make sense but means that the M4 metric contributes to the induced metric, unlike in the case of CP2 type extremals.
  3. Should one give up CP2 type extremals as unrealistic because the gluing to the Minkowskian space-time sheets is not taken into account? One can drill two holes into a CP2 deformed to have a 1-D light-like M4 projection, but one cannot satisfy the boundary conditions at the resulting boundaries. For a realistic solution the 1-D light-like projection would be replaced with a 3-D light-like partonic orbit.
See the article Holography= holomorphy vision and a more precise view of partonic orbits or the chapter Holography= holomorphy vision: analogues of elliptic curves and partonic orbits.


Tuesday, June 24, 2025

Have we misidentified the origin of CMB and what implications this might have?

Sabine Hossenfelder talked about a highly interesting recent theoretical finding related to the origin of the cosmic microwave background (CMB) (see this). The Youtube video tells about the article of Gjergo and Kroupa (see this) raising the possibility that the so-called early-type galaxies (ETGs), found by the James Webb telescope, could give an additional contribution to the CMB, which according to the most conservative estimate is 1.4 per cent and could even dominate over the ordinary contribution. This could mean a revolution in cosmology and is therefore extremely interesting from the TGD point of view.

Consider first some background.

  1. Consider first the standard model for the origin of the CMB. Standard cosmology assumes a plasma phase. In the very early stages quarks and gluons were free. Nucleosynthesis took place, and eventually the formation of atoms became possible as the temperature of the plasma, consisting mostly of hydrogen, became low enough.

    Thermal radiation decoupled from matter and the universe became transparent. The radiation temperature started to decrease like 1/a, where a is the scale factor of the Universe, which in TGD is identifiable as the light-cone proper time for a causal diamond (CD). The age of the Universe at neutralization was about t∼ .379 My. Later reionization occurred: the formation of stars generated radiation and stellar winds, which ionized the atoms again, the Universe remaining transparent since the density was by then low.

    Some numbers are in order. The present age of the universe is about t= 1.4× 104 My. The temperature at decoupling was 3000 K so that a0/a(t)= (3/2.75)× 103∼ 103. Moreover, t0/t ∼ (1.4/.379)× 104 ∼ 3.7× 104.

  2. Very massive early-type galaxies (ETGs) were studied theoretically by using the data provided by the James Webb telescope, with a0/a(t(ETG))= 1+z. They produced dust and radiation in thermal equilibrium with it. When the age of the Universe was roughly t(ETG)∼ 500 My (considerably later than the decoupling time t∼ .379 My), the radiation decoupled from thermal equilibrium and gave an additional contribution to the CMB. The lower bound for the contribution is 1.4 per cent, but it could be of order one and could even dominate.

    The LambdaCDM view of dark matter is assumed in the theoretical considerations of the article (see this), so one must be cautious in comparing with the TGD view. The vision is that rapid star formation generated dust and radiation, which was thermalized. Decoupling from matter occurred and the analog of the CMB was generated.

  3. This finding does not challenge the Big Bang but can challenge the narrative about how stars and galaxies emerged. It could in fact change the entire cosmology of the time before these very rapidly forming galaxies appeared.

    This, together with the TGD view of cosmic evolution (see this), forces one to challenge the narrative about the cosmic evolution before nucleosynthesis; even after that, the plasma formed from hydrogen atoms and light nuclei need not have been present in considerable amounts. The assumption that gravitational condensation of hydrogen and other atoms gives rise to the formation of stars and galaxies might be wrong.

    Was there any plasma phase? Was there any primordial nucleosynthesis? Was there any CMB in the standard sense? Was there any gravitational condensation of the ordinary matter to form stars?
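The standard numbers quoted above are easy to verify. A back-of-the-envelope check (my own arithmetic, using T0 ≈ 2.7 K; not taken from the cited paper):

```python
# Sanity check of the quoted CMB numbers: decoupling at T ~ 3000 K against
# today's T0 ~ 2.7 K, plus the age ratios between today, the ETG epoch,
# and decoupling. T scales like 1/a, so a0/a equals the temperature ratio.
T_dec = 3000.0   # K, temperature at photon decoupling
T_now = 2.725    # K, present CMB temperature
t_now = 1.4e4    # My, present age of the universe
t_dec = 0.379    # My, age at decoupling
t_etg = 500.0    # My, rough epoch of the ETG dust contribution

print(f"a0/a(decoupling)   ~ {T_dec / T_now:.0f}")    # ~ 1.1e3
print(f"t0/t(decoupling)   ~ {t_now / t_dec:.1e}")    # ~ 3.7e4
print(f"t(ETG)/t(decoupling) ~ {t_etg / t_dec:.0f}")
```

So the ETG contribution would have decoupled more than a thousand times later than the standard CMB.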

TGD indeed suggests a completely different cosmology and astrophysics before the formation of these strange galaxies (see this, this, this, this, this, and this).
  1. In the TGD framework, the decay of cosmic strings by forming tangles and thickening would produce ordinary particles as liberated energy, giving rise to stars and galaxies. The long cosmic strings would thicken and produce ordinary matter in a way analogous to the decay of the vacuum expectation values of inflaton fields to ordinary matter. Galaxies and even stars need not form by gravitational condensation of ordinary matter.

    This mechanism, together with the zero energy ontology (ZEO) allowing time reversal in ordinary state function reductions, could explain the rapid formation of early-type galaxies. The decay of the cosmic strings could have produced ordinary matter and also stars and galaxies. An elegant explanation for the galactic dark matter and predictions for the flat velocity spectrum of distant stars around galaxies emerge. It is not clear whether the primordial plasma, formed from hydrogen atoms and light nuclei, has been present in considerable amounts.

  2. The Dirac equation in H=M4× CP2, assuming that M4 has a Kähler structure, predicts that colored states, in particular free quarks and gluons, cannot exist as light particles. Only hadrons and leptons are possible, and also their heavier counterparts (see this).

    This distinguishes dramatically between the standard model and TGD. The infinite hierarchy of color partial waves of quarks and leptons gives rise to corresponding hierarchies of massless hadrons and leptons, which generate thermal mass squared by p-adic thermodynamics. There would be no "desert" predicted by GUTs. Quark gluon plasma would not have been present in the early Universe. Instead, cosmic strings would have dominated, and colored states would have been present only at temperatures very near the Hagedorn temperature of order CP2 mass, assignable to cosmic strings, 4-D objects with a 2-D string world sheet as M4 projection, which dominated the mass density. Einsteinian space-time with a 4-D M4 projection did not yet exist and was generated in the transition to the radiation dominated phase.

  3. p-Adically scaled versions of hadron physics are predicted. They correspond to light colorless hadrons formed from fermion modes corresponding to different color partial waves (see this and this). This could completely revolutionize the nuclear physics of the Sun (see this). It could also revolutionize the physics of the early Universe, at least before the stabilization of atoms, because quarks and gluons would not exist except at temperatures of order the CP2 mass scale, which is of order 10-4 Planck masses. It could revolutionize cosmology even after that if the plasma, consisting of protons and light nuclei, was actually created as galaxies and stars were born as tangles of cosmic strings.
  4. The decay of cosmic strings to ordinary matter would occur by a step-wise p-adic cooling from the Hagedorn temperature determined by the CP2 mass scale (see this and this). Last year, I wrote an article considering the possibility that the nuclear physics of the Sun could differ dramatically from the standard view. At the solar surface, the M89 hadrons, with a mass scale 512 times that of ordinary hadrons, associated with monopole flux tubes connecting the Sun to the galactic nucleus, would decay into ordinary hadrons and produce the solar radiation and the solar wind. The interior of the Sun would be something completely different from what has been assumed, being analogous to the cell nucleus.

    This decay, occurring by p-adic cooling (see this), could produce a plasma consisting of hydrogen atoms and light nuclei as galaxies and stars were formed. This plasma would be created much later than has been assumed and would not be primordial! Weinberg's classic "The first 3 minutes" would become a historical curiosity!
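The factor 512 follows from the p-adic length scale hypothesis: for p≈ 2k the mass scale behaves like 2-k/2, and ordinary hadrons correspond to the Mersenne prime M107 while the heavier hadrons correspond to M89. A one-line check (assuming this standard TGD identification):

```python
# p-adic scaling sketch: with p ~ 2^k the mass scale goes like 2^(-k/2),
# so the mass-scale ratio between two hadron physics labeled by k-values
# k_heavy < k_light is 2^((k_light - k_heavy)/2).
def padic_mass_ratio(k_heavy: int, k_light: int) -> int:
    """Mass-scale ratio between p-adic primes p ~ 2^k_heavy and p ~ 2^k_light."""
    return 2 ** ((k_light - k_heavy) // 2)

# M89 hadrons vs ordinary M107 hadrons
print(padic_mass_ratio(89, 107))  # → 512
```

This reproduces the factor 512 quoted in the text.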

See the article About the recent TGD based view concerning cosmology and astrophysics or the chapter with the same title.
