https://matpitka.blogspot.com/

Friday, January 17, 2025

What are the mysterious structures observed in the lower mantle?

I learned of very interesting results related to geology. A Daily Mail popular article (see this) tells about massive structures in the Earth's deep mantle below the Pacific Ocean near the core-mantle boundary. The article "Full-waveform inversion reveals diverse origins of lower mantle positive wave speed anomalies" by Schouten et al, published in Scientific Reports (see this), describes the findings.

There are cold regions deep within the Earth where seismic waves behave in unexpected ways. The chemical composition in these regions can involve heavy elements, and these inhomogeneities increase the sound velocity. These regions, located 900 to 1,200 kilometers beneath the Pacific Ocean, defy expectations based on conventional plate tectonics theories. Structures of this kind can result from the subduction of continental plates, with a plate sinking into the mantle. There is however no subduction record in these ocean regions, so the mechanism must be different.

It seems that the recent view of the dynamics of the Earth's mantle is in need of a profound update. It has been proposed that the structures could be remnants of ancient, silica-rich materials from the early days of the Earth, when the mantle was formed billions of years ago. Alternatively, they may be areas where iron-rich rocks have accumulated over time due to the constant movement of the mantle. However, researchers are still unsure about the exact composition of these deep Earth structures.

Here is the abstract of the article of Schouten et al.

Determining Earth's structure is paramount to unravel its interior dynamics. Seismic tomography reveals positive wave speed anomalies throughout the mantle that spatially correlate with the expected locations of subducted slabs. This correlation has been widely applied in plate reconstructions and geodynamic modelling. However, global travel-time tomography typically incorporates only a limited number of easily identifiable body wave phases and is therefore strongly dependent on the source-receiver geometry.

Here, we show how global full-waveform inversion is less sensitive to source-receiver geometry and reveals numerous previously undetected positive wave speed anomalies in the lower mantle. Many of these previously undetected anomalies are situated below major oceans and continental interiors, with no geologic record of subduction, such as beneath the western Pacific Ocean. Moreover, we find no statistically significant correlation between positive anomalies as imaged using full-waveform inversion and past subduction. These findings suggest more diverse origins for these anomalies in Earth's lower mantle, unlocking full-waveform inversion as an indispensable tool for mantle exploration.

Here some terminology is perhaps in order. Seismic waves are acoustic waves, and it is their propagation in the mantle that is studied. A positive speed anomaly means that the sound speed is higher than expected. A lowering of the temperature or an increase of density, caused for instance by the presence of iron, silica, or magnesium, can produce anomalies of this kind. The Pacific Ocean and the interior regions of plates have no subduction history, so the anomalies cannot correspond to "slabs" in the sense of pieces of continental plates that have sunk into the mantle.

Why these findings are interesting from the TGD point of view is that TGD suggests that the Cambrian Explosion roughly 500 million years ago was accompanied by a rather rapid increase of the Earth's radius by a factor 2 (see this, this and this). In TGD inspired cosmology, the cosmic expansion occurs as rapid jerks, and the Cambrian Explosion would be associated with a jerk of this kind. This sudden expansion would have broken the crust to pieces and led to the formation of oceans as the underground oceans burst to the surface. The multicellular life that evolved in the underground oceans would have burst to the surface, and this could explain the mysterious sudden appearance of complex multicellular life forms in the Cambrian Explosion. In this event tectonic plates and subduction would have emerged.

I have not earlier considered what happened in the lower mantle in the sudden expansion of the Earth. Did cracks of this kind occur also at the core-mantle boundary and lead to the formation of the recently observed structures, also below regions where there is no geologic record of subduction? Could at least some regions believed to be caused by the sinking of parts of continental plates have such a structure?

See the article Expanding Earth Hypothesis and Pre-Cambrian Earth.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, January 16, 2025

Holography = holomorphy vision and elliptic functions and curves in TGD framework

The holography = holomorphy principle makes it possible to solve the extremely nonlinear partial differential equations for the space-time surfaces exactly by reducing them to algebraic equations involving an identically vanishing contraction of two holomorphic tensors of different types. In this article, space-time counterparts for elliptic curves and doubly periodic elliptic functions, in particular the Weierstrass function, are considered as an application of the method.

See the article Holography = holomorphy vision and elliptic functions and curves in TGD framework.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Tuesday, January 14, 2025

What could happen in the transition between hadronic phase and quark gluon plasma?

Quanta Magazine (see this) told about the work of Rithya Kunnawalkam Elayavalli, who studies the phase transition between quark-gluon phase and hadron phase, which is poorly understood in QCD. Even hadrons are poorly understood. The reason for this is believed to be that perturbative QCD does not exist mathematically at low energies (and long distances) since the running QCD coupling strength diverges.

Neither hadrons nor the transition between quark gluon phase and hadron phase are well-understood. The transition from hadron phase to quark-gluon phase interpreted in QCD as color deconfinement is assumed to occur but the empirical findings are in conflict with the theoretical expectations. In TGD the interpretation for the observed transition is very different from that inspired by QCD (see this and this).

  1. In TGD hadrons correspond to geometric objects, space-time surfaces, and one way to end up with TGD is to generalize hadronic string models by replacing hadronic strings with string-like 3-surfaces. These string-like 3-surfaces are present in the TGD Universe in all scales; I call them monopole flux tubes, and they appear as "body" parts of the field bodies providing the geometrization of classical fields in TGD.
  2. The TGD counterpart of the deconfinement transition need not be deconfinement as in QCD. What is clear is that this transition should involve quantum criticality and therefore long range fluctuations and quantum coherence.
What could this mean? Number theoretic vision of TGD comes to the rescue here.
  1. TGD predicts a hierarchy of effective Planck constants labelling phases of ordinary matter. The larger the value of heff, the longer the quantum coherence length, which in TGD has identification as the geometric size scale of the space-time surface, say hadronic string-like object, assignable to the particle.
  2. Does the transition involve quantum criticality, so that a superposition of space-time surfaces with varying values of heff ≥ h is present? The size scale of the hadron, proportional to heff, would quantum fluctuate.
  3. The number theoretic view of TGD also predicts a hierarchy of p-adic length scales. p-Adic mass calculations strongly suggest that p-adic primes near certain powers of 2 are favored, so that a kind of period doubling would be involved. In particular, Mersenne primes and their Gaussian counterparts are favored. The p-adic prime p is identified as a ramified prime for an extension E of rationals, and heff = nh_0 with n the dimension of E, so that p and heff correlate. The p-adic prime p characterizes a p-adic length scale proportional to p^(1/2); the corresponding mass scale is proportional to 1/p^(1/2).
  4. In particular, the existence of p-adic hierarchies of strong interaction physics and electroweak physics is highly suggestive. Mersenne primes M(n) = 2^n - 1 and their Gaussian counterparts M(G,n) = (1+i)^n - 1 would label especially interesting candidates for the scaled-up variants of these physics.

    Ordinary hadron physics would correspond to M107. The next hadron physics, corresponding to M89, would have a baryon mass scale 512 times higher than that of ordinary hadron physics. This is the mass scale studied at the LHC, and there are several indications for bumps having an interpretation as M89 mesons with masses scaled up by a factor 512. People tried to identify these bumps in terms of SUSY, but these attempts failed and the bumps were forgotten.
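The factor 512 quoted above is just the ratio of the p-adic mass scales of the two Mersenne primes. A minimal sketch (my own illustration, not from the text) checks with the Lucas-Lehmer test that M89 and M107 really are Mersenne primes and reproduces the scaling:

```python
def lucas_lehmer(p: int) -> bool:
    """Lucas-Lehmer primality test for the Mersenne number M_p = 2^p - 1."""
    if p == 2:
        return True
    m = 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# M_89 and M_107 are Mersenne primes, as the text assumes:
assert lucas_lehmer(89) and lucas_lehmer(107)

# p-adic length scale ~ sqrt(p) and mass scale ~ 1/sqrt(p), so the
# M_89 : M_107 mass-scale ratio is sqrt(2^107 / 2^89) = 2^9:
ratio = 2 ** ((107 - 89) // 2)
print(ratio)   # 512
```

The same arithmetic gives the factor 1/512 for the size of M89 hadrons discussed below.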

So, what might happen in the TGD counterpart of the deconfinement transition?
  1. Could the color deconfinement be replaced by a transition from M107 hadron physics to M89 hadron physics, in which hadrons with the ordinary value heff = h have a size smaller by a factor 1/512 than that of ordinary hadrons? At quantum criticality, however, the size would be that of ordinary hadrons. This is possible if one has heff = 512h. At high enough energies heff = h holds true and M89 hadrons are really small.
  2. Various exotic cosmic ray events (fireballs, Gemini, Centauro, etc...) could correspond to these events (see this and this). In the TGD inspired model of the Sun, M89 hadrons forming a surface layer of the Sun would play a fundamental role. They would produce solar wind and solar energy as they decay to ordinary M107 hadrons (see this).

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, January 13, 2025

The difference between real and imagined

Gary Ehlenberg sent a link to an interesting Quanta Magazine article discussing the difference between imagination and perception (see this).

Some time ago I had discussions with my friend who claimed that she really sees the things that she imagines. She also has a very good memory for places, almost like a sensory memory. I had thought that this ability is very rare; for instance, idiot savants have sensory memories.

So, do I suffer from aphantasia, an inability to imagine sensorily? I have sensory perceptions during dreams. I can see and hear in the hypnagogic state at the border of sleep and wakefulness. In my great experience I quite concretely saw my thoughts, and this led to the urge to understand what consciousness is. I can imagine, but I do not usually see any images: only after emotionally intense discussions with someone can I almost hear the spoken words. So, do I suffer from aphantasia in my normal state of mind?

The TGD inspired view of neuroscience leads to a model for the difference between real and imagined percepts based on my own experience (see this, this, this and this). Imagined percepts would be generated by a virtual sensory input from the field body realized as dark photon signals. They would not reach the retinas but would end up at some higher level of the visual neural pathway, such as the lateral geniculate nuclei or the pineal gland, the "third eye". The pineal gland is the more plausible candidate. In some animals it serves as a real third eye located outside the head. Could it serve as the seat of auditory and other imagined mental images?

At least in my own case, seeing with the pineal gland would usually be sub-conscious to me. What about people who really see their imaginations? Could they consciously see also with their pineal glands, so that the pineal gland would define the mental image as a subself? Or could some fraction of the virtual signals from the field body reach the retinas? For people suffering from aphantasia, the first option predicts that the pineal gland corresponds to a sub-sub-self, which does not give rise to a mental image of the self but to a mental image of a sub-self.

Sensory memories are also possible. Does this proposal apply to them as well? My grandson Einar is 4 years old. He read to me a story in a picture book that his parents had read to him. Einar does not yet recognize letters, nor can he read. He seems to have a sensory memory and repeated what he had heard. Maybe all children have sensory memories of this kind, but as cognitive skills develop they are replaced by conceptual memories: "house" as a representative for the full picture of a house means a huge reduction in the number of bits and therefore in the amount of metabolic energy needed. Could it be that aphantasia is the price paid for a high level of cognition? Could this distinguish between artists and thinkers?

See the chapter TGD Inspired Model for Nerve Pulse.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Sunday, January 12, 2025

Could space-time or the space of space-time surfaces be a Lagrangian manifold in some sense?

Gary Ehlenberg sent a link to a tweet on X (see this) by Curt Jaimungal. The tweet has the title "Everything is a Lagrangian submanifold". The title expresses the idea of Alan Weinstein (see this), which states that space-time is a Lagrangian submanifold (see this) of some symplectic manifold. Note that the phase space of classical mechanics represents a basic example of a symplectic manifold.

Lagrangian manifolds emerge naturally in canonical quantization. They eliminate one half of the degrees of freedom of the phase space, which realizes the Uncertainty Principle geometrically. Also the holography = holomorphy principle realizes the Uncertainty Principle by reducing the degrees of freedom by one half.
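For reference, the standard definition behind these statements can be written out; this is textbook symplectic geometry, not anything specific to TGD:

```latex
% (M, \omega): symplectic manifold of dimension 2n, with \omega closed
% and non-degenerate. A submanifold L \subset M with inclusion \iota
% is Lagrangian when the symplectic form pulls back to zero on L and
% L has the maximal dimension compatible with this:
\iota^{*}\omega = 0 , \qquad \dim L = n = \tfrac{1}{2}\,\dim M .
% Basic example: in the phase space M = T^{*}Q with
% \omega = \sum_{i} dq^{i}\wedge dp_{i}, the zero section \{p = 0\}
% is Lagrangian; restricting to it keeps the q's and discards the
% p's, i.e. drops one half of the degrees of freedom.
```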

What about the situation in TGD (see this, this and this)? Does the proposal of Alan Weinstein have some analog in the TGD framework?

Consider first the formulation of Quantum TGD.

  1. The original approach of TGD relied on the notion of Kähler action (see this). The reason was that it had exceptional properties. The Lagrangian manifolds L of CP2 gave rise to vacuum extremals of the Kähler action: any 4-surface in M4×L ⊂ H = M4×CP2 is a vacuum extremal of this action. At these space-time surfaces the induced Kähler form vanishes, and so does the Kähler action as a non-linear analog of the Maxwell action.

    The perturbative expansion of the Kähler action around these vacuum extremals begins only at orders higher than two, so that the action has no kinetic term and ordinary perturbation theory in the QFT sense (based on a path integral) would completely fail. The addition of a volume term to the action cures the situation; in the twistorialization of TGD it emerges naturally and brings in the analog of the cosmological constant not as a fundamental constant but as a dynamically generated parameter. Therefore scale invariance would not be broken at the level of the action.

  2. This was however not the only problem. The usual perturbation theory would be plagued by an infinite hierarchy of infinities, much worse than those of ordinary QFTs: they would be due to the extreme non-linearity of any general coordinate invariant action density as a function of H coordinates and their partial derivatives.
These problems eventually led to the notion of the "world of classical worlds" (WCW) as an arena of dynamics, identified as the space of 4-surfaces obeying what I now call holography, realized in some sense (see this, this, this and this). It took decades to understand in what sense the holography is realized.
  1. The 4-D general coordinate invariance would be realized in terms of holography. The definition of the WCW geometry assigns to a given 3-surface a unique or almost unique space-time surface at which general coordinate transformations can act. The space-time surfaces are therefore analogs of Bohr orbits, so that the path integral disappears, or reduces to a sum in the case that the classical dynamics is not completely deterministic. The counterparts of the usual QFT divergences disappear completely, and the Kähler geometry of WCW takes care of the remaining divergences.

    It should be noticed in passing that a year or two ago I discussed space-time surfaces which are Lagrangian manifolds of H, with M4 endowed with a generalization of the Kähler metric. This generalization was motivated by twistorialization.

  2. Eventually the realization of holography in terms of generalized holomorphy emerged, based on the idea that space-time surfaces are generalized complex surfaces of H having a generalized holomorphic structure based on 3 complex coordinates and one hypercomplex coordinate, associated with what I call the Hamilton-Jacobi structure.

    These 4-surfaces are universal extremals of any general coordinate invariant action constructible in terms of the induced geometry, since the field equations reduce to a contraction of two complex tensors of different type having no common index pairs. Space-time surfaces are minimal surfaces and analogs both of solutions of massless field equations and of massless particles, extended from point-like particles to 3-surfaces. Field-particle duality is realized geometrically.

    It is now clear that the generalized 4-D complex submanifolds of H are the correct choice to realize holography (see this).

  3. The universality, realized as action independence, in turn leads to the view that the number theoretic view of TGD could in principle make possible a purely number theoretic formulation of TGD (see this). There would be a duality between the geometric and number theoretic views (see this), which is analogous to Langlands duality. The number theoretic view is extremely predictive: for instance, it allows one to deduce the spectrum of the exponential of the action defining the vacuum functional for Bohr orbits, which does not depend on the action principle.

    The universality means an enormous computational simplification, as does the possibility to construct space-time surfaces as roots of a pair (f1,f2) of generalized analytic functions of the generalized complex coordinates of H. The field equations, which are usually partial differential equations, reduce to algebraic equations. The function pairs form a hierarchy of increasing complexity, starting with polynomials and continuing with analytic functions: both have coefficients in some extension of rationals, and even more general coefficients can be considered.
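A toy illustration of the last point (entirely my own, with a hypothetical pair f1, f2 in two complex variables rather than the four generalized complex coordinates of H): the root set of a function pair is found by algebra alone, with no differential equations to integrate.

```python
import cmath

# Hypothetical example pair; the "surface" is {(z, w) : f1 = f2 = 0}.
def f1(z, w):
    return w - z**2

def f2(z, w):
    return w - z - 2

# f1 = f2 = 0 forces z**2 - z - 2 = 0: a purely algebraic condition,
# solved by the quadratic formula.
a, b, c = 1, -1, -2
disc = cmath.sqrt(b * b - 4 * a * c)
roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
points = [(z, z**2) for z in roots]   # (2, 4) and (-1, 1)

# Every point found this way lies on both zero sets:
for z, w in points:
    assert abs(f1(z, w)) < 1e-12 and abs(f2(z, w)) < 1e-12
print(points)
```

In the actual TGD setting the roots are 4-surfaces rather than points, but the computational advantage is the same: solving algebraic conditions instead of integrating nonlinear PDEs.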

So, could Lagrangian manifolds appear in TGD in some sense?
  1. The proposal that the WCW, as the space of 4-surfaces obeying holography in some sense, has the symplectomorphisms of H as isometries has been a basic idea from the beginning. If the holography = holomorphy principle is realized, both generalized conformal transformations and generalized symplectic transformations of H would act as isometries of WCW (see this). This infinite-dimensional group of isometries must be the maximal possible to guarantee the existence of the Riemann connection: this was observed already for loop spaces by Freed. In the case of loop spaces the isometries would be generated by a Kac-Moody algebra.
  2. Holography, realized as Bohr orbit property of the space-time surfaces, suggests that one could regard WCW as an analog of a Lagrangian manifold of a larger symplectic manifold WCWext consisting of 4-surfaces of H appearing as extremals of some action principle. The Bohr orbit property defined by the holomorphy would not hold true anymore.

    If WCW can be regarded as a Lagrangian manifold of WCWext, then the group Sp(WCWext) of symplectic transformations of WCWext would indeed act in WCW. The group Sp(H) of symplectic transformations of H, a much smaller group, could define symplectic isometries of WCWext acting in WCW, just as color rotations give rise to isometries of CP2.

See the article Could space-time or the space of space-time surfaces be a Lagrangian manifold in some sense? or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, January 02, 2025

A new experimental demonstration for the occurrence of low energy nuclear reactions

I learned of highly interesting new experimental results related to low energy nuclear reactions (LENR) from a popular article published in New Energy Times (see this), giving a rather detailed view of what is involved. There is also a research article by Iwamura et al with the title "Anomalous heat generation that cannot be explained by known chemical reactions produced by nano-structured multilayer metal composites and hydrogen gas", published in Japanese Journal of Applied Physics (see this).

Note that LENR replaces the earlier term "cold fusion", which became a synonym for pseudoscience since standard nuclear physics does not allow these effects. In practice, the effects studied are however the same. LENR is often discussed in terms of the Widom-Larsen theory (see this), based on the assumption that the fundamental step in the process is not a strong interaction but a weak interaction: an electron with a large effective mass (in the condensed matter sense) combines with a proton to produce a neutron, which is very nearly at rest and is able to get near the target nucleus. The assumption that an electron has a large effective mass and is very nearly at rest can be challenged. Also, the detailed mechanisms producing the observed nuclear transmutations are not understood in the model.

1. Experiments of the Tohoku group

Consider first the experimental arrangement and results.

  1. The target consists of alternating layers: 6 Cu layers of thickness 2 nm and 6 Ni layers of thickness 14 nm. The thickness of this part is 100 nm. Below this layer structure is a bulk consisting of Ni. The thickness of the Ni bulk is 10^5 nm. The temperature of the hydrogen gas is varied during the experiment in the range 610-925 degrees Celsius. This temperature range is below the melting temperatures of Cu (1085 C) and Ni (1455 C).
  2. The target is in a chamber, pressurized by feeding hydrogen gas, which is slowly adsorbed by the target. Typically this takes 16 hours. In the second phase, when the hydrogen is fully adsorbed, air is evacuated from the chamber and heaters are switched on. During this phase excess heat is produced. For instance, in the first cycle the heating power was 19 W and the excess heat was 3.2 W and lasted for about 11 hours. At the end of the second cycle heat is turned off and the cycle is restarted.

    The experiment ran for a total of 166 hours, the input electric energy was 4.8 MJ and the net thermal energy output was 0.76 MJ.

  3. The figure of the popular article (see this) summarizes the temporal progress of the experiment and pressures and temperatures involved. Pressures are below 250 Pa: note that one atmosphere corresponds to 101325 Pa.

    The energy production is about 10^9 joules per gram of hydrogen fuel. A rough estimate gives a thermal energy production of about 10 keV per hydrogen atom. Note that the thermal energy associated with the highest temperature used (roughly 1000 K) is about 0.1 eV. In hot nuclear fusion the energy yield is roughly 300 times higher, about 3 MeV per nucleon. The ratio of the output power to the input power is typically below 16 per cent in a given phase of the experiment.

  4. The second figure (see this) represents the depth profiles, in the range 0-250 nm, of the abundances of Ni-, Cu-, C-, Si- and H- ions for the initial and final situations in an experiment in which an excess heat of 9 W was generated. The original layered structure has smoothed out, which suggests that melting has occurred. This cannot be due to the feed of heat energy: the melting of Ni requires a temperature above 1455 C.

    Earlier experiments were carried out in the adsorption phase. The recent experiments were performed in the desorption phase, and the heat production was higher. The proposal is that the fact that desorption is a faster process than adsorption could somehow explain this.
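The figures quoted in the list above are mutually consistent, as a back-of-the-envelope check shows (my own arithmetic, not from the article; constants are rounded CODATA values):

```python
N_A = 6.022e23    # Avogadro's number (atoms per mole)
EV  = 1.602e-19   # joules per electron volt
K_B = 1.381e-23   # Boltzmann constant, J/K

E_PER_GRAM = 1e9  # J per gram of hydrogen fuel (figure from the text)

# 1 g of hydrogen is about 1 mole of H atoms.
e_per_atom_keV = E_PER_GRAM / N_A / EV / 1e3
print(f"{e_per_atom_keV:.1f} keV per H atom")   # ~10 keV, as stated

# Thermal energy scale at the highest temperature, ~1000 K:
print(f"{K_B * 1000 / EV:.2f} eV")              # ~0.1 eV, as stated

# Net gain: 0.76 MJ thermal out for 4.8 MJ electrical in:
print(f"{0.76 / 4.8:.0%}")                      # 16%, as stated
```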

The Tohoku group has looked for changes in the abundances of elements and for unusual isotopic ratios after the experiments. Iwamura reports that they have seen many unusual accumulations.
  1. However, the most prevalent is an unusually high percentage of the element oxygen showing up below the surface of the multilayer composite, within the outer areas of the bulk.

    Pre-experiment analysis for the presence of oxygen concentration, after fabrication of the multilayer composite, has indicated a concentration of 0.5 to a few percent down to 1,000 nm from the top surface. The Tohoku group has observed many accumulations of oxygen in post-experimental analyses exceeding 50 percent in specific areas.

    Iwamura says that once the multilayer is fabricated, there is no way for atmospheric oxygen to leak below the top surface, at least beyond the first few nanometers. As a cross-check, researchers looked for nitrogen (which would suggest contamination from the atmosphere) but they detected no nitrogen in the samples.

  2. The Coulomb wall makes the low energy reactions of protons with the nuclei of the target extremely slow. If one assumes that the Widom-Larsen model is a correct way to overcome the Coulomb wall, it is natural to look at what kinds of stable end products the reactions p + Ni and p + Cu, made possible by the Widom-Larsen mechanism, could yield. The most abundant isotope of Ni has charge and mass number (Z,A=Z+N) = (28,58) (see this). Ni has other stable isotopes with A ∈ {60,61,62,64}. The reaction Ni+p could lead from the stable Ni isotope (28,62) resp. (28,64) to the stable Cu isotope (29,63) resp. (29,65).

    Cu has (Z,A) = (29,63) (see this) and stable isotopes with A ∈ {63,65}. The reaction Cu+p could lead from (Z,A) ∈ (29,{63,65}) to (Z,A) ∈ (30,{64,66}). This could be followed by alpha decay to (Z,A) ∈ (28,{60,62}). Iron has 4 stable isotopes with A ∈ {54,56,57,58}. 60Fe is a radionuclide with a half-life of 2.6 million years decaying to 60Ni. The alpha particle could in turn induce the transmutation of C to O.
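The (Z,A) bookkeeping in these chains is easy to verify mechanically. The sketch below is my own, with the stable-isotope sets copied from the text; it encodes proton capture as (Z,A) → (Z+1,A+1) and alpha decay as (Z,A) → (Z-2,A-4):

```python
# Stable isotope mass numbers, as listed in the text.
STABLE_NI = {58, 60, 61, 62, 64}   # Z = 28
STABLE_CU = {63, 65}               # Z = 29

def p_capture(Z, A):
    """Proton capture: (Z, A) + p -> (Z+1, A+1)."""
    return Z + 1, A + 1

def alpha_decay(Z, A):
    """Alpha emission: (Z, A) -> (Z-2, A-4)."""
    return Z - 2, A - 4

# Ni + p lands on a stable Cu isotope only for A = 62 and A = 64:
ni_hits = sorted(A for A in STABLE_NI if p_capture(28, A)[1] in STABLE_CU)
print(ni_hits)   # [62, 64], giving Cu-63 and Cu-65

# Cu + p gives Zn-64 or Zn-66; alpha decay then returns to stable Ni:
for A in STABLE_CU:
    zn = p_capture(29, A)    # (30, 64) or (30, 66)
    ni = alpha_decay(*zn)    # (28, 60) or (28, 62)
    assert ni[0] == 28 and ni[1] in STABLE_NI
```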

2. Theoretical models

Krivit has written a 3-part book "Hacking the Atom: Explorations in Nuclear Research" about LENR (see this, this, and this). I have written an article (see this) about "cold fusion"/LENR in the TGD framework inspired by this book.

The basic idea of the Widom-Larsen theory (see this) is as follows. First, a heavy surface electron is created by electromagnetic radiation in the LENR cells. This heavy electron binds with a proton to form an ultra-low momentum (ULM) neutron and a neutrino, so that basically a weak reaction would be in question. The heaviness of the surface electron implies that the kinetic tunnelling barrier due to the Uncertainty Principle is very low and allows the electron and proton to get very near to each other, so that the weak transition p+e → n+ν can occur. The neutron has no Coulomb barrier and has very low momentum, so that it can be absorbed by a target nucleus at a high rate.

The difference of the neutron and proton masses is mn - mp ≈ 2.5 me. The final state neutron produced in p+e → n+ν is almost at rest. One can argue that at the fundamental level ordinary kinematics should be used. The straightforward conclusion would be that the energy of the electron must be about 2.5 me, so it would be relativistic.
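The kinematic statement above can be checked numerically; the masses below are rounded PDG values in MeV/c^2, not taken from the text:

```python
# Neutron, proton and electron masses in MeV/c^2 (rounded PDG values).
m_n, m_p, m_e = 939.565, 938.272, 0.511

# Mass deficit of p + e -> n + nu in units of the electron mass:
deficit = (m_n - m_p) / m_e
print(f"{deficit:.2f} electron masses")   # ~2.53, the "2.5 me" above

# With the final neutron essentially at rest, the electron must carry
# a total energy of ~2.5 me, i.e. a kinetic energy of ~1.5 me:
# clearly relativistic.
print(f"kinetic energy ~ {deficit - 1:.2f} me")
```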

A second criticism relates to the heaviness of the surface electron. I did not find on the web any support for heavy electrons in Cu and Ni. The Wikipedia article (see this) and a web search suggest that heavy electrons quite generally involve f electrons, which are absent in Cu and Ni.

I also found a second model involving heavy electrons but no weak interactions (see this). Heavy electrons would catalyze nuclear transmutations. There would be three systems involved: electron, proton and nucleus. There would be no formation of an ultra-low momentum neutron. An electron would form a bound state of nuclear size with a proton. Although a Coulomb attraction is present, the Uncertainty Principle would prevent the tunnelling of ordinary electrons to a nuclear distance. It is argued that a heavy electron has a much smaller quantum size and can tunnel to this distance. After this, the electron is kicked out of the system and, by energy conservation, its energy is compensated by the generation of binding energy between the proton and the nucleus, so that a heavier nucleus is formed. The same objection applies to both the Widom-Larsen model and this model.

What about the TGD based model derived to explain the electrolysis based "cold fusion" (see this)? The findings indeed allow one to sharpen the TGD based model for "cold fusion", based on the generation of dark nuclei as dark proton sequences with binding energies in the keV range instead of the MeV range. One can understand what happens by starting from 3 mysteries.

  1. The final state contains negatively charged Ni-, Cu-, C-, Si-, O-, and H- ions. What causes their negative charge? In particular, the final state target contains O- ions although there is no oxygen present in the target in the initial state!
  2. A further mystery is that the Pollack effect requires water. Where could the water come from?

Could O2 and 2 H2 molecules present in the chamber in the initial state give rise to oxygen ions in the final state? Could the spontaneously occurring reaction 2H2+O2 → 2H2O in the H2 pressurized chamber, liberating an energy of about 4 eV, generate the water in the target volume so that the Pollack effect, induced by heating, could take place for this water? Note that the reverse of this reaction occurs in photosynthesis. The Pollack effect would transform ordinary protons to dark protons and generate a negatively charged exclusion zone involving Ni-, Cu-, C-, Si-, O-, and H- ions in the final state. The situation would effectively reduce to that in the electrolyte systems studied in the original "cold fusion" experiments.

The spontaneous transformation of dark nuclei to ordinary ones would liberate essentially all of the ordinary nuclear binding energy. It is of course not obvious whether the transformation to ordinary nuclei is needed to explain the heat production: it is however necessary for explaining the nuclear transmutations, which are not discussed in the article of the Tohoku group. The resulting dark nuclei could be rather stable, and the X-ray counterpart of the emission of gamma rays could explain the heating. That the gamma rays of ordinary nuclear physics have not been observed in "cold fusion" is the killer objection against "cold fusion" based on standard nuclear physics. In TGD, gamma rays would be replaced by X rays in the keV range, which is also the average thermal energy produced per hydrogen atom.

3. TGD inspired models of "cold fusion"/LENR or whatever it is

TGD suggests dark fusion (see this and this) as the mechanism of "cold fusion". One can consider two models explaining these phenomena in the TGD Universe. Both models rely on the hierarchy of Planck constants heff = n×h (see this, this, this, this), which explains dark matter as ordinary matter in heff = n×h phases emerging at quantum criticality. heff implies scaled-up Compton lengths and other quantal lengths, making possible quantum coherence in longer scales than usual.

The hierarchy of Planck constants heff = n×h now has a rather strong theoretical basis and reduces to number theory (see this). Quantum criticality would be essential for the phenomenon and could explain the critical doping fraction of the cathode by D nuclei. Quantum criticality could also help to explain the difficulties in replicating the effect.

    3.1 Simple modification of WL does not work

    The first model is a modification of WL and relies on dark variants of weak interactions. In this case LENR would be an appropriate term.

    1. Concerning the rate of the weak process e+p→ n+ν, the situation changes if heff is large enough, and rather large values are indeed predicted. heff could be large also for weak gauge bosons in the situation considered. Below their Compton length weak bosons are effectively massless, and this scale would be scaled up by a factor n=heff/h to almost atomic scale. This would make weak interactions as strong as electromagnetic interactions and long ranged below the Compton length, and the transformation of a proton to a neutron would be a fast process. After that a nuclear reaction sequence initiated by neutrons would take place as in WL. There is no need to assume that neutrons are ultraslow, but the electron mass remains a problem. Note that also the proton mass could be higher than normal, perhaps due to Coulomb interactions.
    2. As such this model does not solve the problem related to the too small electron mass. Nor does it solve the problem posed by gamma ray production.

    3.2 Dark nucleosynthesis

    Also the second TGD inspired model involves the heff hierarchy. Now LENR is not an appropriate term: the most interesting things would occur at the level of dark nuclear physics, which is now a key part of TGD inspired quantum biology.

    1. One piece of inspiration comes from the exclusion ones (EZs) of Pollack (see this), which are negatively charged regions (see this and this). Also the work of the group of Prof. Holmlid (see this and this), not yet included in the book of Krivit, was of great help. TGD proposal (see this) is that protons causing the ionization go to magnetic flux tubes having interpretation in terms of space-time topology in the TGD Universe. At flux tubes they have heff=n× h and form dark variants of nuclear strings, which are basic structures also for ordinary nuclei.
    2. The sequences of dark protons at flux tubes would give rise to dark counterparts of ordinary nuclei, proposed to be nuclear strings as well but with a dark nuclear binding energy whose scale is measured using MeV/n, n=heff/h, as a natural unit rather than MeV. The most plausible interpretation is that the field body/magnetic body of the nucleus has heff= n× h and is scaled up in size. n=2^11 is favoured by the fact that from Holmlid's experiments the distance between dark protons should be about the electron Compton length.

      Besides protons also deuterons and even heavier nuclei can end up in the magnetic flux tubes. They would however preserve their size and only the distances between them would be scaled to about electron Compton length on the basis of the data provided by Holmlid's experiments (see this and this).

      The reduced binding energy scale could solve the problems caused by the absence of gamma rays: instead of gamma rays one would have much less energetic photons, say X rays assignable to n=2^11 ≈ m_p/m_e. For infrared radiation the energy of photons would be about 1 eV and the nuclear energy scale would be reduced by a factor of about 10^{-6}-10^{-7}: one cannot exclude this option either. In fact, several options can be imagined since the entire spectrum of heff is predicted. This prediction is testable.

      Large heff would also induce quantum coherence in a scale between the electron Compton length and atomic size scale.

    3. The simplest possibility is that protons are just added to the growing nuclear string. In each addition one has (A,Z)→ (A+1,Z+1). This is exactly what happens in the mechanism proposed by Widom and Larsen, where already the simplest reaction sequences explain reasonably well the spectrum of end products.

      In WL the addition of a proton is a four-step process. First e+p→ n+ν occurs at the surface of the cathode. This requires a large renormalization of the electron mass and its fine tuning to be very nearly equal to but higher than the n-p mass difference.

      There is no need for these questionable assumptions of WL in TGD. Even the assumption that weak bosons correspond to a large heff phase might not be needed, but cannot be excluded without further data. The implication would be that the dark proton sequences decay rather rapidly to beta stable nuclei if a dark variant of p → n is possible.

    4. EZs and the accompanying flux tubes could be created also in an electrolyte: perhaps in the region near the cathode, where bubbles are formed. For the flux tubes leading from the system to the external world, most of the fusion products as well as the liberated nuclear energy would be lost. This could partially explain the poor replicability of the claims about energy production. Some flux tubes could however end at the surface of the catalyst under some conditions. Even in this case the particles emitted in the transformation to ordinary nuclei could be such that they leak out of the system, and Holmlid's findings indeed support this possibility.

      If there are negatively charged surfaces present, the flux tubes can end on them since the positively charged dark nuclei at the flux tubes, and therefore the flux tubes themselves, would be attracted by these surfaces. The most obvious candidate is the catalyst surface, to which electronic charge waves were assigned by WL. One can wonder whether Tesla observed in his experiments the leakage of dark matter to various surfaces of the laboratory building. In the collision with the catalyst surface, dark nuclei would transform to ordinary nuclei releasing all the ordinary nuclear binding energy. This could create the reported craters at the surface of the target and cause heating. One cannot of course exclude that nuclear reactions take place between the reaction products and the target nuclei. It is quite possible that most dark nuclei leave the system.

      It was in fact Larsen who realized that there are electronic charge waves propagating along the surface of some catalysts, and that for good catalysts such as gold they are especially strong. This suggests that electronic charge waves play a key role in the process. The proposal of WL is that due to the positive electromagnetic interaction energy the dark protons of dark nuclei could have a rest mass higher than that of the neutron (just as in ordinary nuclei) so that the reaction e + p → n+ν would become possible.

    5. Spontaneous beta decays of protons could take place inside dark nuclei just as they occur inside ordinary nuclei. If weak interactions are as strong as electromagnetic interactions, dark nuclei could rapidly transform to beta stable nuclei containing neutrons: this is also a testable prediction. Also dark strong interactions would proceed rather fast, and the dark nuclei at magnetic flux tubes could be stable in the final state. If dark stability means the same as ordinary stability, then also the isotope shifted nuclei would be stable. There is evidence that this is the case.

    Neither CF nor LENR is an appropriate term for the TGD inspired option. One would not have ordinary nuclear reactions: nuclei would be created as dark proton sequences, and the nuclear physics involved has a considerably smaller energy scale than usual. This mechanism could allow at least the generation of nuclei heavier than Fe, which is not possible inside stars, so that supernova explosions would not be needed to achieve this. The observation that transmuted nuclei are observed in four bands of nuclear charge Z irrespective of the catalyst used suggests that the catalyst itself does not determine the outcome.

    One can of course wonder whether even "transmutation" is an appropriate term now. Dark nucleosynthesis, which could in fact be the mechanism of ordinary nucleosynthesis outside stellar interiors to explain how elements heavier than iron are produced, might be a more appropriate term.
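
    The scaling estimates above can be sanity-checked with a few lines of Python. This is only a sketch using standard constants; the value n = heff/h ≈ m_p/m_e ≈ 2^11 is the assumption made in the text.

```python
# Sanity check of the dark-nucleus scaling estimates (a sketch;
# the value n = heff/h = 2^11 is the text's assumption).

HBARC_MEV_FM = 197.327      # hbar*c in MeV*fm
M_P_MEV = 938.272           # proton mass in MeV
M_E_MEV = 0.511             # electron mass in MeV

n = 2**11                   # proposed heff/h, roughly m_p/m_e

# Reduced Compton lengths in fm
lambda_p = HBARC_MEV_FM / M_P_MEV   # ~0.21 fm
lambda_e = HBARC_MEV_FM / M_E_MEV   # ~386 fm

# Scaling the proton Compton length by n should give roughly the
# electron Compton length, i.e. the proposed dark proton spacing.
print(lambda_p * n / lambda_e)      # ~1.1, close to 1

# A nuclear binding energy of a few MeV per nucleon, scaled down by n,
# lands in the keV range: X rays instead of gamma rays.
print(7.0 / n * 1000)               # ~3.4 keV
```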

    3.3 The TGD based model and the findings of Iwamura et al

    The presence of Ni-, Cu-, C-, Si- and H- ions in the target is an important guideline. LENR involves negatively charged surfaces at which the presence of electrons is thought to catalyze transmutations: the WL model relies on this idea. The question concerns the ionization mechanism.

    1. The appearance of Si- in the entire target volume could be understood in terms of melting. It is difficult to understand its appearance as being due to nuclear transmutations.
    2. What is remarkable is the appearance of O-. The Coulomb wall makes it very implausible that the absorption of an ordinary alpha particle in LENR could induce the transmutation of C to O.

      Could the oxygen be produced by dark fusion? It is however difficult to see why oxygen should have such a preferred role as a reaction product in dark fusion, which favours light nuclei.

      Could the oxygen enter the target during the first phase, when the pressurized hydrogen gas is present together with air, as the statement that air was evacuated after the first stage suggests? Iwamura has however stated that nitrogen N, also present in air, is not detected in the target, so that a leakage of O to the target looks implausible. Could the leakage of oxygen rely on a less direct mechanism?

    3. Oxygen and hydrogen appear as O2 and H2 molecules, with binding energies of 5.912 eV and 4.51 eV respectively. Therefore the reaction 2H2+O2→ 2H2O could occur during the pressurization phase. The energy liberated in this reaction is estimated to be about 4.88 eV (see this).
    4. What is remarkable is that water plays a key role in the Pollack effect, interpreted as a formation of dark proton sequences. The Pollack effect generates exclusion zones as negatively charged regions, and the Ni-, Cu-, C-, Si- and H- ions would serve as a signature of these regions. In the "cold fusion" based on electrolysis, the water would be present from the beginning, but now it would be generated by the proposed mechanism.

      The difference between the bonding energy of OH and the binding energy of O- is about 0.33 eV in the absence of electric fields and corresponds to thermal energy at a temperature of 630 C. This suggests that heating replaces the IR photons of the ordinary Pollack effect as the energy source inducing the formation of dark protons and of exclusion zones consisting of negative ions.

    5. In fact, the Pollack effect suggests a deep connection between computers, quantum computers and living matter based on the notion of the OH-O- + dark proton qubit and its generalizations (see this).
    6. The earlier TGD based model for "cold fusion" as dark fusion suggests that the value of heff for dark protons is such that the Compton length is of order electron Compton length. Dark proton sequences as dark nuclei would spontaneously decay to ordinary nuclei and produce the heat. In TGD, ordinary nuclei also form nuclear strings as monopole flux tubes (see this).

      TGD assigns a large value of heff to systems having long ranged and strong enough classical gravitational and electric fields (see this and this). For gravitational fields the gravitational Planck constant is very large and the gravitational Compton length is one half of the Schwarzschild radius of the system with large mass (Sun or Earth). In biology, charged systems such as DNA, cells and the Earth itself involve a large negative charge and therefore a large electric Planck constant proportional to the total charge of the system. The Pollack effect generates negatively charged exclusion zones, which could be characterized by the gravitational or electric Planck constant. In the recent case, the electric Compton length of dark protons should be of the order of the electron Compton length so that heff/h≈ mp/me≈ 2^11 is suggestive.
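
      The reaction energetics in item 3 above can be checked against standard gas-phase dissociation energies. The numbers below are common textbook values assumed here (they differ slightly from those quoted in the text) and reproduce an energy release of about 5 eV per reaction, in reasonable agreement with the quoted ~4.88 eV.

```python
# Energy released in 2 H2 + O2 -> 2 H2O from bond energies (a sketch;
# standard textbook dissociation energies, assumed here).

E_H2 = 4.478    # eV, H2 bond dissociation energy
E_O2 = 5.117    # eV, O2 bond dissociation energy
E_H2O = 9.51    # eV, atomization energy of one H2O (two O-H bonds)

# Energy out = bonds formed minus bonds broken, per 2 H2O produced
released = 2 * E_H2O - (2 * E_H2 + E_O2)
print(round(released, 2))   # ~4.95 eV, near the quoted ~4.88 eV
```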

    3.4 Summary

    In the TGD based model, the reaction 2H2+O2 → 2H2O transforms the situation to that appearing in electrolysis, and the Pollack effect would also now be the basic mechanism producing dark nuclei as dark proton sequences transforming spontaneously to ordinary nuclei. Whether this mechanism is involved should be tested.

    The TGD based model predicts much more than is reported in the article of Iwamura et al. A spectrum of light nuclei, containing at least alpha particles, should be produced in the process, but there is no information about this spectrum in the article.

    1. The article reports only the initial and final state concentrations of Ni-, Cu-, C-, O-, and H- but does not provide information about all the nuclei produced by transmutations. Melting has very probably occurred for Ni and Cu.
    2. The heat production rate is higher during the desorption phase than during the adsorption phase. The TGD explanation would be that the dark proton sequences have reached a full length during desorption and can produce more nuclei as they decay.
    3. The finding that the maximum of the energy production per hydrogen atom is roughly 100 times smaller than the binding energy scale of nuclei forces one to challenge dark fusion as a reaction mechanism. The explanation could be that the creation of dark nuclei from hydrogen atoms is the rate limiting step. If roughly 1 percent of the hydrogen atoms generate dark protons, the rate of heat production could be understood.
    4. The basic prediction of the Widom-Larsen model about (A,Z)→ (A+1,Z+1)→ .. follows trivially from the TGD inspired model, in which dark nuclei, with a binding energy scale much lower than for ordinary nuclei and a Compton length of the order of the electron Compton length, are formed as sequences consisting of dark protons, deuterons or even heavier nuclei, which then transform to ordinary nuclei and liberate nuclear binding energy. This occurs at negatively charged surfaces (that of the cathode, for instance) since they attract positively charged flux tubes. On the other hand, the negative surface charge could be generated in the Pollack effect for the water molecules, generating an exclusion zone and dark protons at the monopole flux tubes.

      The energy scale of the dark variants of gamma rays liberated in dark nuclear reactions is considerably smaller than that of ordinary gamma rays, since it is scaled down from a few MeV to a few keV, which indeed corresponds to the thermal energy liberated per hydrogen atom. This could explain why gamma rays are not observed. The questionable assumptions of the Widom-Larsen model are not needed.

      The maximum length of dark nucleon sequences determines how heavy nuclei can emerge. The minimum length corresponds to a single alpha nucleus and it could induce nuclear transformation such as the transmutation of C to O. Part of the dark nuclei could escape from the target volume and remain undetected. Dark nuclei could also directly interact with the target nuclei, in particular Ni and Cu.

    See A new experimental demonstration for the occurrence of low energy nuclear reactions or the chapter Cold Fusion Again.

    For a summary of the earlier postings see Latest progress in TGD.

    For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, December 30, 2024

From blackholes to time reversed blackholes: comparing the views of time evolution provided by general relativity and TGD

The TGD inspired very early cosmology is dominated by cosmic strings. Zero energy ontology (ZEO) suggests that this is the case also for the very late cosmology, or the one emerging after a "big" state function reduction (BSFR) changing the arrow of time. Could this be a counterpart for the general relativity based vision of black holes as the endpoint of evolution? Now they would also be starting points of a time evolution, as a mini cosmology with a reversed time direction. This would conform with the TGD explanation for stars and galaxies older than the Universe and with the picture produced by JWST.

First some facts about zero energy ontology (ZEO) are needed.

  1. In ZEO, space-time surfaces satisfy almost deterministic holography and have their ends at the boundaries of a causal diamond (CD), which is the intersection of future and past directed light-cones and can have a varying scale. The 3-D states at the passive boundary of the CD are unaffected in "small" state function reductions (SSFRs) but change at the active boundary. In the simplest scenario the CD itself suffers a scaling in SSFRs.
  2. In BSFRs the roles of passive and active boundaries of CD change. The self defined by the sequence of SSFRs "dies" and reincarnates with an opposite arrow of geometric time. The hierarchy of effective Planck constants predicted by TGD implies that BSFRs occur even in cosmological scales and this could occur even for blackhole-like objects in the TGD counterpart of evaporation.
Also some basic ideas related to TGD based astrophysics and cosmology are in order.
  1. I have suggested that the counterpart of the GRT black hole in TGD, which I call a blackhole-like object (BH), is a maximally dense volume-filling flux tube tangle (see this). Actually an entire hierarchy of BHs with quantized string tension is predicted (see this): ordinary BHs would correspond to flux tubes consisting of nucleons (which correspond to the Mersenne prime M107 in TGD) and would be essentially giant nuclei.

    M89 hadron physics and the corresponding BHs are in principle also possible and have a string tension which is 512 times higher than that of the flux tubes associated with ordinary blackholes. Surprisingly, they could play a key part in solar physics.

  2. The very early cosmology of TGD (see this) corresponds to the region near the passive boundary of CD that would be cosmic string dominated. The upper limit for the temperature would be Hagedorn temperature. Cosmic strings are 4-D objects but their CP2 projection is extremely small so that they look like strings in M4.

    The geometry of the CD strongly suggests a scaled down analog of the big bang at the passive boundary and of a big crunch at the active boundary, the time reversal of the big bang, as a BSFR. This picture should also apply to the evolution of a BH. Could one think that a gas of cosmic strings evolves to a BH or to several of them?

  3. In ZEO, the situation at the active future boundary of the CD after BSFR should be similar to that at the passive boundary before it. This requires that the evaporation of the BH at the active boundary must occur as an analog of the big bang, and gives rise to a gas consisting of flux tubes as analog of cosmic string dominated cosmology. Symmetry would be achieved between the boundaries of the CD.
  4. In general relativity, the fate of all matter is to end up in blackholes, which possibly evaporate. What about the situation in TGD? Does all matter end up in a tangle formed by volume filling flux tubes, which then evaporates to a gas of flux tubes in an analog of the Big Bang?

    Holography = holomorphy vision states that space-time surfaces can be constructed as roots of pairs (f1,f2) of analytic functions of 3 complex coordinates and one hypercomplex coordinate of H=M4× CP2. By holography the data would reside at certain 3-surfaces. The 3-surfaces at either end of the causal diamond (CD), the light-like partonic orbits, and lower-dimensional surfaces are good candidates in this respect.

    Could the matter at the passive boundary of CDs consist of monopole flux tubes which in TGD form the building bricks of blackhole-like objects (BHs) and could the BSFR leading to the change of the arrow of geometric time transform the BH at the active boundary of CD to a gas of monopole flux tubes? This would allow a rather detailed picture of what space-time surfaces look like.

Black hole evaporation as an analog of time reversed big bang can be a completely different thing in TGD than in general relativity.
  1. Let's first see whether a naive generalization of the GRT picture makes sense.
    1. The temperature of a black hole is T=ℏ/(8πGM). For the ordinary hbar it would be extremely low, and the black hole radiation would therefore be of extremely low energy.
    2. If hbar is replaced by the gravitational Planck constant GMm/β0, the situation changes completely. We get T= m/(8πβ0): the temperature associated with a massive particle of mass m is essentially m, so that each particle would have its own relativistic temperature. What about photons? They could have a very small mass in p-adic thermodynamics.
    3. If m=M, we get T=M/(8πβ0). This temperature seems completely insane. I have developed a quantum model of the black hole as a quantum system, and in this situation the notion of temperature does not make sense.
  2. Since the counterpart of the black hole would be a flux tube-like object, the Hagedorn temperature TH is a more natural guess for the counterpart of the evaporation temperature and also for the blackhole temperature. In fact, the ordinary M107 BH would correspond to a giant nucleus as a nuclear string. Also an M89 BH can be considered. A straightforward, dimensionally motivated guess for the Hagedorn temperature is suggested by the p-adic length scale hypothesis as TH= xℏ/L(k), where x is a numerical factor. For blackholes as k=107 objects this would give a temperature of order 224 MeV for x=1. Hadron physics gives experimental evidence for a Hagedorn temperature of about T=140 MeV, near the pion mass and the scale determined by ΛQCD, which would naturally be related to the hadronic value of the cosmological constant Λ.
  3. One can of course ask whether BH evaporation in this sense is just the counterpart of the formation of a supernova. Could the genuine BH be an M89 blackhole formed as an M107 nuclear string transforms to an M89 nuclear string and then evaporates in a mini Big Bang? Could the volume filling property of the M107 flux tube make possible the touching of string portions, inducing the transition to M89 hadron physics just as it is proposed to do in the process which corresponds to the formation of QCD plasma in TGD (see this)?
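
The temperature estimates above can be made concrete with a short calculation. The Hawking formula is standard; the substitution hbar → GMm/β0 and the hadronic length scale L(107) ≈ 0.88 fm used below are assumptions following the text.

```python
import math

# Temperature scales discussed above (a sketch; hbar_gr and L(107)
# are assumptions following the text, the Hawking formula is standard).

hbar = 1.054571817e-34    # J*s
c = 2.99792458e8          # m/s
G = 6.67430e-11           # m^3 kg^-1 s^-2
kB = 1.380649e-23         # J/K
M_sun = 1.98892e30        # kg

# 1. Ordinary Hawking temperature T = hbar c^3/(8 pi G M kB) for a
#    solar-mass black hole: extremely low.
T_hawking = hbar * c**3 / (8 * math.pi * G * M_sun * kB)
print(T_hawking)          # ~6e-8 K

# 2. With hbar -> hbar_gr = GMm/beta0 (units c = 1), the mass M drops
#    out and T = m/(8 pi beta0): a relativistic temperature of the
#    order of the particle mass itself. For a proton and beta0 = 1:
m_p_MeV = 938.272
beta0 = 1.0
print(m_p_MeV / (8 * math.pi * beta0))   # ~37 MeV

# 3. Hagedorn estimate T_H = x * hbar*c/L(k) with x = 1 and an assumed
#    hadronic length scale L(107) ~ 0.88 fm:
HBARC_MEV_FM = 197.327
print(HBARC_MEV_FM / 0.88)               # ~224 MeV, the quoted value
```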
For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.