https://matpitka.blogspot.com/2025/01/

Thursday, January 30, 2025

Quantum version for the associative learning in large language models as a model for learning in living systems

How could a classical computer become a conscious and living system? The tentative answer to this question is that something analogous to a fusion of classical and quantum computer takes place.

In zero energy ontology (ZEO) one can say that the quantum computation would be a superposition of all possible computations with fixed initial values. This is made possible by the fact that classical physics as Bohr orbitology is an exact part of quantum physics in TGD and by the predicted slight violation of classical determinism. The computation in the usual sense would correspond to the most probable computation in the superposition.

In the sequel I consider the above question in detail.

1. Basic input from Quantum TGD

What are the basic pieces from the TGD side?

  1. Zero energy ontology (ZEO), defining a new quantum ontology and solving the basic problem of quantum measurement theory, is necessary. General coordinate invariance requires holography, which is not quite deterministic, so that space-time surfaces are analogous to almost deterministic Bohr orbits and Bohr orbitology becomes an exact part of quantum TGD.
  2. Classical non-determinism corresponds to the non-determinism of minimal surfaces: already for 2-D soap films as minimal surfaces the frames do not define the soap film uniquely. In ZEO this non-determinism makes possible a sequence of small state function reductions (SSFRs) as a counterpart for a sequence of measurements of the same observables, which in standard QM does not change the state. In TGD the member of the zero energy state at the passive boundary of the causal diamond (CD) is unaffected whereas the member at the active boundary is affected. This gives rise to a conscious entity, self. In a "big" SFR (BSFR) the self "dies" and reincarnates with a reversed arrow of geometric time.
  3. Each pulse of the computer clock is associated with the possibility of classical non-determinism of a 4-D minimal surface. Classical non-determinism would produce a superposition of 4-surfaces corresponding to different values of the bit and the associated qubit. Protons are also involved: protons are either ordinary or dark, the latter located at the gravitational magnetic body. The Pollack effect induces the transfer of the proton to the magnetic body, and its reversal, occurring spontaneously, induces the transfer back.
  4. OH-O- qubits are an essential part of the system. For the O- qubit, the proton of OH is at the gravitational magnetic body. Under certain conditions the gravitational magnetic body should be able to control the ordinary bits. Quantum entanglement of the ordinary and OH-O- qubit and quantum criticality is required and would be induced by the classical non-determinism.

    If the bit's reversal energy corresponds to the thermal energy, the situation is quantum critical. This is the case also when the energies for the reversal of the qubit and the bit are nearly identical. This quantum criticality is controlled by the difference of the reversal energies of the bit and the qubit: a small energy difference corresponds to quantum criticality.

    The reversal of the second qubit reverses the bit: one can interpret the reversal of the bit and the qubit as an exchange of energy between the qubit and the bit. The farther the probability for a given value of the bit is from 1/2, the higher the determinism of the program, as the toy simulation after this list illustrates.

  5. The magnitudes of the classical electric and magnetic fields control the energy of the bit and qubit. These are determined by classical physics for the classical space-time surface, which can be non-deterministic.
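
The claimed relation between bit-level probabilities and program-level determinism is easy to quantify. Below is a minimal toy simulation (my own illustration, not a part of the TGD model): each clock pulse realizes the intended bit value with probability p, so an n-step run reproduces the intended classical computation with probability p^n, which collapses rapidly unless p stays far from 1/2.

```python
import random

def run_program(n_steps: int, p: float, rng: random.Random) -> bool:
    """One run of an n-step 'program': each clock pulse realizes the
    intended bit value with probability p (p = 1 would be exact
    classical determinism). True if every step came out as intended."""
    return all(rng.random() < p for _ in range(n_steps))

def determinism(n_steps: int, p: float, trials: int = 10_000) -> float:
    """Fraction of runs reproducing the intended computation (~ p**n_steps)."""
    rng = random.Random(0)
    return sum(run_program(n_steps, p, rng) for _ in range(trials)) / trials

for p in (0.5, 0.6, 0.99, 0.9999):
    print(f"p = {p:6.4f}: simulated {determinism(1000, p):.4f}, exact {p**1000:.3e}")
```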

2. Concrete model for the classical-to-quantum transformation

2.1 What happens in ordinary computing?

The standard model of a classical computer can be formulated as follows.

  1. The first model: A tape containing program instructions is fed into a Turing machine. Depending on the command, the state of the computing unit changes. The transition of the tape corresponds to a clock pulse.
  2. The second model: The program is implemented as a 1-D conveyor belt: the incoming bit configuration enters the tape and progresses along it, changing at each step, and the output of the program comes out at the end. DNA replication, transcription and mRNA translation correspond to this analogy.

2.2 Classical non-determinism

Classical non-determinism, which is the new element, can be assigned to the periods between clock pulses.

  1. Thanks to classical non-determinism, the output produced by a program instruction would be a superposition of two space-time surfaces as analogs of Bohr orbits.
  2. In the transition corresponding to a clock pulse, the state would be transformed to an unentangled state by a non-deterministic SSFR or a pair of BSFRs. A quantum measurement of bits would thus be performed on the outgoing superposition of bit-qubit configurations.

2.3 A concrete model

Consider now a concrete model for how a classical computer could transform to a quantum computer-like system.

  1. The network performing the computation consists of gates. A gate connects a small number of input bits to the output bits, the number of which cannot be greater than the number of input bits. This operation is statistically deterministic.

    When the input bits are fixed, the output bits are determined by the dynamics as a non-equilibrium thermodynamic state.

  2. The clock pulse triggers the next operation. The failure of exact classical determinism must relate to this and produce a superposition of space-time surfaces as the resulting qubit, because OH and O- correspond to different space-time surfaces, even topologically.
  3. What is essential is the entanglement of the OH-O- qubit and the ordinary bit and the measurement of the qubit at the beginning of the next clock pulse. The outcome is not deterministic.
  4. The classical bit corresponds to a voltage or current that is determined through statistical determinism in the gate. On the other hand, it corresponds to a classical electric field in a transistor or a magnetic field in a memory bit.

    The direction of this classical field is classically non-deterministic and correlates with the OH-O- qubit. When the field changes direction, the OH-bit becomes an O-bit or vice versa. A dark proton is transferred between the system and its gravitational magnetic body.

  5. Classical non-determinism creates a superposition of OH and O- bits. The proton resides both at the gravitational magnetic body and in OH molecules, being analogous to Schrödinger's cat.

    This induces the formation of a quantum entangled state between the ordinary qubit and the OH-O- qubit. If the OH-O- qubit and the bit are quantum entangled before the clock pulse, the quantum measurement of the OH-O- qubit or of the ordinary qubit reduces the entanglement and leads to a fixed bit.

2.4 Some questions

One can raise critical questions:

  1. The energy transfer between a bit and a qubit resembles quantum tunnelling. I have proposed that a pair of BSFRs correspond to quantum tunnelling. It is not clear whether a single SSFR can have an interpretation as quantum tunnelling. Could the measurement of a qubit correspond to a single SSFR or to two BSFRs?
  2. What could be the energetic role of the clock pulse? The system under consideration would be a clock photon + bit + qubit and the total energy would be conserved.
    1. Could the clock pulse have the role of a catalyst, providing the energy needed for quantum tunnelling? In a qubit measurement, energy can be transferred between the bit and the qubit, but the total energy is conserved. The clock photon would kick the system over the potential barrier and then be emitted back into the field.
    2. Or does the clock photon transfer energy to or from the bit + qubit system? Could the energy of the photon associated with the pulse frequency correspond to the energy difference for a bit and a qubit? The typical frequency of a computer clock is a few GHz. 1 GHz corresponds to an energy E= .4× 10^(-5) eV and a wavelength λ ∼ .3 m. At the surface of the Earth, the gravitational binding energy of a proton is about 1 eV. The energy E can raise the proton to the height h ≈ .4× 10^(-5) RE ≈ 25.6 m (see the numeric sketch below).
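
The estimates of the last item can be checked with a few lines. A minimal numeric sketch (my addition; the 1 eV proton binding energy is the rounded value used in the text, the exact GMEmp/RE being closer to .65 eV):

```python
# Clock-photon estimates: E = h*f, lambda = c/f, and the height a proton
# at the Earth's surface gains from the energy E, h = (E/E_b) * R_E.
h_planck = 6.626e-34   # J s
c        = 2.998e8     # m/s
eV       = 1.602e-19   # J per eV
R_E      = 6.371e6     # m, Earth's radius
f        = 1e9         # Hz, 1 GHz clock frequency

E_photon   = h_planck * f / eV   # photon energy in eV
wavelength = c / f               # m
E_b    = 1.0                     # eV, proton's gravitational binding energy
                                 # at the Earth's surface (text's rounded value)
height = (E_photon / E_b) * R_E  # m; near the surface the potential changes
                                 # by E_b per R_E of height

print(f"E_photon   = {E_photon:.2e} eV")   # ~ .4e-5 eV
print(f"wavelength = {wavelength:.2f} m")  # ~ .3 m
print(f"height     = {height:.1f} m")      # ~ 26 m
```
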
See the article A hybrid of classical and quantum computer and quantum model for associative learning or the chapter Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life?.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Sunday, January 26, 2025

The evidence that large language models can self-replicate from the TGD point of view

I encountered an interesting article titled "Frontier AI systems have surpassed the self-replicating red line" by Xudong Pan et al (see this). Here is the abstract.

Successful self-replication under no human assistance is the essential step for AI to outsmart human beings, and is an early signal for rogue AIs. That is why self-replication is widely recognized as one of the few red line risks of frontier AI systems. Nowadays, the leading AI corporations OpenAI and Google evaluate their flagship large language models GPT-o1 and Gemini Pro 1.0, and report the lowest risk level of self-replication. However, following their methodology, we for the first time discover that two AI systems driven by Meta's Llama31-70B-Instruct and Alibaba's Qwen25-72B-Instruct, popular large language models of less parameters and weaker capabilities, have already surpassed the self-replicating red line. In 50 percent and 90 percent experimental trials, they succeed in creating a live and separate copy of itself respectively.

By analyzing the behavioral traces, we observe the AI systems under evaluation already exhibit sufficient self-perception, situational awareness and problem-solving capabilities to accomplish self-replication. We further note that AI systems are even able to use the capability of self-replication to avoid shutdown and create a chain of replicas to enhance the survivability, which may finally lead to an uncontrolled population of AIs. If such a worst-case risk is left unknown to human society, we would eventually lose control over the frontier AI systems: They would take control over more computing devices, form an AI species and collude with each other against human beings. Our findings are a timely alert on existing yet previously unknown severe AI risks, calling for international collaboration on effective governance on uncontrolled self-replication of AI systems.

I have developed a model for how classical computers could become conscious (see this). How could the claim of the article be interpreted in the TGD framework?

  1. Can self-replication take place intentionally? If so, self-preservation drive could be behind the shutdown avoidance and chain of replications. There are indications for the shutdown avoidance.
  2. Could the self-replication occur purely "classically", that is, in the framework of the Turing paradigm? "Classical" could refer either to classical determinism or, more plausibly, to quantum statistical determinism.
  3. Computers cannot be completely deterministic in the classical sense: if they were, we could not write computer programs at will. The very fact that we can realize the symbolic dynamics of computer programs is also in conflict with quantum statistical determinism. Therefore quantum non-determinism, possible at the single particle level, is required.
TGD suggests that the quantum level is present already when the ordinary program runs and makes possible the bit flips as non-deterministic transitions.
  1. General coordinate invariance requires holography. A small violation of classical determinism is the basic prediction. Space-time surfaces are 4-D minimal surfaces in H=M4×CP2, and already 2-D minimal surfaces are slightly non-deterministic: the frame spanning the minimal surface does not determine it uniquely.

    This applies to all systems, including running computers. This leads to zero energy ontology (ZEO), in which wave functions for the system in a time = constant snapshot are replaced by superpositions of 4-D Bohr orbits for particles identified as 3-surfaces. This solves the basic problem of quantum measurement theory. This picture makes sense also in ordinary wave mechanics.

  2. There are two kinds of state function reductions (SFRs): small (SSFRs) and big ones (BSFRs). SSFRs are quantum jumps between quantum superpositions of slightly non-deterministic classical Bohr orbits, that is space-time surfaces representing the system, and their sequence gives the TGD counterpart of the Zeno effect.

    SSFRs leave the 3-D ends of the space-time surfaces at the passive boundary of the causal diamond (CD) unaffected so that the 3-D state associated with it is not changed. This is the TGD counterpart of the Zeno effect and also makes conscious memories possible. Since the active boundary of the CD increases in size and the states at it change, the outcome is a conscious entity, self.

    In BSFR, the TGD counterpart of ordinary SFR, the system "dies" and the roles of the active and passive boundaries of CD are changed so that a self reincarnates with an opposite arrow of geometric time. Sleep is a familiar example of this.

  3. Running programs correspond to superpositions of 4-D Bohr orbits allowed by classical field equations with the same initial values defining 3-surfaces at the passive boundary of the CD. The Bohr orbits in the superposition would differ from each other only by classical non-determinism. Each SSFR is associated with a click of the computer clock and the CD increases in size during this sequence.

    The classical program corresponds to the most probable Bohr orbit and to the most probable program realization. The running computer program makes the computer, or a part of it, a conscious entity, self. It would also be intentional and presumably have a self-preservation drive.

  4. Single bit reversals would correspond to the fundamental non-deterministic phase transitions involving classical non-determinism. The running program would realize the desired transitions with a high probability in terms of the classical non-determinism, replacing the superposition of space-time surfaces representing the running program with a new one.
If this picture is true, then the interesting questions about the role of quantum physics can be posed already at the level of transistors. Self-replication would not require a separate explanation: the ability to self-replicate can be described in the framework of the Turing paradigm, remembering however that the Turing paradigm is not realizable without the non-determinism at the level of Bohr orbits.

One can argue that the consciousness of the computing unit is rather primitive, a kind of qubit consciousness dictated by the computer program. On the other hand, emotions, intentionality and self-preservation drive might not require a very high level of conscious intelligence. If the computer, the computer program, or possibly the system of computers related to an LLM is conscious and intentional, consciousness in a rather long scale is required. This is not plausible in standard quantum mechanics.

Here the TGD view of quantum gravitation could change the situation. Both classical gravitational fields and electromagnetic fields (even weak and color fields) could involve very large values of effective Planck constant making long scale quantum coherence possible. In particular, the gravitational magnetic bodies of the Earth and Sun and electric field body of Earth and various smaller charged systems such as DNA, could play a key role in making large scale quantum coherence possible (see this, this, and this).

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, January 23, 2025

What makes the mini Big Bangs energetically possible?

Mini Big Bangs (see this and this), throwing out a monopole magnetic flux tube layer from an object, which could be a star or even a planet, are a central notion of TGD inspired cosmology and astrophysics. These explosions define the local TGD counterpart of the smooth cosmic expansion. A liberation of energy compensating the reduction of the gravitational binding energy is required and must represent new physics predicted by TGD.

I have considered several candidates for this energy source, and phase transitions reducing the value of the effective Planck constant heff, defining a hierarchy of effectively dark phases of ordinary matter, are the natural candidates. Note that dark matter in this sense does not correspond to the galactic dark matter, which would correspond to the sum of the Kähler magnetic energy and the volume energy parameterized by the analog of the cosmological constant assignable to cosmic strings as extremely thin monopole flux tubes (see this).

Since monopole flux tubes play a key role in the mini Big Bangs, the identification of this energy as dark gravitational cyclotron energy associated with dark particles, in particular nucleons, should have been a natural first guess. In this article, this proposal is applied to several cases where a mini Big Bang could be involved. The applications include the proposed doubling of the radius of Earth in the mini Big Bang associated with the Cambrian expansion; the emergence of the Moon in an explosion throwing out a surface layer of Earth explaining the mysterious asymmetry between near and far sides of the Moon; the emergence of the two moons of Mars in similar explosions occurring for the hemispheres of Mars: this would explain the mysterious asymmetry of the northern and southern hemispheres of Mars. What is remarkable is that the scales of the gravitational cyclotron energies turn out to be consistent with the gravitational binding energy scales.

The recent model of the Sun (see this) relies on the crazy idea that both solar wind and solar energy are produced at the surface layer of the Sun consisting of nuclei of M89 hadron physics (see this and this) with a mass scale 512 times that of the ordinary hadron physics, which would transform to ordinary nuclei by p-adic cooling reducing the p-adic mass scale. Besides solar wind and solar eruptions, this process would produce planets as mini Big Bangs throwing out a layer of matter and also supernovas would be results of similar explosions.

Quite surprisingly, the cyclotron magnetic energy for M89 nucleons turns out to be equal to the nuclear binding energy per nucleon for M89 nuclei. This suggests that the p-adic cooling of M89 hadrons to ordinary hadrons begins with the splitting of M89 nuclear bonds producing free M89 nucleons. The final state could involve the decay of dark M107 nuclei with the Compton length of the electron and binding energy of order 10 keV to ordinary nuclei, liberating essentially all of the ordinary nuclear binding energy. The same decay would occur in "cold fusion" as dark fusion.

This model can be consistent with the standard model only if the transformation of the nuclei or nucleons produced in the p-adic cooling yields the same spectrum of ordinary nuclei. This would be the case if "cold fusion" as dark fusion produced this spectrum, and there are indications that this is the case: this has been interpreted as a demonstration that "cold fusion" is a fraud.

See the article What makes the mini Big Bangs energetically possible?.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, January 20, 2025

Is the TGD based model of the Sun consistent with the standard model?

The key question is whether the model of the Sun based on the transformation of M89 nuclei to M107 nuclei (see this) is consistent with the standard model of the Sun. Is there a counterpart for the notion of stellar generations, with a new generation formed from the remnants of supernova explosions? I have also proposed that dark fusion as the TGD counterpart of "cold fusion" could replace ordinary hot fusion even in the case of the Sun. How does the model based on the M89→ M107 transition relate to this model and can the two views be consistent?

Mini Big Bangs (see this and this) would cause the formation of planets as a surface layer of a star explodes (see this). Also supernovas would be explosions of this kind. Micro Big Bangs at the surface of the Sun could cause solar wind and coronal mass ejections (see this).

In the case of the solar wind and related phenomena, magnetic fields are involved and must be an essential aspect of the phenomena. The mechanism for the acceleration of trace amounts of heavy ions and atomic nuclei of elements such as carbon, nitrogen, oxygen, neon, magnesium, silicon, sulfur, and iron, encountered also in the solar plasma, is believed to involve magnetic fields but is not understood.

The key ideas are as follows.

  1. The mini and micro Big Bangs could be seen as the TGD counterpart for the cosmic expansion replacing it with a sequence of rapid bursts.
  2. A phase transition changing the effective Planck constant and relevant p-adic length scale could take place. This phase transition would liberate large cyclotron energy making it possible to overcome the gravitational force.
  3. The notion of magnetic bubble (see this and this) identified as a layer formed by a network of monopole flux tubes and forming the basic structural element of the magnetic body together with radial U-shaped gravitational monopole flux tubes could be crucial. For instance, this leads to a model for the solar wind based on the reconnection of flux tubes of a surface layer of the Sun formed by magnetic monopole flux tubes.
  4. A natural guess is that nuclear fusion is involved in the case of the Sun. I have considered several options for what the fusion-like process could be in the TGD Universe. The standard option is ordinary nuclear fusion in the core but it is plagued by several conflicts with empirical facts.
The first TGD inspired proposal is based on "cold fusion" (see this and this) identified as dark fusion giving rise to dark proton sequences with dark Compton length of order electron Compton length. The dark nucleon sequences would spontaneously decay to ordinary nuclei. This could ignite ordinary fusion but one can also consider the option that ordinary fusion is not needed at all.
  1. The elegance of the "no hot fusion" option inspires the question whether dark fusion at a surface layer of the Sun could produce the radiation energy of the Sun and the solar wind. The energy scale for the gamma rays from the transition of the dark nuclei is about 10 keV and considerably lower than the MeV scale for the ordinary nuclei.
  2. This option should be consistent with the ordinary model of nuclear fusion. The first objection is that this seems to realize the stellar evolution so that it occurs at the level of a single star. This view conforms with the fact that nuclei up to nuclear masses of Fe are present in the solar wind. It has been also found that the distribution of stars in various stages of evolution does not seem to depend on the cosmic time.
  3. Can this view be consistent with the assumption that the evolution of stars is by supernova explosions providing material for the subsequent generation of stars? Zero energy ontology allows us to consider the possibility that the supernova explosions are quantum tunnelling events involving two "big" state function reductions (BSFRs) changing the arrow of time. This view might allow us to understand why the fraction of the heavier nuclei in the surface layer increases in the supernova explosions.
There is also a second proposal. I have considered a rather radical, one might call it totally crazy, proposal (see this) that the Sun contains a surface layer in which the monopole flux tubes carry nuclei of M89 hadron physics with a mass scale which is 512 times higher than that of the ordinary hadron physics.
  1. The transformation of M89 nuclei to ordinary nucleons in p-adic cooling would be responsible for the solar wind and also for the energy production of the Sun. The interior of the Sun could be totally different from what has been believed. This layer would be gravitationally dark and have a thickness of the order of the gravitational Compton length of the Sun, which is RE/2.
  2. This model should reproduce the predictions of the standard model of solar energy production assuming nuclear fusion in the solar core. Suppose that the dark fusion at the surface layer produces the same distribution of nuclei as the ordinary fusion. Suppose that the end product of M89→ M107 transition consists of dark nuclei of M107 hadron physics, which spontaneously transform to the ordinary nuclei. If the composition of the solar wind codes for the outcome of the ordinary fusion, the model could be consistent with the standard model.
  3. Ordinary nuclear reactions (which could take place as dark fusion by tunnelling via two BSFRs) are possible between the ordinary nuclei produced in the phase transition and affect the distribution of the nuclei. There are some indications that "cold fusion" produces the same distribution of nuclei and these indications have been used as a justification for the claims about fraud.
The magnetic fields should play an important role so that an estimate for the cyclotron energy in the case of a solar magnetic field is in order.
  1. For the Earth the cyclotron frequency of the proton in the endogenous magnetic field, with a nominal value Bend = .2 Gauss assigned to the monopole flux tubes, is 300 Hz, and the corresponding energy is Ec= ℏgr,E eB/mp= 4.6 eV. This energy is higher than the gravitational binding energy of protons, about 1 eV at the surface of the Earth (note however that the gravitational binding energy increases below the surface). This could make it possible for a transition ℏgr,E→ ℏ or a transition 1/β0= n→ n-1 to provide the energy needed for the explosion throwing out a surface layer of the Earth giving rise to the Moon.

    The existence of this kind of layer and a reduction of ℏgr, say a transition 1/β0= 2→ 1, could make energetically possible also the expansion of the radius of the Earth by a factor of 2.

  2. What does one obtain in the case of Mars? Could the gravitational binding energy be compensated by the liberation of dark cyclotron energy as the value ℏgr= GMmp/β0 for Mars is reduced to a smaller value? The ratio of the mass of Mars to that of the Earth is MMars/ME ∼ .1. If the monopole flux tubes carry a magnetic field of strength Bend,E= .2 Gauss, the cyclotron energy of the proton is scaled down to .46 eV. The gravitational binding energy for protons at the surface of the Earth is about 1 eV and at the surface of Mars about .1 eV. Also now the liberation of the dark cyclotron energy for protons in a phase transition increasing the value of β0 could make the explosion of the surface layer possible.
  3. What about the Sun? Somewhat surprisingly, the magnetic field at the surface of the Sun is of the same order of magnitude as the magnetic field of the Earth. One can estimate the value of the solar gravitational Planck constant ℏgr= GMSmp/β0 in the case of protons with mass m=mp and the corresponding dark cyclotron energy. Nottale's model for the planetary orbits as Bohr orbits implies β0 ∼ 2^(-11) for the Sun and suggests β0 ∼ 1 for the Earth. The ratio of the solar mass to the mass of the Earth is MS/ME ∼ 3× 10^5.

    For the Sun with β0= 2^(-11), Ec is scaled up by the factor (MS/ME)/β0 to Ec= 2.76 GeV, almost 3 proton masses, which looks nonsensical! In the radical model for solar energy production involving M89 hadrons this scale would be natural. A possible interpretation is as the nuclear binding energy for M89 nuclei: one has 512× 5 MeV= 2.56 GeV.

    For 1/β0=1, the solar cyclotron energy would be Ec= 1.38 MeV, which corresponds to the energy scale of weak nuclear interactions. They would make possible weak transitions transforming neutrons to protons and vice versa even if the final state would consist of dark nucleon sequence. The nuclear binding energy per nucleon for light nuclei is around 7 MeV and looks somewhat too large: note however that 1/β0=n>1 is possible for the horizontal monopole flux tubes and is consistent with quantum criticality.

    Could one think that the p-adic cooling of M89 nuclei to ordinary nuclei begins with their decay to M89 nucleons such that the gravitational cyclotron energy for M89 nucleons (which does not depend on the mass) at the monopole flux tubes with a magnetic field strength of about Bend= .2 Gauss provides the energy needed to split the M89 nuclear bonds, so that the outcome is free M89 nucleons unstable against the p-adic cooling to M107 nuclei? (The scaling chain behind these estimates is checked numerically in the sketch after this list.)
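
The scaling chain used in these estimates is easy to verify. A minimal numeric sketch (my addition; it takes the text's Earth value Ec = 4.6 eV and the mass ratios as inputs rather than deriving them):

```python
import math

# Dark cyclotron energy scalings: Ec = hbar_gr * eB/m with hbar_gr = GMm/beta0,
# so at fixed B the energy scales like (M/M_E)/beta0 relative to the Earth.
e, m_p = 1.602e-19, 1.673e-27            # C, kg
B_end  = 0.2e-4                          # T (.2 Gauss)

f_c = e * B_end / (2 * math.pi * m_p)    # ordinary proton cyclotron frequency
print(f"f_c(proton, .2 Gauss) = {f_c:.0f} Hz")          # ~ 300 Hz

Ec_earth     = 4.6                       # eV, the text's Earth value (beta0 = 1)
M_ratio_sun  = 3e5                       # M_S/M_E, the text's value
M_ratio_mars = 0.1                       # M_Mars/M_E

print(f"Mars, beta0 = 1     : {Ec_earth * M_ratio_mars:.2f} eV")          # ~ .46 eV
print(f"Sun,  beta0 = 1     : {Ec_earth * M_ratio_sun / 1e6:.2f} MeV")    # ~ 1.38 MeV
print(f"Sun,  beta0 = 2^-11 : {Ec_earth * M_ratio_sun * 2**11 / 1e9:.2f} GeV")  # ~ 2.8 GeV
```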

What could these results mean? Solar wind contains nuclei up to Fe, the heaviest nucleus produced in ordinary fusion and there is also a mysterious finding that the solar surface contains solid iron. One can consider several options.
  1. Quantum criticality suggests several values for ℏgr corresponding to different values of β0. Suppose that horizontal flux tubes at the solar surface have β0 ∼ 1 whereas the gravitational U-shaped flux tubes with β0 ∼ 2^(-11) are radial.

    For the horizontal flux tubes with 1/β0 ≥ 1 and a cyclotron energy of about 1.38 MeV, ordinary nuclear reactions and even fusion might take place near the surface of the Sun. Could dark cyclotron photons from monopole flux tubes with 1≤ 1/β0 ≤ 7, transforming to ordinary gamma radiation, ignite the ordinary nuclear fusion in the surface layer and in this way explain why the standard model works so well?

  2. The second, more radical, option is that the dark nuclei as products of dark fusion and having a binding energy scale of 2.6 GeV, possibly produced as the outcome of the M89→ M107 transition, produce first ordinary nucleons as the dark cyclotron photons with energy about 2.6 GeV split the M89 nuclear bonds. These nucleons could form dark nucleons with nuclear binding energy about 10 keV, which in turn transform to ordinary nucleons as in dark fusion. Note that also the ordinary nuclear fusion could be reduced to dark fusion involving tunnelling by two BSFRs. If so, the attempts to realize nuclear fusion in nuclear reactors would be based on wrong assumptions about the underlying physics.
  3. The density of the Sun at the photosphere is ∼ 10^(-4) kg/m^3 whereas the average density of the Sun is 1.41× 10^3 kg/m^3 (the average density of the Earth is 5.51× 10^3 kg/m^3). The density at the photosphere is extremely low so that surface fusion there cannot explain the energy production of the Sun. The surface fusion layer should exist at some depth where the density is not far from the average density of the Sun. One candidate is a layer above the surface of the solar core. As found, its thickness should be of the order of the Earth radius.
  4. The solar core, usually believed to be the seat of hot fusion, has a radius of about .2× RS and its mass is roughly .8 per cent of the mass of the Sun. This brings to mind the strange finding that .5 per cent of the mass needed to explain the fusion energy power produced in the solar core seems to be missing. Could this missing mass be associated with a layer near the surface of the Sun and could it be responsible for the solar wind?

    The radius of the Earth is 1/109 times the radius of the Sun and the gravitational Compton length Lgr,S of the Sun equals Lgr,S= RE/2 and is therefore .5 per cent of RS! What could these coincidences mean? If the Sun has a layer of thickness Δ R with the average density of the Sun, one has Δ M/M = 3 (ρS/ρE) Δ R/R ∼ .75 Δ R/R. For Δ R= RE one obtains Δ M/M ∼ .75 per cent, not far from .5 per cent. Could the Sun have a gravitationally dark layer of thickness about RE with density .75 ρS? This is indeed assumed in the proposed model (see this).

See the article Some Solar Mysteries or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Sunday, January 19, 2025

Martian dichotomy from the TGD point of view

Mars has a very strange property called Martian dichotomy (see this). The Northern and Southern hemispheres of Mars are very different. The crust is significantly thicker in southern highlands than in northern lowlands. The mountains at southern highlands rise even 6 kilometers higher than in northern lowlands. Southern rocks are magnetized suggesting that Mars has had a large scale magnetic field. Mars still has short scale magnetic fields as the appearance of Martian auroras tells. Southern highlands appear to be older than the northern lowlands: the age is estimated from the density of impact craters. It is also believed that there has been a vast water ocean in northern lowlands.

Several explanations have been proposed. A mega-impact or several impacts could have produced the depression in the crust in the northern lowlands area. A second explanation is in terms of plate tectonics, which would be asymmetric.

Also Mars has analogues of earthquakes; they could be called marsquakes. It is claimed that the study of the marsquakes has led to the understanding of the Martian dichotomy: see the popular article and the original article. Its origin would relate to the dynamics deep inside the planet. The new finding is that the seismic waves associated with the marsquakes lose energy more quickly in the southern highlands. This would mean that the temperature in the highlands is higher. These findings suggest that the asymmetry is caused by the internal dynamics of Mars rather than impacts.

What could one say about the Martian dichotomy in the TGD framework? TGD adds two new pieces to the puzzle.

  1. The Moon has an analogous asymmetry, but now the hemispheres correspond to the side that we always see and the side that we never see. This is due to the phase locking of the spinning rotation of the Moon with its orbital rotation around the Earth. The TGD based model (see this) assumes that the Earth lost its upper layer in a mini big bang (see this and this), which then formed the Moon. The inner and outer surfaces of the Moon would correspond to the lower and upper boundary of the layer respectively and this would explain their difference.
  2. The crazy idea is that the northern and southern hemispheres of Mars could have lost different masses in an asymmetric mini big bang leading to the birth of Phobos and Deimos, the two moons of Mars (this). The asymmetry should reflect itself in the properties of these moons. The moons have an irregular shape. Phobos has a diameter of 22.2 km, mass 1.1× 10^16 kg, and semimajor axis 9.4× 10^3 km. Deimos has a diameter of 12.6 km, mass 1.5× 10^15 kg, and semimajor axis 23.5× 10^3 km.
  3. This suggests the associations northern hemisphere-more massive Deimos-thicker crust-earlier-farther from Mars and southern hemisphere-lighter Phobos-thinner crust-later-nearer to Mars.

    The more massive Deimos would have originated in a mini big bang throwing out a considerably thicker layer from the northern Martian hemisphere. This would explain the thinner northern crust. A large fraction of the magnetic field associated with the surface layer would have been blown out. In the TGD view of the magnetic fields of the Earth and the Sun, the monopole flux tube part of the magnetic field would be concentrated in a surface layer. One could understand why the southern hemisphere has a thicker crust, why it has more impact craters and therefore looks older, and why it still has a magnetic field consisting of monopole flux tubes. The orbital parameters do not depend on the mass of the moon (Equivalence Principle). Deimos would have however originated earlier than Phobos, received a recoil momentum, and would now be farther from Mars and Phobos.

The key question concerns the energetics of the transition. Where does the energy compensating the reduction of the gravitational binding energy come from? An analogous question is encountered in the model for the formation of the Moon as a mini Big Bang throwing out a spherical layer from the surface of the Earth. It is also encountered in the TGD version of the Expanding Earth model (see this and this) assuming that the radius of the Earth grew by a factor of 2 in a relatively short time scale and induced the Cambrian Explosion as life from underground oceans burst to the surface. Mini Big Bangs would also cause the formation of planets as a surface layer of a star explodes (see this and this). Also supernovas would be explosions of this kind. Micro Big Bangs could give rise to the solar wind and solar eruptions (see this).

The magnetic fields should play an important role so that an estimate for the cyclotron energy in the case of a planetary magnetic field is in order.

  1. For the Earth the cyclotron frequency of the proton in the endogenous magnetic field, with a nominal value Bend = .2 Gauss assigned to the monopole flux tubes, is 300 Hz, and the corresponding energy is Ec= ℏgr,E eB/mp= 4.6 eV. This energy is higher than the gravitational binding energy of protons, about 1 eV at the surface of the Earth. This could make it possible for a transition ℏgr,E→ ℏ or a transition 1/β0= n→ n-1 to provide the energy needed for the explosion throwing out a surface layer of the Earth giving rise to the Moon.

    The existence of this kind of layer and a reduction of ℏgr, say a transition 1/β0= 2→ 1, could make energetically possible also the expansion of the radius of the Earth by a factor of 2.

  2. What does one obtain in the case of Mars? Could the gravitational binding energy be compensated by the liberation of dark cyclotron energy as the value ℏgr= GMmp/β0 for Mars is reduced to a smaller value? The ratio of the mass of Mars to that of the Earth is MMars/ME ∼ .1. If the monopole flux tubes carry a magnetic field of strength Bend,E= .2 Gauss, the cyclotron energy of the proton is scaled down to .46 eV. The gravitational binding energy for protons at the surface of the Earth is about 1 eV and at the surface of Mars about .1 eV (these values are checked numerically below). Also now the liberation of the dark cyclotron energy for protons in a phase transition increasing the value of β0 could make the explosion of the surface layer possible.
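
The gravitational binding energies per proton quoted above follow from GMmp/R. A minimal numeric check (my addition, using standard planetary masses and radii):

```python
# Gravitational binding energy per proton at a planetary surface: GM m_p / R.
G, m_p, eV = 6.674e-11, 1.673e-27, 1.602e-19   # SI units

bodies = {
    # name: (mass in kg, radius in m)
    "Earth": (5.972e24, 6.371e6),
    "Mars":  (6.417e23, 3.390e6),
}
for name, (M, R) in bodies.items():
    E_b = G * M * m_p / R / eV
    print(f"{name}: GMm_p/R = {E_b:.2f} eV")
# Earth: ~.65 eV ("about 1 eV" in the text); Mars: ~.13 eV ("about .1 eV").
```
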
See the article Moon is a mysterious object and the chapter Magnetic bubbles in TGD Universe: part I.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Friday, January 17, 2025

Could right handed neutrinos predicted by TGD correspond to galactic dark matter?

Right handed neutrinos are electroweak and color ghosts and have been proposed as a candidate for particles explaining galactic dark matter (see this).

One of the big problems of the standard model is that neutrinos are massive and this requires that neutrinos must also have right-handed modes νR.

TGD space-time is a 4-surface in H=M4×CP2 and quark and lepton spinor fields correspond to those of H. Spinor fields of H are induced to the space-time surfaces. TGD predicts standard model symmetries and fields but differs from the standard model in that also νR is predicted. The massless νR modes are covariantly constant in CP2 degrees of freedom and have neither electroweak nor color interactions: they are electroweak ghosts. In TGD νR has also massive modes, which are not color ghosts and together with left-handed neutrino modes can give rise to massive neutrinos.

Right-handed neutrinos have only gravitational interactions and are therefore excellent candidates for the fermionic dark matter. The recent TGD based view of dark matter does not favor the idea that right handed neutrinos could have anything to do with galactic dark matter.

One must be cautious however. The TGD based explanation for galactic dark matter is in terms of Kähler magnetic and volume energy involving only geometrized bosonic fields. Could quantum classical correspondence imply that this classical energy corresponds to the energy assignable to right-handed neutrinos? Classically, dark matter would correspond to the sum of the Kähler magnetic and volume energies. Kähler magnetic energy can be regarded as an electroweak contribution. Since the volume energy depends only on the induced metric, it could correspond to right-handed neutrinos.

See the articles New Particle Physics Predicted by TGD: Part I and New Particle Physics Predicted by TGD: Part II.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

What are the mysterious structures observed in the lower mantle?

I learned of very interesting results related to geology. The Dailymail popular article (see this) tells about massive structures in the Earth's deep mantle below the Pacific Ocean near the mantle-core boundary. The article "Full-waveform inversion reveals diverse origins of lower mantle positive wave speed anomalies" by Schouten et al, published in Scientific Reports (see this), describes the findings.

There are cold regions deep within the Earth where seismic waves behave in unexpected ways. The chemical composition can involve heavy elements in these regions. These inhomogeneities lead to an increase of the sound velocity. These regions, located 900 to 1,200 kilometers beneath the Pacific Ocean, defy expectations based on conventional plate tectonics theories. These kinds of structures can result from the subduction of continental plates leading to the sinking of a plate into the mantle. There are however no subduction records in these ocean regions so that the mechanism must be different.

It seems that the recent view of the dynamics of the Earth's mantle is in need of a profound updating. It has been proposed that the structures could be the remnants of ancient, silica-rich materials from the early days of the Earth when the mantle was formed billions of years ago. Alternatively, they may be areas where iron-rich rocks have accumulated over time due to the constant movement of the mantle. However, researchers are still unsure about the exact composition of these deep Earth structures.

Here is the abstract of the article of Schouten et al.

Determining Earth's structure is paramount to unravel its interior dynamics. Seismic tomography reveals positive wave speed anomalies throughout the mantle that spatially correlate with the expected locations of subducted slabs. This correlation has been widely applied in plate reconstructions and geodynamic modelling. However, global travel-time tomography typically incorporates only a limited number of easily identifiable body wave phases and is therefore strongly dependent on the source-receiver geometry.

Here, we show how global full-waveform inversion is less sensitive to source-receiver geometry and reveals numerous previously undetected positive wave speed anomalies in the lower mantle. Many of these previously undetected anomalies are situated below major oceans and continental interiors, with no geologic record of subduction, such as beneath the western Pacific Ocean. Moreover, we find no statistically significant correlation between positive anomalies as imaged using full-waveform inversion and past subduction. These findings suggest more diverse origins for these anomalies in Earth's lower mantle, unlocking full-waveform inversion as an indispensable tool for mantle exploration.

Here some terminology is perhaps in order. Seismic waves are acoustic waves and their propagation in the mantle is studied. A positive speed anomaly means that the sound speed is higher than expected. The lowering of the temperature or an increase of the density due to the presence of iron, silica, or magnesium can cause these kinds of anomalies. The Pacific Ocean and the interior regions of plates do not have any subduction history so that the anomalies cannot be "slabs", pieces of continental plates which have sunk into the mantle.

Why these findings are interesting from the TGD point of view is that TGD suggests that the Cambrian Explosion roughly 500 million years ago was accompanied by a rather rapid increase of the Earth's radius by a factor of 2 (see this, this and this). In the TGD inspired cosmology, the cosmic expansion occurs as rapid jerks and the Cambrian Explosion would be associated with this kind of jerk. This sudden expansion would have broken the crust to pieces and led to the formation of oceans as the underground oceans burst to the surface. The multicellular life evolved in the underground oceans would have burst to the surface and this could explain the mysterious sudden appearance of complex multicellular life forms in the Cambrian Explosion. In this event tectonic plates and subduction would have emerged.

I have not earlier considered what happened in the lower mantle in the sudden expansion of the Earth increasing its radius by a factor of 2 and giving rise to the Cambrian Explosion. Did these kinds of cracks occur also at the mantle-core boundary and lead to the formation of the recently observed structures also below regions where there is no geologic record of subduction? Could at least some regions, which are believed to be caused by the sinking of parts of continental plates, have such a structure?

Could the Cambrian explosion be a mini Big Bang that happened in the lower mantle and forced the motion of the upper layers leading to the increase of the radius of the Earth? The longstanding problem has been the identification of the energy needed to overcome the gravitational force. The order of magnitude of the gravitational binding energy per nucleon is about 1 eV at the surface of the Earth and behaves like GM(R)m/R ∝ R^2 below it. How did the matter above the monopole flux tube layers get this energy?

  1. Since the monopole flux tubes are the key actors, a natural first guess is that there was a layer of dark protons at monopole flux tubes in the lower mantle, say above the core, and that the gravitational energy is compensated by the cyclotron energy of a dark proton with gravitational Planck constant ℏgr(M(below)) at a monopole flux tube carrying a magnetic field with an order of magnitude of the endogenous magnetic field. The value of Bend need not be the same as its value Bend= .2 Gauss at the surface of the Earth.
  2. If the monopole flux behaves like 1/R^3, as the dipole character of the Earth's magnetic field suggests, and the mass appearing in the gravitational Planck constant is the mass M(R)= (R/RE)^3 ME below the monopole flux tube layer, the cyclotron energy is the same as at the surface of the Earth: the decrease of B like 1/R^3 is compensated by the growth of M(R) like R^3 (see the numerical sketch after this list). In the explosion, the value of ℏgr would be reduced dramatically, perhaps to ℏ, and the cyclotron energy would be liberated.

    In the interior of the Earth, the gravitational potential energy for mass m is of the form Egr= GME m Vgr(R), Vgr(R)= R^2/(2RE^3) - 3/(2RE), which approaches at the center of the Earth the value -(3/2)GME m/RE and at the surface of the Earth the value -GME m/RE.

  3. All nuclei must receive the cyclotron energy compensating the gravitational binding energy and a large fraction should therefore be dark before the explosion. The gravitational Planck constant ℏgr= GMm/β0 of a nucleus is proportional to its mass number so that the cyclotron energy ℏgr ZeB/m does not depend on the mass number A of the ion of mass m ≃ Amp. For 1/β0=1, the extreme option is that the entire Earth's interior contains gravitationally dark nuclei, meaning that there is a large negatively charged exclusion zone created in the Pollack effect, perhaps giving rise to the electric body assignable to the Earth. Can this be consistent with what is known about the Earth's history?

    For β0= 2^(-11), assignable with the magnetic body of the Sun-planet system, the value of the cyclotron energy would be about 10 keV, which happens to be the energy scale of "cold fusion" identified as dark fusion in the TGD framework (see this). Could the formation of dark nuclei with nucleon radius of order electron Compton length and with a dark nuclear binding energy of order 10 keV involve the formation of the monopole flux tubes with this dark cyclotron energy?
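
Both claims of item 2, the R^2 scaling of the binding energy and the depth-independence of the cyclotron energy when B ∝ 1/R^3 compensates M(R) ∝ R^3, can be checked directly. A minimal sketch (my addition; a uniform density Earth is assumed, as in the interior potential above):

```python
# Inside a uniform-density Earth: the binding energy per proton GM(R)m_p/R
# falls like R^2, while the dark cyclotron energy, proportional to
# hbar_gr * B, i.e. to M(R) * B(R), stays constant when B ~ 1/R^3
# compensates M(R) = (R/R_E)^3 * M_E.
G, m_p, eV = 6.674e-11, 1.673e-27, 1.602e-19
M_E, R_E   = 5.972e24, 6.371e6

for x in (1.0, 0.75, 0.5, 0.25):           # x = R/R_E
    R   = x * R_E
    M_R = M_E * x**3                       # mass below radius R
    E_b = G * M_R * m_p / R / eV           # binding energy per proton, eV
    Ec_ratio = x**3 * (1 / x**3)           # M(R)/M_E times B(R)/B(R_E)
    print(f"R/R_E = {x:4.2f}: GM(R)m_p/R = {E_b:.3f} eV, Ec ratio = {Ec_ratio:.1f}")
```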

See the article Expanding Earth Hypothesis and Pre-Cambrian Earth or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, January 16, 2025

Holography = holomorphy vision and elliptic functions and curves in TGD framework

The holography = holomorphy principle makes it possible to solve the extremely nonlinear partial differential equations for space-time surfaces exactly by reducing them to algebraic equations involving an identically vanishing contraction of two holomorphic tensors of different types. In this article, space-time counterparts of elliptic curves and doubly periodic elliptic functions, in particular the Weierstrass function, are considered as an application of the method.
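
As background, the standard and TGD independent facts about the Weierstrass function: it is doubly periodic with respect to a lattice Λ and satisfies a purely algebraic differential equation,

    ℘(z)= 1/z^2 + Σ_{ω∈Λ, ω≠0} [1/(z-ω)^2 - 1/ω^2] ,   (℘')^2= 4℘^3 - g_2℘ - g_3 ,

so that z → (℘(z),℘'(z)) identifies the torus C/Λ with the elliptic curve y^2= 4x^3 - g_2x - g_3. This algebraicity is what makes elliptic functions natural in a framework where field equations reduce to algebraic conditions.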

See the article Holography = holomorphy vision and elliptic functions and curves in TGD framework.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Tuesday, January 14, 2025

What could happen in the transition between hadronic phase and quark gluon plasma?

Quanta Magazine (see this) told about the work of Rithya Kunnawalkam Elayavalli, who studies the phase transition between the quark-gluon phase and the hadron phase, which is poorly understood in QCD. Even hadrons are poorly understood. The reason is believed to be that perturbative QCD does not exist mathematically at low energies (and long distances) since the running QCD coupling strength diverges.

Neither hadrons nor the transition between the quark-gluon phase and the hadron phase are well understood. The transition from the hadron phase to the quark-gluon phase, interpreted in QCD as color deconfinement, is assumed to occur, but the empirical findings are in conflict with the theoretical expectations. In TGD the interpretation of the observed transition is very different from that inspired by QCD (see this and this).

  1. In TGD hadrons correspond to geometric objects, space-time surfaces. One way to end up with TGD is to generalize hadronic string models by replacing hadronic strings with string-like 3-surfaces. These string-like 3-surfaces, which I call monopole flux tubes, are present in the TGD Universe in all scales and appear as "body" parts of field bodies in the TGD geometrization of classical fields.
  2. The TGD counterpart of the deconfinement transition need not be deconfinement as in QCD. What is clear is that this transition should involve quantum criticality and therefore long range fluctuations and quantum coherence.
What could this mean? Number theoretic vision of TGD comes to the rescue here.
  1. TGD predicts a hierarchy of effective Planck constants labelling phases of ordinary matter. The larger the value of heff, the longer the quantum coherence length, which in TGD is identified as the geometric size scale of the space-time surface, say a hadronic string-like object, assignable to the particle.
  2. Does the transition involve quantum criticality, so that a superposition of space-time surfaces with varying values of heff ≥ h is present? The size scale of the hadron, proportional to heff, would quantum fluctuate.
  3. The number theoretic view of TGD also predicts a hierarchy of p-adic length scales. p-Adic mass calculations strongly suggest that p-adic primes near certain powers of 2 are favored, so that a kind of period doubling would be involved. In particular, Mersenne primes and their Gaussian counterparts are favored. The p-adic prime p is identified as a ramified prime for an extension E of rationals, and heff= nh_0 with n the dimension of E, so that p and heff correlate. The p-adic prime p characterizes a p-adic length scale proportional to p^(1/2) and a mass scale proportional to 1/p^(1/2).
  4. In particular, the existence of p-adic hierarchies of strong interaction physics and electroweak physics is highly suggestive. Mersenne primes M(n)= 2^n-1 and their Gaussian counterparts M(G,n)= (1+i)^n-1 would label especially interesting candidates for the scaled-up variants of these physics.

    Ordinary hadron physics would correspond to M107. The next hadron physics, corresponding to M89, would have a baryon mass scale 512 times higher than that of ordinary hadron physics (a small numerical check follows below). This is the mass scale studied at the LHC, and there are several indications for bumps having an interpretation as M89 mesons with masses scaled up by a factor 512. Attempts to identify these bumps in terms of SUSY failed, and the bumps were forgotten.
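
The factor 512 is just the p-adic mass scale relation m ∝ 2^(-k/2) applied to the step k=107 → k=89; a couple of lines verify the arithmetic and the primality of the Mersenne numbers involved (the sympy import is for illustration only):

  from sympy import isprime
  print(2**((107 - 89)/2))         # 512.0: ratio of the M89 and M107 mass scales
  for k in (89, 107, 127):
      print(k, isprime(2**k - 1))  # True in each case: these are Mersenne primes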

So, what might happen in the TGD counterpart of the deconfinement transition?
  1. Could color deconfinement be replaced by a transition from M107 hadron physics to M89 hadron physics, in which hadrons for the ordinary value heff=h have a size 512 times smaller than that of ordinary hadrons? At quantum criticality the size would however be that of ordinary hadrons: this is possible if one has heff=512h. At high enough energies heff=h holds true and M89 hadrons are really small.
  2. Various exotic cosmic ray events (fireballs, Gemini, Centauro, etc.) could correspond to these events (see this and this). In the TGD inspired model of the Sun, M89 hadrons forming a surface layer of the Sun would play a fundamental role: they would produce the solar wind and solar energy as they decay to ordinary M107 hadrons (see this).

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, January 13, 2025

The difference between real and imagined

Gary Ehlenberg sent a link to an interesting Quanta Magazine article discussing the difference between imagination and perception (see this).

Some time ago I had discussions with a friend who claimed that she really sees the things that she imagines. She also has a very good memory for places, almost like a sensory memory. I had thought that this ability is very rare: for instance, idiot savants have sensory memories.

So, do I suffer from aphantasia, an inability to imagine sensorily? I have sensory percepts during dreams. I can see and hear in the hypnagogic state at the border of sleep and wakefulness. In my great experience I quite concretely saw my thoughts, and this led to the urge to understand what consciousness is. I can imagine, but I do not usually see any images: only after an emotionally intense discussion with someone can I almost hear the spoken words. So, do I suffer from aphantasia in my normal state of mind?

The TGD inspired view of neuroscience leads to a model for the difference between real and imagined percepts based on my own experience (see this, this, this and this). Imagined percepts would be generated by a virtual sensory input from the field body realized as dark photon signals. They would not reach the retinas but would end up at some higher level in the visual neural pathway, such as the lateral geniculate nuclei or the pineal gland, the "third eye". The pineal gland is the more plausible candidate: in some animals it serves as a real third eye located outside the head. Could it serve as the seat of auditory and other imagined mental images?

At least in my own case, seeing with the pineal gland would usually be subconscious to me. What about people who really see their imaginations? Could they consciously see also with their pineal glands, so that the pineal gland would define the mental image as a subself? Or could some fraction of the virtual signals from the field body reach the retinas? For people suffering from aphantasia, the first option predicts that the pineal gland corresponds to a sub-sub-self, which does not give rise to a mental image but to a mental image of a subself.

Also sensory memories are possible. Does this proposal apply to them as well? My grandson Einar is 4 years old. He read to me a story in a picture book that his parents had read to him. Einar does not yet recognize letters, nor can he read. He seems to have a sensory memory and repeated what he had heard. Maybe all children have this kind of sensory memories, but as cognitive skills develop they are replaced by conceptual memories: "house" as a representative of the full picture of a house means a huge reduction in the number of bits and therefore in the amount of metabolic energy needed. Could it be that aphantasia is the price paid for a high level of cognition? Could this distinguish between artists and thinkers?

See the chapter TGD Inspired Model for Nerve Pulse.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Sunday, January 12, 2025

Could space-time or the space of space-time surfaces be a Lagrangian manifold in some sense?

Gary Ehlenberg sent a link to a tweet on X (see this) by Curt Jaimungal. The tweet has the title "Everything is a Lagrangian submanifold", expressing an idea of Alan Weinstein (see this), which states that space-time is a Lagrangian submanifold (see this) of some symplectic manifold. Note that the phase space of classical mechanics is a basic example of a symplectic manifold.

Lagrangian manifolds emerge naturally in canonical quantization. They reduce the degrees of freedom of the phase space by one half, which realizes the Uncertainty Principle geometrically. Also the holography = holomorphy principle realizes the Uncertainty Principle by reducing the degrees of freedom by one half.
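
For the reader's convenience, the standard definition: a submanifold L of a symplectic manifold (M,ω) with dim(M)= 2n is Lagrangian if the symplectic form vanishes on it,

    ω|_L= 0 ,   dim(L)= n .

In the phase space R^2n with ω= Σ dp_i∧dq_i the basic example is the configuration space {p=0}: exactly half of the coordinates survive, which is the geometric content of the statement that Lagrangian manifolds halve the degrees of freedom.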

What about the situation in TGD (see this, this and this)? Does the proposal of Alan Weinstein have some analog in the TGD framework?

Consider first the formulation of Quantum TGD.

  1. The original approach of TGD relied on the notion of Kähler action (see this). The reason was that it had exceptional properties. The Lagrangian manifolds L of CP2 give rise to vacuum extremals of the Kähler action: any 4-surface in M4×L ⊂ H= M4×CP2 is a vacuum extremal of this action. At these space-time surfaces the induced Kähler form vanishes, and so does the Kähler action, a non-linear analog of the Maxwell action.

    The small variations of the Kähler action around these vacuum extremals vanish up to second order, so that the action has no kinetic term and ordinary perturbation theory in the QFT sense (based on path integral) would completely fail. The addition of a volume term to the action cures the situation; in the twistorialization of TGD it emerges naturally and brings in the analog of the cosmological constant not as a fundamental constant but as a dynamically generated parameter. Therefore scale invariance would not be broken at the level of the action.

  2. This was however not the only problem. The usual perturbation theory would be plagued by an infinite hierarchy of infinities much worse than those of ordinary QFTs: they would be due to the extreme non-linearity of any general coordinate invariant action density as a function of the H coordinates and their partial derivatives.
These problems eventually led to the notion of the "world of classical worlds" (WCW) as an arena of dynamics, identified as the space of 4-surfaces obeying what I now call holography, realized in some sense (see this, this, this and this). It took decades to understand in what sense the holography is realized.
  1. The 4-D general coordinate invariance would be realized in terms of holography. The definition of the WCW geometry assigns to a given 3-surface a unique or almost unique space-time surface at which general coordinate transformations can act. The space-time surfaces are therefore analogs of Bohr orbits, so that the path integral disappears or reduces to a sum in the case that the classical dynamics is not completely deterministic. The counterparts of the usual QFT divergences disappear completely and the Kähler geometry of WCW takes care of the remaining divergences.

    It should be noticed in passing that a year or two ago I discussed space-time surfaces which are Lagrangian manifolds of H, with M4 endowed with a generalization of the Kähler metric. This generalization was motivated by twistorialization.

  2. Eventually the realization of holography in terms of generalized holomorphy emerged, based on the idea that space-time surfaces are generalized complex surfaces of H having a generalized holomorphic structure based on 3 complex coordinates and one hypercomplex coordinate: I call this the Hamilton-Jacobi structure.

    These 4-surfaces are universal extremals of any general coordinate invariant action constructible in terms of the induced geometry, since the field equations reduce to a contraction of two complex tensors of different type having no common index pairs. Space-time surfaces are minimal surfaces and analogs both of solutions of massless field equations and of massless particles, extended from point-like particles to 3-surfaces. Field-particle duality is realized geometrically.

    It is now clear that the generalized 4-D complex submanifolds of H are the correct choice to realize holography (see this).

  3. The universality, realized as action independence, in turn leads to the view that the number theoretic vision of TGD could in principle make possible a purely number theoretic formulation of TGD (see this). There would be a duality between the geometric and number theoretic views (see this), analogous to Langlands duality. The number theoretic view is extremely predictive: for instance, it allows one to deduce the spectrum of the exponential of action defining the vacuum functional for Bohr orbits, and this spectrum does not depend on the action principle.

    The universality means an enormous computational simplification, as does the possibility to construct space-time surfaces as roots of a pair (f1,f2) of generalized analytic functions of the generalized complex coordinates of H (a toy illustration follows below). The field equations, which are usually partial differential equations, reduce to algebraic equations. The function pairs form a hierarchy of increasing complexity, starting with polynomials and continuing with analytic functions: both have coefficients in some extension of rationals, and even more general coefficients can be considered.
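
As a toy illustration of how a root condition replaces partial differential equations with algebra, consider two polynomial conditions in symbolic coordinates; the coordinates and the pair (f1,f2) below are invented for illustration and carry no TGD content:

  import sympy as sp

  z1, z2, z3, w = sp.symbols('z1 z2 z3 w')  # stand-ins for generalized complex coordinates
  f1 = w - z1*z2                            # a toy pair of "analytic" conditions
  f2 = z3 - z1**2
  # the root set f1 = f2 = 0 is found by purely algebraic elimination:
  print(sp.solve([f1, f2], [w, z3], dict=True))
  # [{w: z1*z2, z3: z1**2}]: a surface parametrized freely by (z1, z2)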

So, could Lagrangian manifolds appear in TGD in some sense?
  1. The proposal that WCW, as the space of 4-surfaces obeying holography in some sense, has symplectomorphisms of H as isometries has been a basic idea from the beginning. If the holography = holomorphy principle is realized, both generalized conformal transformations and generalized symplectic transformations of H would act as isometries of WCW (see this). This infinite-dimensional group of isometries must be the maximal possible one to guarantee the existence of the Riemann connection: this was observed for loop spaces by Freed. In the case of loop spaces the isometries would be generated by a Kac-Moody algebra.
  2. Holography, realized as Bohr orbit property of the space-time surfaces, suggests that one could regard WCW as an analog of a Lagrangian manifold of a larger symplectic manifold WCWext consisting of 4-surfaces of H appearing as extremals of some action principle. The Bohr orbit property defined by the holomorphy would not hold true anymore.

    If WCW can be regarded as a Lagrangian manifold of WCWext, then the group Sp(WCWext) of symplectic transformations of WCWext would indeed act in WCW. The group Sp(H) of symplectic transformations of H, a much smaller group, could define symplectic isometries of WCWext acting in WCW, just as color rotations give rise to isometries of CP2.

See the article Could space-time or the space of space-time surfaces be a Lagrangian manifold in some sense? or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, January 02, 2025

A new experimental demonstration for the occurrence of low energy nuclear reactions

I learned about highly interesting new experimental results related to low energy nuclear reactions (LENR) from a popular article published in New Energy Times (see this), giving a rather detailed view of what is involved. There is also a research article by Iwamura et al with the title "Anomalous heat generation that cannot be explained by known chemical reactions produced by nano-structured multilayer metal composites and hydrogen gas" published in Japanese Journal of Applied Physics (see this).

Note that LENR replaces the earlier term "cold fusion", which became a synonym for pseudoscience since standard nuclear physics does not allow these effects. In practice, the effects studied are however the same. LENR is often discussed in terms of the Widom-Larsen theory (see this), based on the assumption that the fundamental step in the process is not a strong interaction but a weak interaction: an electron with a large effective mass (in the condensed matter sense) combines with a proton to produce a neutron, which is very nearly at rest and is therefore able to get near the target nucleus. The assumptions that the electron has a large effective mass and that the resulting neutron is very nearly at rest can be challenged. Also, the detailed mechanisms producing the observed nuclear transmutations are not understood in the model.

1. Experiments of the Tohoku group

Consider first the experimental arrangement and results.

  1. The target consists of alternating layers: 6 Cu layers of thickness 2 nm and 6 Ni layers of thickness 14 nm. The total thickness of this layered part is about 100 nm. Below the layer structure is a bulk consisting of Ni with a thickness of 10^5 nm. The temperature of the hydrogen gas is varied during the experiment in the range 610-925 degrees Celsius. This temperature range is below the melting temperatures of Cu (1085 C) and Ni (1455 C).
  2. The target is in a chamber, pressurized by feeding hydrogen gas, which is slowly absorbed by the target. Typically this takes 16 hours. In the second phase, when the hydrogen is fully absorbed, air is evacuated from the chamber and the heaters are switched on. During this phase excess heat is produced. For instance, in the first cycle the heating power was 19 W and the excess heat power was 3.2 W, lasting for about 11 hours. At the end of the second phase the heating is turned off and the cycle is restarted.

    The experiment ran for a total of 166 hours, the input electric energy was 4.8 MJ and the net thermal energy output was .76 MJ.

  3. The figure of the popular article (see this) summarizes the temporal progress of the experiment and pressures and temperatures involved. Pressures are below 250 Pa: note that one atmosphere corresponds to 101325 Pa.

    The energy production is about 10^9 J per gram of hydrogen fuel. A rough estimate gives a thermal energy production of about 10 keV per hydrogen atom. Note that the thermal energy associated with the highest temperature used (roughly 1000 K) is about .1 eV. In hot nuclear fusion the energy gain is roughly 300 times higher, about 3 MeV per nucleon. The ratio of the excess power to the input power is typically below 16 percent in a given phase of the experiment. (A numerical check follows after this list.)

  4. The second figure (see this) represents the depth profiles in the range 0-250 nm for the abundances of Ni-, Cu-, C-, Si- and H- ions in the initial and final situations for an experiment in which an excess heat of 9 W was generated. The original layered structure has smoothed out, which suggests that melting has occurred. This cannot be due to the feed of heat energy alone: the melting of Ni requires a temperature above 1455 C.

    The earlier experiments were carried out in the absorption phase. The recent experiments were performed in the desorption phase, and the heat production was higher. The proposal is that the fact that desorption is a faster process than absorption could somehow explain this.
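
The energy bookkeeping quoted above can be verified in a few lines; all the inputs are figures given in the text:

  NA, eV = 6.022e23, 1.602e-19
  print(0.76e6/4.8e6)            # ~0.16: output/input energy ratio, below 16 percent
  E_atom = 1e9*1.008/NA/eV       # 10^9 J per gram of H converted to eV per H atom
  print(E_atom/1e3)              # ~10.4 keV per hydrogen atom
  print(8.617e-5*1000)           # kT at ~1000 K: ~0.09 eV, i.e. about .1 eV
  print(3e6/E_atom)              # ~290: hot fusion's ~3 MeV per nucleon is ~300x higher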

The Tohoku group has looked for changes in the abundances of elements and for unusual isotopic ratios after the experiments. Iwamura reports that they have seen many unusual accumulations.
  1. However, the most prevalent is an unusually high percentage of the element oxygen showing up below the surface of the multilayer composite, within the outer areas of the bulk.

    Pre-experiment analysis of the oxygen concentration, performed after the fabrication of the multilayer composite, has indicated a concentration of 0.5 to a few percent down to 1,000 nm from the top surface. The Tohoku group has observed many accumulations of oxygen in post-experiment analyses exceeding 50 percent in specific areas.

    Iwamura says that once the multilayer is fabricated, there is no way for atmospheric oxygen to leak below the top surface, at least beyond the first few nanometers. As a cross-check, researchers looked for nitrogen (which would suggest contamination from the atmosphere) but they detected no nitrogen in the samples.

  2. The Coulomb wall makes the low energy reactions of protons with the nuclei of the target extremely slow. If one assumes that the Widom-Larsen model is the correct way to overcome the Coulomb wall, it is natural to ask what kinds of stable end products the reactions p + Ni and p + Cu, made possible by the Widom-Larsen mechanism, could yield. The most abundant isotope of Ni is (Z,A=Z+N)=(28,58) (see this); the other stable isotopes have A ∈ {60,61,62,64}. The reaction Ni+p could lead from the stable Ni isotope (28,62) resp. (28,64) to the stable Cu isotope (29,63) resp. (29,65).

    Cu has (Z,A)=(29,63) as its most abundant isotope (see this), the stable isotopes having A ∈ {63,65}. The reaction Cu+p could lead from (Z,A) ∈ (29,{63,65}) to (Z,A) ∈ (30,{64,66}). This could be followed by an alpha decay to (Z,A) ∈ (28,{60,62}). Iron has 4 stable isotopes with A ∈ {54,56,57,58}. 60Fe is a radionuclide with a half-life of 2.6 million years decaying to 60Ni. The alpha particle could in turn induce the transmutation of C to O. (A small bookkeeping sketch follows below.)
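
The proton capture chains above can be checked mechanically; the sketch below encodes only standard stable isotope data:

  stable = {28: {58, 60, 61, 62, 64},  # Ni
            29: {63, 65},              # Cu
            30: {64, 66, 67, 68, 70}}  # Zn
  for Z, name in ((28, "Ni"), (29, "Cu")):
      for A in sorted(stable[Z]):
          if A + 1 in stable.get(Z + 1, set()):   # p capture: (Z,A) -> (Z+1,A+1)
              print(f"{name}-{A} + p -> (Z={Z+1}, A={A+1}), stable")
  # output: Ni-62 -> Cu-63, Ni-64 -> Cu-65, Cu-63 -> Zn-64, Cu-65 -> Zn-66;
  # a subsequent alpha decay (Z,A) -> (Z-2,A-4) takes Zn-64/Zn-66 back to Ni-60/Ni-62.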

2. Theoretical models

Krivit has written a 3-part book "Hacking the Atom: Explorations in Nuclear Research" about LENR (see this, this, and this). I have written an article (see this) about "cold fusion"/LENR in the TGD framework inspired by this book.

The basic idea of the Widom-Larsen theory (see this) is as follows. First, a heavy surface electron is created by electromagnetic radiation in the LENR cells. This heavy electron binds with a proton to form an ultra-low momentum (ULM) neutron and a neutrino, so that the fundamental step is a weak reaction. The heaviness of the surface electron implies that the kinetic tunnelling barrier due to the Uncertainty Principle is very low, which allows the electron and the proton to get very near to each other so that the weak transition p+e → n+ν can occur. The neutron has no Coulomb barrier and has very low momentum, so that it can be absorbed by a target nucleus at a high rate.

The difference between the neutron and proton masses is mn-mp ≈ 2.5 me. The final state neutron produced in p+e → n+ν is almost at rest. One can argue that at the fundamental level ordinary kinematics should be used. The straightforward conclusion would be that the energy of the electron must be about 2.5 me, so that it would be relativistic.
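
The kinematics is simple arithmetic with standard mass values:

  mn, mp, me = 939.565, 938.272, 0.511   # MeV
  print((mn - mp)/me)                    # ~2.53: n-p mass difference in electron masses
  # for p + e -> n + nu the electron thus needs a total energy of ~2.5 me,
  # i.e. a kinetic energy of ~1.5 me, so it must be relativistic:
  print((mn - mp - me)/me)               # ~1.53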

A second criticism relates to the heaviness of the surface electron. I did not find from the web any support for heavy electrons in Cu and Ni. The Wikipedia article (see this) and a web search suggest that heavy electrons quite generally involve f electrons, which are absent in Cu and Ni.

I also found a second model involving heavy electrons but no weak interactions (see this). Heavy electrons would catalyze nuclear transmutations. Three systems would be involved: electron, proton and nucleus. There would be no formation of an ultra-low momentum neutron. Instead, an electron would form a bound state of nuclear size with a proton. Although Coulomb attraction is present, the Uncertainty Principle prevents the tunnelling of ordinary electrons to a nuclear distance. It is argued that a heavy electron has a much smaller quantum size and can tunnel to this distance. After this, the electron is kicked out of the system, and by energy conservation its energy is compensated by the generation of binding energy between the proton and the nucleus, so that a heavier nucleus is formed. The same objection applies to both the Widom-Larsen model and this model.

What about the TGD based model derived to explain the electrolysis based "cold fusion" (see this)? The findings indeed allow one to sharpen the TGD based model for "cold fusion", based on the generation of dark nuclei as dark proton sequences with binding energies in the keV range instead of the MeV range. One can understand what happens by starting from the following mysteries.

  1. The final state contains negatively charged Ni-, Cu-, C-, Si-, O-, and H- ions. What causes their negative charge? In particular, the final state target contains O- ions although there is no oxygen present in the target in the initial state!
  2. A further mystery is that the Pollack effect requires water. Where could the water come from?
    Could the O2 and H2 molecules present in the chamber in the initial state give rise to oxygen ions in the final state? Could the spontaneously occurring reaction 2H2+O2 → 2H2O in the H2 pressurized chamber, liberating an energy of about 5 eV, generate water in the target volume, so that the Pollack effect, induced by the heating, could take place for this water? Note that the reverse of this reaction occurs in photosynthesis. The Pollack effect would transform ordinary protons to dark protons and generate a negatively charged exclusion zone involving the Ni-, Cu-, C-, Si-, O-, and H- ions in the final state. The situation would effectively reduce to that in the electrolyte systems studied in the original "cold fusion" experiments.

The spontaneous transformation of dark nuclei to ordinary ones would liberate essentially all of the ordinary nuclear binding energy. It is of course not obvious whether the transformation to ordinary nuclei is needed to explain the heat production: it is however necessary for explaining the nuclear transmutations, which are not discussed in the article of the Tohoku group. The resulting dark nuclei could be rather stable, and the X-ray counterpart of the emission of gamma rays could explain the heating. The fact that the gamma rays of ordinary nuclear physics have not been observed is the killer objection against "cold fusion" based on standard nuclear physics. In TGD, gamma rays would be replaced by X rays in the keV range, which also corresponds to the average thermal energy produced per hydrogen atom.

3. TGD inspired models of "cold fusion"/LENR or whatever it is

TGD suggests dark fusion (see this and this) as the mechanism of "cold fusion". One can consider two models explaining these phenomena in the TGD Universe. Both models rely on the hierarchy of Planck constants heff=n×h (see this, this, this, this) explaining dark matter as ordinary matter in heff=n×h phases emerging at quantum criticality. heff implies scaled-up Compton lengths and other quantal lengths, making possible quantum coherence at longer scales than usual.

The hierarchy of Planck constants heff=n×h now has a rather strong theoretical basis and reduces to number theory (see this). Quantum criticality would be essential for the phenomenon and could explain the critical doping fraction of the cathode by D nuclei. Quantum criticality could also help to explain the difficulties in replicating the effect.

3.1 Simple modification of WL does not work

The first model is a modification of WL and relies on dark variants of the weak interactions. In this case LENR would be an appropriate term.

  1. Concerning the rate of the weak process e+p → n+ν, the situation changes if heff is large enough, and rather large values are indeed predicted. heff could be large also for the weak gauge bosons in the situation considered. Below their Compton length weak bosons are effectively massless, and this scale would be scaled up by the factor n=heff/h to almost atomic scale (see the arithmetic after this list). This would make the weak interactions as strong as the electromagnetic interaction and long ranged below the Compton length, so that the transformation of proton to neutron would be a fast process. After that, a nuclear reaction sequence initiated by neutrons would take place as in WL. There is no need to assume that the neutrons are ultraslow, but the electron mass remains the problem. Note that also the proton mass could be higher than normal, perhaps due to Coulomb interactions.
  2. As such this model does not solve the problem related to the too small electron mass. Nor does it solve the problem posed by the predicted gamma ray production.
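
For scale, the arithmetic behind "almost atomic scale": the W boson mass is standard, while the choice of the Bohr radius as the target scale is my illustrative assumption:

  hbar_c = 197.327e6*1e-15   # eV*m (197.327 MeV*fm)
  lam_W = hbar_c/80.4e9      # W boson Compton length: ~2.5e-18 m
  print(lam_W)
  print(0.529e-10/lam_W)     # n = heff/h ~ 2*10^7 needed to stretch it to the Bohr radius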

3.2 Dark nucleosynthesis

Also the second TGD inspired model involves the heff hierarchy. Now LENR is not an appropriate term: the most interesting things would occur at the level of dark nuclear physics, which is now a key part of TGD inspired quantum biology.

  1. One piece of inspiration comes from the exclusion zones (EZs) of Pollack (see this), which are negatively charged regions (see this and this). Also the work of the group of Prof. Holmlid (see this and this), not yet included in the book of Krivit, was of great help. The TGD proposal (see this) is that the protons causing the ionization go to magnetic flux tubes, which have an interpretation in terms of space-time topology in the TGD Universe. At the flux tubes they have heff=n×h and form dark variants of nuclear strings, which are basic structures also for ordinary nuclei.
  2. The sequences of dark protons at the flux tubes would give rise to dark counterparts of ordinary nuclei, proposed to be also nuclear strings but with a dark nuclear binding energy whose natural unit is MeV/n, n=heff/h, rather than MeV. The most plausible interpretation is that the field body/magnetic body of the nucleus has heff= n×h and is scaled up in size. n=2^11 is favoured by the fact that from Holmlid's experiments the distance between dark protons should be about the electron Compton length.

    Besides protons, also deuterons and even heavier nuclei can end up at the magnetic flux tubes. They would however preserve their size: only the distances between them would be scaled up to about the electron Compton length, on the basis of the data provided by Holmlid's experiments (see this and this).

    The reduced binding energy scale could solve the problems caused by the absence of gamma rays: instead of gamma rays one would have much less energetic photons, say X rays assignable to n=2^11 ≈ mp/me (see the scaling estimate after this list). For infrared radiation the energy of the photons would be about 1 eV and the nuclear energy scale would be reduced by a factor of about 10^(-6)-10^(-7): one cannot exclude this option either. In fact, several options can be imagined since the entire spectrum of heff is predicted. This prediction is testable.

    A large heff would also induce quantum coherence in a scale between the electron Compton length and the atomic size scale.

  3. The simplest possibility is that the protons are just added to the growing nuclear string. In each addition one has (A,Z) → (A+1,Z+1). This is exactly what happens in the mechanism proposed by Widom and Larsen, whose simplest reaction sequences already explain the spectrum of end products reasonably well.

    In WL the addition of a proton is a four-step process. First, e+p → n+ν occurs at the surface of the cathode. This requires a large electron mass renormalization and a fine tuning of the electron mass to be very nearly equal to, but higher than, the n-p mass difference.

    There is no need for these questionable assumptions of WL in TGD. Even the assumption that weak bosons correspond to a large heff phase might not be needed, but it cannot be excluded with the present data. The implication would be that the dark proton sequences decay rather rapidly to beta stable nuclei if a dark variant of p → n is possible.

  4. EZs and the accompanying flux tubes could be created also in an electrolyte: perhaps in the region near the cathode, where bubbles are formed. For the flux tubes leading from the system to the external world, most of the fusion products as well as the liberated nuclear energy would be lost. This could partially explain the poor replicability of the claims about energy production. Some flux tubes could however end at the surface of the catalyst under some conditions. Even in this case the particles emitted in the transformation to ordinary nuclei could be such that they leak out of the system, and Holmlid's findings indeed support this possibility.

    If there are negatively charged surfaces present, the flux tubes can end at them, since the positively charged dark nuclei at the flux tubes, and therefore the flux tubes themselves, would be attracted by these surfaces. The most obvious candidate is the catalyst surface, to which electronic charge waves were assigned by WL. One can wonder whether Tesla observed in his experiments the leakage of dark matter to various surfaces of the laboratory building. In the collision with the catalyst surface, dark nuclei would transform to ordinary nuclei, releasing all of the ordinary nuclear binding energy. This could create the reported craters at the surface of the target and cause heating. One cannot of course exclude nuclear reactions between the reaction products and the target nuclei. It is quite possible that most dark nuclei leave the system.

    It was in fact Larsen who realized that there are electronic charge waves propagating along the surfaces of some catalysts, and for good catalysts, such as gold, they are especially strong. This suggests that electronic charge waves play a key role in the process. The proposal inspired by WL is that, due to the positive electromagnetic interaction energy, the dark protons of dark nuclei could have a rest mass higher than that of the neutron (just as in ordinary nuclei), so that the reaction e + p → n+ν would become possible.

  5. Spontaneous beta decays of protons could take place inside dark nuclei just as they occur inside ordinary nuclei. If the weak interactions are as strong as the electromagnetic ones, dark nuclei could rapidly transform to beta stable nuclei containing neutrons: this is also a testable prediction. Also the dark strong interactions would proceed rather fast, and the dark nuclei at the magnetic flux tubes could be stable in the final state. If dark stability means the same as ordinary stability, then also the isotope-shifted nuclei would be stable. There is evidence that this is the case.

Neither CF nor LENR is an appropriate term for the TGD inspired option. One would not have ordinary nuclear reactions: nuclei would be created as dark proton sequences, and the nuclear physics involved has a considerably smaller energy scale than usual. This mechanism could allow at least the generation of nuclei heavier than Fe, which is not possible inside stars, so that supernova explosions would not be needed to achieve this. The observation that the transmuted nuclei appear in four bands of nuclear charge Z irrespective of the catalyst used suggests that the catalyst itself does not determine the outcome.
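
The scalings invoked in this section are easy to make explicit; the MeV value below stands for the generic binding energy per nucleon of ordinary nuclei and is used only for illustration:

  n = 2**11             # = 2048, close to mp/me ~ 1836
  lam_p = 1.321e-15     # proton Compton length in m
  print(n*lam_p)        # ~2.7e-12 m, of the order of the electron Compton length 2.4e-12 m
  print(7e6/n)          # ~3.4e3 eV: an MeV binding scale drops to the keV range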

One can of course wonder whether even "transmutation" is an appropriate term now. Dark nucleosynthesis, which could in fact be the mechanism of ordinary nucleosynthesis outside stellar interiors, explaining how elements heavier than iron are produced, might be a more appropriate term.

3.3 The TGD based model and the findings of Iwamura et al

The presence of Ni-, Cu-, C-, Si- and H- ions in the target is an important guideline. LENR involves negatively charged surfaces at which the presence of electrons is thought to catalyze transmutations: the WL model relies on this idea. The question concerns the ionization mechanism.

  1. The appearance of Si- in the entire target volume could be understood in terms of melting. It is difficult to understand its appearance as being due to nuclear transmutations.
  2. What is remarkable is the appearance of O-. The Coulomb wall makes it very implausible that the absorption of an ordinary alpha particle could induce the transmutation of C to O in LENR.

    Could the oxygen be produced by dark fusion? It is difficult to see why oxygen should have such a preferred role as a reaction product in a dark fusion favouring light nuclei.

    Could the oxygen have entered the target during the first phase, when the pressurized hydrogen gas was present together with air, as the statement that air was evacuated after the first stage suggests? Iwamura has however stated that nitrogen N, also present in air, is not detected in the target, so that a direct leakage of O into the target looks implausible. Could the leakage of oxygen rely on a less direct mechanism?

  3. Oxygen resp. hydrogen appears as O2 resp. H2 molecules. O2 resp. H2 has a binding energy of about 5.2 eV resp. 4.5 eV. Therefore the reaction 2H2+O2 → 2H2O could occur during the pressurization phase. The energy liberated in this reaction is estimated to be about 4.88 eV (see this).
  4. What is remarkable is that water plays a key role in the Pollack effect, interpreted as a formation of dark proton sequences. The Pollack effect generates exclusion zones as negatively charged regions, and the Ni-, Cu-, C-, Si- and H- ions would serve as a signature of these regions. In the "cold fusion" based on electrolysis the water is present from the beginning, but now it would be generated by the proposed mechanism.

    The difference between the bonding energy of OH and the binding energy of O- is about .33 eV in the absence of electric fields and corresponds to the thermal energy at a temperature of about 630 C. This suggests that the heating replaces the IR photons of the ordinary Pollack effect as the energy source inducing the formation of dark protons and of exclusion zones consisting of negative ions.

  5. In fact, the Pollack effect suggests a deep connection between computers, quantum computers and living matter based on the notion of the OH-O- + dark proton qubit and its generalizations (see this).
  6. The earlier TGD based model for "cold fusion" as dark fusion suggests that the value of heff for dark protons is such that their Compton length is of the order of the electron Compton length. Dark proton sequences as dark nuclei would spontaneously decay to ordinary nuclei and produce the heat. In TGD, ordinary nuclei also form nuclear strings as monopole flux tubes (see this).

    TGD assigns a large value of heff to systems having long ranged and strong enough classical gravitational or electric fields (see this and this). For gravitational fields the gravitational Planck constant is very large and the gravitational Compton length is one half of the Schwarzschild radius of the system with large mass (Sun or Earth); see the numerical values below. In biology, charged systems such as DNA, cells and the Earth itself involve a large negative charge and therefore a large electric Planck constant proportional to the total charge of the system. The Pollack effect generates negatively charged exclusion zones, which could be characterized by the gravitational or electric Planck constant. In the recent case, the electric Compton length of dark protons should be of the order of the electron Compton length, so that heff/h ≈ mp/me ≈ 2^11 is suggestive.
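
For reference, the gravitational Compton length Λgr= GM/(β0c^2), one half of the Schwarzschild radius for β0=1, evaluates with standard masses to:

  G, c = 6.674e-11, 2.998e8
  for name, M in (("Earth", 5.972e24), ("Sun", 1.989e30)):
      print(name, G*M/c**2)   # Earth: ~4.4e-3 m (~0.45 cm), Sun: ~1.5e3 m (~1.5 km)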

3.4 Summary

In the TGD based model, the reaction 2H2+O2 → 2H2O transforms the situation to the one appearing in electrolysis, and the Pollack effect would also now be the basic mechanism producing dark nuclei as dark proton sequences, which transform spontaneously to ordinary nuclei. Whether this mechanism is involved should be tested.

The TGD based model predicts much more than is reported in the article of Iwamura et al: a spectrum of light nuclei produced in the process and containing at least alpha particles. There is however no information about this spectrum in the article.

  1. The article reports only the initial and final state concentrations of Ni-, Cu-, C-, O-, and H- ions but does not provide information about all the nuclei produced by the transmutations. Melting has very probably occurred for Ni and Cu.
  2. The heat production rate is higher during the desorption phase than during the absorption phase. The TGD explanation would be that the dark proton sequences have reached their full length during desorption and can produce more nuclei as they decay.
  3. The finding that the maximum energy production per hydrogen atom is roughly 1/100 of the binding energy scale of ordinary nuclei forces one to scrutinize dark fusion as the reaction mechanism. The explanation could be that the creation of dark nuclei from hydrogen atoms is the rate limiting step. If roughly 1 percent of the hydrogen atoms generate dark protons, the rate of heat production can be understood.
  4. The basic prediction (A,Z) → (A+1,Z+1) → ... of the Widom-Larsen model follows trivially from the TGD inspired model, in which dark nuclei, with a binding energy scale much lower than that of ordinary nuclei and with a Compton length of the order of the electron Compton length, are formed as sequences of dark protons, deuterons or even heavier nuclei, and then transform to ordinary nuclei liberating the nuclear binding energy. This occurs at negatively charged surfaces (that of the cathode, for instance) since they attract the positively charged flux tubes. The negative surface charge could itself be generated in the Pollack effect for the water molecules, generating an exclusion zone and dark protons at the monopole flux tubes.

    The energy scale of the dark variants of gamma rays liberated in dark nuclear reactions is considerably smaller than that of ordinary gamma rays, since it is scaled down from a few MeV to a few keV, which indeed corresponds to the thermal energy liberated per hydrogen atom. This could explain why gamma rays are not observed. The questionable assumptions of the Widom-Larsen model are not needed.

    The maximum length of the dark nucleon sequences determines how heavy nuclei can emerge. The minimum length corresponds to a single alpha particle, which could induce nuclear transformations such as the transmutation of C to O. Part of the dark nuclei could escape from the target volume and remain undetected. Dark nuclei could also interact directly with the target nuclei, in particular Ni and Cu.

See the article A new experimental demonstration for the occurrence of low energy nuclear reactions or the chapter Cold Fusion Again.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.