https://matpitka.blogspot.com/

Thursday, February 26, 2026

Are Pollack batteries possible?

This posting was motivated by the claim of Donut Lab about a breakthrough in battery technology. In February 2026, Donut Lab published one of a planned series of independent VTT test reports, covering fast-charge performance only. All other claimed specifications -- energy density (400 Wh/kg), cycle life (100,000 cycles), extreme-temperature tolerance, safety, and cost -- remain entirely unverified by any independent party.

1. The claims of Donut Lab

What was announced was "Ultra high energy density, the fastest charging time, practically unlimited cycles, extreme safety, and lower price than lithium-ion". The reactions from professional circles have been skeptical. It is indeed difficult to see how the claims about Donut batteries could be consistent with standard condensed matter physics.

  1. The claim of a very rapid charging time of about 5 minutes was verified in the VTT test. This corresponds to a charging rate of about 11 C, where 1 C corresponds to a charging time of 1 hour.
  2. It was found that heat production during charging is high. During VTT Test #6, the cell reached a temperature of ≈ 90 ºC under 11 C charging with passive cooling only, triggering an automatic safety cutoff by the test equipment; the cell itself showed no damage or signs of thermal runaway.
  3. The number of charge cycles is claimed to be extremely large, about 10^5, and testing so many cycles has been argued to be impractical since it would require years. VTT performed only 7 tests, meaning 7 cycles. The strong ohmic heating during charging is expected to damage the electrode receiving the charge, and this reduces the number of cycles.
  4. The claimed energy density of about 400 Wh/kg is very high. Suppose that the system consists of basic units with mass Amp (mp is the proton mass) having atomic volume a0^3, where a0 = 10^-10 m. 400 Wh/kg corresponds to an energy density dE/dm ≈ 1.6× 10^-11 in units where c=1. This would mean about .015 eV per proton mass mp ≈ 10^9 eV.

    The energy density relates closely to the reported charge, the counterpart of the capacitor charge, of about 10^5 Coulombs, which is very high but consistent with that for mobile phone batteries. Note that the energy density is proportional to the dielectric constant ε of a dielectric possibly used between the positively and negatively charged electrodes. It measures how large a fraction of the energy is stored as chemical energy. For a simple capacitor the energy is pure electrostatic energy.

  5. The Donut battery is claimed to be a solid state battery cell. VTT did not verify the chemistry of the cell. The Donut patent application gives the following information about the battery.
    • Cathode with cathode material in particulate form + polymeric binder (polymeric binders are used to bind together battery materials)
    • Solid electrolyte with solid electrolyte material + polymeric binder
    • Anode with anode material in particulate form + polymeric binder
The basic problem is what is called a trilemma. In the framework of standard condensed matter physics, the conditions for high charging speed, a large number of charge cycles, and high energy density are mutually conflicting. The high charging rate, which has been verified, requires high currents, so that the charging involves ohmic dissipation and a large energy and momentum transfer to the electrode, causing its deterioration. It is however claimed that the momentum transfer during the charging is small.
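The back-of-envelope numbers above are easy to check with a few lines of Python. The sketch below (using standard CODATA constants) converts a charging time to a C-rate and the claimed gravimetric energy density to an energy per proton rest mass; the figures are only those quoted in this post.

```python
# Back-of-envelope check of the figures quoted above.
# Conversion constants are standard; the battery numbers are the claims.

C = 299_792_458.0        # speed of light, m/s
M_PROTON_EV = 938.272e6  # proton rest energy, eV (CODATA)

def c_rate(charge_time_min: float) -> float:
    """C-rate: 1 C corresponds to a 1-hour charge."""
    return 60.0 / charge_time_min

def ev_per_proton_mass(wh_per_kg: float) -> float:
    """Stored energy per proton rest mass for a given gravimetric density."""
    j_per_kg = wh_per_kg * 3600.0      # Wh/kg -> J/kg
    dimensionless = j_per_kg / C**2    # dE/dm in units where c = 1
    return dimensionless * M_PROTON_EV

print(c_rate(5.0))               # a 5-minute charge is 12 C; 11 C is ~5.5 min
print(ev_per_proton_mass(400.0)) # ~0.015 eV per proton mass
```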

This led to a kind of private brainstorming session about whether TGD based physics could allow the realization of batteries based on the TGD view of the Pollack effect (see this, this, this, this). I am not specialized in battery technologies and these considerations are just speculations: they need not have much to do with the Donut Lab battery, except as a thought ignition and as a framing of the energy charging, storage, and dissipation systems. The basic inspiration comes from biological analogies: the charging of the battery is regarded as an analog of photosynthesis.

The notions of the field/magnetic body, the hierarchy of effective Planck constants, and the Pollack effect are the key elements of the model, and the following gives a brief summary of the heff hierarchy and the Pollack effect.

2. Could the notion of Pollack battery make sense?

I have considered the possibility that the Pollack effect plays a central role in electrolysis, which is the key effect in the chemistry of batteries. The following is an attempt to build a model for a battery based on the Pollack effect.

The claimed properties of the Donut battery can be used as guidelines in the speculations. Something new making possible the rapid charging and the resolution of the trilemma is needed, and the Pollack effect could be the missing element. I have discussed its generalization and possible applications to biology (see this and this) and have also developed some speculative ideas about living computers (see this and this).

  1. The fast charging could be understood if the ions are generated by the Pollack effect or its generalization at the second electrode. Protons or perhaps even alkali ions could be generated by the generalized Pollack effect. In the presence of an electric field the positively charged ions would travel to the second electrode (note that for static electric fields the voltage is the same along the space-time sheet for ordinary matter and for the magnetic flux tube).

    Since the value of heff is large, dissipation would be small and could even be absent if an analog of superconductivity is in question. The travel time would therefore be very short and could make rapid charging possible. In the simplest classical model the particle would experience an analog of free fall in the approximately constant gravitational field of the Earth.

  2. It is enough to get the positive ions to the opposite electrode. The positive electrode generates an opposing electric field Eopp causing a gradually increasing opposing force. It is enough to have a gradually increasing electric field E which exceeds this opposing field: the dark positive ions would experience the effective field ΔE = E - Eopp. This would save energy in charging and minimize the effects caused at the positive electrode; the positive ions could be transferred with minimal energy and momentum transfer. ΔE could be much weaker than the electric field Eopp between the electrodes defining the voltage of the battery. This would minimize the damage to the electrode.
  3. Where would the positive dark ions be generated by the Pollack effect? Could the Pollack effect occur at the electrode becoming negatively charged, or in the counterpart of the electrolyte between the electrodes? The recent finding reported in ScienceDaily (see this) that the addition of water to a sodium-vanadium battery increases its charge capacity by almost a factor of 2 suggests that the Pollack effect for water plays an essential role.

    What is nice is that sodium and vanadium are not rare metals, unlike lithium. The researchers found that keeping water inside a key sodium-ion battery material nearly doubled its charge storage; the material also charges faster and stays stable for hundreds of cycles. This discovery could make lithium obsolete. The same material can also desalinate seawater into drinking water.

    This suggests that the Pollack effect generates negatively charged exclusion zones (EZs) in water. The first guess is that the negative charge is transferred to the negatively charged electrode by conduction in the electric field used for charging. If this occurs by ohmic conduction, a small value of ΔE would make the transfer slow. There is however evidence for a change of the arrow of time at the electric field body, and this suggests a large hem (see this and this). If the negative ions are in a large heff = hem phase (hem proportional to the charge of the electrode), the transfer could occur without dissipation and be fast.

    Also the huge dielectric constant ε (as large as 10^6) strongly suggests that chemical energy storage dominates over electrostatic energy storage. This storage would naturally occur in the dielectric between the electrodes. The energy storage would be chemical, as in biosystems, and the electret would take the role of proteins and lipids. This suggests that the solid state electret should be an organic material able to store metabolic energy. Carbon polymers carrying energy in carbon-carbon and carbon-hydrogen bonds suggest themselves. In this case the use of the energy cannot proceed by catabolism producing CO2 and water; the molecules must instead undergo a chemical change liberating energy. Double bonds and (C=O)-(CH3) groups are essential in the energy storage using proteins and lipids.

  4. A very large charge for the capacitor-like system is required. A capacitor with parallel plates cannot meet this demand. The idea is that the standard capacitor is replaced with a very thin, highly folded bilayer, analogous to the pair of lipid layers of a cell. These layers are insulated from each other by a polymer so that dielectric breakdowns do not occur between them. There would also be electrolytes between the layers serving as electrodes.

    If the bilayer is folded several times, the surface area increases so that the charge (and capacitance) can become very large. Interestingly, the cortex is also highly folded, which supports the idea that the surface area and the associated charge are maximized for both cells and the cortex to increase the value of the total charge. This ensures a maximal value of the electric Planck constant hem, proportional to the total charge of the bilayer and serving as a universal IQ in the TGD inspired theory of conscious experience.

  5. The simplest Pollack battery would not involve an electrolyte and would store energy as electrostatic energy. The naive idea is that the addition of a current wire between the two electrodes makes it possible to use the energy of the capacitor. The addition of an electrolyte is also possible.

    Ohmic conductivity makes possible the transfer of currents in the electrolyte and the storage of energy as chemical energy. Taking this contribution into account means the replacement of the electrostatic energy CU^2/2 with the electric plus chemical energy εr CU^2/2. For water εr is in the range 78-80; doped semiconductors/polymers can have dielectric constants exceeding 10^6. This suggests that the dielectric storage of energy dominates over the electrostatic storage. This would mean that the charging by the Pollack effect should transfer energy to the electret, requiring the "dropping" of positive ions to the electret where they react chemically.

    Does the presence of ohmic currents create negative effects spoiling the nice features of the Pollack battery? Should one require the dropping of the positively charged ions to the positive electrode, or is the dropping to a possible electrolyte-containing region between the electrodes desirable?
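As a toy illustration of the folded-bilayer idea and the role of a large dielectric constant, one can play with the parallel-plate formula C = εr ε0 A/d and the stored energy E = CU^2/2. The formula is only a crude model for a folded geometry, and all numbers below (area, gap, voltage, number of folds) are invented purely for illustration.

```python
# Illustrative sketch (not a design): how folding and a large dielectric
# constant scale the stored energy of a thin-bilayer "capacitor".
# All geometric numbers are made up; only the formulas are standard.

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def capacitance(eps_r: float, area_m2: float, gap_m: float) -> float:
    """Parallel-plate capacitance C = eps_r * eps0 * A / d."""
    return eps_r * EPS0 * area_m2 / gap_m

def stored_energy(eps_r: float, area_m2: float, gap_m: float, voltage: float) -> float:
    """Stored energy E = C U^2 / 2."""
    return 0.5 * capacitance(eps_r, area_m2, gap_m) * voltage**2

# Flat sheet with a water-like dielectric (eps_r ~ 80):
flat = stored_energy(eps_r=80.0, area_m2=0.01, gap_m=1e-6, voltage=3.7)

# Folding 10 times multiplies the effective area by 2**10 = 1024;
# a "colossal" dielectric (eps_r ~ 1e6) multiplies the energy again.
folded = stored_energy(eps_r=1e6, area_m2=0.01 * 2**10, gap_m=1e-6, voltage=3.7)

print(folded / flat)  # area factor 1024 times dielectric factor 12500
```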

Just for fun, one can make brave amateur guesses about the realization of the Pollack battery. The Pollack effect is the new element.
  1. The first guess would be the use of water, for which the Pollack effect certainly occurs. As already noticed, the addition of water to a sodium-vanadium battery increases the charge storage capacity by a factor of almost 2 and also makes the charging faster (see this).
  2. One can also consider more exotic options. Could carbon nanotubes (see this) serve as an additional element of the Pollack battery besides the electrodes and electrolyte? A carbon nanotube has an aromatic ring with six C atoms as a basic building block. Each C atom has a double bond with one of the 3 neighboring carbons of an aromatic ring.

    It is known that -OH groups can be added to the defects (C=C replaced with C-C) associated with the aromatic rings and the surface of carbon nanotubes, and these groups could serve as seats of the Pollack effect (see this). The Pollack effect as the transformation -OH → O- + dark proton, followed by the transfer of the electron as a dark electron to the negative electrode or to the electrolyte, would replace C-OH with C-O. The O has an unpaired electron. The loading of hydrogen would transform C-O back to C-OH.

    A feed of hydrogen and irradiation by IR light to induce the Pollack effect, as an analog of photosynthesis, would create dark electrons and protons and accelerate them in the electric field. Could this store energy as ordinary chemical energy in the electrolyte as they transform to ordinary protons and electrons and bind chemically?

    When hydrogen gas consisting of H2 molecules is used to generate energy, it would combine with oxygen molecules O2 to generate water. Here the analogous process should occur for H2 and the C-O groups of carbon nanotubes to create C-OH. Is this process possible energetically? The reaction H2 + 2 C-O → 2 C-OH should occur. Is the binding energy of the 2 C-OH bonds larger than the sum of the binding energies of 2 C-O and H2?
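A crude way to address the last question is to compare textbook mean bond enthalpies, treating the C-O group as a spectator and ignoring the nanotube environment entirely: the reaction breaks one H-H bond (≈ 436 kJ/mol) and forms two O-H bonds (≈ 463 kJ/mol each). This is only an order-of-magnitude gas-phase estimate, not a statement about the actual surface chemistry.

```python
# Crude gas-phase bond-enthalpy estimate for H2 + 2 C-O -> 2 C-OH.
# Mean bond enthalpies (kJ/mol) from standard tables; the C-O framework
# is treated as unchanged, so this is only an order-of-magnitude check.

H_H = 436.0   # one H-H bond broken
O_H = 463.0   # two O-H bonds formed

delta_h = H_H - 2 * O_H   # negative => exothermic in this naive picture
print(delta_h)            # -490.0 kJ/mol
```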

See the article Are Pollack batteries possible?.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, February 19, 2026

A possible TGD based narrative for how life might have evolved

I have worked for decades on attempting to combine the various basic ideas of TGD inspired quantum biology into a single narrative about how life could have evolved on Earth and possibly is evolving on other planets.

TGD introduces several new concepts, such as the new view of space-time and classical fields. TGD also predicts a new quantum ontology, predicting phases of ordinary particles labelled by the effective Planck constant heff, behaving like dark matter and residing at the field bodies. Zero energy ontology (ZEO) is part of this new quantum ontology. The basic challenge has been the fusion of these notions with the standard approach involving electromagnetic fields and biochemistry. In particular, the Pollack effect and its generalizations have turned out to be central in the development of TGD based views of living matter. Also the TGD views of the cell membrane, the neuron, the nerve pulse, and EEG should be integrated with the standard biochemistry- and bioelectricity-based approaches.

For me the challenge has been and still is the fact that biochemical thinking is very different from that of a theoretical physicist thinking in terms of action principles, field equations, and quantum theory. In order to understand the stunning complexities of biochemistry one must learn the key concepts at an intuitive level.

Of course, notions such as acids and bases, electrolysis, electronegativity, oxidation, reduction, and redox reactions belong to the basic conceptual arsenal. One should also understand how these notions relate to basic biological processes such as photosynthesis, chemical storage of metabolic energy, and respiration. One begins to learn the significance of these notions as one tries to understand how to test whether some sample, taken for instance from Mars, contains organics possibly produced by the decay of living organisms.

Could the new physics provided by TGD provide totally new insights into biology and biochemistry? The mechanisms leading to the emergence of the basic organic molecules serving as building bricks of the basic information molecules like amino acids, DNA and RNA are poorly understood. The extreme efficiency of biocatalysis remains a mystery in the biochemistry based approach: more concretely, where does the energy making it possible to overcome the potential barriers preventing the reactions come from? How did the basic information molecules and the genetic code emerge and why is the genetic code what it is? Is there some hidden new physics behind the replication of DNA, the transcription of DNA to mRNA, and the translation of mRNA to proteins? How did the genetic code evolve?

I am not a biochemist and the article linked below is also an attempt to clarify these notions for myself. Google AI allows anyone to get detailed accounts of the basic notions and has been of considerable help in fact checking and in learning new facts about basic biochemistry during the writing of the article. I will also discuss some examples related to the evidence for life on Mars and the recent finding of JWST that organic molecules relevant for life existed long before the planet Earth.

See the article A possible TGD based narrative for how life might have evolved.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, February 18, 2026

The discovery of geobatteries as manganese nodules at deep sea floor and the TGD view of evolution of life

The new view of space-time and quantum motivates one of the most radical proposals of the TGD inspired view of quantum biology: life could have evolved in underground oceans and burst to the surface in the Cambrian Explosion, which occurred about .5 billion years ago as the radius of the Earth increased by a factor of 2. This picture is motivated by the TGD view of cosmic expansion as a sequence of rapid expansions rather than a smooth expansion, and by the finding that the continents fit nicely together if the radius of the Earth is smaller by a factor of 1/2. The reason for the rapid expansion would have been a huge nuclear explosion in the interior of the Earth, a dark fusion in which dark nuclei transformed to ordinary nuclei and liberated almost all of the nuclear binding energy. Similar explosions would have generated the Moon and the moons of Mars (see this).

The presence of multicellular life forms requires photosynthesis, but according to standard physics solar radiation cannot reach the Earth's interior. A possible solution of the problem is that the light arrived from the Earth's interior as dark photons during the era before the Cambrian Explosion, when nuclei were dark and formed a quantum coherent state in the scale of the Earth.

One can also wonder where the oxygen needed for cell respiration came from. An answer is suggested by recently found evidence that electrolysis decomposing water into hydrogen and oxygen is possible in the metal nodules at the deep ocean floor (see this). The manganese (Mn) nodules are coal-like metallic lumps. Mn has atomic number 25 and mass number 55, and the nucleus therefore behaves like a fermion. It is often associated with iron, with atomic number 26 and mass number 56. The nodules are rich in metals, specifically manganese, nickel, copper, cobalt, and lithium.

The nodules act as natural "geobatteries" carrying a voltage similar to that of a 1.5 V battery. The batteries are able to split seawater into oxygen and hydrogen. These potato-sized mineral formations, found thousands of meters below the surface, were observed producing measurable amounts of oxygen in complete darkness, without sunlight or photosynthesis. The metals could act as catalysts: TGD suggests that dark metal ions with large values of heff were involved.
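A quick check that a 1.5 V geobattery is in principle sufficient: the reversible (thermodynamic minimum) voltage for water electrolysis at standard conditions follows from the Gibbs energy ΔG ≈ 237.1 kJ/mol via E = ΔG/(2F) ≈ 1.23 V. The few lines below verify the arithmetic.

```python
# Thermodynamic minimum voltage for splitting water at standard conditions,
# from E = dG / (n F) with n = 2 electrons per H2O molecule.

F = 96485.0          # Faraday constant, C/mol
dG = 237.1e3         # standard Gibbs energy for H2O(l) -> H2 + 1/2 O2, J/mol

E_min = dG / (2 * F) # reversible cell voltage, ~1.23 V
print(E_min)

nodule_voltage = 1.5
print(nodule_voltage > E_min)  # a 1.5 V geobattery clears the threshold
```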

Could these nodules appear also in underground oceans and make possible the evolution of photosynthesizing life by producing the needed oxygen? The energy source could still be the dark photon radiation from the interior of the Earth, but the oxygen needed for cell respiration would be produced by the counterparts of the nodules. See the article About the TGD based models for Cambrian Explosion and the formation of planets and Moon or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Tuesday, February 17, 2026

About the evolution of life on Mars

I encountered a highly interesting article with the title "Evidence for Past, Massive, Nuclear Explosions on Mars, and its Relationship to Fermi's Paradox and The Cydonian Hypothesis" by Brandenburg and Murad (see this). The article presents evidence suggesting that at least two massive nuclear explosions occurred on Mars 500 million years ago. The explosions are proposed to have been caused by a civilization that disappeared in the event.

Here is part of the abstract of the article.

The Fermi Paradox is the unexpected silence of the cosmos under the Assumption of Mediocrity, in a cosmos known to have abundant planets and life precursor chemicals. On Mars, the nearest Earthlike planet in the cosmos, the concentration of 129Xe in the Martian atmosphere, the evidence from 80Kr abundance of an intense 10^14/cm^2 flux over the Northern young part of Mars, and the detected pattern of excess abundance of Uranium and Thorium on the Mars surface, relative to Mars meteorites, can be explained as due to two large thermonuclear explosions on Mars in the past.

Based on the pattern of thorium and radioactive potassium gamma radiation, the explosions were centered in the Northern plains in Mare Acidalium at approximately 50N, 30W, near Cydonia Mensa and in Utopia Planum at approximately 50N, 120W near Galaxias Chaos, both locations of possible archeological artifacts. The xenon isotope mass spectrum of the Mars atmosphere matches that from open air nuclear testing on Earth and is characteristic of fast neutron fission rather than that produced by a moderated nuclear reactor.

The high abundance of Ar cannot be explained by mass fractionation during atmospheric loss, and must be the result of neutron capture on 39K, also requiring an intense neutron flux on the Mars surface as is the high abundance of 17N and deuterium. Modeling the 129Xe component in the Mars atmosphere as due to fast neutron fission and the 80Kr as due to delayed neutrons from a planet-wide debris layer, and assuming an explosive disassembly of uranium-thorium casing into a planet-wide debris layer with 10% residue, all three estimates arrive at approximately 10^25 J, or a yield of 10 Megatons. This is similar to the Chicxulub event on Earth and would be large enough to create a global catastrophe and change Mars global climate. The absence of craters at the site suggests the centers of the explosions were above the ground. The explosions appear due to very large fusion-fission devices of similar design as seen on Earth, the Acidalia device, the largest, being approximately 80 meter radius. ...
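As a sanity check on the energy figure in the excerpt: with the standard convention 1 megaton TNT = 4.184×10^15 J, an energy release of 10^25 J corresponds to billions of megatons rather than tens, so the "yield of 10 Megatons" in the excerpt presumably reflects a lost exponent in transcription.

```python
# TNT equivalent of the quoted 10^25 J energy release,
# using the standard convention 1 megaton TNT = 4.184e15 J.

MT_TNT_J = 4.184e15          # joules per megaton of TNT
yield_mt = 1e25 / MT_TNT_J   # ~2.4e9 megatons
print(f"{yield_mt:.2e} Mt")
```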

Concerning the interpretation in terms of the Cydonian hypothesis, I am a skeptic. Also the existence of a highly evolved civilization at the surface of Mars at this time looks implausible if one believes that Mars lost its magnetic field and water about 3.5 billion years ago. However, Mars is now known to have local magnetic fields in the Southern Hemisphere (SH).

Why is this interesting from the TGD point of view?

The possibility that two massive nuclear explosions occurred in the NH of Mars looks highly interesting from the TGD point of view.

  1. The TGD based proposal (see this) is that the two moons of Mars were formed in two explosions transforming dark nuclei with a rather small binding energy to ordinary nuclei, liberating essentially all of the ordinary nuclear binding energy and throwing out the surface layer of Mars, or part of it, which then gravitationally condensed to form a moon. These explosions would have occurred below the surface of Mars. This transformation is the TGD counterpart of "cold fusion" (see this, this, and this).

    The age estimate for Deimos is a few billion years, with an upper bound of 4.5 billion years, in which case an asteroid capture is assumed to induce the birth of Deimos. For Phobos the age estimates vary from 100-200 million years to a few billion years. The ordinary nuclear explosions would have occurred 500 million years ago. These limits allow us to consider the possibility that the birth of Phobos was accompanied by the local nuclear explosions.

    In some locations, the "cold fusion" products could have leaked to the newly formed surface of Mars during or after the explosion. Alternatively, the "cold fusion" could have induced ordinary nuclear fission at some loci. The absence of craters conforms with this assumption. An interesting question is whether similar traces of massive nuclear explosions associated with the formation of the Moon could be found on Earth.

    It is estimated that life, if it existed on Mars, disappeared or was forced to hide about 3-3.5 billion years ago. This period marked the transition from a warmer, wetter early Mars to the cold, arid, and desolate planet observed today. Did the explosion throwing out the layer of Mars that formed Deimos cause the loss of the surface magnetic field of Mars, consisting of monopole flux tubes and essential for life?

  2. What is intriguing is that the local nuclear explosions are estimated to have occurred 500 million years ago, the time when the Cambrian Explosion occurred on Earth. The TGD proposal is that the Cambrian Explosion was accompanied by a rapid increase of the Earth's radius by a factor of 2 (see this). Also the Cambrian Explosion would have been induced by "cold fusion" in the core of the Earth liberating practically all nuclear binding energy. Life would have disappeared from the surface of Mars 3-3.5 billion years ago in the first BSFR.

An attempt to build an internally consistent narrative

If the signatures of life on Mars correspond to life at the surface, the explosions throwing out the surface layers at the Northern and Southern hemispheres (NH and SH) should also have destroyed all signatures of life at the surface of Mars. Therefore the observed signatures of life should be assigned to underground life. There is indeed evidence that the signatures of life do not originate from the atmosphere or meteorites.

The above information allows us to consider two options.

  1. Deimos and Phobos were formed simultaneously about 4.5 billion years ago. This option is not supported by the fact that Phobos is nearer to Mars and therefore younger.
  2. Deimos was formed 4.5 billion years ago and Phobos about .5 billion years ago when at least two local nuclear explosions occurred at the NH and Cambrian Explosion occurred at Earth. This option conforms with the orbital data and will be considered in the sequel.
The signatures of life on Mars found by NASA are in the Northern hemisphere (NH), and life would have disappeared or become invisible about 3.0-3.5 billion years ago. The TGD based view of the formation of the moons of Mars (see this) suggests that the formation of Deimos (Phobos) threw out a surface layer at the NH or the Southern hemisphere (SH). The Deimos-Northern (D-N) association was just a cautious first guess, and also the Deimos-Southern (D-S) association can be considered. For both options the detected signatures of life at the NH must originate from underground life.

It is interesting to see whether either option can lead to an internally consistent narrative.

  1. Magnetic history

    D-N option: The surface layer of the NH exploded 4.5 billion years ago, and this led to the disappearance of the monopole flux tubes by a mechanism analogous to that at work on Earth and in the Sun (see this and this).

    One can formulate the objections as questions. Why didn't the explosion creating Phobos destroy the local magnetic field at the SH? Or was there a global magnetic field at the SH, and are the recent local magnetic fields at the SH its remnants? But why are there no local or even global magnetic fields at the NH?

    D-S option: The local magnetic fields at the SH would probably have been destroyed, at least partially. Were they destroyed only partially, or were they regenerated later? The explosion that occurred .5 billion years ago would have destroyed the magnetic field at the NH. The general thinking is that the local magnetic fields at the NH disappeared already 3.0-3.5 billion years ago. Did they really disappear? Were the crucial monopole flux tube structures defining the surface magnetic field thrown out as Phobos was formed?

  2. No signatures of life on SH have been detected yet.

    D-N option: The explosion throwing out the SH surface layer and creating Phobos .5 billion years ago could have destroyed the underground life at the SH and also the signatures of its existence.

    D-S option: One would expect signatures of underground life to be more probable on the SH than on the NH. The loci of the signatures are at a latitude near the Equator: the loci could be outside the exploded layer but near its boundary. Since Phobos is smaller than Deimos, one expects that the area of the layer thrown out is smaller for Phobos, spanning less than a hemisphere. Could the gigantic explosion have brought living organisms to the surface near the boundary of the lost layer, in the same way as in the Cambrian Explosion?

  3. Can the local nuclear explosions at the NH be understood as being induced by the gigantic underground explosion .5 billion years ago? The latitude of the local nuclear explosions is 50 degrees, so that they would have occurred far from the Equator.

    D-N option: The gigantic underground nuclear explosion at the SH could have induced local nuclear explosions at the NH near the Equator, but it is less plausible that they would have occurred as far as 50 degrees from the Equator.

    D-S option: The occurrence of the gigantic explosion .5 billion years ago throwing out a surface layer of the NH conforms with the locations of the nuclear explosions. The proposed local nuclear explosions involving nuclear fission could have been induced by the gigantic explosion. Furthermore, the latitude of the Leopard spots, where the signatures of life are detected, is 18.4 degrees. This suggests that the Leopard spots are near the boundary of the thrown-out layer and therefore survived the gigantic explosion.

Clearly, the D-S option is favored.

The role of zero energy ontology (ZEO)

The proposal (see this) is that the creation of the Moon and the moons of Mars and the Cambrian Explosion were accompanied by a "big" (really big!) state function reduction (BSFR) changing the arrow of time at some layer of some gravitational field body. The gravitational field body in question could be that of the planet or of the Sun.

This "big" state function reduction (BSFR) would change the geometric arrow of time in internal discrete degrees of freedom of the system due to a small violation of classical determinism serving as a correlate for cognition. In a BSFR the system would "die" and reincarnate with an opposite arrow of geometric time. The geometric time is assignable to the tip at the active boundary of the causal diamond, which increases in a statistical sense, so that the active tip shifts during the sequence of "small" state function reductions (SSFRs), whereas the passive boundary and the states at it are not affected.

There are two options to consider.

  1. If the gravitational field bodies of Mars and the Earth are independent entities, the BSFRs for Mars and the Earth can occur at different times.
  2. If the gravitational magnetic body of the Sun is the relevant entity, there would be only two BSFRs, giving rise to Deimos and the Moon, and to Phobos and the Cambrian Explosion, respectively.
Consider the solar option in detail.
  1. The first BSFR creating Deimos could have occurred 3.0-4.5 billion years ago. The age of the Moon is estimated to be 4.5 billion years. This allows the possibility that the "solar" BSFR creating the Moon and Deimos occurred 4.5 billion years ago. The disappearance of the magnetic field and water of Mars is estimated to have occurred 3.5 billion years ago. One could see this event as a consequence of this BSFR, an analog of the decay process following biological death. The change of the arrow of time in the first BSFR creating Deimos would have meant "death" for both Mars and Earth.
  2. The second BSFR would have been a "birth" giving rise to the Cambrian Explosion and the creation of Phobos. In the case of Mars the second BSFR also threw away a surface layer, forming Phobos, and this can explain why the analog of the Cambrian Explosion could not occur for Mars. The two local nuclear explosions could be associated with this BSFR.
Needless to say, these events could be seen as dramatic evidence for quantum coherence on the scale of the solar system.

See the article About the TGD based models for Cambrian Explosion and the formation of planets and Moon or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, February 14, 2026

Is it possible to experience entanglement and remember it?

In the TGD inspired theory of conscious experience, quantum entanglement generates from given units of conscious experience larger ones: selves fuse to form larger selves. The flow of universal consciousness is a sequence of this kind of fusions and splittings having an interpretation as quantum measurements accompanied by state function reductions (SFRs), giving rise to moments of consciousness with a duration defined by the interval between two SFRs. The dynamics of selves is completely analogous to that of particle reactions or chemical reactions. There are also subselves having subselves, etc., and subselves correspond to mental images.

Are there situations in which one could experience the entanglement, or its ending, as an SFR occurs? Could the entanglement with other human beings be experienced as a sense of presence, as a kind of "we experience"? Could this period be remembered? This might be the case. Quite recently I had such experiences every night for a period of almost a week.

With two old friends, a couple, we made a 4-day visit to Tallinn. Being together was very intense since my friend has severe problems with memory and we had to help him constantly. I do not sleep well and very often wake up to visit the toilet (not so rare at this age). During these visits I suddenly realized that my friends were not with me although I had felt that they were! The sense of their presence had returned during sleep. The interpretation as a regeneration of entanglement is very natural. At some level of the self hierarchy, we again formed a kind of triple-self.

See the article The recent view of TGD inspired theory of consciousness and quantum biology or the chapter with the same title. See also the article Answers to the questions of Vasileios Basios and Marko Manninen in Hypothesis Refinery session of Galileo Commission.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, February 12, 2026

Are large voids empty of dark matter and how to understand the quiet Hubble flow?

Sabine Hossenfelder (see this) made a very interesting video about recent findings that challenge the notion of a spherical dark matter halo and provide support for the TGD view. Hossenfelder crystallizes the findings in the statement "We live in between two huge dark matter voids".

The article discussed by Sabine Hossenfelder was published in Nature Astronomy by Wempe et al with the title "The mass distribution inside the local group and around the local group" (see this). Here is the abstract of the article.

Modelling efforts have long struggled to reproduce the quiet Hubble flow around the Local Group, as they require unrealistically little mass beyond the haloes of the two main galaxies. Here we revisit this using ΛCDM simulations of Local Group analogues with initial conditions constrained to match the observed dynamics of the two main haloes and the surrounding flow. The observations are reconcilable within ΛCDM, but only if mass is strongly concentrated in a plane out to 10 Mpc, with the surface density rising away from the Local Group and with deep voids above and below. This configuration, dynamically inferred, mirrors known structures in the nearby galaxy distribution. The resulting Hubble flow is quiet yet strongly anisotropic, a fact obscured by the paucity of tracers at high supergalactic latitude. This flattened geometry reconciles the dynamical mass estimates of the Local Group with the surrounding velocity field, thus demonstrating full consistency within the standard cosmological model.

One of the motivations for the work of Wempe et al was the problem of the quiet Hubble flow. The Hubble flow corresponds to cosmic expansion. The gravitational interaction of astrophysical objects is expected to give rise to peculiar velocities as deviations from this flow. The scale of the deviations of these velocities from the mean is however found to be too small, as if the local gravitational interactions had no effect.

The second challenge is to understand the generation of large voids with unrealistically low mass density.

  1. In principle, dark matter as particles could have experienced gravitational condensation and have developed localized structures just like ordinary matter. The observed planar pancake structure between two large voids, associated with the Local Group containing the pair formed by the Milky Way and M31 at its center, can be reproduced by choosing the initial values suitably in the simulations of the ΛCDM model. This would also explain the quiet Hubble flow. The likelihood of this kind of configuration is however reported to be between 1/100 and 1/1000 in ΛCDM.
  2. In the absence of any identified candidate for dark particles having only gravitational interactions, one is forced to challenge the notion of the dark matter halo. A further motivation is that the ΛCDM model makes several problematic predictions: a cuspy galactic dark matter halo, too many small satellite galaxies, and too slow a growth of galaxies by gravitational condensation. These failures challenge not only the ΛCDM paradigm but also the view that gravitational condensation has formed galaxies.

Does dark matter really consist of exotic particles having only gravitational interactions?

What if dark matter is not realized as exotic particles and there is no halo of galactic dark matter?

  1. The TGD based view of space-time is motivated by the energy problem of general relativity (see this and this) and differs dramatically from that of general relativity (see this). Space-time at the fundamental level corresponds to space-time surfaces in H=M4× CP2 obeying holomorphy = holography principle and satisfying minimal surface equations irrespective of action except at 3-D singularities. Space-time surfaces are analogs of Bohr orbits for particles as 3-surfaces and are slightly non-deterministic although field equations are satisfied (see this).

    General Relativistic space-time follows at the quantum field theory limit when the many-sheeted space-time surface is approximated with a single slightly curved region of M4 and the induced gravitational and gauge fields associated with different space-time sheets are summed to give the standard model gauge fields and the gravitational field in long length scales.

  2. TGD predicts a Russian doll cosmology. This follows from a fractal hierarchy of causal diamonds CD=cd× CP2, where cd is a causal diamond of M4 identified as the intersection of future and past directed light-cones. One could regard cd as an analog of the perceptive field of a conscious entity or as a quantization volume. cd is also analogous to an empty cosmology: a big bang followed by a big crunch. CDs contain space-time surfaces inside them. In TGD, 4-D cosmologies are analogous to Bohr orbits of particles identified as 3-surfaces.
  3. The weak failure of classical determinism forces what I call zero energy ontology (ZEO) (see this) and together with the number theoretic vision (see this, this and this) this leads to the prediction that quantum coherence is possible in all scales. Space-time surfaces can be regarded as quantum coherence regions. The notion of field body, in particular the predicted monopole flux tubes, means a rather radical modification of the Maxwellian view of classical fields and has far reaching implications in all scales.
TGD view of galactic dark matter

TGD predicts that galactic dark matter is actually analogous to dark energy and concentrated at long cosmic strings with thickness given by CP2 length scale.

  1. Cosmic strings generate a gravitational acceleration proportional to 1/ρ, predicting the flat velocity spectrum (see this). Both the volume term and the Kähler magnetic energy contribute to the string tension.
  2. The cosmic string model bears a relation to the MOND model. At a certain radius the stringy 1/ρ contribution overcomes the ordinary 1/r2 contribution from the galactic nucleus, and this critical radius has the critical acceleration of the MOND model as its counterpart. Already Zeldovich observed long ago (1982) that galaxies are located along linear structures at the boundaries of giant voids.
  3. The model predicts a mechanism for the generation of ordinary matter via the decay of cosmic strings induced by some perturbation, say a collision: the strings thicken, the string tension is reduced, and the liberated energy transforms to ordinary matter. This process would replace inflation in the TGD framework.

    No exponential expansion of the Universe is required since TGD predicts quantum coherence in arbitrarily long scales, explaining the almost constant CMB temperature.

  4. The cosmic strings were the first extremals, found already during the first year of TGD after the emergence of the basic idea in 1977 and leading to my thesis in 1982. In the discussion inspired by the article of Sabine Hossenfelder (see this), I learned that I am no longer the only physicist suggesting that string-like objects could explain galactic dark matter. Also K. Zatrimaylov has proposed something similar (see this, this, this, this, this).
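The interplay of the two contributions in item 2 can be made concrete with a small numerical sketch of the resulting rotation curve. This is only an illustration of the generic force law, a point mass plus a long string whose gravitational acceleration falls off as 1/ρ; the mass and the flat velocity below are illustrative values, not TGD predictions.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_bulge = 1.5e41     # kg, roughly 10^11 solar masses of central matter (illustrative)
v_flat = 2.2e5       # m/s, observed flat rotation velocity (illustrative)

def rotation_velocity(rho):
    """Circular velocity from a point-mass 1/rho^2 force plus the 1/rho
    force of a long cosmic string: v^2/rho = G*M/rho^2 + v_flat^2/rho."""
    return math.sqrt(G * M_bulge / rho + v_flat**2)

# Critical radius where the stringy contribution overtakes the Newtonian one,
# the TGD counterpart of the MOND critical acceleration:
rho_c = G * M_bulge / v_flat**2   # ~2e20 m, i.e. a few kpc for these numbers
```

At ρ = ρ_c the two force terms are equal and v = √2·v_flat; for ρ much larger than ρ_c the velocity spectrum is flat, v → v_flat.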
Getting more concrete, one can ask whether MW and M31 are associated with the same cosmic string connecting them in the plane between the voids, or whether there are also vertical cosmic strings orthogonal to the plane and going through MW and M31. The cosmic strings carry monopole flux and must be closed: could MW and M31 be associated with the same closed cosmic string? There is indeed evidence that MW was formed in a collision of two cosmic strings; such collisions are bound to occur for cosmic strings glued to background 3-surfaces.

TGD view of voids explains the quiet Hubble flow

An entire hierarchy of voids has been observed. Besides the Local Void (or rather the pair of voids with the pancake-like structure between them), there are other large voids, such as the KBC Void, the Boötes Void, and the Giant Void, appearing in various scales.

In the standard view, gravitational condensation would have led to the generation of voids. This could be the case also in TGD. Could TGD say something more detailed about them?

  1. Poincare invariance and Lorentz invariance are exact symmetries of TGD. The preservation of these symmetries, which are lost in GRT, was the basic motivation of TGD. cd allows slicings by Lorentz invariant hyperboloids of constant light-cone proper time. The hyperbolic 3-space H3 as a 3-surface of constant cosmic time is therefore a fundamental object in TGD.

    H3 allows an infinite number of tessellations (see this and this) as analogs of the lattices of E3, characterized by a symmetry group which is an infinite discrete subgroup of the Lorentz group SO(1,3). The simplest tessellations are honeycombs consisting of Platonic solids.

  2. There are indications that astrophysical objects could be assigned to the vertices of this kind of tessellation. The quantization of cosmic redshifts is one piece of evidence for this. The recent finding of an unexpectedly strong gravitational radiation background (see this) could be understood in terms of diffraction of gravitational radiation in a tessellation having stars at its vertices.

    Could also galaxies tend to form this kind of tessellation? These tessellations have vertices, edges and faces as basic building blocks. Could the 3-D cells correspond to voids? Could vertices correspond to galaxies and edges to cosmic strings? Could faces correspond to the regions between voids? Could the unavoidable collisions of cosmic strings generate networks giving rise to these tessellations?

  3. Could these tessellations correspond to dynamical equilibria, to gravitational bound states in which the gravitational interactions with the neighboring vertices compensate each other? Could this be true also for the edges and for the matter at the faces? If so, the tessellations would be quasi-static in the sense that they only participate in the cosmic expansion associated with either half-cone of the cd, so that there would be no virial motion.

    Planar sheets in E3 are minimal surfaces, but configurations M1× E2 × S1 ⊂ M4× CP2 might not be allowed by the holography = holomorphy principle in its basic form: could one combine the M4 time coordinate and the geodesic coordinate of S1 to form a complex coordinate of H?

  4. The icosa tetrahedral tessellation assigned to the TGD based model of the genetic code (see this and this) is of special interest. Tetrahedra, octahedra, and icosahedra have triangular faces appearing as the faces of this completely unique tessellation. Octahedra however define void regions, a kind of holes in the tessellation formed by icosahedra and tetrahedra. Could this tessellation occur also in cosmic scales? Could octahedra correspond to regions of cd as geometric vacua in which there is no space-time as a 4-surface? Could galaxies or larger units tend to be associated with the vertices and the faces of the tessellation?
  5. This picture could solve the problem of the quiet Hubble flow. The gravitational interactions of astrophysical objects should induce peculiar velocities, so that the velocity field of astrophysical objects is expected to deviate from the smooth Hubble flow caused by the cosmic expansion. The peculiar velocities are however much smaller than expected; apparently, the gravitational interactions have no effect on the Hubble flow. This finding actually motivated the study of Wempe et al (see this), leading to support for the view that there are two voids containing no dark matter. If the galaxies and larger structures form quasi-static tessellations having cosmic strings as edges, this would be the case.

See the article About the recent TGD based view concerning cosmology and astrophysics or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Sunday, February 08, 2026

Could TGD provide a vision about evolution at the gene level?

Could TGD provide a concrete view of evolution at the level of genes? How could new genes appear? Genetic engineering produces them artificially. Does Nature also perform genetic engineering? One can try to answer the question using the basic ideas of TGD inspired biology.

1. Key notions and ideas

  1. TGD predicts the presence of dark variants of DNA, mRNA, and tRNA associated with flux tubes, with codons realized as dark proton triplets. Amino acids do not carry constant negative charges, so dark proton triplets might not be present permanently at the corresponding monopole flux tubes.

    The hypothesis is that DNA, mRNA, and tRNA, and possibly also AA sequences, pair with their dark variants. Resonant coupling by dark 3N-photons would make this possible: N corresponds to the number of codons or AAs.

    DNA replication, transcription, and translation occur at the level of dark DNA, and the counterparts of these processes at the level of chemistry correspond to an induced shadow dynamics, a kind of mimicry.

  2. There are good reasons to expect that the dark variants of basic information molecules, such as DNA and RNA, consisting of dark proton triplets, could be dynamical. This makes possible a kind of R&D lab. How could this be realized? The DNA double strand is not dynamical but RNA is. If the dynamics of RNA is induced from that of dark RNA, dark RNA could make possible experimentation producing new kinds of genes. The living system would evolve actively rather than by random mutations. Of course, also dark DNA could be dynamical and communicate with ordinary DNA resonantly only when in corresponding quantum states.
  3. Zero energy ontology (ZEO) predicts a fundamental error correction mechanism based on a pair of "big" state function reductions (BSFRs) changing the arrow of time temporarily. When the system finds that something goes wrong, it can make a BSFR and return back in geometric time and restart. After the second BSFR the situation might be better. This would be a fundamental mechanism of learning and problem solving. And perhaps also a fundamental mechanism of evolution.
  4. ZEO inspires the question whether the time reversals of transcription, of the splicing process of RNA after transcription, and even translation could be possible.

    What would the time reversal of the entire sequence look like, if it is possible at all? The sequence decomposes into the transcription of DNA to RNA, followed by the splicing of RNA to mRNA, followed by the transformation of tRNA and mRNA to an AA sequence, with mRNA codons produced from tRNA and from the decay of mRNA. The time reversal would give rise to a non-deterministic reverse engineering of DNA, making possible the generation of modified, more complex genes. What would be nice is that random mutations would be replaced by genetic engineering modifying the existing genome starting from the protein level.

  5. A weaker form of the proposal is that only the reversals of splicing and transcription are possible. Already this could make possible an active evolution at the gene level.
In the following these alternative hypotheses are studied in the TGD framework. The cautious conclusion is that the time reversals of splicing, as the attachment of introns, and of transcription are enough to induce active evolution. Also a rather detailed view of the connection between the genetic code and the cognitive hierarchies predicted by the holography = holomorphy hypothesis emerges.

2. Could one consider the reversal of the translation of DNA to proteins?

Consider now what the reverse of the process leading from DNA to proteins would look like. In the initial state amino acid (AA) sequence and RNA codons are present. The central dogma of biology states that information is transferred in the direction of DNA → RNA → proteins so that the first guess for the answer is "No". Could ZEO help?

  1. At the first step mRNA and tRNA would be generated from AA sequence by reverse translation. This step seems to be the most vulnerable part of the process.
    1. The AA sequence and RNA codons would transform to mRNA and tRNA codons in a process occurring in the reversed time direction. After the first BSFR, mRNA and tRNA would appear at the "past" end of the increasing causal diamond (CD). After the second BSFR they would appear at the "future" end of the CD. They would apparently pop out of the vacuum. One could say that mRNA is reverse engineered from AA. This process is non-deterministic and 1-to-many since many mRNA codons code for a given amino acid.
    2. The process would generate tRNA. Usually tRNA is generated by transcribing an appropriate gene to pre-tRNA. After splicing and other kinds of processing, the tRNA\AA is transferred to the cytoplasm and the AA is added to give the tRNA.

      Suppose that the AA sequence can be fed to the ribosome machinery (somewhat like AA to tRNA\AA) operating in the reverse time direction. If so, the AA sequence is transformed to an mRNA sequence parallel to it by adding mRNA codons from the cytoplasm to the increasing mRNA sequence and by fusing the counterparts of the RNA codons with AAs to give tRNA.

    The basic objections against reverse translation will be considered later.
  2. The second step would be the time reversal of splicing. It would add pieces of RNA to the mRNA obtained in this way. Non-determinism could be involved, and only in special cases would the outcome be the RNA produced in the transcription of the original DNA. This is also so because a given AA corresponds to several RNA codons. Also this step would involve the R&D aspect giving rise to active evolution.

    This would generate new introns, which give rise to higher control levels in transcription. Could the emergence of the control levels in this way correspond to the composition f→ gº f for g: C2→ C2 and f=(f1,f2): H→ C2 defining a space-time surface decomposing to a union of regions given by the roots of f=(f1,f2)=(0,0)? For g=(g1,Id) with degree d=2 the number of roots is doubled. The prime degrees d=2 and d=3 are favoured since in these cases the roots of the iterates can be solved analytically.

    d=4 is the maximal degree allowing analytic expressions for the roots, and a good guess is that it corresponds to the letters A, T, C, G of the code, assignable to the roots of g^(4).

  3. The third step would be the time reversal of transcription, which in general does not produce DNA equivalent to the DNA coding for the AA sequence. Time reversed splicing would increase the complexity of the DNA. After this the DNA sequence would replicate to a double strand.
  4. If the dark variant of the reverse process leading from dark AA sequence to dark DNA can occur, the last step would lead to dark DNA strand, which would pair with ordinary DNA. Dark DNA would replicate and this would induce the replication of ordinary DNA strands leading to double DNA strands.
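The 1-to-many character of reverse translation invoked in step 1 can be quantified directly from the degeneracy of the standard genetic code. A minimal Python sketch (the counts are the textbook values for the standard code; the example peptides are arbitrary):

```python
# Number of mRNA codons coding for each amino acid (standard genetic code,
# one-letter symbols); 61 sense codons in total, the remaining 3 of the
# 64 codons are stop codons.
DEGENERACY = {
    'F': 2, 'L': 6, 'I': 3, 'M': 1, 'V': 4, 'S': 6, 'P': 4, 'T': 4,
    'A': 4, 'Y': 2, 'H': 2, 'Q': 2, 'N': 2, 'K': 2, 'D': 2, 'E': 2,
    'C': 2, 'W': 1, 'R': 6, 'G': 4,
}

def reverse_translation_count(peptide):
    """Number of distinct mRNA codon sequences translating to the given
    amino acid sequence: reverse translation is 1-to-many."""
    n = 1
    for aa in peptide:
        n *= DEGENERACY[aa]
    return n
```

For instance, a Leu-Ser-Arg tripeptide already corresponds to 6·6·6 = 216 different mRNA sequences, while Met-Trp corresponds to exactly one, so the non-determinism of the reverse step grows rapidly with the length of the protein.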
3. Objections against reverse translation

Consider now the objections against the proposal.

  1. There exists no "reverse ribosome" enzyme for the reverse translation from protein to RNA. Could the time reversal occurring in BSFR come to the rescue? Could the ribosome machinery operate in the opposite time direction and in this way make reverse translation possible?

    After the first BSFR, the time reversed process would generate mRNA and tRNA from the AA sequence and the RNA codons and their counterparts in the cytosol, and this looks like a decay of mRNA in the standard time direction.

  2. The tRNA counterpart of RNA could be called tRNA\AA. Is a gene activating its generation needed, or does the cytosol contain enough tRNA\AA generated in the translation? If not, information transfer to DNA to activate it is needed.

    It deserves to be noticed that years ago I considered the possibility that originally AA sequences catalyzed the formation of RNA sequences and decayed in the process. Then the roles were changed: AA sequences started to be generated from RNA sequences. The original process would have been analogous to the reverse translation.

  3. The map RNA → proteins is not invertible: this is however not a problem from the R&D point of view, since the non-uniqueness makes possible the generation of new DNAs. Furthermore, ZEO is motivated by the small failure of classical determinism for the dynamics of space-time surfaces. Non-determinism is necessary if one wants to realize an R&D lab.
  4. Protein folding could be seen as a problem. The protein should be unfolded first, but this process occurs routinely under metabolic energy feed. Proteins also suffer modifications after translation, but even this is not a problem if one wants to make the living organism an R&D lab.
  5. Is it really possible that reverse translation would not have been observed? Could a more prosaic and realistic option be the decay of the AA sequence to AAs and the fusion of the AAs and tRNA\AA codons to tRNA, occurring in the standard view of the generation of tRNA? Indeed, since the AA sequence does not carry a negative constant charge density, the heff hypothesis suggests that it is not accompanied by a dark variant consisting of dark proton triplets (as I have suggested earlier).

    The optimistic hope is that quantum coherence allows the reverse translation to occur for the entire AA sequence or a part of it, at least with some probability. If so, the RNAs combine in the process to an RNA sequence accompanied by dark RNA.

  6. One can also consider the possibility that the reverse translation is dropped away so that one would have only the reverse transcription. This would be enough to produce the introns.
To sum up, the first step of the reverse process is clearly the vulnerable part of the proposal but it is not necessary.

4. Connection of the genetic code with the hierarchy of functional compositions as representation of cognition

An attractive idea is that the genes correspond to 4-surfaces as roots of polynomials gº f defining the corresponding space-time surfaces and that the polynomials g are obtained as, or from, functional compositions of very simple polynomials. A natural identification of the letters A, T, C, G of the genetic code would be as the roots of a polynomial of degree d=4, which also allows analytic solutions for the roots. For the sake of simplicity, one can restrict g=(g1,g2) to g=(g1,Id) in the following.

  1. Why would polynomials of degree 4 rather than of prime degree 2 or 3 appear as the fundamental polynomials? Could the polynomials of degree 4 have a simple Galois group so that the functional decomposition g^(4)= h^(2)º i^(2) is not possible?

    The Galois group is a subgroup of S4, and the isomorphism classes for the Galois group of a quartic are S4, A4, D4 (dihedral), V4 (Klein four-group), and C4 (cyclic). A4 is non-Abelian, has V4 as a normal subgroup, and is not simple as a group. However, if A4 acts as the Galois group of a fourth order polynomial, the polynomial does not allow a decomposition g^(4)= h^(2)º i^(2), so that in this sense it is simple, and A4 is the only subgroup with this property. Hence A4 is unique.

  2. Remarkably, the order of A4 is 12, which is the number of vertices of the icosahedron appearing in the icosa tetrahedral model of the genetic code (see this), in which Hamiltonian cycles through the 12 vertices of the icosahedron define a representation of the 12-note scale and the triangular faces define a bioharmony consisting of the 3-chords defined by the cycle.

  3. Could DNA codon sequences correspond to an abstraction hierarchy defined by functional composites of polynomials g^(4)? A codon would correspond to a polynomial obtained as a functional composite g^(64)=g_1^(4)º g_2^(4)º g_3^(4), and the 64 codons would correspond to the 64 roots of g. As a special case, one has g_1^(4)=g_2^(4)=g_3^(4), but the holography = holomorphy vision does not require this, and the roots can be solved for the iterates also in the general case.

    The polynomial degree associated with g^(64) is 4^3=64. g^(64)=g_1^(4)º g_2^(4)º g_3^(4) defines a 3-fold extension of the extension E of rationals appearing as the coefficients of g^(64) and f, so that the Galois group is not simple and allows a decomposition to normal subgroups defining a cognitive hierarchy.

  4. One should understand why codons are special units of DNA. What if one modifies g^(64) so that it becomes a simple polynomial with prime degree, allowing no functional decomposition, so that a codon would represent irreducible cognition? The prime degree d=61 is the maximal prime degree below 64 and corresponds to the number of codons coding for amino acids; the remaining 3 codons would correspond to stop codons. Could g^(61) be obtained from g^(64) by dropping the 3 monomial factors associated with the stop codons?
  5. What about genes? A gene cannot contain stop codons except at its end. Could genes with N codons correspond to functional compositions of N polynomials g_i^(61), i=1,...,N, having degree 61^N and defining a space-time representative of the gene? Note that the roots of the g_i^(61) are known if they are constructed in the proposed way, so that also the genetic polynomials are cognitively very special!

    The simplicity condition for the genetic polynomials could be realized by dropping out k monomial factors associated with the roots so that the degree d=61^N−k is prime. Genes would correspond to irreducible cognitions obtained from composite cognitions by dropping k codons. Could these non-allowed codons be analogous to stop codons? What could this mean?

  6. In this framework, the addition of introns in the reverse transcription would correspond to the addition of functional composites of polynomials g_k^(61) to the functional composite of the g_i^(61) defining the gene. The added composites should somehow be distinguishable from the codons coding for proteins. Note that it is not quite clear whether the order of the functional compositions is the same as the linear order along the gene.

    The addition of functional composites of g_k^(61) increases the degree of the polynomial associated with the gene. This could imply that the degree is no longer prime. The dropping of the introns in splicing could mean a reduction to the original polynomial of prime degree with a simple Galois group.
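The decomposability question in item 1 can be tested concretely: a monic quartic is a functional composite of two quadratics exactly when, after the depressing shift x → x − a/4, its linear term vanishes, i.e. when it has the form h((x + a/4)^2). A small sketch (the quartic x^4 + 8x + 12 is a standard textbook example of a quartic with Galois group A4):

```python
from fractions import Fraction as F

def decomposes_into_quadratics(a, b, c, d):
    """Does x^4 + a*x^3 + b*x^2 + c*x + d equal h(i(x)) for quadratics
    h, i over the rationals?  This holds exactly when the linear term
    vanishes after the shift x -> x - a/4, i.e. c - a*b/2 + a^3/8 == 0."""
    a, b, c = F(a), F(b), F(c)
    return c - a * b / 2 + a**3 / 8 == 0

# x^4 - 10x^2 + 1 = h(x^2) with h(t) = t^2 - 10t + 1: decomposable
# (its Galois group V4 is a subgroup of D4).
# x^4 + 8x + 12, with Galois group A4: admits no such decomposition.
```

A decomposable quartic has Galois group inside the order-8 group D4, so A4 quartics such as x^4 + 8x + 12 indeed cannot split as h^(2)º i^(2).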

5. Connection with p-adic length scale hypothesis

What is remarkable is that this picture relates directly to the p-adic length scale hypothesis stating that primes p near to but smaller than powers of 2 or 3 are in a central role physically. TGD leads to a generalization of p-adic number fields to their functional counterparts, for which the expansion in powers of a prime is replaced by an expansion in functional powers of polynomials with prime degree p (see this and this). By dividing out k monomial factors one can reduce the degree d=p^n to the prime degree d=p^n−k.

For p=2 or 3 the roots of the polynomials in the hierarchy can be solved analytically, and these hierarchies are expected to be cognitively very special. The genetic code would provide a realization with d=4, with prime degree for codons and genes. The discoveries of Galois would thus reflect themselves in physics, biology and cognition.
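Both the p-adic length scale hypothesis and the prime-degree condition d = 61^N − k are easy to explore numerically. A sketch (Miller-Rabin with the bases below is a deterministic primality test for all the small cases checked here, and it never rejects an actual prime such as the Mersenne prime M127 = 2^127 − 1, which in TGD is assigned to the electron):

```python
def is_prime(n):
    """Miller-Rabin primality test; deterministic for n < 3.3*10^24 with
    these bases, and it never misclassifies a true prime as composite."""
    small = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
    if n < 2:
        return False
    if n in small:
        return True
    if any(n % p == 0 for p in small):
        return False
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def nearest_prime_below(m):
    """Largest prime p <= m, i.e. p = m - k with the minimal k >= 0."""
    k = 0
    while not is_prime(m - k):
        k += 1
    return m - k
```

For example, 61^2 − 2 = 3719 is the largest prime below 61^2, while the primes 2^7 − 1 = 127 and 2^13 − 1 = 8191 below powers of two are the familiar Mersenne primes of the p-adic length scale hypothesis.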

See the article Could life have emerged when the universe was at room temperature? or the chapter Quantum gravitation and quantum biology in TGD Universe.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.