https://matpitka.blogspot.com/

Thursday, February 12, 2026

Are large voids empty of dark matter and how to understand quiet Hubble flow

Sabine Hossenfelder (see this) has a very interesting video about recent findings that challenge the notion of a spherical dark matter halo and provide support for the TGD view. Hossenfelder crystallizes the findings in the statement "We live in between two huge dark matter voids".

The article discussed by Sabine Hossenfelder is published in Nature Astronomy by Wempe et al. with the title "The mass distribution inside the local group and around the local group" (see this). Here is the abstract of the article.

Modelling efforts have long struggled to reproduce the quiet Hubble flow around the Local Group, as they require unrealistically little mass beyond the haloes of the two main galaxies. Here we revisit this using ΛCDM simulations of Local Group analogues with initial conditions constrained to match the observed dynamics of the two main haloes and the surrounding flow. The observations are reconcilable within ΛCDM, but only if mass is strongly concentrated in a plane out to 10 Mpc, with the surface density rising away from the Local Group and with deep voids above and below. This configuration, dynamically inferred, mirrors known structures in the nearby galaxy distribution. The resulting Hubble flow is quiet yet strongly anisotropic, a fact obscured by the paucity of tracers at high supergalactic latitude. This flattened geometry reconciles the dynamical mass estimates of the Local Group with the surrounding velocity field, thus demonstrating full consistency within the standard cosmological model.

One of the motivations for the work of Wempe et al. was the problem of the quiet Hubble flow. The Hubble flow corresponds to cosmic expansion. The gravitational interaction of astrophysical objects is expected to give rise to peculiar velocities as deviations from this flow. The observed deviations from the mean velocity are however too small, as if the local gravitational interactions had no effect.

The second challenge is to understand the generation of large voids with unrealistically low mass density.

  1. In principle, dark matter as particles could have experienced gravitational condensation and have developed localized structures just like ordinary matter. The observed planar pancake structure between the two large voids associated with the Local Group, containing the pair formed by the Milky Way and M31 at its center, can be reproduced by choosing the initial values suitably in the simulations of the ΛCDM model; this would also explain the quiet Hubble flow. The likelihood of this kind of configuration is reported to be between 1/100 and 1/1000 in ΛCDM.
  2. In the absence of any identified candidate for dark particles having only gravitational interactions, one is forced to challenge the notion of the dark matter halo. A further motivation is that the ΛCDM model makes several problematic predictions: a cuspy galactic dark matter halo, too many small satellite galaxies, and too slow growth of galaxies by gravitational condensation. These failures challenge not only the ΛCDM paradigm but also the view that gravitational condensation has formed galaxies.

Does dark matter really consist of exotic particles having only gravitational interactions?

What if dark matter is not realized as exotic particles and there is no halo of galactic dark matter?

  1. The TGD based view of space-time is motivated by the energy problem of general relativity (see this and this) and differs dramatically from that of general relativity (see this). Space-time at the fundamental level corresponds to space-time surfaces in H=M4× CP2 obeying the holography = holomorphy principle and satisfying minimal surface equations irrespective of the action, except at 3-D singularities. Space-time surfaces are analogs of Bohr orbits for particles as 3-surfaces and are slightly non-deterministic although field equations are satisfied (see this).

    General relativistic space-time follows at the quantum field theory limit when the many-sheeted space-time surface is approximated with a single slightly curved region of M4 and the induced gravitational and gauge fields associated with different space-time sheets are summed to give the standard model gauge fields and the gravitational field in long length scales.

  2. TGD predicts a Russian doll cosmology. This follows from a fractal hierarchy of causal diamonds CD=cd× CP2, where cd is a causal diamond of M4 identified as the intersection of future and past directed light-cones. One could regard cd as an analog of the perceptive field of a conscious entity or as a quantization volume. cd is also analogous to an empty cosmology: big bang followed by big crunch. CDs contain space-time surfaces inside them. 4-D cosmologies are in TGD analogous to Bohr orbits of particles identified as 3-surfaces.
  3. The weak failure of classical determinism forces what I call zero energy ontology (ZEO) (see this) and, together with the number theoretic vision (see this, this and this), this leads to the prediction that quantum coherence is possible in all scales. Space-time surfaces can be regarded as quantum coherence regions. The notion of the field body, in particular the predicted monopole flux tubes, means a rather radical modification of the Maxwellian view of classical fields and has far-reaching implications in all scales.
TGD view of galactic dark matter

TGD predicts that galactic dark matter is actually analogous to dark energy and concentrated at long cosmic strings with thickness given by CP2 length scale.

  1. Cosmic strings generate a gravitational acceleration proportional to 1/ρ, predicting the flat velocity spectrum (see this). Both the volume term and the Kähler magnetic energy contribute to the string tension.
  2. The cosmic string model bears a relation to the MOND model. At a certain radius the stringy 1/ρ contribution overcomes the ordinary 1/r^2 contribution from the galactic nucleus, and this critical radius has the critical acceleration of the MOND model as its counterpart (see the sketch after this list). Already in 1982, Zeldovich observed that galaxies are located along linear structures at the boundaries of giant voids.
  3. The model predicts a mechanism for the generation of ordinary matter via the decay of cosmic strings induced by some perturbation, say by their collision, inducing their thickening and reduction of string tension and the transformation of the energy to ordinary matter. This process would replace inflation in the TGD framework.

    No exponential expansion of the Universe is required since TGD predicts quantum coherence in arbitrarily long scales, explaining the near constancy of the CMB temperature.

  4. The cosmic strings were the first extremals, found already during the first year of TGD after the emergence of the basic idea in 1977 and leading to my thesis in 1982. In the discussion inspired by the article of Sabine Hossenfelder (see this), I learned that I am no longer the only physicist suggesting that string like objects could explain galactic dark matter. Also K. Zatrimaylov has proposed something similar (see this, this, this, this, this).
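The crossover radius mentioned in item 2 is easy to estimate. A minimal sketch with purely illustrative numbers (the string tension is inferred from a typical flat rotation velocity; the bulge mass is a hypothetical placeholder, not a TGD prediction):

```python
# Newtonian field of an infinite straight string with linear mass density T:
# a_string = 2*G*T/rho, so the orbital velocity v^2 = 2*G*T is flat.
# The nucleus gives a_N = G*M/r^2; the two match at r_c = M/(2*T).
G = 6.674e-11            # m^3 kg^-1 s^-2
Msun = 1.989e30          # kg
kpc = 3.086e19           # m

v_flat = 220e3                  # m/s, typical flat rotation velocity
T = v_flat**2 / (2 * G)         # string tension implied by the flat velocity
M_bulge = 1e10 * Msun           # hypothetical mass of the galactic nucleus
r_c = M_bulge / (2 * T)         # radius where the 1/rho term takes over
print(f"string tension T ~ {T:.2e} kg/m")            # ~3.6e20 kg/m
print(f"crossover radius r_c ~ {r_c/kpc:.2f} kpc")   # ~0.9 kpc
```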
Going to the concrete level, one can ask whether MW and M31 are associated with the same cosmic string connecting them in the plane between the voids, or whether there are also vertical cosmic strings orthogonal to the plane and going through MW and M31. The cosmic strings carry monopole flux and must be closed: could MW and M31 be associated with the same closed cosmic string? There is indeed evidence that MW was formed in a collision of two cosmic strings, and such collisions are bound to occur for cosmic strings glued to background 3-surfaces.

TGD view of voids explains the quiet Hubble flow

An entire hierarchy of voids has been observed. Besides the Local Void (or rather the pair of voids having the pancake-like structure between them) there are other large voids such as the KBC Void, the Boötes Void, and the Giant Void, appearing in various scales.

In the standard view, gravitational condensation would have led to the generation of voids. This could be the case also in TGD. Could TGD say something more detailed about them?

  1. Poincaré invariance and Lorentz invariance are exact symmetries of TGD. The preservation of these symmetries, lost in GRT, was the basic motivation of TGD. cd allows a slicing by Lorentz invariant hyperboloids of constant light-cone proper time. The hyperbolic 3-space H3 as a 3-surface of constant cosmic time is therefore a fundamental object in TGD.

    H3 allows an infinite number of tessellations (see this and this) as analogs of the lattices of E3, characterized by a symmetry group which is an infinite discrete subgroup of the Lorentz group SO(1,3). The simplest tessellations are honeycombs consisting of Platonic solids.

  2. There are indications that astrophysical objects could be assigned to the vertices of these kinds of tessellations. The quantization of cosmic redshifts is one piece of evidence for this. The recent finding of an unexpectedly strong gravitational radiation background (see this) could be understood in terms of diffraction of gravitational radiation in a tessellation having stars at its vertices.

    Could also galaxies tend to form this kind of tessellations? These tessellations have vertices, edges and faces (sheets) as basic building blocks. Could the 3-D cells correspond to voids? Could vertices correspond to galaxies and edges to cosmic strings? Could faces correspond to the regions between voids? Could the unavoidable collisions of the cosmic strings generate networks giving rise to these tessellations?

  3. Could these tessellations emerge as dynamical equilibria corresponding to gravitational bound states, in which the gravitational interactions with the neighboring vertices compensate each other? Could this be true also for the edges and for the matter at the sheets? If so, the tessellations would be quasi static in the sense that they only participate in the cosmic expansion associated with either half-cone of the cd. Peculiar (virial) motions would be absent.

    Planar sheets in E3 are minimal surfaces: are configurations M1× E2× S1 ⊂ M4× CP2 allowed by the holography = holomorphy principle in its basic form? Could one combine the M4 time coordinate and the geodesic coordinate of S1 to form a complex coordinate of H?

  4. The icosa tetrahedral tessellation assigned to the TGD based model of the genetic code (see this and this) is of special interest. Tetrahedra, octahedra, and icosahedra have triangular faces appearing as faces of this completely unique tessellation. Octahedra however define void regions, a kind of holes in the tessellation formed by icosahedra and tetrahedra. Could this tessellation occur also in cosmic scales? Could octahedra correspond to regions of cd as geometric vacua in which there is no space-time as a 4-surface? Could galaxies or larger units tend to be associated with the vertices and the faces of the tessellation?
  5. This picture could solve the problem of the quiet Hubble flow. The gravitational interactions of astrophysical objects should induce peculiar velocities so that the velocity field of astrophysical objects is expected to deviate from the smooth Hubble flow caused by the cosmic expansion. The peculiar velocities are however much smaller than expected. Apparently, the gravitational interactions have no effect on the Hubble flow. This finding actually motivated the study of Wempe et al. (see this), lending support to the view that there are two voids containing no dark matter. If the galaxies and larger structures form quasi static tessellations having cosmic strings as edges, this would be the case.

    See the article About the recent TGD based view concerning cosmology and astrophysics or the chapter with the same title.

    For a summary of earlier postings see Latest progress in TGD.

    For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Sunday, February 08, 2026

Could TGD provide a vision about evolution at the gene level?

Could TGD provide a concrete view of evolution at the level of genes? How could new genes appear? Genetic engineering produces them artificially. Does Nature also perform genetic engineering? One can try to answer the question using the basic ideas of TGD inspired biology.

1. Key notions and ideas

  1. TGD predicts the presence of dark variants of DNA, mRNA, and tRNA associated with flux tubes, with codons realized as dark proton triplets. Amino acids do not carry constant negative charges so that dark proton triplets might not be present at the corresponding monopole flux tubes permanently.

    The hypothesis is that DNA, mRNA, and tRNA and possibly also AA sequences pair with their dark variants. Resonance coupling by dark 3N-photons would make this possible (N corresponds to the number of codons or AAs).

    DNA replication, transcription, and translation would occur at the level of dark DNA, and the counterparts of these processes at the level of chemistry would correspond to an induced shadow dynamics, a kind of mimicry.

  2. There are good reasons to expect that the dark variants of basic information molecules, such as DNA and RNA, consisting of dark proton triplets, could be dynamical. This makes possible a kind of R&D lab. How could this be realized? The DNA double strand is not dynamical but RNA is. If the dynamics of RNA is induced from that of dark RNA, dark RNA could make possible experimentation producing new kinds of genes. The living system would evolve actively rather than by random mutations. Of course, also dark DNA could be dynamical and communicate with ordinary DNA resonantly only when in corresponding quantum states.
  3. Zero energy ontology (ZEO) predicts a fundamental error correction mechanism based on a pair of "big" state function reductions (BSFRs) changing the arrow of time temporarily. When the system finds that something goes wrong, it can make a BSFR and return back in geometric time and restart. After the second BSFR the situation might be better. This would be a fundamental mechanism of learning and problem solving. And perhaps also a fundamental mechanism of evolution.
  4. ZEO inspires the question whether the time reversals of transcription, of the splicing process of RNA after transcription, and even translation could be possible.

    What would the time reversal of the entire sequence, decomposing to transcription of DNA to RNA, followed by the splicing of RNA to mRNA, followed by the transformation of tRNA and mRNA to an AA sequence (with mRNA codons produced from tRNA and from the decay of mRNA), look like, if it is possible at all? This would give rise to a non-deterministic reverse engineering of DNA, making possible the generation of modified, more complex genes. What would be nice is that random mutations would be replaced by genetic engineering modifying the existing genome starting from the protein level.

  5. A weaker form of the proposal is that only the reversals of splicing and transcription are possible. Already this could make possible an active evolution at the gene level.
In the following these alternative hypotheses are studied in the TGD framework. The cautious conclusion is that the time reversals of splicing (as attachment of introns) and of transcription are enough to induce active evolution. Also a rather detailed view about the connection of the genetic code and the cognitive hierarchies predicted by the holography = holomorphy hypothesis emerges.

2. Could one consider the reversal of the translation of DNA to proteins?

Consider now what the reverse of the process leading from DNA to proteins would look like. In the initial state amino acid (AA) sequence and RNA codons are present. The central dogma of biology states that information is transferred in the direction of DNA → RNA → proteins so that the first guess for the answer is "No". Could ZEO help?

  1. At the first step mRNA and tRNA would be generated from AA sequence by reverse translation. This step seems to be the most vulnerable part of the process.
    1. The AA sequence and RNA codons would transform to mRNA and tRNA codons in a process occurring in the reversed time direction. After the first BSFR, mRNA and tRNA would appear at the "past" end of the increasing causal diamond (CD). After the second BSFR they would appear at the "future" end of the CD. They would apparently pop out of the vacuum. One could say that mRNA is reverse engineered from AA. This process is non-deterministic and 1-to-many since several mRNA codons code for a given amino acid (see the sketch after this list).
    2. The process would generate tRNA. Usually tRNA is generated by transcribing an appropriate gene to pre-tRNA. After splicing and other kinds of processing, the tRNA without its amino acid (tRNA\AA) is transferred to the cytoplasm and the AA is added to give the loaded tRNA.

      Suppose that the AA sequence can be fed to the ribosome machinery (somewhat like AA to tRNA\AA) operating in the reverse time direction. If so, the AA sequence is transformed to an mRNA sequence parallel to it by adding mRNA codons from the cytoplasm to the growing mRNA sequence and fusing the counterparts of RNA codons to AAs to give tRNA.

    The basic objections against reverse translation will be considered later.
  2. The second step would be the time reversal of splicing. It would add pieces of RNA to the mRNA obtained in this way. Non-determinism could be involved, and only in special cases would the outcome be the RNA produced in the transcription of the original DNA. This is also so because a given AA corresponds to several RNA codons. Also this step would involve the R&D aspect giving rise to active evolution.

    This would generate new introns, which give rise to higher control levels in transcription. Could the emergence of the control levels in this way correspond to the composition f→ gº f for g: C2→ C2 and f=(f1,f2): H→ C2 defining a space-time surface decomposing to a union of regions given by the roots f=(f1,f2)=(0,0)? For g=(g1,Id) with degree d=2 the number of roots is doubled. The prime degrees d=2 and d=3 are favoured since in these cases the roots of the iterates can be solved analytically.

    d=4 is the maximal degree allowing analytic expressions for the roots, and a good guess is that it corresponds to the letters A, T, C, G of the code assignable to the roots of g^(4).

  3. The third step would be the time reversal of transcription and would in general not produce DNA equivalent to the DNA coding for the AA sequence. Time reversed splicing would increase the complexity of the DNA. After this the DNA sequence would replicate to a double strand.
  4. If the dark variant of the reverse process leading from dark AA sequence to dark DNA can occur, the last step would lead to dark DNA strand, which would pair with ordinary DNA. Dark DNA would replicate and this would induce the replication of ordinary DNA strands leading to double DNA strands.
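The 1-to-many character of reverse translation mentioned in step 1 above is easy to quantify. A minimal sketch, using a hypothetical subset of the standard codon table (written in the DNA alphabet), enumerates the mRNA preimages of a short AA sequence:

```python
from itertools import product

# Subset of the standard genetic code (DNA alphabet); illustrative only.
CODONS = {
    'M': ['ATG'],
    'W': ['TGG'],
    'F': ['TTT', 'TTC'],
    'L': ['TTA', 'TTG', 'CTT', 'CTC', 'CTA', 'CTG'],
    'S': ['TCT', 'TCC', 'TCA', 'TCG', 'AGT', 'AGC'],
}

def reverse_translations(aa_seq):
    """Enumerate all codon sequences coding for aa_seq."""
    return [''.join(codons) for codons in product(*[CODONS[a] for a in aa_seq])]

preimages = reverse_translations('MFLS')
print(len(preimages), "preimages, e.g.", preimages[0])  # 1*2*6*6 = 72 preimages
```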
3. Objections against reverse translation

Consider now the objections against the proposal.

  1. There exists no "reverse ribosome enzyme" for the reverse translation from protein to DNA. Could the time reversal occurring in BSFR come to the rescue? Could the ribosome machinery operate in the opposite time direction and in this way make possible reverse translation?

    After the first BSFR, the time reversed process would generate mRNA and tRNA from the AA sequence and RNA codons and their counterparts in the cytosol, and this looks like a decay of mRNA in the standard time direction.

  2. The tRNA counterpart of RNA could be called tRNA\AA. Is a gene activating its generation needed, or does the cytosol contain enough tRNA\AA generated in translation? If not, information transfer to DNA to activate it is needed.

    It deserves to be noticed that years ago I considered the possibility that originally AA sequences catalyzed the formation of RNA sequences and decayed in the process. Then the roles changed: AA sequences started to be generated by RNA sequences. This process would have been analogous to the reverse translation.

  3. The map RNA → proteins is not invertible: this is however not a problem from the R&D point of view since it would make possible the generation of new DNAs. Furthermore, ZEO is motivated by the small failure of classical determinism for the dynamics of space-time surfaces. Non-determinism is necessary if one wants to realize an R&D lab.
  4. Protein folding could be seen as a problem. The protein should be unfolded first, but this process occurs routinely under metabolic energy feed. Proteins also undergo modifications after translation, but even this is not a problem if one wants to make the living organism an R&D lab.
  5. Is it really possible that reverse translation would not have been observed? Could a more prosaic and realistic option be the decay of the AA sequence to AAs and the fusion of AAs and tRNA\AA codons to tRNA occurring in the standard view about the generation of tRNA? Indeed, since the AA sequence does not carry a constant negative charge density, the heff hypothesis suggests that it is not accompanied by a dark variant consisting of dark proton triplets (as I have suggested earlier).

    The optimistic hope is that quantum coherence allows the reverse translation to occur for the entire AA sequence or a part of it, at least with some probability. If so, the RNAs combine in the process to an RNA sequence accompanied by dark RNA.

  6. One can also consider the possibility that the reverse translation is dropped away so that one would have only the reverse transcription. This would be enough to produce the introns.
To sum up, the first step of the reverse process is clearly the most vulnerable part of the proposal, but it is not necessary.

4. Connection of the genetic code with the hierarchy of functional compositions as representation of cognition

An attractive idea is that the genes correspond to 4-surfaces as roots of polynomials gº f defining the corresponding space-time surfaces and that the polynomials g are obtained as, or from, functional compositions of very simple polynomials. A natural identification of the letters A, T, C, G of the genetic code would be as roots of a polynomial of degree d=4, which also allows analytic solutions for the roots. For the sake of simplicity, one can restrict g=(g1,g2) to g=(g1,Id) in the following.

  1. Why would polynomials of degree 4, rather than prime degree 2 or 3, appear as fundamental polynomials? Could the polynomials of degree 4 have a simple Galois group so that the functional decomposition g^(4)= h^(2)º i^(2) is not possible?

    The Galois group is a subgroup of S4 and the isomorphism classes for the Galois group of a quartic are S4, A4, D4 (dihedral), V4 (Klein four-group), and C4 (cyclic). A4 is non-Abelian, has V4 as a normal subgroup, and is not simple. However, if A4 acts as the Galois group of a fourth order polynomial, the polynomial does not allow a decomposition g^(4)= h^(2)º i^(2), so that in this sense it is simple, and A4 is also the only subgroup with this property. Hence A4 is unique. (See the sketch after this list.)

  2. Remarkably, the order of A4 is 12, which is the number of vertices of the icosahedron appearing in the icosa tetrahedral model of the genetic code (see this), in which Hamiltonian cycles through the 12 vertices of the icosahedron define a representation of the 12-note scale and the triangular faces define a bioharmony consisting of 3-chords defined by the cycle.

  3. Could DNA codon sequences correspond to an abstraction hierarchy defined by functional composites of polynomials g^(4)? Codons would correspond to polynomials obtained as functional composites g^(64)=g1^(4)º g2^(4)º g3^(4) and would correspond to the 64 roots of g. As a special case, one has g1^(4)=g2^(4)=g3^(4), but the holography = holomorphy vision does not require this: the roots can be solved for the iterates also in the general case.

    The polynomial degree associated with g^(64) is 4^3=64. g^(64)=g1^(4)º g2^(4)º g3^(4) defines a 3-fold extension of the extension E of rationals appearing in the coefficients of g^(64) and f, so that the Galois group is not simple and allows a decomposition to normal subgroups defining a cognitive hierarchy.

  4. One should understand why codons are special units of DNA. What if one modifies g^(64) so that it becomes a simple polynomial with prime degree allowing no functional decomposition, so that a codon would represent irreducible cognition? The prime degree d=61 is the maximal such degree below 64 and corresponds to the number of codons coding for amino acids. 3 codons would correspond to stop codons. Could g^(61) be obtained from g^(64) by dropping 3 monomial factors associated with the stop codons?
  5. What about genes? A gene cannot contain stop codons except at its end. Could genes with N codons correspond to functional compositions of N polynomials gi^(61), i=1,...,N, having degree 61^N and defining a space-time representative of the gene? Note that the roots of gi^(61) are known if they are constructed in the proposed way, so that also the genetic polynomials are cognitively very special!

    The simplicity condition for the genetic polynomials could be realized by dropping out k monomial factors associated with the roots so that the degree d=61^N-k is prime. Genes would correspond to irreducible cognitions obtained from composite cognitions by dropping k roots. Could these non-allowed roots be analogous to stop codons? What could this mean?

  6. In this framework, the addition of introns in the reverse transcription would correspond to the addition of functional composites of polynomials gk^(61) to the functional composite of the gi^(61) defining the gene. The added composites should be somehow distinguishable from the codons coding for proteins. Note that it is not quite clear whether the order of the functional compositions is the same as the linear order along the gene.

    The addition of functional composites of gk^(61) increases the degree of the polynomial associated with the gene. This could imply that it is no longer of prime degree. The dropping of the introns in splicing could mean a reduction to the original prime-degree polynomial with a simple Galois group.
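A small sketch of the decomposability and degree counting used above. SymPy's decompose finds functional decompositions g = hºi when they exist; the quartics below are hypothetical examples chosen only to illustrate the two cases:

```python
from sympy import symbols, decompose, isprime

x = symbols('x')

# A quartic that decomposes into two quadratics, g = h o i:
print(decompose(x**4 + 2*x**2 + 3))   # [x**2 + 2*x + 3, x**2]
# A quartic with no functional decomposition:
print(decompose(x**4 + x + 1))        # [x**4 + x + 1]

# A composite of three degree-4 polynomials has degree 4**3 = 64;
# dropping 3 monomial factors leaves the prime degree 61, the number
# of codons coding for amino acids.
print(4**3, isprime(4**3 - 3))        # 64 True
```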

5. Connection with p-adic length scale hypothesis

What is remarkable is that this picture relates directly to the p-adic length scale hypothesis stating that primes p near to but smaller than powers of 2 or 3 are in a central role physically. TGD leads to a generalization of p-adic number fields to their functional counterparts, for which the expansion in powers of a prime is replaced by an expansion in functional powers of polynomials with prime degree p (see this and this). By dividing out k monomial factors one can reduce the degree d=p^n to the prime degree d=p^n-k.

For p=2 or 3 the roots of the polynomials in the hierarchy can be solved analytically and these hierarchies are expected to be cognitively very special. The genetic code would provide a realization with d=4, and for codons and genes one would have prime degree. The discovery of Galois would reflect itself in physics, biology and cognition.
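As a quick check of the arithmetic, a sketch listing the largest prime below some powers of 2 (the Mersenne cases 2^n-1, such as M127=2^127-1 assigned to the electron in the TGD literature, have distance 1; the powers chosen are illustrative):

```python
from sympy import prevprime, isprime

# Largest prime below 2**n and its distance from 2**n.
for n in [3, 4, 6, 7, 89, 107, 127]:
    p = prevprime(2**n)
    print(n, 2**n - p)   # distance 1 for the Mersenne cases n = 3, 7, 89, 107, 127
# Note 2**6 = 64 gives p = 61 = 64 - 3, the codon count of the previous section.
print(prevprime(2**6), isprime(2**127 - 1))   # 61 True
```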

See the article Could life have emerged when the universe was at room temperature? or the chapter Quantum gravitation and quantum biology in TGD Universe.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, February 07, 2026

Basic ingredients of life found in conditions prevailing in the early universe

Using the James Webb Space Telescope, astronomers have detected complex organic molecules frozen in ice around a baby star called ST6, located in the Large Magellanic Cloud, a neighboring galaxy about 160,000 light-years away (see this). There is an article about the finding by M. Sewilo et al. with the title "Protostars at Subsolar Metallicity: First Detection of Large Solid-State Complex Organic Molecules in the Large Magellanic Cloud", published in Astrophysical Journal Letters (see this).

Molecules like methanol, ethanol, acetaldehyde, methyl formate, and even acetic acid, the key ingredient of vinegar, were locked inside cosmic ice. These chemicals belong to the molecules that help kick-start the chemistry needed for life. Until now, finding them in solid ice was incredibly rare, even inside our own galaxy.

The Large Magellanic Cloud is a harsh place. It has fewer heavy elements and stronger radiation, conditions similar to those in the early universe, long before planets like Earth existed. Despite this, complex organic chemistry is possible there. Does this mean that the ingredients for life don't need perfect conditions, or that there is new physics involved? Even more, is this new physics universal?

It is very difficult to imagine how a complex biochemistry (biochemistry as we understand it) could have existed long before planets like Earth. New physics seems to be required, and TGD predicts it. The notion of a field body (field-/magnetic body) carrying large heff phases of ordinary matter (see this, this, this, this and this) could explain how the organic molecules crucial for life could have formed in these circumstances.

  1. Water is a key element of life in TGD inspired quantum biology. Therefore the fact that the molecules were found inside ice is a valuable hint. The Pollack effect (see this) occurs when water is irradiated with, say, infrared photons arriving from the Sun or some other source, in this case the protostar. The Pollack effect generates negatively charged regions, exclusion zones (EZs), with rather strange properties such as the ability to kick out impurities, which seems to be in conflict with the second law of thermodynamics.

    Protons must go somewhere from the EZ and the TGD inspired proposal is that they go to the magnetic body of the system and form a large heff phase, in many cases behaving like dark matter. These phases are not however identifiable as galactic dark matter. heff serves as a measure for the algebraic complexity of the space-time surface and also for the level of conscious intelligence, a kind of universal IQ. The magnetic body naturally controls the physics of ordinary matter.

    What matters is the energy needed to kick ordinary protons to the magnetic body: the needed energy corresponds to the energy difference between -OH and O- + dark protons in the magnetic body. These two states of proton are proposed to define what might be regarded as a universal topological qubit (see this, this and this). Also the formation of organic molecules as bound states liberates binding energy and can induce the generalized Pollack effect.

  2. The formation of dark protons at the magnetic body of water could represent one of the first steps in the evolution of life and already at this stage the dark analogs of the basic information molecules, genetic code and metabolism could have emerged. The chemical realization of the genetic code would have emerged later.
Could the Pollack effect and the notion of the magnetic body allow us to understand the formation of the basic molecules of life found to exist in the Magellanic Cloud? The -OH group, appearing also in water molecules, is essential for the standard form of the Pollack effect, so it could be important also now.
  1. Complex alcohols can contain more than one -OH group bound to a saturated carbon atom. Simple alcohols (see this) obey the formula H-(CH2)n-OH. Both methanol (CH3)-OH and ethanol (CH3)-(CH2)-OH contain the -OH group so that the Pollack effect is possible for them and could explain the special effects of alcohol on consciousness. Note that methane CH4 emerges from the decomposition of organic materials.
  2. Acetaldehyde (CH3)-(H-C=O) can be formed by a partial oxidation of ethanol in an exothermic reaction at temperatures 500-650 C. The reaction equation for the partial oxidation of ethanol by O2 is 2 (CH3)-(CH2)-OH + O2 → 2 (CH3)-(H-C=O) + 2 H2O. This is an oxidative dehydrogenation. (The atom balance of this and the reactions below is checked in the sketch after this list.)

    One can imagine the following sketch for what might happen. At the first step the protons of -OH groups of ethanols are kicked to dark protons at the magnetic body. This would induce the transformation of C-O bonds to C=O bonds, forcing C to give up the second H atom of CH2. The dark protons would drop back to ordinary protons and together with electrons and the two H atoms and oxygens of O2 would form 2 water molecules.

  3. Methyl formate (CH3)-O-(H-C=O) can be produced in the condensation reaction (CH3)-OH + H-(O=C-OH) → (CH3)-O-(H-C=O) + H2O of methanol (CH3)-OH and formic acid H-(O=C-OH). Dehydration is involved.

    OH group must be replaced with O=C-OH in the reaction. One can imagine that the proton of -OH is temporarily transformed to a dark proton at the magnetic body and facilitates the replacement. After that the dark proton, O- and H- of H-(O=C-OH) combine to form the water molecule.

  4. Acetic acid (CH3)-(O=C-OH) is formed by the replacement H2 → =O in the CH2 group, occurring in the condensation reaction of ethanol and oxygen as (CH3)-(CH2)-OH + O2 → (CH3)-(O=C-OH) + H2O, involving dehydration. Also now the proton of -OH could transform to a dark proton. This should induce the replacement CH2 → C=O, the splitting of O2 and the formation of H2O. The dark proton would drop back and -OH would be regenerated.
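As promised above, a small bookkeeping sketch checking that the reaction equations of this list are atom-balanced (plain formula strings; illustrative only):

```python
import re
from collections import Counter

def atoms(formula):
    """Count atoms in a plain Hill-style formula string like 'C2H6O'."""
    counts = Counter()
    for element, n in re.findall(r'([A-Z][a-z]?)(\d*)', formula):
        counts[element] += int(n) if n else 1
    return counts

def balanced(lhs, rhs):
    total = lambda side: sum((atoms(f) for f in side), Counter())
    return total(lhs) == total(rhs)

# 2 ethanol + O2 -> 2 acetaldehyde + 2 H2O
print(balanced(['C2H6O', 'C2H6O', 'O2'], ['C2H4O', 'C2H4O', 'H2O', 'H2O']))  # True
# methanol + formic acid -> methyl formate + H2O
print(balanced(['CH4O', 'CH2O2'], ['C2H4O2', 'H2O']))                        # True
# ethanol + O2 -> acetic acid + H2O
print(balanced(['C2H6O', 'O2'], ['C2H4O2', 'H2O']))                          # True
```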
Could the detected molecules allow us to conclude anything about the presence of more complex biomolecules such as sugars and riboses crucial for life?
  1. Sugars or carbohydrates (see this) involve monosaccharides with the formula CnH2nOn, n in the range 5 to 7, which have a key role in metabolism. They contain a relatively large number of -OH groups associated with a ring structure. The C6H12O6 sugars (fructose, galactose, glucose) have 4 -OH groups. Yeasts break down fructose, galactose and glucose to ethanol in alcoholic fermentation. More generally, alcohols such as mannitol emerge in the reduction of saccharides.
  2. TGD suggests that the metabolic energy liberated as sugars burn to alcohols involves the transfer of dark protons to the magnetic bodies of the acceptor molecules followed by their transformation to ordinary protons liberating the metabolic energy. This would occur in the ADP → ATP process.
  3. Ribose C5H10O5 (see this), appearing in RNA, contains 4 -OH groups and deoxyribose C5H10O4 appearing in DNA contains 3 -OH groups. Phosphorylated ribose appears in ADP, ATP, coenzyme A and NADH.

    Biological and chemical reduction and fermentation can produce ribitol C5H12O5, which is a sugar alcohol. When ribitol is subjected to hydroxyl radicals, C-C bonds are cleaved and formic acid ((H-C=O)-(OH)) appears as a decay product. Methanol was detected: could formic acid transform to methanol ((CH3)-(OH)) in the presence of water by the reaction (H-C=O)-(OH) + H2O → (CH3)-(OH) + O2?

To conclude, the temporary transformation of proton to dark proton at the magnetic body by Pollack effect could be involved with all these reactions.

See the article Could life have emerged when the universe was at room temperature? or the chapter Quantum gravitation and quantum biology in TGD Universe.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Friday, February 06, 2026

Quantum geometry and M8-H duality

Quantum geometry and quantum metric are very interesting notions (see this). What does one mean by a quantum metric?
  1. In condensed matter physics, many particle states are labelled by particle momenta. The Berry phase is associated with a U(1) connection in this momentum space. The quantum metric means an extension to a Kähler metric involving both the U(1) connection and a metric. It defines a distance between quantum states labelled by momenta on the 2-D Fermi surface.
  2. The quantum metric in the condensed matter sense is defined in momentum space, or rather on the Fermi surface, rather than in Hilbert space. Here I disagree with the claim of the article (see this). At the Hilbert space level, geometrization would replace the flat Kähler metric with a curved metric and replace global linear superposition with its local variant.
  3. What is essential for this interpretation is that the momentum space (2-D Fermi sphere) is endowed with a Kähler geometry. Both momentum space and position space are geometrized.
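To make the condensed matter notion concrete, here is a minimal numerical sketch of the quantum geometric tensor for the lower band of a two-band model H(k) = d(k)·σ (the d-vector is a hypothetical Qi-Wu-Zhang-type choice; the real part gives the quantum metric, -2 times the imaginary part the Berry curvature):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def lower_band_state(kx, ky, m=1.0):
    # Hypothetical two-band model; m is an illustrative gap parameter.
    d = [np.sin(kx), np.sin(ky), m - np.cos(kx) - np.cos(ky)]
    H = d[0]*sx + d[1]*sy + d[2]*sz
    _, v = np.linalg.eigh(H)        # eigenvalues ascending
    return v[:, 0]                  # lower-band eigenvector

def quantum_geometric_tensor(kx, ky, eps=1e-5):
    u = lower_band_state(kx, ky)
    du = []
    for dkx, dky in [(eps, 0.0), (0.0, eps)]:
        up = lower_band_state(kx + dkx, ky + dky)
        up = up * np.exp(-1j*np.angle(np.vdot(u, up)))  # smooth gauge choice
        du.append((up - u)/eps)
    P = np.eye(2) - np.outer(u, u.conj())               # projector off the band
    Q = np.array([[np.vdot(du[i], P @ du[j]) for j in range(2)] for i in range(2)])
    return Q.real, -2*Q.imag    # quantum metric g_ij, Berry curvature F_ij

g, F = quantum_geometric_tensor(0.3, 0.7)
print("quantum metric g =\n", g)
print("Berry curvature F_xy =", F[0, 1])
```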
In the TGD framework, M8-H duality implies something analogous but not equivalent.
  1. Space-times are 4-surfaces X4 in H=M4× CP2: this hypothesis geometrizes standard model interactions and solves the energy problem of general relativity. Holography = holomorphy hypothesis leads to an exactly solvable classical theory.
  2. M8 serves as the analog of an 8-D momentum space for H=M4× CP2 and Y4 generalizes the notion of momentum space. One can define for the points of M8 the Minkowskian inner product xy as the real part of their octonionic product (see the sketch below this list).
  3. Space-time surfaces X4 in H=M4× CP2 have as M8 duals 4-surfaces Y4 in M8 related by M8-H duality. Y4 generalizes the notion of the Fermi sphere and can be regarded as the 4-D space of 4-momenta representing the dispersion relation (see this and this).

    The associativity condition for the tangent space of Y4 defines the number theoretic dynamics of Y4, and local G2 transformations allow us to construct general solutions Y4 from very simple basic solutions Y4(f) determined by the roots f(o)=0 of analytic functions f(o) with real coefficients, which can be restricted to an extension of rationals. Polynomials and rational functions f appear as important special cases and form hierarchies since the basic arithmetic operations and functional composition produce new solutions.

  4. How to define the analog of quantum geometry?
    1. The values of the functions f(o) can be regarded as octonions such that the imaginary part is proportional to the radial octonion unit and thus allows an interpretation as an ordinary imaginary unit. For two tangent vectors x, y of quaternionic Y4 the real part of xy defines a Minkowskian inner product. The product xy is a quaternion and could be seen as a quaternionic analog of the Kähler form. An analog of a quaternion structure would be in question. Could this define the number theoretic version of quantum geometry?
    2. CP2 allows a quaternion structure but does not allow a hyper-Kähler structure: a hyper-Kähler structure with 4 covariantly constant quaternionic units defined by the metric and 3 covariantly constant Kähler forms is not possible for CP2. One can however define an induced quaternionic structure in X4. Could one induce the metric and spinor curvature of X4 to Y4?

      The quaternionic tangent spaces of Y4 are labelled by the points of CP2 and the corresponding CP2 point can be taken as a local coordinate of M8. The metric of Y4 could be induced from that of X4 by M8-H duality. In the M4⊂ M8 degrees of freedom, the inversion map M4→ M4⊂ H, motivated by the Uncertainty Principle, defines the M8-H duality.

      There are singularities at which the CP2 point associated with Y4 is not unique. In the case of CP2 type extremals, the CP2 points form a 3-D surface X3 whose points correspond to a single point y of Y4: Y4 has a coordinate singularity at y, which blows up to X3 in H.

    Could these two approaches give equivalent quaternionic quantum metrics in Y4?
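The claim of item 2 above, that the real part of the octonionic product gives a Minkowskian inner product of signature (1,7), is easy to verify numerically. A minimal sketch using the Cayley-Dickson construction, with the standard doubling formula (p,q)(r,s) = (pr - s̄q, sp + qr̄):

```python
import numpy as np

def cd_mult(a, b):
    """Cayley-Dickson product; length-8 arrays multiply as octonions."""
    n = len(a)
    if n == 1:
        return a * b
    h = n // 2
    p, q, r, s = a[:h], a[h:], b[:h], b[h:]
    conj = lambda u: np.concatenate(([u[0]], -u[1:]))
    return np.concatenate((cd_mult(p, r) - cd_mult(conj(s), q),
                           cd_mult(s, p) + cd_mult(q, conj(r))))

rng = np.random.default_rng(0)
x, y = rng.normal(size=8), rng.normal(size=8)
re_xy = cd_mult(x, y)[0]                      # real part of the octonion product
minkowski = x[0]*y[0] - np.dot(x[1:], y[1:])  # signature (1,7) inner product
print(np.isclose(re_xy, minkowski))           # True
```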

    See the article "Does M8-H duality reduce to local G2 symmetry?" or the chapter with the same title.

    For a summary of earlier postings see Latest progress in TGD.

    For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, January 31, 2026

A critical view of heff hypothesis

The heff hypothesis is one of the key elements of TGD and of the TGD inspired theory of consciousness. One can raise several critical questions related to it.

1. Identification of heff

Consider first the identification of heff.

  1. The idea is that the TGD Universe is theoretician friendly (see this and this). The value of heff increases when the perturbative QFT, as a long range limit of TGD, ceases to converge. Since coupling strengths are proportional to 1/ℏeff, the increase of heff guarantees convergence. In TGD, quantum coherence is predicted to be present in all scales, and this kind of perturbation theory is possible even when the interacting systems have macroscopic sizes so that masses and charges are very large.
  2. This predicts that heff has a spectrum and depends on the products of the charges appearing in a given coupling strength. Since in TGD classical fields define the vertices (see this), this suggests that one can assign heff to gravitational, electric, and perhaps also color and weak coupling strengths and heff is proportional to a product of charges and is a two-particle parameter unlike the ordinary Planck constant.

    The proposed mathematical interpretation is that the 2-particle character reflects a Yangianization of the basic symmetries (see this and this). Yangian symmetries do not reduce to single particle symmetries but can also act on pairs, triplets, ... of particles. One would have poly-local symmetries so that the charge unit heff would depend on quantum numbers of particles in the vertex. The monopole flux tube connections between particle-like 3-surfaces are a natural candidate for inducing Yangianization. The problem indeed is that monopole flux tubes carry the large heff phases.

  3. Perturbative QFT is assumed to apply at the QFT limit when the many-sheeted space-time is replaced with a single region of M4, the sums of the induced gauge potentials for the space-time sheets define the gauge fields, and the sum over the CP2 parts of the induced metrics defines the gravitational field.

    The objection is that the QFT approach does not apply at the fundamental level of TGD: there is no path integral. Is there any way to replace this argument with an argument holding true at the fundamental level?

2. Number theoretic vision and heff

Number theoretic vision leads to a possible identification of heff.

  1. Number theoretic vision leads to the proposal that heff characterizes the algebraic complexity of the many-sheeted space-time surface. If the space-time surface is defined in terms of the roots of an analytic function pair (f1,f2), the extension of rationals appearing in the coefficients of the fi would define heff as its dimension, and heff would not depend on the form of the fi.

    The number of roots, that is the number of space-time regions appearing as solutions of (f1,f2)=(0,0), would also be a natural candidate for the value of heff, in particular if the fi are polynomials.

    One can generalize the ordinary Galois group so that it acts as flows and permutes different roots of (f1,f2)=(0,0). In this case the number of roots could define heff. Certainly it is a measure for the complexity.

    Suppose that f2 is kept constant and f1=P1 is a polynomial. In this case the dimension of the algebraic extension associated with P1 could determine the value of heff. Also the degree of P1, giving the number of roots, can be considered.
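The two candidate measures need not agree, as the following toy sketch (with a hypothetical choice of P1) illustrates:

```python
from sympy import symbols, Poly

# Two candidate measures for heff when f2 is fixed and f1 = P1 is a polynomial:
# (a) the degree of P1 = number of roots, i.e. of space-time regions;
# (b) the dimension of the extension of rationals generated by the roots.
x = symbols('x')
P1 = Poly(x**3 - 2, x)
print("degree (number of roots):", P1.degree())   # 3
# The splitting field Q(2**(1/3), omega) of x**3 - 2 has dimension 6 over
# the rationals (the order of its Galois group S3), so measures (a) and (b)
# differ already for this simple example.
```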

3. The physical interpretation of heff

Consider now the physical picture about the emergence of larger values of heff.

  1. The increase of heff means also that the Compton length ℏeff/m as the size scale of a quantum object of mass m increases. Since one expects that space-time sheets of arbitrarily large size are possible, this is very natural. In the case ℏeff=ℏgr (see this), ℏgr is proportional to the product Mm of masses so that the gravitational Compton length ℏgr/m does not depend on the "small" mass m. This would reflect the Equivalence Principle. For electromagnetic interactions one would have a similar picture with ℏeff=ℏem (see this), which is proportional to Qq, where Q and q are em charges. The same applies to color and weak interactions.

    The heff phases associated with different interactions and different particles would be at separate space-time sheets: U-shaped magnetic and electric flux tubes carrying monopole fluxes are the proposed identification. This implies a highly organized structure: "dark" particles would reside like books on library shelves labelled by the classical interactions and by the products of the corresponding charges.

  2. The increase of heff means that the unit of angular momentum increases. This in turn implies that the cyclotron energy scale is scaled up by heff/h (see the sketch after this list). This is crucial for the explanation of the findings of Blackman about the effects of ELF em fields on vertebrate brains. This assumes that the particle mass and therefore also the four-momentum remains unaffected in the scaling h→heff, or at least that their values are not larger than the ordinary ones.

    The intuitive view about the geometric origin of angular momentum (L=r× p) as something proportional to the size of the 3-surface supports this view. Angular momentum has scaling dimension 0 whereas for momentum it is -1. Also the conformal weight h has dimension 0, so the scaling should affect the maximal unit of the conformal weight. Conformal algebras and the symplectic algebra allow a hierarchy of isomorphic sub-algebras (see this), and I have proposed that this hierarchy means a hierarchy of breakings of conformal symmetry with the unit of conformal weight scaled up by an integer.

  3. What about those conserved charges, which do not relate to M4 but to CP2? What happens to the unit of electric charge? Anyons provide evidence for charge fractionation. Could charge fractionation take place quite generally? Even in M4 degrees of freedom?

    I have discussed the possibility of charge fractionation (see this). For heff=Nh0 (h0≤ h is the minimum value of heff), the charge would be distributed between M<N space-time surfaces, possibly connected by monopole flux tubes. The k:th space-time sheet would carry charge Qmax Mk/N. This would give a total charge M Qmax/N. The system would consist of fractionally charged subsystems and the total charge would be integer valued for the standard unit of charge.

    For this option, the cyclotron energy would be proportional to (Mk/N)(ℏeff/h0) and its value would be proportional to ℏeff/h0 only at the maximum. For quantum numbers other than angular momentum and conformal weight, the fractional charge would be a fraction Mk/N of the ordinary value.
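A quick numerical check of the cyclotron claim of item 2: in the endogenous magnetic field B_end = 0.2 Gauss used in TGD discussions of Blackman's findings, the cyclotron frequency of Ca2+ lands in the ELF range, and the ordinary cyclotron energy is far below the thermal energy, so a large heff/h is needed to make it significant (B_end is taken from the TGD literature; the rest is standard arithmetic):

```python
import math

e = 1.602e-19          # elementary charge, C
u = 1.661e-27          # atomic mass unit, kg
h = 6.626e-34          # Planck constant, J s

q, m = 2*e, 40*u       # Ca2+ ion
B_end = 2e-5           # endogenous field 0.2 Gauss, in Tesla

f_c = q*B_end/(2*math.pi*m)          # cyclotron frequency
E_c = h*f_c/e                        # ordinary cyclotron energy in eV
print(f"f_c(Ca2+) = {f_c:.1f} Hz")   # ~15 Hz, the ELF frequency in Blackman's experiments
print(f"E_c = {E_c:.1e} eV")         # ~6e-14 eV << thermal ~2.5e-2 eV: needs heff/h ~ 1e12
```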

Is there any concrete interpretation for the emergence of the effective value of the Planck constant?
  1. The gravitational Compton length Λgr= GM/β0= rS/2β0, where rS is the Schwarzschild radius and β0= v0/c ≤ 1 is a velocity parameter, is a natural guess for the thickness of the M4 projection of the gravitational flux tube. The particle Compton length Lc would be scaled up by the factor rS/2β0Lc: for protons and for β0=1 this would mean a scaling of ∼ 10^13.
  2. The classical interpretation would rely on the replacement of a point-like particle with a 3-surface. The large radius of the flux tube would allow large values of the classical angular momentum of the classical fields and of the orbital angular momentum of a delocalized dark particle. This could increase the effective spin unit to hgr. A similar interpretation applies in the case of the electric Planck constant hem.

    This interpretation would support the view that heff corresponds to the number of roots of (f1,f2)=(0,0) as space-time regions. The fractionally charged states would correspond to states in which a charged particle is delocalized in a finite subset of the roots.

  3. It must be noticed that many-sheetedness can be interpreted in two ways. The space-time surface can be many-sheeted over M4 or CP2. In the first case the sheets are parallel and extremely near to each other. In the second case they could correspond to parallel monopole flux tubes forming a bubble. The flux tubes could have even macroscopic distances. Elementary particles could be delocalized at the flux tubes.

4. Conservation laws in the heff changing phase transitions

How can conservation laws be satisfied in the heff changing phase transitions?

  1. How to satisfy the conservation laws in the phase transition changing the value of heff? If the value of the spin unit changes to heff, the transition must involve a process guaranteeing angular momentum conservation. What comes to mind is that the transition generates radiation, compensating for the increase of the total angular momentum in the process. This radiation could generate a state analogous to Bose-Einstein condensate. The transition could also proceed in a stepwise way from a seed and gradually increase the fractionized angular momentum unit via values Mheff/N to its maximum value heff.
  2. I have proposed the notion of N-particles to describe the macroscopic quantum states at the monopole flux tubes and applied this notion in the model of the genetic code (see this). The emergence of fractionally charged N-particles with a scaled up size and angular momentum could be accompanied by the emission of N-photons or N-gravitons to guarantee angular momentum conservation. In quantum biology, 3N-photons would make possible communications between dark genes consisting of N codons.
      See the article Answers to the questions of Vasileios Basios and Marko Manninen in Hypothesis Refinery session of Galileo Commission.

      For a summary of earlier postings see Latest progress in TGD.

      For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Friday, January 30, 2026

Answers to the questions of Vasileios Basios and Marko Manninen in Hypothesis Refinery session of Galileo Commission

The Galileo Commission (see this) is a project of the Scientific and Medical Network with the goal of helping the transition from the materialism and reductionism dominated view of science to a postmaterialistic world view, expanding science so that also consciousness, life and spirituality are accepted as aspects of reality.

There are of course very many proposals for what a post-materialistic view might be, and TGD (Topological GeometroDynamics) with the TGD inspired view of consciousness and quantum biology is one of these views. In this view, the theory of conscious experience can be seen as a generalization of quantum measurement theory based on the new quantum ontology forced by TGD.

I participated in a Hypothesis Refinery meeting of the Galileo Commission held on 27.1.2026 and talked about the TGD inspired theory of consciousness (see this). There were very interesting questions by Vasileios Basios and Marko Manninen that I received already before the meeting. Unfortunately, time allowed me to answer only some of these questions during the meeting. Therefore I decided to write an article containing the somewhat shortened questions and my responses. As always, this process stimulated fresh observations.

For instance, I formulated more precisely the crucial arguments behind the holography = holomorphy hypothesis, which implies also universality: the solution ansatz makes sense for any general coordinate invariant action constructible in terms of the induced geometry.

I discussed in detail the classical non-determinism crucial for cognition already present for 2-D minimal surfaces (soap films) (see this). Since classical non-determinism is so crucial, I added an appendix about what it means geometrically in the 2-D case and how it might generalize to the 4-D situation.

I also clarified the testable implications of the heff hypothesis in biology derivable from the explicit expressions for gravitational and electric Planck constants.

See the article Answers to the questions of Vasileios Basios and Marko Manninen in Hypothesis Refinery session of Galileo Commission or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, January 29, 2026

Can one deduce the value of the effective Planck constant for a bio-system from empirical data?

The TGD inspired view of consciousness involves the proposal that the effective Planck constant heff, identified as the dimension of an extension of rationals, serves as a measure of algebraic complexity and in this way acts as a universal IQ. The reason would be that the higher the complexity of the system, the higher its capacity to represent. Marko Manninen posed in the Galileo meeting an interesting question related to this hypothesis.

Assuming that you have a concrete biological or computational system, how do you actually compute n (dimension of extension) in a non-arbitrary way?

Consider first the notion of heff. One must start from the system and try to deduce the value of heff for that system. Assuming that one has a formula for heff in terms of the parameters of the system, one can calculate it. The gravitational and electric Planck constants, assignable to classical long range gravitational and electric fields, are in a special role. It is not clear whether there are other kinds of effective Planck constants or whether heff is always a parameter characterizing a 2-particle system (this is proposed to reflect the presence of Yangian symmetries, which are multilocal).

There are two especially important cases: hgr and hem assignable to classical gravitational and electric fields.

  1. There is an explicit formula for computing both gravitational and electric Planck constants for a pair M,m or Q,q. The products Mm and Qq appear in the formulas. It is not clear whether all values of heff are expressible as products of two charges or two masses. In most applications this assumption can be made.

    The proposal is that when the value of the coupling strength appearing in the perturbative expansion at the QFT limit is so large that the perturbative series fails to converge, a phase transition increasing the value of h to heff occurs, guaranteeing convergence since the coupling strength is scaled down by h/heff.

    One can compute hgr and hem when the velocity parameter v0/c ≤ 1 is given. The outcome conforms with the fact that an increase of M and Q means an increase of the "IQ".

  2. For the Earth, Sun, etc. the gravitational Compton length is Λgr= ℏgr/m= GM/β0, where β0 is the velocity parameter. For the Earth, β0=1 implies that Λgr equals 1/2 of the Schwarzschild radius rS= 2GM. There is no dependence on the small mass m. This reflects the Equivalence Principle. The same is true for dark cyclotron energies. This has strong consequences for biology.

    For the Earth, Λgr equals 0.5 cm, the size of a snowflake, which is a mysterious structure from the point of view of standard physics, since it can be seen as a zoomed-up version of a water molecule crystal at the atomic scale. For Mars it would be smaller by a factor of about 1/10. There is also a corresponding universal frequency, which for the Earth is in the 10 GHz range and could be an important biological frequency. These predictions are very strong and could be killer predictions. (See the numerical sketch after this list.)

  3. hem is proportional to the product Qq of charges. Qq must be so large that Qqα/β0 is larger than 1, so that the perturbation series at the QFT limit of TGD, involving powers of Qqα, fails to converge. β0=v0/c is a velocity parameter and its value equals 1 for ℏgr in the case of the Earth. For q=1, Q must be large enough.

    The DNA charge density is constant and Q is proportional to 3N, with N the number of codons. Q is therefore proportional to the length of the DNA unit considered. Genes, and DNA itself, increase in length with evolution. For DNA strand pairs Qq=Q^2 is very large! For cells, Q is proportional to the surface area S of the cell membrane. For large neurons, in particular pyramidal neurons and the trigeminal nerve, it is very large and would correspond to the highest neuronal IQ. Neurons can also fuse to larger systems; this occurs temporarily in nerve pulse transmission at synaptic contacts and increases the value of hem.

  4. What about atomic nuclei? When the nuclear charge exceeds Z=137, hem becomes larger than h even for v0/c=1. In the 1970s it was observed that at energies exceeding the Coulomb wall, particles that I interpreted as electropions were created (see this).
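As promised above, a minimal numerical sketch of the gravitational numbers quoted in item 2 (standard constants; β0=1 for the Earth as stated in the text):

```python
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
hbar = 1.055e-34     # J s
M_earth = 5.972e24   # kg
m_p = 1.673e-27      # kg, proton mass

beta0 = 1.0
Lambda_gr = G*M_earth/(beta0*c**2)   # gravitational Compton length GM/(beta0 c^2)
r_S = 2*G*M_earth/c**2               # Schwarzschild radius
L_c_proton = hbar/(m_p*c)            # ordinary (reduced) proton Compton length
print(f"Lambda_gr = {Lambda_gr*100:.2f} cm")               # ~0.44 cm, snowflake scale
print(f"Lambda_gr / (r_S/2) = {Lambda_gr/(r_S/2):.1f}")    # 1.0, i.e. half of r_S
print(f"scaling for proton ~ {Lambda_gr/L_c_proton:.1e}")  # ~2e13, the ~10^13 of the text
print(f"frequency c/Lambda_gr ~ {c/Lambda_gr:.1e} Hz")     # ~7e10 Hz, of order 10^10 Hz
```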
See the article The recent view of TGD inspired theory of consciousness and quantum biology or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.