https://matpitka.blogspot.com/

Sunday, February 08, 2026

Could TGD provide a vision about evolution at the gene level?

Could TGD provide a concrete view of evolution at the level of genes? How could new genes appear? Genetic engineering produces them artificially. Does Nature also perform genetic engineering? One can try to answer the question using the basic ideas of TGD inspired biology.

1. Key notions and ideas

  1. TGD predicts the presence of dark variants of DNA, mRNA, and tRNA associated with flux tubes, with codons realized as dark proton triplets. Amino acid sequences do not carry a constant negative charge, so dark proton triplets might not be permanently present at the corresponding monopole flux tubes.

    The hypothesis is that DNA, mRNA, and tRNA, and possibly also AA sequences, pair with their dark variants. Resonant coupling by dark 3N-photons would make this possible (N corresponds to the number of codons or AAs).

    DNA replication, transcription, and translation occur at the level of dark DNA, and the counterparts of these processes at the level of chemistry correspond to an induced shadow dynamics, a kind of mimicry.

  2. There are good reasons to expect that the dark variants of the basic information molecules, such as DNA and RNA, consisting of dark proton triplets, could be dynamical. This makes possible a kind of R&D lab. How could this be realized? The DNA double strand is not dynamical but RNA is. If the dynamics of RNA is induced from that of dark RNA, dark RNA could make possible experimentation producing new kinds of genes. The living system would evolve actively rather than by random mutations. Of course, also dark DNA could be dynamical and communicate with ordinary DNA resonantly only when the two are in corresponding quantum states.
  3. Zero energy ontology (ZEO) predicts a fundamental error correction mechanism based on a pair of "big" state function reductions (BSFRs) changing the arrow of time temporarily. When the system finds that something goes wrong, it can make a BSFR and return back in geometric time and restart. After the second BSFR the situation might be better. This would be a fundamental mechanism of learning and problem solving. And perhaps also a fundamental mechanism of evolution.
  4. ZEO inspires the question of whether the time reversals of transcription, of the splicing of RNA after transcription, and even of translation could be possible.

    What would the time reversal of the entire sequence - transcription of DNA to RNA, followed by the splicing of RNA to mRNA, followed by the translation of mRNA with the help of tRNA to an AA sequence - look like, if it is possible at all? It would give rise to a non-deterministic reverse engineering of DNA, making possible the generation of modified, more complex genes. What would be nice is that random mutations would be replaced by a genetic engineering modifying the existing genome by starting from the protein level.

  5. A weaker form of the proposal is that only the reversals of splicing and transcription are possible. Already this could make possible an active evolution at the gene level.
In the following these alternative hypotheses are studied in the TGD framework. The cautious conclusion is that the time reversals of splicing, as an attachment of introns, and of transcription are enough to induce active evolution. Also a rather detailed view of the connection between the genetic code and the cognitive hierarchies predicted by the holography = holomorphy hypothesis emerges.

2. Could one consider the reversal of the translation of DNA to proteins?

Consider now what the reverse of the process leading from DNA to proteins would look like. In the initial state, an amino acid (AA) sequence and RNA codons are present. The central dogma of biology states that information is transferred in the direction DNA → RNA → proteins, so the first guess for the answer is "No". Could ZEO help?

  1. At the first step, mRNA and tRNA would be generated from the AA sequence by reverse translation. This step seems to be the most vulnerable part of the process.
    1. The AA sequence and RNA codons would transform to mRNA and tRNA codons in a process occurring in the reversed time direction. After the first BSFR, mRNA and tRNA would appear at the "past" end of the increasing causal diamond (CD). After the second BSFR they would appear at the "future" end of the CD. They would apparently pop out of the vacuum. One could say that mRNA is reverse engineered from the AA sequence. This process is non-deterministic and 1-to-many since many mRNA codons code for a given amino acid (see the sketch following this numbered list).
    2. The process would generate tRNA. Usually tRNA is generated by transcribing an appropriate gene to pre-tRNA. After splicing and other kinds of processing, the tRNA\AA is transferred to the cytoplasm and an AA is added to give the tRNA.

      Suppose that the AA sequence can be fed to the ribosome machinery (somewhat like AA to tRNA\AA) operating in the reverse time direction. If so, the AA sequence is transformed to an mRNA sequence parallel to it by adding mRNA codons from the cytoplasm to the growing mRNA sequence and by fusing the counterparts of the RNA codons to AAs to give tRNA.

    The basic objections against reverse translation will be considered later.
  2. The second step would be the time reversal of splicing. It would add pieces of RNA to the mRNA obtained in this way. Non-determinism could be involved, and only in special cases would the outcome be the RNA produced in the transcription of the original DNA. This is also because a given AA corresponds to several RNA codons. Also this step would involve the R&D aspect giving rise to active evolution.

    This would generate new introns, which give rise to higher control levels in transcription. Could the emergence of the control levels in this way correspond to the composition f→ g∘f for g: C2→ C2 and f=(f1,f2): H→ C2 defining a space-time surface decomposing to a union of regions given by the roots f=(f1,f2)=(0,0)? For g=(g1,Id) with degree d=2 the number of roots is doubled. The prime degrees d=2 and d=3 are favoured since in these cases the roots of the iterates can be solved analytically.

    d=4 is the maximal degree allowing analytic expressions for the roots, and a good guess is that it corresponds to the letters A, T, C, G of the code, assignable to the roots of g^(4).

  3. The third step would be the time reversal of transcription, which in general does not produce DNA equivalent to the DNA coding for the AA sequence. Time reversed splicing would increase the complexity of the DNA. After this the DNA sequence would replicate to a double strand.
  4. If the dark variant of the reverse process leading from the dark AA sequence to dark DNA can occur, the last step would lead to a dark DNA strand, which would pair with ordinary DNA. Dark DNA would replicate and this would induce the replication of the ordinary DNA strands, leading to double DNA strands.
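The 1-to-many character of the reverse translation mentioned above can be quantified from the standard genetic code. Below is a minimal Python sketch (my own illustration; the degeneracies are the standard codon-per-amino-acid counts, summing to 61) counting how many distinct mRNA sequences reverse-translate to a given peptide:

```python
from math import prod

# Number of codons per amino acid in the standard genetic code (total = 61)
degeneracy = {'M': 1, 'W': 1, 'C': 2, 'D': 2, 'E': 2, 'F': 2, 'H': 2, 'K': 2,
              'N': 2, 'Q': 2, 'Y': 2, 'I': 3, 'A': 4, 'G': 4, 'P': 4, 'T': 4,
              'V': 4, 'L': 6, 'R': 6, 'S': 6}

def reverse_translations(peptide):
    """Number of distinct mRNA codon sequences coding for the peptide."""
    return prod(degeneracy[aa] for aa in peptide)

print(reverse_translations("MSL"))    # 1 * 6 * 6 = 36 candidate mRNAs
print(reverse_translations("LLLLL"))  # 6**5 = 7776: the ambiguity grows exponentially
```

The exponential growth of the number of candidate mRNAs is what would make the reverse process a genuine R&D lab rather than a mere inversion.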
3. Objections against reverse translation

Consider now the objections against the proposal.

  1. There exists no "reverse ribosome enzyme" for the reverse translation from protein to DNA. Could the time reversal occurring in BSFR come to the rescue? Could the ribosome machinery operate in the opposite time direction and in this way make possible reverse translation?

    After the first BSFR, the time reversed process would generate mRNA and tRNA from the AA sequence and the RNA codons and their counterparts in the cytoplasm; in the standard time direction this looks like a decay of mRNA.

  2. The tRNA counterpart of RNA could be called tRNA\AA. Is a gene activating its generation needed, or does the cytoplasm contain enough tRNA\AA generated in translation? If not, information transfer to DNA to activate the gene is needed.

    It deserves to be noticed that years ago I considered the possibility that originally AA sequences catalyzed the formation of RNA sequences and decayed in the process. Then the roles were reversed: the AA sequence started to be generated by the RNA sequence. The original process would have been analogous to the reverse translation.

  3. The map RNA → proteins is not invertible: this is however not a problem from the R&D point of view, since it makes possible the generation of new DNAs. Furthermore, ZEO is motivated by the small failure of classical determinism for the dynamics of space-time surfaces. Non-determinism is necessary if one wants to realize an R&D lab.
  4. Protein folding could be seen as a problem: the protein should be unfolded first, but this process occurs routinely under metabolic energy feed. Proteins also suffer modifications after translation, but even this is not a problem if one wants to make the living organism an R&D lab.
  5. Is it really possible that reverse translation would not have been observed? Could a more prosaic and realistic option be the decay of the AA sequence to AAs and the fusion of AAs and tRNA\AA to tRNA, as in the standard view of the generation of tRNA? Indeed, since the AA sequence does not carry a constant negative charge density, the heff hypothesis suggests that it is not accompanied by a dark variant consisting of dark proton triplets (as I have suggested earlier).

    The optimistic hope is that quantum coherence allows the reverse translation to occur for the entire AA sequence or a part of it, at least with some probability. If so, the RNAs combine in the process to an RNA sequence accompanied by dark RNA.

  6. One can also consider the possibility that the reverse translation is dropped away, so that one would have only the reverse transcription. Already this would be enough to produce the introns.
To sum up, the first step of the reverse process is clearly the vulnerable part of the proposal, but it is not necessary.

4. Connection of the genetic code with the hierarchy of functional compositions as representation of cognition

An attractive idea is that genes correspond to 4-surfaces as roots of polynomials g∘f defining the corresponding space-time surfaces, and that the polynomials g are obtained as, or from, functional compositions of very simple polynomials. A natural identification of the letters A, T, C, G of the genetic code would be as the roots of a polynomial of degree d=4, which still allows analytic solutions for the roots. For the sake of simplicity, one can restrict g=(g1,g2) to g=(g1,Id) in the following.

  1. Why would polynomials of degree 4, rather than prime degree 2 or 3, appear as fundamental polynomials? Could the polynomials of degree 4 have a simple Galois group, so that the functional decomposition g^(4)= h^(2)∘ i^(2) is not possible?

    The Galois group is a subgroup of S4 and the isomorphism classes for the Galois group of a quartic are S4, A4, D4 (dihedral), V4 (Klein four-group), and C4 (cyclic). A4 is non-Abelian, has V4 as a normal subgroup, and is therefore not simple. However, if A4 acts as the Galois group of a fourth order polynomial, the polynomial does not allow a decomposition g^(4)= h^(2)∘ i^(2), so that in this sense it is simple; A4 is also the only subgroup with this property. Hence A4 is unique.

  2. Remarkably, the order of A4 is 12, which is the number of vertices of the icosahedron appearing in the icosa-tetrahedral model of the genetic code (see this), in which Hamiltonian cycles through the 12 vertices of the icosahedron define a representation of the 12-note scale and the triangular faces define a bioharmony consisting of 3-chords defined by the cycle.

  3. Could DNA codon sequences correspond to an abstraction hierarchy defined by functional composites of polynomials g^(4)? Codons would correspond to polynomials obtained as functional composites g^(64)=g1^(4)∘ g2^(4)∘ g3^(4), and the codons would correspond to the 64 roots of g^(64). As a special case one has g1^(4)=g2^(4)=g3^(4), but the holography = holomorphy vision does not require this: the roots can be solved for the iterates also in the general case.

    The polynomial degree associated with g^(64) is 4^3=64. g^(64)=g1^(4)∘ g2^(4)∘ g3^(4) defines a 3-fold extension of the extension E of rationals appearing in the coefficients of g^(64) and f, so that the Galois group is not simple and allows a decomposition to normal subgroups defining a cognitive hierarchy.

  4. One should understand why codons are special units of DNA. What if one modifies g^(64) so that it becomes a simple polynomial with prime degree allowing no functional decomposition, so that a codon would represent irreducible cognition? The prime degree d=61 is the maximal degree allowing this and corresponds to the number of codons coding for amino acids; the remaining 3 codons would correspond to stop codons. Could g^(61) be obtained from g^(64) by dropping 3 monomial factors associated with the stop codons?
  5. What about genes? A gene cannot contain stop codons except at its end. Could genes with N codons correspond to functional compositions of N polynomials gi^(61), i=1,...,N, having degree 61^N and defining a space-time representative of the gene? Note that the roots of the gi^(61) are known if they are constructed in the proposed way, so that also the genetic polynomials are cognitively very special!

    The simplicity condition for the genetic polynomials could be realized by dropping out k monomial factors associated with the roots, so that the degree d=61^N-k is prime. Genes would correspond to irreducible cognitions obtained from composite cognitions by dropping k roots. Could these non-allowed roots be analogous to stop codons? What could this mean?

  6. In this framework, the addition of introns in the reverse transcription would correspond to the addition of functional composites of polynomials gk^(61) to the functional composite of the gi^(61) defining the gene. The added composites should be somehow distinguishable from the codons coding for proteins. Note that it is not quite clear whether the order of the functional compositions is the same as the linear order along the gene.

    The addition of functional composites of gk^(61) increases the degree of the polynomial associated with the gene. This could imply that it is no longer a prime polynomial. The dropping of the introns in splicing could mean a reduction back to the original prime polynomial with a simple Galois group.
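As a sanity check of the degree arithmetic used above, the following Python sketch (with sympy; the degree-4 polynomials are arbitrary placeholders, not the actual g^(4) of the model) verifies that a three-fold composite of degree-4 polynomials has degree 4^3 = 64 and that dropping 3 factors leaves the prime degree 61:

```python
from sympy import symbols, compose, degree, isprime

x = symbols('x')

# Placeholder degree-4 polynomials standing in for g1^(4), g2^(4), g3^(4)
g1 = x**4 - 2
g2 = x**4 + x - 1
g3 = x**4 - 3*x + 1

g64 = compose(compose(g1, g2), g3)   # g1(g2(g3(x))): composition multiplies degrees
assert degree(g64, x) == 4**3 == 64

# Dropping k = 3 monomial factors (the stop codons) leaves the largest prime below 64
assert isprime(64 - 3) and 64 - 3 == 61
print(degree(g64, x), isprime(61))
```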

5. Connection with p-adic length scale hypothesis

What is remarkable is that this picture relates directly to the p-adic length scale hypothesis stating that primes p near to but smaller than powers of 2 or 3 are in a central role physically. TGD leads to a generalization of p-adic number fields to their functional counterparts, for which the expansion in powers of a prime is replaced by an expansion in functional powers of polynomials with prime degree p (see this and this). By dividing out k monomial factors one can reduce the degree d=p^n to the prime degree d=p^n-k.

For p=2 or 3 the roots of the polynomials in the hierarchy can be solved analytically and these hierarchies are expected to be cognitively very special. The genetic code would provide a realization with d=4, and for codons and genes one would have prime degree. Galois's discovery would thus reflect itself in physics, biology and cognition.
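A quick numeric illustration (Python with sympy; the selection of cases is my own, while the assignment of the Mersenne prime M127 = 2^127 - 1 to the electron is the standard TGD identification):

```python
from sympy import isprime, prevprime

# Primes near but below powers of 2, as the p-adic length scale hypothesis favours
for n in (7, 13, 17, 19, 89, 107, 127):
    print(n, isprime(2**n - 1))      # all True: Mersenne primes, M127 <-> electron

# Reduction of a composite degree p^n to a prime degree p^n - k
for d in (4**3, 61**2):              # 64 codon roots; a hypothetical 2-codon gene
    p = prevprime(d)
    print(d, "->", p, "k =", d - p)  # 64 -> 61 (k=3 stop codons), 3721 -> 3719 (k=2)
```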

See the article Could life have emerged when the universe was at room temperature? or the chapter Quantum gravitation and quantum biology in TGD Universe.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, February 07, 2026

Basic ingredients of life found in conditions prevailing in the early universe

Using the James Webb Space Telescope, astronomers have detected complex organic molecules frozen in ice around a baby star called ST6, located in the Large Magellanic Cloud, a neighboring galaxy about 160,000 light-years away (see this). There is an article about the finding by Sewilo M. et al, with the title "Protostars at Subsolar Metallicity: First Detection of Large Solid-State Complex Organic Molecules in the Large Magellanic Cloud", published in Astrophysical Journal Letters (see this).

Molecules like methanol, ethanol, acetaldehyde, methyl formate, and even acetic acid, the key ingredient of vinegar, were locked inside cosmic ice. These chemicals belong to the molecules that help kick-start the chemistry needed for life. Until now, finding them in solid ice has been incredibly rare, even inside our own galaxy.

The Large Magellanic Cloud is a harsh place. It has fewer heavy elements and stronger radiation, conditions similar to the early universe, long before planets like Earth existed. Despite this, complex organic chemistry is possible there. Does this mean that the ingredients for life don't need perfect conditions, or that there is new physics involved? Even more, is this new physics universal?

It is very difficult to imagine how a complex biochemistry (biochemistry as we understand it) could have prevailed much before planets like Earth existed. New physics seems to be required, and TGD predicts it. The notion of a field body (field/magnetic body) carrying large heff phases of ordinary matter (see this, this, this, this and this) could explain how the organic molecules crucial for life could have formed in these circumstances.

  1. Water is a key element of life in TGD inspired quantum biology. Therefore the fact that the molecules were inside ice is a valuable hint. The Pollack effect (see this) occurs when water is irradiated with, say, infrared photons arriving from the Sun or some other source, now the protostar. The Pollack effect generates negatively charged regions, exclusion zones (EZs), with rather strange properties, such as the ability to kick out impurities, which seems to be in conflict with the second law of thermodynamics.

    The protons must go somewhere from the EZ, and the TGD inspired proposal is that they go to the magnetic body of the system and form a large heff phase, in many cases behaving like dark matter. These phases are however not identifiable as galactic dark matter. heff serves as a measure of the algebraic complexity of the space-time surface and also of the level of conscious intelligence, a kind of universal IQ. The magnetic body naturally controls the physics of ordinary matter.

    What matters is the energy needed to kick ordinary protons to the magnetic body: the needed energy corresponds to the energy difference between -OH and O- plus a dark proton at the magnetic body. These two states of the proton are proposed to define what might be regarded as a universal topological qubit (see this, this and this). Also the formation of organic molecules as bound states liberates binding energy and can induce the generalized Pollack effect.

  2. The formation of dark protons at the magnetic body of water could represent one of the first steps in the evolution of life, and already at this stage the dark analogs of the basic information molecules, the genetic code and metabolism could have emerged. The chemical realization of the genetic code would have emerged later.
Could the Pollack effect and the notion of the magnetic body allow us to understand the formation of the basic molecules of life found to exist in the Magellanic Cloud? The -OH group, appearing also in water molecules, is essential for the standard form of the Pollack effect, so it could be important also now.
  1. Complex alcohols can contain more than one -OH group bound to a saturated carbon atom. Simple alcohols (see this) obey the formula H-(CH2)n-OH. Both methanol (CH3)-OH and ethanol (CH3)-(CH2)-OH contain the -OH group, so that the Pollack effect is possible for them and could explain the special effects of alcohol on consciousness. Note that methane CH4 emerges from the decomposition of organic materials.
  2. Acetaldehyde (CH3)-(H-C=O) can be formed by a partial oxidation of ethanol in an exothermic reaction at temperatures 500-650 C. The reaction equation for the condensation of ethanol and O2 is 2 (CH3)-(CH2)-OH + O2 → 2 (CH3)-(H-C=O) + 2 H2O. A dehydrogenation is in question.

    One can imagine the following sketch for what might happen. At the first step, the protons of the -OH groups of the ethanols are kicked to dark protons at the magnetic body. This would induce the transformation of C-O bonds to C=O bonds, forcing C to give up the second H atom of CH2. The dark protons would drop back to ordinary protons and, together with the electrons, the two H atoms and the oxygens of O2, would form 2 water molecules.

  3. Methyl formate H-(O=C-O-CH3) can be produced in the condensation reaction (CH3)-OH + H-(O=C-OH) → H-(O=C-O-CH3) + H2O of methanol (CH3)-OH and formic acid H-(O=C-OH). Dehydration is involved.

    In the reaction, the H of methanol's -OH group must be replaced with the formyl group (C=O)-H. One can imagine that the proton of -OH is temporarily transformed to a dark proton at the magnetic body, facilitating the replacement. After that the dark proton and the -OH of H-(O=C-OH) combine to form the water molecule.

  4. Acetic acid (CH3)-(O=C-OH) is formed by the transformation CH2 → C=O occurring in the condensation reaction of ethanol and oxygen: (CH3)-(CH2)-OH + O2 → (CH3)-(O=C-OH) + H2O, involving dehydration. Also now the proton of -OH could transform to a dark proton. This should induce the replacement CH2 → C=O, the splitting of O2, and the formation of H2O. The dark proton would drop back and -OH would be regenerated.
Could the detected molecules allow us to conclude anything about the presence of more complex biomolecules, such as the sugars and riboses crucial for life?
  1. Sugars or carbohydrates (see this) involve monosaccharides with formula CnH2nOn, with n in the range 5 to 7, which have a key role in metabolism. They contain a relatively large number of -OH groups associated with a ring. C6H12O6 (fructose, galactose, glucose) has 4 -OH groups. Yeasts break down fructose, galactose and glucose to ethanol in alcoholic fermentation. More generally, alcohols such as mannitol emerge in the reduction of saccharides.
  2. TGD suggests that the liberation of metabolic energy as sugars break down to alcohols involves the transfer of dark protons to the magnetic bodies of the acceptor molecules, followed by their transformation to ordinary protons liberating the metabolic energy. This would occur in the ADP → ATP process.
  3. Ribose C5H10O5 (see this), appearing in RNA, contains 4 -OH groups, and deoxyribose C5H10O4, appearing in DNA, contains 3 -OH groups. Phosphorylated ribose appears in ADP, ATP, coenzyme A and NADH.

    Biological and chemical reduction and fermentation can produce ribitol C5H12O5, which is a sugar alcohol. When ribitol is subjected to hydroxyl radicals, C-C bonds are cleaved and formic acid (H-C=O)-(OH) appears as a decay product. Methanol was detected: could formic acid transform to methanol (CH3)-(OH) in the presence of water by the reaction (H-C=O)-(OH) + H2O → (CH3)-(OH) + O2?
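As a consistency check, the reaction equations quoted in the two lists above can be verified to be atom-balanced. A minimal Python sketch (formulas entered by hand from the reactions as written):

```python
from collections import Counter

def atoms(side):
    """Total element counts for a reaction side given as (multiplicity, formula) pairs."""
    total = Counter()
    for mult, formula in side:
        for elem, n in formula.items():
            total[elem] += mult * n
    return total

ethanol        = {'C': 2, 'H': 6, 'O': 1}   # (CH3)-(CH2)-OH
acetaldehyde   = {'C': 2, 'H': 4, 'O': 1}   # (CH3)-(H-C=O)
methanol       = {'C': 1, 'H': 4, 'O': 1}   # (CH3)-OH
formic_acid    = {'C': 1, 'H': 2, 'O': 2}   # H-(O=C-OH)
methyl_formate = {'C': 2, 'H': 4, 'O': 2}   # H-(O=C-O-CH3)
acetic_acid    = {'C': 2, 'H': 4, 'O': 2}   # (CH3)-(O=C-OH)
water          = {'H': 2, 'O': 1}
O2             = {'O': 2}

reactions = [
    ([(2, ethanol), (1, O2)], [(2, acetaldehyde), (2, water)]),
    ([(1, methanol), (1, formic_acid)], [(1, methyl_formate), (1, water)]),
    ([(1, ethanol), (1, O2)], [(1, acetic_acid), (1, water)]),
    ([(1, formic_acid), (1, water)], [(1, methanol), (1, O2)]),
]
for lhs, rhs in reactions:
    assert atoms(lhs) == atoms(rhs)  # every equation conserves C, H and O
print("all four reactions balance")
```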

To conclude, the temporary transformation of proton to dark proton at the magnetic body by Pollack effect could be involved with all these reactions.

See the article Could life have emerged when the universe was at room temperature? or the chapter Quantum gravitation and quantum biology in TGD Universe.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Friday, February 06, 2026

Quantum geometry and M8-H duality

Quantum geometry and quantum metric are very interesting notions (see this). What does one mean by a quantum metric?
  1. In condensed matter physics, many-particle states are labelled by particle momenta. The Berry phase is associated with a U(1) connection in this momentum space. The quantum metric means an extension to a Kähler metric involving both the U(1) connection and a metric. It defines a distance between quantum states labelled by momenta at the 2-D Fermi surface.
  2. The quantum metric in the condensed matter sense is defined in momentum space, or rather at the Fermi surface, rather than in Hilbert space. Here I disagree with the claim of the reference (see this). At the Hilbert space level, geometrization would replace the flat Kähler metric with a curved metric and replace global linear superposition with its local variant.
  3. What is essential for this interpretation is that the momentum space (the 2-D Fermi sphere) is endowed with a Kähler geometry. Both momentum space and position space are geometrized.
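For concreteness, here is a minimal numeric sketch (my own illustration, not from the text) of the quantum geometric tensor Q_ij = ⟨∂_i u|(1-|u⟩⟨u|)|∂_j u⟩ for a generic two-band Hamiltonian H(k) = d(k)·σ; its real part is the quantum metric and -2 Im Q_ij the Berry curvature:

```python
import numpy as np

def ground_state(kx, ky):
    # A hypothetical two-band model H(k) = d(k)·sigma (a stand-in, not from the post)
    d = np.array([np.sin(kx), np.sin(ky), 1.0 - np.cos(kx) - np.cos(ky)])
    sx = np.array([[0, 1], [1, 0]]); sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]])
    _, vecs = np.linalg.eigh(d[0] * sx + d[1] * sy + d[2] * sz)
    v = vecs[:, 0]                                # lower-band eigenvector
    phase = v[np.argmax(np.abs(v))]
    return v * phase.conjugate() / abs(phase)     # fix the arbitrary U(1) phase

def quantum_geometric_tensor(kx, ky, eps=1e-5):
    u = ground_state(kx, ky)
    du = [(ground_state(kx + eps, ky) - ground_state(kx - eps, ky)) / (2 * eps),
          (ground_state(kx, ky + eps) - ground_state(kx, ky - eps)) / (2 * eps)]
    P = np.eye(2) - np.outer(u, u.conj())         # projector off the occupied state
    Q = np.array([[du[i].conj() @ P @ du[j] for j in range(2)] for i in range(2)])
    return Q.real, -2 * Q.imag                    # quantum metric, Berry curvature

g, F = quantum_geometric_tensor(0.3, 0.7)
print("quantum metric:\n", g, "\nBerry curvature F_xy:", F[0, 1])
```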
In the TGD framework, M8-H duality implies something analogous but not equivalent.
  1. Space-times are 4-surfaces X4 in H=M4× CP2: this hypothesis geometrizes standard model interactions and solves the energy problem of general relativity. The holography = holomorphy hypothesis leads to an exactly solvable classical theory.
  2. M8 serves as the analog of an 8-D momentum space for H=M4× CP2, and Y4 generalizes the notion of momentum space. One can define for the points of M8 a Minkowskian inner product xy as the real part of their octonionic product (see the sketch after this list).
  3. Space-time surfaces X4 in H=M4× CP2 have as their M8 duals 4-surfaces Y4 in M8, related by M8-H duality. Y4 generalizes the notion of the Fermi sphere and can be regarded as a 4-D space of 4-momenta representing the dispersion relation (see this and this).

    The associativity condition for the tangent space of Y4 defines the number theoretic dynamics of Y4, and local G2 transformations allow us to construct general solutions Y4 from very simple basic solutions Y4(f) determined by the roots f(o)=0 of analytic functions f(o) with real coefficients, which can be restricted to an extension of rationals. Polynomials and rational functions f appear as important special cases and form hierarchies, since the basic arithmetic operations and functional composition produce new solutions.

  4. How to define the analog of quantum geometry?
    1. The values of the functions f(o) can be regarded as octonions such that the imaginary part is proportional to the radial octonion unit and thus allows an interpretation as an ordinary imaginary unit. For two tangent vectors x, y of the quaternionic Y4 the real part of xy defines a Minkowskian inner product. The product xy is a quaternion and could be seen as a quaternionic analog of the Kähler form. An analog of a quaternion structure would be in question. Could this define the number theoretic version of quantum geometry?
    2. CP2 allows a quaternion structure but does not allow a hyper-Kähler structure: a hyper-Kähler structure with 4 covariantly constant quaternionic units, defined by the metric and 3 covariantly constant Kähler forms, is not possible for CP2. The quaternion structure would define an induced quaternionic structure in X4. Could one induce the metric and spinor curvature of X4 to Y4?

      The quaternionic tangent spaces of Y4 are labelled by the points of CP2 and the corresponding CP2 point can be taken as a local coordinate of M8. The metric of Y4 could be induced from that of X4 by M8-H duality. In the M4⊂ M8 degrees of freedom, the inversion map M4→ M4⊂ H, motivated by the Uncertainty Principle, defines the M8-H duality.

      There are singularities at which the CP2 point associated with Y4 is not unique. In the case of CP2 type extremals, the CP2 points form a 3-D surface X3 whose points correspond to a single point y in Y4: Y4 has a coordinate singularity at y, and y blows up to X3 in H.

    Could these two approaches give equivalent quaternionic quantum metrics in Y4?
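The statement that the real part of the octonionic product xy defines a Minkowskian inner product can be checked directly. Below is a minimal sketch (my own illustration; plain Cayley-Dickson octonions with real coefficients, with no attempt to model the complexified octonions of M8): Re(e0 e0) = +1 while Re(ei ei) = -1 for the seven imaginary units, giving the signature (1,7).

```python
import numpy as np

def conj(x):
    x = x.copy()
    x[1:] = -x[1:]          # conjugation negates all imaginary components
    return x

def cd_mult(x, y):
    """Cayley-Dickson product: reals -> complex -> quaternions -> octonions."""
    n = len(x)
    if n == 1:
        return np.array([x[0] * y[0]])
    a, b = x[:n//2], x[n//2:]
    c, d = y[:n//2], y[n//2:]
    # (a, b)(c, d) = (ac - d*b, da + bc*)
    return np.concatenate([cd_mult(a, c) - cd_mult(conj(d), b),
                           cd_mult(d, a) + cd_mult(b, conj(c))])

basis = np.eye(8)           # e0 (real unit) and the imaginary units e1..e7
print([cd_mult(e, e)[0] for e in basis])   # [1, -1, -1, ..., -1]: signature (1,7)

# Octonions are non-associative: (e1 e2) e4 = -e1 (e2 e4)
e1, e2, e4 = basis[1], basis[2], basis[4]
print(cd_mult(cd_mult(e1, e2), e4))
print(cd_mult(e1, cd_mult(e2, e4)))
```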

    See the article "Does M8-H duality reduce to local G2 symmetry?" or the chapter with the same title.

    For a summary of earlier postings see Latest progress in TGD.

    For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, January 31, 2026

A critical view of heff hypothesis

The heff hypothesis is one of the key elements of TGD and of the TGD inspired theory of consciousness. One can raise several critical questions related to it.

1. Identification of heff

Consider first the identification of heff.

  1. The idea is that the TGD Universe is theoretician friendly (see this and this). The value of heff increases when the perturbative QFT, as a long range limit of TGD, ceases to converge. Since coupling strengths are proportional to 1/ℏeff, the increase of heff guarantees convergence. In TGD, quantum coherence is predicted to be present in all scales, and this kind of perturbation theory is possible even when the interacting systems have macroscopic sizes so that the masses and charges are very large.
  2. This predicts that heff has a spectrum and depends on the product of the charges appearing in a given coupling strength. Since in TGD classical fields define the vertices (see this), this suggests that one can assign an heff to the gravitational, electric, and perhaps also color and weak coupling strengths; heff is proportional to a product of charges and is a two-particle parameter, unlike the ordinary Planck constant.

    The proposed mathematical interpretation is that the 2-particle character reflects a Yangianization of the basic symmetries (see this and this). Yangian symmetries do not reduce to single particle symmetries but can also act on pairs, triplets, ... of particles. One would have poly-local symmetries, so that the charge unit heff would depend on the quantum numbers of the particles in the vertex. The monopole flux tube connections between particle-like 3-surfaces are a natural candidate for inducing the Yangianization. Indeed, the monopole flux tubes carry the large heff phases.

  3. Perturbative QFT is assumed to apply at the QFT limit, when the many-sheeted space-time is replaced with a single region of M4, the sums of the induced gauge potentials of the space-time sheets define the gauge fields, and the sum of the CP2 parts of the induced metrics defines the gravitational field.

    The objection is that the QFT approach does not apply at the fundamental level of TGD: there is no path integral. Is there any way to replace this argument with an argument holding true at the fundamental level?

2. Number theoretic vision and heff

Number theoretic vision leads to a possible identification of heff.

  1. The number theoretic vision leads to the proposal that heff characterizes the algebraic complexity of the many-sheeted space-time surface. If the space-time surface is defined in terms of the roots of an analytic function pair (f1,f2), the extension of rationals appearing in the coefficients of the fi would define heff as its dimension, and heff would not depend on the form of the fi.

    The number of roots, i.e. the number of space-time regions appearing as solutions of (f1,f2)=(0,0), would also be a natural candidate for the value of heff, in particular if the fi are polynomials.

    One can generalize the ordinary Galois group so that it acts as flows permuting the different roots of (f1,f2)=(0,0). In this case the number of roots could define heff. Certainly it is a measure of the complexity.

    Suppose that f2 is kept constant and f1=P1 is a polynomial. In this case the dimension of the algebraic extension associated with P1 could determine the value of heff. Also the degree of P1, giving the number of roots, can be considered.

3. The physical interpretation of heff

Consider now the physical picture about the emergence of larger values of heff.

  1. The increase of heff means also that the Compton length ℏeff/m, as a size scale for a quantum object of mass m, increases. Since one expects that space-time sheets of arbitrarily large size are possible, this is very natural. In the case ℏeff=ℏgr (see this), the gravitational Compton length, with ℏgr proportional to the product Mm of the masses, does not depend on the "small" mass m. This reflects the Equivalence Principle. For electromagnetic interactions one would have a similar picture with ℏeff=ℏem (see this), which is proportional to Qq, where Q and q are em charges. The same applies to color and weak interactions.

    The heff phases associated with different interactions and different particles would reside at separate space-time sheets: U-shaped magnetic and electric flux tubes carrying monopole fluxes are the proposed identification. This implies a highly organized structure: "dark" particles would reside like books on library shelves labelled by the classical interactions and by the products of the corresponding charges.

  2. The increase of heff means that the unit of angular momentum increases. This in turn implies that the cyclotron energy scale is scaled up by heff/h. This is crucial for the explanation of the findings of Blackman about the effects of ELF em fields on vertebrate brains (see the numeric sketch after this list). This assumes that the particle mass, and therefore also the four-momentum, remains unaffected in the scaling h→heff, or at least that their values are not larger than the ordinary ones.

    The intuitive view of the geometric origin of angular momentum (L=r× p) as something proportional to the size of the 3-surface supports this view. Angular momentum has scaling dimension 0, whereas for momentum it is -1. Also the conformal weight h has dimension 0, so that the scaling should affect the maximal unit of conformal weight. Conformal algebras and the symplectic algebra allow hierarchies of isomorphic sub-algebras (see this) and I have proposed that this hierarchy corresponds to a hierarchy of breakings of conformal symmetry in which the unit of conformal weight is scaled up by an integer.

  3. What about the conserved charges which relate not to M4 but to CP2? What happens to the unit of electric charge? Anyons provide evidence for charge fractionation. Could charge fractionation take place quite generally, even in M4 degrees of freedom?

    I have discussed the possibility of charge fractionation (see this). For heff=Nh0 (h0≤ h is the minimum value of heff), the charge would be distributed between M<N space-time surfaces, possibly connected by monopole flux tubes. The k:th space-time sheet would carry a charge Qmax Mk/N, giving a total charge MQmax/N. The system would consist of fractionally charged subsystems and the total charge would be integer valued in the standard unit of charge.

    For this option, the cyclotron energy would be proportional to (Mk/N)(ℏeff/h0), and its value would be proportional to ℏeff/h0 only at the maximum. For quantum numbers other than angular momentum and conformal weight, the fractional charge would be a fraction Mk/N of the ordinary value.
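A minimal numeric sketch of the cyclotron scaling discussed in item 2 (Python; B = 0.2 Gauss is the "endogenous" magnetic field used in TGD models of the Blackman experiments, and the value of heff/h is an arbitrary illustrative choice):

```python
import math

e = 1.602176634e-19    # elementary charge (C)
u = 1.66053906660e-27  # atomic mass unit (kg)
h = 6.62607015e-34     # Planck constant (J s)

B = 0.2e-4             # endogenous field 0.2 Gauss = 2e-5 T (TGD model input)
q, m = 2 * e, 40 * u   # Ca2+ ion, central in Blackman's ELF findings

f_c = q * B / (2 * math.pi * m)
print(f"Ca2+ cyclotron frequency: {f_c:.1f} Hz")  # ~15 Hz, in the ELF range

E = h * f_c / e                                   # cyclotron energy in eV for heff = h
print(f"E = {E:.2e} eV")                          # ~6e-14 eV, far below thermal ~0.025 eV

# With heff/h ~ 1e13 (illustrative), the energy rises above the thermal energy,
# which is the TGD argument for why ELF fields can affect vertebrate brains.
print(f"scaled E = {E * 1e13:.2e} eV")
```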

Is there any concrete interpretation for the emergence of the effective value of the Planck constant?
  1. The gravitational Compton length Λgr= GM/β0= rS/2β0, where rS is the Schwarzschild radius and β0=v0/c≤ 1 is a velocity parameter, is a natural guess for the thickness of the M4 projection of the gravitational flux tube. The particle Compton length Lc would be scaled up by the factor rS/2β0Lc: for protons and for β0=1 this would mean a scaling of ∼ 10^13.
  2. The classical interpretation would rely on the replacement of a point-like particle with a 3-surface. The large radius of the flux tube would bring in the classical angular momentum of the classical fields and the orbital angular momentum of a delocalized dark particle. This could increase the effective spin unit to hgr. A similar interpretation applies in the case of the electric Planck constant hem.

    This interpretation would support the view that heff corresponds to the number of roots of (f1,f2)=(0,0) appearing as space-time regions. The fractionally charged states would correspond to states in which a charged particle is delocalized in a finite subset of the roots.

  3. It must be noticed that many-sheetedness can be interpreted in two ways: the space-time surface can be many-sheeted with respect to M4 or with respect to CP2. In the first case the sheets are parallel and extremely near to each other. In the second case they could correspond to parallel monopole flux tubes forming a bundle. The flux tubes could be even at macroscopic distances from each other. Elementary particles could be delocalized at the flux tubes.

4. Conservation laws in the heff changing phase transitions

How can conservation laws be satisfied in the heff changing phase transitions?

  1. How can the conservation laws be satisfied in a phase transition changing the value of heff? If the value of the spin unit changes to heff, the transition must involve a process guaranteeing angular momentum conservation. What comes to mind is that the transition generates radiation compensating for the increase of the total angular momentum in the process. This radiation could generate a state analogous to a Bose-Einstein condensate. The transition could also proceed in a stepwise way from a seed and gradually increase the fractionized angular momentum unit via the values Mheff/N to its maximal value heff.
  2. I have proposed the notion of N-particles to describe the macroscopic quantum states at the monopole flux tubes and applied this notion in the model of the genetic code (see this). The emergence of fractionally charged N-particles with a scaled-up size and angular momentum could be accompanied by the emission of N-photons or N-gravitons guaranteeing angular momentum conservation. In quantum biology, 3N-photons would make possible communications between dark genes consisting of N codons.
See the article Answers to the questions of Vasileios Basios and Marko Manninen in Hypothesis Refinery session of Galileo Commission.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Friday, January 30, 2026

Answers to the questions of Vasileios Basios and Marko Manninen in Hypothesis Refinery session of Galileo Commission

The Galileo Commission (see this) is a Scientific and Medical Network project with the goal of helping the transition from the materialism and reductionism dominated view of science to a post-materialistic world view, expanding science so that also consciousness, life and spirituality are accepted as aspects of reality.

There are of course very many proposals for what a post-materialistic view might be, and TGD (Topological GeometroDynamics) with the TGD inspired view of consciousness and quantum biology is one of these views. In this view, the theory of conscious experience can be seen as a generalization of quantum measurement theory based on the new quantum ontology forced by TGD.

I participated in a Hypothesis Refinery meeting of the Galileo Commission held on 27.1.2026 and talked about the TGD inspired theory of consciousness (see this). There were very interesting questions by Vasileios Basios and Marko Manninen that I received already before the meeting. Unfortunately, the available time allowed me to answer only some of these questions during the meeting. Therefore I decided to write an article containing the somewhat shortened questions and my responses. As always, this process stimulated fresh observations.

For instance, I formulated more precisely the crucial arguments behind the holography = holomorphy hypothesis, implying also universality: the solution ansatz makes sense for any general coordinate invariant action constructible in terms of the induced geometry.

I discussed in detail the classical non-determinism crucial for cognition, already present for 2-D minimal surfaces (soap films) (see this). Since classical non-determinism is so crucial, I added an appendix about what it means geometrically in the 2-D case and how it might generalize to the 4-D situation.

I also clarified the testable implications of the heff hypothesis in biology derivable from the explicit expressions for gravitational and electric Planck constants.

See the article Answers to the questions of Vasileios Basios and Marko Manninen in Hypothesis Refinery session of Galileo Commission or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, January 29, 2026

Can one deduce the value of the effective Planck constant for a bio-system from empirical data?

The TGD inspired view of consciousness involves the proposal that the effective Planck constant heff, identified as the dimension of an extension of rationals, serves as a measure of algebraic complexity and in this way serves as a universal IQ. The reason would be that the higher the complexity of the system, the higher its capacity to represent. Marko Manninen posed in the Galileo meeting an interesting question related to this hypothesis.

Assuming that you have a concrete biological or computational system, how do you actually compute n (dimension of extension) in a non-arbitrary way?

Consider first the notion of heff. One must start from the system and try to deduce the value of heff for that system. Assuming that one has a formula for heff in terms of the parameters of the system, one can calculate it. The gravitational and electric Planck constants, assignable to classical long range gravitational and electric fields, are in a special role. It is not clear whether there are other kinds of effective Planck constants or whether heff is always a parameter characterizing a 2-particle system (this is proposed to reflect the presence of Yangian symmetries, which are multilocal).

There are two especially important cases: hgr and hem assignable to classical gravitational and electric fields.

  1. There is an explicit formula for computing both the gravitational and the electric Planck constant for a pair M,m or Q,q. The products Mm and Qq appear in the formulas. It is not clear whether all values of heff are expressible in terms of products of two charges or two masses. In most applications this assumption can be made.

    The proposal is that when the value of a coupling strength appearing in the perturbative expansion at the QFT limit is so large that the perturbative series fails to converge, a phase transition increasing the value of h to heff occurs and guarantees the convergence, since the coupling strength is scaled down by h/heff.

    One can compute hgr and hem when the velocity parameter β0=v0/c≤ 1 is given. The outcome conforms with the idea that an increase of M or Q means an increase of the "IQ".

  2. For the Earth, the Sun, etc., the gravitational Compton length is Λgr= ℏgr/m= GM/β0, where β0 is the velocity parameter. For the Earth, β0=1 implies that Λgr equals 1/2 of the Schwarzschild radius rS= 2GM. There is no dependence on the small mass m: this reflects the Equivalence Principle. The same is true for the dark cyclotron energies. This has strong consequences for biology.

    For the Earth, Λgr equals .5 cm, the size of a snowflake, which is a mysterious structure from the point of view of standard physics, since it can be seen as a zoomed-up version of the water molecule crystal structure in the atomic scale. On Mars it would be smaller by a factor of 1/10. There is also the corresponding universal frequency, which for the Earth is in the 10 GHz range and could be an important biological frequency (see the numeric sketch after this list). These predictions are very strong and could be killer predictions.

  3. hem is proportional to a product Qq of charges, which must be so large that Qqα/β0 is larger than 1, so that the perturbation series, making sense at the QFT limit of TGD and involving the parameter Qqα, fails to converge. β0=v0/c is a velocity parameter; in the case of the Earth its value equals 1 for ℏgr. For q=1, Q must be large enough.

    The DNA charge density is constant and Q is proportional to 3N, N the number of codons. Q is therefore proportional to the length of the DNA unit considered. Genes increase in length with evolution, and so does DNA itself. For DNA strand pairs Q2 is very large! For cells, Q is proportional to the surface area S of the cell membrane. For large neurons, in particular pyramidal neurons and the trigeminal nerve, it is very large and would correspond to the highest neuronal IQ. Neurons can also fuse to larger systems; this occurs temporarily in nerve pulse transmission at synaptic contacts and increases the value of hem.

  4. What about atomic nuclei? When the nuclear charge exceeds Z=137, hem becomes larger than h even for v0/c=1. In the 1970s it was observed that in collisions at energies exceeding the Coulomb wall, particles that I have interpreted as electropions were created (see this).
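A minimal numeric check of the orders of magnitude in item 2 (Python; the formula Λgr = GM/(β0 c^2) is from the text, while the identification of the universal frequency as c/(2πΛgr) is my reading of it):

```python
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
M_earth = 5.972e24  # kg
beta0 = 1.0         # velocity parameter v0/c, = 1 for the Earth per the text

# Gravitational Compton length GM/(beta0 c^2) = r_S/(2 beta0), independent of m
L_gr = G * M_earth / (beta0 * c**2)
print(f"Earth Lambda_gr = {100 * L_gr:.2f} cm")  # ~0.44 cm, snowflake size scale

f = c / (2 * math.pi * L_gr)                     # corresponding frequency
print(f"frequency ~ {f / 1e9:.1f} GHz")          # ~10.8 GHz, the '10 GHz range'

# Proton Compton length vs Lambda_gr: scaling of order 1e13, as stated earlier
L_proton = 1.32e-15                              # proton Compton wavelength h/(m c), m
print(f"scaling ~ {L_gr / L_proton:.1e}")
```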
See the article The recent view of TGD inspired theory of consciousness and quantum biology or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, January 28, 2026

About the justification for the holography = holomorphy vision and related ideas

The recent view of Quantum TGD (see this, this, this, this, this and this) has emerged from several mathematical discoveries.

  1. The holography = holomorphy principle (HH) reduces the classical field equations in the Minkowskian regions of the space-time surface to algebraic equations: the space-time surfaces are the roots f=(f1, f2) = (0, 0) of two functions, which are analytic functions of the 4 generalized complex coordinates of H=M4× CP2, involving 3 complex coordinates and one hypercomplex coordinate of M4.
  2. The space-time surface as an analog of a Bohr orbit is a minimal surface, which means that it generalizes the notion of a geodesic line in the replacement of a point-like particle with a 3-surface, and that the non-linear analogs of massless field equations are satisfied by the H coordinates, so that the analog of particle-wave duality is realized geometrically.
  3. The minimal surface property holds true independently of the classical action as long as the action is general coordinate invariant and constructible in terms of the induced geometry. This strongly suggests the existence of a number theoretic description in which the value of the action, as an analog of an effective action, becomes a number theoretic invariant.

  4. The minimal surface property fails at 3-D singularities at which the derivatives of the embedding space coordinates are discontinuous and the components of the second fundamental form have delta function divergences, so that its trace, a local acceleration and an analog of the Higgs field, diverges.

    These discontinuities give rise to defects of the smooth structure, and in the 4-D case an exotic smooth structure emerges and makes possible the description of fermion pair creation (and boson emission) although the fermions are free particles. Fermions and also 3-surfaces turn backwards in time. This is possible only in dimension D=4.

One can criticize this picture as too heuristic and as lacking explicit examples. I am grateful to Marko Manninen, a member of our Zoom group, for raising this question. In the following I try to make it clear that the outcome is extremely general and depends only on the very general aspects of what generalized holomorphy means. I hope that colleagues would realize that the TGD approach to theoretical physics is based on general mathematical principles and refined conceptualization: this approach is the diametric opposite of, say, the attempt to understand physics by performing massive QCD lattice calculations. Philosophical and mathematical thinking, taking empirical findings seriously, dominates rather than pragmatic model building and heavy numerics.

HH principle and the solution of field equations

Consider first how HH leads to an exact solution of the field equations in the Minkowskian regions of the space-time surface (the solution can be found also in the Euclidean regions).

  1. The partial differential equations, which are extremely non-linear, reduce by generalized HH to algebraic equations: the field equations involve contractions of holomorphic tensors of different types, which vanish identically when one has roots of f=(f1,f2)=(0,0). f1 and f2 are generalized analytic functions of the generalized complex coordinates of H.

    This means a huge simplification, since Riemannian geometry reduces to algebraic geometry and partial differential equations reduce to local algebraic equations.

  2. There are two kinds of induced fields: the induced metric and the induced gauge potentials, the Kähler gauge potential in the case of the Kähler action. The variation with respect to the induced metric gives to the field equations a contraction of two holomorphic 2-tensors. The variation with respect to the gauge potential gives a contraction of two holomorphic vector fields. The contractions are between tensors/vectors of different types and vanish identically.

    1. Consider the metric first. The contraction is between the energy momentum tensor, of type (1,-1)+(-1,1), and the second fundamental form, of type (1,1)+(-1,-1). Here 1 refers to a complex coordinate and -1 to its conjugate as a tensor index. These contractions vanish identically.

      The vanishing of the trace of the second fundamental form occurs independently of the action and gives a minimal surface, except at the singularities.

    2. Consider next the induced gauge potentials. In this case one has a contraction of vector fields of different types (of type (1) and (-1)) and also now the outcome vanishes. In the case of a more general action, such as volume + Kähler action, one also has a contraction of the light-like Kähler current with a light-like vector field, which vanishes too. The light-like Kähler current is non-vanishing for what I call "massless extremals". This miracle reflects the enormous power of the generalized conformal invariance.
    3. For more general actions these results probably hold true as well, but I have no formal proof. If higher derivatives are involved, one obtains higher derivatives of the second fundamental form, which are of type (1,1,...,1), contracted with tensors which have mixed indices.

      Actions containing higher derivatives might be excluded by the requirement that only delta function singularities of the trace of the second fundamental form, defining the analog of the Higgs field, are possible.

    4. The result has an analog already in ordinary electrodynamics in 2-D systems: the real and imaginary parts of an analytic function satisfy the field equations except at poles and cuts, which define the point charges and line charges. The same occurs in string models.
    Concerning explicit examples: I used the 8 years after my thesis to study exact solutions of the field equations of TGD. The solutions that I found were essentially action independent and had an interpretation as minimal surfaces.

    Singularities as analogs of poles of analytic functions

    Consider now the singularities.

    1. The singularities are 3-surfaces at which the generalized analyticity fails for (f1,f2): they are analogs of the poles and zeros of analytic functions. At the 3-D singularities the derivatives of the H coordinates are discontinuous and the trace of the second fundamental form has a delta function singularity. This gives rise to an edge.

      The singularities are analogous to the poles of analytic functions and correspond to vertices and also to loci of non-determinism serving as seats of conscious memories.

    2. At the singularities the entire action contributes to the field equations, which express the conservation laws of the classical isometry charges. Note that the trace of the second fundamental form defines a generalized acceleration and behaves like a generalization of the Higgs field with respect to symmetries.

      Outside the singularities the analog of massless geodesic motion with a vanishing acceleration occurs and the induced fields are formally massless. At the singularities the acceleration is infinite, so that the particles perform 8-D Brownian motion.

    3. The singularities as edges correspond to defects of the standard smooth structure: edges of the space-time surface analogous to the frames of a soap film. The dependence of the loci of the singularities on the classical action is expected from the condition that the field equations stating the conservation laws hold true for the entire action.

      It is possible that the exotic smooth structure is at least partially characterized by the classical action, which has an interpretation as an effective action. For a mere volume action, singularities might not be possible: if this is true, it would correspond to the analog of a massless free theory without fermion pair creation. In this case, the trace of the second fundamental form should vanish although its components could have delta function divergences.

      This makes it possible to interpret fermionic Feynman diagrams geometrically as Brownian motion of 3-D particles in H (see this, this and this). In particular, fermion pair creation (and also boson emission) corresponds to a 3-surface and fermion lines turning backwards in time.

    4. The physical interpretation generalizes the interpretation in classical field theories, where charges are point-like. In massless field theories, charges as singularities serve as sources of fields. The trace of the second fundamental form vanishes almost everywhere (the minimal surface property), stating that the analog of the charge density, serving as a source of the massless field defined by the H coordinates, vanishes except at the singularities. The generalized Higgs field defines the source concentrated at the 3-D singularities.
    5. Classical non-determinism is an essential assumption. Already 2-D minimal surfaces allow non-determinism, and soap films spanned by a given frame provide a basic example. The geometric conditions under which non-determinism is expected are known and can be generalized to the 4-D context. Google's LLM gives detailed information about the non-determinism in the 2-D case and I have discussed the generalization to the 4-D case (see this and this).

See the article What could 2-D minimal surfaces teach about TGD? or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

What could the failure of classical determinism mean for 4-D minimal surfaces?

In TGD, holography = holomorphy principle predicts that space-time surfaces are analogous to Bohr orbits for particles identified as 3-surfaces and defining the holographic data.
  1. The Bohr orbits turn out to be 4-D minimal surfaces irrespective of the action principle, as long as the action is general coordinate invariant and constructible in terms of the induced geometry. 2-D minimal surfaces are non-deterministic in the sense that the same frame can span several minimal surfaces. One can expect that also in the 4-D case non-determinism is unavoidable in the sense that the Bohr orbit-like 4-surfaces are spanned by 3-D "frames" as loci of non-determinism.
  2. At these 3-surfaces the minimal surface property fails: the derivatives of the embedding space coordinates are discontinuous and the second fundamental form diverges. Also the generalized holomorphy fails. The failure of the smooth structure caused by the edge can in the 4-D case give rise to an exotic smooth structure.
  3. One can also say that the singularities act as sources for the analog of massless field equations defined by the vanishing of the trace of the second fundamental form; this justifies the identification of the singularities as vertices in the construction of the scattering amplitudes.
  4. In the TGD inspired theory of consciousness, classical non-determinism gives rise to geometric correlates of cognition and intentionality, and the loci of non-determinism serve as seats of memories. Free will is not in conflict with classical determinism, and the basic problem of quantum measurement theory finds a solution in zero energy ontology.
  5. The proposal is that the classical non-determinism corresponds to the non-determinism of p-adic differential equations. In fact, TGD leads to a generalization of p-adic number fields to their functional counterparts, which can be mapped to ordinary p-adic number fields by a category-theoretical morphism. This generalization allows us to understand the p-adic length scale hypothesis, which is central in TGD.
The study of the non-determinism of 2-D minimal surfaces could serve as a role model in the attempts to understand the non-determinism of 4-D minimal surfaces. What can one say about the geometric aspects of classical non-determinism in the case of 2-D minimal surfaces? Here Google Gemini provides help: one obtains a surprisingly detailed summary and it is also possible to pose further questions. In the following I summarize briefly what Google says.

1. The classical non-determinism of 2-D minimal surfaces

The 2-D minimal surface spanned by a given frame (a closed, non-self-intersecting, simple wire loop, or a collection of them, in 3-D space) is generally non-unique. While the existence of at least one minimal surface (a surface of zero mean curvature, i.e. with a vanishing trace of the second fundamental form) is guaranteed, a single frame can bound multiple, and sometimes even a continuum of, distinct minimal surfaces. Here is a breakdown of the uniqueness of minimal surfaces.

  1. Many frames, particularly non-convex ones, can span several distinct minimal surfaces. A classic example is two coaxially aligned circles, which can bound two different catenoid surfaces (a wider and a narrower one) or two separate disks (see the numeric sketch after the summary below).
  2. In certain cases, a given curve can bound a continuous family of minimal surfaces, a phenomenon often observed in physical soap film experiments.
  3. Uniqueness is achieved only under specific conditions.
    1. Convex projection: If a closed Jordan curve Γ has a one-to-one orthogonal projection onto a convex planar curve, then Γ bounds a unique minimal disk, which is a graph over that plane.
    2. Small total curvature: A smooth Jordan curve with a total curvature less than or equal to 4π bounds a unique minimal disk.
    3. Sufficiently close to a plane: A C2-Jordan curve that is sufficiently close to a plane curve in the C2-topology bounds a unique minimal disk.
  4. Stability vs. absolute uniqueness: A minimal surface is "stable" if small perturbations increase its area. Often a frame may bound multiple minimal surfaces, but only one is the absolute, global minimum, while the others are unstable or local minima.
  5. Plateau's problem: The classical problem asks for the surface of minimum area, which exists but is not always unique.
Summary: While soap film experiments often produce a single, stable minimal surface, the boundary value problem can have multiple solutions. Uniqueness is the exception, not the rule, and depends strongly on the geometric "convexity" of the framing wire.
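
The two-catenoid example of item 1 is easy to check numerically. The sketch below (my construction; the values of R and h are arbitrary) solves the boundary condition a cosh(h/a) = R for a catenoid r(z) = a cosh(z/a) spanning two coaxial rings of radius R at z = ±h; below the critical separation h/R ≈ 0.663 there are two distinct solutions bounded by the same frame.

```python
# Two catenoids r(z) = a*cosh(z/a) spanning the same pair of coaxial rings.
import numpy as np
from scipy.optimize import brentq

R, h = 1.0, 0.5                      # ring radius and half-separation

def f(a):
    return a * np.cosh(h / a) - R    # vanishes when the catenoid fits the rings

# f is positive at both ends of the interval and has a single minimum in
# between, so the two roots can be bracketed on either side of the minimum.
a_grid = np.linspace(1e-3, R, 10_000)
i_min = int(np.argmin(f(a_grid)))
a_narrow = brentq(f, a_grid[0], a_grid[i_min])    # narrow neck, unstable
a_wide = brentq(f, a_grid[i_min], a_grid[-1])     # wide neck, stable
print(f"two catenoids for the same frame: a = {a_narrow:.4f} and a = {a_wide:.4f}")
```

For larger separations f stays positive, no catenoid exists, and the soap film jumps to the third solution mentioned above: the two separate disks.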

2. What could one conclude about the space-time surfaces as minimal surfaces?

The above Google summary helps in making guesses about the naive generalization of these findings to the 4-D situation.

2.1 How unique is the minimal surface spanning a given frame?

One can go to Google and pose the question "How unique is the minimal surface spanning a given frame?". One obtains a nice summary and can ask additional questions. The following considerations are inspired by this question.

  1. In the case of ordinary minimal surfaces, it is enough that there exists a plane such that the minimal surface is representable as a graph of a map to this plane and the projection of the frame to the plane is convex, i.e. any two of its points can be connected by a line segment inside the curve defined by the projection.

    An essential assumption is that the 2-D surface is locally representable as a graph over a plane. Curves whose plane projection has a non-convex interior (not all pairs of interior points can be connected by a line segment in the interior) can also lead to a failure of determinism. The cusp catastrophe, defined in terms of the roots of a polynomial of degree 3, is a 2-D example of non-convexity; note that the cusp is 3-sheeted. A numerical sketch of the cusp follows this list.

  2. Consider the general meaning of convexity for a d-dimensional frame spanning a minimal surface of dimension d+1 in a space of dimension d+2. One considers the projection of the d-dimensional frame to a (d+1)-dimensional subspace analogous to the plane. For ordinary minimal surfaces the frame has dimension d=1 and the space has dimension 3. For Riemannian manifolds, straight lines can be identified as geodesic lines and planes could be generalized to geodesic submanifolds.
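
The cusp catastrophe of item 1 can be made concrete with a few lines of code (a toy illustration of mine, not from the text): the real roots of x^3 + a*x + b = 0 form one or three sheets over the (a,b) control plane, and the 3-sheeted, non-convex region is bounded by the cusp curve 4a^3 + 27b^2 = 0.

```python
# Sheets of the cusp x^3 + a*x + b = 0 over the (a, b) control plane.
import numpy as np

def n_sheets(a, b):
    # discriminant of x^3 + a*x + b; positive <=> three distinct real roots
    return 3 if -4 * a**3 - 27 * b**2 > 0 else 1

for a, b in [(-3.0, 0.0), (-3.0, 1.0), (1.0, 0.0), (-3.0, 3.0)]:
    roots = np.roots([1.0, 0.0, a, b])
    real = np.sort(roots[np.abs(roots.imag) < 1e-7].real)
    print(f"a={a:+.1f} b={b:+.1f}: {n_sheets(a, b)} sheet(s), real roots {np.round(real, 3)}")
```
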
The convexity criterion has a straightforward analog when the embedding space is the 8-D H = M4 × CP2 and the minimal surface is a 4-D space-time surface X4.
  1. The projection of the 3-D frame, defining the holographic data (or of a locus of non-determinism defining secondary holographic data), to some 4-D submanifold analogous to the plane should be convex. The surface should also be representable as a graph of a map from the 4-D manifold to H. One could consider projections of the frame X3 to all geodesic submanifolds G4 of dimension 4: the candidates are G4 ∈ {M4, E3 × S1, E2 × S2}, where S1 and S2 are geodesic submanifolds of CP2.

    In the physically most interesting cases the CP2 projection has dimension at least 2, so that E2 × S2 is of special interest. Could one choose the G4 to be holomorphic submanifolds? If hypercomplex holomorphy does not matter, this would leave only the 2-D M4 projection. Is it enough to consider G4 = E2 × S2? The situation would resemble that for ordinary minimal surfaces. Could one consider the convexity of the E2 and S2 projections separately?

  2. Convexity means that any two points of X3 can be connected by geodesic lines. Should these be space-like, or could also light-like partonic orbits serve as loci of non-determinism? What about 3-surfaces inside CP2 representing a wormhole contact at which two parallel Minkowskian space-time sheets meet? A small numerical convexity check is sketched after this list.
  3. The convexity criterion should be satisfied for all frames defined by 3-D singularities assumed to be given.
  4. If the 3-D frame corresponding to the roots of f1=0, f2=0 is many-sheeted over G4, the projection contains several overlapping regions corresponding to the roots, so that one does not have a single convex region. This is one source of non-determinism.
  5. Note: If the M4 projection is bounded by a genus g>0 surface, the M4 projection is not convex. Now, however, CP2 comes to the rescue. Consider as an example a cosmic string X1 × S2, where X1 is convex and space-like. If the CP2 projection is a g>0 surface, the situation is the same. Could this relate to the instability of the higher genera? Would it be induced by classical non-determinism?
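
As a concrete counterpart of the convexity criterion used above, the following sketch (my construction; the test polygons are arbitrary) checks whether a closed planar projection of a frame is convex by comparing its area to that of its convex hull; the two agree exactly when the enclosed region is convex.

```python
# Convexity test for a closed planar projection given as an ordered polygon.
import numpy as np
from scipy.spatial import ConvexHull

def polygon_area(pts):
    """Shoelace area of a simple polygon with vertices in order."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def is_convex_projection(pts, tol=1e-9):
    # in 2-D, ConvexHull.volume is the enclosed area
    return abs(polygon_area(pts) - ConvexHull(pts).volume) < tol

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
dented = np.array([[0, 0], [2, 0], [2, 2], [1, 0.5], [0, 2]], float)
print(is_convex_projection(square), is_convex_projection(dented))  # True False
```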

2.2 What could be the role of generalized holomorphy?

The failure of holomorphy implies singularities identified as loci of auxiliary holographic data and as seats of non-determinism.

  1. Often the absolute minimum is unique. A degeneracy of the absolute minimum would mean an additional symmetry. In the case of the Bohr orbits of electrons in an atom, this kind of additional symmetry corresponds to rotational symmetry, implying that the orbit can be in any plane going through the origin.
  2. How does this relate to the conditions f=(f1,f2)=0, which have as their roots the space-time surfaces as generalized complex submanifolds of H? Each solution corresponds to a collection of roots of these conditions and each root corresponds to a space-time region. Each root defines a region of some geodesic submanifold of H, defining local generalized complex coordinates of X4 as a subset of the corresponding H coordinates in this region. Separate solutions would be independent collections of the roots. Two or more roots coincide at the 3-D interfaces between the roots. The cusp catastrophe gives a good 2-D illustration.
  3. 3-D singularities as analogs of frames correspond to the frames of 4-D "soap films". Since the derivatives are discontinuous at them, the singularities correspond to edges of the space-time surface and would define defects of the standard smooth structure. This would give rise to exotic smooth structures.
  4. The non-determinism should correspond to the branching of the space-time surfaces at the singularities X3, giving rise to alternative Bohr orbits. There is an analogy with bifurcations, in particular with shock waves, and the bifurcations could correspond to the underlying 2-adicity and relate to the p-adic length scale hypothesis.

    There would be several kinds of edges of X4 associated with the same X3. The non-representability of the singularity X3 as a graph P(X3) → X3, where P(X3) is the projection of the singularity to G4, should be essential. Also the non-convexity of the region bounded by P(X3) in G4 matters.

  5. The volumes of the minimal surfaces spanning a given frame need not be the same, and the absolute minimum of the volume, or more generally of the classical action, could play a special role. The original proposal indeed was that the absolute minima are physically special.

    If dynamical symmetries are involved, the extrema can be degenerate. The minimal surfaces are analogs of Bohr orbits, and in atomic physics the Bohr orbits are degenerate because they can lie in an arbitrary plane: this corresponds to the choice of the quantization axis of angular momentum.

    Could the symmetries of the 3-D "frames" induce this kind of degeneracy? Could Galois groups act as symmetries? This would give a connection between the view of cognition as an outcome of classical non-determinism and the number theoretic view of cognition relying on Galois groups. A toy illustration of a Galois group permuting roots follows this list.
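
As a toy illustration of the last point (mine, not the author's), one can watch a Galois group element act on the roots that would label the space-time regions: for f(x) = x^3 - 2 complex conjugation belongs to the order-6 Galois group and permutes the root set without changing it.

```python
# Complex conjugation as a Galois group element permuting the roots of x^3 - 2.
import numpy as np

roots = np.roots([1.0, 0.0, 0.0, -2.0])   # the three cube roots of 2
conjugated = np.conj(roots)
# match each conjugated root to the original set: the set is preserved,
# the real root is fixed and the two complex roots are swapped
perm = [int(np.argmin(np.abs(r - roots))) for r in conjugated]
print(np.round(roots, 4))
print("conjugation acts as the permutation", perm)
```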

See the article What could 2-D minimal surfaces teach about TGD? or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.