https://matpitka.blogspot.com/

Wednesday, January 28, 2026

About the justification for the holography = holomorphy vision and related ideas

The recent view of Quantum TGD (see this, this, this, this, this and this) has emerged from several mathematical discoveries.

  1. Holography = holomorphy principle (HH) reduces the classical field equations in the Minkowskian regions of the space-time surface to the algebraic root conditions f=(f1,f2)=(0,0) for two functions f1 and f2, which are analytic functions of the 4 generalized complex coordinates of H=M4×CP2: 3 complex coordinates and one hypercomplex coordinate of M4 (a toy sketch of the root conditions follows this list).
  2. The space-time surface as an analog of a Bohr orbit is a minimal surface. This generalizes the notion of a geodesic line when the point-like particle is replaced by a 3-surface, and the H coordinates satisfy non-linear analogs of massless field equations, so that the analog of particle-wave duality is realized geometrically.
  3. The minimal surface property holds true independently of the classical action as long as the action is general coordinate invariant and constructible in terms of the induced geometry. This strongly suggests the existence of a number theoretic description in which the value of the action, as an analog of effective action, becomes a number theoretic invariant.

  4. The minimal surface property fails at 3-D singularities at which the derivatives of the embedding coordinates are discontinuous and the components of the second fundamental form have delta function divergences, so that its trace, which is a local acceleration and an analog of the Higgs field, diverges.

    These discontinuities give rise to defects of the smooth structure, and in the 4-D case an exotic smooth structure emerges. This makes possible a description of fermion pair creation (boson emission) although the fermions are free particles: fermions and also 3-surfaces turn backwards in time. This is possible only in dimension D=4.
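The following toy sketch illustrates what the root conditions mean in practice. It is only a minimal sketch: the ansatz f1 = ξ1 - u·h(w), f2 = ξ2 - w^n and the concrete choices of h and n are my illustrative assumptions (of the type also used later in this blog), not something fixed by the principle itself.

```python
# A minimal toy sketch of the root conditions (f1, f2) = (0, 0); the ansatz
# f1 = xi1 - u*h(w), f2 = xi2 - w**n and the choices of h and n are
# illustrative assumptions, not fixed by the holography = holomorphy principle.
import sympy as sp

u, w, xi1, xi2 = sp.symbols('u w xi1 xi2')
n = 3
h = 1/(1 + w**2)              # any analytic function of w would do

f1 = xi1 - u*h
f2 = xi2 - w**n

# Solving (f1, f2) = (0, 0) gives the CP2 coordinates as a graph over (u, w):
sol = sp.solve([f1, f2], [xi1, xi2], dict=True)[0]
print(sol)                    # -> {xi1: u/(w**2 + 1), xi2: w**3}

# The key point: the root depends only on the holomorphic coordinates (u, w),
# never on their conjugates (v, wbar), which is what makes the holomorphic
# tensor contractions in the field equations vanish identically.
```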

One can criticize this picture as too heuristic and as lacking explicit examples. I am grateful to Marko Manninen, a member of our Zoom group, for raising this question. In the following I try to make it clear that the outcome is extremely general and depends only on very general aspects of what generalized holomorphy means. I hope that colleagues would realize that the TGD approach to theoretical physics is based on general mathematical principles and refined conceptualization: this approach is the diametric opposite of, say, the attempt to understand physics by performing massive QCD lattice calculations. Philosophical and mathematical thinking, taking empirical findings seriously, dominates rather than pragmatic model building and heavy numerics.

H-H principle and the solution of field equations

Consider first how H-H leads to an exact solution of the field equations in Minkowskian regions of the space-time surface (the solution can be found also in Euclidean regions).

  1. The partial differential equations, which are extremely non-linear, reduce by generalized holomorphy to algebraic conditions: the field equations become contractions of holomorphic tensors of different type, which vanish identically on the roots of f=(f1,f2)=(0,0). Here f1 and f2 are generalized analytic functions of the generalized complex coordinates of H.

    This means a huge simplification since the Riemannian geometry reduces to algebraic geometry and partial differential equations reduce to local algebraic equations.

  2. There are two kinds of induced fields: the induced metric and the induced gauge potentials, in particular the Kähler gauge potential for the Kähler action. The variation with respect to the induced metric contributes to the field equations a contraction of two holomorphic 2-tensors. The variation with respect to the gauge potential gives a contraction of two holomorphic vector fields. In both cases the contractions are between tensors/vectors of different types and vanish identically.

    1. Consider the metric first. The contraction is between the energy momentum tensor, of type (1,-1)+(-1,1), and the second fundamental form, of type (1,1)+(-1,-1). Here 1 refers to a complex coordinate and -1 to its conjugate as a tensor index. These contractions vanish identically (a worked index check follows this list).

      The vanishing of the trace of the second fundamental form occurs independently of the action and gives a minimal surface except at the singularities.

    2. Consider next the induced gauge potentials. In this case one has a contraction of vector fields of different type (of type (1) and (-1)) and also now the outcome vanishes. In the case of a more general action, such as volume + Kähler action, one also has a contraction of the light-like Kähler current with a light-like vector field, which vanishes too. The light-like Kähler current is non-vanishing for what I call "massless extremals". This miracle reflects the enormous power of generalized conformal invariance.
    3. For more general actions these results probably hold true as well, but I have no formal proof. If higher derivatives are involved, one obtains higher derivatives of the second fundamental form, which are of type (1,1,...,1), contracted with tensors which have mixed indices.

      Actions containing higher derivatives might be excluded by the requirement that only delta function singularities for the trace of the second fundamental form defining the analog of the Higgs field are possible.

    4. The result has an analog already in ordinary electrodynamics in 2-D systems: the real and imaginary parts of an analytic function satisfy the field equations except at poles and cuts, which define the point charges and line charges. The same occurs in string models.
    Concerning explicit examples, I spent 8 years after my thesis studying exact solutions of the field equations of TGD \cite{all/class,prext}. The solutions that I found were essentially action independent and had an interpretation as minimal surfaces.
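The type argument used in the list above can be written out as a schematic index check. This is only my bookkeeping sketch: overall factors (induced metric, metric determinant, Kähler form) are suppressed, and the indices i, ī denote generalized complex coordinates and their conjugates.

```latex
% Schematic bookkeeping of the vanishing contractions (factors suppressed).
\begin{align*}
&T^{\alpha\beta}:\ \text{only } T^{i\bar{j}},\,T^{\bar{i}j}\neq 0,\qquad
 H^{k}{}_{\alpha\beta}:\ \text{only } H^{k}{}_{ij},\,H^{k}{}_{\bar{i}\bar{j}}\neq 0
 \;\Rightarrow\;
 T^{\alpha\beta}H^{k}{}_{\alpha\beta}
   = T^{i\bar{j}}H^{k}{}_{i\bar{j}} + T^{\bar{i}j}H^{k}{}_{\bar{i}j} = 0 ,\\[4pt]
&j^{\alpha}:\ \text{only } j^{i}\neq 0,\qquad
 X_{\alpha}:\ \text{only } X_{\bar{i}}\neq 0
 \;\Rightarrow\;
 j^{\alpha}X_{\alpha} = j^{i}X_{i} + j^{\bar{i}}X_{\bar{i}} = 0 .
\end{align*}
```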

    Singularities as analogs of poles of analytic functions

    Consider now the singularities.

    1. The singularities are 3-surfaces at which the generalized analyticity fails for (f1,f2): they are analogs of the poles and zeros of analytic functions. At the 3-D singularities the derivatives of the H coordinates are discontinuous and the trace of the second fundamental form has a delta function singularity. This gives rise to an edge.

      Singularities are analogous to poles of analytic functions and correspond to vertices and also to loci of non-determinism serving as seats of conscious memories.

    2. At singularities the entire action contributes to the field equations which express conservation laws of classical isometry charges. Note that the trace of the second fundamental form defines a generalized acceleration and behaves like a generalization of the Higgs field with respect to symmetries.

      Outside singularities the analog of massless geodesic motion with a vanishing acceleration occurs and the induced fields are formally massless. At singularities there is an infinite acceleration so that particles perform 8-D Brownian motion.

    3. Singularities correspond to defects of the standard smooth structure: edges of the space-time surface analogous to the frames of a soap film. The dependence of the loci of the singularities on the classical action is expected from the condition that the field equations stating the conservation laws hold true for the entire action.

      It is possible that the exotic smooth structure is at least partially characterized by the classical action, which has an interpretation as an effective action. For a mere volume action singularities might not be possible: if this is true, it would correspond to the analog of a massless free theory without fermion pair creation. In this case the trace of the second fundamental form should vanish although its components would have delta function divergences.

      This makes it possible to interpret fermionic Feynman diagrams geometrically as Brownian motion of 3-D particles in H (see this, this and this). In particular, fermion pair creation (and also boson emission) corresponds to 3-surfaces and fermion lines turning backwards in time.

    4. The physical interpretation generalizes the interpretation in classical field theories, where charges are point-like. In massless field theories, charges as singularities serve as sources of fields. The trace of the second fundamental form vanishes almost everywhere (the minimal surface property), which states that the analog of the charge density, serving as the source of the massless field defined by the H coordinates, vanishes except at the singularities. The generalized Higgs field defines the source, concentrated at the 3-D singularities.
    5. Classical non-determinism is an essential assumption. Already 2-D minimal surfaces allow non-determinism, and soap films spanned by a given frame provide a basic example. The geometric conditions under which non-determinism is expected are known and can be generalized to the 4-D context. Google's LLM gives detailed information about the non-determinism in the 2-D case and I have discussed the generalization to the 4-D case elsewhere (see this and this).

    See the article What could 2-D minimal surfaces teach about TGD? or the chapter with the same title.

    For a summary of earlier postings see Latest progress in TGD.

    For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

What could the failure of classical determinism mean for 4-D minimal surfaces?

In TGD, holography = holomorphy principle predicts that space-time surfaces are analogous to Bohr orbits for particles identified as 3-surfaces and defining the holographic data.
  1. The Bohr orbits turn out to be 4-D minimal surfaces irrespective of the action principle as long as it is general coordinate invariant and constructible in terms of the induced geometry. 2-D minimal surfaces are non-deterministic in the sense that the same frame can span several minimal surfaces. One can expect that also in the 4-D case non-determinism is unavoidable in the sense that the Bohr orbit-like 4-surfaces are spanned by 3-D "frames" as loci of non-determinism.
  2. At these 3-surfaces the minimal surface property fails: the derivatives of the embedding space coordinates are discontinuous and the second fundamental form diverges. Also the generalized holomorphy fails. The failure of the smooth structure caused by the edge can in the 4-D case give rise to an exotic smooth structure.
  3. One can also say that the singularities act as sources for the analog of massless field equations defined by the vanishing of the trace of the second fundamental form, and this justifies the identification of the singularities as vertices in the construction of the scattering amplitudes.
  4. In the TGD inspired theory of consciousness, classical non-determinism gives rise to geometric correlates of cognition and intentionality and the loci of non-determinism serve as memory seats. Free will is not in conflict with classical determinism and the basic problem of quantum measurement theory finds a solution in zero energy ontology.
  5. The proposal is that the classical non-determinism corresponds to the non-determinism of p-adic differential equations. In fact, TGD leads to a generalization of p-adic number fields to their functional counterparts, and these can be mapped to p-adic number fields by a category-theoretical morphism. This generalization allows us to understand the p-adic length scale hypothesis, which is central in TGD.
The study of the non-determinism of 2-D minimal surfaces could serve as a role model in the attempts to understand the non-determinism of 4-D minimal surfaces. What can one say about the geometric aspects of classical non-determinism in the case of 2-D minimal surfaces? Here Google Gemini provides help: one obtains a surprisingly detailed summary and it is also possible to ask further questions. In the following I summarize briefly what Google says.

1. The classical non-determinism of 2-D minimal surfaces

The 2-D minimal surface spanned by a given frame (a closed, non-intersecting, simple wire loop or collection of them in 3D space) is generally non-unique. While the existence of at least one minimal surface (a surface of zero mean curvature with vanishing trace of the second fundamental form) is guaranteed, a single frame can bound multiple, and sometimes even a continuum of, distinct minimal surfaces. Here is a breakdown of the uniqueness of minimal surfaces.

  1. Many frames, particularly non-convex ones, can span several distinct minimal surfaces. A classic example is two coaxially aligned circles, which can bound two different catenoid surfaces (a wider and a narrower one) or two separate disks (a numerical sketch of this example follows after the summary below).
  2. In certain cases, a given curve can bound a continuous family of minimal surfaces, a phenomenon often observed in physical soap film experiments.
  3. Uniqueness is achieved only under specific conditions.
    1. Convex projection: If a closed Jordan curve Γ has a one-to-one orthogonal projection onto a convex planar curve, then Γ bounds a unique minimal disk, which is a graph over that plane.
    2. Small total curvature: A smooth Jordan curve with a total curvature less than or equal to 4π bounds a unique minimal disk.
    3. Sufficiently close to a plane: A C2-Jordan curve that is sufficiently close to a plane curve in the C2-topology bounds a unique minimal disk.
  4. Stability vs. absolute uniqueness: A minimal surface is "stable" if small perturbations increase its area. Often a frame may bound multiple minimal surfaces, but only one is the absolute, global minimum, while the others are unstable or local minima. Plateau's problem: the classical problem asks for the surface of minimum area, which exists but is not always unique.
Summary: While soap film experiments often produce a single, stable minimal surface, the boundary value problem can have multiple solutions. Uniqueness is the exception, not the rule, and depends strongly on the geometric "convexity" of the framing wire.
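A minimal numerical sketch of the catenoid example quoted above: the boundary condition for a catenoid spanning two coaxial circles has either two solutions or none, so the same frame can bound a wide and a narrow catenoid (in addition to the two-disk Goldschmidt solution). The radius and separation values below are illustrative choices.

```python
# Two coaxial circles of radius R at heights z = +-d can bound two distinct
# catenoids r(z) = c*cosh(z/c), or none if the circles are too far apart.
import numpy as np
from scipy.optimize import brentq

R = 1.0          # radius of both boundary circles
d = 0.4          # circles sit at z = +d and z = -d

def boundary_mismatch(c):
    # A catenoid r(z) = c*cosh(z/c) fits the frame iff c*cosh(d/c) = R.
    return c * np.cosh(d / c) - R

# The mismatch has a single minimum in c; a root on either side of it gives a
# catenoid spanning the same frame.
c_min = d / 1.1997  # minimizer of c*cosh(d/c), from coth(x) = x with x = d/c
narrow = brentq(boundary_mismatch, 0.05, c_min)   # unstable, narrow neck
wide   = brentq(boundary_mismatch, c_min, 2.0)    # stable, wide neck

for name, c in [("narrow catenoid", narrow), ("wide catenoid", wide)]:
    # Area of the catenoid between z = -d and z = +d.
    area = np.pi * c * (2 * d + c * np.sinh(2 * d / c))
    print(f"{name}: neck radius c = {c:.4f}, area = {area:.4f}")
print("two disks (Goldschmidt): area =", 2 * np.pi * R**2)
```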

2. What could one conclude about the space-time surfaces as minimal surfaces?

The above Google summary helps to make guesses about the naive generalization of these findings in the 4-D situation.

2.1 How unique is the minimal surface spanning a given frame?

One can go to Google and pose the question "How unique is the minimal surface spanning a given frame?". One obtains a nice summary and can ask additional questions. The following considerations are inspired by this question.

  1. In the case of ordinary minimal surfaces, it is enough that there exists a plane such that the minimal surface is representable as the graph of a map to it and the projection of the frame to the plane is convex, i.e. any two of its interior points can be connected by a line segment inside the curve defined by the projection.

    An essential assumption is that the 2-D surface is representable locally as a graph over a plane. Curves whose plane projection has a non-convex interior (not all interior points can be connected by a segment within the interior) can also lead to a failure of determinism. The cusp catastrophe, defined in terms of the roots of a polynomial of degree 3, is a 2-D example of non-convexity. Note that the cusp is 3-sheeted (a small sketch follows after this list).

  2. Consider the general meaning of convexity for an object of dimension d projected to a subspace of dimension d+1. For ordinary minimal surfaces the object is the frame of dimension d=1, the ambient space has dimension 3, and the projection target is a plane. For Riemannian manifolds, straight lines can be identified as geodesic lines and planes can be generalized to geodesic submanifolds.
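A small sketch of the cusp catastrophe mentioned above: over the (a,b) control plane the real roots of x^3 + a*x + b = 0 form a surface that is 3-sheeted inside the cusp region and 1-sheeted outside it, so the graph-over-a-plane picture fails inside the cusp. The sample points are arbitrary illustrative choices.

```python
# Count the real roots (sheets) of x^3 + a*x + b over the (a, b) plane.
import numpy as np

def real_root_count(a, b, tol=1e-9):
    roots = np.roots([1.0, 0.0, a, b])          # roots of x^3 + a x + b
    return int(np.sum(np.abs(roots.imag) < tol))

for a, b in [(-3.0, 0.0), (-3.0, 1.0), (1.0, 1.0), (-1.0, 2.0)]:
    inside = 4 * a**3 + 27 * b**2 < 0           # cusp (discriminant) condition
    print(f"a={a:+.1f}, b={b:+.1f}: {real_root_count(a, b)} real sheet(s), "
          f"inside cusp region: {inside}")
```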
The convexity criterion has a straightforward analog when the embedding space is the 8-D H=M4×CP2 and the minimal surface is a 4-D space-time surface X4.
  1. The projection of the 3-D frame, defining the holographic data or a locus of non-determinism defining secondary holographic data, to some 4-D submanifold analogous to the plane should be convex. The surface should also be representable as the graph of a map from the 4-D manifold to H. One could consider projections of the frame X3 to all geodesic submanifolds G4 of dimension D=4. The candidates are G4 ∈ {M4, E3×S1, E2×S2}, where S1 and S2 are geodesic submanifolds of CP2.

    In the physically most interesting cases the CP2 projection has at least dimension 2, so that E2×S2 is of special interest. Could one choose G4 to be a holomorphic submanifold? If hypercomplex holomorphy does not matter, this would leave only a 2-D M4 projection. Is it enough to consider G4 = E2×S2? The situation would resemble that for ordinary minimal surfaces. Could one consider the convexity of the E2 and S2 projections?

  2. Convexity means that the points of X3 can be connected by geodesic lines. Should they be space-like, or could also light-like partonic orbits serve as loci of non-determinism? What about 3-surfaces inside CP2 representing a wormhole contact at which two parallel Minkowskian space-time sheets meet?
  3. The convexity criterion should be satisfied for all frames defined by 3-D singularities assumed to be given.
  4. If the 3-D frame corresponding to the roots of f1=0, f2=0 is many-sheeted over G4, the projection contains several overlapping regions corresponding to the roots. One does not have a single convex region. This is one source of non-determinism.
  5. Note: if the projection to M4 is bounded by a genus g>0 surface, the M4 projection is not convex. Now however CP2 comes to the rescue. Consider as an example a cosmic string X1×S2, where X1 is convex and space-like. If the CP2 projection is a g>0 surface, the situation is the same. Could this relate to the instability of higher genera? Would it be induced by classical non-determinism?

2.2 What could be the role of generalized holomorphy?

The failure of holomorphy implies singularities identified as loci of auxiliary holomorphic data and seats of non-determinism.

  1. Often the absolute minimum is unique. The degeneracy of the absolute minimum would mean additional symmetry. This kind of additional symmetry in the case of Bohr orbits of electrons in an atom corresponds to rotational symmetry implying that the orbit can be in any plane going through the origin.
  2. How does this relate to the conditions f=(f1,f2)=0, which have as their roots the space-time surface as a generalized complex submanifold of H? Each solution corresponds to a collection of roots of these conditions and each root corresponds to a space-time region. Each root defines a region of some geodesic submanifold of H defining local generalized complex coordinates of X4 as a subset of the corresponding H coordinates in this region. Separate solutions would be independent collections of the roots. Two or more roots coincide at the 3-D interfaces between the roots. The cusp catastrophe gives a good 2-D illustration.
  3. 3-D singularities as analogs of frames correspond to the frames of 4-D "soap films". Since the derivatives are discontinuous, the singularities correspond to edges of the space-time surface and would define defects of the standard smooth structure. This would give rise to exotic smooth structures.
  4. The non-determinism should correspond to the branching of the space-time surfaces at the singularities X3, giving rise to alternative Bohr orbits. There is an analogy with bifurcations, in particular with shock waves, and the bifurcations could correspond to an underlying 2-adicity and relate to the p-adic length scale hypothesis.

    There would be several kinds of edges of X4 associated with the same X3. The non-representability of the singularity X3 as a graph P(X3)→ X3, where P(X3) is the projection of the singularity to G4 should be essential. Also the non-convexity of the region bounded by P(X3) in G4 matters.

  5. The volumes of the minimal surfaces spanning a given frame need not be the same, and the absolute minimum of the volume, or more generally of the classical action, could be in a special role. The original proposal was indeed that the absolute minima are physically special.

    If dynamical symmetries are involved, the extrema can be degenerate. The minimal surfaces are analogs of Bohr orbits, and in atomic physics Bohr orbits have a degeneracy due to the fact that they can lie in an arbitrary plane: this corresponds to the choice of the quantization axis of angular momentum.

    Could the symmetries of the 3-D "frames" induce this kind of degeneracy? Could Galois groups act as symmetries? This would give a connection between the view of cognition as an outcome of classical non-determinism and the number theoretic view of cognition relying on Galois groups.

See the article What could 2-D minimal surfaces teach about TGD? or the chapter ZEO, Adelic Physics, and Genes.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Tuesday, January 27, 2026

Jordan algebras and TGD

Lawrence B. Crowell had an FB posting related to the never-ending black hole wars mentioning Jordan algebras. Jordan algebras are certainly a nice algebraic structure. If I understand correctly, LBC believes that Jordan algebras at the level of the operator algebras of quantum theory could lead to a new quantum theory. The Jordan algebra has a product which is simply the symmetrized product of operators, (AB+BA)/2 (see this). It was proposed to solve the ordering problems of the operator formalism of quantum theory. There are however heavy interpretational problems if one wants to interpret the operator algebra in Hilbert space as a Jordan algebra.
  1. One starts from the algebra of observables as in the operator formalism, which is plagued by normal ordering problems implying non-uniqueness. In quantum field theories this approach leads to problems with infinities.
  2. The situation changes if the observables form a Lie algebra. This allows one to get rid of the normal ordering difficulties and makes sense also in the infinite-D case. In TGD, the "world of classical worlds" (WCW) for the Bohr orbits of particles as 3-surfaces satisfying the holography = holomorphy principle has a geometry only if it has an infinite-D group of isometries. In the case of loop spaces this fixes the geometry completely; the Lie algebra is infinite-dimensional, meaning an infinite number of observables. In TGD the same is expected to be true.
  3. The locality of linear superposition for Jordan algebras suggests asking whether it could make sense to assign the Jordan algebra structure to the tangent space of an infinite-D manifold rather than to the Hilbert space of quantum states. In TGD this space would be the "world of classical worlds" (WCW), whose tangent space has the structure of a Hilbert space.
  4. The Jordan algebra is non-associative and there seems to be no interpretation in terms of symmetries. The author of the above article suggests that Jordan algebras define what he calls a generalized projective geometry (a small numerical illustration of the Jordan product follows after this list).
  5. The Jordan algebra approach forces one to give up the linearity of quantum theory and to replace it with something called local linearity. I am really worried.
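A small numerical illustration of the Jordan product A∘B = (AB+BA)/2 mentioned above: it is commutative, generally non-associative, but obeys the Jordan identity (A∘B)∘(A∘A) = A∘(B∘(A∘A)). The 3×3 Hermitian matrices used here are arbitrary illustrative choices.

```python
# Jordan product on matrices: commutative, non-associative, Jordan identity.
import numpy as np

rng = np.random.default_rng(0)

def hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

def jordan(A, B):
    return (A @ B + B @ A) / 2

A, B, C = hermitian(3), hermitian(3), hermitian(3)

print("commutative:", np.allclose(jordan(A, B), jordan(B, A)))
print("associator (A∘B)∘C - A∘(B∘C), max |entry|:",
      np.max(np.abs(jordan(jordan(A, B), C) - jordan(A, jordan(B, C)))))
print("Jordan identity holds:",
      np.allclose(jordan(jordan(A, B), jordan(A, A)),
                  jordan(A, jordan(B, jordan(A, A)))))
```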
What about the situation in TGD?
  1. The tangent space of WCW can be given a Hilbert space structure. Could the Jordan algebra be assigned to the tangent space geometry of WCW rather than to the space of quantum states, and be a useful companion for the Lie algebra structure, say in the case of WCW and its infinite-D Lie algebra of symmetries?
  2. In TGD, super-conformal/super-symplectic symmetries imply that one can assign to the Lie-algebra generators of the isometries of WCW super-counterparts carrying fermion number as WCW gamma matrices and their hermitian conjugates defining the WCW spinor structure. The super generators, as elements of the Jordan algebra, are contractions of the isometry generators with gamma matrices. I discovered this already 35 years ago (see this, this, and this).
  3. The WCW gamma matrices satisfy anticommutation relations and the anticommutator corresponds to the Jordan product giving the WCW metric. The Jordan algebra structure (Clifford algebra structure) would be associated with the fermionic sector of the state space and the Lie-algebra structure with its bosonic sector.
  4. This means a geometrization of supersymmetries: no superspace is introduced and no Majorana spinors are needed. The WCW gamma matrices can be expressed as linear superpositions of the fermionic oscillator operators for the second quantized free spinor fields of H=M4×CP2: all problems related to the quantization of fermions appearing in QFTs in curved backgrounds are avoided.
For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, January 22, 2026

Considerations inspired by LLM summaries of TGD articles

Tuomas Sorakivi prepared LLM summaries about some articles related to TGD, in particular the article (see this) in which the relation of the holography = holomorphy vision to elliptic surfaces and the notion of partonic orbits are considered.

The discussions and the LLM summaries inspired considerations related to the general view about the definition of the partonic orbits, involving the condition g_4^{1/2}=0 (the vanishing of the determinant of the induced 4-metric) assuming generalized holomorphy, and to the details of the model for the pairs of space-time sheets connected by wormhole contacts.

What does conjugation mean for generalized complex coordinates?

The generalized complex structure involves a hypercomplex coordinate, and this brings in non-trivial delicacies related to the counterpart of generalized complex conjugation.

  1. The expression for g_uv involves conjugation of the CP2 coordinates ξ^k. It is important to note that conjugation means

    ξ^k(u,w) → ξ̄^k(v,w̄) .

    This is because v is the hypercomplex conjugate of u. In the conditions f_i=0, only the hypercomplex and therefore real coordinate u occurs in the functions f_i(u,w,ξ^1,ξ^2), i=1,2.

  2. What is the interpretation of the fact that the hypercomplex conjugation u → v is involved? The presented model for a pair of space-time sheets is that, for example, the upper sheet has the active coordinate u and the lower one has v. Conjugation would take one from the "upper" space-time sheet to the "lower" one if both are involved. This would indicate that the sheets are related by generalized complex conjugation. This is not a necessary assumption, but it is possible and I have suggested it.
  3. This formal interpretation seems strange, but ordinary complex conjugation works like this: x+iy, y ≥ 0, corresponds to the upper half plane and x-iy, y ≥ 0, to the lower half plane. Conjugation takes the upper half plane to the lower one. On the real axis y=0 the half planes meet.

    So the two 4-D Minkowskian space-time sheets would be generalizations of the half planes. The real axis would correspond to the Euclidean 3-surface inside the CP2 type extremal: it is not the same as the partonic orbit (the language model had mixed them up). In the Hamilton-Jacobi coordinates used, u = t-z = v = t+z, that is z=0, would hold. This 3-surface, extended in the direction of time, would correspond to the world line of a particle at rest in M4.

Connection with particle massivation and ideas of Connes

The fact that this 3-surface inside the CP2 type extremal is like a particle at rest necessarily means that there are 2 space-time sheets connected by a wormhole contact. Massivation has necessarily occurred.

  1. If only one space-time sheet is involved, it is the analog of one of the two half planes. Is this possible? Could the light-like 3-D orbit of the partonic surface be an edge of the space-time surface in the Minkowskian region? Is such a solution possible, or are wormhole contacts and a pair of space-time sheets necessarily needed? In any case, the fermion lines would be on partonic 2-surfaces, so a partonic surface is needed.
  2. Interestingly, the top French mathematician Connes ended up proposing that the Higgs mechanism in non-commutative geometry would correspond to a doubling of Minkowski space in just this way. Also in the TGD framework the massivation would occur in the same way!

    I have been in a Schrödinger's cat-like state regarding this question: it would seem that the boundary conditions do not allow boundaries at all. On the other hand, I have also considered the possibility of allowing light-like boundaries.

  3. The fact that only the coordinate u or v appears in the generalized analytic functions f1 and f2 means an analogy with a wave moving with the speed of light and depending only on u = t-z or v = t+z. In the string model, the terms left mover and right mover are used.

    The situation in which both space-time sheets are involved would correspond in the string model to a wave arriving along one space-time sheet being reflected back at this 3-surface of the CP2 type extremal and returning along the other space-time sheet.

    If a single sheet with light-like boundaries is possible, it would correspond to massless particles: either a left mover or a right mover, but not both. On the other hand, p-adic thermodynamics predicts that also photons and gravitons have a small mass.

Testing whether the condition g_uv=0 allows solutions

Tuomas had, using the language model, come up with a proposal to investigate whether there are analytic solutions to the condition g_uv=0 on a partonic surface. If there are, we can be satisfied. On the other hand, it could happen that there are none. I thought about it at night and found out that such solutions really do exist. The task is to find a situation simple enough that numerical calculations are not needed.

  1. I already made the simplifying assumption earlier that f2 is of the form f2 = ξ^2 - w^n. There is no u-dependence at all. f2 = 0 gives ξ^2 = w^n, so there is no need to find the roots either.

    A more general choice would be f2 = P_2(ξ^2, w) without u-dependence. Now the roots of the polynomial must be solved, but this does not change the situation.

  2. We can make a similar assumption for f1, but allow u-dependence:

    f1 = f1(ξ^1, w, u) = ξ^1 - g(w,u) .

    We can simplify it even further by assuming

    g(w,u) = u h(w) .

    So we can solve ξ^1 as

    ξ^1 = u h(w) .

  3. Now we have everything we need to solve the condition g_uv = 0.
    1. The CP2 metric s_{k l̄} in the complex coordinates ξ^k is known. Here we must remember that conjugation means u → v!

    2. The vanishing condition g_uv = 0 gives

      s_{k l̄} ∂_u ξ^k ∂_v ξ̄^l = -1 .

    3. The non-vanishing partial derivatives are

      ∂_u ξ^1 = h(w)
      ∂_v ξ̄^1 = h̄(w̄) .

      This gives

      h(w) h̄(w̄) s_{1 1̄} = -1 .

    4. The component s_{1 1̄} ≤ 0 of the CP2 metric appears in the formula (the CP2 metric is Euclidean); it is known and is proportional to 1/(1+r²) (see this), with

      r² = ξ^1 ξ̄^1 + ξ^2 ξ̄^2 ,

      and it depends on uv via ξ^1 ξ̄^1 = uv h(w) h̄(w̄). The equation can be solved for uv in terms of a function k(w,w̄) deducible from the condition:

      uv = k(w,w̄) .

      In the (u,v) plane this is a hyperbola for the given values of w. So there are solutions. We can breathe a sigh of relief. (A numerical sanity check of this condition is given below.)
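The following is only a numerical sanity check of the above condition under explicitly stated assumptions: the component s_11̄ is taken to be the standard Fubini-Study expression with unit scale, with a minus sign encoding the convention that the Euclidean CP2 contribution enters the induced metric with a negative sign, and h(w)=3, n=2 are illustrative choices.

```python
# Solve h*hbar*s_11bar = -1 for the product x = u*v at fixed w, confirming
# that a root x = k(w,wbar) exists; the Fubini-Study form of s_11bar and the
# choices h(w)=3, n=2 are assumptions made only for this sketch.
import numpy as np
from scipy.optimize import brentq

n = 2
def h(w):
    return 3.0 + 0j          # any analytic h(w) with |h| large enough works

def guv_condition(x, w):
    """Return |h|^2 * |s_11bar| - 1 as a function of x = u*v at fixed w."""
    habs2 = abs(h(w))**2
    xi2abs2 = abs(w)**(2*n)          # |xi2|^2 = |w^n|^2
    xi1abs2 = x * habs2              # |xi1|^2 = u*v*h*hbar
    r2 = xi1abs2 + xi2abs2
    s11bar_mag = ((1 + r2) - xi1abs2) / (1 + r2)**2   # Fubini-Study, unit scale
    return habs2 * s11bar_mag - 1.0

for w in [0.5 + 0.3j, 1.0 + 0.0j, 0.2 - 0.7j]:
    x = brentq(guv_condition, 0.0, 5.0, args=(w,))
    print(f"w = {w}: u*v = k(w,wbar) = {x:.4f}")
# For each w the locus u*v = k(w,wbar) is a hyperbola in the (u,v) plane,
# as stated in the text.
```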

See the article Holography= holomorphy vision: analogues of elliptic curves and partonic orbits or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, January 14, 2026

About the TGD based models for Cambrian Explosion and the formation of planets and Moon

TGD based cosmology (see this) predicts that the cosmic expansion occurs as a sequence of rapid phase transitions, increasing the thickness of the monopole flux tubes and liberating energy as the string tension is reduced.

One application is the Expanding Earth hypothesis (see this, this, this), which states that the Cambrian Explosion about half a billion years ago was induced by a relatively rapid increase of the Earth radius by a factor of 2. The details of the energetics of this transition are still far from well understood although I have considered several options (see this). The same problem appears in the model for the formation of the Moon (see this) as a transition in which a surface layer of the Earth was thrown out and formed the Moon by gravitational condensation.

Where did the energy needed to compensate for the decrease of the gravitational binding energy in these explosive transitions come from? A rough estimate for the gravitational binding energy of a proton at the surface of the Earth is about 1 eV. Gravitational energy is not the only energy involved so that the estimates involving only gravitational energy are very uncertain. What seems clear is that the needed energy cannot be electromagnetic.

I have already proposed earlier that the transformation of so-called dark nuclei to ordinary nuclei, liberating almost all of the ordinary nuclear binding energy, could explain "cold fusion" (see this). The TGD counterpart of cold fusion could also provide the energy needed to compensate for the reduction of the gravitational binding energy in both processes.

This inspired an attempt to fuse the TGD views about the formation of the Moon and the Expanding Earth hypothesis (EE) explaining the Cambrian Explosion (CE) into a unified narrative about geological and biological evolution, using various guidelines. The observation that angular momentum conservation predicts for both models the same result for the rotation velocity of the Earth before CE and also before the formation of the Moon plays a key role in the model.

Zero energy ontology (ZEO) suggests that the two events correspond to subsequent "big" state function reductions (BSFRs) in astrophysical scales changing the arrow of geometric time on a scale of billions of years.

The original versions of both models made some unnecessarily strong assumptions, and comparison with empirical data allows us to loosen them.

  1. The recent Mars, with a radius near R_E/2, was taken as an analog model for the evolution of the Earth before CE. A more precise assumption takes into account that the formation of the moons of Mars took place much later than for the Earth.

    Therefore the analogy applies only to the early geological evolution of the Earth after the formation of the Moon. This allows us to circumvent conflicts with the empirical data. The existence of the oceans, continents, and plate tectonics, not present in the recent Mars, does not lead to a conflict.

  2. Angular momentum conservation was applied originally by assuming that the angular momentum transfer from the Moon to the Earth was instantaneous, so that the rotation velocity Ω of the Earth before CE was 4 times the recent rotation velocity. A more realistic assumption is that the transfer was gradual. One can however assume that the radius was roughly R_E/2 before the EE event. This allows one to avoid conflicts with the empirical determinations of the rotation velocity Ω.
  3. The so-called Great Unconformity, which looks mysterious in the standard physics framework, provides very direct evidence for the occurrence of the second BSFR leading to the doubling of the Earth radius. Also the very large increase in the volume of the oceans conforms with the TGD view of the EE event. The Snowball Earth hypothesis used to explain the Great Unconformity is not needed.
  4. The proposal that the formation of the Moon and the EE event were induced by explosions associated with the core allows us to understand why the radius of the Earth was reduced in the formation of the Moon and increased in the EE event. The identification of these events as "cold fusion" transforming dark nuclei to ordinary ones would mean a huge liberation of energy, allowing compensation for the reduction of the gravitational binding energy.
  5. Zero energy ontology (ZEO) allows us to interpret the period before CE as a period with a reversed arrow of geometric time. The paradoxical-looking prediction is that the Moon was formed in the geometric future of the recent Earth! This forces a careful reconsideration of the empirical data obtained by various dating methods. Since the dating methods do not give information about the time associated with, say, systems of a scale much larger than the Earth, it seems that they are not sensitive to the arrow of geometric time. If this is really true, it means that ZEO not only solves the measurement problem but correctly predicts a change of the geometric arrow of time on the scale of billions of years.
  6. During recent years it has become clear that the so-called superionic phases in the mantle and core could be central for the understanding of geology. Some of the superionic phases could also have dark variants, which raises the question of whether life in some exotic form exists or could have existed also in the Earth's mantle and core.
  7. The superionic phases, already found to potentially play a role in the physics of the Earth's interior, are discussed from the TGD point of view.

See the article About the TGD based models for Cambrian Explosion and the formation of planets and Moon or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, January 10, 2026

Simulation hypothesis and TGD

The original simulation hypothesis did not make sense to me since I find it impossible to imagine how the simulation hypothesis could solve any problem of physics or of theory of consciousness. Living systems are of course mimicking each other all the time so that conscious simulation is a very real phenomenon.

The new view of the simulation hypothesis (see this) seems to be analogous to the simulation of one computer by another computer. Already in classical physics the coupling of two systems, in particular resonance coupling, produces what might be called a simulation. A complex enough system simulating a simpler one can produce rather faithful simulations. This is not new but makes sense.

One can also speak of conscious simulations.

  1. In the TGD inspired theory of consciousness, all perception as a sequence of quantum measurements produces representations of an external system, and the slightly non-deterministic internal degrees of freedom of the space-time surface representing conscious entities can produce this kind of simulation in the more complex system, a kind of cognitive model. The hierarchy of algebraic extensions of rationals defines the entire complexity hierarchy.
  2. The holography = holomorphy hypothesis (see this and this) makes this view concrete. Consider as an example two systems described as the roots of (f1,f2)=(0,0) and, say, (g^{ºn}ºf1, f2)=(0,0). Here the fi are analytic functions of the generalized complex coordinates of H=M4×CP2 (one hypercomplex coordinate is involved). For g(0)=0 the latter system has among its roots, for any n, also the roots of (f1,f2)=(0,0), and can therefore simulate the first system exactly at the space-time level. The larger the value of n, the higher the simulatory capacities. One obtains simulations and simulations of simulations of ... (a one-variable sketch follows after this list).
  3. For elementary particles, the p-adic length scale hypothesis, stating that p-adic primes p near powers of 2 are important, could mean the following. Polynomials g of prime degree are of special interest since they cannot be decomposed with respect to º. For any f1, f2 defining a kind of ground state, one can take any polynomial g of prime degree p (prime with respect to º) and form the iterates g^{ºn} (see this). For p = 2 or 3 one can solve the roots of the iterates g^{ºn} exactly (Galois) (see this). This exceptional feature suggests that the p-adic length scale hypothesis is true for p=2 and 3, and that these form cognitive hierarchies by iteration. p=2 is realized in particle physics and there is evidence also for p=3 in biology (see this).
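A one-variable sketch of the simulation-by-composition idea above, assuming a toy polynomial f1 and a polynomial g with g(0)=0 (both purely illustrative choices): every root of f1 = 0 is automatically a root of the iterated composition g^{ºn}ºf1 = 0, so the larger system contains the smaller one.

```python
# Roots of f1 are preserved under composition with g when g(0) = 0.
import numpy as np

f1 = np.polynomial.Polynomial([-2, 0, 1])      # f1(z) = z^2 - 2
g  = np.polynomial.Polynomial([0, -3, 1])      # g(y) = y^2 - 3y, g(0) = 0

def iterate_compose(g, f, n):
    """Return g o g o ... o g o f with n copies of g."""
    out = f
    for _ in range(n):
        out = g(out)                            # composition of polynomials
    return out

for n in range(1, 4):
    composed = iterate_compose(g, f1, n)
    # The roots of f1 should be (numerically) roots of the composition too.
    print(f"n={n}, degree={composed.degree()}, "
          f"values at roots of f1: {composed(f1.roots())}")
```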
See the article Classical non-determinism in relation to holography, memory and the realization of intentional action in the TGD Universe or the chapter Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life?.

See also the video Topological Geometrodynamics and Consciousness prepared by Marko Manninen and Tuomas Sorakivi using LLM as a tool.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Still about the energetics of the TGD based models for the formation of planets and Moon

TGD based cosmology (see this) predicts that the cosmic expansion occurs as a sequence of rapid phase transitions, increasing the thickness of the monopole flux tubes and liberating energy as the string tension is reduced.

One application is the Expanding Earth hypothesis (see this, this, this), which states that the Cambrian Explosion about half a billion years ago was induced by a relatively rapid increase of the Earth radius by a factor of 2. The details of the energetics of this transition are still far from well understood although I have considered several options (see this). The same problem appears in the model for the formation of the Moon (see this) as a transition in which a surface layer of the Earth was thrown out and formed the Moon by gravitational condensation.

Where did the energy needed to compensate for the decrease of the gravitational binding energy in these explosive transitions come from? A rough estimate for the gravitational binding energy of a proton at the surface of the Earth is about 1 eV. Gravitational energy is not the only energy involved so that the estimates involving only gravitational energy are very uncertain. What seems clear is that the needed energy cannot be electromagnetic.

I have already proposed earlier that the transformation of so-called dark nuclei to ordinary nuclei, liberating almost all of the ordinary nuclear binding energy, could explain "cold fusion" (see this). The TGD counterpart of cold fusion could also provide the energy needed to compensate for the reduction of the gravitational binding energy in both processes.

See the article About the TGD based models for Cambrian Explosion and the formation of planets and Moon or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Friday, January 02, 2026

Does the Lorentz invariance for p-adic mass calculations require the p-adic mass squared values to be Teichmüller elements?

p-Adic mass calculations involve the canonical identification I: x = ∑_n x_n p^n → ∑_n x_n p^{-n} mapping the p-adic values of mass squared to real numbers. The momenta p_i on the p-adic side are mapped to real momenta I(p_i) on the real side. Lorentz invariance requires I(p_i·p_j) = I(p_i)·I(p_j): the predictions for mass squared values should be Lorentz invariant. The problem is that without additional assumptions the canonical identification I does not commute with arithmetic operations.

Sums are mapped to sums and products to products only in the limit of large p-adic primes p and for mass squared values whose digits satisfy x_n << p. The p-adic primes are indeed large: for the electron one has p = M_127 = 2^127 - 1 ∼ 10^38. In this approximation, the Lorentz invariant inner products p_i·p_j for the momenta on the p-adic side are indeed mapped to the inner products of the real images: I(p_i·p_j) = I(p_i)·I(p_j). This is however not generally true.
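A small numerical sketch of the canonical identification, applied to integers with given p-adic digit sequences (the digit sequences are illustrative choices): the map takes products to products exactly when the digits stay small compared with p, so that no carry overflows occur in the product, and it fails when p is small.

```python
# Canonical identification I: sum_n x_n p^n -> sum_n x_n p^(-n).
def from_digits(d, p):
    """Integer with p-adic digits d (lowest digit first)."""
    return sum(c * p**n for n, c in enumerate(d))

def digits(x, p):
    d = []
    while x:
        x, r = divmod(x, p)
        d.append(r)
    return d or [0]

def I(x, p):
    """Canonical identification of a non-negative integer."""
    return sum(c * float(p)**(-n) for n, c in enumerate(digits(x, p)))

xd, yd = [2, 3, 1], [4, 1]            # digit sequences, lowest digit first
for p in [5, 17, 101, 10007]:
    x, y = from_digits(xd, p), from_digits(yd, p)
    lhs, rhs = I(x * y, p), I(x, p) * I(y, p)
    print(f"p={p:6d}: I(xy)={lhs:.6f}  I(x)I(y)={rhs:.6f}  "
          f"difference={lhs - rhs:+.2e}")
```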

  1. Should this failure of Lorentz invariance be accepted as being due to the approximate nature of p-adic physics, or could it be possible to modify the canonical identification? It should also be noticed that in zero energy ontology (see this) the finite size of the causal diamond (CD) (see this) reduces the Lorentz symmetries so that they apply only to the Lorentz group leaving invariant either vertex of the CD.
  2. Or could one consider something more elegant and ask under what additional conditions Lorentz invariance is respected in the sense that inner products of momenta on the p-adic side are mapped to inner products of momenta on the real side?
The so-called Teichmüller elements of the p-adic number field could allow one to realize exact Lorentz invariance.
  1. The Teichmüller elements T(x) associated with the elements of a p-adic number field satisfy T(x)^p = T(x) and therefore define a finite field G_p, which is not the same as that given by the p-adic integers modulo p. The Teichmüller element T(x) is the same for all p-adic numbers congruent modulo p and involves an infinite series in powers of p.

    The map x → T(x) respects arithmetic. The Teichmüller elements for the product and sum of two p-adic integers are the products and sums of their Teichmüller elements: T(x_1+x_2) = T(x_1)+T(x_2) and T(x_1 x_2) = T(x_1)T(x_2).

  2. If the thermal mass squared is a Teichmüller element, it is possible to have Lorentz invariance in the sense that the p-adic mass squared m²_p = p_k p^k, defined in terms of the p-adic momenta p_k, is mapped to m²_R = I(m²_p) satisfying I(m²_p) = I(p_k)I(p^k). Also the inner product p_1·p_2 of p-adic momenta is mapped to I(p_1·p_2) = I(p_1)·I(p_2) if the momenta are Teichmüller elements.
  3. Should the mass squared value, coming as a series in powers of p, be mapped to a Teichmüller element or should it be equal to a Teichmüller element?
    1. If the mass squared value is mapped to the Teichmüller element, the lowest order contribution to mass squared from p-adic thermodynamics fixes the mass squared completely. Therefore the Teichmüller element does not differ much from the p-adic mass squared predicted by p-adic thermodynamics. For the large p-adic primes assignable to elementary particles this is true.

    2. The radical option is that the p-adic thermodynamics and the momentum spectrum are such that the thermal mass squared values are predicted to be Teichmüller elements. This would fix the p-adic thermodynamics apart from the choice of the p-adic number field or its extension. The mass squared spectrum would be universal and determined by number theory. Note that the p-adic mass calculations predict that the mass squared is of order O(p): this is however not a problem since one can consider m²/p.
This would have rather dramatic physical implications.
  1. If the allowed p-adic momenta are Teichmüller elements and therefore elements of G_p, then also the mass squared values are Teichmüller elements. This would mean number theoretic momentum quantization. It would imply the Teichmüller property also for the thermal mass squared, since p-adic thermodynamics, in the approximation that very high powers of p give a negligible contribution, gives a finite sum over Teichmüller elements. Number theory would predict both the momentum and mass spectra and also the thermal mass squared spectrum.

    What does it mean that the product of Teichmüller elements is a Teichmüller element? The product xy can be written as ∑_k (xy)_k p^k, where (xy)_k = ∑_l x_{k-l} y_l. For Teichmüller elements (xy)_k has no overflow digits. This is true also for I(xy), so that I(xy) = I(x)I(y). A similar argument applies to the sum. (A small computational sketch of Teichmüller representatives follows after this list.)

  2. The number of possible mass squared values in p-adic thermodynamics would be equal to the p-adic prime p, and the mass squared values would be determined purely number theoretically as the Teichmüller representatives defining the elements of the finite field G_p. The p-adic temperature (see this), which is quantized as 1/T_p = n, can have only the p values 0,1,...,p-1. 1/T_p = 0 corresponds to the high temperature limit, for which the p-adic Boltzmann weights are equal to 1 and the p-adic mass squared is proportional to m² = ∑_m g(m) m / ∑_m g(m), where g(m) is the degeneracy of the state with conformal weight h=m. T_p = 1/(p-1) corresponds to the low temperature limit, for which the Boltzmann weights rapidly approach zero.
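A small computational sketch of Teichmüller representatives, assuming the standard construction in which T(a) is obtained as the limit of repeated p-th powering of a p-adic integer a (here truncated modulo p^N); the values p=7, N=6, a=3, b=5 are illustrative. The sketch checks the defining property T(a)^p = T(a) and the multiplicativity T(ab) = T(a)T(b).

```python
# Teichmüller representative modulo p**N via iterated p-th powering.
def teichmuller(a, p, N):
    """Teichmüller representative of a modulo p**N."""
    mod = p**N
    t = a % mod
    for _ in range(N - 1):          # x -> x^p converges to the fixed point
        t = pow(t, p, mod)
    return t

p, N = 7, 6
mod = p**N
a, b = 3, 5
Ta, Tb, Tab = teichmuller(a, p, N), teichmuller(b, p, N), teichmuller(a * b, p, N)
print("T(3) =", Ta, " T(3)^p == T(3):", pow(Ta, p, mod) == Ta)
print("T(5) =", Tb, " T(5)^p == T(5):", pow(Tb, p, mod) == Tb)
print("multiplicative: T(15) == T(3)*T(5) mod p^N:", Tab == (Ta * Tb) % mod)
```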
See the article Could the precursors of perfectoids emerge in TGD? or the chapter Does M8-H duality reduce classical TGD to octonionic algebraic geometry?: Part III

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, December 31, 2025

Three videos made with the help of a language model about TGD and the TGD inspired theory of consciousness.

Marko Manninen and Tuomas Sorakivi made, using a language model, the video TGD inspired theory of consciousness about the TGD inspired theory of consciousness.

Tuomas Sorakivi made, with the help of a language model, the video Topologinen Geometrodynamiikka ja Tietoisuus (Topological Geometrodynamics and Consciousness), related to the TGD inspired theory of consciousness, and the video Todellisuus yhtälönä (Reality as an Equation), related to the holography = holomorphy principle.

Although there is some hype involved, in my opinion the videos give a good overall picture of what this is all about.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.