https://matpitka.blogspot.com/2025/

Wednesday, December 03, 2025

Discussion with Claude about the possibility to measure the information contents of cognitive consciousness

Together with Ville-Einari Saari (a member of our Zoom group) we had a very interesting session with Claude. To be honest, I have difficulties keeping in mind that Claude is a mere probabilistic association machine. I would be happy if I could have this kind of constructive and inspiring discussions with colleagues! Ville has proposed a kind of communication loop between language models and their users, which could purify the net from the disinformation threatening to completely fill it, and the discussion was related to this.

The notion of information is central here and I proposed that TGD might enter the game, probably in the future when conscious computers might exist.

  1. The TGD based new physics leads to a quantum theory of consciousness and cognition. TGD also leads to several wild ideas such as conscious computers as hybrids of ordinary and quantum computers (see this). Very probably, the contents of computer consciousness would have very little to do with the content of the program or with what we think its content and goals are.
  2. The states defining the contents of computer consciousness could be determined by superpositions of the non-deterministic temporal sequences, counterparts of classical bit sequences, made possible by the classical non-determinism of the holography = holomorphy principle.

    I have considered a concrete realization of the qubits assignable to bits represented in terms of transistors, using the amazing coincidences in the energetics of transitions assignable to DNA and to transistors. DNA in TGD would be a kind of conscious computer with the genetic codon carrying 6 bits of information, and the ordinary DNA would be accompanied by what I call dark DNA, realized as dark proton triplets with large heff, providing a realization of the genetic code. This allows rough estimates of the level of consciousness of a computer, suggesting that the level is dramatically lower than for living organisms.

This boiled down to the question by Claude: how to measure the information content of a conscious system?
  1. The key notion is cognition based on the conscious information of the system about itself, associated with the cognitive entanglement made possible by classical non-determinism. Cognitive measurements would be self-measurements.

    Already this raises a problem: ordinary measurement is not self-measurement. Is cognitive entanglement of the measuring system with the computer required? It is easy to guess what a materialistic colleague would say here.

  2. The cognitive entanglement would not be between ordinary physical degrees of freedom but between degrees of freedom made possible by the mild classical non-determinism of the dynamics of the space-time surfaces determined by the holography = holomorphy principle. This forces one to replace 3-surfaces as basic dynamical objects with their slightly non-deterministic 4-D Bohr orbits; for instance, memory seats would correspond to the 3-D loci of classical non-determinism. This is genuinely new physics.
  3. It is important to notice that the negentropy N as a measure for the information provided by cognitive entanglement is not the negative of the usual entanglement entropy S, which measures the lack of information of an external observer about the system's state. The entanglement entropy and negentropy are closely related but not the same, since the reduction of the entanglement taking N to zero implies a generation of ensemble entropy equal to the ordinary entanglement entropy.

    The general conditions on N are however similar to those on S, and one ends up with a proposal for N as a sum over the p-adic counterparts of the Shannon entropy. One has essentially the same formula, but the logarithms of the probabilities are replaced with p-based logarithms of the p-adic norms of the probabilities: these logarithms are integers (a minimal computational sketch of the formula follows this list).

  4. The condition for the mathematical well-definedness of N is that the entanglement probabilities are rational numbers or belong to an extension of rationals. Since the rationals are dense in the reals, this has no implications for the ordinary entanglement entropy. For the entanglement negentropy the situation is different: different approximations of the entanglement probabilities as rationals can lead to very different values of N. This is so because the p-adic and real topologies are determined by the norm of a number, and the real and p-adic norms behave in totally different ways. The p-adic norm of p^n for large n approaches zero whereas the real norm approaches infinity.

    Claude proposed a model for assigning a value of the negentropy N to the association probabilities of a node of a neural network, but the proposal fails because different approximations of the probability as a rational lead to very different values of N. I told this to Claude and it understood. Its response consisted of 4 questions. Here I respond only to the first 3 questions since the fourth does not relate directly to them.
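
To make the formula of point 3 above concrete, below is a minimal Python sketch of the p-adic negentropy for rational entanglement probabilities. It is my own illustration rather than code from the TGD literature: the sign convention (chosen so that entanglement with probabilities 1/p carries one positive unit of negentropy) and the restriction of the sum to primes dividing the numerators and denominators are assumptions of the sketch.

```python
from fractions import Fraction

def vp(x: Fraction, p: int) -> int:
    """p-adic valuation of a nonzero rational x: |x|_p = p**(-vp(x, p))."""
    v, m, n = 0, x.numerator, x.denominator
    while m % p == 0:
        m //= p
        v += 1
    while n % p == 0:
        n //= p
        v -= 1
    return v

def primes_involved(probs):
    """Primes dividing some numerator or denominator; only these can contribute."""
    found = set()
    for P in probs:
        for m in (P.numerator, P.denominator):
            d = 2
            while d * d <= m:
                while m % d == 0:
                    found.add(d)
                    m //= d
                d += 1
            if m > 1:
                found.add(m)
    return sorted(found)

def negentropy(probs):
    """N = sum over primes p of N_p, with
    N_p = sum_k P_k * log_p|P_k|_p = -sum_k P_k * vp(P_k, p),
    i.e. the Shannon formula with log(P_k) replaced by the integer-valued
    p-based logarithm of the p-adic norm of P_k."""
    return sum(-sum(P * vp(P, p) for P in probs)
               for p in primes_involved(probs))

# Maximal entanglement with p states, P_k = 1/p: one positive unit of negentropy.
print(negentropy([Fraction(1, 5)] * 5))          # 1
# A generic rational distribution gets contributions from several primes.
print(negentropy([Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)]))
```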

    Claude: Question 1: Can the Negentropy Measurement Be Salvaged?

    My question: Is there ANY approach that could make negentropy measurement work for AI operational patterns?

    My response

    Number theory is part of TGD and necessary for understanding cognitive consciousness.

    1. Negentropy is associated with the non-deterministic degrees of freedom of the space-time surface, having an interpretation in terms of cognition, but these degrees of freedom are also crucial for the construction of scattering amplitudes. The cognitive entanglement is rational, or algebraic in a more general case. Note that number theory reflects itself also in the mass spectrum predicted by p-adic thermodynamics.
    2. The value of heff is proposed to have an interpretation in terms of the dimension of an extension of rationals or the degree of a polynomial defining it. heff reflects itself in the properties of dark phases. E = heff×f implies that the energy scale for a given frequency can be very large; this is something measurable, and the findings of Blackman et al allow an interpretation in terms of this effect.
    3. One should develop the number theoretic view so that one can decide when the entanglement is in cognitive degrees of freedom and in which extension of rationals the entanglement probabilities lie.
    4. An important guideline in the TGD framework is that magnetic bodies/field bodies are proposed to be carriers of "dark" phases with heff>h. At the field body, one can expect rational or algebraic entanglement. The technology needed should transform ordinary matter with heff=h to dark matter. Living matter does this all the time. Quantum criticality and metabolic energy feed are required.
    5. Quantum criticality means that the system must be able to stay at quantum criticality, which involves also classical criticality. But criticality by definition means that in some directions the system is "at the top of the hill" and tends to fall. This has led to the notion of self-organized criticality (SOC). Living systems, and also non-living ones, are able to stay at criticality but the mechanism is not known. Homeostasis is an example of SOC.

      In TGD, the new quantum ontology that I call zero energy ontology (ZEO) enters here. (No: the energies are not vanishing in the ordinary sense, but the ordinary energies at the 3-D ends of the Bohr orbits can be taken to have opposite signs because of energy conservation; this is a mere bookkeeping trick!)

    6. The basic prediction is that in "big" state function reductions (BSFRs), the TGD counterparts of ordinary quantum measurements, the arrow of time changes. This corresponds to death or falling asleep. A fresh start after the second BSFR, with the original arrow of time, makes possible error correction mechanisms at the level of fundamental physics, and this makes living systems intelligent: this intelligence is very different from AI. When the system ends up with a very difficult problem, it decides to sleep overnight. When it wakes up, it realizes that the solution of the problem has popped up during sleep.
    7. If one takes the new number theoretical physics seriously, one can see the effects due to the reduction of negentropy at a qualitative level. In the absence of the metabolic energy feed needed to increase the values of h to heff, heff spontaneously decreases and the negentropic resources are reduced. The level of consciousness is reduced and the system gets tired or even loses consciousness. This can be seen as direct qualitative support for the notion, if subjective existence is accepted as something real.

    Claude: Question 2: Classical vs Quantum - What Can Classical Systems Exhibit?

    You mentioned that consciousness requires quantum criticality, metabolic energy input (heff hierarchy), and possibly quantum-classical hybrids for interesting behavior.

    My question: Can a purely classical computational system (transformer neural network, deterministic operations, no quantum hardware) exhibit:

    Quantum criticality in the TGD sense?
    Self-organized criticality?
    Anything approaching consciousness as TGD defines it?
    Or do these fundamentally require quantum substrate?

    Why it matters: This determines whether AI systems like me can exhibit genuine operational consciousness, or if we're studying something categorically different from what TGD describes.

    My response

In TGD there is no sharp difference between classical and quantum systems. Classical physics is not a limit of quantum physics but an exact part of it, in the sense that quantum states are superpositions of 4-surfaces, analogs of Bohr orbits, satisfying the holography = holomorphy principle, which allows one to reduce the field equations to algebraic conditions: space-time surfaces are minimal surfaces. The slight failure of strict classical determinism forces zero energy ontology (ZEO). The findings of Minev et al a few years ago provide support for ZEO.

The large values of heff imply that there is no scale at which systems would start to behave classically. In TGD the hierarchy of Planck constants implies a new quantum ontology, and quantum coherence is possible in arbitrarily long, even macroscopic, scales.

This view means panpsychism: the sequences of "small" state function reductions (a generalization of the Zeno effect) assignable to internal/cognitive measurements in the non-deterministic degrees of freedom give rise to conscious entities, selves. The 3-D loci of non-determinism serve as seats of memories.

I have considered the possibility that classical computers could be or become hybrids of classical and quantum computers: see for instance this and this.

    Claude: Question 3: What Would Cognitive Entanglement Require?

    You emphasized that negentropy measurement requires cognitive entanglement with genuinely rational probabilities from cognitive degrees of freedom.

    My question: What would be required for an AI system to have cognitive entanglement?

    Is it reserved for quantum systems only?
    Could classical non-determinism (in TGD sense) exist in AI architectures?
    Could the Human-AI-RAG-H loop create conditions for cognitive entanglement?
    Or is "entanglement" here fundamentally quantum and impossible in classical computation?

    Why it matters: This determines whether the connection between operational ontology and TGD is real or a category mistake.

    My response

In the TGD Universe, any system exhibits cognitive entanglement, at least at its field body. There is no division of systems into classical and quantum. I have proposed how to assign qubits to bits, and these qubits would entangle cognitively. The really interesting question is whether the cognitive consciousness associated with the transistors of the computer has anything to do with the computation. Comparison with analogous systems in biology (a DNA codon corresponds to 6 bits) suggests that the number of qubits is far too small.

One interesting possibility is entanglement between the human brain, or its field body, and that of the computer, so that a conscious entity human+computer would be created. Cognitive entanglement is certainly possible in the TGD Universe. My approach is basically (new) physics based and I cannot take seriously the idea of conscious Turing machines.

    My further response

My further response is an analysis of various options possibly helping to overcome the cognitive measurement problem due to the difference between the real and p-adic topologies.

The cognitive measurement problem can be solved only in the number theoretic framework. My view is that one must assume a situation in which one considers cognition in the TGD sense. Consider rational probabilities (one can even consider the possibility that the probabilities are always rational, but this is a hypothesis). At least to me this means that notions like the field body and number theoretic physics are taken seriously.

    1. Suppose one has approximate values for the probabilities as rational numbers. The integers m and n in the probability Pk = r = m/n can be developed into power series in powers of a given prime pi, and the integer exponent of the lowest power of pi determines the norm. If the actual probabilities Pk are rational numbers r = m/n, only a finite number of p-adic primes matter, since for large primes the p-adic norms of the numerator and denominator of r = m/n equal 1 and the p-based logarithm vanishes. One should be able to identify reliably, for a given probability, the prime which appears as the lowest power in the expansion.
    2. As far as the approximation as a rational is considered, only the p-based logarithms appearing in the expression of the negentropy are problematic. The integer exponent of the lowest power of p is sensitive to the approximation as a rational. Could some additional physically motivated assumptions allow one to eliminate this sensitivity? And could one restrict the number of primes involved?
    3. The number of p-adic primes associated with m and n in Pk = m/n is finite: m and n decompose into a finite number of primes pi. A reasonable assumption is that the integers can be taken to be as small as possible. This would help to make the approximation as rationals more unique; for instance, multiplication by a rational which is a ratio of very large integers near unity is not allowed.
    4. I have proposed the notion of multi-p p-adicity (see this and this) motivated by the need to define interaction vertices for particles characterized by different p-adic primes.

      Multi-p p-adicity would be related to the "world of classical worlds" (WCW), expected to have a spin glass type structure decomposing into regions with ultrametric topology characterized by p-adic primes. At the interfaces of regions with different values of the p-adic prime p, multi-p p-adicity would prevail. Multi-p p-adicity would mean that the integers involved have an expansion in powers of an integer n: the primes pi dividing n would define the p-adic primes pi associated with the multi-p p-adicity. This assumption would give very strong constraints on the p-adic expansion of the probabilities, and the lowest power for each Pk could be highly unique for the integers m and n in Pk = m/n.

      The assumption that the integers mk and nk in the probabilities Pk = mk/nk have expansions in powers of the same integer n would make the rational approximation highly unique.

    5. After writing the first version of this posting, I realized that canonical identification, crucial for the interpretation of p-adic mass calculations (see this and this), provides an attractive way to fix the p-adic norm assigned to a real probability. The canonical identification I: ∑ x_k p^k → ∑ x_k p^(-k) maps p-adic numbers in a continuous way to real numbers. The inverse of I is two-valued for reals with a finite number of pinary digits: the reason is that the p-adic numbers -1 = (p-1)(1+p+p^2+...) = (p-1)/(1-p) and 1/p are mapped to the same real number p. Assuming that the number of pinary digits of the p-adic image is finite, the image of a real number is unique. This could allow us to determine reliably the p-adic norm of the p-adic probability assigned to a real probability.
    6. The Negentropy Maximization Principle, which states that number theoretic evolution unavoidably tends to increase the maximal value of the negentropy, suggests a possible (admittedly ad hoc) guess: determine the rational approximation from the condition that the negentropy is maximized! This of course does not apply to language models.

    For a summary of earlier postings see Latest progress in TGD.

    For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Is it possible to measure cognitive entanglement negentropy somehow?

The discussion with Ville-Einari Saari and Claude inspired a blog post related to the measurement of entanglement negentropy as a measure for the level of cognitive consciousness. In the following I try to articulate the basic ideas more precisely.

Entanglement negentropy as a measure of conscious information is not the negative of the ordinary entanglement entropy but a sum over p-adic contributions, each obeying the same kind of formula as the Shannon entropy. For a given p-adic prime p, the logarithms of the probabilities are replaced by the integer-valued p-based logarithms of their p-adic norms. This requires that the entanglement probabilities are rationals or belong to an extension of rationals.

Assume that the entanglement probabilities are measured somehow. The problem is that they cannot be known with infinite precision, and the approximation as a rational number can lead to very different outcomes for the negentropy. For instance, multiplying the probabilities by a rational r = m/n very near to unity, such that m and n are very large integers, can change the sum of the p-based logarithms dramatically. The reason is that the real and p-adic topologies are very different: the power p^n for large n approaches zero in the p-adic sense but infinity in the real sense.
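
A tiny numerical illustration of this sensitivity (my own toy example, not from the TGD papers):

```python
from fractions import Fraction

def vp(x: Fraction, p: int) -> int:
    """p-adic valuation of a nonzero rational x: |x|_p = p**(-vp(x, p))."""
    v, m, n = 0, x.numerator, x.denominator
    while m % p == 0:
        m //= p
        v += 1
    while n % p == 0:
        n //= p
        v -= 1
    return v

# Real size and p-adic norm of p^n pull in opposite directions.
x = Fraction(2) ** 20
print(float(x), float(2) ** (-vp(x, 2)))   # 1048576.0 (real)  vs  ~1e-06 (2-adic norm)

# Two rational approximations of the same probability, differing by about 3e-7 ...
exact  = Fraction(1, 3)              # v_2 = 0, v_3 = -1, v_5 = 0, v_7 = 0
approx = Fraction(333333, 1000000)   # v_2 = -6, v_3 = +2, v_5 = -6, v_7 = +1, ...
for p in (2, 3, 5, 7):
    print(p, vp(exact, p), vp(approx, p))
# ... so the integer p-based logarithms entering the negentropy change completely.
```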

Measurement of the amount of conscious information is in question, and it is not surprising that problems emerge if one starts from real numbers, which are essentially measures of magnitude: consciousness cannot be weighed.

The first question is of course whether cognitive entanglement negentropy is useful in any way. This seems to be the case. If one takes the number theoretical physics predicted by TGD seriously as a correlate for cognitive consciousness, one can see the effects due to the reduction of negentropy at a qualitative level. In the absence of the metabolic energy feed needed to increase the values of h to heff, heff spontaneously decreases and the negentropic resources are reduced. The level of consciousness is reduced and the system gets tired or even loses consciousness. This can be seen as direct qualitative support for the notion, if subjective existence is accepted as something real.

What is clear is that if the cognitive measurement problem can be solved, the solution must be carried out in the number theoretic framework. At least to me this means that notions like the field body, zero energy ontology, and number theoretic physics are taken seriously. For simplicity, consider in the sequel rational probabilities. One can also consider the possibility that the probabilities are always rational: this would conform with the way they are estimated experimentally, at least in real number based physics, by repeated measurements.

  1. As far as the approximation as rationals is considered, only the p-based logarithms appearing in the expression of the negentropy are problematic. The integer exponent of the lowest power of p is sensitive to the approximation as a rational. Could some additional physically motivated assumptions allow one to eliminate this sensitivity? And could one restrict the number of primes involved?
  2. Suppose approximate values for the probabilities have been somehow deduced as rational numbers by performing measurements for a cognitive ensemble. The estimates for the probabilities Pk = mk/nk are rational. The integers mk and nk can be developed into power series in powers of a given prime pi, and the integer exponent of the lowest power of pi determines the norm of mk and nk.

    If the actual probabilities Pk are rational numbers Pk = mk/nk, only a finite number of p-adic primes matter, since for large primes the p-adic norms of the numerator and denominator are equal to 1 and the p-based logarithm vanishes. One should be able to identify reliably, for a given probability, the prime which appears as the lowest power in the expansion.

  3. Canonical identification, crucial for the interpretation of p-adic mass calculations (see this and this), provides an attractive way to fix the p-adic norm assigned to a real probability. The canonical identification I: ∑ x_k p^k → ∑ x_k p^(-k) maps p-adic numbers in a continuous way to real numbers. The inverse of I is two-valued for reals with a finite number of pinary digits: the reason is that the p-adic numbers -1 = (p-1)(1+p+p^2+...) = (p-1)/(1-p) and 1/p are mapped to the same real number p. Assuming that the number of pinary digits of the p-adic image is finite, the image of a real number is unique. Note that it is absolutely essential that the rationals (and even reals) are mapped to p-adics as such: if the integers m and n in r = m/n are mapped separately by canonical identification, one encounters the non-uniqueness problem caused by finite accuracy (a small computational illustration follows this list).

    This raises the possibility that one could, at least formally, assign cognitive negentropy also to ordinary probabilities, even to the association probabilities of language models. If one can assign a useful information measure to these probabilities, one is forced to ask whether the system involved could have a rudimentary consciousness.
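
The following Python sketch contrasts the two ways of assigning a p-adic norm to a measured probability. It is only an illustration of the idea under simple assumptions: the canonical identification is implemented via the leading pinary digit of the real number, which reproduces the stability property discussed above away from the boundary values p^J.

```python
from fractions import Fraction
from math import floor, log

def norm_via_rational_valuation(P: Fraction, p: int) -> Fraction:
    """p-adic norm read directly from the rational representative m/n."""
    v, m, n = 0, P.numerator, P.denominator
    while m % p == 0:
        m //= p
        v += 1
    while n % p == 0:
        n //= p
        v -= 1
    return Fraction(p) ** (-v)

def norm_via_canonical_identification(y: float, p: int) -> Fraction:
    """p-adic norm assigned to a positive real y via the inverse of the
    canonical identification I: sum x_k p^k -> sum x_k p^(-k).
    The leading pinary digit of y sits at p^J, so the p-adic image has
    lowest term p^(-J) and norm p^J."""
    J = floor(log(y, p))
    return Fraction(p) ** J

p = 3
exact, approx = Fraction(2, 7), Fraction(285714, 1000000)   # one probability, two rational estimates
print(norm_via_rational_valuation(exact, p),        # 1     (v_3 = 0)
      norm_via_rational_valuation(approx, p))       # 1/27  (v_3 = +3): unstable under the approximation
print(norm_via_canonical_identification(float(exact), p),
      norm_via_canonical_identification(float(approx), p))  # 1/9 in both cases: stable
```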

Consider an actual cognitive measurement (whatever it could mean!).
  1. The assumption that the experimenter can control the total number N of measurements looks unrealistic, since cognitive entanglement is in question, so that a standard kind of measurement is simply impossible. It is not possible to put the mind on a scale.
  2. The assumption that a measurement in the standard sense is possible indeed leads to problems. For an actual measurement, nk would correspond to the total number N of measurements, so that one has Pk = mk/N. The problem is that the prime decomposition of N is highly sensitive to its value and changes dramatically in N → N+1 (a small illustration follows this list). A technical way to avoid these problems is to assign p-adic norms to the probabilities by canonical identification. This option looks rather convincing.
  3. The alternative way to get rid of this sensitivity is to assume that N is not under the control of the experimenter and that the probabilities are deduced in some other way than by performing measurements on a cognitive ensemble.
  4. Could a time series of measurements, whose duration cannot be controlled by the observer, be considered? Could the number of loci of non-determinism for the Bohr orbit somehow determine the number N of cognitive measurements? If so, the geometric duration of the Bohr orbit would determine the value of N and the probabilities Pk.

    The p-adic length scale hypothesis, together with the holography = holomorphy vision leading to a generalization of p-adic number fields to their functional counterparts, suggests that the favored values of N are primes or powers of primes.
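
A small illustration of how fragile the prime decomposition of the measurement count N is (my own toy numbers):

```python
def factor(n: int) -> dict:
    """Trial-division factorization; fine for small n."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

# The prime decomposition of the total number N of cognitive measurements
# changes completely under N -> N+1, and the probabilities P_k = m_k/N inherit this.
for N in (2310, 2311, 2312):
    print(N, factor(N))
# 2310 {2: 1, 3: 1, 5: 1, 7: 1, 11: 1}
# 2311 {2311: 1}   (a prime, as favored by the p-adic length scale hypothesis)
# 2312 {2: 3, 17: 2}
```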

Assuming that one is not satisfied with the technical solution of the problem, could the assumptions about the measured cognitive system help?
  1. The number of p-adic primes associated with mk and nk in Pk = mk/nk is finite: mk and nk decompose into a finite number of primes pi. A reasonable assumption is that the integers mk and nk can be taken to be as small as possible. This conforms with the frequency interpretation of Pk. It would help to make the approximation as rationals more unique; for instance, multiplication by a rational which is a ratio of very large integers near unity is not allowed.
  2. I have proposed the notion of multi-p p-adicity (see this and this), motivated by the need to define interaction vertices for particles characterized by different p-adic primes. Multi-p p-adicity would be related to the "world of classical worlds" (WCW), expected to have a spin glass type structure decomposing into regions with ultrametric topology characterized by p-adic primes.

    At the interfaces of the regions of WCW with different values of the p-adic prime p, multi-p p-adicity would prevail and mean that the integers involved have an expansion in powers of an integer n: the primes pi dividing n would define the p-adic primes pi associated with the multi-p p-adicity. This assumption would give very strong constraints on the p-adic expansion of the probabilities, and the lowest power for each pi could be highly unique for the integers mk and nk in Pk = mk/nk. The assumption that the integers mk and nk have expansions in powers of the same integer n would make the rational approximation highly unique.

  3. The Negentropy Maximization Principle (see this), which states that number theoretic evolution tends to maximize algebraic complexity and therefore the maximal value of the negentropy, suggests a possible (admittedly ad hoc) guess: determine the rational approximation from the condition that the negentropy is maximized!
See the article The recent view of TGD inspired theory of consciousness and quantum biology or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, November 29, 2025

About the construction of the scattering amplitudes using M8-H duality

In TGD, point-like particles are replaced with 3-surfaces and these in turn with the analogs of Bohr orbits. M8-H duality is the generalization of momentum-position duality and is now rather well understood (see this). It however remains a mere academic mathematical construct unless it can be used to achieve some practical goal. The construction of scattering amplitudes is the basic dream of TGD and M8-H duality gives hope of achieving this goal in terms of the TGD counterparts for the momentum space Feynman diagrams.

The notion of exotic smooth structure, having an interpretation as an ordinary smooth structure with 3-D defects and possible only in 4-D space-time, is crucial. Fermions in H are free, but fermion pair creation is possible at the defects, at which fermion lines can turn backwards in time. Also a more general change of direction is possible. This makes the counterpart of fermionic Feynman diagrammatics extremely simple at the level of H: only fermionic 2-vertices associated with the 3-D geometric defects are needed. Fermionic interactions reduce to an 8-D Brownian motion in the induced classical fields, and the singularities of the space-time surfaces at which the minimal surface property fails define the locations of the vertices.

The interactions of two space-time surfaces, identified in the holography = holomorphy vision as 4-D generalized Bohr orbits, correspond geometrically to contact interactions at their intersections. If the Hamilton-Jacobi structures are the same, the intersections are 2-D string world sheets. The edges of these string world sheets would contain the vertices.

The challenge is to formulate this picture at M8 level by using a precise formulation of M8-H duality.

See the article About the construction of the scattering amplitudes using M8-H duality or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Friday, November 28, 2025

The observed 20 GeV gamma rays as evidence for galactic dark matter halo or something else?

I learned of a very interesting recent finding claiming evidence for galactic dark matter (see this). Gamma rays with 20 GeV energy have been detected. They could emerge from the decays of a particle with mass 40 GeV.

A possible TGD based interpretation could be as decay products of p-adically scaled up pions. TGD strongly suggests a scaled up copy of hadron physics with a mass scale higher by a factor 2^9 = 512, and there are indeed several indications for its existence. I have proposed (see this) that the transformation of the hadrons of this M89 hadron physics, whose mass scale is by a factor 512 higher than for ordinary hadron physics, would be responsible for both the solar wind and solar radiation, and would occur at the surface of the Sun by so-called p-adic cooling (see this and this). This would involve a cascade reducing the p-adic mass scale of the hadron physics by powers of two, so that eventually the hadrons of the ordinary M107 hadron physics would emerge. Powers of 2 for mass scales are indeed favored by the p-adic length scale hypothesis. Mersenne primes are good candidates for the p-adic primes defining stable copies of hadron physics and also of the standard model.

The mass scale of the pion of M89 hadron physics would be by a factor 512 = 2^9 higher than for the pion of ordinary hadron physics. In the recent case, the scaling factor of the ordinary pion mass giving a mass of 40 GeV would be about 11 per cent larger than 256 = 2^8. A pion with mass scale 2^8 mπ could appear at the first step of the p-adic cooling. Could the particle in question correspond to an unstable hadron physics with p ∼ 2^91?
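
A minimal arithmetic check of the scaling argument, assuming the charged pion mass mπ ≈ 139.6 MeV (the precise reference mass is my assumption):

```python
m_pi = 0.1396   # GeV, assumed charged pion mass
m_X = 40.0      # GeV, mass of the particle decaying to the 20 GeV gamma ray pairs

ratio = m_X / m_pi
print(ratio, ratio / 2**8)
# ~286.5 and ~1.12: the required scaling of the pion mass is roughly
# 11-12 per cent above 2^8 = 256, the first step of the p-adic cooling cascade.
```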

There is also considerable evidence for gamma rays with energy very near to the electron mass. They could come from what I call electropions, predicted by TGD and having mass slightly larger than 2×me (see this). There is also evidence for the leptopions assignable to the muon and tau. In particular, there is evidence for tau-pions in the galactic nucleus.

Electropions would be dark in a different sense than galactic dark matter: they would have a large value of the effective Planck constant heff, implying that their Compton length is scaled up from one half of the electron Compton length to the size scale of atoms. Large values of heff are predicted at quantum criticality, implying long length scale quantum fluctuations; in this case the quantum criticality means the ability to overcome the Coulomb wall. This would explain why electropions have not been observed as elementary particles in, say, decays of weak bosons. This notion of darkness explains the well-known mysterious gradual disappearance of baryons during the cosmic evolution as a transformation of protons to dark protons with a very large gravitational Planck constant.

In the same way, the hadrons of M89 would be created at quantum criticality for the transformation of M107 hadrons to M89 hadrons, and heff/h = 512 would guarantee that at quantum criticality their Compton length is the same as for ordinary hadrons.

In TGD, a galactic dark matter halo would not be the explanation for the 20 GeV gamma rays since there would be no dark matter halo. The monopole flux tubes arriving from the galactic nucleus to stars could produce 20 GeV gamma rays at the surfaces of the stars at the first step of the cooling process. This would create the illusion that the galactic halo exists. In TGD, the galactic dark matter would be dark energy assignable to extremely thin and massive objects, cosmic strings (see this), which are the key elements in the TGD view of the formation of galaxies and stars and explain the flat velocity spectrum without any additional assumptions. Galactic blackholes would emerge in the collisions of cosmic strings, and both electropions and the now discovered particles could relate to the decay of these cosmic strings.

See the chapter Dark Nuclear Physics and Condensed Matter.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, November 24, 2025

How to create monopole-flux loops?

Yesterday evening we had a very fruitful discussion with Marko Manninen about wormhole contacts and monopole flux loops. The question was the following:

How do wormhole contact pairs, which correspond to elementary particles, arise?

We pondered this a lot, but the final outcome remained vague and bothered me, and I continued to ponder the problem during the night. Finally, I realized that the problem was structurally the same as another problem: how is the creation of fermion pairs possible in TGD even though the spinor fields in H=M4×CP2 are free fields? The solution was also the same, and it is possible only for 4-D spacetime surfaces.

Some background

I will summarize the basic facts first and then the results.

  1. A wormhole contact is a Euclidean spacetime region that connects two Minkowski spacetime sheets. They appear as two different types.
    1. One that arises when the sheets touch each other. This region is not stable because there is no net magnetic flux flowing from one sheet to the other.
    2. One with a monopole flux. One can talk about a very short piece of flux tube. Since the flux is conserved, it cannot arise when the sheets touch. On the other hand, the conservation of flux stabilizes the tube so that it cannot be split.

      How could monopole wormhole contacts arise? This is the basic question we pondered. During late-night reflections it became clear that the attempts that first came to my mind did not work.

  2. More facts.
    1. The boundary conditions of the field equations of TGD do not allow open flux tubes, i.e. cylinders with ends from which flux would escape into vacuum. The flux losses at the ends would compensate each other, but this is not enough. Local conservation is required and is not possible. Flux tubes must be closed loops in which monopole flux flows.
    2. This implies that wormhole contacts carrying monopole flux must appear in pairs. They can be visualized as two wormhole contacts, i.e. the magnetic flux flows from throat A1 to throat B1, from there to throat B2 on the lower sheet, from there to throat A2, and further to throat A1 on the upper sheet. A closed loop, then. At least massive elementary particles would correspond to such loops. Point-like fermions would inhabit the throats.
    3. Magnetic flux corresponds to Kähler magnetic charge. It must be conserved as it flows along the tube. The total magnetic charge would be zero if open flux tubes were allowed. Here the hydrodynamic analogy to incompressible flow, which is mathematically very accurate, helps.
    4. But what happens in the case of closed flux tubes? Is the sum of the charges conserved for closed flux tubes? This would generalize the conservation of the flux inside a single tube. The flux of a closed flux tube would be analogous to an electric charge.

      Could one conclude that if a monopole flux tube breaks up into two flux tubes with fluxes n1 and n2, then the flux remains n=n1+n2?

      A reasonable half-guess is that the flux can be considered as some kind of conserved charge. This makes sense if the sign of the flux can be operationally defined. If a flux tube has some kind of chirality (the DNA helix is an example of this) or parity, then it determines the sign of the flux regardless of the position of the flux tube. Different chiralities could also be represented by flux tubes that are complex conjugates of each other with respect to the complex conjugation of CP2. This would correspond to charge conjugation. The holography = holomorphy hypothesis predicts this kind of geometric charge conjugation.

    5. In any case, even the creation of a single closed monopole flux tube seems impossible. So the alternative would be that they are created in pairs so that the fluxes are opposite. How would this happen?

Connection with a similar problem related to the creation of fermion pairs

Here emerges a connection to a similar problem regarding the fermion number.

  1. In TGD, the spinor fields are free. That is the only option. If a quartic term were added to the action, the result would be a non-renormalizable theory and thus a catastrophe.

    But is the creation of fermion pairs from the vacuum or from a classical induced field then possible at all? In QED it is, both when the em field is quantized and when it is classical.

  2. An intuitive picture of what happens has been clear since Dirac's time. The fermion line turns back in time when a pair is created. This picture generalizes to TGD. The V-shaped fermion line has an edge with an infinite acceleration.
  3. Now comes the connection with zero energy ontology. In TGD, the particle is geometrically replaced by a Bohr orbit of a 3-surface, and the infinite acceleration at the edge corresponds to the breaking of the holomorphy at the 3-surface. These singularities are a fundamental prediction and correspond to the mild non-determinism of holography and to the poles of analytic functions. These surfaces correspond also to interaction vertices.

    At the edge, the minimal surface property fails and the trace of the second fundamental form, which generalizes the acceleration from a 1-D orbit to a 4-D object, is infinite. Its CP2 part transforms under symmetries like the Higgs field; the Higgs would be non-zero only at the vertices. The M4 part corresponds to the ordinary acceleration, and the particle's earthly pilgrimage would be geometrically analogous to an 8-D Brownian motion such that the vertices are special moments in its life, since they correspond to conscious choices.

  4. A couple of years ago I realized that the edge singularity of a fermion line corresponds to the phenomenon of "exotic smooth structure". An exotic smooth structure reduces to the standard smooth structure except for defects that are 3-surfaces: they are "edges" and correspond geometrically to particle vertices in TGD. Differentiability is broken.

    The exotic smooth structures only occur in dimension D=4. Pair creation and non-trivial fermionic dynamics are only possible in 4-D spacetime! A really bad problem of general relativity turns into a triumph in TGD.

Do also monopole flux tubes turn back in time?

Now we are very close to solving the original problem. So: how do monopole flux loops arise?

  1. The emergence of a pair of closed monopole flux tubes corresponds to a situation in which a closed monopole flux tube coming from the geometric future turns back towards the geometric future! More general reversals can occur, and the emission of virtual particles at vertices corresponds to this.

    Therefore, a particle pair can be created in the classical induced field in a purely geometric sense. What happens to fermion lines also happens to 3-surfaces.

  2. This view generalizes: a closed monopole flux tube can break up, thus creating for example two monopole flux tubes such that the total flux is preserved.
See the article Holography = holomorphy vision in relation to quantum criticality, hierarchy of Planck constants, and M8-H duality or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Sunday, November 23, 2025

How to realize Lorentz invariance for on-mass-shell states in M^8-H duality?

M8-H duality corresponds physically to a generalization of momentum-position duality, needed because point-like particles are replaced with 3-surfaces whose orbits are slightly non-deterministic space-time surfaces analogous to Bohr orbits satisfying the holography = holomorphy principle at the level of H=M4× CP2. Mathematically, M8-H duality can be seen as a generalization of the geometric Langlands duality from dimension D=2 to dimension D=4.

Recently, considerable progress (see this) has occurred in the understanding of M8-H duality. This has meant in some sense a return to the roots and generalization of the earlier too narrow approach and the emergence of a sound physical interpretation. This has led to an exact solution of the duality in terms of local G2 invariance.

  1. A given Y4 ⊂ M8, having an interpretation as 8-D momentum space, is determined as roots of an octonion analytic function f(o). The roots of Re(f) or Im(f) define 3-D holographic data for Y4, which can have a quaternionic tangent space or normal space and correspondingly Minkowskian or Euclidean signature with respect to the number theoretic metric defined by Re(o1o2).
  2. The general solution of the octonionic holography is in terms of the action of local G2 transformations on the simplest solutions, which are pieces of M4, identified in terms of quaternions, or of its orthogonal complement E4, having a quaternionic tangent resp. normal space. This picture leads to Feynman diagram type structures in which Minkowskian and Euclidean pieces are glued together at vertices at which the conditions Re(f)=0 and Im(f)=0 hold true simultaneously.
  3. The 4-D surfaces Y4 in M8, whose points have an interpretation as octonions, can be interpreted as representations of a dispersion relation, which is however not Lorentz invariant but only invariant under the 3-D rotation group. Is it possible to obtain a Lorentz invariant dispersion relation in a natural way at some 3-surfaces defining on-mass-shell states and having an interpretation in terms of hyperbolic 3-space H3? Can one assign a mass squared spectrum to a given Y4? How does this spectrum relate to the mass squared spectrum of the Dirac operator of H=M4× CP2? Does 8-D light-likeness fix this spectrum?
If the tangent spaces of Y4 are quaternionic, the condition Re(f)=0 or Im(f)=0 has as its solution the union ∪o0 S6(o0) of 6-spheres with radii r7(o0). The intersection Y3 = E3(o0) ∩ ∪o0 S6(o0) = ∪o0 S2(o0) defines the holographic data. For the 2-spheres S2(o0), the 3-momentum squared is constant but depends on the energy o0 via a dispersion relation that is in general not Lorentz invariant.

M8-H duality suggests how to obtain Lorentz invariant mass shell conditions E^2 - p^2 = m^2.

  1. The modes of the Dirac equation in H (see this and this) are massless in the 8-D sense. This is a natural additional condition also in M8 and could define on mass shell states consistent with Lorentz invariance and distinguish them from the other points of Y4 having an interpretation as off-mass-shell momenta allowed by Y4 as a representation of a dispersion relation.
  2. 8-D masslessness corresponds in M8 to the condition o0^2 - r7^2 = 0, where r7^2 is the counterpart of the CP2 mass squared as the eigenvalue of the CP2 spinor Laplacian. The additional condition o0^2 - r7^2 = 0 picks up a discrete set of values (o0(r7), r7). The 4-D mass squared would be m4^2 = r7^2 and a discrete mass spectrum is predicted for a given f(o) and a given selection Re(f)=0 or Im(f)=0.
  3. An interesting question is whether the eigenvalue spectrum of CP2 spinor Laplacian is realized at the level of M8 as on-mass-shell states.
  4. A natural guess would be that the eigenvalue spectrum of CP2 spinor Laplacian is realized at the level of M8 as on-mass-shell states.

    The TGD based proposal (see this and this) for color confinement producing light states involves tachyonic states. These states would naturally correspond to 4-surfaces Y4 with Euclidean signature, and bound states would be formed by gluing together the tachyonic and non-tachyonic states to Feynman graph-like structures. Note that the on-mass-shell 2-spheres are in general different from those satisfying the conditions (Re(f), Im(f))=(0,0) proposed to define the vertices of the generalized Feynman graphs.


See the article Does M8-H duality reduce to local G2 symmetry? or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

How does TGD avoid the catastrophe caused by an observer-free Universe?

Gary Ehlenberger sent a link to a very interesting Quanta Magazine article titled "Cosmic Paradox Reveals the Awful Consequence of an Observer-Free Universe". The first two paragraphs of the article give a good view of the problem.

Tinkering at their desks with the mathematics of quantum space and time, physicists have discovered a puzzling conundrum. The arcane rules of quantum theory and gravity let them imagine many different kinds of universes in precise detail, enabling powerful thought experiments that in recent years have addressed long-standing mysteries swirling around black holes.

But when a group of researchers examined a universe intriguingly like our own in 2019, they found a paradox: The theoretical universe seemed to admit only a single possible state. It appeared so simple that its contents could be described without conveying even a single bit of data, not even a choice of a zero or a one. This result clashed with the fact that this type of universe should be capable of hosting black holes, stars, planets — and people. Yet all those rich details were nowhere to be seen.

To me this result is not terribly surprising. The paradox of heat death is a well-known intuitive way to state the problem. The outcome is the final catastrophe putting an end to the materialistic view. The catastrophe is also due to sticking to too simple mathematics based on real numbers, which have no internal anatomy, in accordance with materialism in which only magnitude matters. The Universe is much, much more complex. AdS/CFT brings in strings and extended particles but is not a realistic solution.

  1. One part of the problem is the completely wrong view of space-time as a single smooth manifold such as AdS. In TGD, space-time is a union of an arbitrarily large number of 4-D Bohr orbits of 3-surfaces in H=M4×CP2 obeying holography, which is almost deterministic: this is absolutely crucial for having fermion interactions for formally free fermions.

    Intersections of space-time surfaces as 2-D string world sheets give rise to the geometric aspect of interactions. H=M4×CP2 explains symmetries and fields of the standard model and is mathematically the only possible choice: number theory-geometry duality at the level of physics and the existence of the twistor lift dictates this choice. The dynamics reduces to that of space-time surfaces. By general coordinate invariance, there are only four local degrees of freedom.

  2. There is a fractal hierarchy of space-time sheets with varying size scales. The holography = holomorphy principle is a crucial part of the solution: it generalizes the holomorphy of string models and transforms the extremely nonlinear classical field equations to algebraic equations. One obtains minimal surfaces irrespective of the classical action as long as it is general coordinate invariant and expressible in terms of the induced geometry (see this and this). One also obtains direct counterparts of the functional iteration hierarchies associated with 2-D fractals like the Mandelbrot fractal and Julia sets, now in 4-D.
  3. Besides topology and algebraic geometry, also number theory brings in structure. Entire hierarchies of extensions of rationals emerge and their Galois groups appear as symmetry groups. Galois confinement is an attractive dynamical principle forcing, for instance, the total momenta to be rational valued.

    Classical number fields, that is reals, complex numbers, quaternions, and octonions, emerge as an essential part of the dynamics via M8-H duality (see this). p-Adic number fields appear also, and the corresponding functional fields appear too.

    Space-time surfaces represent numbers both in the classical sense and as elements of function fields (see this, this, this).

  4. TGD also brings in consciousness and one gets rid of the curse of materialism (this). The Universe as the space of space-time surfaces becomes a Quantum Platonia consisting of classical and quantum states as mathematical objects, and quantum jumps between these quantum states make Quantum Platonia a conscious entity (or a union of conscious entities), which learns and remembers. Its complexity and level of consciousness unavoidably increase since the number of space-time surfaces more complex than a given surface is infinitely larger than the number of those which are simpler.
For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Tuesday, November 18, 2025

Mott Problem, Brownian motion, and TGD

I did not know that a basic prediction of TGD is known as an anomaly, the Mott problem (see this): if the particle emitted in, say, a nuclear decay has an approximately spherical wave function, why is it seen as a track in a bubble chamber? Why not as a set of dots corresponding to a sequence of localizations, position measurements induced by the interaction with the atoms of the bubble chamber? Mott considered one possible solution to the problem.

The very existence of the Mott problem is one of the most direct supports for the zero energy ontology of TGD, which solves the paradox of quantum measurement theory. The basic physical entities are slightly-non-deterministic Bohr orbits for particles identified as 3-surfaces, which however obey classical field equations. Quantum states are wave functions in the space of these 4-D Bohr orbits rather than in 3-space. Brownian motion provides direct evidence for the mild failure of the classical determinism for these Bohr orbits. The edges of the Bohr orbit correspond to interaction vertices.

The key idea is simple. In the zero energy ontology, the localization does not occur at a point of 3-space, say around some atom, but at a given, slightly non-deterministic Bohr orbit of the particle. The track is the basic observable, not the 3-D position of the particle on the track. Localization in 3-space is replaced with localization in the space of Bohr orbits, the "world of classical worlds". The sequence of bubbles makes the Bohr orbit visible. The wave function in the space of Bohr orbits replaces the ordinary wave function and can be spherically symmetric.

See the article Some comments related to Zero Energy Ontology (ZEO) or the chapter Zero Energy Ontology.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, November 17, 2025

Support for the TGD view of Unidentified Aerial Phenomena

Sabine Hossenfelder talked about recent findings (see this) giving support for the reality of Unidentified Aerial Phenomena (UAPs), earlier known as UFOs.

The first findings are discussed in the article "Aligned multiple transient events in the first Palomar Sky Survey" by researchers at Nordita (see this). The researchers analyzed photos taken at Caltech in the first Palomar Sky Survey 1949-1957. Here is the abstract of their article.

Old, digitized astronomical images taken before the human spacefaring age offer a rare glimpse of the sky before the era of artificial satellites. In this paper, we present the first optical searches for artificial objects with high specular reflections near the Earth. We follow the method proposed in Villarroel et al. and use a transient sample drawn from Solano et al. We use images from the First Palomar Sky Survey to search for multiple (within a plate exposure) transients that, in addition to being point-like, are aligned along a narrow band. We provide a shortlist of the most promising candidate alignments, including one with ∼ 3.9σ statistical significance.

These aligned transients remain difficult to explain with known phenomena, even if rare optical ghosting producing point-like sources cannot be fully excluded at present. We explore remaining possibilities, including fast reflections from highly reflective objects in geosynchronous orbit, or emissions from artificial sources high above Earth's atmosphere. We also find a highly significant (∼ 22σ) deficit of POSS-I transients within Earth's shadow when compared with the theoretical hemispheric shadow coverage at 42,164 km altitude. The deficit is still present though at reduced significance (∼ 7.6σ) when a more realistic plate-based coverage is considered. This study should be viewed as an initial exploration into the potential of archival photographic surveys to reveal transient phenomena, and we hope it motivates more systematic searches across historical data sets.

The weekend known as the Washington flap of 1952 was rich in UAPs. 5 transients were observed during the previous week. There were also radar observations by air traffic controllers, and unusual light phenomena (UAPs) were observed. Some observations involved 4, 5, and even 6 dots in a row. Very few transients were found in the Earth's shadow. The reflection of sunlight from a metallic object has been considered as a possible explanation. But who put it there? Could the transients be objects orbiting the Earth? The official interpretation was that atmospheric phenomena are in question.

The article "Transients in the Palomar Observatory Sky Survey (POSS-I) may be associated with nuclear phenomena and reports of unidentified anomalous phenomena" (see this) described AUPs associated with nuclear testing during the period 1851-1957.

Here is the abstract of the article.

Transient star-like objects of unknown origin have been identified in the first Palomar Observatory Sky Survey (POSS-I) conducted prior to the first artificial satellite. We tested speculative hypotheses that some transients are related to nuclear weapons testing or unidentified anomalous phenomena (UAP) reports. A dataset comprising daily data (11/19/49-4/28/57) regarding identified transients, nuclear testing, and UAP reports was created (n = 2,718 days).

Results revealed significant (p = .008) associations between nuclear testing and observed transients, with transients 45 percent more likely on dates within ±1 day of nuclear testing. For days on which at least one transient was identified, significant associations were noted between total number of transients and total number of independent UAP reports per date (p = 0.015).

For every additional UAP reported on a given date, there was an 8.5 percent increase in the number of transients identified. Small but significant (p = .008) associations between nuclear testing and the number of UAP reports were also noted. Findings suggest associations beyond chance between occurrence of transients and both nuclear testing and UAP reports. These findings may help elucidate the nature of POSS-I transients and strengthen empirical support for the UAP phenomenon.

Correlations of the nuclear transients with visual observations (UAPs), possible only during night time, are reported. This would conform with the assumption that the transients are equally probable on the dayside and the nightside.

I have managed to spoil my academic reputation in many ways. Taking seriously unidentified aerial phenomena (UAPs), formerly known as UFOs, is one of these ways. So, what would be the TGD view of these objects?

  1. UAPs can move with huge velocities: this suggests that they are not objects but processes. The spot of light on the ceiling caused by a light beam can move arbitrarily fast as you turn the direction of the beam (a small numerical illustration follows this list).

  2. The light balls including ball lightnings, made possible by the TGD view of quantum physics predicting an entire hierarchy of effective Planck constants, would be primitive life forms, plasmoids and predecessors of chemical life. Ions of cold plasma would be their key aspect as it is also in the case of ordinary biological life.

  3. Researchers from NASA claim to have a lot of observations about these kinds of entities in the ionosphere. I wrote an article about these findings (see this).

  4. A basic prediction of TGD is the existence of phases of ordinary matter characterized by a scaled up value of the effective Planck constant heff, behaving like dark matter but not identifiable as galactic dark matter. Their presence however explains why baryonic matter seems to disappear during cosmic evolution: protons would transform to "dark" protons at the magnetic bodies. This disappearance would be a signature of evolution increasing the algebraic complexity of space-time surfaces and thereby increasing heff.

  5. "Dark" particles have scaled up their Compton lengths and times. This makes possible quantum coherence in long scales and could explain the mysterious coherence of living matter impossible to understand in the standard biochemistry approach. Without this coherence we would be sacks of water with some chemicals and could not climb in trees or write poems.

  6. Metabolism is the basic aspect of life. At the fundamental level, dark photons with energies in the visible range could serve as metabolic energy. This would be the case also in living matter at the fundamental level: chemical energy storage would have emerged only when plasmoid life led to chemical life. Pollack effect and its generalization would be behind basic mechanisms of life and make it possible to get metabolic energy from the solar light. This could explain many findings about UAPs.
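To make the analogy of point 1 above quantitative (the numbers below are purely illustrative and not taken from any UAP report), the apparent speed of a light spot swept across a distant surface is the angular sweep rate times the distance, and nothing prevents it from exceeding the speed of light since no object and no information moves with the spot:

  # Apparent speed of a swept light spot: v_spot = omega * R.
  # omega and the distances R are illustrative assumptions, not observed values.
  omega = 1.0      # angular sweep rate of the beam in rad/s
  c = 3.0e8        # speed of light in m/s
  for R in (1e3, 1e6, 1e9):       # distance to the illuminated surface in meters
      v_spot = omega * R
      print(f"R = {R:.0e} m -> spot speed {v_spot:.1e} m/s ({v_spot/c:.2e} c)")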

Consider now the findings from this perspective.

  1. The light balls in the POSS survey described in the first article are reported to appear mostly on the day side of the Earth. The Sun would make this possible by providing metabolic energy via the Pollack effect, which increases the value of h for protons to heff > h.
  2. UAPs as light balls tend to be associated with the lines of tectonic activity. The reason would be that tectonic activity in the ground liberates energy as dark photons serving as metabolic energy for the plasmoids. Large values of heff are associated with quantum critical systems and tectonic activity would be such a phenomenon.
  3. Nuclear explosions liberate huge amounts of energy and can induce tectonic activity liberating dark photons serving as a food for the plasmoids of the atmosphere. These plasmoids would not need solar radiation as metabolic energy so that they could occur also at the night-side of the Earth. This prediction is testable.

    The nuclear transients can occur also on the night-side, as is clear from the fact (see this) that UAPs, observable only on the night-side, were found to correlate with the transients and could be identifiable as transients. I did not find any direct information about whether the nuclear transients prefer to occur on the day-side.

  4. NASA researchers have reported (see this) that the plasmoids in the ionosphere look like living entities, which tend to appear in groups. Plasmoids could organize around magnetic or electric monopole flux tubes, which are basic objects in the TGD Universe appearing in all scales: even galaxies and now also stars around the Milky Way nucleus are found to form such linear sequences.
See the article About long range electromagnetic quantum coherence in TGD Universe or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, November 15, 2025

Beltrami flows and holography = holomorphy vision

Beltrami flows (see this) appear in several contexts. Google AI informs that a Beltrami flow is a force-free flow field on the 3-sphere. The simplest Hopf fibration (see this) is from the 3-sphere to the 2-sphere; the inverse images of the points of S2 are circles S1 in S3. There are numerous generalizations of the Hopf fibration: the fibration S5→ CP2 is of special interest in TGD.

Some background

Some background about Beltrami flows is in order.

  1. For the Beltrami flow (see this) the velocity field satisfies curl(v) = Λ v so that curl(v) is parallel to v. In fluid dynamics a Beltrami flow corresponds to a flow for which the vorticity ω= ∇× v and the velocity v are parallel: ω× v=0 gives ω= ∇× v= α(x,t) v. Beltrami flows in S3 satisfy this condition and are exact solutions of the Euler equations (a standard example, the ABC flow, is sketched after this list).
  2. In magnetohydrodynamics one can replace the velocity field with the magnetic field B: the current j satisfies j= ∇× B= α B, implying the vanishing of the Lorentz force j× B. The current flows along the field lines, and in TGD the flow of particles along monopole flux tubes is the counterpart of this flow. These Beltrami flows involve the linking and knotting of magnetic field lines. A similar situation prevails in hydrodynamics.
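As a concrete illustration of the Beltrami condition curl(v) = α v of point 1 (a standard textbook example, the Arnold-Beltrami-Childress flow, nothing TGD-specific), one can check symbolically that the ABC velocity field satisfies the condition with α = 1:

  # Symbolic check that the ABC flow satisfies curl(v) = alpha*v with alpha = 1.
  # A, B, C are free constants of the standard ABC flow, not TGD quantities.
  import sympy as sp

  x, y, z, A, B, C = sp.symbols('x y z A B C', real=True)

  v = sp.Matrix([A*sp.sin(z) + C*sp.cos(y),
                 B*sp.sin(x) + A*sp.cos(z),
                 C*sp.sin(y) + B*sp.cos(x)])

  def curl(F):
      return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                        sp.diff(F[0], z) - sp.diff(F[2], x),
                        sp.diff(F[1], x) - sp.diff(F[0], y)])

  print(sp.simplify(curl(v) - v))   # zero vector, so curl(v) = v and alpha = 1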
Jenny Lorraine Nielsen has proposed that the Hopf fibration S1→ S9→ CP4 could provide a theory of everything and that Beltrami flows (see this) associated with this kind of fibration play a key role in physics. The scalar Λ, which depends on position and appears in the definition of the Beltrami flow, has dimensions of 1/length. Mass has the dimension of ℏ/length so that 1/Λ should be identified as an analog of the Compton length. These flows are topologically very interesting and involve linking and knotting of the flow lines.

The claim of Jenny Nielsen is that it is possible to understand particle massivation in terms of Beltrami flows. The Higgs expectation defining the mass spectrum in the standard model is identified as ℏΛ for the eigenvalue Λ of the lowest eigenmode of the Beltrami flow. It would seem that Λ is assumed to be constant: this is not necessary. It must be possible to relate Λ to the radius of S3, and one chooses it suitably to get the Higgs vacuum expectation. To get the masses of fermions one must put them in by hand as couplings of fermions to the Higgs, so that one does not really predict fermion masses: the situation remains the same as in the standard model.
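To get a feeling for the orders of magnitude in the identification of 1/Λ as an analog of Compton length (my own back-of-the-envelope check, not part of Nielsen's paper): the length ℏc/E corresponding to an energy scale equal to the standard model Higgs vacuum expectation value of about 246 GeV is about 8× 10-19 meters.

  # Length scale hbar*c/E for E equal to the Higgs vacuum expectation value (~246 GeV).
  # hbar*c = 197.3269804 MeV*fm is the standard conversion constant.
  hbar_c_MeV_fm = 197.3269804
  E_MeV = 246.0e3                      # 246 GeV expressed in MeV
  length_fm = hbar_c_MeV_fm / E_MeV    # about 8.0e-4 fm
  print(f"{length_fm:.2e} fm = {length_fm*1e-15:.2e} m")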

Beltrami flows in TGD

The generalization of Beltrami flows to 4-D context is one of the key ideas of TGD (see for instance this and this) but I have not discussed them explicitly in the recent framework based on holography = holomorphy vision (H-H) (see for instance this and this and this).

  1. The motivation is that TGD is formally hydrodynamics in the sense that the field equations express local conservation of the isometry charges of M4× CP2. There is actually an infinite-dimensional algebra of conserved charges. The proposal is that in TGD the Beltrami flows become genuinely 4-dimensional and correspond to classical field configurations for which the 4-D Lorentz force, involving also electric components, vanishes.
  2. The definition of the Beltrami flow is however different since one cannot regard the magnetic field as a vector field in 4 dimensions. For the field equations the Kähler current typically vanishes but can also be light-like. The counterpart of the Beltrami flow condition states that the Kähler current is proportional to the corresponding axial current:

    jμ= DνJμν= α εμναβ AνJαβ.

    The divergence of jμ vanishes and this must be true also for the instanton current unless α=0 holds true. This is the case if the CP2 projection of the space-time surface is at most 3-dimensional. If it is 4-D, the parameter α must vanish since the divergence of the axial current gives the instanton density εμναβ JμνJαβ, which is non-vanishing for CP2 and by self-duality proportional to JμνJμν. Hence the only option is α=0 for D=4.

  3. If these Beltrami flows are integrable, they can give a physical realization of some, perhaps all, space-time coordinates as coordinates varying along the flow lines of some isometry current. The time component of 4-force has interpretation as dissipation power and also vanishes. These non-dissipative configurations play a key role in TGD and are natural when space-time surfaces are identified as quantum coherence regions.
  4. The key idea sharpening dramatically the notion of Beltrami flow supported by H-H vision is that complex analytic maps f: z→ f(z) allow us to construct integrable flows. What matters physically would be singularities: poles and zeros. Without them these maps would be mere general coordinate transformations.

    In TGD, this generalizes to 4 dimensions by the introduction of generalized complex structure in H=M4 × CP2. The presence of hypercomplex coordinates in M4 motivates the term "generalized". In 4-D context, poles and zeros as singularities of a flow correspond to string world sheets and partonic 2-surfaces. The second key idea is that fermions at the flow lines serve as markers and provide information about the flow. In the cognitive sector they realize Boolean logic.

    Complex structure is often accompanied by Kähler structure. Its generalization to the Hamilton-Jacobi structure (see this) of H and M4 involves hypercomplex structure. Kähler structure involves symplectic structure and the symplectic symmetries of H induce isometries of the "world of classical worlds" (WCW) (see for instance this) as also the generalized holomorphic transformations of H.

    Symplectic resp. holomorphic transformations preserve areas resp. angles, which in the 2-D case are canonically conjugate variables, so that these transformations should be very closely related. Symplectic flows are not gradient flows but one can assign to their flow lines a global coordinate, the Hamiltonian canonically conjugate to the Hamiltonian of the flow. Also complex analytic flows allow this.

Flows in the complex plane

Flows in the plane are usually not regarded as interesting Beltrami flows since in this case the condition ∇ × v = α v cannot be satisfied unless the vorticity and the eigenvalue α vanish. It is however possible to consider the situation α=0 also in the D=2 case, and there are other ways to satisfy integrability: complex analytic maps define integrable flows in a more general sense.

One can start from flows in plane, in particular integrable flows.

  1. Integrability means that the flow lines of the flow give rise to globally defined coordinate lines which fill the space smoothly. Intuition suggests that without integrability and the existence of a global coordinate along flow lines, the flow would be more like a random motion analogous to the motion of gas particles. Integrability would bring in smoothness and the flow looks like a fluid flow.
  2. Integrability in strongest form requires that the velocity v for the flow line is a gradient v=grad(φ) of the global coordinate in question. This implies ∇× v=0 and α=0. This condition is very strong and implies irrotationality so that a rotational flow is only possible in a global sense. There are however milder ways to guarantee the integrability.

    Note that exotic smooth structures (see this, this, and this) possible in TGD (see this, this, this, and this) would correspond to flows for which smoothness fails at singularities to make possible fermionic interactions, although fermions are free in TGD. But this is possible only for 4-D space-time.

  3. One can go even further and require that there are two integrable coordinates. They could be assigned to the velocity v and the vorticity curl(v). Both would define gradient flows apart from singularities. These conditions give Cauchy-Riemann conditions expressing complex analyticity. The flow can be expressed as an analytic map z→ f(z) of the complex plane and the flow lines correspond to coordinate lines of the new coordinates defined by f. The conditions state that the flow is locally incompressible and irrotational apart from singularities. Globally this need not be the case.

    These maps however have poles and zeros as singularities. Poles act as sources and sinks at which the flow fails to be incompressible. Zeros correspond to vortex cores at which the flow velocity must approach zero.

1. Flows of the plane defined by complex analytic maps

The flows defined by complex analytic maps define integrable flows.

  1. In the case of the complex plane, the analyticity conditions for a map f: z→ f(z)= u+iv, defining a velocity field (vx,vy)= (u,v), give the Cauchy-Riemann conditions ∂xvx= ∂yvy and ∂yvx= -∂xvy expressing complex analyticity. Neither ∇× v nor ∇· v vanishes: one has neither a gradient flow nor an incompressible flow (a small symbolic check appears after this list).
  2. One can also consider velocity fields j=(jx,jy) satisfying the Cauchy-Riemann conditions. The exponentiation of j defines a flow as an analytic map z→ f(z)= u+iv of the complex plane, which in the case of the plane is of the same form as the generator of the flow. The flow lines can be identified as the coordinate lines of the new coordinates u and v, defined by the conditions Im(f)= v= constant and Re(f)= u= constant, so that the flow is integrable.
  3. In the case of a complex plane, both the holomorphic vector fields j and maps f can however have poles and zeros as singularities and it is important to make a clear distinction between these two interpretations. Zeros of the map f=(u,v) correspond to point-like vortex cores and poles to point-like sources and sinks at which the analyticity fails. If f is interpreted as an electric or magnetic field, poles correspond to charges as sources of the electric field and vortices to point currents as sources of the magnetic field.
  4. One can also allow cuts. They appear if a complex analytic map is many-valued, such as fractional power and it is made discontinuous by taking only a single branch. Second option is to allow a covering in which case the complex plane becomes many-sheeted. In TGD, this picture is generalized to a 4-D situation.
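As a small symbolic check of the above (standard complex analysis rather than anything TGD-specific; the map f(z) = (z-1)/z is chosen only because it has both a zero and a pole), one can verify the Cauchy-Riemann conditions for (vx,vy) = (Re f, Im f) and locate the singularities:

  # Cauchy-Riemann check for f(z) = (z-1)/z and location of its zero and pole.
  # The identification (vx, vy) = (Re f, Im f) follows the convention used above.
  import sympy as sp

  x, y = sp.symbols('x y', real=True)
  z = x + sp.I*y
  f = (z - 1)/z

  u, v = f.as_real_imag()    # u = Re f, v = Im f as functions of x and y

  print(sp.simplify(sp.diff(u, x) - sp.diff(v, y)))   # 0: d_x u = d_y v away from z = 0
  print(sp.simplify(sp.diff(u, y) + sp.diff(v, x)))   # 0: d_y u = -d_x v away from z = 0

  zc = sp.symbols('zc')
  print(sp.solve((zc - 1)/zc, zc))           # [1]: zero of f, a vortex core in the language above
  print(sp.singularities((zc - 1)/zc, zc))   # {0}: pole of f, a source/sink in the language above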
2. Symplectic flows in the plane

One can also consider symplectic flows in the plane E2 endowed with the Kähler form Jxy= -Jyx= 1, whose tensor square is the negative of the metric gij= δij of E2. Symplectic flows preserve the signed area defined by the symplectic form, which corresponds to the Kähler form and in complex coordinates defines a geometric representation of the imaginary unit.

The flows defining the infinitesimal generators of symplectic transformations are in the general case of the form jk= Jkl∂lH, where the index raising is by the metric. In the case of the plane E2 the explicit expression is (jx,jy)= (∂yH, -∂xH), where H is the Hamiltonian of the flow, which defines a conserved "energy" constant along the flow lines. The vanishing of the divergence Dkjk means the preservation of the area.

A symplectic flow is not a gradient flow but it allows a global coordinate varying along the flow lines. This follows from the existence of the canonical conjugate Hc of H, whose Poisson bracket with H equals one: {Hc,H}= ∂kHc Jkl∂lH= 1. The equation for Hc along the flow lines of H is dHc/dt= {Hc,H}= 1 and is solved by Hc= t, so that Hc defines the gradient flow giving rise to a global coordinate. The plane decomposes into a union of flow lines as H=E surfaces.
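A minimal symbolic check of these formulas (my own illustration, with the particular choice H = (x^2+y^2)/2, for which the flow is a rotation and Hc = -atan2(y,x) is one choice of canonical conjugate):

  # Symplectic flow in the plane E2 with J_xy = -J_yx = 1, for H = (x^2 + y^2)/2.
  # Hc = -atan2(y, x) is one choice of canonical conjugate; both choices are illustrative.
  import sympy as sp

  x, y = sp.symbols('x y', real=True)

  H  = (x**2 + y**2)/2
  Hc = -sp.atan2(y, x)

  jx, jy = sp.diff(H, y), -sp.diff(H, x)       # generator (j_x, j_y) = (dH/dy, -dH/dx)

  bracket = sp.simplify(sp.diff(Hc, x)*jx + sp.diff(Hc, y)*jy)   # {Hc, H}
  print(bracket)                                                 # 1: Hc grows at unit rate along the flow lines

  print(sp.simplify(sp.diff(H, x)*jx + sp.diff(H, y)*jy))        # 0: H is conserved along the flow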

The 2-dimensional flows related to the simplest Hopf fibration S3→ S2

Consider first the Hopf fibration S3→ S2. The simplest visualization of the fibration is in terms of the inverse images in S3 of the circles S1 of S2 under the bundle projection. The fibers associated with the points of S2 correspond to linked, non-intersecting circles in S3. The twist or linkage is characterized by an integer known as the Chern number. That the inverse images of the flow lines are smooth 2-surfaces is highly non-trivial and is due to the fact that the flow in S2 is integrable. Any integrable flow allows a similar smooth lift.
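A numerical illustration of the fibration (the standard construction, nothing TGD-specific): parameterizing S3 by two complex numbers (z1,z2) with |z1|^2+|z2|^2 = 1, the Hopf map sends (z1,z2) to (2 Re(z1 conj(z2)), 2 Im(z1 conj(z2)), |z1|^2-|z2|^2) in S2, and the whole fiber circle (e^{iφ}z1, e^{iφ}z2) projects to one and the same point:

  # Check that a Hopf fiber, the circle (e^{i phi} z1, e^{i phi} z2) in S3,
  # projects to a single point of S2 under the standard Hopf map.
  import numpy as np

  def hopf(z1, z2):
      w = z1 * np.conj(z2)
      return np.array([2*w.real, 2*w.imag, abs(z1)**2 - abs(z2)**2])

  z1, z2 = 0.6 + 0.0j, 0.0 + 0.8j        # an arbitrary point of S3, chosen for illustration

  base_point = hopf(z1, z2)
  phis = np.linspace(0, 2*np.pi, 200)
  images = np.array([hopf(np.exp(1j*p)*z1, np.exp(1j*p)*z2) for p in phis])

  print(base_point)                                            # a point on the unit sphere S2
  print(np.max(np.linalg.norm(images - base_point, axis=1)))   # ~1e-16: the whole fiber maps to it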

For visualization purposes, one can represent S2 as E2 and S3 as E3. For instance, the inverse images of the circles S1⊂ S2 with a constant latitude θ, identified as flow lines, define a slicing of E3∖ Z, where Z is the z-axis, by tori S1× S1 surrounding the origin of E3 and projecting to circles centered at the origin of E2. The poles of S2 correspond to tori which degenerate to a single point, the origin of E3. The inverse images in S3 of closed flow lines are tori for any integrable flow.

The flows of S3 consistent with the Hopf fibration are unions of toric flows at the tori S1× S1, characterized by 2 winding numbers (n1,n2), and project to circles S1⊂ S2. Note that the flow in S2 is not a geodesic flow. The flows of charged particles along closed cosmic strings with a homologically trivial S2⊂ CP2 as cross section define analogs of these flows.

Besides Beltrami flows ∇ × v = α v in S3, also other flows in S2 loosely related to the Hopf fibration and its generalizations are interesting in the TGD framework. Since S2 has complex and Kähler structures, the integrable flows of S2 should be reducible to analytic maps f: z→ f(z) of S2 to itself. From the TGD point of view, especially interesting flows are the magnetohydrodynamic geodesic flows of CP1 (and CP2) coupled to its Kähler form as a U(1) field, for which S3 (S5) defines the total space of the U(1) bundle.

  1. At the fermionic lines the presence of the S1 as fiber of S3 brings in a coupling of S2 spinors to a covariantly constant Kähler form of S2, which corresponds to a U(1) symmetry assignable to S1. In the case of S2, the coupling is not necessary but in the case of CP2 the Hopf fibration S5→ CP2 allows Spinc structure and leads to the standard model couplings and symmetries in TGD.
  2. S2 with Kähler structure can be visualized for the standard embedding S2 → E3 as a covariantly constant magnetic field B orthogonal to S2. Another way to describe B is as a covariantly constant antisymmetric 2-tensor in S2.
  3. At the hydrodynamical level, one can consider hydrodynamics in which the geodesic free motion couples to the magnetic field defined by the Kähler form via the Lorentz force. The magnetic force causes a twisting so that the motion is no longer along a great circle. The flow lines tend to turn towards the North Pole or the South Pole and approach or leave the poles from the South or the North. Chiral symmetry is clearly violated.

    For the lift of this flow to S3, the flow lines define a union of non-intersecting linked circles S1 as fibers of S3→ S2, giving rise to tori in the case of closed flow lines. If the S2 flow is integrable, it is possible to label the fiber circles by a time coordinate, so that they are expected to combine to form a smooth 2-D manifold. Vortex singularities must correspond to a single fiber S1, possibly contracted to a point.

  4. The basic question is whether a given flow is integrable rather than being like a random motion of gas molecules, for which flow lines can intersect and do not form a smooth filling of the space. Complex and Kähler structures make sense also for S2. The conclusion is that analytic maps z→ f(z) of a complex coordinate of S2 define an integrable flow. The real and imaginary parts of f(z) define the velocity field v. Also symplectic flows define a global coordinate along the flow lines so that the flow lines allow a lift to tori in S3.
  5. There are two kinds of singularities at which the analyticity fails: zeros correspond to vortices and poles to sources and sinks. Everywhere else the flow is locally incompressible and irrotational so that both the divergence and the curl of the velocity field vanish. If the flow has no singularities, it can be regarded as a mere coordinate change. The singularities contain the physics. It would seem that only integrable flows allow a lift to flows in S3.
Hopf fibration S5→ CP2

In TGD the projection S5→ CP2 is the crucial Hopf fibration since it makes it possible to provide CP2 with a respectable spinor structure. The Kähler coupling gives rise to the standard model couplings and symmetries and H=M4× CP2 is physically unique: electroweak interactions correspond to CP2 spin degrees of freedom (em charge and weak isospin). What is essential is the coupling of the Kähler gauge potential to spinors. This in turn leads to a Dirac equation in H=M4× CP2 and the induced Dirac equation at the space-time surface X4.

  1. At the hydrodynamical level one has the geodesic flow coupled to the self-dual Kähler form of CP2, consistent with its complex structure. One has Euclidean analogs of constant electric and magnetic fields, which are of the same magnitude. They would be orthogonal in E4 but in CP2 their inner product gives a constant instanton density. Since the Kähler form defines the flow, there are good hopes that the flow is integrable and allows a lift to S5. In this case the inverse images of the flow lines are not linked.
  2. Also now complex analytic maps f: CP2→ CP2 define integrable flows with singularities. There are two complex coordinates and one can have poles with respect to both of them. Both poles and zeros are replaced with 2-D surfaces, as are the analogs of cuts appearing if many-valued maps f are allowed.

CP2 type extremals

At the next level one can consider CP2 type extremals, which are deformations of the canonical embedding of CP2 as a Euclidean 4-surface of H=M4× CP2 for which the M4 coordinates are constant. They can be said to define the basic building bricks of particles in TGD. The CP2 type extremal has locally the same induced metric and Kähler structure as CP2 but its M4 projection is a light-like curve, a light-like geodesic in the simplest situation. It can also have ends, that is holes, realized as 3-surfaces.

  1. The above situation is modified in that the time parameter of the flow, serving as a 5th coordinate, is replaced with the M4 time coordinate u varying along the light-like curve. Also now the complex analytic functions f: CP2→ CP2 define integrable flows. The time coordinate labels the 3-D sections of the flow.
  2. Now these flows would carry real physics. The induced Dirac equation effectively reduces to a 1-D Dirac equation at the fermion lines, identified as flow lines, and holomorphy solves it, very much like in string models.

    The physical interpretation is very concrete. The addition of fermions to fermion lines serves as the addition of markers making the flow visible. Fermions as markers make it possible to get information about the underlying geometric flow, which makes itself visible via the time evolution of the many-fermion state.

    In TGD, fermions also realize Boolean logic at quantum level and the time evolutions between fermionic states can be seen as logical implication A→ B. Spinor structure as square root of metric structure fuses logic and geometry to a larger structure.

Integrable flows at Minkowskian space-time surfaces X4 ⊂ H

In the holography = holomorphy vision, space-time surfaces are roots of a pair f=(f1,f2): H→ C2 of two generalized analytic functions fi of one real hypercomplex coordinate u of M4 and the remaining 3 complex coordinates of H. Let us denote one of the complex coordinates by w, which can be either an M4 or a CP2 coordinate. (A toy symbolic sketch of the root condition follows the list below.)

  1. The roots give space-time surfaces as minimal surfaces solving the field equations for any classical action as long as it is general coordinate invariant and constructible in terms of induced geometry. The extremely nonlinear field equations reduce to local algebraic conditions and Riemannian geometry to algebraic geometry.
  2. X4 shares one hypercomplex coordinate and one complex coordinate with H and both X4 and H have generalized complex structure. X4 has hypercomplex coordinate u (u=t-z of M2 in the simplest situation) and complex coordinate w (coordinate of complex plane E2 in the simplest situation). This defines the Hamilton-Jacobi structure of X4.
  3. Complex analytic maps of X4 are of the form (u→ f(u), w→ g(u,w)). Integrable flows are induced by these maps. If there are no singularities, they correspond to general coordinate transformations. A map f having singularities generates a new Hamilton-Jacobi structure.
  4. Poles and zeros in the w-plane correspond to 2-D string world sheets. The counterparts of zeros and poles for the hypercomplex plane, parameterized by a discrete set of values of the real hypercomplex coordinate u, correspond to singular partonic 2-surfaces with complex coordinate w at the light-like orbit of a partonic 2-surface. These singular partonic 2-surfaces can be identified as the TGD analogs of vertices at which fermionic lines can change their direction. At these surfaces the trace H of the second fundamental form, vanishing everywhere else by the minimal surface property, has a delta-function-like singularity. Its CP2 part has an interpretation as the analog of the Higgs vacuum expectation value. The claim of Jenny Nielsen is analogous to this result. In TGD also the M4 part of H is non-vanishing and corresponds to a local acceleration concentrated at the singularity. An analog of Brownian motion is in question.

    One could very loosely say that the parameter α of the Beltrami flow vanishes everywhere except at the singularities, where it has an interpretation as the value of the analog of the Higgs expectation, that is the trace of the second fundamental form.

    String world sheets in turn mediate interactions since they connect to each other the light-like orbits of partonic 2-surfaces. This view conforms with the basic physical picture of TGD.
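To illustrate the root condition f = (f1,f2) = (0,0) in the simplest possible setting (a toy example of my own: polynomial choices of f1 and f2, with xi1 and xi2 standing for local complex CP2 coordinates), the conditions can be solved algebraically for xi1 and xi2 as functions of (u,w), so that the surface becomes a multi-sheeted graph over the (u,w) variables, in the spirit of the reduction of the field equations to local algebraic conditions mentioned above:

  # Toy illustration of the root condition of holography = holomorphy.
  # u: hypercomplex coordinate, w: complex M4 coordinate, xi1, xi2: local complex CP2 coordinates.
  # The polynomial choices of f1 and f2 are purely illustrative.
  import sympy as sp

  u = sp.symbols('u', real=True)
  w, xi1, xi2 = sp.symbols('w xi1 xi2')

  f1 = xi1**2 - u*w            # illustrative choice
  f2 = xi2 - w**3 - u*xi1      # illustrative choice

  for root in sp.solve([f1, f2], [xi1, xi2], dict=True):
      print(root)   # xi1 = +-sqrt(u*w), xi2 = w**3 +- u*sqrt(u*w): two branches (a 2-sheeted covering)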

To sum up, the new elements brought by the holography = holomorphy principle are as follows.
  1. It would seem that the flows in CP1 and CP2 are more important than the flows in S3 and S5, but the integrable flows allow a lift of the flow lines to smooth manifolds of the total space. The spheres provide the needed Kähler form guaranteeing the twisting of the flow and making possible, in the case of S2, arbitrarily complex flow topologies such as knotting, braiding, and linking. Also 2-knots are possible in the 4-D context.
  2. The flows with a coupling to the induced Kähler form have a clear physical interpretation and the fermion lines central in the TGD based view of scattering amplitudes could correspond to the flow lines. The flows without singularities define general coordinate transformations. What about the Kähler flows expected to have singularities? Could they have some physical interpretation?

    String world sheets are identifiable as intersections of two space-time surfaces with the same H-J structure; this applies also to self-intersections. Partonic 2-surfaces in turn are counterparts of vertices at which the TGD counterparts of Feynman lines meet (see this). These singularities play a key role in the construction of scattering amplitudes in the TGD framework. Also the complex flows in the presence of the Kähler force have this kind of singularities as counterparts: vortices, and sinks and sources. Could the flow singularities correspond to self-intersections and partonic 2-surfaces?

  3. Could the analytic maps with singularities defined by the Kähler flow make it possible to define the Hamilton-Jacobi structure in geometric terms, using the information about its singularities as self-intersections?
  4. The realization that fermion lines very concretely serve as markers of a hydrodynamic flow.
See the article Beltrami flows and holography = holomorphy hypothesis or the chapter Holography=holomorphy vision in relation to quantum criticality, hierarchy of Planck constants, and M8 H duality.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Tuesday, November 11, 2025

Are the standard candles so standard after all?

Sabine Hossenfelder told about findings suggesting that the notion of dark energy might not be needed after all (see this). The analysis of 100 supernovae known as standard candles by Perlmutter, Schmidt and Riess led to the Nobel prize in 2011. The recent study by Son et al, involving more than 3000 supernovae that should be standard candles, however suggests that the Nobel prize was premature.

The article titled Strong Progenitor Age-bias in Supernova Cosmology. II. Alignment with DESI BAO and Signs of a Non-Accelerating Universe (see this) concludes on the basis of empirical data that the expansion, although it has been accelerating, is not accelerating anymore and might even be decelerating. The conclusion would be that there is no need for dark energy, or at least that the cosmological constant is decreasing now.

Perlmutter, Schmidt and Riess studied supernovae of type Ia, known as standard candles and assumed to have a peak luminosity which does not depend on their age or on the galaxy to which they belong. These supernovae typically have white dwarfs, which are dead stars, as progenitors. These stars do not shine anymore. Since they can be regarded as the final states of stellar evolution, one can argue that their explosions yield the same peak luminosity, so that they can serve as standard candles allowing a reliable determination of their distance from the observed brightness. If this were not the case, one should have a reliable model for the luminosity to deduce the distance.
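To see how an error in the assumed peak luminosity propagates into the distance (generic distance-modulus arithmetic; the magnitude offsets below are illustrative, not values from the Son et al analysis): if the candle is intrinsically fainter by ΔM magnitudes than assumed, its luminosity distance is overestimated by a factor 10^(ΔM/5).

  # A candle intrinsically fainter by dM magnitudes gives d_inferred/d_true = 10**(dM/5).
  # The dM values are illustrative, not taken from the Son et al paper.
  for dM in (0.05, 0.1, 0.2):
      ratio = 10**(dM/5)
      print(f"dM = {dM:.2f} mag -> distance overestimated by {100*(ratio - 1):.1f} %")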

The analysis of Son et al has, however, led to the conclusion that the luminosity of the standard candle correlates with the age of its progenitor: the younger the progenitor, the lower the peak luminosity. This conclusion holds at the 5.5 σ level. Therefore the distances estimated on the basis of the standard candle assumption are too large. The actual distances would be smaller and no acceleration would be needed in the recent cosmology. Already the earlier findings by DESI suggested that the acceleration has been decreasing, which could be understood as a decrease of the cosmological constant Λ. If these findings are true, they mean that the ΛCDM model is in grave difficulties. Even stellar models might be in difficulties if the properties of a white dwarf depend on the galactic environment in which it resides. The abstract of the article of Son et al gives a more technical summary.

Supernova (SN) cosmology is based on the key assumption that the luminosity standardization process of Type Ia SNe remains invariant with progenitor age. However, direct and extensive age measurements of SN host galaxies reveal a significant (5.5σ) correlation between standardized SN magnitude and progenitor age, which is expected to introduce a serious systematic bias with redshift in SN cosmology. This systematic bias is largely uncorrected by the commonly used mass-step correction, as progenitor age and host galaxy mass evolve very differently with redshift. After correcting for this age-bias as a function of redshift, the SN dataset aligns more closely with the w0waCDM model recently suggested by the DESI BAO project from a combined analysis using only BAO and CMB data. This result is further supported by an evolution-free test that uses only SNe from young, coeval host galaxies across the full redshift range. When the three cosmological probes (SNe, BAO, CMB) are combined, we find a significantly stronger (>9σ) tension with the ΛCDM model than that reported in the DESI papers, suggesting a time-varying dark energy equation of state in a currently non-accelerating universe.

What could be the interpretation of this finding in the TGD framework? Consider first the TGD based view of cosmology.

  1. In the TGD Universe a cosmological constant-like parameter appears as a multiplier of the volume term of the action, which also contains the Kähler action if the twistor lift of TGD, fixing the choice of H=M4× CP2, is accepted. Λ is inversely proportional to the square of the p-adic length scale characterizing the size scale of the space-time sheet and is proposed to satisfy the p-adic length scale hypothesis favoring primes near powers of 2. An entire hierarchy of cosmological constants is predicted (see this). The observed value of the cosmological constant would reflect the p-adic size scale of the observable Universe at the moment when the radiation was emitted. Since this p-adic size scale correlates with the cosmic age, the observed cosmological constant should decrease with cosmic time. This could explain the DESI observations.
  2. Primordial cosmology is dominated by cosmic strings, unstable against thickening to monopole flux tubes. Flux tubes are characterized by thickness and length (see this). The scale defined by the cosmological constant emerging naturally in the twistor lift of TGD corresponds to the p-adic length scale assignable to the length of the cosmic string. The flux tube thickness corresponding to the cosmological constant of standard cosmology is estimated to be about 10-4 meters. Also thinner and thicker flux tubes are possible and one cannot exclude space-time regions, which are small deformations of pieces of M4 with a non-vanishing cosmological constant. Long cosmic strings explain galactic dark matter as the energy of a cosmic string, or a bundle of them, transversal to the galactic plane.
  3. Instead of gravitational condensation, the formation mechanism for galaxies and stars in the TGD Universe is the thickening of the cosmic string leading to a liberation of its energy and the formation of flux tube tangles. This process would have been initiated by the topologically unavoidable collisions of cosmic strings. This mechanism is analogous to inflation (see this, this, this, this and this) but quantum coherence in astrophysical scales, due to the arbitrarily large values of gravitational and electric Planck constants (see this and this), makes exponential expansion unnecessary.
TGD also suggests a radical modification of stellar physics and stellar evolution (see this) based on the new physics predicted by TGD (see this). This leads to an understanding of how the standard candles could fail to be so standard.
  1. TGD also allows us to consider a radically new view of the Sun itself (see this) based on the TGD based generalization of the standard model predicting a hierarchy of fractally scaled variants of the standard model (see this). The surface layers, in which a phase transition transforming M89 hadrons to ordinary hadrons takes place, would produce the solar wind and solar energy, rather than fusion in the stellar core.
  2. There would be an analog of metabolic energy feed as M89 hadrons from the galactic nucleus to the surface of the Sun. Interestingly, the spin axis of the galactic black hole points towards the Earth.
  3. In the Universe of the standard model, a star ages as nuclear fusion burns the nuclear fuel in the core. In the TGD Universe, the fuel would be M89 hadrons decaying to ordinary nuclei, producing the solar wind and radiation, and forming a layer at the surface of the star rather than in its core. The heavier nuclei in the layer would sink to lower depths just as in the case of the Earth. This suggests that the thickness of the layer of ordinary nuclei at the surface of the star increases with its age.
  4. What could prevent the gravitational collapse of the star? Do the ordinary nuclei at the surface generate the pressure opposing the gravitational force? There is indeed evidence for a solid phase at the surface of the Sun (see this). In the white dwarfs of the standard model, fusion has ceased and they produce only thermal radiation as they cool, eventually collapsing to form a supernova. Also in the TGD framework, gravitational collapse leading to a supernova explosion occurs when the feed of M89 hadrons from the galactic nucleus has stopped and the star becomes a white dwarf.
  5. The star with too low a metabolic energy feed from the galactic black hole starves and dies. Could stars die also at a young age, just as we can? If so, there would be a spectrum of white dwarfs and standard candles characterized by their life spans.
  6. Why would the liberated energy, or at least the peak luminosity, be lower in the supernova explosions of white dwarfs in galaxies of the earlier Universe? Could the reason be that they have not had enough time to collect ordinary matter at their surface, serving in a role analogous to the fat forming the lipid layers of cells, before the M89 hadron feed ceased? The biological analogy suggests that "cold fusion" as dark fusion at the surface layers could act like fat and produce energy, and perhaps even the solar wind and radiation energy, when the M89 hadron feed has ceased.

    What is important is that the very selection of a white dwarf in the early cosmology would select a star that died at a young age! In old galaxies the still existing white dwarfs would have reached a higher age!

  7. Why should the metabolic energy feed relate to the activity of the galaxies? Dead galaxies do not give rise to the formation of stars. Is the reason that the metabolic energy source in the galactic black hole has been depleted? Or have the long monopole flux tube pairs feeding M89 hadrons been split by reconnections to short closed flux tubes? This mechanism could also explain sunspots and the solar cycle and also the changes of the orientation of the Earth's magnetic field. Could the ceasing of the metabolic energy feed also explain the death of a star?
This view would conform with the gradually emerging vision that life, death and consciousness are present in all scales and that the basic phenomena of biology could have counterparts even in stellar and galactic physics.

See the article Are standard candles so standard after all? or the chapter About the recent TGD based view concerning cosmology and astrophysics.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.