https://matpitka.blogspot.com/

Wednesday, December 03, 2025

Discussion with Claude about the possibility of measuring the information content of cognitive consciousness

Together with Ville-Einari Saari (a member of our Zoom group) we had a very interesting session with Claude. To be honest, I have difficulties keeping in mind that Claude is a mere probabilistic association machine. I would be happy if I could have this kind of constructive and inspiring discussions with colleagues! Ville has proposed a kind of communication loop between language models and their users, which could purify the net of the disinformation threatening to fill it completely; the discussion was related to this.

The notion of information is central here and I proposed that TGD might enter the game, probably in the future when conscious computers might exist.

  1. The TGD based new physics leads to a quantum theory of consciousness and cognition. TGD also leads to several wild ideas such as conscious computers as hybrids of ordinary and quantum computers (see this). Very probably, the contents of computer consciousness would have very little to do with the content of a program or with what we think its content and goal are.
  2. The states defining the contents of computer consciousness could be determined by superpositions of the non-deterministic temporal sequences, counterparts of classical bit sequences, made possible by the classical non-determinism of the holography = holomorphy principle.

    I have considered a concrete realization of the qubits assignable to bits represented in terms of transistors, using the amazing coincidences in the energetics of transitions assignable to DNA and to transistors. In TGD, DNA would be a kind of conscious computer, with the genetic codon carrying 6 bits of information, and ordinary DNA would be accompanied by what I call dark DNA: dark proton triplets with large heff providing a realization of the genetic code. This allows rough estimates of the level of consciousness of a computer, suggesting that it is dramatically lower than for living organisms.

This boiled down to a question by Claude: how to measure the information content of a conscious system?
  1. The key notion is cognition, based on the conscious information of the system about itself and associated with the cognitive entanglement made possible by classical non-determinism. Cognitive measurements would be self-measurements.

    Already this raises a problem: ordinary measurement is not self-measurement. Is cognitive entanglement of the measuring system with the computer required? It is easy to guess what a materialistic colleague would say here.

  2. The cognitive entanglement would not be between ordinary physical degrees of freedom but between degrees of freedom made possible by the mild classical non-determinism of the dynamics of the space-time surfaces determined by the holography = holomorphy principle. This forces one to replace 3-surfaces as the basic dynamical objects with their slightly non-deterministic 4-D Bohr orbits; memory seats, for instance, correspond to the 3-D loci of classical non-determinism. This is genuinely new physics.
  3. It is important to notice that negentropy N, as a measure of the information provided by cognitive entanglement, is not the negative of the usual entanglement entropy S, which measures the lack of information of an external observer about the system's state. The entanglement entropy and negentropy are actually closely related but not the same, since the reduction of the entanglement taking N to zero implies a generation of ensemble entropy equal to the ordinary entanglement entropy.

    The general conditions on N are however similar to those on S, and one ends up with a proposal for N as a sum over the p-adic counterparts of the Shannon entropy. One has essentially the same formula, but the logarithms of the probabilities are replaced with p-based logarithms of the p-adic norms of the probabilities: these logarithms are integers.

  4. The condition for the mathematical well-definedness of N is that the entanglement probabilities are rational numbers or belong to an extension of rationals. Since the rationals are dense in the reals, this has no implications for the ordinary entanglement entropy. For the entanglement negentropy the situation is different: different approximations of the entanglement probabilities as rationals can lead to very different values of N (a small numerical sketch follows this list). This is so because the p-adic and real topologies are determined by the norm of a number, and the real and p-adic norms behave in totally different ways. The p-adic norm of p^n for large n approaches zero whereas the real norm approaches infinity.

    Claude proposed a model for assigning a value of the negentropy N to the association probabilities of a node of a neural network, but the proposal fails because different approximations of the probability as a rational lead to very different values of N. I told this to Claude and it understood. Its response consisted of 4 questions. Here I respond only to the first 3 questions, since the fourth question does not relate directly to them.
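To make the formula and its sensitivity concrete, here is a minimal Python sketch. It is my own illustration rather than part of the TGD formalism: it assumes the reading N_p = ∑_k P_k log_p |P_k|_p for the contribution of the prime p to the negentropy, with |.|_p the p-adic norm, and it compares exact probabilities (1/2, 1/2) with a rational approximation differing only in the sixth decimal.

```python
from fractions import Fraction
from math import log

def padic_norm(q: Fraction, p: int) -> Fraction:
    """p-adic norm |q|_p = p**(-k), where k is the power of p appearing in q."""
    if q == 0:
        return Fraction(0)
    k, num, den = 0, q.numerator, q.denominator
    while num % p == 0:
        num //= p
        k += 1
    while den % p == 0:
        den //= p
        k -= 1
    return Fraction(p) ** (-k)

def p_negentropy(probs, p):
    """Assumed reading of the p-adic negentropy: N_p = sum_k P_k * log_p |P_k|_p."""
    return sum(float(P) * log(float(padic_norm(P, p))) / log(p) for P in probs)

exact = [Fraction(1, 2), Fraction(1, 2)]                     # exact rational probabilities
approx = [Fraction(499999, 10**6), Fraction(500001, 10**6)]  # tiny real perturbation

for label, probs in (("exact ", exact), ("approx", approx)):
    print(label, {p: round(p_negentropy(probs, p), 2) for p in (2, 5)})
# exact  {2: 1.0, 5: 0.0}
# approx {2: 6.0, 5: 6.0}   <- a change in the sixth decimal changes N drastically
```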

    Claude: Question 1: Can the Negentropy Measurement Be Salvaged?

    My question: Is there ANY approach that could make negentropy measurement work for AI operational patterns?

    My response

    Number theory is part of TGD and necessary for understanding cognitive consciousness.

    1. Negentropy is associated with the non-deterministic degrees of freedom of the space-time surface, having an interpretation in terms of cognition, but these degrees of freedom are also crucial for the construction of scattering amplitudes. The cognitive entanglement is rational or, in the more general case, algebraic. Note that number theory reflects itself also in the mass spectrum predicted by p-adic thermodynamics.
    2. The value of heff is proposed to have an interpretation in terms of the dimension of an extension of rationals or the degree of a polynomial defining it. heff reflects itself in the properties of dark phases. E = heff f implies that the energy for a given frequency can be very large; this is something measurable, and the findings of Blackman et al allow an interpretation in terms of this effect (a back-of-the-envelope estimate follows this list).
    3. One should develop the number theoretic view so that one can decide when the entanglement is in cognitive degrees of freedom, what the extension of rationals is, and in which extension of rationals the entanglement probabilities lie.
    4. An important guideline in the TGD framework is that magnetic bodies/field bodies are proposed to be carriers of "dark" phases with heff>h. At the field body, one can expect rational or algebraic entanglement. The technology needed should transform ordinary matter with heff=h to dark matter. Living matter does this all the time. Quantum criticality and metabolic energy feed are required.
    5. Quantum criticality means that the system must be able to stay at quantum criticality, which involves also classical criticality. But criticality by definition means that in some directions the system is "at the top of the hill" and tends to fall. This has led to the notion of self-organized criticality (SOC). Living systems, and also non-living ones, are able to stay at criticality, but the mechanism is not known. Homeostasis is an example of SOC.

      In TGD, the new quantum ontology that I call zero energy ontology (ZEO) enters the picture here. (No, the energies are not vanishing in the ordinary sense: the ordinary energies at the 3-D ends of the Bohr orbits can be taken to have opposite signs because of energy conservation. This is a mere bookkeeping trick!)

    6. The basic prediction is that in "big" state function reductions (BSFRs), the TGD counterparts of ordinary quantum measurements, the arrow of time changes. This corresponds to death or falling asleep. A fresh start after the second BSFR, with the original arrow of time, makes possible error correction mechanisms at the level of fundamental physics, and this makes living systems intelligent: this intelligence is very different from AI. When the system ends up with a very difficult problem, it decides to sleep overnight. When it wakes up, it realizes that the solution of the problem has popped up during sleep.
    7. If one takes the new number theoretical physics seriously, one can see the effects due to the reduction of negentropy at a qualitative level. In the absence of the metabolic energy feed needed to increase the value of h to heff, heff spontaneously decreases and the negentropic resources are reduced. The level of consciousness is reduced and the system gets tired or even loses consciousness. This can be seen as direct qualitative support for the notion, if subjective existence is accepted as something real.
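The back-of-the-envelope estimate promised in point 2 above: assuming, purely for illustration, that a 10 Hz ELF frequency (the range studied in the Blackman-type experiments) should correspond to a dark-photon energy of about 1 eV, that is the visible/biophoton range mentioned later in this blog, the required value of heff/h follows directly from E = heff f.

```python
# Illustration of E = heff * f. The 10 Hz frequency and the ~1 eV target energy
# are my own illustrative choices, not values taken from the text.
h = 6.626e-34          # Planck constant [J*s]
eV = 1.602e-19         # one electron volt [J]
f = 10.0               # ELF frequency [Hz]
E_target = 1.0 * eV    # assumed dark-photon energy [J]
print(f"heff/h ~ {E_target / (h * f):.1e}")   # ~ 2.4e13
```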

    Claude: Question 2: Classical vs Quantum - What Can Classical Systems Exhibit?

    You mentioned that consciousness requires quantum criticality, metabolic energy input (heff hierarchy), and possibly quantum-classical hybrids for interesting behavior.

    My question: Can a purely classical computational system (transformer neural network, deterministic operations, no quantum hardware) exhibit:

    Quantum criticality in the TGD sense?
    Self-organized criticality?
    Anything approaching consciousness as TGD defines it?
    Or do these fundamentally require quantum substrate?

    Why it matters: This determines whether AI systems like me can exhibit genuine operational consciousness, or if we're studying something categorically different from what TGD describes.

    My response

    In TGD there is no sharp difference between classical and quantum systems. Classical physics is not a limit of quantum physics but an exact part of quantum physics in the sense that quantum states are superpositions of 4-surfaces, analogs of Bohr orbits, satisfying the holography = holomorphy principle, which allows the field equations to be reduced to algebraic conditions: space-time surfaces are minimal surfaces. The slight failure of strict classical determinism forces zero energy ontology. The findings of Minev et al a few years ago provide support for ZEO.

    The large values of heff imply that there is no scale at which systems would start to behave classically. In TGD, the hierarchy of Planck constants implies a new quantum ontology, and quantum coherence is possible on arbitrarily long, even macroscopic, scales.

    This view means panpsychism: the sequences of "small" state function reductions (a generalization of the Zeno effect) assignable to internal/cognitive measurements in the non-deterministic degrees of freedom give rise to conscious entities, selves. The 3-D loci of non-determinism serve as seats of memories.

    I have considered the possibility that classical computers could be or become hybrids of classical and quantum computers: see for instance this and this.

    Claude: Question 3: What Would Cognitive Entanglement Require?

    You emphasized that negentropy measurement requires cognitive entanglement with genuinely rational probabilities from cognitive degrees of freedom.

    My question: What would be required for an AI system to have cognitive entanglement?

    Is it reserved for quantum systems only?
    Could classical non-determinism (in TGD sense) exist in AI architectures?
    Could the Human-AI-RAG-H loop create conditions for cognitive entanglement?
    Or is "entanglement" here fundamentally quantum and impossible in classical computation?

    Why it matters: This determines whether the connection between operational ontology and TGD is real or a category mistake.

    My response

    In the TGD Universe, any system exhibits cognitive entanglement, at least at its field body. There is no division of systems into classical and quantum. I have proposed how to assign qubits to bits, and these qubits would entangle cognitively. The really interesting question is whether the cognitive consciousness associated with the transistors of the computer has anything to do with computation. Comparison with analogous systems in biology (a DNA codon corresponds to 6 bits) suggests that the number of qubits is far too small.

    One interesting possibility is entanglement between the human brain, or its field body, and that of the computer, so that a conscious entity human+computer would be created. Cognitive entanglement is certainly possible in the TGD Universe. My approach is basically (new) physics based, and I cannot take seriously the idea of conscious Turing machines.

    My further response

    My further response is an analysis of various options possibly helping to overcome the cognitive measurement problem due to the difference between the real and p-adic topologies.

    The cognitive measurement problem can be solved only in the number theoretic framework: one must assume a situation in which one considers cognition in the TGD sense. At least to me this means that notions like the field body and number theoretic physics are taken seriously. Consider rational probabilities in what follows (one can even consider the possibility that the probabilities are always rational, but this is a hypothesis).

    1. Suppose you have approximate values for the probabilities as rational numbers. The integers m and n in the probability P_k = r = m/n can be developed into power series in powers of a given prime p_i, and the integer exponent of the lowest power of p_i determines the norm. If the actual probabilities P_k are rational numbers r = m/n, only a finite number of p-adic primes matter, since for large primes the p-adic norms of the numerator and denominator of r = m/n are equal to 1 and the p-based logarithm vanishes. One should be able to reliably identify, for a given probability, the prime which appears as the lowest power in the expansion.
    2. As far as the approximation as rationals is considered, only the p-based logarithms appearing in the expression for the negentropy are problematic. The integer exponent of the lowest power of p is sensitive to the approximation as a rational. Could some additional physically motivated assumptions allow one to eliminate this sensitivity? And could one restrict the number of primes involved?
    3. The number of p-adic primes associated with m and n in P_k = m/n is finite: m and n decompose into a finite number of primes p_i. A reasonable assumption is that the integers can be taken to be as small as possible. This would help to make the approximation as rationals more unique: for instance, multiplication by a rational which is a ratio of very large integers and near to unity is not allowed.
    4. I have proposed the notion of multi-p p-adicity (see this and this) motivated by the need to define interaction vertices for particles characterized by different p-adic primes.

      Multi-p p-adicity would be related to the "world of classical worlds" (WCW), expected to have a spin glass type structure with a decomposition into regions with ultrametric topology, each characterized by a p-adic prime. At the interfaces of the regions with different values of the p-adic prime p, multi-p p-adicity would prevail. Multi-p p-adicity would mean that the integers involved have an expansion in powers of an integer n: the primes p_i dividing n would define the p-adic primes p_i associated with the multi-p p-adicity. This assumption would give very strong constraints on the p-adic expansion of the probabilities, and the lowest power for each p_i could be highly unique for the integers m and n in P_k = m/n.

      The assumption that the integers m_k and n_k in the probabilities P_k = m_k/n_k have expansions in powers of the same integer n would make the rational approximation highly unique.

    5. After writing the first version of this posting, I realized that canonical identification, crucial for the interpretation of p-adic mass calculations (see this and this), provides an attractive way to fix the p-adic norm assigned to a real probability. Canonical identification I: ∑ x_k p^k → ∑ x_k p^{-k} maps p-adic numbers in a continuous way to real numbers. The inverse of I is two-valued for real numbers with a finite number of pinary digits. The reason is that the p-adic numbers −1 = (p−1)(1 + p + p^2 + ...) and 1/p are mapped to the same real number p. Assuming that the number of pinary digits is finite, the image of a real number is unique. This could allow us to determine reliably the p-adic norm of the p-adic probability assigned to a real probability.
    6. The Negentropy Maximization Principle, which states that number theoretic evolution unavoidably tends to increase the maximal value of the negentropy, suggests a possible (admittedly ad hoc) guess: determine the rational approximation from the condition that the negentropy is maximized! This of course does not apply to language models.

    For a summary of earlier postings see Latest progress in TGD.

    For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Is it possible to measure cognitive entanglement negentropy somehow?

The discussion with Ville-Einari Saari and Claude inspired a blog post related to the measurement of entanglement negentropy as a measure for the level of cognitive consciousness. In the following I try to articulate the basic ideas more precisely.

Entanglement negentropy as a measure of conscious information is not the negative of the ordinary entanglement entropy but a sum over p-adic contributions obeying, however, the same kind of formula as the Shannon entropy. For a given p-adic prime p, the logarithms of the probabilities are replaced by the integer-valued p-based logarithms of their p-adic norms. This requires that the entanglement probabilities are rationals or belong to an extension of rationals.

Assume that the entanglement probabilities are measured somehow. The problem is that they cannot be known with infinite precision, and the approximation as a rational number can lead to very different outcomes for the negentropy. For instance, multiplying the probabilities by a rational r = m/n very near to unity, such that m and n are very large integers, can change the sum of the p-based logarithms dramatically. The reason is that the real and p-adic topologies are very different: the power p^n for large n approaches zero in the p-adic sense but infinity in the real sense.

Measurement of the amount of conscious information is in question and it is not surprising if problems emerge if one starts from real numbers which are essentially measures for magnitude: consciousness cannot be weighed.

The first question is of course whether cognitive entanglement negentropy is useful in any way. This seems to be the case. If one takes the number theoretical physics predicted by TGD seriously as a correlate for cognitive consciousness, one can see the effects due to the reduction of negentropy at a qualitative level. In the absence of the metabolic energy feed needed to increase the value of h to heff, heff spontaneously decreases and the negentropic resources are reduced. The level of consciousness is reduced and the system gets tired or even loses consciousness. This can be seen as direct qualitative support for the notion, if subjective existence is accepted as something real.

What is clear is that if the cognitive measurement problem can be solved, it must be solved in the number theoretic framework. At least to me this means that notions like the field body, zero energy ontology, and number theoretic physics are taken seriously. For the sake of simplicity, consider in the sequel rational probabilities. One can also consider the possibility that the probabilities are always rational: this would conform with the way they are estimated experimentally, at least in real number based physics, by repeated measurements.

  1. As far as the approximation as rationals is considered, only the p-based logarithms appearing in the expression for the negentropy are problematic. The integer exponent of the lowest power of p is sensitive to the approximation as a rational. Could some additional physically motivated assumptions allow one to eliminate this sensitivity? And could one restrict the number of primes involved?
  2. Suppose approximate values for the probabilities have been somehow deduced as rational numbers by performing measurements on a cognitive ensemble. The estimates for the probabilities P_k = m_k/n_k are rational. The integers m_k and n_k can be developed into power series in powers of a given prime p_i, and the integer exponent of the lowest power of p_i determines the norm of m_k and n_k.

    If the actual probabilities P_k are rational numbers P_k = m_k/n_k, only a finite number of p-adic primes matter, since the p-adic norms of the numerator and denominator are equal to 1 for large primes and the p-based logarithm vanishes. One should be able to reliably identify, for a given probability, the prime which appears as the lowest power in the expansion.

  3. Canonical identification, crucial for the interpretation of p-adic mass calculations (see this and this), provides an attractive way to fix the p-adic norm assigned to a real probability. Canonical identification I: ∑ x_k p^k → ∑ x_k p^{-k} maps p-adic numbers in a continuous way to real numbers. The inverse of I is two-valued for real numbers with a finite number of pinary digits. The reason is that the p-adic numbers −1 = (p−1)(1 + p + p^2 + ...) and 1/p are mapped to the same real number p. Assuming that the number of pinary digits is finite, the image of a real number is unique. Note that it is absolutely essential that rationals (and even reals) are mapped to p-adics as a whole: if the integers m and n in r = m/n are mapped separately by canonical identification, one encounters the non-uniqueness problem caused by finite accuracy. A small numerical sketch follows after this list.

    This raises the possibility that one could, at least formally, assign cognitive negentropy also to ordinary probabilities, even to the association probabilities of language models. If one can assign a useful information measure to these probabilities, one is forced to ask whether the system involved could have a rudimentary consciousness.
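The following Python sketch is my own concrete reading of the canonical identification proposal; the procedure and the example numbers are illustrative assumptions, not taken from the text. The real probability is expanded into a finite number of pinary digits, the inverse of I is applied digit-wise, and the p-adic norm of the image is read off from the lowest non-vanishing power. For comparison, factorizing the numerator and denominator of the rational approximation separately gives |.|_2 = 2, 64, 64 for the three inputs below, whereas the canonically assigned norm changes by at most a single power of p in this example.

```python
from fractions import Fraction

def pinary_digits(x, p, max_digits=12):
    """Finite pinary expansion of x in (0, 1]: digits x_k with x ≈ sum_k x_k * p**(-k)."""
    digits, r = [], Fraction(x)
    for _ in range(max_digits):
        r *= p
        d = r.numerator // r.denominator   # integer part = next pinary digit
        digits.append(d)
        r -= d
        if r == 0:
            break
    return digits

def canonical_norm(x, p, max_digits=12):
    """Map the real probability x to a p-adic number by inverting I: sum x_k p^k -> sum x_k p^-k
    digit-wise (finite number of pinary digits) and return the p-adic norm of the image."""
    for k, d in enumerate(pinary_digits(x, p, max_digits), start=1):
        if d != 0:
            return Fraction(1, p**k)   # lowest non-vanishing power p^k in the image
    return Fraction(0)

for x in (Fraction(1, 2), Fraction(500001, 10**6), Fraction(499999, 10**6)):
    print(x, canonical_norm(x, 2))
# 1/2              -> 1/2
# 500001/1000000   -> 1/2
# 499999/1000000   -> 1/4
```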

Consider an actual cognitive measurement (whatever it could mean!).
  1. The assumption that the experimenter can control the total number N of measurements looks unrealistic since cognitive entanglement is in question, so that a standard kind of measurement is simply impossible. It is not possible to put a mind on a scale.
  2. The assumption that a measurement in the standard sense is possible indeed leads to problems. For the actual measurement, n_k would correspond to the total number N of measurements so that one has P_k = m_k/N. The problem is that the prime decomposition of N is highly sensitive to its value and changes dramatically under N → N+1 (see the small sketch after this list). A technical way to avoid these problems is to assign p-adic norms to the probabilities by canonical identification. This option looks rather convincing.
  3. The alternative way to get rid of this sensitivity is to assume that N is not under the control of the experimenter and that the probabilities are deduced in some other way than by performing a measurement on a cognitive ensemble.
  4. Could a time series of measurements, whose duration cannot be controlled by the observer, be considered? Could the number of loci of non-determinism for the Bohr orbit somehow determine the number N of cognitive measurements? If so, the geometric duration of the Bohr orbit would determine the value of N and the probabilities P_k.

    The p-adic length scale hypothesis, for which the holography = holomorphy vision leads to a generalization of p-adic number fields to their functional counterparts, suggests that the favored values of N are primes or powers of primes.
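The sensitivity mentioned in point 2 is elementary to demonstrate: the prime content of N changes completely when the number of measurements is miscounted by one, so the p-adic norms of P_k = m_k/N are unstable. The example below also shows why a power of a prime, as favored by the p-adic length scale hypothesis, is a particularly clean choice of N.

```python
def factorize(n):
    """Trial-division prime factorization; good enough for illustration."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

for N in (1024, 1025, 1026):
    print(N, factorize(N))
# 1024 {2: 10}             <- a pure prime power
# 1025 {5: 2, 41: 1}
# 1026 {2: 1, 3: 3, 19: 1}
```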

Assuming that one is not satisfied with the technical solution of the problem, could the assumptions about the measured cognitive system help?
  1. The number of p-adic primes associated with m_k and n_k in P_k = m_k/n_k is finite: m_k and n_k decompose into a finite number of primes p_i. A reasonable assumption is that the integers m_k and n_k can be taken to be as small as possible. This conforms with the frequency interpretation of P_k. It would also help to make the approximation as rationals more unique: for instance, multiplication by a rational which is a ratio of very large integers and near to unity is not allowed.
  2. I have proposed the notion of multi-p p-adicity (see this and this), motivated by the need to define interaction vertices for particles characterized by different p-adic primes. Multi-p p-adicity would be related to the "world of classical worlds" (WCW), expected to have a spin glass type structure with a decomposition into regions with ultrametric topology, each characterized by a p-adic prime.

    At the interfaces of the regions of WCW with different values of the p-adic prime p, multi-p p-adicity would prevail and mean that the integers involved have an expansion in powers of an integer n: the primes p_i dividing n would define the p-adic primes p_i associated with the multi-p p-adicity. This assumption would give very strong constraints on the p-adic expansion of the probabilities, and the lowest power for each p_i could be highly unique for the integers m_k and n_k in P_k = m_k/n_k. The assumption that the integers m_k and n_k have expansions in powers of the same integer n would make the rational approximation highly unique.

  3. The Negentropy Maximization Principle (see this), which states that number theoretic evolution tends to maximize algebraic complexity and therefore the maximal value of the negentropy, suggests a possible (admittedly ad hoc) guess: determine the rational approximation from the condition that the negentropy is maximized!
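A deliberately naive sketch of what this maximization could look like operationally. The scan over denominators, the prime list, the hypothetical measured value 0.332 and its tolerance are all my own illustrative choices: for each rational in the error window of a two-state system, the negentropy of (P, 1-P) is evaluated and the maximizer is returned.

```python
from fractions import Fraction
from math import log

def padic_norm(q, p):
    """p-adic norm |q|_p = p**(-k), where k is the power of p appearing in q."""
    if q == 0:
        return Fraction(0)
    k, num, den = 0, q.numerator, q.denominator
    while num % p == 0:
        num //= p
        k += 1
    while den % p == 0:
        den //= p
        k -= 1
    return Fraction(p) ** (-k)

def negentropy(probs, primes=(2, 3, 5, 7, 11, 13)):
    """Sum of the assumed p-adic contributions N_p = sum_k P_k * log_p |P_k|_p."""
    return sum(float(P) * log(float(padic_norm(P, p))) / log(p)
               for P in probs for p in primes)

measured, tolerance = 0.332, 0.005   # hypothetical measured probability with finite accuracy

best = None
for den in range(2, 200):            # small denominators first
    for num in range(1, den):
        P = Fraction(num, den)
        if abs(float(P) - measured) <= tolerance:
            N = negentropy([P, 1 - P])
            if best is None or N > best[0]:
                best = (N, P)
print(best)   # the rational in the error window with the largest negentropy
```

Note that the maximization tends to favor denominators divisible by high powers of small primes, which makes concrete why the guess is called ad hoc above.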
See the article The recent view of TGD inspired theory of consciousness and quantum biology or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, November 29, 2025

About the construction of the scattering amplitudes using M8-H duality

In TGD, point-like particles are replaced with 3-surfaces and these in turn with the analogs of Bohr orbits. M8-H duality is the generalization of momentum-position duality and is now rather well understood (see this). It however remains a mere academic mathematical construct unless it can be used to achieve some practical goal. The construction of scattering amplitudes is the basic dream of TGD and M8-H duality gives hope of achieving this goal in terms of the TGD counterparts for the momentum space Feynman diagrams.

The notion of an exotic smooth structure, having an interpretation as an ordinary smooth structure with 3-D defects and possible only in 4-D space-time, is crucial. Fermions in H are free, but fermion pair creation is possible at the defects, at which fermion lines can turn backwards in time. Also a more general change of direction is possible. This makes the counterpart of fermionic Feynman diagrammatics extremely simple at the level of H. Only fermionic 2-vertices associated with 3-D geometric defects are needed. Fermionic interactions reduce to an 8-D Brownian motion in the induced classical fields, and the singularities of the space-time surfaces, at which the minimal surface property fails, define the locations of the vertices.

The interactions of two space-time surfaces, identified in the holography = holomorphy vision as 4-D generalized Bohr orbits, correspond geometrically to contact interactions at their intersections. If the Hamilton-Jacobi structures are the same, the intersections are 2-D string world sheets. The edges of these string world sheets would contain the vertices.

The challenge is to formulate this picture at M8 level by using a precise formulation of M8-H duality.

See the article About the construction of the scattering amplitudes using M8-H duality or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Friday, November 28, 2025

The observed 20 GeV gamma rays as evidence for galactic dark matter halo or something else?

I learned of a very interesting recent finding claiming evidence for galactic dark matter (see this). Gamma rays with 20 GeV energy have been detected. They could emerge from the decays of a particle with mass 40 GeV.

A possible TGD based interpretation could be as decay products of p-adically scaled up pions. TGD strongly suggests a scaled up copy of hadron physics with the scaling factor for the mass scale equal to 2^9 = 512, and there are indeed several indications for its existence. I have proposed (see this) that the transformation of the hadrons of M89 hadron physics, for which the mass scale is by a factor 512 higher than for the ordinary hadron physics, would be responsible for both the solar wind and radiation and would occur at the surface of the Sun by the so-called p-adic cooling (see this and this), which would involve a cascade reducing the p-adic mass scale of hadron physics by powers of two so that eventually the hadrons of the ordinary M107 hadron physics would emerge. Powers of 2 for mass scales are indeed favored by the p-adic length scale hypothesis. Mersenne primes are good candidates for the p-adic primes defining stable copies of hadron physics and also of the standard model.

The mass scale of the pion of M89 hadron physics would be by a factor 512 = 2^9 higher than for the pion of the ordinary hadron physics. In the recent case, the scaling factor of the ordinary pion mass giving a mass of 40 GeV would be about 11 per cent larger than 256 = 2^8. The pion with mass scale 2^8 mπ could appear at the first step of the p-adic cooling. Could the particle in question correspond to an unstable hadron physics with p ∼ 2^91?
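A quick arithmetic check of the scaling argument, using the ordinary charged-pion mass as the reference scale (my choice; the percentage depends slightly on which pion mass is used):

```python
m_pi = 0.13957    # ordinary charged-pion mass [GeV]
m_obs = 40.0      # mass of the hypothetical decaying particle [GeV]
ratio = m_obs / m_pi
print(round(ratio, 1), round(ratio / 2**8, 3))   # ≈ 286.6 and ≈ 1.12, i.e. roughly 2^8
print(round(2**9 * m_pi, 1))                     # ≈ 71.5 GeV: the M89 pion mass scale
```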

There is also considerable evidence for gamma rays with energy very near the electron mass. They could come from what I call electropions, predicted by TGD and having a mass slightly larger than 2×me (see this). There is also evidence for the leptopions assignable to the muon and the tau. In particular, there is evidence for tau pions in the galactic nucleus.

Electropions would be dark in a different sense than galactic dark matter: they would have a large value of the effective Planck constant heff, implying that their Compton length is scaled up from about one half of the electron Compton length to the size scale of atoms. Large values of heff are predicted at quantum criticality, implying long length scale quantum fluctuations; in this case the quantum criticality means the ability to overcome the Coulomb wall. This would explain why electropions have not been observed as elementary particles in, say, decays of weak bosons. This notion of darkness explains the well-known mysterious gradual disappearance of baryons during the cosmic evolution as a transformation of protons to dark protons with a very large gravitational Planck constant.
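A rough estimate of the heff/h needed for the Compton-length scaling just described; the atomic size of 1 Å used below is my own illustrative choice:

```python
lambda_e = 2.426e-12          # electron Compton wavelength [m]
lambda_pi_e = lambda_e / 2    # electropion Compton length, since m ≈ 2*m_e
atomic_size = 1.0e-10         # assumed atomic size scale [m]
print(f"heff/h ~ {atomic_size / lambda_pi_e:.0f}")   # ~ 80
```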

In the same way, the hadrons of M89 would be created at quantum criticality against the transformation of M107 hadrons to M89 hadrons, and heff/h = 512 would guarantee that at quantum criticality their Compton length is the same as for ordinary hadrons.

In TGD, a galactic dark matter halo would not be the explanation for the 20 GeV gamma rays since there would not be any dark matter halo. The monopole flux tubes arriving from the galactic nucleus at stars could produce 20 GeV gamma rays at the surfaces of the stars at the first step of the cooling process. This would create the illusion that the galactic halo exists. In TGD, the galactic dark matter would be dark energy assignable to extremely thin and massive objects, cosmic strings (see this), which are the key elements in the TGD view of the formation of galaxies and stars and explain the flat velocity spectrum without any additional assumptions. Galactic blackholes would emerge in collisions of cosmic strings, and both the electropions and the now discovered particles could relate to the decay of these cosmic strings.

See the chapter Dark Nuclear Physics and Condensed Matter.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, November 24, 2025

How to create monopole-flux loops?

Yesterday evening we had a very fruitful discussion with Marko Manninen about wormhole contacts and monopole flux loops. The question was the following:

How do wormhole contact pairs, which correspond to elementary particles, arise?

We pondered this a lot, but the final outcome remained vague and bothered me and I continued to ponder the problem during the night. Finally, I realized that the problem was structurally the same as another problem: how is the creation of fermion pairs possible in TGD even though spinor fields in H=M4×CP2 are free fields? The solution was also the same and possible only for 4-D spacetime surfaces.

Some background

I will summarize the basic facts first and then the results.

  1. A wormhole contact is a Euclidean spacetime region that connects two Minkowskian spacetime sheets. Wormhole contacts appear in two different types.
    1. One that arises when the sheets touch each other. This region is not stable because there is no net magnetic flux flowing from one sheet to the other.
    2. One with a monopole flux. One can talk about a very short piece of flux tube. Since the flux is conserved, it cannot arise when the sheets touch. On the other hand, the conservation of flux stabilizes the tube so that it cannot be split.

      How could monopole wormhole contacts arise? This is the basic question we pondered. During late-night reflections it became clear that the attempts that first came to my mind did not work.

  2. More facts.
    1. The boundary conditions of the field equations of TGD do not allow open flux tubes, i.e. cylinders with ends from which flux would escape into vacuum. The flux losses at the ends would compensate each other, but this is not enough. Local conservation is required and is not possible. Flux tubes must be closed loops in which monopole flux flows.
    2. This implies that wormhole contacts carrying monopole flux must appear in pairs. They can be visualized as two wormhole contacts, i.e. the magnetic flux flows from throat A1 to throat B1, from there to throat B2 on the lower sheet, and from there to throat A2 and further to throat A1 on the upper sheet. A closed loop, then. At least massive elementary particles would correspond to such loops. Point-like fermions would inhabit the throats.
    3. The magnetic flux corresponds to Kähler magnetic charge. It must be conserved as it flows along the tube. The total magnetic charge would be zero if open flux tubes were allowed. Here the hydrodynamic analogy to incompressible flow, which is mathematically very accurate, helps.
    4. But what happens in the case of closed flux tubes? Is the sum of the charges conserved for closed flux tubes? This would generalize the conservation of flux inside the tube. The flux for a closed flux tube would be analogous to an electric charge.

      Could one conclude that if a monopole flux tube breaks up into two flux tubes with fluxes n1 and n2, then the flux remains n=n1+n2?

      A reasonable half-guess is that the flux can be considered as some kind of conserved charge. This makes sense if the sign of the flux can be operationally defined. If a flux tube has some kind of chirality (the DNA helix is an example of this) or parity, then it determines the sign of the flux regardless of the position of the flux tube. Different chiralities could also be represented by flux tubes that are complex conjugates of each other with respect to the complex conjugation of CP2. This would correspond to charge conjugation. The holography = holomorphy hypothesis predicts this kind of geometric charge conjugation.

    5. In any case, even the creation of a single closed monopole flux tube seems impossible. So the alternative would be that they are created in pairs so that the fluxes are opposite. How would this happen?
Connection with a similar problem related to the creation of fermion pairs

Here emerges a connection to a similar problem regarding the fermion number.

  1. In TGD, the spinor fields are free. That is the only option. If a quartic term were added to the action, the result would be a non-renormalizable theory and thus a catastrophe.

    But is the creation of fermion pairs from a vacuum or from a classical induced field then possible at all? In QED it is, both when the em field is quantized and when it is classical.

  2. An intuitive picture of what happens has been clear since Dirac's time. The fermion line turns back in time when a pair is created. This picture generalizes to TGD. The V-shaped fermion line has an edge with an infinite acceleration.
  3. Now comes the connection with zero energy ontology. In TGD, the particle is geometrically replaced by a Bohr orbit of a 3-surface, and the infinite acceleration at the edge corresponds to the breaking of the holomorphy at the 3-surface. These singularities are a fundamental prediction and correspond to the mild non-determinism of holography and to the poles of analytic functions. These surfaces correspond also to interaction vertices.

    At the edge, the minimal surface property fails and the trace of the second fundamental form, which is the generalization of acceleration from a 1-D to a 4-D object, is infinite. Its CP2 part transforms under symmetries like the Higgs. The Higgs would be non-zero only at the vertices. The M4 part corresponds to the ordinary acceleration, and the particle's earthly pilgrimage would be geometrically analogous to an 8-D Brownian motion such that the vertices are special moments in its life since they correspond to conscious choices.

  4. A couple of years ago I realized that the edge singularity of a fermion line corresponds to the phenomenon of "exotic smooth structure". It is a smooth structure, which reduces to the standard smooth structure except for defects that are 3-surfaces: they are "edges" and correspond geometrically to particle vertices in TGD. Differentiability is broken.

    The exotic smooth structures only occur in dimension D=4. Pair creation and non-trivial fermionic dynamics are only possible in 4-D spacetime! A really bad problem of general relativity turns into a triumph in TGD.

Do also monopole flux tubes turn back in time?

Now we are very close to solving the original problem. So: how do monopole flux loops arise?

  1. The emergence of a pair formed by closed monopole flux tubes corresponds to a situation when a closed monopole flux tube coming from the geometric future turns back to the geometric future! More general reversals can occur and the emission of virtual particles at vertices corresponds to this.

    Therefore, a particle pair can, in a geometric sense, be created in the classical induced field. What happens to fermion lines also happens to 3-surfaces.

  2. This view generalizes: a closed monopole flux tube can break up, thus creating, for example, two monopole flux tubes and the total flux is preserved.
See the article Holography = holomorphy vision in relation to quantum criticality, hierarchy of Planck constants, and M8-H duality or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Sunday, November 23, 2025

How to realize Lorentz invariance for on-mass-shell states in M^8-H duality?

M8-H duality corresponds physically to a generalization of momentum-position duality due to the replacement of point-like particles with 3-surfaces whose orbits are slightly non-deterministic space-time surfaces analogous to Bohr orbits satisfying the holography = holomorphy principle at the level of H=M4× CP2. Mathematically, M8-H duality can be seen as a generalization of the geometric Langlands duality from dimension D=2 to dimension D=4.

Recently, considerable progress (see this) has occurred in the understanding of M8-H duality. This has meant in some sense a return to the roots and generalization of the earlier too narrow approach and the emergence of a sound physical interpretation. This has led to an exact solution of the duality in terms of local G2 invariance.

  1. A given Y4 ⊂ M8, having an interpretation as 8-D momentum space, is determined as the roots of an octonion analytic function f(o). The roots of Re(f) or Im(f) define 3-D holographic data for Y4, which can have a quaternionic tangent space or normal space and correspondingly Minkowskian or Euclidean signature with respect to the number theoretic metric defined by Re(o1 o2).
  2. The general solution of the octonionic holography is in terms of the action of local G2 transformations acting on the simplest solutions, which are pieces of M4, identified as the quaternions, or of its orthogonal complement E4, having quaternionic tangent resp. normal space. This picture leads to Feynman diagram type structures in which Minkowskian and Euclidean pieces are glued together at vertices at which the conditions Re(f)=0 and Im(f)=0 hold true simultaneously.
  3. The 4-D surfaces Y4 in M8, interpreted as the space of octonions, have an interpretation as representations of a dispersion relation, which is however not Lorentz invariant but only invariant under the 3-D rotation group. Is it possible to obtain a Lorentz invariant dispersion relation in a natural way at some 3-surfaces defining on-mass-shell states and having an interpretation in terms of hyperbolic 3-space H3? Can one assign a mass squared spectrum to a given Y4? How does this spectrum relate to the mass squared spectrum of the Dirac operator of H=M4× CP2? Does 8-D light-likeness fix this spectrum?
If the tangent spaces of Y4 are quaternionic, the condition Re(f)=0 or Im(f)=0 has as its solution the union ∪_{o_0} S^6(o_0) of 6-spheres with radius r_7(o_0). The intersection Y^3 = E^3(o_0) ∩ ∪_{o_0} S^6(o_0) = ∪_{o_0} S^2(o_0) defines the holographic data. For the 2-spheres S^2(o_0), the 3-momentum squared is constant but depends on the energy o_0 via a dispersion relation that is in general not Lorentz invariant.

M8-H duality suggests how to obtain the Lorentz invariant mass shell condition E^2 - p^2 = m^2.

  1. The modes of the Dirac equation in H (see this and this) are massless in the 8-D sense. This is a natural additional condition also in M8 and could define on-mass-shell states consistent with Lorentz invariance and distinguish them from the other points of Y4, which have an interpretation as the off-mass-shell momenta allowed by Y4 as a representation of a dispersion relation.
  2. 8-D masslessness corresponds in M8 to the condition o_0^2 - r_7^2 = 0, where r_7^2 is the counterpart of the CP2 mass squared as the eigenvalue of the CP2 spinor Laplacian. The additional condition o_0^2 - r_7^2 = 0 picks up a discrete set of values (o_0(r_7), r_7). The 4-D mass squared would be m_4^2 = r_7^2, and a discrete mass spectrum is predicted for a given f(o) and a given selection Re(f)=0 or Im(f)=0.
  3. An interesting question is whether the eigenvalue spectrum of CP2 spinor Laplacian is realized at the level of M8 as on-mass-shell states.
  4. A natural guess would be that the eigenvalue spectrum of CP2 spinor Laplacian is realized at the level of M8 as on-mass-shell states.

    The TGD based proposal (see this and this) for color confinement producing light states involves tachyonic states. These states would naturally correspond to 4-surfaces Y4 with Euclidean signature, and bound states would be formed by gluing the tachyonic and non-tachyonic states together into Feynman graph-like structures. Note that the on-mass-shell 2-spheres are in general different from those satisfying the conditions (Re(f), Im(f))=(0,0) proposed to define the vertices of the generalized Feynman graphs.


See the article Does M8-H duality reduce to local G2 symmetry? or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

How does TGD avoid the catastrophe caused by an observer-free Universe?

Gary Ehlenberger sent a link to a very interesting Quanta Magazine article titled "Cosmic Paradox Reveals the Awful Consequence of an Observer-Free Universe". The first two paragraphs of the article give a good view of the problem.

Tinkering at their desks with the mathematics of quantum space and time, physicists have discovered a puzzling conundrum. The arcane rules of quantum theory and gravity let them imagine many different kinds of universes in precise detail, enabling powerful thought experiments that in recent years have addressed long-standing mysteries swirling around black holes.

But when a group of researchers examined a universe intriguingly like our own in 2019, they found a paradox: The theoretical universe seemed to admit only a single possible state. It appeared so simple that its contents could be described without conveying even a single bit of data, not even a choice of a zero or a one. This result clashed with the fact that this type of universe should be capable of hosting black holes, stars, planets — and people. Yet all those rich details were nowhere to be seen.

To me this result is not terribly surprising. The paradox of heat death is a well-known intuitive way to state the problem. The outcome is the final catastrophe putting an end to the materialistic view. The catastrophe is also due to sticking to too simple mathematics based on real numbers, which have no internal anatomy, in accordance with materialism in which only magnitude matters. The Universe is much, much more complex. AdS/CFT brings in strings and extended particles but is not a realistic solution.

  1. One part of the problem is the completely wrong view of space-time as a single smooth manifold such as AdS. In TGD, space-time is a union of an arbitrarily large number of 4-D Bohr orbits of 3-surfaces in H=M4×CP2 obeying holography, which is almost deterministic: this is absolutely crucial for having fermion interactions for formally free fermions.

    Intersections of space-time surfaces as 2-D string world sheets give rise to the geometric aspect of interactions. H=M4×CP2 explains the symmetries and fields of the standard model and is mathematically the only possible choice: number theory-geometry duality at the level of physics and the existence of the twistor lift dictate this choice. The dynamics reduces to that of space-time surfaces. By general coordinate invariance, there are only four local degrees of freedom.

  2. There is a fractal hierarchy of size scales defining the size scales of the space-time sheets. The holography = holomorphy principle is a crucial part of the solution: it generalizes the holomorphy of string models and transforms extremely nonlinear classical field equations into algebraic equations. One obtains minimal surfaces irrespective of the classical action as long as it is general coordinate invariant and expressible in terms of the induced geometry (see this and this). One also obtains direct counterparts of the functional iteration hierarchies associated with 4-D analogs of 2-D fractals like the Mandelbrot fractal and Julia sets.
  3. Besides topology and algebraic geometry, also number theory brings in structure. Entire hierarchies of extensions of rationals emerge, and Galois groups appear as symmetry groups. Galois confinement is an attractive dynamical principle forcing, for instance, the total momenta to be rational valued.

    Classical number fields (reals, complex numbers, quaternions, octonions) emerge as an essential part of the dynamics via M8-H duality (see this). p-Adic number fields appear also, and the corresponding functional fields appear too.

    Space-time surfaces represent numbers both in the classical sense and as elements of function fields (see this, this, this).

  4. TGD also brings in consciousness, and one gets rid of the curse of materialism (see this). The Universe as the space of space-time surfaces becomes a Quantum Platonia consisting of classical and quantum states as mathematical objects. Quantum jumps between these quantum states make Quantum Platonia a conscious entity (or a union of conscious entities), which learns and remembers; its complexity and level of consciousness unavoidably increase, since the number of space-time surfaces more complex than a given surface is infinitely larger than the number of those which are simpler.
For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Tuesday, November 18, 2025

Mott Problem, Brownian motion, and TGD

I did not know that what is a basic prediction of TGD is known as an anomaly, the Mott problem (see this): if the particle emitted, say, in a nuclear decay has an approximately spherical wave function, why is it seen as a track in a bubble chamber? Why not as a set of dots corresponding to a sequence of localizations, that is, position measurements induced by the interaction with the atoms of the bubble chamber? Mott considered one possible solution to the problem.

The very existence of the Mott problem is one of the most direct supports for the zero energy ontology of TGD, which solves the paradox of quantum measurement theory. The basic physical entities are slightly non-deterministic Bohr orbits for particles identified as 3-surfaces, which however obey classical field equations. Quantum states are wave functions in the space of these 4-D Bohr orbits rather than in 3-space. Brownian motion provides direct evidence for the mild failure of classical determinism for these Bohr orbits. The edges of the Bohr orbits correspond to interaction vertices.

The key idea is simple. In the zero energy ontology, the localization does not occur at a point of 3-space, say around some atom, but at a given, slightly non-deterministic Bohr orbit of the particle. The track is the basic observable, not the 3-D position of the particle on the track. Localization in 3-spaces is replaced with localization in the space of Bohr orbits, "world of the classical worlds". The sequence of bubbles makes the Bohr orbit visible. The wave function in the space of Bohr orbits replaces the ordinary wave function and can be spherically symmetric.

See the article Some comments related to Zero Energy Ontology (ZEO) or the chapter Zero Energy Ontology.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, November 17, 2025

Support for the TGD view of Unidentified Aerial Phenomena

Sabine Hossenfelder talked about recent findings (see this) giving support for the reality of Unidentified Aerial Phenomena (UAPs), earlier known as UFOs.

The first findings are discussed in the article "Aligned multiple transient events in the first Palomar Sky Survey" by researchers at Nordita (see this). The researchers analyzed photos taken at Caltech in the first Palomar Sky Survey (1949-1957). Here is the abstract of their article.

Old, digitized astronomical images taken before the human spacefaring age offer a rare glimpse of the sky before the era of artificial satellites. In this paper, we present the first optical searches for artificial objects with high specular reflections near the Earth. We follow the method proposed in Villarroel et al. and use a transient sample drawn from Solano et al. We use images from the First Palomar Sky Survey to search for multiple (within a plate exposure) transients that, in addition to being point-like, are aligned along a narrow band. We provide a shortlist of the most promising candidate alignments, including one with ∼ 3.9σ statistical significance.

These aligned transients remain difficult to explain with known phenomena, even if rare optical ghosting producing point-like sources cannot be fully excluded at present. We explore remaining possibilities, including fast reflections from highly reflective objects in geosynchronous orbit, or emissions from artificial sources high above Earth's atmosphere. We also find a highly significant (∼ 22σ) deficit of POSS-I transients within Earth's shadow when compared with the theoretical hemispheric shadow coverage at 42,164 km altitude. The deficit is still present though at reduced significance (∼ 7.6σ) when a more realistic plate-based coverage is considered. This study should be viewed as an initial exploration into the potential of archival photographic surveys to reveal transient phenomena, and we hope it motivates more systematic searches across historical data sets.

The weekend known as the 1952 Washington flap was rich in UAPs. Five transients were observed during the previous week. There were also radar observations by air traffic controllers. Also unusual light phenomena (UAPs) were observed. Some observations involved 4, 5, and even 6 dots in a row. Very few transients were found in the Earth's shadow. The reflection of sunlight from a metallic object has been considered as a possible explanation. But who put it there? Could the transients be objects orbiting the Earth? The official interpretation was that atmospheric phenomena are in question.

The article "Transients in the Palomar Observatory Sky Survey (POSS-I) may be associated with nuclear phenomena and reports of unidentified anomalous phenomena" (see this) described AUPs associated with nuclear testing during the period 1851-1957.

Here is the abstract of the article.

Transient star-like objects of unknown origin have been identified in the first Palomar Observatory Sky Survey (POSS-I) conducted prior to the first artificial satellite. We tested speculative hypotheses that some transients are related to nuclear weapons testing or unidentified anomalous phenomena (UAP) reports. A dataset comprising daily data (11/19/49-4/28/57) regarding identified transients, nuclear testing, and UAP reports was created (n = 2,718 days).

Results revealed significant (p = .008) associations between nuclear testing and observed transients, with transients 45 percent more likely on dates within ±1 day of nuclear testing. For days on which at least one transient was identified, significant associations were noted between total number of transients and total number of independent UAP reports per date (p = 0.015).

For every additional UAP reported on a given date, there was an 8.5 percent increase in the number of transients identified. Small but significant (p = .008) associations between nuclear testing and the number of UAP reports were also noted. Findings suggest associations beyond chance between occurrence of transients and both nuclear testing and UAP reports. These findings may help elucidate the nature of POSS-I transients and strengthen empirical support for the UAP phenomenon.

Correlations of the nuclear transients with visual observations (UAPs), possible only during nighttime, are reported. This would conform with the assumption that the transients are equally probable on the dayside and the nightside.

I have managed to spoil my academic reputation in many ways. To take seriously unidentified aerial phenomena (UAPs), formerly known as UFOs, is one of these ways. So, what would be the TGD view of these objects?

  1. UAPs can move with huge velocities: this suggests that they are not objects but processes. The spot of light on the ceiling caused by a light beam can move arbitrarily fast as you change the direction of the beam.

  2. The light balls including ball lightnings, made possible by the TGD view of quantum physics predicting an entire hierarchy of effective Planck constants, would be primitive life forms, plasmoids and predecessors of chemical life. Ions of cold plasma would be their key aspect as it is also in the case of ordinary biological life.

  3. Researchers from NASA claim to have a lot of observations about these kinds of entities in the ionosphere. I wrote an article about these findings (see this).

  4. A basic prediction of TGD is the existence of phases of ordinary matter characterized by a scaled up value of the effective Planck constant heff, behaving like dark matter but not identifiable as galactic dark matter. Their presence however explains why baryonic matter seems to disappear during the cosmic evolution: protons would transform to "dark" protons at the magnetic bodies. This disappearance would be a signature of evolution increasing the algebraic complexity of space-time surfaces and thereby heff.

  5. "Dark" particles have scaled up their Compton lengths and times. This makes possible quantum coherence in long scales and could explain the mysterious coherence of living matter impossible to understand in the standard biochemistry approach. Without this coherence we would be sacks of water with some chemicals and could not climb in trees or write poems.

  6. Metabolism is the basic aspect of life. At the fundamental level, dark photons with energies in the visible range could serve as metabolic energy. This would be the case also in living matter at the fundamental level: chemical energy storage would have emerged only when plasmoid life led to chemical life. Pollack effect and its generalization would be behind basic mechanisms of life and make it possible to get metabolic energy from the solar light. This could explain many findings about UAPs.

Consider now the findings from this perspective.

  1. The light-balls in the POSS survey described in the first article are reported to mostly appear at the day side of the Earth. The Sun would make this possible by providing metabolic energy via the Pollack effect making it possible to increase the value of h for protons to heff> h.
  2. UAPs as light balls tend to be associated with the lines of tectonic activity. The reason would be that tectonic activity in the ground liberates energy as dark photons serving as metabolic energy for the plasmoids. Large values of heff are associated with quantum critical systems, and tectonic activity would be such a phenomenon.
  3. Nuclear explosions liberate huge amounts of energy and can induce tectonic activity liberating dark photons serving as a food for the plasmoids of the atmosphere. These plasmoids would not need solar radiation as metabolic energy so that they could occur also at the night-side of the Earth. This prediction is testable.

    The nuclear transients can occur also on the nightside, as is clear from the fact (see this) that UAPs, observable only on the nightside, were found to correlate with the transients and could be identifiable as transients. I did not find any direct information about whether the nuclear transients prefer to occur on the dayside.

  4. NASA researchers have reported (see this) that the plasmoids in the ionosphere look like living entities, which tend to appear in groups. Plasmoids could organize around magnetic or electric monopole flux tubes, which are basic objects in the TGD Universe appearing in all scales: even galaxies and now also stars around the Milky Way nucleus are found to form such linear sequences.
See the article About long range electromagnetic quantum coherence in TGD Universe or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.