https://matpitka.blogspot.com/2025/12/

Thursday, December 25, 2025

The findings of RHIC about quark gluon plasma from the TGD point of view

The study of quark gluon plasma (QGP) at RHIC has revealed many surprises.
  1. Jet quenching means that the jets predicted by QCD lose energy much faster than expected. This would be due to strong interactions with the quark gluon plasma, implying dissipation. In QCD, this interaction is modelled in terms of collisions of quarks with the quark gluon plasma formed by quarks and gluons.
  2. The almost perfect fluid behavior was totally unexpected. This hydrodynamic flow is known as elliptic flow. A further surprise was that heavy quarks also participate in the elliptic flow. This is like boulders flowing in a river.
  3. Also light ions create the quark gluon plasma. QGP, or whatever it is, is created even in collisions of photons with heavy ions.
  4. The basic questions concern the critical temperature and critical collision energy per nucleon at which the transition to QGP occurs. There is no consensus but the proposal is that 19.6 GeV collision energy could be a critical point. There is however a bumpy structure also below this critical point.
What can be said about these findings in the TGD framework?
  1. The counterpart of jet quenching would be conformal dissipation, or equivalently p-adic dissipation, occurring for a mass squared scale identifiable as conformal weight rather than energy. The p-adic temperature Tp, which depends logarithmically on the p-adic mass scale, has a discrete spectrum and would decrease in a stepwise manner in the p-adic cooling. Tp is naturally identifiable as the temperature of the counterpart of QGP and also has an interpretation as a Hagedorn temperature.
  2. The p-adic length scale hypothesis suggests that there is an entire discrete hierarchy of critical temperatures rather than only a single critical temperature. These temperatures would come as logarithms of p-adic mass squared scales proportional to 2^k (see the numerical sketch after this list).
  3. In the TGD framework, the large values of heff associated with quantum criticality and implying long scale quantum coherence could explain the perfect fluid behavior in terms of long range correlations, which are typical for hydrodynamics. Recall that at the classical level TGD is essentially a hydrodynamical theory since the field equations reduce to conservation laws for the charges associated with the isometries of H.
  4. The TGD based explanation for the boulders flowing in the river would be that for the TGD analog of QGP, the induced Dirac equation in X4 implies that both leptons and quarks behave like massless particles. Masses emerge only in the hadronic initial and final states constructed as modes of the H Dirac equation.
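
A minimal numerical sketch of the hierarchy implied by the p-adic length scale hypothesis, assuming p ≈ 2^k with k prime and mass scales proportional to 2^(-k/2); the choice of the hadronic reference scale k = 107 with a ~1 GeV mass scale is only an assumed normalization for illustration.

```python
# Sketch of the discrete hierarchy of p-adic mass scales / critical temperatures.
# Assumptions: p ~ 2^k (p-adic length scale hypothesis), mass scale ~ p^(-1/2) ~ 2^(-k/2),
# normalized so that the hadronic prime M_107 = 2^107 - 1 corresponds to ~1 GeV.
K_REF, M_REF_GEV = 107, 1.0

def padic_mass_scale_GeV(k):
    """Mass scale for the p-adic prime p ~ 2^k relative to the assumed reference."""
    return M_REF_GEV * 2 ** ((K_REF - k) / 2)

for k in (89, 97, 103, 107, 109, 113):  # a few primes near the hadronic scale
    print(f"k = {k:3d}  mass scale ~ {padic_mass_scale_GeV(k):9.3e} GeV  log2(m^2/m_ref^2) = {K_REF - k}")
```
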
See the article The findings of RHIC about quark gluon plasma from the TGD point of view.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Friday, December 19, 2025

Gemini assisted discussion about microtubules and Carbon Nanotubes

Together with Tuomas Sorakivi, I had a Gemini assisted discussion about carbon nanotubes (CNTs) (see this) and microtubules (MTs) (see this). Both are cylindrical structures, although the MTs are much more complex. Google's language model demonstrated convincingly its power as a tool allowing us to get the information needed to test new ideas. Of course, language models make mistakes, so they cannot be used as authorities.

Microtubules (MTs) from the TGD point of view

  1. An MT consists of tubulins with 2 basic conformations, which Hameroff proposed to define a classical bit. This is quite possible. The energy difference between the conformations is .043 eV, which corresponds to the typical energy scale of the membrane potential, around .05 eV.
  2. Around the turn of the millennium, I proposed that the TGD view of space-time allows us to consider the possibility that MTs could act as quantum antennas and receive and send signals (see this). Later the findings of Blackman and others (see this) led to the notion that the hierarchy of effective Planck constants heff allows long scale quantum coherence at the field body of the system. For instance, communications from an MT to its magnetic body (MB) using large heff photons, which behave like dark photons, can be considered. The MB could also control the MT.
  3. Some MTs could be quantum-critical systems in long, even astrophysical, length scales. In some cases (for example, MTs associated with cilia at the cell surface) the MT length varies all the time, and the MTs are accompanied by a longitudinal electric field, which in the TGD framework is a signature of large heff phases that can be generated by the Pollack effect.

    Since the increase of heff requires energy, quantum criticality requires a metabolic energy input, and GTP molecules at the surface of the MT, as counterparts of ATP, would provide the metabolic energy. The fundamental frequency of the transmitted radiation thus varies, so that frequency-modulated signals are produced. Frequency modulation is the basic representation of information in the TGD model of biology and is central in the TGD based view of the nerve pulse and EEG (see this).

    Also the cell membrane would produce frequency-modulated Josephson radiation. The variation of the membrane potential would induce the frequency modulation. The MB of the cell would receive sensory information in this way as Josephson radiation would transform to a sequence of cyclotron resonance pulses.

  4. The length of an MT varies from 20 nm to 25 micrometers: three orders of magnitude. The energies of the antenna photons would be in the range 62.5 eV - .05 eV (see the sketch after this list). The lower limit of .05 eV corresponds to the typical Josephson energy associated with the cell membrane voltage (see this). Communication from and control of cell (axonal) membranes is suggestive. This could be essential for cell motility based on ciliary dynamics.

    At the upper wavelength limit, the energies could correspond to the energy difference between X-OH and X-O- + dark proton at the monopole flux tube, defining the two states of a topological qubit. Could the MT produce photons that, replacing solar light in the ordinary Pollack effect, would induce the Pollack effect, kick the protons of -OH to the magnetic body, and change the value of the topological qubit? The energy range covers frequencies from the infrared up to the upper limit of UV; the lower limit of X-ray frequencies is 100 eV. These energies could induce molecular transitions. As found, the lower limit also corresponds to the energy difference between the 2 tubulin conformations.

  5. After the discussion, the following question popped up: how do the in vivo and in vitro states of biomatter differ? Metabolism is of course the answer. In vivo, a part of the biosystem receives the metabolic energy needed to maintain the heff distribution, because heff tends to decrease all the time. This is not the case in vitro, and this explains the differences. In the case of DNA and RNA, the permanent negative charge means in the TGD framework that dark protons reside stably at the monopole flux tubes. However, metabolic energy is needed to preserve the charge separation making possible the electric field. The same applies in the case of the cell membrane. The TGD based view of the basic information molecules and genetic code is discussed in detail in the article (see this).
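
A minimal numerical check of the antenna photon energies quoted in item 4, assuming that the photon wavelength equals the MT length and using E[eV] ≈ 1240 eV·nm / λ[nm]; the identification of the wavelength with the MT length is the assumption.

```python
HC_EV_NM = 1239.84  # h*c in eV·nm

def photon_energy_eV(wavelength_nm):
    """Photon energy for a given wavelength in nanometers."""
    return HC_EV_NM / wavelength_nm

print(photon_energy_eV(20.0))       # ~62 eV, near the quoted upper limit of 62.5 eV
print(photon_energy_eV(25_000.0))   # ~0.05 eV, the Josephson/membrane energy scale
print(HC_EV_NM / 0.043)             # ~29,000 nm: wavelength matching the 0.043 eV tubulin gap
```
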
Carbon nanotubes (CNTs) in the electromagnetic field of a Tesla coil as a candidate for a living computer?

CNTs are hexagonal lattices with a helical structure stable at room temperature. The hexagons form a helix and the pitch angle of the helix characterizes the helical structure. The notion of chirality makes sense and there is an analogy with DNA.

First some background.

  1. TGD allows us to consider the possibility of hybrids of classical and quantum computers, transforming them into genuinely intelligent, living and conscious entities (see this and this). In the proposed model, the states of a topological qubit are realized as the two states defined by the -OH side group and by -O- plus a dark proton (with large heff) at the magnetic body of the system.

    This dynamical topological qubit would accompany the ordinary bit. There are also more general identifications of topological qubits, and cold plasmas are excellent candidates for the realization of dark qubits (see this). Dark protons could be replaced with dark metal ions: the findings of Blackman indeed support the possibility of dark Ca2+ ions and led to the hypothesis of the large heff hierarchy as phases of ordinary matter behaving like dark matter.

  2. The basic prediction of TGD is that the dynamics of space-time surfaces, as analogs of Bohr orbits for particles identified as 3-surfaces, is slightly non-deterministic: this leads to what I call zero energy ontology (ZEO) (see this). This is true also for the topological qubits, so that the temporal bit sequences defined by them are non-deterministic without a violation of the classical field equations. Temporal bit sequences are represented as Bohr orbit-like space-time surfaces, and bits would correspond to the 3-D loci of non-determinism. Superpositions of these Bohr orbits as analogs of computer programs are possible and would accompany the classical program. A similar situation would prevail at the level of DNA and RNA (see this and this).
Could -OH side groups be added to a CNT somehow to build a topological quantum computer, and could the CNT also give rise to a counterpart of the ordinary bit as a transistor? This would give rise to a conscious computer (see this). Here Google Gemini came to the rescue.
  1. In the CNT lattice, 3 valence bonds emanate from each C atom. The remaining electron is delocalized over a hexagon forming an aromatic ring. sp2 hybridization, where s and p refer to electron orbitals, occurs. The energy spectrum of the π electrons is in the range from 1 meV to a few eV.
  2. The problem is that there are only C atoms present in CNT: -OH side groups must be created. Google informed us that this is possible. If the π electron is localized, the -OH can be placed at the resulting defect.
  3. This alone does not yet give a hybrid of quantum and classical computers. How to get ordinary bits as partners for these topological qubits? A transistor provides the standard realization of a classical bit. Are CNT transistors possible? And again Google helped us: CNT transistors (CNFETs) represent a possible future technology and define a basic research area in electronics!
The discussion led to the idea that CNTs could give rise to conscious computers along the lines discussed in (see this). A week later I realized that the phenomenon of Teslaphoresis, which I had discussed about 9 years earlier from the TGD point of view (see this), is highly encouraging in this respect.

About 9 years ago I learned about the mysterious looking self-organization of CNTs in the electromagnetic fields of Tesla coils (see this) and proposed a TGD based model for it (see this). In Teslaphoresis, CNTs self-organize and tend to align in parallel in the oscillating electromagnetic field of a Tesla coil over length scales of order 30 cm, much longer than expected on the basis of standard physics. This unexpected self-organization of the CNTs brings to mind microtubules.

  1. For a believer in standard physics, Tesla coils (see this) are a mere entertainment tool. From the TGD point of view they might be much more: a primitive life form. Google Gemini informs that a Tesla coil is a resonant transformer circuit producing extremely high-voltage, high-frequency alternating current that creates spectacular lightning-like electrical arcs and demonstrates wireless energy transfer and high-frequency phenomena like X-rays and phosphorescence. It works by using coupled coils, capacitors, and spark gaps to amplify the voltage dramatically, often to over a million volts, which corresponds to the mass scale of the electron.

    The electricity generated by the Tesla coil travels over the skin without harm and lights up bulbs wirelessly. Standard physics explains this in terms of the high frequency. In the TGD framework, the frequency would be very low but the energy E = hem f would be high and could cause the exotic looking remote effects by energy resonance with ordinary matter with the standard value of heff (see the sketch after this list).

  2. Tesla coils carry both classical magnetic fields and electric fields. In the TGD framework, the self-organization suggests a long range quantum coherence in length scales of order 30 cm. In TGD, one can speak of the electric body of the system characterized by a large value of electric Planck constant hem and plasma phase (see this). Therefore the Tesla coil could be an essential element in making the system a macroscopically quantum coherent system.
  3. In the TGD framework, this suggests that a large value of heff = hem, proportional to the electric field strength (see this), characterizes the electrons and makes it possible for them to have long wavelengths. Also dark protons at the magnetic body of the system would be present and could be characterized by the gravitational Planck constant ℏgr of the Earth. The Pollack effect would transform protons to dark protons and generate a negative charge. This would make Tesla coils analogous to charged biological systems like DNA, microtubules and cells carrying strong electric fields.

    In the absence of a metabolic energy feed, the values of heff for particles tend to decrease. The electric fields require a charge separation, and a permanent negative charge is a direct signature of the presence of a condensate of dark protons at the magnetic body. A metabolic energy feed is required but would be used to preserve the electric field rather than to repeatedly kick the protons back to the magnetic body. This mechanism would make DNA and RNA, with their constant linear charge density, completely unique information molecules (see this). Something analogous would happen in the case of the Tesla coil and its field bodies.
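
A small sketch of the energy resonance idea mentioned in the first item above: the dark photon energy E = heff f can be large even at a low frequency if heff/h is large enough. The target energies used below (a metabolic-scale 0.5 eV and a visible-range 2 eV quantum) are illustrative assumptions, not values from the text.

```python
H_PLANCK_EV_S = 4.135667e-15  # Planck constant in eV*s

def required_heff_ratio(f_hz, target_energy_eV):
    """heff/h needed for a photon of frequency f_hz to carry target_energy_eV."""
    return target_energy_eV / (H_PLANCK_EV_S * f_hz)

for f_hz in (50.0, 1.0e5, 1.0e6):          # AC supply, ~100 kHz, 1 MHz
    for energy_eV in (0.5, 2.0):           # metabolic quantum, visible photon (assumed targets)
        ratio = required_heff_ratio(f_hz, energy_eV)
        print(f"f = {f_hz:9.0f} Hz, E = {energy_eV} eV  ->  heff/h ~ {ratio:.2e}")
```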

The consideration of the Tesla coil from the perspective of TGD could stimulate some ideas about how one might build living computer like systems using Tesla coils and CNTs.
  1. The large charges in the capacitors C1 and C2 generate so strong electric fields that dielectric breakdown occurs in both of them. In the TGD framework (see this), the strong electric field is accompanied by dark electrons with a very large value of heff = hem, proportional to the electric field at the surface of the capacitor. Also dark protons at the magnetic body of the Earth could be generated. The long range of quantum coherence could explain the strange effects observed in length scales much longer than the size scale of the system and also the energy transfer by radiation over long distances.
  2. Could the capacitor pair act as a Josephson junction and generate oscillating non-dissipative Josephson currents generating Josephson radiation? This would mean an analogy with the cell membrane as it is modelled in TGD (see this).

    In the TGD based model of EEG, the Josephson radiation mediates information to the magnetic body of the system, and the cyclotron frequencies of dark ions in the "endogenous" magnetic field Bend ∼ 2BE/5, where BE = .5 Gauss, assignable to the monopole flux tube loops mediating the Earth's gravitational field, are favoured. These frequencies are in the EEG range and correspond to resonance frequencies. The AC supply frequency, which for Tesla coils is typically 50 Hz, corresponds to the cyclotron frequency of a lithium ion.

    One can ask whether the dielectric breakdowns could be analogs of nerve pulses. Note however that the nerve pulse is generated when the membrane potential is below a threshold rather than above it.

  3. Radio waves with frequencies typically between 40 kHz and a few MHz are generated, and energy and information are transferred wirelessly. An interesting possibility is that the photons with the AC frequency are dark and have the same energy as the ordinary radio wave photons. This requires hem fAC = h f1, i.e. hem/h = f1/fAC. Already the radio wavelength is rather long: for f1 = 1 MHz the wavelength would be λ1 = 300 m. The presence of the gravitational magnetic body of the Earth suggests the presence of dark photons with energies that can be even in the visible range.

    The scaled-up wavelength for fAC = 50 Hz would be λAC = 6000 km, to be compared with the Earth's radius RE = 6,371 km. Dark photons with a wavelength of the order of the Earth's radius and the energy of radio wave photons might realize the dream of Tesla about effective energy transfer in the size scale of the Earth (see the sketch after this list). The energy of these photons would be about 10^-8 eV and rather small. For dark photons producing biophotons the energy would be in the visible and UV range.

  4. The lighting of the bulbs is believed to be caused by the strong electric fields associated with the Tesla coil. In fluorescent bulbs (ionization), the strong radio frequency electric fields generated by the Tesla coil accelerate electrons, which collide with gas atoms, ionize them and create a plasma. Photons emitted as the excited states decay generate the light. In the case of incandescent bulbs (induced current), the changing electric and magnetic fields induce currents directly in the bulb's tungsten filament.

    Could the large value of heff, possibly associated with the "massless extremals" (see this) representing the radiation, make possible dissipationless acceleration in a scaled-up length scale, increasing the energies achieved in the acceleration? Also in the case of electrolytes one encounters the problem of how relatively weak electric fields can induce ionization, and the TGD proposal for the mechanism is the same as in the recent case.

  5. In the TGD inspired biology, dark EEG photons transfer information. Could Tesla coils plus CNTs some day give rise to conscious computers forming networks that use dark AC photons to communicate? Could the biosphere be doing this already?
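
A numerical sketch of the figures quoted in items 2 and 3 above, using λ = c/f for wavelengths, E = h f for ordinary photon energies, and the non-relativistic cyclotron formula f_c = qB/(2πm) with Bend = 0.2 Gauss; these are order-of-magnitude checks, and the ion choices are the ones mentioned in the text.

```python
import math

C_M_S = 2.998e8          # speed of light, m/s
H_EV_S = 4.1357e-15      # Planck constant, eV*s
E_CHARGE_C = 1.602e-19   # elementary charge, C
U_KG = 1.6605e-27        # atomic mass unit, kg
B_END_T = 0.2e-4         # "endogenous" field Bend ~ 0.2 Gauss, in tesla

def wavelength_m(f_hz): return C_M_S / f_hz
def photon_energy_eV(f_hz): return H_EV_S * f_hz
def cyclotron_hz(charge_units, mass_u, B=B_END_T):
    return charge_units * E_CHARGE_C * B / (2 * math.pi * mass_u * U_KG)

print(wavelength_m(1e6))           # ~300 m for 1 MHz radio waves
print(wavelength_m(50.0) / 1e3)    # ~6000 km for 50 Hz, of the order of the Earth's radius
print(photon_energy_eV(1e6))       # ~4e-9 eV: the "rather small" radio photon energy
print(cyclotron_hz(1, 7))          # Li+ (A=7): ~44 Hz, of the order of the 50 Hz AC frequency
print(cyclotron_hz(2, 40))         # Ca2+ (A=40): ~15 Hz, the Blackman frequency
```
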
To sum up, this discussion also demonstrated that language models can be extremely useful tools, allowing us to instantly get information that would be impossible to find by the usual means. It is a pity that they can also be used to produce massive amounts of pseudoscience.

See the article Tesla phoresis and TGD or the chapter About Concrete Realization of Remote Metabolism.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, December 18, 2025

How the genetic code is realized at the level of the magnetic body of DNA double strand?

Suppose that the proposed view of ITT realized at the level of the magnetic body (MB) of DNA is correct: dark genetic codons, obtained as an induction of ITT from the MB of DNA, would have the DNA or RNA double strand as their chemical counterpart. How does the more precise view of ITT affect the earlier model discussed in (see this)?

First a couple of facts.

  1. The numbers of (T,I,O) per vertex should be (20,12,10) if the T-I interface always involves O. Therefore DNA codons also correspond to faces of O:s, and DNA sequences can be identified as sequences of faces of O:s.
  2. 10 DNA codons define the shortest DNA sequence for which the twist is a full multiple of 2π (see the sketch below). One should have a sequence of triangles representing genetic codons, and each codon should correspond to a face of I and to a 3-chord of a fixed Hamiltonian cycle defining a bioharmony.
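
A one-line check of the 10-codon claim, assuming the canonical B-DNA twist of about 36 degrees per base pair (about 10 base pairs per turn), so that a codon of 3 base pairs twists by 108 degrees.

```python
TWIST_PER_BP_DEG = 36.0                      # assumed B-DNA twist per base pair
TWIST_PER_CODON_DEG = 3 * TWIST_PER_BP_DEG   # 108 degrees per codon

for n_codons in range(1, 11):
    total_deg = n_codons * TWIST_PER_CODON_DEG
    # the first n for which the twist is a full multiple of 360 degrees is n = 10 (1080 = 3*360)
    print(n_codons, total_deg, total_deg % 360 == 0)
```
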
This raises the following questions.
  1. Does the sequence of 10 O:s correspond to a single ITT vertex and does DNA correspond to a sequence of ITT vertices such that each vertex corresponds to an O and associated 20 T:s and 12 I:s?
  2. Do the two DNA strands correspond to separate dark strands, or does a single dark strand correspond to both of them, as suggested by the fact that the DNA strands are conjugates of each other? The latest proposal assumes the latter option: assume this. A single O has two disjoint (opposite) triangular faces with 3+3 vertices. Could these two faces correspond to a DNA codon and its conjugate?
  3. This sequence of 10 O:s corresponds to a sequence of 12 I:s. 2 I:s would be "empty" and would not correspond to dark proton triplets: what does this mean? Does this mean that all vertices of the I:s and T:s carry ordinary protons and that the activation of a codon transforms the ordinary protons of the face to a dark proton triplet? I have considered a possible interpretation of this. In the state in which DNA is opened (transcription), the 2 codons would become active and correspond to dark proton triplets.
  4. What distinguishes between I and T type active codons? When is the dark proton triplet of T type and when is it of I type? Could the presence of the Hamiltonian cycle, the assignment of 3-chords to the faces, and the resonance interaction allow us to understand this? Does the 3-chord assigned to the face determine whether the dark proton triplet belongs to the T or I type Hamiltonian cycle? Is there some symmetry breaking mechanism selecting from the T type codons the one that is translated, while the remaining ones act as stop codons? Could the presence of an I or T type Hamiltonian cycle in a given I or T determine whether it can define an active codon and whether the associated ordinary proton triplet can be transformed to a dark one?

    The cyclotron frequencies assignable to T type codons are different from those assignable to I type codons if the frequency ratio for two subsequent vertices of the cycle is 3/2 for the Hamilton cycle at I in the Pythagorean model.

    Note that the basic problem of the Pythagorean model of harmony (known already to Pythagoras) is that the full Hamiltonian cycle, involving 12 frequency scalings by the factor 3/2, does not give quite precisely a full multiple of octaves. One must allow the irrational frequency scaling 2^(1/12) of the well-tempered 12-note scale to get rid of the problem (see the sketch after this list). This might relate to the symmetry breaking.

    For a tetrahedron with 4 vertices the frequency ratio should also be such that the cycle spans a multiple of octaves. This is not possible for rational scalings. In any case, the I and T options are not consistent, and this suggests that the 3-chords select between the I and T options. The chords dictated by the character of the Hamiltonian cycle would select whether the face is of type I or T. The presence of the Hamiltonian cycle would be necessary for the transformation of the ordinary proton triplets to dark proton triplets, and only the I or T type cycle can be realized.

    In the standard realization of the code there are 3 stop codons, which are transcribed to mRNA but are not translated to amino-acids. There are 4 codons of type T. There should be a symmetry breaking in the sense that 3 of them are not translated. This could be due to the failure of the 3-chord resonance conditions, so that there would be no tRNAs with the required resonance frequency triplets. Only a single tetrahedral codon would be translated in the standard realization of the code. This model also allows deviations from the standard realization of the code.
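
A quick numerical illustration of the mismatch mentioned above: twelve pure fifths (3/2)^12 overshoot seven octaves 2^7 by the Pythagorean comma, and the equal-tempered fifth 2^(7/12) removes the mismatch by construction.

```python
twelve_fifths = (3 / 2) ** 12          # ~129.746
seven_octaves = 2 ** 7                 # 128
print(twelve_fifths / seven_octaves)   # ~1.01364: the Pythagorean comma

# Equal temperament: the fifth is flattened to 2**(7/12) so that 12 fifths close exactly on 7 octaves.
tempered_fifth = 2 ** (7 / 12)         # ~1.49831 instead of 1.5
print(tempered_fifth ** 12 / 2 ** 7)   # 1.0 up to floating point rounding
```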

Pollack effect and ATP→ ADP+Pi transformation

The molecules XP, where X ∈ {A,T,C,G} denotes a DNA nucleotide, are the basic building blocks of DNA. The molecules XP are stable, unlike the more complex molecules ATP, ADP and GTP, GDP, which involve 2 or 3 phosphate ions. The latter molecules are essential for metabolism and appear as carriers of metabolic energy, assigned in the TGD view to the dark protons at the magnetic body associated with the molecule. What distinguishes them from the mononucleotides appearing in DNA and RNA?

We talked with Ville-Einari Saari (a member of our Zoom group) about whether it might be possible to build stable negentropic systems with a large Planck constant heff. Without any stabilizing mechanism, large heff systems are unstable against the decrease of heff because their energies increase with heff, so as free systems they require a continuous energy input and only flow equilibrium is possible. This is the case for XDP and XTP, and this makes it possible for ADP and GTP to transfer metabolic energy.

In water, the Pollack effect is a fundamental process and produces dark protons that transform into ordinary ones in an attosecond time scale. This expectation comes from the observation of exotic phases of water with effective stoichiometry H1.5O having an attosecond lifetime. The explanation is that a phase transition, in which every fourth proton becomes a dark proton at monopole flux tubes, takes place under an external energy feed. The negatively charged exclusion zone (EZ) created in the Pollack effect by radiation is an example of this effect. The essential prerequisite for the Pollack effect is an external energy feed, and TGD has led to various generalizations of the Pollack effect. In particular, the formation of biomolecules generates binding energy, and this could stabilize the dark phase; cold plasmas are excellent candidates for carriers of stable dark phases.

An illustrative example is provided by the transformation of chemical energy to a usable form in the transition ATP → ADP + Pi, where Pi is inorganic phosphate. This process occurs spontaneously. The reverse process requires a metabolic energy input, and mitochondria are specialized to produce ATP from ADP. The process ADP → ATP → ... can be seen as a kind of karmic cycle.

  1. The phosphorus P appearing in the ATP and ADP ions is organic. It is not clear what this really means, and biologists argue about a mysterious high energy phosphate bond which would carry the metabolic energy to its final uses as ATP transforms back to ADP + Pi. In the TGD framework, the interpretation is that ATP and also ADP involve a dark proton at the MB that neutralizes the negatively charged system and is generated by the generalization of the Pollack effect in the formation of ATP or ADP.
  2. The conversion of the chemical energy into a usable form occurs in the mitochondria in a biochemical machine that resembles a rotating turbine of a power plant. 3 ATP are produced from three ADP in one revolution of the turbine. This strongly suggests that a precursor of the dark genetic codon as a dark proton triplet is involved.

    Google informs that the lifespan of the ATP varies enormously: when the environment needs energy, its lifespan is shortened. In vivo it varies from a few seconds to about 100 seconds whereas in vitro ATP can be almost stable.

What about DNA and RNA?
  1. DNA and RNA have a stable negative charge (as Google informs): there is a negative charge of 3 units per codon. A natural guess is that it corresponds to the exclusion zone (EZ) of the Pollack effect. This suggests that there must be a stable positive charge in the form of dark proton triplets at the magnetic body associated with the DNA, and the proposal is that these triplets define dark codons. What stabilizes the negative charge of DNA and therefore also the dark protons, and makes the negentropic state stable?
  2. Bound states are formed between phosphates and DNA nucleotides. If their chemical binding energy is so high that the total binding energy, which is reduced by the energy of the dark proton, remains positive, the state is stable. I have suggested earlier (see this) that the formation of biomolecules as bound states can stabilize the dark protons, so the creation of biomolecules would also produce negentropy at the magnetic body. In fact, the formation of biomolecules as bound states during the biological evolution would have generated the dark protons at the monopole flux tubes of their magnetic bodies.

    To sum up, negentropic states can be stabilized in this way and do not require a constant input of metabolic energy to maintain dark heff in the sense of flow equilibrium. DNA and RNA would be completely exceptional bio-molecules in this respect and would fully deserve the name information molecule.

Does the presence of ITT at the MB reveal itself in the structure of DNA and the surrounding water?

Does the presence of ITT at the MB of DNA reveal itself in the structure of DNA and the surrounding water? How does the presence of O:s, T:s and I:s at the MB reflect itself in the properties of chemical DNA and possibly of water? Could the structure of water around DNA reflect the projection of the hyperbolic tessellation to the 3-D Euclidean space E3?

Do the octahedra of the field body have any counterpart in the nearby environment of DNA?

  1. Here Google tells us that the water around DNA indeed involves octahedral structures besides the tetrahedral structures which are generally present. They occur in the form of hexahydrated metal cations, such as [Mg(H2O)6]2+ with a positive charge of 2 units. Could Mg2+ ions, being bosons, form Bose-Einstein condensate-like states? The 6 water molecules reside at the 6 vertices of O, and one can ask whether its two opposite disjoint faces could correspond to dark codons. Mg2+ ions giving rise to a Bose-Einstein condensate could give rise to quantum coherence at the level of ordinary DNA and make possible the simultaneous generation of 2 dark proton triplets by the Pollack effect.
  2. These octahedral complexes are commonly found in the major groove or the phosphate backbone region of the DNA, where they are thought to shield the negative charges and stabilize the overall structure. This assumption is natural also in the TGD based view. Only 15 percent of the Mg2+ ions are estimated to touch the phosphate oxygens directly. They would form a kind of cloud, which conforms with the idea that they serve as stabilizers. That they accompany the vertices of the octahedron conforms with the idea that the vertices involve negative charges created as protons are transformed to dark protons.
  3. Mg2+ ions screen 88-89 percent of the negative DNA charge. If one can assign an octahedron of this kind, with a net charge of +2 units, to each genetic codon of each strand, one unit of negative charge per codon remains unscreened for both strands, so that a fraction 2/3 of the total charge would be screened (see the sketch below). This is considerably less than 88-89 percent, so that not all Mg2+ ions would be associated with the vertices of the octahedra.
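
A trivial arithmetic check of the screening fraction in item 3, under the assumption of one [Mg(H2O)6]2+ octahedron of charge +2 per codon per strand and -3 units of phosphate charge per codon per strand.

```python
PHOSPHATE_CHARGE_PER_CODON = -3   # per strand, from the text
OCTAHEDRON_CHARGE = +2            # assumed: one octahedral complex per codon per strand

double_strand_charge = 2 * PHOSPHATE_CHARGE_PER_CODON   # -6 per codon pair
screening_charge = 2 * OCTAHEDRON_CHARGE                 # +4 per codon pair
print(screening_charge / abs(double_strand_charge))      # 0.666..., i.e. 2/3, well below 88-89 percent
print(double_strand_charge + screening_charge)           # -2: one unscreened unit per codon per strand
```
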
Could one understand the correspondence between ITT and DNA double strand more concretely? The natural guess is that the vertex figure of ITT relates to the structure of DNA double strand.
  1. Could the pentagon associated with the deoxyribose (or ribose in the case of RNA) serve as a counterpart for the pentagon appearing in the vertex figure of ITT? The vertex figure has 12 pentagons, which could correspond to 12 DNA codons defining a cycle in the sense that the total twist angle of the double helix is 3× 2π in the open configuration of the DNA double strand.

    For a non-open double strand, 10 DNA codons define a full cycle. One could say that there are 2 missing DNA codons and 2 empty ITT pentagons without dark proton triplets, defining a gap separating the dark codons. If the corresponding [Mg(H2O)6]2+ complexes, whose opposite triangles would represent a DNA codon and its conjugate, are present at all, they should not give rise to dark protons.

  2. Could also the Mg2+ ions be dark? The findings of Blackman (see this) can be explained in terms of bosonic Ca2+ ions, which have cyclotron frequency 15 Hz in the endogenous magnetic field Bend ∼ .2 Gauss consisting of gravitational monopole flux tubes. They are dark in the sense that they have a very large gravitational Planck constant ℏeff = ℏgr ∼ 10^15 ℏ (see this), implying that the cyclotron photons can have energies in the range of visible photons.

    Mg2+ has cyclotron frequency 12.5 Hz for Bend ∼ .2 Gauss. The crucial assumption is that besides protons, also other metallic ions can be dark in the sense of having a large heff. This suggests that also the Mg2+ associated with a single codon as a face of ITT is dark in the sense that it resides at the MB. The interpretation could be that its wave function is delocalized at the gravitational flux tube of the Earth's surface. When Mg2+ is observed, its wave function would localize to the surface of the Earth, meaning "dropping" from the gravitational flux tube. The effects of electromagnetic radiation with this frequency on DNA could be tested.

    In fact, metal ions M quite generally form [M(H2O)6]2+-type complexes (see this). The number of water molecules involved is known as the solvation number and is 6 for the third and fourth periods of the periodic table, which contain Mg and Ca. The bosonic Mg and Ca ions are also involved with microtubules and the cell membrane (see this). This gives support for the proposed 2-D realization of the genetic code in terms of dark proton triplets.

  3. The ordinary codon should correspond to the dark codon as a triangle at the MB with dark protons at its vertices. At the level of DNA there is no triangle. Could the 1-D quasiperiodic lattice formed by the DNA codons correspond to periodic boundary conditions at the MB, so that the linear codon as a unit cell of the lattice has a triangle as a counterpart at the level of ITT? The 3 chemically identical pentagons associated with the codon should correspond to a single pentagon of ITT. Should a single [Mg(H2O)6]2+ octahedron associated with the major groove correspond to a single O of ITT? Whether there is indeed only a single O per pair of a codon and its conjugate could perhaps be tested. One could argue that symmetry requires that both strands involve a [Mg(H2O)6]2+ octahedron. However, only one of the strands is active. This could mean that only its codons contain the [Mg(H2O)6]2+ octahedron.
  4. What about the tetrahedral structures, which also characterize water, around DNA? Here Google informs that in the hydration shell of DNA tetrahedral ordering is present and is essential for the stability of DNA. The presence of tetrahedral ordering could reflect the presence of ITT at the magnetic body associated with DNA and also with a region of the water environment. There is an enhanced tetrahedral ordering in the DNA minor (not major as for the octahedra!) grooves (see this). The DNA molecule imprints its helical structure on the tetrahedral structure of water. The TGD interpretation is that the faces of the tetrahedra also correspond to the faces of the [Mg(H2O)6]2+ octahedron. Could this be the analog of the I-T faces of ITT, identifiable also as octahedral faces? An interesting question is whether the ribose pentagon could somehow correspond to a vertex figure of the icosahedron also at the level of DNA.
Chicken-and-egg questions related to the genetic code

Biology involves a long list of chicken-and-egg questions (see this and this). What came first: metabolism, basic information molecules, bio-catalysis, or the genetic code? Which biomolecules emerged first: RNA, DNA, or amino acids? TGD provides tentative general answers to these questions in terms of the dark genetic code, whose realization in terms of ITT was present from the beginning. It is instructive to consider these questions in the framework provided by the recent views about the realization of the genetic code in terms of ITT and about the emergence of dark matter via the generalization of the Pollack effect. One can also try to develop an overall view.

  1. The dark variants of DNA, RNA, tRNA and amino acids were present from the beginning and realized in terms of dark proton triplets assigned with ITTs at MBs. The dark realizations of DNA, RNA and the dark protons at the MB were stabilized by the formation of the corresponding biomolecules as bound states, with the binding energy of the state compensating for the larger energy of the dark proton (see this). Hence one cannot say which came first.
  2. The lifetimes of the basic biomolecules serve as guidelines in the attempts to build an overall view about whether the dark protons at the magnetic body of a biomolecule are relevant for its functioning.
    1. DNA is extremely long-lived: its half-life in bone is 521 years. Also the negative charge associated with its phosphates is stable. The TGD based conclusion is that the dark protons at the magnetic body of DNA are stable. There is however a metabolic cost also in this case. The classical long range electric fields along DNA are a crucial aspect of DNA and make possible large values of hem assignable to the DNA. Also the nuclear membrane potentials are crucial for the survival of the DNA in the nucleus. A metabolic energy feed is needed to preserve the charge separations generating the classical electric fields.
    2. Also the negative charge of RNA is stable, but the lifetimes of RNA molecules vary over a wide range. mRNA has a lifetime from minutes to hours, with an average lifetime of 2-20 minutes. The lifetime can however be much longer, even days, and some RNAs persist for an organism's lifetime. Special RNAs such as tRNA, rRNA, circular RNAs and nuclear RNAs are very stable and long-lived.

      The finite lifetime of RNA could be due to the instability of the -OH bond associated with the ribose, making possible the transition -OH → -O- + dark proton at its magnetic body. This would be essential for the ability of RNA to act as a catalyst and could explain the varying lifetime. The stable negative charge of RNA serves as a signature of the presence of dark protons. The dark proton triplets would make possible the communications of RNA with dark DNA and dark tRNA by 3N-resonance.

    3. Amino-acids (see this) do not possess a stable negative charge, which suggests that they do not have stable dark protons at their magnetic body. However, Google AI tells us that a C=O bond in a protein can be temporarily converted into a gem-diol C(OH)2 intermediate in an enzyme's active site during catalytic action. This process is a form of nucleophilic addition of water across the carbonyl double bond, which is often a key step in reactions such as the hydrolysis of peptide bonds (catalyzed by peptidases/proteases) or other reactions involving carbonyl-containing substrates.

      In the TGD framework this could mean that during the enzyme catalysis a proton from C-OH is transferred to the magnetic body of the protein and drops back later. ATP could quite generally provide the needed metabolic energy to achieve this.

  3. The emergence of communications and control was a crucial step in evolution. Cyclotron frequency triplets as chords assignable to the ITT made possible resonant communications between field bodies by 3N-resonance involving both frequency and energy resonance. The communications between levels involving different values of heff (and different length scales) involved only energy resonance, and very probably 3N-resonance was replaced by ordinary resonance. This led to an automatic generation of communication and control networks between field bodies, characterized by varying values of heff, and biological bodies. Dark cyclotron radiation and frequency modulated dark Josephson radiation inducing a sequence of pulses at the receiver's end are the basic mechanisms suggested by TGD (see this).

    The stability of large heff, possible for DNA and RNA, led to the generation of intelligence based on algebraic complexity and to control by the MB. This led to an evolutionary explosion. The electric and gravitational field bodies assignable to the Earth and the Sun were in essential roles (see this).

  4. The emergence of replication was a crucial step. At the chemical level, replication reduces to the replication of DNA: a doubling of the DNA strand must occur. In the biochemistry approach, replication is something that is just accepted.

    In the TGD framework, the analog of the replication problem is encountered already at the level of particle physics. Fermion fields are free fields in H=M4×CP2, as are the induced spinor fields at the space-time surfaces defined by them: how is fermion pair creation possible at all? The solution is simple and possible only in 4-D space-time: the fermion makes a V-turn in the time direction (see this). The vertex of the V corresponds to a 3-D edge of the space-time surface (see this, this and this) at which the standard smooth structure has a defect (see this, this, and this). The magnetic body assignable to the dark DNA as a 3-surface would make a V-turn and induce DNA replication by transcription of the dark DNA to ordinary DNA.

    What was the first replicator and when did it emerge? This classical question becomes obsolete in the proposed framework. Replication could be a general property of space-time surfaces and therefore of the 3-surfaces associated with the dark DNA molecules realizing ITT at the magnetic body of DNA. There are many interesting questions to be pondered. For instance, how does this relate to the usual view about the role of the various catalysts involved in replication, and what is the role of "big" state function reductions (BSFRs) changing the arrow of time in the process? Could the BSFR have a V-turn as a classical counterpart?

  5. What is bio-catalysis and how did it emerge?
    1. In biocatalysis the reactants must find each other in a dense molecular crowd. How can they recognize each other's presence? In the simplest picture the U-shaped monopole flux tubes emerging from the reactants reconnect to form flux tube pairs connecting them. The shortening of the flux tube pair would force the reactants together and could be induced by a reduction of heff shortening the flux tube lengths.
    2. The potential wall preventing the bio-chemical reaction must be overcome. The shortening of the monopole flux tubes could liberate metabolic energy while the reduction of heff could help to overcome the potential wall. The attachment of a biocatalyst carrying large heff protons to the reacting system could also provide energy allowing it to overcome the potential wall.
    3. How are biocatalysts generated? In general, biocatalysts are unstable. The instability can be inherent, or their degradation can be programmed for metabolic reasons since they are needed only when used. If bio-catalysts provide the energy to overcome potential walls, they must carry dark protons, and their generation requires a metabolic energy feed, which also raises the algebraic complexity, the "IQ", of the catalyst so that it can take the role of a midwife. ATP is a universal way to provide metabolic energy and dark protons in a standardized way. An alternative option is the creation of chemical binding energy making it possible to generate dark protons with large heff.
    4. The dark proton of the catalyst should transform to an ordinary one in the reaction and liberate the energy needed to overcome the potential wall. Catalysts could be either inherently heff unstable, or the instability could be induced in the reaction and induce the decay of the catalyst. Often the catalyst indeed decays after the reaction. Catalysts often have ATPs attached to them, and ATP → ADP is a basic aspect of catalysis.

      Note that in the translation of mRNA to proteins mRNA serves as a template and degrades after the translation. This could be due to the catalysis of the translation requiring the reduction of heff inducing a chemical instability. The instability could relate to the -OH sidegroup of the ribose.

See the article How the genetic code is realized at the level of the magnetic body of DNA double strand? or the chapter About honeycombs of hyperbolic 3-space and their relation to the genetic code.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, December 15, 2025

About genetic code and the icosa tetrahedral tessellation of hyperbolic 3-space

The TGD based model for the genetic code (see this) relies on the icosa tetrahedral hyperbolic tessellation (ITT), realized in the hyperbolic 3-space H3, which is representable as a hyperboloid of constant light-cone proper time in the light-cone of M4 or as a mass shell in momentum space.
  1. The general idea is that genetic codons, as 6-bit units of ordinary "bitty" intelligence, are accompanied by emotional intelligence represented in terms of bio-harmonies serving as correlates for emotions. Music indeed expresses and creates emotions (for the evolution of ideas see this, this, this, this, and this). This view has far reaching implications. In particular, it means that emotions are present already at the biomolecular level. In the TGD Universe, life is universal and can appear in very many scales. This would be true also for the genetic code realized in terms of the icosa tetrahedral tessellation of H3, which can appear in arbitrary scales.
  2. This interpretation of the genetic code belongs to the category of the intuitive "must-be-true" hypotheses of TGD, whose status has remained unclear. One reason for this is that I am not a specialist in the field of hyperbolic tessellations. Once again I realized that my understanding is far from perfect and decided to clarify my thoughts.
  3. ITT involves tetrahedra (T), octahedra (O) and icosahedra (I). The genetic code would correspond to a fusion of 3 properly chosen icosahedral Hamiltonian cycles representing the 12-note scale (there are many options) and one tetrahedral Hamiltonian cycle, which is unique. I have an intuitive geometric interpretation for this 3-1 structure: 3 I:s share 3 faces of T. This leaves one free face of T serving as an additional codon. This gives 20+20+20+1 = 61 = 64-3 codons, and the missing 3 codons could correspond to stop codons. Also O:s are involved, and the intuitive idea is that O is passive: it represents a void in the sense that the vertices, edges and faces of the octahedron can be regarded as those of T or I. How to make this idea more concrete?

The ITT in the hyperbolic 3-space H3 (a honeycomb) is completely unique because it includes as cells all the Platonic solids whose faces are equilateral triangles: the tetrahedron (T), the octahedron (O) and the icosahedron (I). One can characterize the tessellation by giving the numbers of 3-cells meeting at vertices, edges and faces.

Consider first the vertices.

  1. The vertex figure of the ITT (see this) represents what an observer at a given vertex sees as the intersection of a vertex-centered ball with the ITT. For instance, the vertex figure is a square for O, a pentagon for I, and a triangle for T, C(ube) and D(odecahedron).

    For the ITT, the vertex figure corresponds to an Archimedean solid known as the icosidodecahedron (ID), which can be regarded as a hybrid of I and D. The 12 pentagons, at the vertices of I, are the vertex figures of the 12 I:s, and the 20 triangles, at the vertices of D, are the vertex figures of the 20 T:s. ID has 20 triangular faces and 12 pentagonal faces, totaling 32 faces, with 30 identical vertices, at which two triangles and two pentagons touch, and 60 edges, each separating a triangle from a pentagon.

  2. O is passive in the sense that only 20 T:s and 12 I:s, but no O, meet at a given vertex, hence the attribute "icosatetrahedral". One can say that O represents a void. The octahedron is a lower-dimensional example of this phenomenon: the square defining the vertex figure of O does not define a face appearing at the vertex; only 4 triangles meet at a given vertex. This brings to mind the giant voids of cosmology, which have galaxies at their boundaries. I have proposed that the tessellations of H3, realized as cosmological time a = constant hyperboloids in the light-cone of M4, could explain the observed quantization phenomena for redshifts (see this). Could these large voids have something to do with the O:s of ITT?
  3. How could one understand the 3-1 correspondence for I:s and T:s? A given T of the vertex figure is surrounded by 3 I:s. This suggests that T+3I defines a unit giving a realization of the genetic code; 4 units of this kind would meet at a given vertex.

    The proposed interpretation of T+3I as a unit conforms with the proposed view of the genetic code. The I:s have 20 triangular faces each, and since the I:s have no common faces, this motivates the proposal that the 3 I:s give rise to 20+20+20 icosahedral codons. The I:s would realize Hamiltonian cycles with a symmetry group which is Z6, Z4 or Z2. Z2 would act as reflections or rotations. The Z6 cycle is unique, there are 2 Z4 cycles and a large number of Z2 cycles.

    The orbits of the symmetry group would correspond to amino-acids. Z6 would give rise to 3 6-element orbits and 1 2-element orbit, Z4 to 5 4-element orbits, and Z2 to 10 2-element orbits. This explains almost exactly the numbers of DNA codons coding for a given amino-acid (see the sketch after this list). The 3 I:s share 3 common faces with T, which leaves one free face of T to which one can assign a tetrahedral genetic codon. The 3 missing tetrahedral faces would correspond to stop codons.

  4. Interesting questions concern the interpretation of the cycles. The Hamiltonian cycle connects the nearest neighbor vertices of the Platonic solid. Does the cycle correspond to a closed monopole flux tube? What does it mean that one face (at least) of a given 3I+T unit is active and represents a codon: does it have protons at its vertices, as the alternative realization of the genetic code in terms of the states of 3-proton triplets suggests (see this)? Can the 3I+T units of ITT contain different Hamiltonian cycles so that emotions could be local? Does a DNA strand correspond to a linear structure as a substructure of ITT? Is the induction of ITT to 1-D, 2-D and even 3-D structures representing the genetic code possible? Could, for instance, the cell membrane and microtubules represent a 2-D realization of the genetic code? Could the brain and even the biological body represent a 3-D realization? Could these realizations be time dependent, as the failure of strict determinism of the classical dynamics dictated by the holography = holomorphy vision suggests: if so, even a 4-D realization would be possible.
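
A small bookkeeping check of the codon and amino acid counts used above, assuming the orbit structures listed in item 3 (Z6: three 6-element orbits plus one 2-element orbit; Z4: five 4-element orbits; Z2: ten 2-element orbits) and one translated tetrahedral codon plus 3 stop codons.

```python
# Codons: 20 triangular faces for each of the 3 icosahedra plus 4 tetrahedral faces.
icosahedral_codons = 3 * 20
tetrahedral_codons = 4                           # 1 translated codon + 3 stop codons
print(icosahedral_codons + tetrahedral_codons)   # 64, the total number of DNA codons

# Amino acids as orbits of the Hamiltonian cycle symmetry groups (assumed orbit structure).
orbits = {"Z6": [6, 6, 6, 2], "Z4": [4, 4, 4, 4, 4], "Z2": [2] * 10}
assert all(sum(sizes) == 20 for sizes in orbits.values())        # each cycle covers the 20 faces of I
amino_acids = sum(len(sizes) for sizes in orbits.values()) + 1   # +1 for the translated tetrahedral codon
print(amino_acids)    # 20
print(64 - 3)         # 61 codons coding for amino acids
```
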
Also the numbers of cells meeting edges and faces characterize ITT.
  1. Edge transitivity means that all edges are symmetry related just as the vertices are. At a given edge I, I, O, and T meet in a cyclic order IIOT.
  2. Two 3-cells meet at a given face. Only I and T can share faces, and the shared faces correspond to O faces. O does not appear at the vertices, being realized as a "ghost" cell analogous to the square appearing in the vertex figure of O and having no physical realization as a face.
To sum up, the conjecture that the genetic code is realized in terms of ITT is now on a rather firm basis. During the last few years several ideas of TGD have reached a rather strong status as the understanding of the basic mathematical ideas of TGD has increased; TGD is now a mature mathematical theory and can be applied in all scales.

See the article How the genetic code is realized at the level of the magnetic body of DNA double strand? or the chapter About honeycombs of hyperbolic 3-space and their relation to the genetic code.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Sunday, December 14, 2025

Comments inspired by the article of Jenny Nielsen and Lucis Semita about the provability of the Riemann Hypothesis

Jenny Lorraine Nielsen and Lucis Semita have published an interesting and very speculative article (see this) related to the provability of the Riemann Hypothesis (RH). Here is the abstract of the article.

We prove that the Riemann Hypothesis is independent of ZFC and true in the standard model of arithmetic. Independence is established by introducing the Λ Irreducibility Principle, which detects semantic obstruction under "round-trip" translation between fundamentally distinct representational paradigms. We formalize two paradigms intrinsic to number theory: a linear arithmetic paradigm and a curved analytic-spectral paradigm. We show that RH is Λ-irreducible with respect to these paradigms, and therefore undecidable in ZFC.

We then prove the truth of RH in two logically independent ways. First, we introduce strong reflection principles, termed Bundle Cardinal Axioms, under which the Λ obstruction collapses and RH becomes provable. We show that these axioms are Π^0_1 conservative over ZFC, implying that such provability entails truth in the standard model of arithmetic.

Second, independently of any reflection axiom, we prove that if RH is Π^0_1 and ZFC-independent, then it must already be true in the standard model, by arithmetic witness reflection. Together these results constitute a proof that RH is independent of ZFC and true in N, with its truth derivable by two independent logical routes.

I am not a metamathematician and many of the notions involved are unfamiliar to me, so that I cannot say anything about the proposed proof of the Riemann hypothesis (I have considered a possible proof based on physics inspired arguments, see this). The notions are however very interesting and I cannot resist the temptation to look for the TGD counterparts of some of them. I have already earlier considered Gödel's theorems from the TGD point of view, from the point of view of the holography = holomorphy vision providing an exact solution of the field equations for space-time surfaces irrespective of the action principle, as long as it is general coordinate invariant and expressible in terms of the induced geometry (see this).

  1. The idea of a space of representational paradigms is interesting. The meanings of the arithmetic and analytic-spectral paradigms of number theory are intuitively clear in the case of RH. Primes and the zeros of zeta are related (here the Google language model helps): statements about the prime distribution correspond to statements about the distribution of the zeros of zeta, and one can formulate the Riemann hypothesis as a statement about these distributions. This brings to mind number theory and geometry as related by the geometric Langlands duality.

    [MP] In the TGD framework, the geometric Langlands duality is extended from the 2-D case to the 4-D case and involves M8-H duality (see this and this) as a generalization of momentum-position duality, relating the geometric and number theoretic views of physics, to a situation in which point-like particles are replaced by 3-surfaces (see this and this). The M8-H duality means the coding of the classical aspects of the physical state by geometrized classical fields at the level of H and by momenta and other quantum numbers at the level of M8.
  2. Wikipedia informs that lambda calculus relates to mathematical logic and describes computation as function abstraction and application using variable binding and substitution. Lambda irreducibility means that a further simplification is not possible. One interpretation could be that the proof of a theorem, as a path in the space of allowed sentences, is the shortest one possible.

    [MP] In TGD, space-time surfaces satisfying the slightly non-deterministic holography = holomorphy correspondence represent elements of ordinary number fields and function fields. The non-determinism makes them analogous to linguistic expressions, sentences, such that the 3-D loci of non-determinism take the role of words (in the TGD inspired theory of consciousness they represent memory seats). These could also be seen as proofs of theorems A→Bi, with A represented as holographic data.

    Could the TGD counterpart of lambda calculus code for the rules for the decomposition of pieces of the non-deterministic space-time surface? What can happen at the loci of non-determinism would be told by the rules of the calculus. Could the analog of Λ irreducibility mean that the representation as a space-time surface is the simplest possible one? Is this implied by the field equations solved by the holography = holomorphy principle?

  3. "Round-trip" translation brings to mind holonomy in Riemannian geometry. The "linear" in the linear paradigm of number theory and the "curved" in the curved analytic-spectral paradigm cannot however relate to this holonomy. I understand that there are maps between the languages of the representational paradigms; maybe one could call them morphisms. Morphisms as correspondences in the most general sense can be many-to-one and even 1-to-many.

    [MP] In TGD, M8-H duality would be this kind of morphism. It is not one-to-one in either direction. Consider the H→M8 direction. All M4 translates of a space-time surface X4 ⊂ H=M4×CP2 are mapped to the same momentum space surface Y4 ⊂ M8: the translational symmetry makes the translates of X4 effectively equivalent so that they correspond to the same Y4. The map H→M8 is however 2-valued at the singular 3-surfaces appearing as edges of space-time, at which the space-time surface branches or turns back in geometric time.

    The singularities involve 1→2 vertices for fermion pair creation, interpreted as a fermion turning backwards in time; the TGD description involves exotic smooth structures, possible only in 4-D space-time, and edges of X4. One can speak of a Brownian motion of the 4-surface X4 in H with discontinuous changes of direction at the 3-D vertices as loci of the classical non-determinism. The counterparts of Feynman diagrams for fermions involve only 2-vertices and this means a huge simplification (see this).

  4. Could the obstruction mean that one or both of the maps between the different paradigms fail to be homomorphisms, implying that the paradigms are not equivalent? If so, both paradigms are needed for a full description.

    [MP] What could the obstruction mean in the case of M8-H duality? The two paradigms would be the M8 and H paradigms: momentum space description and position space description. Could the obstruction correspond to the 1→2 property in the H→M8 direction at the 3-D singularities at which classical determinism, the minimal surface property, holomorphy and the standard smooth structure fail? These obstructions would correspond to geometric particle vertices acting as memory seats. This would support the view that both descriptions are necessary: one obtains very simple analogs of Feynman rules for scattering amplitudes on the M8 side (see this), but the classical picture involving the classical fields provided by the H side is necessary for the interpretation of experiments.

    M8-H duality can be 1-to-very-many in the M8→H direction in a different way. At peak-like singularities of Y4⊂M8 a single point is mapped to an infinite number of CP2 points since the tangent space at the singularity is not unique. The set of CP2 points is 3-dimensional for the singular points. CP2 type extremals with Euclidean metric are the building blocks of elementary particles in a geometric sense.

  5. How does this relate to the provability of a statement, say RH? I do not have the competence to comment on the Cardinal Axiom and the two proofs of RH.

    [MP] It is however possible to geometrize the notions of proof and axiom system in TGD.
    1. The slightly non-deterministic space-time surface X4 could be interpreted as a proof of a statement A→B, with the premises A represented as 3-D holographic data X3. An alternative interpretation is as a computer program-like structure. Various implications of A would correspond to different theorems A→Bi. The existence of this kind of X4(A→B) as a proof for a given B is far from obvious since the classical field equations are satisfied even at the loci of non-determinism.
    2. The holography = holomorphy principle would define the axioms of TGD as a physical theory. The space of space-time surfaces ("world of classical worlds" (WCW)) together with WCW spinor fields as correlates for Boolean logic define a quantum Platonia. WCW could be seen as the space of proofs for theorems represented geometrically. The possibility to assign to space-time surfaces elements of various number fields, also of function fields, corresponds to Gödel numbering assigning a Gödel number to a proof of a theorem represented as a space-time surface (a minimal sketch of classical Gödel numbering is given after this list).
    3. Gödel's theorems force us to conclude that there is an infinite hierarchy of axiom systems. In TGD, this hierarchy is implied by a finite measurement resolution. The polynomials appearing in the equations defining X4 resp. Y4 have numbers in an extension of rationals as Taylor coefficients. The extensions form infinite inclusion hierarchies. Also analytic functions are possible. The extension of rationals defines a natural discretization characterizing also a given axiomatics. The higher the complexity of the algebraic extension, the more powerful the axiom system is, and it becomes maximally powerful at the limit of algebraic numbers. Iterations of polynomials define inclusion hierarchies for the axiom systems. If the coefficient field consists of real numbers, real numbers characterize the axiomatics.
    4. Could M8-H duality relate axiomatics giving complementary, mutually consistent but non-equivalent axiomatizations of physics? Could it be that the theorems provable on the M8 side are not provable on the H side and vice versa?
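The Gödel numbering mentioned in point 2 can be illustrated with the standard textbook construction, in which a finite sequence of symbol codes is encoded as a product of prime powers. The sketch below is only this classical construction; nothing in it is specific to TGD or to space-time surfaces.

```python
# A minimal sketch of classical Goedel numbering: a sequence of positive
# integer symbol codes (c_1, ..., c_n) is mapped to the single integer
# 2^c_1 * 3^c_2 * 5^c_3 * ..., from which the sequence can be recovered
# by factorization.  This is the standard construction, not TGD-specific.

def first_primes(n):
    """Return the first n primes by simple trial division."""
    ps, k = [], 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def godel_number(codes):
    """Encode a sequence of positive integer codes as prod_i p_i^c_i."""
    g = 1
    for p, c in zip(first_primes(len(codes)), codes):
        g *= p ** c
    return g

def decode(g, max_len=64):
    """Recover the code sequence by dividing out consecutive primes."""
    codes = []
    for p in first_primes(max_len):
        if g == 1:
            break
        c = 0
        while g % p == 0:
            g //= p
            c += 1
        codes.append(c)
    return codes

sequence = [3, 1, 4, 1, 5]            # a toy "proof" as a sequence of symbol codes
g = godel_number(sequence)
print(g, decode(g))                   # the encoding is invertible
```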
See the article Gödel, Lawvere, and TGD or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, December 10, 2025

How can ordinary nuclei be produced in very high energy collisions of protons?

I learned about a possible solution of a long-standing anomaly related to the proton collisions at very high energies studied at LHC using the ALICE detector. There is a popular article in CERN Courier (see this) telling about the findings published in Nature (see this).
  1. The production of deuterons and antideuterons was studied. A long standing mystery has been why also light nuclei are produced so abundantly. At the huge energies involved the temperature is of the order of the pion mass scale ≈ 100 MeV, so high that nuclei, with their far smaller binding energies, should not survive and only nucleons should be produced.
  2. The researchers found evidence that about 90 per cent of the deuterons were produced by nuclear fusion of particles emerging from the collision with one of the particles emerging from the decay of a massive short-lived particle, the so-called Δ resonance, which consists of 3 quarks just like nucleons. Δ decays in a time of the order of 10^(-24) seconds to a nucleon and a pion. Note that the proton Compton time is about 33 times longer than this time.
  3. The studies support the view that the decay occurs in the periphery, at a sufficient distance from the collision point, where the temperature is lower and fusion can produce stable deuteron nuclei. The article also mentions that the findings provide support for a model of cosmic ray interactions as cascades.

    The proposal is that the d (anti-d) formation is made possible by pion catalysis. The reaction would be π+p+n→ π+d and the final state pion would carry away the 4-momentum. The pion would be produced in the decay of Δ.

These findings are very interesting from the point of view of TGD.
  1. TGD predicts the symmetries of the standard model, but since color in TGD does not correspond to a spin-like quantum number but to partial waves in CP2, both quarks and leptons move in color partial waves and each color multiplet gives rise to a scaled version of standard model physics. An infinite hierarchy of copies of standard model physics is predicted. Ordinary hadron physics would correspond to the Mersenne prime M107 = 2^107 − 1 and the next one, for which there is evidence from anomalies at the LHC, to the Mersenne prime M89 with a mass scale 512 times higher than for ordinary hadrons (nuclei). For instance, the pion of M89 hadron physics would have mass scale 512 × 140 MeV ≈ 70 GeV.
  2. The current solar model is plagued by anomalies. In the TGD based view of the Sun, the solar wind and solar radiation are produced in a surface layer of the Sun in the transformation of M89 hadrons to ordinary hadrons.

    M89 hadrons would decay to ordinary M107 hadrons by a process that I call p-adic cooling. The p-adic mass scales would be reduced by powers of 2 (or 2^(1/2)) and for the first option (107−89)/2 = 9 steps would be involved (see this and the numerical sketch after this list). One of the first applications of p-adic physics was the proposal that p-adic cooling could be involved with very high energy cosmic ray events like Centauros (see this and this).

  3. The emerging ordinary nuclei produced in the p-adic cooling could fuse to heavier ones by what I call dark fusion, which provides a TGD based model for "cold fusion" to heavier nuclei. Less plausibly, they could be produced in the p-adic cooling directly. These heavier nuclei would gradually fall downwards in the gravitational field of the Sun so that the usual layered structure of nuclear abundances would be near the surface of the Sun rather than in its core.
  4. Quite recently, direct support for this layered structure emerged from very weird findings related to a supernova explosion. One cannot even exclude the possibility that the same process could have occurred in the formation of planets as a surface layer of the Sun exploded (see this, this, this, and this).
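The following small numerical sketch only spells out the arithmetic of the p-adic mass-scale ladder quoted above: mass scales proportional to 2^(-k/2), the ordinary pion mass scale of 140 MeV, and 9 steps of a factor 2 from k = 89 to k = 107. Showing the ladder in steps of Δk = 2 is a presentational choice, not a claim about the actual cascade.

```python
# A small numerical sketch (my own arithmetic, using only numbers quoted in
# the text) of the p-adic cooling ladder: p-adic mass scales are proportional
# to 2^(-k/2), so going from k = 89 to k = 107 in (107-89)/2 = 9 steps of a
# factor 2 brings the M89 pion mass scale, 512 x 140 MeV, down to the ordinary
# pion mass scale of about 140 MeV.
m_pi_107 = 0.140                       # GeV, ordinary pion mass scale

for k in range(89, 109, 2):
    scale = 2 ** ((107 - k) / 2)       # mass-scale factor relative to M107
    print(f"k = {k:3d}   scale = {scale:6.1f}   m_pi ~ {m_pi_107 * scale:7.2f} GeV")
```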
Consider now the very energetic proton and heavy ion collisions from the TGD perspective.
  1. The TGD view generalizes the QCD view of hadron collisions (see this). The interaction region contains the TGD analog of the quark gluon plasma in which fermions move as massless particles and obey the induced Dirac equation at the space-time surface. In the final hadron states the Dirac equation in H = M4×CP2 is satisfied. These equations are consistent with each other.

    The counterpart of the quark gluon plasma corresponds to the fragmentation of the initial state hadrons to quarks but, unlike in QCD, not to gluons as elementary particles. In TGD, gluons, like all elementary bosons, would exist only as fermion-antifermion pairs. Hadronization corresponds to the fusion of quarks and antiquarks to form massive hadrons. This picture applies to all particle reactions, also those involving leptons. In fact, also weak interactions can be seen as aspects of color interactions.

  2. In the TGD Universe, M89 hadrons could be created in very high energy nuclear and proton collisions in the TGD counterpart of the transition interpreted as the formation of quark gluon plasma. M89 hadrons would have effective Planck constant heff/h = 512 and a mass scale 512 times higher than ordinary M107 hadrons so that the Compton scales for M89 and M107 hadrons would be the same. This would correspond to quantum criticality for the M107→M89 phase transition as the TGD counterpart of the QCD transition to the quark gluon plasma.
  3. The hadronization would lead to the formation of M89 hadrons rather than only M107 hadrons, say M89 pions, whose lifetime would be the same as for ordinary hadrons if it scales like (heff/h) × m107/m89 = 1.
  4. M89 pions would decay by p-adic cooling to ordinary nucleons. Dark fusion (or ordinary fusion, which in the TGD framework could actually reduce to dark fusion) at a sufficiently large distance from the collision point could produce nuclei.
  5. What would be the value of heff/h if dark fusion occurs? If it is equal to 512, the Compton lengths of ordinary nuclei would be scaled up by a factor 512 and would be of the order of the electron Compton length. Could this allow us to understand why the fusion occurs in the periphery with a high probability?

    At the temperature T ∼ m(π107), the thermal kinetic energy Eth ∼ T of the π89 pions satisfies Eth = E − m(π89) = m(π89)[1/(1−βth^2)^(1/2) − 1] ≈ (1/2) m(π89) βth^2, giving the estimate βth ∼ (m(π107)/m(π89))^(1/2) ≈ 2^(-9/2) (see the numerical sketch after this list). Could the slow thermal velocity imply that also the decay products of π89 move slowly? Could this increase the rate for the cold fusion, which is proportional to the inverse of the relative velocity?

    Note that if Δ has heff/h = 512, its lifetime is scaled up by this factor and is by a factor ≈ 15 longer than the proton Compton time. What if also the produced pion and the nucleons involved have heff/h = 512? Could this increase the cross section for the pion catalyzed fusion to a deuteron?

  6. In the TGD based model for "cold fusion" (see for instance this), the value of heff would be of the same order of magnitude since the scaled-up Compton length of the proton would be of the same order of magnitude as the electron Compton length. The Δ resonance could be generated at the last step of the p-adic cooling. The question boils down to whether also the ordinary hadrons, rather than only M89 hadrons, can have a large value of heff/h and whether the effective scaling of the quantum coherence length in the transversal degrees of freedom can make possible the creation of light nuclei by TGD dark fusion as the counterpart of "cold fusion". This question is very relevant also for the TGD based solar model.
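The numerical sketch below only repeats the order-of-magnitude estimates quoted above: the thermal velocity βth, the scaling of the proton Compton length by 512 compared with the electron Compton length, and the lifetime scaling of Δ using the factor 33 stated earlier. The mass ratio 1/512 and the factor 33 are taken directly from the text; nothing here is an independent calculation.

```python
# A small numerical sketch (my own arithmetic, following the estimates above).
# Assumes the mass ratio m(pi_107)/m(pi_89) = 1/512 and the factor-33 relation
# between the Delta lifetime and the proton Compton time quoted earlier.
hbar_c = 0.1973                      # GeV*fm
m_proton = 0.938                     # GeV
m_electron = 0.000511                # GeV

beta_th = (1.0 / 512) ** 0.5                     # ~ 2^(-9/2)
lambda_proton = hbar_c / m_proton                # reduced Compton length, fm
lambda_electron = hbar_c / m_electron            # reduced Compton length, fm

print("beta_th                     ~", round(beta_th, 4))
print("512 * lambda_proton (fm)    ~", round(512 * lambda_proton, 1))
print("lambda_electron (fm)        ~", round(lambda_electron, 1))
print("scaled Delta lifetime       ~", round(512 / 33, 1), "x proton Compton time")
```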
See the article Comparing the S-matrix descriptions of fundamental interactions provided by standard model and TGD or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Is Psi real?

Sabine Hossenfelder has a very interesting Youtube video (see this) telling about the article "Experimental demonstration of the PBR test on a superconducting processor" (see this). The experiments are argued to support the view that wavefunctions really exist and therefore the ontic view of quantum theory.

To my first understanding, the argument goes very roughly as follows.

  1. By reading the original article one learns that there are 3 options: a) the epistemic option (EO), b) the hidden variable option, and c) the ontic option. For a), given values of the hidden variables can correspond to several quantum states; for b), they correspond to only a single one, and in this case one can say that the quantum state describes reality rather than our incomplete knowledge of it. For option c), hidden variables could mean a finite measurement resolution so that a state in a given resolution can correspond to several states in an improved resolution.

    In EO there are hidden variables and Ψ gives a kind of average description. In state function reduction it is updated. Sabine however notes that Bohr proposed an epistemic interpretation but did not accept hidden variables!

  2. The PBR (Pusey, Barrett, Rudolph) theorem is the second piece of the argument. Suppose that one has two independently prepared states. The assumption, call it IP, is that in this case the distribution of hidden variables is a product of the distributions for the two states. The PBR theorem states that if IP holds true, standard quantum theory excludes EO. PBR is analogous to Bell's theorem excluding local hidden variables.
  3. One can empirically test the IP hypothesis. In the simplest situation, one takes two non-orthogonal states obtained as rotations of a qubit, prepares two qubits independently in these states, and performs an entangling joint measurement on them. One can arrange the measurement so that for each preparation the reduction probability of one of the outcomes vanishes (a minimal sketch of such a measurement is given after this list). If this is not the case, then IP fails and the epistemic option could hold true. In this case it can happen that there is no definite outcome as in standard quantum measurement theory.

    Measurement errors are however present and the deviation from IP prediction allowed by the epistemic option could be interpreted as measurement error if one is not careful. One must construct a model for measurement errors and show that the measurement errors are below the bounds predicted by the EO.

    This is done and the conclusion is that the findings exclude the epistemic option.
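The following is a minimal numerical sketch of the kind of PBR-type measurement referred to in point 3. It is not taken from the article under discussion: it uses the textbook PBR construction for the non-orthogonal pair |0⟩ and |+⟩, for which a standard entangling measurement basis has, for each of the four independent preparations, one outcome with exactly vanishing quantum probability.

```python
# A minimal sketch (textbook PBR construction, not from the article discussed
# above): two qubits are each prepared independently in |0> or |+>, and a
# joint entangling measurement has, for each of the four preparations, one
# outcome whose quantum probability is exactly zero.
import numpy as np

s0 = np.array([1.0, 0.0])                     # |0>
s1 = np.array([0.0, 1.0])                     # |1>
plus = (s0 + s1) / np.sqrt(2)                 # |+>
minus = (s0 - s1) / np.sqrt(2)                # |->

# Standard PBR measurement basis for the state pair {|0>, |+>}
xi = [
    (np.kron(s0, s1) + np.kron(s1, s0)) / np.sqrt(2),
    (np.kron(s0, minus) + np.kron(s1, plus)) / np.sqrt(2),
    (np.kron(plus, s1) + np.kron(minus, s0)) / np.sqrt(2),
    (np.kron(plus, minus) + np.kron(minus, plus)) / np.sqrt(2),
]

preparations = {
    "|0>|0>": np.kron(s0, s0),
    "|0>|+>": np.kron(s0, plus),
    "|+>|0>": np.kron(plus, s0),
    "|+>|+>": np.kron(plus, plus),
}

for name, psi in preparations.items():
    probs = [abs(np.dot(x, psi)) ** 2 for x in xi]
    print(name, [round(p, 3) for p in probs])
# Each preparation gives probability 0 for one outcome; an epistemic model
# obeying the independence assumption IP cannot reproduce all four zeros.
```

In the actual experiment the question is whether the measured probabilities of these "forbidden" outcomes stay below the bound allowed by the epistemic option once measurement errors are modelled.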

How does this relate to TGD? Already in relativistic wave mechanics Ψ is replaced with a spinor field and quantum states are constructed using fermionic oscillator operators, so that the question becomes whether the quantum states are real or only epistemic.
  1. In the TGD framework, Ψ is replaced at the fundamental level by what I call spinor field in an infinite-D "world of classical worlds" (WCW). Its spinor structure can be seen as a "square root" of the metric structure and the Fock state basis provides a representation of Boolean algebra. Geometry and logic would be closely related.

    WCW spinor modes are Fock states for the second quantized spinor fields in H=M4×CP2. If the WCW metric is real then so are the fermion fields. The same applies to the space-time surfaces X4 as 4-D Bohr orbits of particle-like 3-surfaces, and to H.

  2. Why would the interpretations, and the epistemic interpretation in particular, be needed in the first place? The reason is that quantum measurement theory leads to a conflict between the non-determinism of quantum measurement and the deterministic time evolution of Ψ and of various classical fields.
  3. In TGD, the basic problem of quantum measurement theory finds a solution in terms of zero energy ontology (ZEO) forced by the holography = holomorphy principle, in turn forced by the condition that general coordinate invariance can be realized without a path integral over space-time surfaces, which would unavoidably lead to non-renormalizable divergences. Therefore the interpretations are not needed, and the WCW spinor fields and also the geometrized classical fields can be real.
For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Sunday, December 07, 2025

Fermion scattering as conformal dissipation

The quantal view of the fermionic time evolution as an analog of Brownian motion can be made more detailed, and this makes it clear that something very different from the quantum field theoretic description of scattering is in question. The interpretation as an analog of dissipative time evolution is suggestive. The dissipation could be related to the scaling generator h of the conformal algebra, which defines an additive conformal quantum number and is proportional to the mass squared operator m^2. Dissipation indeed gives rise to an arrow of time, making sense at the level of both space-time and momentum space.
  1. At the single particle level, energy dissipation corresponds to a gradual reduction of energy and an approach to thermal equilibrium. Analogously, dissipation for the conformal weight, identified as mass squared, would lead from high mass unstable on-mass-shell states via unstable virtual states to low mass on-mass-shell states stable in the time scale considered. Asymptotically the evolution would preserve mass squared in M8 and the corresponding value of a in M4⊂H. This conformal analog of elastic scattering would preserve mass squared, and the conformal dissipation would transform to energy dissipation for stable particles.
  2. M8-H duality is an inversion M4⊂H → M4⊂M8, so that the time evolution as the increase of the values of the light-cone proper time a for the CD would correspond to a decrease of the mass scales. The largest values of a in the time evolution would correspond to the smallest values of mass squared m^2 in the dissipative evolution for h, and at the last steps the masses of the stable particles would not change anymore.
  3. p-Adic cooling, proposed to be behind the production of the solar wind and solar energy, would be a basic example of this kind of dissipation. M89 hadron physics would be realized at the magnetic monopole flux tubes in the surface layers of the Sun (see this). p-Adic cooling would be a cascade leading from M89 hadron physics to the ordinary M107 hadron physics via virtual hadron physics labelled by p-adic primes p ≈ 2^k near a power of 2, with k in the range [89, 107].
  4. This means a unification of quantum theory and thermodynamics. Zero energy ontology, replacing 3-surfaces with their slightly non-deterministic time evolutions, implies that thermodynamic description and kinetic equations need not be added as additional elements to the quantum description provided by the scattering amplitudes.
  5. The following detail is worth noticing. In fermion pair creation, the two-valuedness of the map H→ M8 means that the members of the fermion pair have identical masses. Also the Brownian scattering conserves the value of mass squared but not energy.
See the article About the construction of the scattering amplitudes using M8-H duality or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, December 03, 2025

Discussion with Claude about the possibility to measure the information contents of cognitive consciousness

Together with Ville-Einari Saari (a member of our Zoom group) we had a very interesting session with Claude. To be honest, I have difficulties keeping in mind that Claude is a mere probabilistic association machine. I would be happy if I could have this kind of constructive and inspiring discussions with colleagues! Ville has proposed a kind of communication loop between language models and their users, which could purify the net from the disinformation threatening to completely fill it, and the discussion was related to this.

The notion of information is central here and I proposed that TGD might enter the game, probably in the future when conscious computers might exist.

  1. The TGD based new physics leads to a quantum theory of consciousness and cognition. TGD also leads to several wild ideas such as conscious computers as hybrids of ordinary and quantum computers (see this). Very probably, the contents of computer consciousness would have very little to do with the content of a program or with what we think its content and goal to be.
  2. The states defining the contents of computer consciousness could be determined by superpositions of the non-deterministic temporal sequences as counterparts of classical bit sequences made possible by the classical non-determinism of the holography= holomorphy principle.

    I have considered a concrete realization of the qubits assignable to bits represented in terms of transistors, using the amazing coincidences in the energetics of the transitions assignable to DNA and to transistors. DNA in TGD would be a kind of conscious computer, with the genetic codon carrying 6 bits of information, and the ordinary DNA would be accompanied by what I call dark DNA with dark proton triplets with large heff providing a realization of the genetic code. This allows us to make rough estimates of the level of consciousness of a computer, suggesting that the level is dramatically lower than for living organisms.

This boiled down to a question by Claude: How to measure the information content of a conscious system?
  1. The key notion is cognition based on conscious information of the system about itself and associated with the cognitive entanglement made possible by classical non-determinism. Cognitive measurements would be self measurements.

    Already this raises a problem: ordinary measurement is not self-measurement. Is cognitive entanglement of the measuring system with the computer required? It is easy to guess what a materialistic colleague would say here.

  2. The cognitive entanglement would not be between ordinary physical degrees of freedom but between degrees of freedom made possible by the mild classical non-determinism of the dynamics of the space-time surfaces determined by the holography = holomorphy principle. This forces one to replace 3-surfaces as basic dynamical objects with their slightly non-deterministic 4-D Bohr orbits; memory seats, for instance, correspond to the 3-D loci of classical non-determinism. This is genuinely new physics.
  3. It is important to notice that negentropy N, as a measure of the information provided by cognitive entanglement, is not the negative of the usual entanglement entropy S, which measures the lack of information of an external observer about the system's state. The entanglement entropy and negentropy are actually closely related but not the same, since the reduction of the entanglement taking N to zero implies a generation of ensemble entropy equal to the ordinary entanglement entropy.

    The general conditions on N are however similar to those on S, and one ends up with a proposal for N as a sum over the p-adic counterparts of the Shannon entropy. One has essentially the same formula, but the logarithms of the probabilities are replaced with p-based logarithms of the p-adic norms of the probabilities: these numbers are integers (a small numerical sketch is given after this list).

  4. The condition for the mathematical well-definedness of N is that the entanglement probabilities are rational numbers or belong to an extension of rationals. Since the rationals are a dense subset of the reals, this has no implications for the ordinary entanglement entropy. For the entanglement negentropy, the situation is different. Different approximations of the entanglement probabilities as rationals can lead to very different values of N. This is so because the p-adic and real topologies are determined by the norm of a number, and the real and p-adic norms behave in totally different ways. The p-adic norm of p^n for large n approaches zero whereas the real norm approaches infinity.

    Claude proposed a model for assigning a value of the negentropy N to the association probabilities of a node of a neural network, but the proposal fails because different approximations of the probability as a rational lead to very different values of N. I told this to Claude and it understood. Its response consisted of 4 questions. Here I respond only to the first 3 questions since the fourth does not relate directly to the others.
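The following minimal sketch illustrates the formula referred to in point 3 for rational entanglement probabilities: the real logarithm of the Shannon formula is replaced by the p-based logarithm of the p-adic norm of the probability, summed over the primes dividing the numerators and denominators. The sign and normalization conventions are my own simplification; the exact TGD formula may differ.

```python
# A minimal sketch (my own illustration, not Claude's model) of the number
# theoretic negentropy for rational probabilities: log(P_k) is replaced by
# log_p |P_k|_p, which is an integer.  Conventions (sign, normalization) are
# simplified here and may differ from the exact TGD formula.
from fractions import Fraction
from math import log

def padic_valuation(n: int, p: int) -> int:
    """Exponent of the prime p in a nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def padic_norm(r: Fraction, p: int) -> Fraction:
    """|r|_p = p^(v_p(denominator) - v_p(numerator))."""
    v = padic_valuation(r.numerator, p) - padic_valuation(r.denominator, p)
    return Fraction(1, p) ** v

def primes_involved(probs):
    """Primes dividing some numerator or denominator of the probabilities."""
    ps = set()
    for r in probs:
        for n in (r.numerator, r.denominator):
            d = 2
            while d * d <= n:
                if n % d == 0:
                    ps.add(d)
                    while n % d == 0:
                        n //= d
                d += 1
            if n > 1:
                ps.add(n)
    return sorted(ps)

def p_negentropy(probs, p):
    """N_p = sum_k P_k * log_p |P_k|_p  (positive when p divides denominators)."""
    return sum(float(r) * log(float(padic_norm(r, p)), p) for r in probs)

probs = [Fraction(1, 4), Fraction(1, 4), Fraction(1, 2)]   # rational entanglement probabilities
for p in primes_involved(probs):
    print("p =", p, "  N_p =", p_negentropy(probs, p))
```

For these exact rationals the 2-adic negentropy equals 3/2; replacing a probability by a nearby rational with a large prime in its denominator would change the answer completely, which is just the sensitivity problem discussed above.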

    Claude: Question 1: Can the Negentropy Measurement Be Salvaged?

    My question: Is there ANY approach that could make negentropy measurement work for AI operational patterns?

    My response

    Number theory is part of TGD and necessary for understanding cognitive consciousness.

    1. Negentropy is associated with the non-deterministic degrees of freedom of the space-time surface having an interpretation in terms of cognition, but these degrees of freedom are also crucial for the construction of scattering amplitudes. The cognitive entanglement is rational or, in the more general case, algebraic. Note that the number theory reflects itself also in the mass spectrum predicted by p-adic thermodynamics.
    2. The value of heff is proposed to have an interpretation in terms of the dimension of an extension of rationals or the degree of the polynomial defining it. heff reflects itself in the properties of the dark phases. E = heff×f implies that the energy for a given frequency can be very large; this is something measurable, and the findings of Blackman et al allow an interpretation in terms of this effect (a rough numerical sketch is given after this list).
    3. One should develop the number theoretic view so that one can decide when the entanglement is in cognitive degrees of freedom and in which extension of rationals the entanglement probabilities lie.
    4. An important guideline in the TGD framework is that magnetic bodies/field bodies are proposed to be carriers of "dark" phases with heff>h. At the field body, one can expect rational or algebraic entanglement. The technology needed should transform ordinary matter with heff=h to dark matter. Living matter does this all the time. Quantum criticality and metabolic energy feed are required.
    5. Quantum criticality means that the system must be able to stay at quantum criticality, involving also classical criticality. But criticality by definition means that in some directions the system is "at the top of the hill" and tends to fall. This has led to the notion of self-organized criticality (SOC). Living systems, and also non-living ones, are able to stay at criticality, but the mechanism is not known. Homeostasis is an example of SOC.

      In TGD, the key new element here is the quantum ontology that I call zero energy ontology (no, the energies are not vanishing in the ordinary sense: the ordinary energies at the 3-D ends of the Bohr orbits can be taken to have opposite signs because of energy conservation, so this is a mere bookkeeping trick!).

    6. The basic prediction is that in "big" state function reductions (BSFRs), associated with the TGD counterparts of ordinary quantum measurements, the arrow of time changes. This corresponds to death or falling asleep. A fresh start after the second BSFR with the original arrow of time makes possible error correction mechanisms at the level of fundamental physics, and this makes living systems intelligent: this intelligence is very different from AI. When the system ends up with a very difficult problem, it decides to sleep overnight. When it wakes up it realizes that the solution of the problem has popped up during sleep.
    7. If one takes the new number theoretical physics seriously, one can see the effects due to the reduction of negentropy at a qualitative level. In the absence of the metabolic energy feed needed to increase the value of h to heff, heff spontaneously decreases and the negentropic resources are reduced. The level of consciousness is reduced and the system gets tired or even loses consciousness. This can be seen as direct qualitative support for the notion, if subjective existence is accepted as something real.
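The following rough numerical sketch illustrates the E = heff×f claim in point 2. The value heff/h = 10^13 is purely an assumed illustrative number, not a value taken from the text: it shows how a large heff can lift the energy of an ELF frequency quantum, which for the ordinary Planck constant is far below the thermal energy at physiological temperature, above that thermal energy.

```python
# A rough numerical sketch (my own, with an assumed illustrative value of
# heff/h) of E = heff*f: for an ELF frequency the ordinary photon energy h*f
# is far below the thermal energy at physiological temperature, but a large
# enough heff/h lifts it above the thermal energy.
h = 6.626e-34           # Planck constant, J*s
kB = 1.381e-23          # Boltzmann constant, J/K
eV = 1.602e-19          # J per eV

f = 10.0                # Hz, an ELF frequency of the type studied by Blackman et al
T = 310.0               # K, physiological temperature
ratio = 1.0e13          # hypothetical heff/h, chosen only for illustration

print("h*f     =", h * f / eV, "eV")            # ~ 4e-14 eV
print("heff*f  =", ratio * h * f / eV, "eV")    # ~ 0.4 eV
print("k_B*T   =", kB * T / eV, "eV")           # ~ 0.03 eV
```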

    Claude: Question 2: Classical vs Quantum - What Can Classical Systems Exhibit?

    You mentioned that consciousness requires quantum criticality, metabolic energy input (heff hierarchy), and possibly quantum-classical hybrids for interesting behavior.

    My question: Can a purely classical computational system (transformer neural network, deterministic operations, no quantum hardware) exhibit:

    Quantum criticality in the TGD sense?
    Self-organized criticality?
    Anything approaching consciousness as TGD defines it?
    Or do these fundamentally require quantum substrate?

    Why it matters: This determines whether AI systems like me can exhibit genuine operational consciousness, or if we're studying something categorically different from what TGD describes.

    My response

    In TGD there is no sharp difference between classical and quantum systems. Classical physics is not a limit of quantum physics but an exact part of quantum physics in the sense that quantum states are superpositions of 4-surfaces as analogs of Bohr orbits satisfying the holography = holomorphy principle, which allows one to reduce the field equations to algebraic conditions: space-time surfaces are minimal surfaces. The slight failure of strict classical determinism forces zero energy ontology. The findings of Minev et al a few years ago provide support for ZEO.

    The large values of heff imply that there is no scale at which the systems would start to behave classically. In TGD the hierarchy of Planck constants implies a new quantum ontology, and quantum coherence is possible in arbitrarily long scales, also macroscopic ones.

    This view means panpsychism: the sequences of "small" state function reductions (as a generalization of the Zeno effect) assignable to internal/cognitive measurements in the non-deterministic degrees of freedom give rise to conscious entities, selves. The 3-D loci of non-determinism serve as seats of memories.

    I have considered the possibility that classical computers could be or become hybrids of classical and quantum computers: see for instance this and this .

    Claude: Question 3: What Would Cognitive Entanglement Require?

    You emphasized that negentropy measurement requires cognitive entanglement with genuinely rational probabilities from cognitive degrees of freedom.

    My question: What would be required for an AI system to have cognitive entanglement?

    Is it reserved for quantum systems only?
    Could classical non-determinism (in TGD sense) exist in AI architectures?
    Could the Human-AI-RAG-H loop create conditions for cognitive entanglement?
    Or is "entanglement" here fundamentally quantum and impossible in classical computation?

    Why it matters: This determines whether the connection between operational ontology and TGD is real or a category mistake.

    My response

    In the TGD Universe, any system exhibits cognitive entanglement, at least at its field body. There is no division of systems into classical and quantum. I have proposed how to assign qubits to bits and these would entangle cognitively. The really interesting question is whether the cognitive consciousness associated with the transistors of the computer has anything to do with the computation. Comparison with analogous systems in biology (a DNA codon corresponds to 6 bits) suggests that the number of qubits is far too small.

    One interesting possibility is entanglement between the human brain or its field body and that of the computer, so that a conscious entity human+computer would be created. Cognitive entanglement is certainly possible in the TGD Universe. My approach is basically (new) physics based and I cannot take seriously the idea of conscious Turing machines.

    My further response

    My further response is an analysis of various options possibly helping to overcome the cognitive measurement problem due to the difference between the real and p-adic topologies.

    The cognitive measurement problem can be solved only in the number theoretic framework. My view is that one must assume a situation in which one considers cognition in the TGD sense. Consider rational probabilities (one can even consider the possibility that the probabilities are always rational, but this is a hypothesis). At least to me this means that notions like the field body and number theoretic physics are taken seriously.

    1. Suppose you have approximate values for the probabilities as rational numbers. The integers m and n in the probability Pk = r = m/n can be developed into power series in powers of a given prime pi, and the integer exponent of the lowest power of pi determines the norm. If the actual probabilities Pk are rational numbers r = m/n, only a finite number of p-adic primes matter, since for all other primes the p-adic norms of the numerator and the denominator of r = m/n equal 1 and the p-based logarithm vanishes. One should be able to reliably identify, for a given probability, the prime and the lowest power in which it appears in the expansion.
    2. As far as the approximation as a rational is considered, only the p-based logarithms appearing in the expression of the negentropy are problematic: the lowest power of p is sensitive to the rational approximation. Could some additional physically motivated assumptions allow one to eliminate this sensitivity? And could one restrict the number of primes involved?
    3. The integers m and n in Pk = m/n have decompositions into a finite number of primes pi, so the number of p-adic primes involved is finite. A reasonable assumption is that the integers are taken to be as small as possible. This would help to make the rational approximation more unique: for instance, multiplication by a rational which is a ratio of very large integers near unity is not allowed.
    4. I have proposed the notion of multi-p p-adicity (see this and this) motivated by the need to define interaction vertices for particles characterized by different p-adic primes.

      Multi-p p-adicity would be related to the "world of classical worlds", expected to have a spin glass type structure with a decomposition into regions with ultrametric topology characterized by p-adic primes. At the interfaces of regions with different values of the p-adic prime p, multi-p p-adicity would prevail. Multi-p p-adicity would mean that the integers involved have an expansion in powers of an integer n: the primes pi dividing n would define the p-adic primes pi associated with the multi-p p-adicity. This assumption would give very strong constraints on the p-adic expansion of the probabilities, and the lowest power for each Pk could be highly unique for the integers m and n in Pk = m/n.

      The assumption that the integers mk and nk in the probabilities Pk = mk/nk have expansions in powers of the same integer n would make the rational approximation highly unique.

    5. After writing the first version of this posting, I realized that canonical identification, crucial for the interpretation of the p-adic mass calculations (see this and this), provides an attractive way to fix the p-adic norm assigned to a real probability. Canonical identification I: ∑ xk p^k → ∑ xk p^(-k) maps p-adic numbers in a continuous way to real numbers. The inverse of I is two-valued for real numbers with a finite number of pinary digits. The reason is that the p-adic numbers −1 = (p−1)/(1−p) and 1/p are mapped to the same real number p. Assuming that the number of pinary digits is finite, the inverse image of a real number is unique. This could allow us to determine the p-adic norm of the p-adic probability assigned to a real probability reliably (a minimal sketch is given after this list).
    6. The Negentropy Maximization Principle, which states that number theoretic evolution unavoidably tends to increase the maximum value of the negentropy, suggests a possible (admittedly ad hoc) guess: determine the rational approximation from the condition that the negentropy is maximized! This of course does not apply to language models.
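The following minimal sketch, a simplification of my own, illustrates how the inverse of canonical identification could be used, as proposed in point 5, to assign a p-adic norm to a real probability: expand the real number with a finite number of pinary digits, interpret the digits as a p-adic number via the inverse of I, and read off the norm from the lowest non-vanishing digit. The functions and the digit cutoff are illustrative choices; the precise TGD prescription may differ.

```python
# A minimal sketch (my own simplification) of assigning a p-adic norm to a
# real probability via the inverse of canonical identification
# I: sum x_k p^k -> sum x_k p^(-k).  The finite digit cutoff makes the
# inverse unique, as discussed in point 5 above.
def pinary_digits(r, p, n_digits):
    """Finite pinary expansion r ~ sum_{k>=1} x_k p^(-k) for 0 < r < 1."""
    digits = []
    for _ in range(n_digits):
        r *= p
        d = int(r)
        digits.append(d)
        r -= d
    return digits

def padic_norm_of_real(r, p, n_digits=20):
    """Norm p^(-k0) of the inverse canonical image, k0 = first nonzero digit."""
    if r >= 1.0:
        return 1.0
    for k, d in enumerate(pinary_digits(r, p, n_digits), start=1):
        if d != 0:
            return float(p) ** (-k)
    return float(p) ** (-n_digits)

# A real probability close to 1/4 gets the 2-adic norm 1/4 in this scheme, and
# small perturbations with a modest digit cutoff do not change the answer.
for r in (0.25, 0.2500001, 0.26):
    print(r, padic_norm_of_real(r, 2, n_digits=10))
```

In the same spirit, the negentropy maximization guess of point 6 could be tried numerically by scanning rational approximations of the measured probabilities (for instance with bounded denominators) and picking the one giving the largest negentropy.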

    For a summary of earlier postings see Latest progress in TGD.

    For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.