https://matpitka.blogspot.com/2025/02/

Wednesday, February 26, 2025

Why would life be experienced as a panorama near the moment of death?

My email friend DU asked a highly interesting question: Why would the brain go through a recall of important life events before death?

The question is an excellent challenge for the TGD based view of conscious experience. It splits into many sub-questions. What is conscious experience? What is life? What happens in death? What happens in memory recall? Why would life be seen as a panorama before the moment of death?

1. What memories and memory recall are in the TGD Universe?

  1. Memories are not possible in the standard QM based theory of consciousness, since the state after a quantum jump carries no information about the state before it.
  2. In TGD the small failure of classical determinism plus holography predicts that quantum states in zero energy ontology (ZEO) are superpositions of 4-D Bohr orbits of 3-surfaces regarded as analogs of particles. In the standard ontology one would have superpositions of 3-D surfaces.

    In ZEO, quantum states are superpositions of these Bohr orbits. However, all Bohr orbits start from the same 3-surface, and repeated small SFRs (SSFRs) do not change this surface at the passive boundary (PB) of the causal diamond. At the active boundary (AB) the state changes, and this gives rise to the self as a conscious entity experiencing the flow of time. This corresponds to the Zeno effect, in which the quantum state does not change in repeated quantum jumps.

  3. Conscious experience is about the space-time regions where the non-determinism resides. The 3-D loci of non-determinism would serve as sources of conscious experience and as memory seats. An SSFR for a given seat at time t_n would replace the Bohr orbit with a new one at times t > t_n. The superpositions of Bohr orbits would be like quantum stories of a life. These stories would begin from the same 3-D quantum state at the PB.

    Each SSFR involving a memory recall would mean a quantum measurement for some memory seat as a locus of non-determinism. For instance, a localization to one alternative state of the memory locus would occur: this would mean the selection of one possible classical future.

2. How to achieve the memory recall as a quantum measurement?

Option 1: The first view is that in an active memory recall a negative energy signal is sent to the geometric past. The signal is received by the memory seat and induces a quantum jump changing the 3-D state of the memory seat and therefore also the future. This in turn induces a signal propagating to the future brain, giving rise to the memory as a conscious experience.

Option 2: The second view is that the memory loci at t = t_n are quantum entangled, as they would be if a superposition of Bohr orbits is in question. What would be measured would be the state of the memory locus at t_n. This would mean a selection of one future.

3. What would happen in death?

In the TGD Universe, death and birth would be completely universal phenomena occurring in all scales, and biological death and birth would be only special instances of them.

  1. In ZEO death would mean a "big" SFR (BSFR) in which the arrow of time changes. This measurement is induced by interaction with the external world at the AB.

    The density matrix is a fundamental observable and is measured in the BSFR. In the interaction of the AB with the external world, the density matrix of the AB ceases to commute with the observables whose eigenstate the state associated with the PB is.

  2. The measurement of the density matrix of the AB occurs and necessarily changes the state at the PB. The roles of the AB and PB are interchanged and the arrow of geometric time changes. The old self dies and a reincarnated self with an opposite arrow of geometric time is born.

4. Why would death make it possible to recall the entire life?

The total memory recall should occur before the BSFR meaning the death of self. All memory seats of self, which define subselves, should be activated simultaneously, each involving an SSFR.

Option 1: If negative energy signals to the geometric past induce memory recall, then a very strong negative energy signal covering the entire frequency spectrum, a kind of white noise, could generate it. All memory seats would be activated in a time order propagating from the now to the geometric past, and the last activated seat would be near or at the moment of birth. Life would be re-lived in reverse order. The arrow of geometric time for the memory seats as subselves would change: they too would die and reincarnate with a reversed arrow of time!

After that, the even bigger BSFR, in which the self itself dies, could occur, and also the arrow of time of the self would change.

Option 2: In this case the entanglement between neighboring memory seats would be reduced by a sequence of SSFRs propagating to the geometric past step by step, stopping at the PB. The outcome would be life experienced in reverse time order.

In the blog discussion a person who had experienced the life panorama reported that the life review was in some sense objective: as if the life were seen from a higher perspective. This could be possible if the NDE involves a transition to a higher reflective level at which the person is a subself. The memories would be memories of the higher level self about the person. This transition might mean that the third person perspective, which could always be present, is no longer masked by the sensory, motor and cognitive input. Also a phase transition in which the algebraic complexity of the magnetic body increases can be considered.

The holography= holomorphy vision leads to a concrete proposal for what reflective levels of consciousness could mean (see this). They could reduce to a functional composition f→ g(f) of an analytic map g: C2→ C2 with a map f: H=M4× CP2 → C2, where f is analytic with respect to the hypercomplex coordinate and the 3 complex coordinates of H. When g reduces to (g,Id), acting trivially in the second factor of C2, this gives compositions of complex maps C→ C, and iteration of g produces Mandelbrot fractals and Julia sets.
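Iteration of a complex map is standard fare in complex dynamics. As a minimal illustration (plain textbook escape-time iteration, with no TGD content), membership of a point in the filled Julia set of the quadratic family z → z² + c can be probed as follows:

```python
def escape_time(z, c, max_iter=200, bound=2.0):
    """Return the step at which the orbit of z under z -> z*z + c escapes
    |z| > bound, or None if it stays bounded (z then lies, numerically,
    in the filled Julia set of c)."""
    for n in range(max_iter):
        if abs(z) > bound:
            return n
        z = z * z + c
    return None

c = -1.0  # c = -1 gives the period-2 orbit 0 -> -1 -> 0 -> ...
assert escape_time(0.0, c) is None      # bounded orbit: inside the set
assert escape_time(2.0, c) is not None  # escapes: outside the set
```

Repeating the same g in the composition hierarchy corresponds exactly to this kind of iteration; different choices of g (or c) give different Julia sets.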

See the article Does Consciousness Survive Bodily Death? or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Tuesday, February 25, 2025

The realization of topological qubits in many-sheeted space-time

Microsoft has unveiled Majorana 1, claimed to be the world’s first quantum processor powered by topological qubits (see this).

1. How could one stabilize the computations and qubits?

The basic problem is how to realize computations in a stable way and how to make qubits stable enough. Concerning computation, topology comes to the rescue.

  1. Topological quantum computations (see this) can be represented as braidings, which are topologically stable under small deformations. Each braid strand represents the unitary evolution of a particle carrying a qubit, and the braiding operation represents the computation. The braiding can be either a time-like dynamical operation for point-like particles in a plane, or space-like for braids connecting two planes.
  2. For a 2-D plane containing particles as punctures, the homotopy group is non-abelian. This means that the rotation of a puncture around a second puncture of, say, a bound state can transform the state so that the transformation is not a mere phase factor but a rotation which changes the directions of the spins of the particles involved. Therefore the exchange of particles, which can be seen as the basic braiding operation changing the braid strands, can induce an operation which can be used as a basic building brick of a topological quantum computation.
How could one obtain stable qubits? A qubit represented as a spin is not thermodynamically stable, and extremely low temperatures are required. This is the case also for the proposed topological quantum computation: the reason is now that superconductivity is required, and this is possible only at temperatures of the order of millikelvins. In any case, the notion of qubit should be topologized. How to achieve this? Here Majorana bound states have been proposed as an answer (see this).
  1. Non-Abelian braid statistics means that the exchange of particles, realized as a 2-D rotational flow generated by braiding, induces, instead of the sign change of Fermi statistics, a non-Abelian unitary transformation of the state, which can change the directions of the spins of the anyons. A 2π rotation would induce a non-Abelian rotation instead of a mere sign change or phase factor. This is possible only in dimension 2, where the homotopy group can be non-abelian if there are punctures in the plane containing the braids. Similarly, swapping two Majorana fermions in a braid produces an SU(2) rotation and can flip the spins and thus the qubits: this swap would be an essential operation in quantum computing. In order to have a non-trivial topological quantum computation, one must have non-Abelian braid statistics characterized by a Lie group; the rotation group SO(3) or its covering SU(2) is the minimal option.
  2. The bound state of two Majorana fermions associated with planar punctures, anyons, would thus obey non-Abelian braid statistics. It is also possible to affect the second fermion of a Majorana bound state by rotating a puncture containing a fermion around it. Braidings could therefore represent unitary transformations having an interpretation as topological quantum computations. The Wikipedia article mentions several realizations of Majorana bound states in superconductors. Quantum vortices in superconductors can provide this kind of states, as can the ends of superconducting wires or of line defects, and also the fractional quantum Hall effect. The realization studied by Microsoft has the Majorana fermions at the ends of a superconducting wire.
  3. As I understand it, a condensed matter Majorana fermion would correspond formally to a superposition of an electron and a hole. The statistics would no longer be normal Fermi statistics but that of a non-abelian anyon. The weird sounding property of this statistics is that the creation operator is equal to the annihilation operator. One obtains two creation operators corresponding to the two spin states, and the square of the creation operator is the unit operator: for ordinary fermions it vanishes. This implies that the Majorana fermion number is defined only modulo 2, so that only the number of fermions modulo 2 matters. Also the anticommutator of two creation operators at different points is equal to the unit operator, so that the system is highly nonlocal.
  4. How could the braiding be realized? One can consider two options. A dance metaphor makes it possible to understand the situation. Imagine that the particles are dancers on a parquet. The dance would give rise to a time-like braiding. If the feet of the dancers are tied to the wall of the dancing hall by threads, also a space-like braiding is induced since the threads get tangled.
  5. In the TGD framework, the dancers would correspond to particle-like 3-surfaces moving in a plane, and the dance would define the dancing pattern as a time-like braiding. This classical view is actually exact in the TGD framework since classical physics is an exact part of quantum physics in TGD. If the particles are connected to the wall by threads realized as monopole flux tubes, a space-like braiding is induced.
  6. These threads bring to mind the wires connecting a superconductor to another object and containing Majorana fermions at their ends. Now one end would be fixed and the other would correspond to a moving particle. The Majorana bound states would correspond to the ends of the thread, and the superconducting flow at the moving end would correspond to the dynamical braiding.
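The non-Abelian exchange statistics described above can be made concrete with the standard four-Majorana representation used for Ising anyons (my illustration with textbook matrices, not a construction from the article): the exchange of Majoranas i and j is implemented by the unitary U = exp((π/4) γ_i γ_j), and braid operations sharing a Majorana do not commute.

```python
import numpy as np

# Four Majorana operators on two qubits (standard Jordan-Wigner/Pauli
# construction): hermitian, square to 1, pairwise anticommuting.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
I = np.eye(2, dtype=complex)

g = [np.kron(X, I), np.kron(Y, I), np.kron(Z, X), np.kron(Z, Y)]

def braid(i, j):
    """Exchange of Majoranas i and j: exp((pi/4) g_i g_j) = (1 + g_i g_j)/sqrt(2)."""
    return (np.eye(4, dtype=complex) + g[i] @ g[j]) / np.sqrt(2)

B01, B12 = braid(0, 1), braid(1, 2)

assert np.allclose(B01 @ B01.conj().T, np.eye(4))  # braids are unitary
assert not np.allclose(B01 @ B12, B12 @ B01)       # and non-Abelian
```

The non-commutativity of B01 and B12 is exactly the property that makes braiding usable as a computational primitive: the order of exchanges matters.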

2. Algebraic description of Majorana fermions

The dissertation of Aran Sivagure contains a nice description of Majorana fermions (see this). Majorana fermions would be quasiparticles possible in a many-fermion state. They would create from a fermion state with N fermions a superposition of states with fermion numbers N+1 and N-1. They would be created by the operators γi^± = ai ± ai^† formed from the fermionic oscillator operators satisfying the standard anticommutation relations {ai, aj^†} = δi,j. Note that one can also consider the more general operators γi^± = exp(iφ)ai ± exp(-iφ)ai^†.

The anticommutators would be {γi^ε1, γi^ε2} = 2 Id, where ε1=+1 and ε2=-1 and Id denotes the unit operator. Therefore the statistics is neither ordinary Bose nor Fermi statistics but a non-Abelian statistics.

What is so remarkable is that also the anticommutators {γi^±, γj^±} = 2 Id hold even when i and j label different points. Therefore these operators are highly non-local, implying long range quantum coherence.

3. Could many-sheeted spacetime allow a more fundamental description of Majorana like states?

The problematic aspect of the notion of Majorana fermion is that many-fermion states in this kind of situation do not in general have a well-defined fermion number. Physically, fermion number conservation is a superselection rule, so that the superposition of a fermion and a hole must physically correspond to a superposition of fermion states, where the hole corresponds to a fermion outside the system. TGD suggests an elegant solution to the problem.

  1. In condensed matter physics Majorana fermions could be assigned with the vortices of superconductors. In the TGD Universe, these vortices could correspond to monopole flux tubes as body parts of the field body. The states created by γi would be superpositions of states in which the fermion is at the monopole flux tube or at the normal space-time sheet representing the part of the condensed matter system that we see. The Majorana description would be an effective description.
  2. The Majorana creation operators γi would be replaced with operators which shift the fermion from the ordinary space-time sheet to the monopole flux tube and vice versa. From the geometric interpretation it is clear that this operation, repeated twice, must act trivially. The operation must be representable in terms of annihilation and creation operators. The operators γi would be expressible as products of creation and annihilation operators acting at the space-time sheets 1 and 2, and one would have

    γi^ε = a1^†a2 + ε a2^†a1 .

    One can consider either commutation or anticommutation relations for these operators. Since the operation does not change the total fermion number, the interpretation as a bosonic operator is natural, and therefore commutation relations look more plausible.

    Neglecting for a moment the indices labelling positions and spins, a rather general expression for the operator γ^ε would be

    γ^ε = a1^†a2 + ε a2^†a1 .

    This operator is hermitian for ε=+1 and antihermitian for ε=-1. If the fermionic anticommutation relations hold, one has for the square

    (γ^ε)^2 = X+Y ,

    X = a1^†a2a1^†a2 + ε^2 a2^†a1a2^†a1 ,

    Y = ε (a1^†a2a2^†a1 + a2^†a1a1^†a2) .

    If one can assume anticommutativity for the oscillator operators associated with the flux tube and the ordinary space-time sheet, one has X=0. For Y one obtains

    Y = ε (N1 a2a2^† + N2 a1a1^†) = ε (N1+N2-2N1N2) .

    The eigenvalues of (N1+N2-2N1N2) vanish for (N1,N2) ∈ {(1,1),(0,0)} and are equal to 1 for (N1,N2) ∈ {(1,0),(0,1)}, so that the eigenvalues of (γ^ε)^2 are equal to 0 and ε.

    One can also consider the commutator, which is perhaps more natural on the basis of the physical interpretation. The commutator obviously vanishes for ε1=ε2. For (ε1,ε2)=(1,-1) one has

    [γ^1, γ^{-1}] = -2 (a1^†a2a2^†a1 - a2^†a1a1^†a2) = -2 (N1 a2a2^† - N2 a1a1^†) = 2 (N2-N1) .

    For ε1=-ε2, the eigenvalues of 2(N2-N1) vanish for (N1,N2) ∈ {(1,1),(0,0)} and belong to {2,-2} for (N1,N2) ∈ {(1,0),(0,1)}.

    Both the square and the commutator resemble the Majorana case but are not proportional to the unit operator, since the eigenvalues belong to the set {0,ε} for the square (γ^ε)^2 and to the set {0,-2,2} for the commutator.
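The two-sheet transfer algebra above is easy to check numerically. Below is a small verification sketch of my own (the sheet labels follow the text; the Jordan-Wigner matrix construction is standard and not from the article):

```python
import numpy as np

# Two fermionic modes via Jordan-Wigner: mode 1 = ordinary space-time sheet,
# mode 2 = monopole flux tube (the labels are the article's interpretation).
I2 = np.eye(2)
sz = np.diag([1.0, -1.0])
am = np.array([[0.0, 1.0], [0.0, 0.0]])  # single-mode annihilation: a|1> = |0>

a1 = np.kron(am, I2)
a2 = np.kron(sz, am)   # the sz string keeps {a1, a2} = 0
N1 = a1.T @ a1         # occupation of sheet 1 (real matrices, so .T = dagger)
N2 = a2.T @ a2

def gamma(eps):
    """Transfer operator gamma^eps = a1^dagger a2 + eps a2^dagger a1."""
    return a1.T @ a2 + eps * a2.T @ a1

for eps in (+1, -1):
    # the square is eps*(N1 + N2 - 2 N1 N2), not a multiple of the identity
    assert np.allclose(gamma(eps) @ gamma(eps),
                       eps * (N1 + N2 - 2 * N1 @ N2))

# the commutator of the two transfer operators is 2*(N2 - N1)
comm = gamma(1) @ gamma(-1) - gamma(-1) @ gamma(1)
assert np.allclose(comm, 2 * (N2 - N1))
```

The square is diagonal in the occupation basis with eigenvalues {0, ε}, so the transfer operator is Majorana-like only up to an occupation-dependent factor, exactly as stated above.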

4. OH/O-+p states as topological qubits?

While writing this, I noticed that the OH/O-+p qubits, where p is a dark proton at a monopole flux tube, which I proposed earlier to play a fundamental role in biology and perhaps even make quantum counterparts of ordinary computers possible, are to some degree analogous to Majorana fermions. An extremely nice feature of these qubits would be that superconductivity, in particular bio-superconductivity, would be possible at room temperature. This would be made possible by the new physics predicted by TGD both at the space-time level and at the level of quantum theory.

  1. In TGD, space-times are surfaces in H=M4× CP2 and many-sheetedness is a basic prediction. Another related prediction is the notion of the field body (magnetic/electric body). The number theoretic view of TGD predicts a hierarchy of effective Planck constants making quantum coherence possible in arbitrarily long length scales. A second new element is zero energy ontology, which modifies quantum measurement theory profoundly and solves its basic problem.
  2. The OH/O-+p qubit means that one considers protons, but electrons can also be considered. The proton is either in the OH group associated with a water molecule, in the simplest situation in which the Pollack effect occurs, or is a dark proton at a monopole flux tube. A proton in OH would be the analog of a non-hole state and the dark proton at the flux tube the analog of a hole state.
  3. What is new is that the proton being on/off the space-time surface would represent a bit. For Majorana fermions, the situation is rather similar: the hole corresponds to the electron being "somewhere else", which could also correspond to being on a monopole flux tube, as I have suggested. In standard quantum computation, a qubit corresponds to a spin. The analog of the Majorana qubit would be a quasiparticle which is a superposition of the transitions OH ↔ O-+p and of the transitions in which nothing happens: OH goes to itself and O-+p goes to itself. The Majorana property would correspond to the fact that the transfer between the two space-time sheets repeated twice is trivial.
  4. If the energies of OH and O-+p are close to each other, the situation is quantum critical: the qubits can be flipped and a process similar to quantum computation becomes possible. Also superconductivity becomes possible at the magnetic flux tubes, which are analogous to the magnetic vortices appearing in superconductors and in the fractional quantum Hall effect. These are truly topological qubits also because the topologies of the space-time surface for different bit values are different. However, the energy difference must be larger than the thermal energy, since otherwise the qubits become unstable. With the help of electric fields, the qubits can be tuned to quantum criticality so that their flipping becomes possible.
  5. The above argument suggests that non-abelian statistics could be understood also for OH/O-+p qubits. The anticommutation/commutation relations for the operators transferring protons to the magnetic body would not be identical to those for Majorana oscillator operators, whose squares are proportional to the unit operator, which is essentially the Majorana property. I have proposed a possible realization of this in a more general case. The exchange of dark protons/qubits would be induced by the reconnection of monopole flux tubes: it would therefore be a purely topological process. Nothing would be done to the dark protons, but the flux tubes would be reconnected: strands AB and CD would become strands AD and BC. At the same time, the dark protons would become associated with different O-'s. Could the final result of this exchange be represented as an SU(2) rotation for the entire space?
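For orientation, the stability condition "energy difference larger than thermal energy" can be made quantitative at room temperature (a sketch; the splitting value ΔE is a hypothetical placeholder, not a number from the article):

```python
k_B = 8.617333262e-5   # Boltzmann constant in eV/K
T = 300.0              # room temperature in K
kT = k_B * T           # thermal energy, about 0.026 eV

delta_E = 0.1          # hypothetical qubit energy splitting in eV (assumed)
print(f"kT at {T:.0f} K = {kT:.4f} eV; qubit stable: {delta_E > kT}")
```

Any concrete realization of the qubit would thus need a splitting comfortably above ~0.026 eV to survive at room temperature, while remaining small enough for electric-field tuning to quantum criticality.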
See the article The realization of topological qubits in many-sheeted space-time or the chapter Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life?.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, February 22, 2025

Gödel, Lawvere and TGD

The tweets of Curt Jaimungal (see this) inspired an attempt to understand Gödel's incompleteness theorem and related constructions from the TGD point of view.

It has remained somewhat unclear to me how the notion of conscious self is defined in theories based on pure mathematics. I however understand that the conscious system is identified as an object x of a category X, and that the view of the self about itself would be a set of morphisms fx: X→ X as structure-preserving descriptions, morphisms which would give information about x to the other selves y as objects of X. One can define X^Y as an object having as its elements the morphisms Y→ X; X^Y would correspond to X as seen by the object Y.

This associates to every object x ∈ X a morphism fx ∈ X^X of the category X into itself. One could say that X is embedded in X^X and that fx corresponds to models of x for the other selves y ∈ X. Under conditions formulated by Lawvere, any morphism f in X^X has a fixed point yf. In particular, for fx one can find yx such that fx(yx)=yx is satisfied. Under the assumptions of Lawvere, one can have yx=x, and this might be the case always. These kinds of objects x are very special, and one can wonder what their interpretation is.
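For reference, the Lawvere fixed-point theorem invoked here can be stated as follows:

```latex
\textbf{Theorem (Lawvere).}
In a cartesian closed category, if there exists a point-surjective morphism
\[
  \phi : A \longrightarrow Y^{A},
\]
then every endomorphism $f : Y \to Y$ has a fixed point: there is a point
$y_f : 1 \to Y$ with $f \circ y_f = y_f$.
```

Cantor's theorem, Gödel's incompleteness theorem and Tarski's undefinability theorem can all be read as contrapositive instances of this statement.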

In particular, Gödel's sentence is a fixed point of the map f which associates to a sentence y the sentence f(y) stating that y is not provable in the formal system considered. For the fixed point x, f(x)=x holds, so that x states its own unprovability: x is not provable but is true. Could this mean that this kind of object is self-conscious and has a self model?

On the other hand, self-reflection, which means that one becomes at least partially aware of the content of one's own consciousness, can be claimed to create descriptions of itself, and the fixed point property suggests an infinite number of levels or possibly limit cycles: for Julia sets only non-trivial limit cycles are present. Infinite regression however means a paradox. On the other hand, one can argue that self-representation is trivial for a fixed point.

What is the situation in TGD? In the following, the idea that the laws of physics, identified in the TGD framework as the dynamics of space-time surfaces, are analogous to the axioms of a formal system is discussed in detail from the perspective of metamathematics or metaphysics.

The laws of physics as analogs for the axioms of a formal system

The basic idea is that the laws of physics, as they are formulated in the TGD framework (see this and this), can be regarded as analogs for the axioms of a formal system.

  1. A space-time surface, which by the holography= holomorphy vision is analogous to a Bohr orbit of a particle represented as a 3-surface, is analogous to a theorem. The slight classical non-determinism forces zero energy ontology (ZEO) (see this): instead of 3-surfaces, the analogs of Bohr orbits for 3-surfaces at the passive boundary (PB) of the causal diamond (CD) are the fundamental objects. By the slight classical non-determinism, there are several Bohr orbits associated with the same 3-surface X3 at the PB remaining unaffected in the sequence of "small" state function reductions (SSFRs). This is the TGD counterpart of the Zeno effect. The sequence of SSFRs defines a conscious entity, the self.
  2. The adelization of physics means that the real space-time surfaces obtained using an extension E of rationals are extended to adelic space-time surfaces. The p-adic analogs of the space-time surface would be correlates for cognition, and cognitive representations correspond to the intersections of the real space-time surface and its p-adic variants, with points having Hamilton-Jacobi coordinates in E (see this).
  3. Concerning Gödel, the most important question is how self-reference as a metamathematical notion is realized: how can space-time surfaces represent analogs of statements about space-time surfaces? In the TGD framework, the holography= holomorphy vision provides an exact solution of the classical field equations in terms of purely algebraic conditions. Space-time surfaces correspond to the roots of function pairs (f1,f2), where the fi are analytic functions of the Hamilton-Jacobi coordinates of H=M4× CP2, consisting of one hypercomplex and 3 complex coordinates.

    The meta level could correspond to maps g= (g1,g2): C2→ C2, where the gi are also analytic functions, acting on the function pairs f=(f1,f2): H→ C2 and giving new, number theoretically more complex, solutions. The space-time surfaces obtained in this way correspond to the roots of the composites gºf = (g1(f1,f2),g2(f1,f2)).

    g should act trivially at the PB of the CD in order to leave X3 invariant. One can construct hierarchies of composites of maps g, with an interpretation as hierarchies of metalevels. Iteration using the same g repeatedly would be a special case and would give rise to generalizations of Mandelbrot fractals and Julia sets.

  4. A second realization would be in terms of the hierarchy of infinite primes (see this), analogous to a repeated second quantization of a supersymmetric arithmetic quantum field theory for an extension E of rationals, starting from a theory with single particle boson and fermion states labelled by ordinary primes. One can also replace the ordinary primes with the primes of an algebraic extension E of rationals; this gives a second hierarchy. Also the Fock basis of WCW spinor fields relates to WCW like the set of statements about statements relates to the set of statements.
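In a single-variable toy model (my illustration, not from the article), one can check that the roots of f are inherited by the composite g∘f whenever g(0)=0, so that composition only adds new, "number theoretically more complex" roots on top of the old ones:

```python
import sympy as sp

z, w = sp.symbols('z w')

f = z**2 - 2              # toy analog of f: roots +-sqrt(2)
g = w**3 - w              # g(0) = 0, so the roots of f survive composition

composite = g.subs(w, f)  # the composite g(f(z))
roots_f = set(sp.solve(f, z))
roots_gf = set(sp.solve(sp.expand(composite), z))

assert roots_f <= roots_gf            # the old roots are kept
assert len(roots_gf) > len(roots_f)   # and new roots appear
```

This mirrors the requirement that g act trivially at the PB: the original holographic data (the old roots) stays fixed while the composite surface acquires additional structure.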
How space-time surfaces could act on space-time surfaces as morphisms

Could one, by assuming the holography= holomorphy principle, construct a representation for the action of a space-time surface X4 on other space-time surfaces Y4 as morphisms, in the sense that at least holomorphy is respected? In what sense could this kind of action leave a system associated with X4 fixed? Can the entire X4 remain fixed, or does only the 3-D end X3 of X4 at the PB remain fixed? In ZEO the latter is indeed true in the sequence of SSFRs made possible by the slight failure of classical determinism.

What could the action of X4 on Y4 be?

  1. The action of X4 on Y4 would be a morphism respecting holomorphy if X4 and Y4 have a common Hamilton-Jacobi structure (see this). This requirement is extremely strong and cannot be satisfied for a generic pair of disjoint surfaces X4 and Y4. The interpretation would be that this morphism defines a kind of perception of Y4 about X4, i.e. a representation of X4 by Y4.

    A naive proposal for the action of X4 on Y4 assumes a fixed point action for Y4=X4. The self-perception of X4 would then be trivial. The non-triviality of self-representation is in conflict with the fixed point property: this can be seen as the basic weakness of the proposal that conscious experience could be described using a formal system involving only the symbolic description but no semantic level.

  2. The classical non-determinism of TGD comes to the rescue here. It makes conscious memory and memory recall possible (see this and this), and the slightly non-deterministic space-time surface X4 as an analog of a Bohr orbit can represent geometrically the data making possible conscious memories about the sequence of SSFRs. The memory seats correspond to loci of non-determinism analogous to the frames spanning 2-D soap films. In the approach based on algebraic geometry, the non-determinism might be forced by the condition that space-time surfaces have no self-intersections. A second possibility is that space-time surfaces consist of regions which correspond to different choices of (f1,f2), glued together along 3-D surfaces.
  3. A purely classical self-representation would be replaced at the quantum level by a quantum superposition of the Bohr orbits for a given X3. In a sequence of "small" state function reductions (SSFRs), the superposition of Bohr orbits having the same end at the PB is replaced with a new one. SSFRs leave the 3-surfaces X3 appearing as the ends of the space-time surfaces at the PB invariant. The sequence of SSFRs, giving rise to the conscious entity, the self, would give rise to a conscious self-representation.
  4. The fixed point property for X4, making the self-representation trivial, would be weakened to a fixed point property for X3, and more generally for the 3-D holographic data.
How zero energy states identified as selves could act on each other as morphisms?

How can the superposition Ψ(X3) of Bohr orbits associated with X3 act as a morphism on Ψ(Y3)? The physical interpretation would be that Ψ(X3) and Ψ(Y3) interact: Ψ(X3) "perceives" Ψ(Y3) and vice versa, and sensory representations are formed. This sensory representation is also analogous to the quantum counterpart of the learning process of language models, producing associations and association sequences as analogs of sensory perceptions (see this).

  1. These "sensory" representations must originate from a self-representation. This requires a geometric and topological interaction of X4 and Y4: a temporary fusion of X4 and Y4 to form a connected 4-surface Z4. This would serve as a universal model for sensory perception. In the TGD inspired quantum biology, a temporary connection by monopole flux tubes serves as a model for this interaction. If the flux tubes serve as prerequisites and correlates for entanglement, entanglement could also be generated.
  2. The holomorphy of Z4 requires that X4 and Y4 have a common Hamilton-Jacobi structure during the fusion, but not necessarily before and after it. Therefore the defining analytic function pairs (f1,f2) (see this) can be different before, during, and after the fusion, and also for X4 and Y4 separately. This might be an essential element of classical non-determinism. The continuity requirement poses very strong conditions on the function pairs involved, so that the representations produced in the interaction would be highly unique. As already mentioned, also the absence of self-intersections could force classical non-determinism.

    The outcome of the temporary fusion would give rise to a representation of the action of X4 on Y4 and vice versa. The representation would be a morphism in the sense that the outcomes are holomorphic surfaces and the ends of X4 and Y4 at the PB of the CD remain unaffected.

  3. The fixed point property for Z4 making the self-representation trivial would be replaced with the fixed point property for Z3 and therefore also X3 and Y3.
  4. The time reversed variant of sensory perception has an interpretation as a motor action and would involve a pair of BSFRs induced by a subsystem of Z4. Now the end of Z4 at the PB of the CD would be changed: X4 would affect Y4 in a non-deterministic way. The construction of the representation of X4 on Y4 would reduce to the construction of a self-representation for Z4.
This view is inspired by the TGD view in which the self is identified as a sequence of non-deterministic SSFRs and is thus not "provable" and also has free will. The holographic data would be in the role of the assumptions of a theorem, which need not be proved and reduce to axioms, and the Bohr orbits would correspond to the theorems deducible from these assumptions. In the interaction of X3 and Y3 a larger self Z3 would be created, involving quantum entanglement. In this view, the infinite self-reflection hierarchy is replaced with a finite sequence of SSFRs providing new reflective levels, and the self is a dynamical object.

See the article Gödel, Lawvere and TGD or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, February 19, 2025

Collective unconscious, archetypes, and mathematics: the TGD view

I couldn't sleep and listened to Edward Frenkel's latest talk (see this) about the basics of mathematics. Frenkel is doing a great service by increasing the public understanding of what deep mathematical thinking really is. The topic was the idea of an archetype and its generalization to the idea of a mathematical archetype.

A. Jung's notions of collective consciousness and archetype

Collective unconscious and archetypes are Jung's basic concepts. I would however speak more precisely and replace "unconscious" with "unconscious for me". This could be regarded as the counterpart of the principle of relativity for the theory of consciousness. I would also do some nitpicking of the concept of "conscious-ness". The "-ness" reflects the materialistic archetype, which states that consciousness is a property of a material system. Among other problems, this leads to conflict with free will.

In psychology, different archetypes appear in fairy tales and especially in dreams. They also populate popular media. Mathematical concepts, too, would be archetypes: natural numbers are an archetype, as was suggested by Jung and, in different words, by many great mathematicians.

Jung associates three basic aspects with archetypes.

  1. We can be under the spell of an archetype. The archetype looks completely self-evident to us and we need not be even aware of it. Therefore we cannot question it and ask if there could be other options for seeing the world.

    A good example of this is the concept of the "shadow". Most of us have an image of ourselves as a good and nice person in every way, even though we realize that we are not always able to be that. But this archetype, unconscious to us, has a powerful spell on us; it controls us and to a high degree determines our perception of the world and our behavior in it.

  2. Projection is another aspect of archetype. We are constantly faced with contradictions because the shadow of ourselves is reflected in the behavior of others. They experience that we are not always nice people and will express it, not always politely. We become aware of this and get irritated and blame the outside world for the mirror being crooked. We may even choose our company in such a way that our own shortcomings, which we do not want to be aware of, are highlighted, for example, in our friends. This easily happens to politicians in power. There is no need to mention names in the current situation of the world.
  3. The comforting thing is that the archetypes can nevertheless be understood and integrated into one's own personality. We can become aware of the shadow and become mature to admit that we are not always good guys and accept this as a part of our personality. After this, we can control our behavior and our shadow can no longer hold us in its power. We do not only react but consciously control our behavior. This does not mean that we start to "think positively": we can be critical, we can raise our voices, and we can get angry about the problems of the real world. This is however a conscious choice, not a mere reaction. Disagreeing can become a pleasant art form instead of being a bloody battle.

B. Are also mathematical ideas archetypes?

Frenkel's idea is that mathematical ideas are also archetypes. They are in everyone's consciousness as intuitive archetypes. They are not necessarily understood precisely, even by a mathematician. The development of a new mathematical insight is not a computer program or a logically proceeding chain of reasoning, but a semi-conscious process: the mathematician just throws himself into the flow and lets it carry him. The goal, however, is to formulate the visions precisely using existing or even new mathematical concepts.

Now too, it happens that these archetypes, as only partially understood mathematical ideas, are projected to be properties of the external world, even if one sees that there are problems. The growth of mathematical understanding means that the mathematician becomes aware of the archetype and is able to formulate and analyze it and also realizes that it is only one option among many.

Frenkel presents 2-dimensional Euclidean plane geometry as an example. Every layman has an intuitive sensory image of it as the surface of Earth. The projection in this case means that we once took it for granted that our physical world, the Earth, is a flat plane even though everyday observations could have demonstrated that this is not true. For instance, seeing how a ship disappears below the horizon makes it clear that the Earth is spherical.

Already in ancient times, mathematicians realized that there is a spherical geometry. Frenkel illustrated this with the concept of a rectangular coordinate system, of whose existence Descartes first became aware: the plane can be divided into squares of the same size and shape by a coordinate grid consisting of geodesic lines. On the sphere, the small curvilinear squares defined by the counterpart of the coordinate grid shrink to points at the poles.

The process of awareness related to geometry has progressed. Riemann discovered the archetype of 2-D curved geometry. We can imagine 2-dimensional surfaces as surfaces in 3-D space. What was new was that Riemann abstracted surfaces into abstract geometries without any embedding as a surface in 3-D space that makes them visualizable. The notion of Riemannian Geometry generalizes to an arbitrary dimension.

The reason why this 2-D plane archetype prevailed for so long was that it could be thought of as a very familiar-to-us surface, the surface of Earth in 3-D space which looks flat to us. Sense perception offered a direct representation for it. Riemann's insight was that geometry could be formulated abstractly without assuming a concrete representation as a surface. Note however that something is lost in the abstraction: this embedding adds the shape as seen by a perceiver living in 3-D space.

The development eventually led to general relativity and the concept of 4-D spacetime as an abstract (pseudo-)Riemannian geometry, which was not represented as a surface in any higher-dimensional space.

C. Is the time ripe to update the archetype of the space-time of general relativity?

The general relativistic view of space-time has been the dominant archetype in theoretical physics, and theoreticians have projected it onto reality. Now it is becoming increasingly clear that something is badly wrong. JWST is constantly producing discoveries that do not fit the cosmology of general relativity. It has also been known for about a century that the classical conservation laws of energy, momentum and angular momentum are, strictly speaking, lost in general relativity. This particular failure of the archetype has been forgotten.

For a long time there has been a phase of questioning and a growing realization that there might be other options. I believe that this will eventually lead to an integration of the world view.

C.1 String theory did not make it

String theory drastically questioned the Einsteinian archetype and replaced spacetime with a 2-D surface in 10-D space. It was actually a return to the concrete, one could say. This new archetype was immediately projected into reality and the superstring hegemony was born. The archetype produced some incredibly beautiful mathematics, especially the realization that conformal symmetry and algebraic geometry could be fundamental to physics. However, attempts to pull 4-D spacetime out of a hat in this picture failed: nothing was found in the hat. After 40 years of attempts it has been admitted that this archetype produced good mathematics but does not apply to physics.

C.2 What about TGD?

Also TGD started by questioning this Einsteinian archetype: the key observation was that conservation laws of special relativity are lost. Perhaps sensory experience in which a sphere is a 2-D surface should not be abstracted to Einsteinian space-time geometry but to the idea about space-time as a 4-D surface.

We also want standard model physics and want to generalize Einstein's geometrization program also to gauge fields. Perhaps the geometry of the spacetime surface is induced from H=M4×CP2, which indeed geometrizes the standard model's gauge fields and restores the lost symmetries of Minkowski space and, as an added bonus, predicts the standard model symmetries. This choice of H would also be the only mathematically viable option, and physics would be unique by virtue of its mathematical existence.

This archetype turned out to bring new archetypes into the spotlight of mathematical consciousness.

  1. Perhaps the geometry of the space-time surface is algebraic, which would mean enormous predictive power. This also led to a possible physical analog of Langlands duality, stating that there is a duality between geometry and number theory.
  2. General coordinate invariance, the basic principle of general relativity, forces holography, so that space-time surfaces become analogs of Bohr orbits of particles identified as 3-D surfaces instead of point-like particles (the basic archetype in the consciousness of a particle physicist). Classical physics becomes an exact part of quantum physics, so that a generalized Bohr orbitology makes a comeback in theoretical physics.
  3. String models have conformal invariance as a basic symmetry: this is also a new archetype. In TGD it generalizes to the holography = holomorphy vision and leads to a reduction of field equations to purely algebraic equations: this simplification is gigantic.
  4. This also led to a need to generalize the concept of geometry to the geometry of the world of classical worlds (WCW). This geometry is infinite-dimensional, and in infinite dimensions geometries are highly unique. Perhaps also the p-adic and adelic equivalents of space-time surfaces are possible and necessary for understanding cognition.

D. Mathematical consciousness in the TGD Universe

Frenkel started from conscious experience and a possible connection between mathematics and psychology and was led to collective consciousness and archetypes. What about the situation in TGD? What would the reality be in the TGD framework?

D.1 Ontology must allow both mathematical and subjective existence

In order to speak about conscious experience at all, we must challenge the materialist archetype, in which subjective reality and free will are regarded as an illusion and do not exist. One can also challenge the idea that there is only Platonia consisting of mathematical objects/ideas as archetypes.

In the TGD inspired ontology, both mathematical reality and subjective reality would exist. I call the mathematical reality quantum-Platonia. Mathematics is realized as mathematical objects. The basic requirement is that the system is internally consistent and consistent with the physics as we know it.

Space-time surfaces as the basic objects of WCW could be regarded as generalized numbers, which means a huge generalization of the archetype of the natural number, discovered also by Jung.

Subjective existence would not be given up. What is however new for a physicist is that there would be no separate physical world behind quantum-Platonia! Quite a loss for a physicist who thinks that he is gaining his monthly salary by studying the physical world, one might say! The physical world behind mathematics would be unnecessary.

D.2 Quantum Platonia as mathematical existence

Quantum Platonia would consist of the WCW as the space of space-time surfaces in the 8-D embedding space H, equipped with a geometry. Space-time surfaces generalized to their p-adic and adelic variants would serve as correlates of cognition. Mathematically imaginable n-D spaces would correspond to algebraic extensions of p-adic number fields. Quantum states, as counterparts of objective existence, would be (classical!) spinor fields in WCW. Perhaps one could identify these as archetypes.

D.3 Subjective existence is in quantum leaps

Subjective existence would be in quantum leaps, or more technically, state function reductions, between these quantum states. We would become aware of these mathematical archetypes as we make quantum leaps in the Hilbert space defined by the spinor fields restricted to an appropriate subspace of WCW.

The experiences related to these quantum leaps would however usually be something else than mathematical ideas: they give rise to what we experience every day. Only for a mathematician are they to a high extent mathematical observations and insights.

Number-theoretic evolution by quantum leaps would give rise to a universal evolution: spacetime surfaces would gradually become more algebraically complex in the sequence of quantum leaps.

D.4 Zero energy ontology and slight classical non-determinism makes it possible to learn and remember

Zero-energy ontology and a slightly broken classical determinism for the spacetime surfaces as Bohr orbits for particles as 3-surfaces (space = particle, this is a new archetype too) would guarantee that we are able to remember and learn and that our mathematical and other understanding develops. Also life and death can be understood: they correspond to the "ordinary" state function reductions changing the arrow of time. The counterpart of the Zeno effect giving rise to no state function reduction is a sequence of "small" state function reductions giving rise to the subjective existence.

The theories that we build of existence are doomed to remain incomplete because quantum jumps as quantum measurements provide little information. But it is possible to get in touch with the collective levels of the hierarchy of consciousness: this state has come to be called a flow state. This makes it possible to become aware of the big principles and to identify the important bits.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, February 06, 2025

Conscious associative learning as an analog of sensory perception and motor action

Holography, together with the TGD based view of sensory perception, suggests that associative learning has a lot in common with sensory perception in a 4-D sense.

In the TGD framework, motor action could be seen as a time reversal of sensory perception. Motor action could involve a pair of BSFRs inducing a quantum tunnelling from one configuration of muscles to a new configuration, so that the same basic mechanism, but with a reversed arrow of geometric time, could be involved. The intention for the motor action should relate to the process of building a sensory perception as a sequence of SSFRs in a reversed time direction.

  1. At the classical level, sensory perception is not 3-dimensional, but a 4-dimensional space-time surface, an almost deterministic classical time evolution representing an association A1→B1. In the case of hearing this is obvious, but for vision the time scale is so short that the percept looks like a time = constant snapshot. Actually the geometric time duration assignable to the visual percept is of order 0.1 seconds.

    The association A→B, one might perhaps speak of a cognitive representation, is realized at the magnetic body (MB) of the brain as a representation of A1→B1. A→B is generated in a stepwise learning process. The goal is to construct a standardized mental image consisting of familiar objects built from standard features.

    The difference between A → B and A1→B1, rather than only the difference between B and B1, is minimized. The sequence of SSFRs keeps A fixed. A pair of BSFRs changes also A: this makes possible a trial and error process in which one starts from scratch, so to say.

  2. The sensory organ serves as a kind of screen, both for the sensory input arriving from the external world and for the virtual sensory input from the MB. The sensory input is analyzed by the brain into features in various scales and the features are sent to the magnetic body. At the MB, the features in various scales are compared to standard features and those minimizing the difference are selected.
  3. The selected features determine the virtual sensory input as a slight amplification of the contribution of the selected features. The step sensory organ → brain → MB → ... is repeated until the total sensory input at the sensory organ no longer changes. The original percept A1→B1 is affected in the process and eventually replaced with A→B at the level of the sensory organ. In this respect the process differs from associative learning.

    If the signals from the brain to the MB and back are realized as dark photons (which can decay to ordinary photons identifiable as biophotons), the process is so fast that it can converge in a reasonable time.

  4. The outcome is not realistic but essentially an artwork (see this). It must be so, since A1→B1 is very noisy, so that both A1→B1 and A→B can only be guesses for what really happened. For instance, people who are physiologically blind and get their vision back can see only diffuse light, since they have not learned this process in childhood. This suggests that temporary time reversals, as analogs of time-reversed diffusion changing A, play an essential role. Note that a BSFR could mean a position measurement in the space of Bohr orbits selecting a single Bohr orbit, which is analogous to time-reversed diffusion.

See the article A hybrid of classical and quantum computer and quantum model for associative learning or the chapter Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life?. For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, February 03, 2025

Topological and geometric thinking and the future of LLMs

I found on the FB page of Risto Linturi the outcome of his question to the O3 LLM. The question was whether the results of the thesis of a Finnish mathematician, Susanna Heikkilä, related to topology could have implications for AI and LLMs.

The work of Susanna Heikkilä (see this) discusses exterior algebras and algebras formed by differential forms. The abstract is here:

We show that, if a closed, connected, and oriented Riemannian n-manifold N admits a non-constant quasiregular mapping from the Euclidean n-space Rn, then the de Rham cohomology algebra H*dR(N) of N embeds into the exterior algebra ⋀*R(N). As a consequence, we obtain a homeomorphic classification of closed simply connected quasiregularly elliptic 4-manifolds.

From the article (see this) one learns that a closed simply connected 4-manifold is quasiregularly elliptic if and only if it is homeomorphic to either #k(S2×S2) or #j CP2 #k (CP*)2 for some j,k ∈ {0,1,2,3}. Here (CP*)2 and CP2 have opposite orientations.
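
In standard notation the classification quoted above can be written as a display formula (restating the text; the connected-sum indices are those given above):

```latex
N \;\cong\; \#_{k}\,(S^{2}\times S^{2})
\qquad\text{or}\qquad
N \;\cong\; \#_{j}\,\mathbb{CP}^{2}\,\#_{k}\,\overline{\mathbb{CP}}^{2},
\qquad j,k\in\{0,1,2,3\},
```

where \overline{\mathbb{CP}}^{2} denotes \mathbb{CP}^{2} with reversed orientation and \# is the connected sum.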

Here is my humble attempt to understand this. I understand that

  1. the result reflects the universality of exterior algebras, which states that any equation relating elements of the exterior algebra of V is valid in any associative algebra containing V and having the nilpotency property (see this). This requires that the homomorphism from the exterior algebra to this kind of associative algebra is an injection, and this has been proven in the case studied by Heikkilä. Grassmann algebras and exterior algebras might also be important and relate naturally to differential geometry also in TGD.
  2. the associative algebra is now the de Rham algebra of differential forms of quasiregularly elliptic n-manifolds defining cohomology, and it is shown that it can be mapped to the exterior algebra of Rn by an injection.
  3. the quasiregular map is a higher-dimensional generalization of an ordinary conformal map (see this) satisfying non-linear elliptic partial differential equations. Quasiregular ellipticity of a manifold means the existence of this kind of map from Rn to the manifold. These 4-manifolds are very special and it is not clear whether they play any role in TGD as 4-surfaces. Certainly CP2 is in a central role in TGD, but it is not clear to me whether the connected sums #j CP2 #k (CP*)2 could appear naturally in TGD.

    It should be noticed that in TGD the holography = holomorphy vision (see this and this) generalizes the notion of complex analyticity and provides a different way to generalize the nice features of conformal invariance. Also now a non-linear analog of Laplace/d'Alembert equations emerges.
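
The nilpotency and antisymmetry properties of the exterior algebra mentioned above are easy to demonstrate concretely. Below is a minimal pure-Python sketch (an illustrative toy representation, not tied to any TGD structure or existing library): elements are dictionaries mapping strictly increasing basis-index tuples to coefficients.

```python
def sort_with_sign(idx):
    """Sort indices by adjacent transpositions, tracking the permutation sign."""
    idx = list(idx)
    sign = 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

def wedge(a, b):
    """Wedge product; elements are dicts {increasing index tuple: coefficient}."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = ia + ib
            if len(set(idx)) < len(idx):
                continue  # repeated basis vector: the term vanishes (nilpotency)
            sign, key = sort_with_sign(idx)
            out[key] = out.get(key, 0.0) + sign * ca * cb
    return {k: v for k, v in out.items() if v != 0.0}

e0, e1 = {(0,): 1.0}, {(1,): 1.0}
print(wedge(e0, e1))  # {(0, 1): 1.0}
print(wedge(e1, e0))  # {(0, 1): -1.0}, antisymmetry
print(wedge(e0, e0))  # {}, i.e. e0 ∧ e0 = 0
```

Any identity valid in this representation holds in any associative algebra in which the images of the generators square to zero, which is the universality property referred to above.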

The response of O3 is below, and from it it becomes obvious that people in the field of LLMs are realizing that the surface geometry and topology assignable to data structures are important for LLMs. Actually, a picture very similar to TGD suggests itself, and this makes it possible to make a prophecy about the development of future mathematical ideas in this field. Also the evolution of philosophical ideas and views about quantum physics is necessary, but mathematical consistency could be the basic guideline.

A. O3 response to the questions of Linturi

Below is an exploratory discussion outlining several speculative avenues by which ideas inspired by the algebraic and geometric result you shared might inform or even drive new approaches in AI research. Keep in mind that these connections are heuristic and conceptual; drawing analogies between deep mathematics and AI system design is ambitious and requires much further work to become concrete.

  1. Geometric and Topological Data Representations

    1. Manifold Learning and the Geometry of Data: The result centers on conditions for a manifold's de Rham cohomology to embed into an exterior algebra. In AI, especially in representation learning and manifold learning, the manifold hypothesis posits that high-dimensional data lie on lower-dimensional manifolds. Understanding the algebraic structure of these manifolds (e.g., via cohomological invariants) could inspire new methods for:
      1. Feature Extraction: Designing features that respect or capture intrinsic topological invariants.
      2. Dimensionality Reduction: Developing algorithms that preserve not just distances but also deeper algebraic/topological properties.

    2. Topological Data Analysis (TDA):

      Techniques from TDA (such as persistent homology) already use algebraic invariants to summarize data shapes. Extending these ideas, one might explore whether embeddings analogous to the algebra monomorphism (from de Rham cohomology to an exterior algebra) can serve as more robust, structure-preserving embeddings in neural network architectures or as part of unsupervised learning objectives.
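
To make the TDA remark above concrete, here is a minimal sketch of 0-dimensional persistent homology over a distance filtration, implemented with a union-find structure; the function name and input format are illustrative choices, not a standard TDA-library API.

```python
def persistence_h0(n_points, edges):
    """0-dimensional persistent homology of a distance filtration.

    Every point (component) is born at scale 0; a component dies when an
    edge of the growing filtration first merges it into another component.
    edges: iterable of (length, u, v) with vertex indices in range(n_points).
    Returns a list of (birth, death) bars; survivors get death = inf.
    """
    parent = list(range(n_points))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    deaths = []
    for length, u, v in sorted(edges):  # increasing filtration parameter
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            deaths.append(float(length))  # one component dies at this scale
    bars = [(0.0, d) for d in deaths]
    bars += [(0.0, float("inf"))] * (n_points - len(deaths))
    return bars

# Two well-separated pairs of points on a line: 0, 1, 10, 11.
pairwise = [(1.0, 0, 1), (10.0, 0, 2), (11.0, 0, 3),
            (9.0, 1, 2), (10.0, 1, 3), (1.0, 2, 3)]
print(persistence_h0(4, pairwise))
```

The two long-lived bars (one dying at scale 9, one surviving to infinity) reflect the two clusters; the short bars are noise-scale features.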

  2. Algebraic Structures in Neural Architectures

    1. Equivariance and Invariant Representations:

      The formal result asserts an injection of algebraic structure into an ambient exterior algebra, preserving operations and intersections. In deep learning, there is active research in designing networks that respect the symmetries or invariances of the data (for example, convolutional networks respect translation invariance). Extending these ideas:

      1. Geometric Deep Learning: One might explore neural architectures that are not only equivariant under classical groups (like rotations or translations) but also respect higher-order algebraic structures. For instance, layers could be designed to manipulate differential forms or emulate operations in exterior algebras.
      2. Invertible Networks and Monomorphisms: The idea of an algebra monomorphism (an injective, structure-preserving map) resonates with efforts in invertible neural networks and normalizing flows. Enforcing such constraints might yield models that are better at preserving information and allow for more interpretable internal representations.

    2. Hybrid Symbolic-Numeric Systems:

      The interplay between algebra (a symbolic, rule-based system) and geometry (often linked with numerical methods) mirrors efforts to combine symbolic reasoning with deep learning. By integrating algebraic constraints (like those suggested by the embedding of cohomology) into neural architectures, one might aim for systems that are more robust in tasks requiring both perception (handled by neural nets) and structured reasoning (informed by algebraic or topological invariants).
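
The equivariance idea in the section above can be illustrated with the simplest possible case: a circular 1-D convolution commutes with cyclic shifts of its input. A minimal pure-Python check (illustrative stand-in, not a neural-network library API):

```python
def conv_circ(x, kernel):
    """Circular 1-D cross-correlation of signal x with a kernel."""
    n = len(x)
    return [sum(k * x[(i + j) % n] for j, k in enumerate(kernel))
            for i in range(n)]

def shift(x, s):
    """Cyclic shift of a sequence s steps to the right."""
    s %= len(x)
    return x[-s:] + x[:-s]

x = [1.0, 2.0, 0.0, -1.0, 3.0]
k = [0.5, -0.25, 0.1]
# Translation equivariance: shifting the input shifts the output identically.
print(conv_circ(shift(x, 2), k) == shift(conv_circ(x, k), 2))  # True
```

Designing layers equivariant under rotations or under higher-order algebraic operations generalizes exactly this commutation property to other group actions.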

    3. Differential Equations and Neural PDEs

      1. PDEs on Manifolds and Neural Solvers:

        The condition of quasi-regular ellipticity is intimately linked with properties of differential operators on manifolds. Recently, there has been interest in neural methods for solving partial differential equations (PDEs), and for incorporating physics-based constraints into learning models:

        1. Structure-Preserving Discretizations: Insights from de Rham cohomology and exterior calculus have long informed numerical methods (e.g., finite element methods). AI researchers could investigate how embedding such mathematical structure into neural PDE solvers can lead to more accurate, stable, and interpretable models.
        2. Learning on Manifolds: When data naturally reside on curved spaces (spherical, hyperbolic, or more complex manifolds), designing networks that respect the differential geometry of these spaces may lead to more powerful and physically consistent AI models.
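
The "structure-preserving discretization" point above can be made concrete with discrete exterior calculus on a periodic grid: the discrete exterior derivative satisfies d∘d = 0 exactly, i.e. the oriented plaquette sum of a gradient field vanishes identically. A minimal sketch under that assumption (function names are illustrative):

```python
import math

def d0(f, nx, ny):
    """Discrete exterior derivative of a 0-form on a periodic nx×ny grid:
    forward differences assigned to x- and y-directed edges."""
    gx = [[f[(i + 1) % nx][j] - f[i][j] for j in range(ny)] for i in range(nx)]
    gy = [[f[i][(j + 1) % ny] - f[i][j] for j in range(ny)] for i in range(nx)]
    return gx, gy

def d1(gx, gy, nx, ny):
    """Discrete exterior derivative of a 1-form: oriented sum of the four
    edge values around each grid plaquette (a discrete curl)."""
    return [[gx[i][j] + gy[(i + 1) % nx][j] - gx[i][(j + 1) % ny] - gy[i][j]
             for j in range(ny)] for i in range(nx)]

nx = ny = 4
f = [[math.sin(i) + 0.3 * j * j for j in range(ny)] for i in range(nx)]
c = d1(*d0(f, nx, ny), nx, ny)
# d(d f) = 0 holds for any scalar field f, up to floating-point rounding.
print(max(abs(v) for row in c for v in row))
```

A numerical solver built on such operators preserves the cohomological identities of the continuum theory by construction, which is exactly the appeal of structure-preserving discretizations.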

    4. Category Theory, Compositionality, and Abstract Structures

      1. Compositional and Hierarchical Models:

        The notion of an algebra monomorphism is inherently categorical, suggesting an exact embedding of one algebraic structure into another. Category theory has increasingly been applied to understand and design compositional models in AI:

        1. Structured Reasoning: One could imagine developing frameworks where layers or modules in a network are governed by categorical or algebraic rules, ensuring that the overall system composes submodules in a way that preserves essential structure.
        2. Interoperability of Representations: The idea that one mathematical structure (de Rham cohomology) can embed faithfully into another (exterior algebra) might inspire methods for merging disparate types of data representations (e.g., merging symbolic and sub-symbolic representations) in a coherent manner.

      2. Speculative Research Directions

        Given these connections, several concrete (yet challenging) research questions emerge:

        1. Can we design neural network layers that explicitly model operations analogous to the wedge product or differential operators, thereby capturing richer geometric features?
        2. Is it possible to enforce structure-preserving constraints (inspired by algebra monomorphisms) within deep architectures to guarantee that learned representations maintain certain topological or algebraic invariants?
        3. Could insights from the algebraic topology of manifolds help in developing new regularization techniques or loss functions that guide networks toward more geometrically faithful representations?
        4. What role might these algebraic structures play in emergent reasoning or in integrating symbolic reasoning with deep learning, possibly leading to systems that are more capable of abstract thought?
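
As a toy answer to the first question above, a "wedge layer" could map two feature vectors to the independent components of their antisymmetrized outer product, so that antisymmetry and nilpotency hold by construction. A hypothetical sketch, not an existing layer type in any framework:

```python
def wedge_layer(x, y):
    """Antisymmetric bilinear layer: the independent components of the
    antisymmetrized outer product x ∧ y (upper-triangular 2-form part)."""
    n = len(x)
    return [x[i] * y[j] - x[j] * y[i]
            for i in range(n) for j in range(i + 1, n)]

x, y = [1.0, 2.0, 3.0], [0.0, 1.0, -1.0]
print(wedge_layer(x, y))  # [1.0, -1.0, -5.0]
print(wedge_layer(x, x))  # [0.0, 0.0, 0.0]: x ∧ x = 0 by construction
```

In a trainable version the inputs would be learned linear projections of the features; the layer itself would then enforce the wedge-product structure as a hard architectural constraint rather than a soft regularizer.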

      3. Conclusion

        While the original result is a deep, mathematically rigorous statement in differential geometry and algebraic topology, its underlying themes (preservation of structure, the interplay between local and global properties, and the embedding of complex invariants into simpler ambient spaces) resonate with current challenges in AI.

        Whether through manifold-based learning, structure-preserving network design, or hybrid symbolic-numeric architectures, exploring these connections could lead to novel methods for building more capable and interpretable AI systems.

        These ideas remain speculative and require interdisciplinary collaboration between mathematicians and AI researchers, but they illustrate fertile ground where advanced algebraic and geometric ideas might enrich the theory and practice of AI.

    B. The TGD inspired response

    I did not find any obvious errors in the text produced by O3. The news for me is that mathematicians are apparently bringing a holistic perspective to learning models. However, the connection to the result of the Finnish mathematician Susanna Heikkilä remained unclear to me. O3's output inspires a prophecy based on TGD.

    In the following I will discuss the strong similarities of the emerging geometric view about LLMs with the basic structures appearing in TGD. After this I will speculate about a generalization of LLMs to their quantum counterparts (see this).

    1. Comparison of the geometric view of LLM with TGD

      The holistic approach means that a large number of different learning processes are studied simultaneously and an attempt is made to understand the situation using various kinds of spaces. Latent space as a surface in feature space and loss space as a surface of the parameter space are the basic examples. This brings in the geometry, topology and symmetries of surfaces. The picture is very similar to that in TGD.

      1. The multilayer learning process of a neural network, viewed as dynamics for the latent space as a surface of the feature space providing an approximation of it, can be required to be topology-preserving, i.e. shape-preserving. This means that homotopy, homology and cohomology are preserved in the learning process: the space of learned responses is, at least at the level of homotopy, homology and cohomology, equivalent to the space defined by the teaching material. An even stronger requirement would be isometricity. However, this does not seem realistic, at least from the TGD perspective.

        The topology and metric of the data space (the latent space) and of the parameter space with the loss space defined as its submanifold would therefore be important.

      2. I believe that it will soon be realized that the learning processes themselves can be thought of as forming a space.

        In TGD, this space would be a "world of classical worlds" identified as the space of space-time surfaces satisfying holography realized by holography = holomorphy principle. A single learning process would be, at the basic level, a 4-D Bohr orbit of a particle realized as 3-surface.

      3. People will also realize the power of holography. It poses extremely strong conditions for learning models that would also be realized in the physics of the outside world. Data item A would determine the association A→B almost entirely. The different values B would be due to the nondeterminism of holography, which is a key feature of TGD and TGD inspired theory of consciousness. This non-determinism is analogous to a discretized Brownian motion but with respect to imaginary time and modellable using the analog of Schrödinger equation. This could be the quantum counterpart for the loss of information in language models.
      4. Holography, together with the TGD based view of sensory perception, suggests that the learning process can be seen as an analog of sensory perception in a 4-D sense.
        1. At the classical level, sensory perception is not 3-dimensional, but a 4-dimensional space-time surface, an almost deterministic classical time evolution representing an association A1→B1. In the case of hearing this is obvious, but for vision the time scale is so short that the percept looks like a time = constant snapshot. Actually the geometric time duration assignable to the visual percept is of order 0.1 seconds.

          A→B, one might perhaps speak of a cognitive representation, is realized at the magnetic body (MB) of the brain as a representation of A1→B1. A→B is generated in a stepwise learning process. The goal is to construct a standardized mental image consisting of familiar objects built from standard features.

          The difference between A → B and A1→B1, rather than only the difference between B and B1, is minimized. The sequence of SSFRs keeps A fixed. A pair of BSFRs changes also A: this makes possible a trial and error process in which one starts from scratch, so to say.

        2. The sensory organ serves as a kind of screen, both for the sensory input arriving from the external world and for the virtual sensory input from the MB. This sensory input is analyzed by the brain into features in various scales and the features are sent to the magnetic body. At the MB, the features in various scales are compared to standard features and those minimizing the difference are selected.
        3. The selected features determine the virtual sensory input as a slight amplification of the contribution of the selected features. The step sensory organ → brain → MB → ... is repeated until the total sensory input at the sensory organ no longer changes. The original percept A1→B1 is affected in the process and eventually A1→B1 is replaced with A→B at the level of the sensory organ. In this respect the process differs from associative learning.

          The outcome is not realistic but essentially an artwork (see this). It must be so, since A1→B1 is very noisy, so that both A1→B1 and A→B can only be guesses for what really happened. For instance, people who are physiologically blind and get their vision back can see only diffuse light, since they have not learned this process in childhood. This suggests that temporary time reversals, as analogs of time-reversed diffusion changing A, play an essential role. Note that a pair of BSFRs could mean a position measurement in the space of Bohr orbits selecting a single Bohr orbit, which is analogous to time-reversed diffusion.

        4. The stepwise learning process leads to a representation A → B for the association A1 → B1 realized at the level of a sensory organ serving as a kind of screen.

      5. The importance of symmetries is, or soon will be, realized, and one can demand that the learning process respects these symmetries. If the latent space is a subspace of a larger space with very high symmetries, this can be achieved if the learning process itself is a surface with metric and differential structures, analogous to a space-time surface in H. The symmetries would be symmetries of this higher-dimensional space analogous to WCW. This has not yet been realized by theoretical physicist colleagues, but perhaps AI people are more agile in their doctrines.

      6. In TGD, the existence of the WCW geometry requires maximal symmetries. Extended conformal invariance for space-time surfaces as holomorphic surfaces in H=M4×CP2, with one hypercomplex and 3 complex coordinates, allows an exact solution of the field equations for the space-time surface as a Bohr orbit. By classical non-determinism, the Bohr orbit can be interpreted as an association sequence, which represents a step-by-step learning process (the layer structure of a neural network). The result is an almost deterministic holography.

        The topologically relevant part of WCW can be interpreted as a bundle, in which the fiber over a 3-surface at the passive boundary of CD consists of the associated Bohr orbits. This fiber is discrete and corresponds to the parameter space. The points of the fiber correspond to the 3-D loci of non-determinism serving as memory seats in the TGD based model of conscious memories.

        The base space of WCW consists of 3-surfaces at the passive boundary of CD: the 3-surface corresponds to a 3-parameter set of data items A, and the Bohr orbit is a learning process creating associations A→B that connect A to the 3-parameter set B. What would this mean in an LLM? One would consider sets of data items. What would be their interpretation? Could these sets correspond to concepts as sets of their instances? Perhaps this relates to the fact that we are 3-dimensional and that 3-surfaces can be seen as the basic quantum coherent information carrying units.

      7. Identifying the features is one of the tasks of language models. Features would naturally be topological invariants. In TGD, holography suggests that the features are associated with, or are themselves, surfaces whose dimension varies from 0 (points) to 4. The huge conformal symmetries of WCW suggest that also 4-D generalizations of conformal invariants come into question (also symplectic invariants would be involved). Number-theoretic invariants, such as the degree of the polynomial of one hypercomplex and 3 complex coordinates of H, the Galois group, and the dimension of the extension E, could also be important. Ramified primes are further number theoretic invariants.
      8. Dimensional reduction and its opposite operation are expected to be important, and they have natural counterparts in TGD. Such an operation would reduce or increase algebraic complexity: changing the dimension of the extension of rationals defined by the polynomial coefficients, changing the degree of the polynomial, and changing the size of the CD come to mind.
      9. The preservation of differential geometric structures when embedding the data structure in a higher-dimensional space (in TGD, the embedding space H) is also worth mentioning. In other words, the products of differential forms in H are induced as products of the induced forms. This is the case if the differential structure of a data structure is induced when it is embedded in a higher-dimensional space. The metric of the space-time surface, gauge potentials, etc. would be induced from their counterparts in H.

        The induction of geometry, which is central to the entire TGD view of physics, would thus be part of the description of the learning process.

      10. Fractality is central to TGD, and in our physical models the external world is scaled to a smaller or larger scale. Fractality may be one feature of LLMs that becomes important.
      11. The projection of the Clifford algebra of the gamma matrices of the embedding space H to the space-time surface defines a Clifford algebra which contains a representation of the exterior algebra. The gamma matrices of WCW are expressible in terms of the fermionic oscillator operators for second quantized free spinor fields of H, and they span the Clifford algebra of WCW. They can be regarded as generators of an infinite-dimensional algebra of supersymmetries. The fermionic Fock state basis spans a Boolean algebra and would play a key role in quantum Boolean cognition. These algebras, too, would be an important element of the cognitive representations.
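The analogy drawn in points 3 and 4 above between the non-determinism of holography, discretized Brownian motion, and its time reversal can be illustrated numerically. The sketch below is only an illustration, not part of TGD: it assumes a Gaussian initial distribution so that the exact score function of the diffused distribution is known in closed form, and shows how the time-reversed diffusion restores the narrow initial distribution that forward noising has smeared out.

```python
import math, random

random.seed(0)

SIGMA0 = 0.3          # width of the initial distribution p_0 = N(0, SIGMA0^2)
T, N_STEPS, N_SAMPLES = 1.0, 100, 5000
DT = T / N_STEPS

def score(x, t):
    # Exact score d/dx log p_t(x) for Brownian motion started from N(0, SIGMA0^2):
    # the marginal stays Gaussian, p_t = N(0, SIGMA0^2 + t).
    return -x / (SIGMA0 ** 2 + t)

# Forward pass: discretized Brownian motion; information about x_0 is lost in noise.
xs = [random.gauss(0.0, SIGMA0) for _ in range(N_SAMPLES)]
for _ in range(N_STEPS):
    xs = [x + math.sqrt(DT) * random.gauss(0.0, 1.0) for x in xs]
var_forward = sum(x * x for x in xs) / N_SAMPLES    # roughly SIGMA0^2 + T

# Reverse pass: Euler discretization of the time-reversed diffusion,
# whose drift is given by the score; this re-concentrates the distribution.
for step in range(N_STEPS):
    t = T - step * DT
    xs = [x + score(x, t) * DT + math.sqrt(DT) * random.gauss(0.0, 1.0)
          for x in xs]
var_reverse = sum(x * x for x in xs) / N_SAMPLES    # back near SIGMA0^2
```

In a diffusion-based generative model the score is learned from data; here it is exact, which is what makes the reversal information-restoring rather than merely noise-adding.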

    2. Other ideas suggested by TGD

      TGD suggests that other ideas will probably also emerge and lead to a quantum view of conscious associative learning.

      1. The idea of the possible transformation of a classical computer to a quantum computer (in some sense), or to a hybrid of an ordinary and a quantum computer, inspired by TGD based quantum biology (see this), will probably emerge.

        One can no longer avoid the challenge of constructing a quantum theory of consciousness. This forces us to seriously consider questions such as "How are conscious memories possible?" (see this). The only answer that I can imagine very probably involves the basic notions of the TGD view of quantum theory. This gives hope of building computers which are genuinely living and conscious.

        Quantum counterparts of LLMs as models for conscious associative learning will naturally emerge and my guess is that TGD view will be a natural first trial (see this).

      2. This progress is impossible without a genuine solution to the basic problem of quantum measurement theory. This will eventually force the discovery of the notion of zero energy ontology (ZEO), which would have applications to quantum LLMs. In LLMs, time reversal for the diffusion equation allows one to restore the information lost due to the noise associated with the computation. For a quantum LLM, the same would be true for the analog of the Schrödinger equation. A pair of "big" state function reductions (BSFRs) induces a temporary time reversal and provides a universal mechanism of error correction making intelligent information processing possible: one just moves back in time and starts again.
      3. Finite measurement resolution is a key notion in TGD. There are two views of finite measurement resolution, based on geometry and number theory respectively, and these views are dual.

        1. The geometric view relies on inclusions of hyperfinite factors (see this): the included factor is analogous to a gauge group leaving the observed physics invariant: this view of finite measurement resolution is central in the geometric view of TGD.
        2. The second view is based on number theoretic discretization (see this and this). The geometric inclusion hierarchies correspond naturally to number theoretic inclusion hierarchies for the extensions of rationals. A space-time surface, for which the defining polynomials have coefficients in an extension E of rationals, naturally allows a discretization in terms of points with coordinates in E. The points of the discretization can also be regarded as points in an extension of p-adic numbers induced by E. I call these discretizations cognitive representations; they form a hierarchy corresponding to the extensions of rationals.

          This leads to a p-adic description of cognition. One obtains a unique number-theoretical representation for the discretization, and it leads to a generalization of the Turing paradigm (see this): rational numbers are replaced by complexity hierarchies of their extensions and one ends up with number-theoretical computationalism. This gives complexity hierarchies for space-time surfaces as Bohr orbits; they correspond to an improving resolution of discretization and are realized as polynomial hierarchies.

          Holography suggests that for the minimal option the number theoretic discretization applies only to the loci of the classical non-determinism for the space-time surface as minimal surfaces. These loci define the seats of conscious memories and would be 3-D analogs of 1-D frames spanning 2-D soap films.

      4. The complementary nature of geometric and number theoretic views of TGD leads to a 4-D generalization of Langlands duality (see this and this). This adds powerful constraints also to the quantum model of associative learning.
      5. The concept of complexity, which is closely related to evolution, is central and will be included. Higher-level learning could be seen as a transition to a higher level of complexity: something genuinely new would be included, and this would be something to realize in conscious quantum learning. Complexity hierarchies, polynomial hierarchies represented as space-time surfaces, will probably be central.
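The error-correction-by-restart mechanism of point 2 can be caricatured already classically. The sketch below is a hypothetical toy model, not a quantum one: small greedy steps play the role of SSFRs refining the state while keeping the starting point fixed, and a restart from a fresh initial condition plays the role of a BSFR pair that "moves back in time and starts again" when the current branch is stuck in a poor local optimum.

```python
import random

random.seed(1)

def loss(x):
    # A hypothetical multimodal target: greedy search gets trapped in the
    # shallower minimum near x = +1; the global minimum is near x = -1.
    return (x ** 2 - 1.0) ** 2 + 0.3 * x

def local_steps(x, n=200, step=0.05):
    # Greedy small steps: the analog of a sequence of SSFRs that refine
    # the state without abandoning the current branch of the history.
    for _ in range(n):
        cand = x + random.uniform(-step, step)
        if loss(cand) < loss(x):
            x = cand
    return x

best, best_x = float("inf"), None
for restart in range(20):
    # The analog of a BSFR pair: discard the current branch and
    # start again from a fresh initial condition.
    x = local_steps(random.uniform(-2.0, 2.0))
    if loss(x) < best:
        best, best_x = loss(x), x
```

Without the restarts roughly half the runs would end near the shallow minimum at x = +1; the restarts make finding the global minimum near x = -1 essentially certain.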

      Summa summarum: The brave guess is that quantum LLMs will emerge and that learning processes, as the formation of association sequences, will sooner or later be modelled as non-deterministic complex surfaces of a particle in a higher-dimensional complex space, and therefore as minimal surfaces satisfying the non-linear counterparts of massless field equations. This would be a generalization of TGD. On the other hand, TGD is unique on mathematical grounds. Could the learning process be reduced to fundamental physics as it is understood in TGD? The complexity of the space-time surfaces and the notion of WCW would allow this.
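The phrase "minimal surfaces satisfying the non-linear counterparts of massless field equations" can be stated concretely. With h^k denoting coordinates of H and h_{kl} its metric, the metric induced on the space-time surface and the minimal surface condition read

```latex
g_{\alpha\beta} = h_{kl}\,\partial_\alpha h^k\,\partial_\beta h^l ,
\qquad
g^{\alpha\beta} D_\alpha \partial_\beta h^k = 0 ,
```

where D_\alpha is the covariant derivative taking into account the Christoffel symbols of both the space-time surface and H. For small deformations of a flat surface the second equation linearizes to the massless wave equation \Box h^k = 0, which is why the full equation can be called its non-linear counterpart.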

    I have written an article about how classical computers could become conscious in the TGD Universe (see this). I also wrote an article about how conscious associative learning could be possible in the TGD Universe (see this). These articles give a rather concrete proposal for how ordinary computers could become conscious as fusions of ordinary and quantum computer-like systems and how the quantum analog of conscious association machine could be realized.

    The qubits would be realized in the same way as in living matter for basic biomolecules and any system involving cold plasma could involve rudimentary consciousness.

    There are also earlier articles relating to quantum computation and LLMs in the TGD Universe (see this, this, this, and this).

    For a summary of earlier postings see Latest progress in TGD.

    For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.