https://matpitka.blogspot.com/search?updated-max=2022-03-07T03:22:00-08:00

Wednesday, February 23, 2022

Connection between dark nucleon code and bioharmony

The model of the genetic code based on bioharmony has evolved through many sidetracks (see this), but the version discussed here (see this, this, and this) is roughly consistent with the original model and also gives a connection with the model of the dark nuclear code.

Bioharmony and resonance mechanism for dark photon communications

The faces of the icosahedron and tetrahedron (and also of the octahedron) appearing in the model of the genetic code as an icosa-tetrahedral tessellation of hyperbolic space H3 (see this) are triangles. The proposal is that they somehow correspond to 3-chords made of dark photons, which in turn represent genetic codons.

Communications by dark 3-photons represent codons, and 3N-photons in turn represent genes. The communications rely on cyclotron 3N-resonance, so the vertices of the faces of the icosa-tetrahedron must contain charged particles coupling to a magnetic field. The magnetic field strengths at the flux tubes associated with the charged particles would determine the cyclotron frequencies.

Information is encoded in the frequency modulation of the cyclotron frequencies. The chords serve as addresses, much like in the computer language LISP. If the modulations of the 3N frequencies are identical and in synchrony, the outcome at the receiver, consisting of 3N charged particles, is a sequence of 3N-resonances giving rise to a 3N-pulse sequence. Nerve pulse patterns could emerge by this mechanism.
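As a toy illustration, the chord-as-address idea can be sketched in a few lines of code. The frequencies, the relative linewidth, and the function name are illustrative assumptions of mine, not part of the proposal.

```python
# Toy sketch of the 3-chord "address" mechanism (all values illustrative):
# a receiver defined by three cyclotron frequencies responds only when all
# three transmitted frequencies match within a relative linewidth.
def resonates(chord_tx, chord_rx, linewidth=0.01):
    """True if every transmitted frequency matches the receiver's chord."""
    return all(abs(a - b) / b < linewidth
               for a, b in zip(sorted(chord_tx), sorted(chord_rx)))

receiver = (1.0, 1.5, 2.25)                     # quint-related frequency ratios
print(resonates((1.0, 1.5, 2.25), receiver))    # True: address matches
print(resonates((1.0, 1.5, 2.0), receiver))     # False: one frequency off
```

A subset of receivers can then be addressed by matching only some of the modulated frequencies, as described below for M<3N.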

One can also consider 3N-signals for which only M<3N modulations are identical and in synchrony. In this manner communications to subsets of the receiver are possible. For instance, some subset of codons of a dark gene or dark protein can be selected as a receiver, possibly in a controlled manner. This selection could de-entangle the receiver into de-entangled coherent pieces.

There is a direct connection with empirical findings. Biophotons, whose origin remains poorly understood, can be identified as ordinary photons resulting from the decay of dark 3N-photons to ordinary photons.

The realization in terms of dark nucleons looks more plausible if also DtRNA and DAAs are realized in terms of the icosa-tetrahedral picture (the dark counterpart of an information molecule X will be denoted by DX). This is because amino acids are often neutral, unlike DNA nucleotides, which are negatively charged. The dark charge assignable to the icosa-tetrahedron can be controlled by pionic bonds with charges 0, ±1, so that it can be 3 units for DDNA and vanish for amino acids. A natural proposal is that the charge of the icosa-tetrahedron compensates the charge of the amino acid and tRNA.

There are pairings of type DX-DY. The pairings DDNA-DRNA, DRNA-DtRNA and DtRNA-DAA induce the biochemical dynamics of transcription and translation. There are also pairings DX-X. The DDNA-DNA and DRNA-RNA pairings are unique, whereas the DtRNA-tRNA pairing is 1-to-many and relates to the wobble phenomenon. The pairings between dark nucleon variants of biomolecules and the corresponding dark 3N-photons make possible biocommunications and control.

Details of the bioharmony model

Consider now a more detailed bioharmony model of the genetic code based on the geometries of icosahedron and tetrahedron.

  1. The icosahedron has 12 vertices and 20 faces, which are triangles. The idea is that the 12 vertices correspond to the notes of the 12-note scale. The tetrahedron has 4 vertices and 4 faces and is self-dual, whereas the dual of the icosahedron is the dodecahedron, having 20 vertices and 12 faces.
  2. The 12-note scale can be represented as a Hamiltonian cycle on the icosahedron going once through all vertices. The frequencies at neighboring vertices connected by an edge of the cycle relate by a frequency scaling of 3/2: this gives rise to the Pythagorean variant of the quint cycle.

    Octave equivalence means the identification of frequencies differing by a multiple of octaves. Octave equivalence can be used to reduce all frequencies to a single octave. If the scaling is exactly 3/2 at all steps, there is a slight breaking of octave equivalence, since (3/2)^12 does not quite correspond to an integer number (7) of octaves. Pythagoras was well aware of this.

    A given cycle assigns 3-chords to the faces, defining a harmony with 20 chords assignable to the faces of the icosahedron. For the dodecahedron there is only a single harmony with 12 chords and a 20-note scale, which could correspond to Eastern scales. For the tetrahedron the Hamiltonian cycle is unique.

  3. Icosahedral Hamiltonian cycles can be classified by their symmetries. A cycle can have Z6, Z4, or Z2 (rotation by π or reflection) as a group of symmetries.
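The quint-cycle construction of the 12-note scale, and the slight failure of octave closure mentioned above, can be checked numerically. A minimal sketch; the helper name is my own.

```python
import math

# Build the Pythagorean 12-note scale: repeatedly scale by 3/2 and reduce
# the result back into a single octave [1, 2).
def reduce_to_octave(f):
    return f / 2 ** math.floor(math.log2(f))

scale = sorted(reduce_to_octave((3 / 2) ** k) for k in range(12))
print([round(f, 3) for f in scale])        # 12 distinct notes in one octave
# Twelve quints overshoot seven octaves by the Pythagorean comma:
print(round((3 / 2) ** 12 / 2 ** 7, 5))    # 1.01364
```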
The connection with the genetic code emerges in the following manner.
  1. The natural idea is that the faces of the icosa-tetrahedron correspond to both 3-chords and genetic DNA/RNA codons. If the orbits of faces could correspond to amino acids (AAs), the DNA codon would code for amino acid AA if the corresponding face is at the orbit corresponding to AA.
  2. One wants 64 DNA codons: the Z6, Z4 and Z2 cycles give rise to 20+20+20 = 60 DNA codons. The tetrahedron gives the remaining 4 codons.
  3. Does one obtain a correct number of AAs? Do the numbers of faces at the orbits correspond to numbers of DNAs coding for the corresponding AA?
    1. Z6 decomposes the faces into 3 6-orbits and 1 2-orbit (3×6+2 = 20). There are 3 AAs coded by 6 DNAs. The 2-orbit corresponds to an AA coded by two DNAs.
    2. Z4 decomposes to 5 4-orbits. There are 5 AAs coded by 4 codons.
    3. Z2 corresponds to 10 2-orbits, predicting 10 AAs coded by 2 codons. Together with the 2-orbit of the Z6 cycle there would be 11 2-orbits altogether. There are, however, only 9 AAs coded by 2 codons.

      Some kind of symmetry breaking is present, as in the case of the dark nucleon code. 2 AA doublets must split to singlets. The quadruplet (ile,ile,ile,met) coded by AUX could correspond to (ile,ile) and (met,met) such that (met,met) splits to (ile,met). In the absence of symmetry breaking one would have the 11 doublets predicted.

  4. There are also 4 tetrahedral codons.

    There is a (stop,stop) doublet (UAA,UAG) and a (stop,trp) doublet (UGA,UGG). These doublets could correspond to the faces of the tetrahedron. Only one face would code for an amino acid in the vertebrate code. Would the other faces simply have no corresponding tRNA?

    For bacterial codes the situation can be different. Pyl and sec appear as exotic amino acids. Could (UAA,UAG) code for (stop,pyl) instead of (stop,stop), and (UGA,UGG) for (sec,trp) instead of (stop,trp)? Orientation preserving rotations of the tetrahedron form a 12-element group having Z2 and Z3 as subgroups. For Z2 the orbits consist of 2 vertices and for Z3 of 3 vertices (a face) and 1 vertex. Z3 symmetry could correspond to trp as a singlet and the vertebrate stop codons as a triplet. For bacterial pyl and sec, Z2 with symmetry breaking is suggestive.
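The codon counting above can be verified with a few lines of bookkeeping; the orbit lists simply restate the numbers in the text.

```python
# Orbit decompositions of the 20 icosahedral faces for the three cycle types.
z6 = [6, 6, 6, 2]     # three 6-orbits and one 2-orbit: 3 AAs coded by 6 DNAs
z4 = [4] * 5          # five 4-orbits: 5 AAs coded by 4 DNAs
z2 = [2] * 10         # ten 2-orbits: AAs coded by 2 DNAs
for orbits in (z6, z4, z2):
    assert sum(orbits) == 20          # each cycle covers all 20 faces
icosahedral = sum(map(sum, (z6, z4, z2)))
print(icosahedral + 4)                # 64 codons with the 4 tetrahedral faces
print(z6.count(2) + z2.count(2))      # 11 doublets before symmetry breaking
```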

Bioharmony, dark nucleon code, and icosa-tetrahedral code as a tessellation of H3

The bioharmony model involves the icosahedron and tetrahedron. This looks ugly unless there is some really deep reason for their emergence. One can also ask why not also the octahedron, which has triangular faces.

Hyperbolic 3-space H3 has interpretations as a mass shell of Minkowski space M4 at the level of M8 and as a light-cone proper-time constant surface at the level of H. The 4-surface X4 in M8 contains mass shells of M4 corresponding to the roots of the polynomial P defining X4. Hence one expects that H3 plays a key role in quantum TGD, with discretized momenta defining a cognitive representation: the momenta are algebraic integers associated with the extension of rationals defined by P. H3 has infinite discrete subgroups of the Lorentz group, analogous to the discrete groups of translations in E3 as isometries, and H3 allows an infinite number of tessellations (lattices).

Perhaps the simplest tessellation is the icosa-tetrahedral tessellation involving also octahedrons and thus all Platonic solids with triangular faces. This tessellation could give rise to the genetic code by induction of the tessellation to 3-surfaces or lower-dimensional objects such as linear biomolecules and cell membranes (see this). I do not however understand the mathematical details well enough, so the following discussion remains general.

Consider first the model for DDNA and DRNA allowing us to understand the connection between dark nucleon and dark photon realization of the genetic code physically.

  1. The realization of DDNA/DRNA/DtRNA/DAA could correspond to a sequence of icosahedron-tetrahedron pairs at H3, contained by the 4-surface X4 ⊂ M8 and its H image, which is also an H3.
  2. Each icosa-tetrahedron would contain a dark codon realized both as a face and as the dark nucleon triplet associated with it. The dark photon chord associated with the face must be the same as the codon defined by the dark nucleon triplet. The dark nucleon triplets correspond to cyclotron frequency triplets, which in turn correspond to the dark photon 3-chords associated with the Hamiltonian cycles.
  3. The cyclotron frequencies are determined by the magnetic fields at the flux tubes, so that Hamiltonian cycles must correspond to flux tube patterns. The simplest hypothesis is that the Hamiltonian cycle is a closed flux tube connecting all vertices of the icosahedron. A dark codon triplet corresponds to a face with 3 flux tube edges.

    The simplest option is that the flux tubes defining the edges determine the cyclotron frequencies defining the dark codon in bioharmony. The variation of the flux tube thickness implies the frequency modulation crucial for communications.

    The realization of the Hamiltonian cycle requires that the magnetic field strength along the cycle is scaled by a factor of 3/2 at each step to give a quint cycle.

  4. An interesting question relates to the relation of the DDNA strand and its conjugate. The change of the orientation of the Hamiltonian cycle changes the chords of the harmony. For the ordinary 8-note scale one can roughly say that major and minor chords are transformed to each other. The orientation reversal could correspond to time reversal. The fact that the orientations of the two DNA strands are opposite suggests that DNA and conjugate DNA are related by the orientation reversal of the Hamiltonian cycle, inducing the map G→ C, U→ A at the level of DNA letters. The conjugation does not imply any obvious symmetry for the corresponding amino acids, as inspection of the code table demonstrates.
How could the Hamiltonian cycle determine the DtRNA codons?
  1. DRNA codons pair with 32 DtRNA codons and DtRNA codons pair with tRNA codons in a 1-to-many manner. Therefore the DRNA-DtRNA pairing could be universal and 2-to-1, although not in a codon-wise manner. This pairing should be the same for both bioharmony and dark nucleon triplets.
  2. The pairing by 3-resonances requires that the DtRNA icosa-tetrahedron contains the DRNA codons which pair with the DtRNA codon. There would be 2 DRNA codons in the DtRNA icosahedron for most DtRNA codons and 1 codon for the DtRNAs pairing with the DAAs corresponding to met and trp. The number 32 of DtRNAs implies in the case of the icosa-tetrahedral code that there are 10+10+10 = 30 icosahedral DtRNAs and only 2 tetrahedral DtRNAs, so that two faces of the tetrahedron cannot correspond to a DtRNA codon and the corresponding DRNAs must serve as stop codons.

    One of the DtRNAs could correspond to trp. The second one would correspond to a stop codon in the vertebrate code: either the DtRNA codon is not present at all or it does not pair with tRNA. TAG and TGA can code for pyl and sec in some bacterial versions of the code, and in this case the corresponding dark DRNA codon would be represented at the DtRNA tetrahedron.

  3. For bioharmony, the DDNA-DAA correspondence means that AAs correspond to orbits of the faces of the icosahedron under the subgroup Z6, Z4, or Z2, which could correspond to a reflection or to a rotation by π.

    Since the DRNA-DtRNA correspondence is 2-to-1, although not codon-wise, the natural first guess is that Z2 orbits of the faces define the DRNA codons at the DtRNA icosahedron, so that it would contain 2 codons for most DtRNAs. At the DtRNA tetrahedron the only option is Z1, so there is a symmetry breaking.

    If Z2 corresponds to a reflection, the orbit always contains 2 codons. If Z2 corresponds to a rotation by π, it might happen that a face is invariant under the π rotation, so that the orbit would consist of a single point. Could this explain why one has (ile,ile,ile,met) instead of (ile,ile) and (met,met)? The rotation axis should go through the invariant face, and since the face is a triangle, π rotations lead out of the icosahedron. Therefore the answer is negative.

The ile-met problem deserves a separate discussion.
  1. The pairing of Z2 related DRNA faces with two different DtRNAs coding for ile and met rather than two mets means Z2 symmetry breaking at the level of bioharmony. Could the fact that AUG acts as a start codon relate to this? Could it be that AUG and AUA cannot both act as start codons? It is difficult to invent any reason for this.
  2. The symmetry breaking could occur in the DtRNA-DAA pairing and replace Dmet with Dile. Is it possible that the 3-chords coding for ile and the second met are nearly identical, so that the resonance mechanism selects ile instead of met? Could the situation be similar for the codons coding for (stop,stop) and (stop,trp) and cause the coding of pyl or sec in some situations? The scale for the quint cycle model with octave equivalence does not quite close. Could this have some role in the problem?
  3. Since a similar ambivalence occurs for the stop codons assigned to the tetrahedral Hamiltonian cycle, one can look at the tetrahedral Hamiltonian cycle. In this case a given edge of the cycle corresponds to a scaling by (3/2)^3, so that 4 steps give (3/2)^12, which is slightly more than 7 octaves. For the quint scale in the Pythagorean sense, one obtains 4 notes in the same octave.

    Exact octave equivalence, corresponding to the equally tempered scale in which a half-step corresponds to a frequency scaling by 2^(1/12), implies that there is only one 3-chord CEG#: this would explain why there are 3 stop codons in the vertebrate code!

    If bacterial codes correspond to the Pythagorean scale, there would be two different 3-chords, since CEG# and EG#C are not quite the same: the frequency ratios of the chords involve (3/2)^12, which is not exactly 2^7. This situation is completely exceptional.

    In the quint scale there are small differences between the 4 chords. Could this explain why only one of these 3-chords codes for an AA (trp) in the vertebrate code, and pyl or sec is coded instead of stop in bacterial codes? Amusingly, the chord CEG# ends many Finnish tangos and therefore acts like a stop codon!

    Could bacteria have a perfect pitch and live in a Pythagorean world? Could the transition to multicellulars mean the emergence of an algebraic extension of rationals containing 2^(1/12) ≈ 1.059 (which is considerably larger than (3/2)^12/2^7 ≈ 1.0136)! Could people with perfect pitch have in their dark genome parts using the Pythagorean scale, or can they tune the magnetic flux tube radii to realize the Pythagorean scale?

  4. Could the ile-met problem have a similar solution? The chords associated with ile and met would differ by a scaling by (3/2)^3 or (3/2)^6 using octave equivalence. These chords are not quite the same: could it happen that the 3-chord associated with the second met is nearer to that for ile? These 3-chords do not contain the quint scaling and should correspond to the special chords for which no edge belongs to a Hamiltonian cycle.
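The tetrahedral scalings discussed in items 3 and 4 can be made concrete with a short computation. A sketch only; the octave-reduction helper is my own naming.

```python
import math

# Each edge of the tetrahedral Hamiltonian cycle scales frequency by (3/2)^3;
# four steps give (3/2)^12, slightly more than 7 octaves.
def reduce_to_octave(f):
    return f / 2 ** math.floor(math.log2(f))

step = (3 / 2) ** 3
notes = [reduce_to_octave(step ** k) for k in range(4)]
print([round(n, 4) for n in notes])        # 4 notes in one octave
print(round(step ** 4 / 2 ** 7, 5))        # 1.01364: closure fails by the comma
```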
Also the DtRNA-DAA pairing is based on the 3-resonance.
  1. The DAA icosahedron must contain the DtRNA codons pairing with the DAA. This raises the question whether DDNAs could have a direct resonant coupling to DAAs. Could this pairing occur in the DDNA-DAA pairing occurring in transcription (see this), so that pieces of DDNA and DAA associated with an enzyme involved could pair with each other by 3N-resonance at the DDNA-DAA level? At the chemical level the base-amino acid interactions are extremely complex, involving stereochemistry and the formation of hydrogen bonds (see this), so that the reduction of these interactions to 3N-resonance would mean a huge simplification.
  2. Could this resonance pairing serve as a universal mechanism of bio-catalysis and take place for various enzymes and ribozymes? One example is the promoters and enhancers involved with transcription. Enhancers and promoters induce a highly non-local process generating a chromosome loop in which two portions of DNA become parallel and near to each other, and dark 3N-photons could explain the non-locality as an outcome of quantum coherence in long scales.
  3. Why would DDNA-DAA pairing not occur? 3N-resonance relies on cyclotron frequencies and therefore on the magnetic field strength determined by the radii of the monopole flux tubes. One explanation would be that the frequency scales of DAA and DDNA are slightly different. Could the attachment of DRNA to the translation machinery scale the magnetic field strengths of the flux tubes and their cyclotron frequencies so that only DRNA-DtRNA and DtRNA-DAA couplings are possible?
See the article The realization of genetic code in terms of dark nucleon and dark photon triplets or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, February 16, 2022

The realization of genetic code in terms of dark nucleon triplets

I have worked for more than 10 years with a proposal for a realization of the genetic code in terms of dark proton or nucleon triplets forming closed or open strings. I have considered several variants of the code but the details have remained poorly understood and I have spent a considerable time on wrong tracks. Also the contents of this chapter reflect this wandering.

It however seems that the dust is finally settling (I am writing this at the beginning of 2022). One can see the model as a generalization of the quark model of the nucleon and Δ baryons, obtained by replacing u and d quarks with dark nucleons. The statistics problem, solved by the color group for the Δ baryon, is in the recent case solved by Galois confinement involving the Galois group Z3 assignable to the codons.

The nucleons are connected by pionic flux tubes to form a closed string-like entity carrying angular momentum 0, 1, or 2. The dark variants DDNA, DRNA, DtRNA, DAA of DNA, RNA, tRNA, and amino acids (AA) follow as a prediction. AAs correspond to non-rotating analogs of N and Δ, DNA and RNA to rotating analogs of Δ, and tRNA to rotating analogs of N.

Also the pairings between dark information molecules can be understood to a high degree: the differences between DNA and RNA could reduce to the difference between DDNA and DRNA due to the violation of the weak isospin symmetry. The almost exact T-C and A-G symmetries with respect to the third letter of the genetic codon could also be seen as a reflection of the almost exact isospin symmetry (see this). The number of DtRNAs is the minimal 32, and this predicts the 1-to-many character of the DtRNA-tRNA pairing, which would induce wobble base pairing.

See the article The realization of genetic code in terms of dark nucleon triplets or the chapter About the Correspondence of Dark Nuclear Genetic Code and Ordinary Genetic Code.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD. 


Sunday, February 13, 2022

Homology of "world of classical worlds" in relation to Floer homology and quantum homology

One of the mathematical challenges of TGD is the construction of the homology of the "world of classical worlds" (WCW). With my rather limited mathematical skills, I had regarded this challenge as a mission impossible. The popular article in Quanta Magazine with the title "Mathematicians transcend the geometric theory of motion" (see this) however stimulated attempts to think whether it might be possible to say something interesting about WCW homology.

The article told about a generalization of Floer homology by Abouzaid and Blumberg (see this), published as a 400 page article with the title "Arnold Conjecture and Morava K-theory". This theory transcends my mathematical skills, but the article stimulated the idea that WCW homology might be obtained by an appropriate generalization of the basic ideas of Floer homology (see this).

The construction of WCW homology as a generalization of Floer homology looks rather straightforward in the zero energy ontology (ZEO) based view of quantum TGD. The notions of ZEO and causal diamond (CD) (see this and this), the notion of preferred extremal (PE) (see this and this), and the intuitive connection between the failure of strict determinism and criticality pose strong conditions on the possible generalization of Floer homology.

The WCW homology group could be defined in terms of the free group formed by preferred extremals PE(X3,Y3) for which X3 is a stable maximum of the Kähler function K associated with the passive boundary of CD, and Y3, associated with the active boundary of CD, is a more general critical point.

The stability of X3 conforms with the TGD view of state function reductions (SFRs) (see this). The sequence of "small" SFRs (SSFRs) at the active boundary of CD as the locus of Y3 increases the size of the CD and gradually leads to a PE connecting X3 with a stable 3-surface Y3. Eventually a "big" SFR (BSFR) occurs, changing the arrow of time, and the roles of the boundaries of the CD change. The sequence of SSFRs is analogous to the decay of an unstable state to a stable final state.

The identification of PEs as minimal surfaces with lower-dimensional singularities as loci of instabilities implying non-determinism makes it possible to assign to the set PE(X3,Y3i) numbers n(X3,Y3i→ Y3j), counting the instabilities of singularities leading from Y3i to Y3j, and to define the analog of the criticality index (the number of negative eigenvalues of the Hessian of a function at a critical point) as the number n(X3,Y3i) = ∑j n(X3,Y3i→ Y3j). The differential d defining WCW homology is defined in terms of n(X3,Y3i→ Y3j) for pairs Y3i, Y3j such that n(X3,Y3j)-n(X3,Y3i) = 1 is satisfied. What is nice is that WCW homology would have direct relevance for the understanding of quantum criticality.
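This differential is structurally analogous to the Morse-theoretic one, where d counts connecting trajectories between critical points whose indices differ by one and d∘d = 0 must hold. A toy illustration of that bookkeeping follows; the generators, indices, and counts below are invented for illustration and are in no way derived from TGD.

```python
# Toy graded complex: generators with a criticality index, and mod-2 counts
# of "instabilities" connecting a generator to those one index below it.
generators = {"a": 0, "b": 1, "c": 1, "d": 2}
counts = {("d", "b"): 1, ("d", "c"): 1, ("b", "a"): 1, ("c", "a"): 1}

def d(x):
    """Boundary of x: mod-2 sum over generators one index below x."""
    return {y for (src, y), n in counts.items()
            if src == x and generators[y] == generators[x] - 1 and n % 2 == 1}

def d_squared(x):
    out = set()
    for y in d(x):
        out ^= d(y)          # symmetric difference = addition mod 2
    return out

print(d_squared("d"))        # set(): d∘d = 0, as a homology differential requires
```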

The proposal for the WCW homology also involves a generalization of the notion of quantum connectivity crucial for the definition of Gromov-Witten invariants: the notion that two surfaces (say branes) can be said to intersect if there is a string world sheet connecting them generalizes. In ZEO, quantum connectivity translates to the existence of a preferred extremal (PE), which by the weak form of holography is almost unique, connecting the 3-surfaces at the opposite boundaries of the causal diamond (CD).

See the article Homology of "world of classical worlds" in relation to Floer homology and quantum homology or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD. 


Wednesday, February 09, 2022

Are we living in the past?

There was an inspiring popular article published in Science Times (see this) with the long title "Are We Living In the Past? New Study Shows Brain Acts Like A Time Machine That Brings Us 15 Seconds Back". It caught my attention because the basic prediction of the TGD inspired theory of consciousness is that the perceptive field is 4-dimensional rather than a 3-D time=constant snapshot as in standard neuroscience.

The research article by Mauro Manassi and David Whitney (see this) with the title "Illusion of visual stability through active perceptual serial dependence" suggests that visual perception is a kind of temporal average over a time interval, which can be even longer than 15 seconds.

1. The findings of Manassi and Whitney

1.1 Motivating question

"Why do the objects in the world appear to be so stable despite constant changes in their retinal images?" was the question that motivated the work of Manassi and Whitney. Retinal images continuously fluctuate because of sources of internal and external noise. Retinal image motion, occlusions and discontinuities, lighting changes, and perspective changes and many other sources of noise are present. However, the objects do not appear to jitter, fluctuate, or change identity from moment to moment. Why does the perceived world change smoothly over time although the real world does not?

This problem is also encountered in quantum consciousness theories. If conscious experience consists of a sequence of non-deterministic quantum jumps as moments of consciousness, it is not at all clear how a smooth stream of consciousness is possible.

One modern explanation for the smoothness of conscious experience is some kind of change blindness or inattentional blindness. The finite capacity of visual short-term memory is certainly a fact; it forces a finite perceptive resolution and effectively eliminates too fast temporal gradients. This finite resolution poses limits on perceptual, decisional, and memory processing. This would naturally apply also to other sensory memories.

In the standard view a sensory percept corresponds to a time=constant snapshot of the physical world. The basic prediction is that the object perceived at a given moment of time is the real object, but in a finite perceptive resolution.

The alternative hypothesis studied in the article is that the visual system, and presumably also other sensory systems, use an active stabilization mechanism, which manifests as a serial dependence in perceptual judgments. Serial dependence causes objects at any moment to be misperceived as being more similar to those in the recent past. The serial dependence has been reported in the appearance of objects, perceptual decisions about objects, and the memories about objects. In all of these examples, serial dependence is found for random or unpredictable sequential images.

This raises the question of whether one can understand the serial dependence by identifying the conscious perception at a given time as a weighted temporal average of the preceding time=constant percepts over some time interval T, and what additional assumptions are needed to understand the other findings related to the phenomenon.

1.2 The experiments demonstrating the serial illusion

The article describes 5 experiments related to the serial illusion. The experiments are described in detail in the article with illustrations (see this), and in the sequel I summarize them only very briefly. The reader is strongly encouraged to read the original article, which provides illustrations and references to the literature related to the serial illusion.

Experiment 1: shift of the perception to the past

In Experiment 1 the shift of the perception to the past was demonstrated.

  1. 2 separate groups of 44 and 45 participants rated the age of a young or old static face embedded in a blue frame (13 and 25.5 years, respectively). This gave a distribution of ratings around some mean, identified as the real age of the face. The rating of the static face alone is referred to as the reference face.
  2. A third group of 47 independent participants was presented with a movie of a face that morphed gradually, aging from young to old. These observers then rated the age of the old face. The rating of the static face preceded by the movie is referred to as the test face. The last frame of the video was identical to the reference face.
  3. The age ratings of physically identical static faces, either alone (reference face) or with a preceding video (test face), were compared. Although the test and reference faces were identical, the old test face, seen after the video, was rated as 5 years younger than the old reference face, seen without the video (20.2 versus 25.5 years).
  4. One can argue that the stability illusion is due to a simple unidirectional bias in age ratings. Therefore a fourth group of 45 new participants watched a movie of a face that gradually morphed from old to young. Following the movie, observers rated the age of a young static test face embedded in a blue frame. The young face was rated as 5 years older than its actual age (18.4 versus 13 years). Therefore the stability illusion can cause faces to appear younger or older depending on the previously seen faces.
These findings are consistent with the temporal averaging hypothesis.

Experiment 2: the effect of noise

Noise is known to increase serial dependence; whether this holds also for the stability illusion was tested. Stimuli with and without noise were presented to separate groups of observers. As a measure of the strength of the stability illusion, an attraction index, the bias in age ratings toward the beginning of the movie, was introduced.

  1. The attraction index is defined as ΔT/T, ΔT = |Tr − Tp|, where Tr is the real and Tp the perceived age of the test face, and T is the total age range. Real age refers to the average perceived age in the experiment without the preceding video.
  2. When the movie and test face were presented alone or with superimposed dynamic noise, the static test face ratings were attracted toward the movie by 28 and 42 %, respectively.
  3. When the movie was presented with increasing dynamic noise and a test face with high noise, the attraction was around 48 %.

    The results conform with the earlier finding that serial dependence in perception increases with noise and uncertainty. As the increasing dynamical noise yielded the strongest illusory effect, it was used across subsequent experiments.
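For concreteness, the attraction index of item 1 can be written out in code; the numbers plugged in are the Experiment 1 ratings quoted earlier, treating 25.5 years as the real age and 13-25.5 years as the age range (my own illustrative choice of inputs).

```python
def attraction_index(real_age, perceived_age, age_range):
    """Bias of the age rating, as a fraction of the total age range."""
    return abs(real_age - perceived_age) / age_range

# Old test face rated 20.2 y instead of 25.5 y over a 13-25.5 y morph range:
print(round(attraction_index(25.5, 20.2, 25.5 - 13.0), 2))  # 0.42
```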

Why should increasing the noise increase the strength of the stability illusion? Suppose that the perception is an average over time=constant percepts from a time interval T. For instance, one could think of a Gaussian distribution for the weights of the contributions over the interval T. It would seem that T gets longer in the presence of noise in order to achieve reliability.
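The temporal-averaging hypothesis can be sketched as follows; the Gaussian weighting, the 30 s linear morph, and the width T = 10 s are illustrative assumptions of mine, not parameters from the article.

```python
import math

def perceived(samples, t, T):
    """Gaussian-weighted average of past time=constant percepts.
    samples: (time, value) pairs up to time t; T: averaging width in seconds."""
    weights = [math.exp(-((t - s) ** 2) / (2 * T ** 2)) for s, _ in samples]
    return sum(w * v for w, (_, v) in zip(weights, samples)) / sum(weights)

# A face aging linearly from 13 to 25.5 years over 30 s: the percept at t = 30 s
# lags behind the true age of 25.5 y, i.e. is shifted toward the past.
samples = [(s, 13 + (25.5 - 13) * s / 30) for s in range(31)]
print(round(perceived(samples, 30, 10), 1))   # noticeably younger than 25.5
```

A larger T (more noise, longer averaging) pulls the percept further into the past, in line with the increased attraction found in Experiment 2.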

Experiment 3: Central tendency bias not involved

It might be argued that the results are due to a central tendency bias, i.e., the tendency to rate test faces as being close to middle age, independent of movie content.

To test this, Experiment 3 replicated the same conditions as Experiment 1, but the linear increase/decrease in the age of the face was replaced with a more complex increase/decrease using staircase functions, leaving intact the starting and ending points of the movies (young and old).

  1. The attraction index gradually decreased with a decreasing number of age steps in the movie, thus showing that the illusion is not only due to a simple response or central tendency bias but also strongly depends on the whole content of the face morphing movie.
  2. The attraction index was computed with the last 6, 18, and 30 seconds of the video preceding the test face. Attraction linearly increased with increasing video duration, thus showing that the attraction effect involves all parts of the preceding video.
These results seem to be consistent with the averaging hypothesis. If a Gaussian distribution can be used to model the averaging, the parameter T characterizing the width of the distribution was at least of order T = 30 seconds, and the distribution was rather flat in this range.

Experiment 4: Temporal strength/range of illusion

If the illusion is due to the proposed active mechanism of perceptual serial dependence, it should occur over a broad temporal range, in accordance with previous findings.

In experiment 4 the temporal strength of the stability illusion with an interstimulus interval (I.S.I.) of 0, 1, 5, 10, and 15 seconds between the movie and test face was measured.

Test face age ratings were attracted toward the movie at all intervals, thus showing that the stability illusion extends across a large period of time. These results also show that, without intervening trials, the serial dependence magnitude extends over a larger period of time than previously shown.

Experiment 5: Face feature similarity

The previous serial dependence literature on face stimuli suggests that the stability illusion should be determined by face feature similarity and should occur only when the face morphing movie and the test face are similar (belong to the same person and, even more, have very nearly the same age).

Unlike previous passive change-blindness based explanations, any modulation of the illusion respecting feature similarity would be consistent with serial dependence and would make it possible to predict the perceived age Tp of the test face.

In Experiment 5, a movie of a face morphing from young to old was presented, and after an interval of 1 second, the age of the static test face was varied by making it younger or older than the original old test face. On the basis of the known tuning of serial dependence for face similarity, three predictions were formulated.

  1. Stability illusion should occur only with faces similar in age to the test face and not between dissimilar faces. It was found that the old test face was rated as younger (attraction effect) only for a few similar identities that were most similar to the old face; the attraction disappeared for more dissimilar identities.
  2. As the old test face was perceived as being ≈ 20 years old after watching the movie, it was predicted that, when a reference face that is 20 years old is used as a test face after the movie, the degree of attraction for that face should be zero. No attraction for a test face of 20 years of age was found.
  3. Test faces younger than ≈ 20 years old should be perceived as older, because the movie content contains older identities across the duration of the morph movie and, hence, should bias test face perception toward older ages. When the test face was younger, it was rated as older than it actually was.
The results and predictions were very well captured by a two-parameter derivative-of-Gaussian model, in accordance with previous results and with the ideal observer models proposed in the serial dependence literature.

2. TGD based explanation for the findings

TGD inspired quantum theory of consciousness is a generalization of quantum measurement theory that overcomes its basic problem, caused by the conflict between the determinism of unitary time evolution and the non-determinism of state function reduction (see for instance this). Zero energy ontology (ZEO) as an ontology of quantum theory (see this and this) plays a crucial role and leads to the proposal that the perceptive field is 4-dimensional, so that one can speak of a 4-D brain. This leads to a general vision about sensory perception and memory.

In the TGD framework, the question why the perceived world looks smooth is encountered already at quantum level. ZEO predicts two kinds of state function reductions (SFRs).

  1. In "Big" SFRs (BSFRs) the arrow of time changes. In ZEO this explains, in all scales, why the world looks classical to an observer whose arrow of time is opposite to that of a system produced in a BSFR (see this).
  2. Sensory perceptions correspond naturally to "small" SFRs (SSFRs). SSFRs are the TGD counterparts of the weak measurements of quantum optics, and their sequences define what in wave mechanics would correspond to a repetition of the same measurement (Zeno effect). Therefore one can hope that the problem disappears at the quantum level.

    One must however understand why the perceived world seems to evolve smoothly although it does not.

The TGD based explanation for stability illusion and serial dependence relies on the general assumptions of TGD inspired theory of consciousness.
  1. TGD inspired theory of consciousness predicts the notion of a self hierarchy (see this). Self has subselves, which in turn have their own subselves; these are the sub-subselves of the original self. Self experiences its subselves as separate mental images determined as averages over their subselves. There are therefore three levels involved: self, subself, and sub-subself. The self hierarchy is universal and appears in all scales, and one can ask whether the super-ego--ego--Id triple of Freud could be interpreted in terms of this hierarchy.

    The correspondences are therefore "We" ↔ self; mental image ↔ subself; subself as mental images of self ↔ average over sub-subselves.

  2. In accordance with the vision of the 4-D brain, ZEO makes possible the temporal ensemble of mental images as a basic element of quantum consciousness. No separate neural mechanism for forming the temporal ensemble is needed: its generation is a basic aspect of the quantum world.
  3. The perception (subself) as a mental image is identified as a kind of temporal average over time=constant perceptions (sub-subselves), which basically correspond to quantum measurements and can be identified in ZEO as "small" state function reductions (SSFRs). A continuous stream of consciousness would replace the Zeno effect.

    The averaging smooths out various fluctuations (to which also SSFRs contribute at quantum level) and subselves as temporal averages over sub-subselves give rise to an experience of a smoothly changing world. The conscious sensory perception at "our" level is not about time=constant snapshot but an average over this kind of snapshots.

Consider now a model for the stability illusion and various aspects of serial dependence. In the following Tr resp. Tp denotes the real resp. perceived age (after seeing the video) of the face. T denotes the total age range. tk denotes the time associated with kth video picture and tmax the total duration of the video.
  1. Sub-subselves in the experiments of Manassi and Whitney correspond to video snapshots at t=tk<tmax. The subself at t=tk corresponds to a statistical average Mk of the video snapshots at tr, 0<r<k. At t=tk, "we" experience Mk. The averaging over time gives rise to an experience that is biased towards earlier perceptions. The averaging creates the smoothing of the perception and generates the illusion that the perceived mental image is shifted to the past.

    If the perceived ages Tp,k (to be distinguished from the times tk, which correspond to the real ages Tr,k=T0+kΔTr) contribute with equal weights over the age interval T, the average corresponds to the central value T0+T/2. In the general case, the average depends on the details of the distribution of Tr,k and on the distribution of weights for the tk, in accordance with the results of Experiment 3.

  2. The higher the noise level, the longer the maximal time interval tM over which the averaging must take place in order to gain reliability. This requires an active response: tM for Mk must increase with the noise level. For instance, if the weights in the average are Gaussian, the width of the Gaussian distribution must increase with the noise level. This explains the findings of Experiment 2 relating to the effects of noise.
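The averaging model sketched above can be illustrated with a toy numerical sketch. The age range, number of snapshots, and Gaussian width below are illustrative assumptions, not values taken from the experiments:

```python
import math

def perceived_age(real_ages, weights=None):
    """Perceived age as a weighted temporal average over snapshots (sub-subselves)."""
    if weights is None:
        weights = [1.0] * len(real_ages)  # flat distribution of weights
    return sum(w * a for w, a in zip(weights, real_ages)) / sum(weights)

# Movie morphs linearly from T0 = 20 to T0 + T = 60 years over N snapshots.
T0, T, N = 20.0, 40.0, 41
ages = [T0 + T * k / (N - 1) for k in range(N)]

# Flat weights: the average is the central value T0 + T/2 = 40.
flat = perceived_age(ages)

# Gaussian weights centred on the most recent snapshot with width sigma
# (in snapshot units): recent frames dominate, but the earlier frames still
# pull the perceived age below the final real age of 60.
sigma = 10.0
gauss = [math.exp(-0.5 * ((k - (N - 1)) / sigma) ** 2) for k in range(N)]
biased = perceived_age(ages, gauss)

print(flat)   # 40.0
print(biased)  # between 40 and 60: biased toward the past
```

Widening sigma (higher noise level, as in Experiment 2) pulls the biased estimate further toward the central value, while narrowing it pulls the estimate toward the most recent real age.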
Experiment 5 provides the information needed to formulate a model for what could happen in the addition of a new face at t=tN.
  1. The test face FN+1 is first experienced as a different person. After that it is checked whether FN+1 corresponds to any memory mental image Mk in the set {Mk| k = 1,...,N}. This involves memory recall besides time=constant snapshot perception.

    If FN+1 is similar to some Mk in the set {Mk| k=1,...,N}, it is added to MN and defines a new memory mental image MN+1, and there is a stability illusion.

    If it does not correspond to any Mk, it is not recognized as an already perceived face, and is not added to MN as a new memory MN+1 so that there is no stability illusion.

  2. This model explains the results of the three sub-experiments of Experiment 5 relating to face feature similarity. The second experiment however deserves a detailed comment, since it involves criticality in the sense that a small variation of the real age of FN+1 should lead to a disappearance of the stability illusion.

    Let Tp,A be the perceived age of the test face in experiment A and Tr,B the real age in the next experiment. For Tr,B = Tp,A the stability illusion is absent, whereas for Tr,B < Tp,A it is present. The situation is therefore critical.

    The proposed model explains the presence of the illusion. One can however argue that Tr,B > Tp,A rather than Tr,B = Tp,A should actually hold true, or more precisely, that there was no memory mental image Mk with Tp ≤ Tr. A small variation of Tr,B makes it possible to test whether the situation is really critical.

See the article Quantum Statistical Brain or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD. 


Sunday, February 06, 2022

Gene tectonics and TGD

"Gene tectonics" (see this and this) represents a remarkable step of progress in genetics. The study of the evolution of chromosomes involves a few basic mechanisms, such as the mixing of genes within a chromosome, the fusion of chromosomes along their ends, the insertion of a chromosome inside a chromosome, and fusion followed by permutations of genes within the composite chromosome. It makes it possible to study evolution at the level of the entire genome and to understand what the differentiation of lineages and species could correspond to at the level of the genome. It has been found that the mixing of genes occurs often and does not have drastic effects, so that one can speak of chromosome conservation, whereas mutations involving several chromosomes are rare.

These findings represent a challenge for the TGD view of genetics and, together with the recent progress in the number theoretical vision about physics, inspire fresh questions and ideas about genes and chromosomes. In particular, the question of how genes could code for biological functions reduces to the level of space-time dynamics at the number-theoretical level.

In the number-theoretical vision about TGD, biological functions would correspond to polynomials, and sequences of genes to compositions of the polynomials assignable to the genes. In zero energy ontology (ZEO), a given polynomial would define a space-time region as an analog of a deterministic classical computation, and quantum computation would involve their superposition.

See the article Gene tectonics and TGD or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, February 03, 2022

Arnold's conjecture, generalization of Floer's theory, and TGD

There is a highly interesting popular article in Quanta Magazine with the title "Mathematicians Transcend the Geometric Theory of Motion" (see this). The article explains the work of the mathematicians Abouzaid and Blomberg, which represents a generalization of Floer homology and, in popular terms, allows one to "count holes" in infinite-dimensional spaces. The loop space is replaced with a more general space.
  1. The starting point is a conjecture by Arnold related to Hamiltonian systems. These systems are defined in phase space, whose points are pairs of the position and momentum of a particle. This notion is extremely general in classical physics. Arnold's question was whether there exist closed orbits, kinds of islands of order, in the middle of oceans of chaos consisting of non-closed chaotic orbits. His conjecture was that they indeed exist and that homology theory allows us to understand them. These closed orbits would be minimal representatives of the homology equivalence classes of curves. These orbits are also critical.
  2. A 2-D example helps to understand the idea. How does one understand the homology of the torus? Morse theory is the tool.

    Consider an embedding of the torus in 3-space, standing upright. The height coordinate H defines a function on the torus with 4 critical points. H has its maximum resp. minimum at the top resp. bottom of the torus, and saddle points at the top and bottom of the "hole" of the torus. At the saddle points the level set consists of two touching circles: the topology of the level set changes. The situation is topologically critical, and the criticality signals the appearance of the "hole" in the torus. The critical points code for the homology. Outside these points the level set is a single circle or two disjoint circles.

  3. One can deform the torus and also add handles to it to obtain topologies with higher genus; the reader is encouraged to see how the height function then codes for the homology and the appearance of "holes".
  4. This situation is finite-D and too simple to apply in the case of the space of orbits of a Hamiltonian system. Now the point of torus is replaced with a single orbit in phase space. This space is infinite-dimensional and the Morse theory does not generalize as such. The work of Abouzaid and Blomberg changes the situation.
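The torus example in point 2 can be checked concretely. The following is a minimal numerical sketch (the radii R and r are arbitrary illustrative choices): parametrize an upright torus, take the height function, and verify that it has exactly four critical points with values ±(R+r) and ±(R−r).

```python
import math

# Upright torus with central radius R and tube radius r; the height
# function then has 4 isolated critical points (max, min, two saddles).
R, r = 2.0, 1.0

def height(t, s):
    # Height of the point (t, s) on the torus; t runs around the tube,
    # s around the central hole.
    return (R + r * math.cos(t)) * math.sin(s)

def grad(t, s, h=1e-6):
    # Central-difference numerical gradient of the height function.
    return ((height(t + h, s) - height(t - h, s)) / (2 * h),
            (height(t, s + h) - height(t, s - h)) / (2 * h))

# Analytically, the gradient vanishes where sin t = 0 and cos s = 0:
candidates = [(t, s) for t in (0.0, math.pi) for s in (math.pi / 2, -math.pi / 2)]
for t, s in candidates:
    gt, gs = grad(t, s)
    assert abs(gt) < 1e-6 and abs(gs) < 1e-6  # all four are critical points

values = sorted(height(t, s) for t, s in candidates)
print(values)  # [-(R+r), -(R-r), R-r, R+r] = [-3.0, -1.0, 1.0, 3.0]
```

The minimum and maximum correspond to the bottom and top of the torus, the two intermediate values to the saddle points at the bottom and top of the hole, exactly as described above.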
I do not understand the technical aspects involved with the finding but it might have direct relevance for TGD.
  1. In the TGD Universe, space-time is a 4-surface in H=M4× CP2, in a loose sense an orbit of a 3-surface. General Coordinate Invariance (GCI) requires that the dynamics associates to a given 3-surface a highly unique 4-surface on which the 4-D general coordinate transformations act. This 4-surface is a preferred extremal of the action principle determining space-time surfaces in H and is analogous to a Bohr orbit. GCI gives Bohr orbitology as an exact part of quantum theory and also holography.

    These preferred extremals as 4-surfaces are analogous to the closed orbits in Hamiltonian systems about which Arnold speculated. In the TGD Universe, only these preferred extremals would be realized, which would make TGD an integrable theory. The theorem of Abouzaid and Blomberg makes it possible to prove Arnold's conjecture in homologies based on cyclic groups Zp. Maybe it could have use also in the TGD framework.

  2. WCW generalizes the loop space considered in Floer's approach. Very loosely, a loop or string is replaced by a 3-D surface, which by holography is more or less equivalent to a 4-surface. In TGD just these minimal representatives for homology, as counterparts of the closed orbits, would matter.
  3. Symplectic structure and Hamiltonian are central notions also in TGD. Symplectic (or rather, contact) transformations assignable to the product δM4+× CP2 of the light-cone boundary and CP2 act as the isometries of the infinite-D "world of classical worlds" (WCW) consisting of these preferred extremals or, more or less equivalently, of the corresponding 3-surfaces. Hamiltonian flows as 1-parameter subgroups of isometries of WCW are symplectic flows in WCW, which has a symplectic structure and also a Kähler structure.
  4. The space-time surfaces are 4-D minimal surfaces in H with singularities analogous to the frames of soap films. Minimal surfaces are known to define representatives for homological equivalence classes of surfaces. This has inspired the conjecture that TGD could be seen as a topological/homological quantum theory in the sense that space-time surfaces serve as unique representatives of their homological classes.
  5. There is also a completely new element involved. TGD can also be seen as a number theoretic quantum theory. M8-H duality can be seen as a duality between a geometric vision in which space-times are 4-surfaces in H and a number theoretic vision in which one considers 4-surfaces in complexified octonionic M8 determined by polynomials, with the dynamics reducing to the condition that the normal space of the 4-surface is associative (quaternionic). M8 is analogous to momentum space, so that a generalization of the momentum-position duality of wave mechanics is in question.
A generalization of Floer's theory is needed to generalize Arnold's conjecture, and the approach of Abouzaid and Blomberg might make such a generalization possible.
  1. The preferred extremals would correspond to the critical points of an analog of a Morse function in the infinite-D context. In TGD the Kähler function K defining the Kähler geometry of WCW is the unique candidate for the analog of a Morse function.

    The space-time surfaces for which the exponent exp(-K) of the Kähler function is stationary (so that the vacuum functional is maximal) would define the preferred extremals. Also other space-time surfaces could be allowed, and it seems that the continuity of WCW requires this. However, the maxima, or perhaps extrema, would provide an excellent approximation, and the number theoretic vision would give an explicit realization of this approximation.

  2. A stronger condition would be that only the maxima are allowed. Since the WCW Kähler geometry has an infinite number of zero modes, which do not appear in the line element as coordinate differentials but only as parameters of the metric tensor, one expects an infinite number of maxima, and this might be enough.
  3. These maxima, or possibly also more general preferred extremals, would correspond by M8-H duality to roots of polynomials P in the complexified octonionic M8, so that a connection with number theory emerges. M8-H duality strongly suggests that exp(-K) is equal to the image of the discriminant D of P under the canonical identification I: ∑ xnpn→ ∑ xnp-n mapping p-adic numbers to reals. The prime p would correspond to the largest ramified prime dividing D (see this and this).
  4. The number theoretic vision could apply only to the maxima/extrema of exp(-K) and give rise to what I call a hierarchy of p-adic physics as correlates of cognition. Everything would be discrete, and one could speak of a generalization of computationalism allowing also a hierarchy of extensions of rationals instead of only the rationals, as in Turing's approach. The real-number based physics would also include the non-maxima via a perturbation theory involving a functional integral around the maxima. Here Kähler geometry makes it possible to get rid of the ill-defined metric and Gaussian determinants.
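The canonical identification I: ∑ xnpn → ∑ xnp-n used above can be made concrete for finite p-adic expansions. A minimal sketch (only non-negative integers are handled here; the values of n and p in the example are arbitrary illustrations):

```python
def canonical_identification(n, p):
    """Map a non-negative integer n = sum x_k p^k (its base-p expansion)
    to the real number sum x_k p^(-k), i.e. I: sum x_n p^n -> sum x_n p^(-n)."""
    if n < 0 or p < 2:
        raise ValueError("need n >= 0 and p >= 2")
    value, k = 0.0, 0
    while n > 0:
        n, digit = divmod(n, p)   # extract the base-p digit x_k
        value += digit * p ** (-k)  # send p^k to p^(-k)
        k += 1
    return value

# Example with p = 2: 6 = 0*1 + 1*2 + 1*4 maps to 0 + 1/2 + 1/4 = 0.75.
print(canonical_identification(6, 2))  # 0.75
```

The map is continuous in the p-adic norm: integers that are close p-adically (sharing many low-order digits) are sent to nearby real numbers, which is what makes it a natural bridge between p-adic and real physics.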
For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Quantum friction in the flow of water through nanotube

The popular article "Quantum friction slows water flow" (see this) explains the work of Lyderic Bockquet related to quantum friction published in Nature (see this).

In the experiments considered, water flows through very smooth carbon nanotubes. Water molecules have a diameter of .3 nm, and the radius of the nanotube varies in the range [20,100] nm. A small friction has been measured. The surprising finding is that the resistance increases with the radius of the nanotube, although large tubes are as smooth as small tubes.

In classical hydrodynamics the wall is just a wall. Now one must define this notion more precisely. The wall is made of mono-atomic graphene layers. The layers are smooth, which reduces drag, and water molecules are not adsorbed on the walls. Therefore the friction is very small but non-vanishing.

The reason is that the electrons of graphene interact with the polar water molecules, form bound states with them, and follow the flow. Catching up with the flow however takes some time, which causes resistance. In the Born-Oppenheimer approximation this is not taken into account, and electrons are assumed to adapt to molecular configurations instantaneously. For thin nanotubes the graphene layers are not so well ordered due to geometric constraints, and the number of layers, and therefore also of co-moving electrons, is smaller. This reduces the friction effect.

Could TGD help to understand the findings?

  1. Some time ago I wrote an article about quantum hydrodynamics in the TGD Universe (see this). The model for turbulence would involve the notion of dark matter as phases of ordinary matter with effective Planck constant heff= nh0>h even in macroscales. heff would characterize the "magnetic body" (MB) associated with the flow.
  2. The quantum scale L associated with the flow is proportional to heff and could characterize the MB. L could be larger than the system size but would be determined by it. One could say that MB to some degree controls the ordinary matter and its quantum coherence induces ordinary coherence at the level of the ordinary matter. Quantum effects at the level of MB are suggested to be present even for the ordinary hydrodynamic flow. The detailed mechanism is however not considered.
  3. The outcome is the prediction that kinematic viscosity is proportional to heff/m, where m is the mass of the unit of flow, now a water molecule.
  4. What could be the quantum scale L now? The scale of classical forced coherence would be the radius R of the pipe or, as the study suggests, the size scale of the system formed by water flow and the ordered graphene layers. The scale L of quantum coherence associated with MB could be larger. The larger the number of layers, the larger the size L of MB.

    From L ∝ heff, one has ν ∝ heff/m ∝ L. In conflict with classical intuitions, the friction would be proportional to L and would decrease as the pipe radius decreases. This conforms with the findings if the magnetic body associated with the electron system is the boss.

See the article TGD and hydrodynamics or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, February 02, 2022

A finding challenging the standard theory of electrolytes

I received a link to an interesting article, "Double-layer structure of the Pt(111) aqueous electrolyte interface", about the findings of Ojha et al (see this). The reader can also consult the popular presentation of the findings (see this).

The experiments demonstrate that the physics of electrolytes is not completely understood.

  1. The Pt(111)-aqueous electrolyte interface is studied in a situation in which there is a circulation of H2 molecules, part of which decay to H+ ions and electrons at the interface of the first electrode.
  2. Electrons give rise to a current flowing to the second electrode, which also involves the Pt(111) interface. There is also a proton transfer between the electrodes. At the second interface there is a circulation of O2 molecules: part of them transforms to water molecules at the interface.
  3. A double layer of positive and negative charges of some thickness, acting like a capacitor, is formed at the first interface. Two plates of this kind plus the electrolyte between them form an analog of a continually loaded battery, and an electron current runs when a wire connects the plates.
  4. The prediction of the standard theory is that when the salt concentration of the electrolyte is lowered, the current should eventually stop running at some critical salt concentration determined by the potential between the electrodes. There would be no free electrons anymore. This critical potential is called the potential of zero charge.
  5. The experimental findings produced a surprise. The potential of zero charge did not appear at the predicted salt ion concentration. A reduction of the ion concentration by a factor 1/10 was needed to achieve this. It would seem that the actual concentration of ions is 10 times higher than thought! What are these strange, invisible salt ions?
I have confessed to myself, and also publicly (see this and this), that I do not really understand how ionization takes place in electrolytes. The electrostatic energies in atomic scales associated with the electrolyte potential are quite too small to induce ionization. I might of course be incredibly stupid, but I am also incredibly stubborn and wanted to understand this in my own way.

The attempt to do something about this situation, and also the fact that "cold fusion" involves electrolytes, which no nuclear physicist in his right mind would associate with nuclear fusion, led to a totally crazy sounding proposal: electrolysis might involve some new physics predicted by TGD and making possible "cold fusion" (see this, this, and this). Electrolytes actually involve myriads of anomalous effects (see this, this, and this). Theoretical physicists of course do not take them seriously, since chemistry is thought to be an ancient, primitive precursor of modern physics.

Part of the ions of the electrolyte would be dark in the TGD sense, having effective Planck constant heff> h, so that their local interactions (describable using Feynman diagrams) with ordinary matter with heff= h would be very weak. In this sense these ions behave like dark matter, so the term "dark ion" is well-motivated. This does not however mean that galactic dark matter would be dark matter in this sense. The TGD based explanation for galactic dark matter could actually be in terms of the dark energy assignable to cosmic strings thickened to magnetic flux tubes carrying monopole flux (see this, this, and this).

  1. The presence of dark ions in water would explain the ionization in electrolytes. Water would be a very special substance in that the magnetic body of water carrying dark matter would give rise to hundreds of thermodynamic anomalies characterizing water (see this) .
  2. Biology is full of electrolytes, and biologically important ions are proposed to be dark ions (see this). As a matter of fact, I ended up with the TGD based notion of dark matter from the anomalies of biology and neuroscience; this notion emerged from the number theoretic vision about TGD much later (see this, this, and this). The Pollack effect would involve dark protons and would be in a key role in biology. The realizations of genetic codons in terms of dark proton and dark photon triplets would also be central.
  3. "Cold fusion" is one application of the TGD view about dark matter (see this). The formation of dark proton sequences gives rise to dark nuclei, perhaps even heavier ones, for which the binding energies would be much smaller than for ordinary nuclei. The subsequent transformation to ordinary nuclei would liberate essentially the ordinary nuclear binding energy.
The notion of dark matter also leads to concrete ideas about what happens in electrolysis (see this). In the TGD framework, the finding of Ojha et al would suggest that 90 per cent of ions are dark in the electrolyte considered.

See the article TGD and condensed matter or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD. 


Tuesday, February 01, 2022

Do we live in a steady state Einsteinian Universe or expanding TGD Universe?

The title of the popular article "Universe is Not Expanding After All, Controversial Study Suggests" is quite provocative. The article tells about the findings of Lerner et al described in the article "UV surface brightness of galaxies from the local universe to z ≈ 5". These findings challenge the notion of cosmic expansion.

First some basic concepts. Luminosity P is the total power of electromagnetic radiation emitted by the source per unit time. The power dP/dΩ× ΔΩ measured by an instrument spanning a solid angle ΔΩ weakens like 1/d2 with distance d. Bolometric surface brightness (SB) refers to the total radiation power per unit area at the source: SB= d2P/dSdΩ= (dP/dΩ)/S, that is, dP/dΩ divided by the area S of the source.

General relativity (GRT) based cosmology predicts that the surface brightness decreases as (1+z)-4, and therefore very rapidly. One factor of (1+z)-1 comes from the time dilation reducing the photon arrival rate. A second factor (1+z)-1 comes from the cosmic redshift of photon energy. A factor (1+z)-2 comes from the fact that the apparent angular area of the source is determined by its distance at the moment of emission, and is therefore larger by a factor (1+z)2 than it would be without expansion. If the cosmic redshift is caused by some other mechanism instead of expansion, so that one has a steady state cosmology, one has a much weaker (1+z)-1 dependence.

The findings of Lerner et al however suggest that the SB of identical spiral galaxies depends only weakly on the distance of the source. In the Einsteinian Universe this favors the steady state Universe, which is in conflict with the recent view of cosmology having strong empirical support.

In the TGD Universe, galaxies are nodes of a network formed from cosmic strings thickened to flux tubes. The light from a galaxy A to a receiver B travels only along the flux tubes connecting the source to the receiver. These flux tubes can stretch, but the amount of light arriving at B remains the same irrespective of the distance. In Einsteinian space-time this kind of channeling would not happen, and the intensity would decrease like 1/distance squared.

This mechanism would give rise to a compensating factor (1+z)2, so that the dependence of the SB on redshift would be (1+z)-2, while the SB in a static Einsteinian Universe would behave as (1+z)-1. For z ≈ 5, the TGD prediction for the SB is by a factor of 1/6 smaller than for the static Universe, whereas the GRT prediction is by a factor 1/216 smaller. The TGD prediction is thus nearer to that of the steady state Universe than to that of the expanding Universe based on the Einsteinian view of space-time.
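The redshift arithmetic above is easy to check directly (z = 5 as in the article; only the scaling factors are computed, with no further cosmological detail):

```python
# Surface brightness dimming factors at z = 5 for the three scenarios:
# static Universe, TGD with flux-tube channeling, and expanding GRT Universe.
z = 5.0
static = (1 + z) ** -1  # non-expanding Einsteinian Universe
tgd = (1 + z) ** -2     # TGD: channeling compensates a factor (1+z)^2
grt = (1 + z) ** -4     # Tolman dimming in an expanding GRT cosmology

print(tgd / static)  # 1/6 ~ 0.167: TGD relative to the static Universe
print(grt / static)  # 1/216 ~ 0.0046: GRT relative to the static Universe
```

The ratios (1+z)^-1 = 1/6 and (1+z)^-3 = 1/216 are the factors quoted in the text; the TGD prediction sits much closer to the static case than the GRT prediction does.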

The article at TGD view of the engine powering jets from active galactic nuclei provides a model for how galactic jets would correspond to this kind of flux tube connections. See also the articles Cosmic string model for the formation of galaxies and stars and TGD view about quasars.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, January 31, 2022

Some useful critical questions concerning the twistorial construction of scattering amplitudes

The details of the proposed construction of the scattering amplitudes (see this) starting from twistors are still unclear and the best way to proceed is to invent objections and critical questions.

How do the quark momenta in M8 and H relate to each other?

The relationship between quark momenta in M8 and H is not clear. M8-H duality suggests the same momentum and mass spectrum for quarks in M8 and H. However, the mass spectrum of color partial waves for quark spinors for the Dirac operator D(H) is very simple and is characterized by two integers labeling the triality t=1 representations of SU(3) (see this). D(H) does not allow a mass spectrum given by algebraic roots of polynomials.

What can one conclude if M8-H duality holds true in a strong sense so that these spectra are identical?

The only possible conclusion seems to be that the propagator in both M8 and H is just the M4 Dirac propagator D(M4) and that the roots of the polynomial P give the spectrum of off-mass-shell masses. Also tachyonic mass squared values are allowed as roots of P. The real on-shell masses would be associated with Galois singlets.

Consider first the nice features of the D(M4) option.

  1. The integration over momentum space reduces to a finite summation over virtual mass shells defined by the roots of P and one avoids divergences. This tightens the connection with QFTs.
  2. Massless propagation conforms with twistorialization, since for massless particles one obtains holomorphy in twistor variables.
  3. Massless quarks are consistent with the QCD picture about quarks.
  4. The consideration of problems related to right-handed neutrino (see this) led to the proposal that the quark spinor modes in H are annihilated only by the H d'Alembertian but not by the H Dirac operator. The assumption that on mass shell H-spinors are annihilated by the M4 Dirac operator leads to the same outcome.

    This allows different M4 chiralities to propagate separately and solves problems related to the notion of the right-handed neutrino νR (assumed to be a 3-antiquark state and modellable using leptonic spinors in H). This also conforms with the right- and left-handed character of the standard model couplings.

  5. A further argument favoring the D(M4) option comes from the following observation. Suppose that one takes seriously the idea that the situation can also be described using massless M8 momenta. This implies that for some choices of M4⊂ M8 the momentum is parallel to M4 and therefore massless in the 4-D sense. Only the quarks associated with the same M4 can interact. Hence M4 can always be chosen so that the on-mass-shell 4-momenta are light-like.
Consider next the objections against the D(M4) option.

  1. Some years ago I found that the space-time propagators for points of H connected by a light-like geodesic behave like massless propagators irrespective of mass. CP2 type extremals have a light-like geodesic as their M4 projection. This suggests that quarks associated with CP2 type extremals effectively propagate as massless particles even if one assumes that they correspond to modes of the full H Dirac operator. This allows us to consider D(H) as an alternative. For this option most quarks would be extremely massive and practically absent from the interior of the space-time surface.
  2. Since the color group acts as symmetries, one can assume that spinor modes correspond to color partial waves as eigenstates of the CP2 spinor d'Alembertian D2(CP2). One would get rid of the constraint on masses but the correlation between color and electroweak quantum numbers would still be "wrong".
  3. If the M4 Kähler form is trivial, there would be no need for the tachyonic ground state in p-adic mass calculations to reduce the quark mass squared to zero. On the other hand, this might also lead to problems since the earlier calculations were sensitive to the negative ground state conformal weight and would not work as such. This conformal weight could be generated by conformal generators with weights h coming as roots of P with a negative real part.
  4. If leptons are allowed as fundamental fermions, D(H) allows νR as a spinor mode which is covariantly constant in CP2. If leptons are not allowed, one can argue that νR as a 3-quark state can be modeled as a mode of H spinor with Kähler coupling giving correct leptonic charges.

    The M4 Kähler structure favored by the twistor lift of TGD (see this) implies that νR with negative mass squared appears as a mode of D(H). This mode allows the construction of tachyonic ground states.

    For D(M4) with an M4 Kähler coupling, one obtains for all spinor modes states with both positive and negative mass squared from the JklΣkl term. Physical on-mass-shell states with negative mass squared cannot be allowed. They would however allow one to construct the tachyonic ground states needed in the p-adic mass calculations. Note that a pair of these two modes gives a massless ground state.

Can one allow "wrong" correlation between color and electroweak quantum numbers for fundamental quarks?

For CP2 harmonics, the correlation between color and electroweak quantum numbers is wrong (see this). Therefore the physical quarks cannot correspond to the solutions of D2(H)Ψ=0. The same applies also to the solutions of D(M4)Ψ=0 if one assumes that they belong to irreducible representations of the color group as eigenstates of D(CP2).

How to construct quark states, which are physical in the sense that they are massless and color-electroweak correlation is correct?

  1. For D(H) option, the reduction of quark masses to zero requires a tachyonic ground state in p-adic mass calculations (see this). Also for D(M4) with M4 Kähler structure ground states can be tachyonic.

    Colored operators with non-vanishing conformal weight are required to make all quark states massless color triplets. This is possible only if the ground state is tachyonic, which gives strong support for the M4 Kähler structure.

  2. This is achieved by the identification of physical quarks as states of super-symplectic representations. Also the generalized Kac-Moody algebra assignable to the light-like partonic orbits, or both of these representations, can be considered. These representations could correspond to inertial and gravitational representations realized at the "objective" embedding space level and the "subjective" space-time level.

    Supersymplectic generators are characterized by a conformal weight h completely analogous to mass squared. The conformal weights naturally correspond to algebraic integers associated with P. The mass squared values for the Galois singlets are ordinary integers.

  3. It is plausible that massless color triplet states of quarks can also be constructed as Galois singlets. From these one can construct hadrons and leptons as color singlets in a larger extension of rationals. This conforms with the earlier picture about conformal confinement. These physical quarks constructed as states of a super-symplectic representation, as opposed to modes of the H spinor field, would correspond to the quarks of QCD.

    One can argue that Galois confinement allows one to construct physical quarks as color triplets for some polynomial Q, and also color singlet bound states of these with the extended Galois group of a higher polynomial P∘Q: the larger Galois group provides representations of the group Gal(P)/Gal(Q) and thereby allows representations of a discrete subgroup of the color group.

Are M8 spinors as octonionic spinors equivalent with H-spinors?

At the level of M8, octonionic spinors are natural. M8-H duality requires that they are equivalent with H-spinors. The most natural identification of octonionic spinors is as bi-spinors, which have octonionic components. Associativity is satisfied if the components are complexified quaternionic so that they have the same number of components as quark spinors in H. The H spinors can be induced to X4⊂ M8 by using M8-H duality. Therefore the M8 and H pictures fuse together.

The quaternionicity condition for the octonionic spinors is essential. An octonionic spinor can be expressed as a complexified octonion, which can be identified as the momentum p. It is not an on-mass-shell spinor. The momenta allowed in scattering amplitudes belong to the mass shells defined by the polynomial P. That the octonionic spinor has only quaternionic components conforms with the quaternionicity of X4⊂ M8 eliminating the remaining momentum components, and also with the use of D(M4).

Can one allow complex quark masses?

One objection relates to unitarity. Complex energies and mass squared values are not allowed in the standard picture based on unitary time evolution.

  1. Here several new concepts lend a hand. Galois confinement could solve the problems if one considers only Galois singlets as physical particles. ZEO replaces quantum states with entangled pairs of positive and negative energy states at the boundaries of CD and entanglement coefficients define transition amplitudes. The notion of the unitary time evolution is replaced with the Kähler metric in quark degrees of freedom and its components correspond to transition amplitudes. The analog of the time evolution operator assignable to SSFRs corresponds naturally to a scaling rather than time translation and mass squared operator corresponds to an infinitesimal scaling.
  2. The complex eigenvalues of mass squared as roots of P can be allowed when unitarity at the quark level is not required to achieve probability conservation. For complex mass squared values, the entanglement coefficients for quarks would be proportional to mass squared exponentials exp(im2λ), λ the scaling parameter analogous to the duration of time evolution. For Galois singlets the exponents would sum up to purely imaginary ones so that probability conservation would hold true.
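The cancellation mechanism in point 2 can be checked numerically: for a polynomial with rational coefficients the roots sum to a rational number, so for a Galois singlet the exponents of the factors exp(im2λ) sum to a purely imaginary number and the product is a pure phase. A minimal sketch (the polynomial below is an arbitrary illustrative choice, not a physically motivated P):

```python
import numpy as np

# Illustrative monic polynomial with rational coefficients:
# P(x) = x^3 - 2x + 5. Its roots stand in for the complex
# mass squared values m^2 of individual quarks.
coeffs = [1, 0, -2, 5]
roots = np.roots(coeffs)            # complex m^2 values

lam = 0.7                           # scaling parameter lambda
factors = np.exp(1j * roots * lam)  # exp(i m^2 lambda) per quark

# For the Galois singlet the exponents sum to i*(sum of roots)*lam.
# The sum of the roots is minus the x^2 coefficient, here 0, so the
# product of the factors is a pure phase: probability is conserved.
product = np.prod(factors)
print(abs(product))                 # ~ 1.0
```

The individual factors have moduli differing from 1 (the complex roots contribute real parts to the exponents), but these cancel pairwise between complex conjugate roots.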
See the article About TGD counterparts of twistor amplitudes or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Saturday, January 29, 2022

The mass spectrum for an iterate of polynomial and chaos theory

Suppose that the number theoretic interaction in the scattering corresponds to a functional composition of the polynomials characterizing the external particles. If the number of the external particles is large, the composite can involve a rather high iterate of a single polynomial. This motivates the study of the scattering of identical particles described by the same polynomial P at the limit of a large particle number. These particles could correspond to elementary particles, in particular IR photons and gravitons. This situation leads to an iteration of a complex polynomial.

If the polynomials satisfy P(0)=0, requiring P(x)= xP1(x), the roots of P are inherited by the iterates. In this case the fixed points x≠ 0 correspond to the points with P1(x)=1. Assume also that the coefficients are rational. Monic polynomials are an especially interesting option.

For the k:th iterate of P, the mass squared spectrum is obtained as the union of the images of the spectrum of P under the inverse iterates P-r, r< k, where the inverse of P is an n-valued algebraic function if P has degree n. This set is a subset of the Fatou set (see this) and for polynomials a subset of the filled Julia set.
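The inheritance of the roots is easy to verify numerically: if P(0)=0 and P(r)=0, then P(P(r))=P(0)=0, so the spectrum of P is contained in the spectrum of every iterate. A sketch with an arbitrary illustrative polynomial satisfying P(0)=0:

```python
import numpy as np

# Illustrative polynomial with P(0) = 0:
# P(x) = x^3 - 3x^2 + x = x * (x^2 - 3x + 1).
def P(x):
    return x**3 - 3*x**2 + x

roots_P = np.roots([1, -3, 1, 0])   # roots of P itself

# Every root r of P satisfies P(P(r)) = P(0) = 0, so the mass
# squared spectrum of P is inherited by the iterate P(P(x)).
for r in roots_P:
    assert abs(P(P(r))) < 1e-9
print("roots of P survive in the iterate")
```

The remaining roots of the iterate are the inverse images of the roots of P under the two non-trivial branches of the inverse of P.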

At the limit of large k, the limiting contributions to the spectrum approach a subset of the Julia set, defined as a P-invariant set which for polynomials is the boundary of the set for which the iteration diverges. The iteration of all roots except x=0 (massless particles) leads to the Julia set asymptotically.

All inverse iterates of the roots of P are algebraic numbers. The Julia set itself is expected to contain transcendental complex numbers. It is not clear whether the inverse iterates at the limit are algebraic numbers or transcendentals. For instance, one can ask whether they could consist of n-cycles for various values of n consisting of algebraic points and forming a dense subset of the Julia set. The fact that the number of roots is infinite at this limit suggests that a dense subset is in question.
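The approach of inverse iterates to the Julia set is the basis of the standard inverse-iteration algorithm: starting from an arbitrary point and applying randomly chosen branches of the inverse map, the backward orbit accumulates on the Julia set. A sketch for the quadratic family Q(z)=z2+c, used here as an illustrative stand-in (the inverse branches of a general P would require numerical root-finding):

```python
import cmath
import random

# Inverse iteration for Q(z) = z^2 + c: the inverse map has the two
# branches z -> +sqrt(z - c) and z -> -sqrt(z - c). A backward orbit
# under randomly chosen branches accumulates on the Julia set of Q.
random.seed(0)
c = -0.7 + 0.27j
z = 1.0 + 0.0j
points = []
for k in range(5000):
    z = cmath.sqrt(z - c) * random.choice([1, -1])
    if k > 100:                      # discard the initial transient
        points.append(z)

# The backward orbit stays in the bounded region containing the
# Julia set (for a quadratic, well inside |z| <= 2).
print(len(points), max(abs(p) for p in points))
```

Plotting the accumulated points (e.g. with matplotlib) draws the familiar fractal boundary; here only boundedness is checked.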

The basic properties of the Julia set

The basic properties of the Julia set deserve to be listed.

  1. On the real axis, the fixed points satisfying P(x)=x with |dP/dx|>1 are repellers and belong to the Julia set. In the complex plane, a point z belongs to the Julia set if |P(w)-P(z)|> |w-z| for points w near z.
  2. The Julia set is the complement of the Fatou set, which consists of domains. Each Fatou domain contains at least one critical point with dP/dz=0. On the real axis, this means that P has a maximum or minimum. The iteration of P inside a Fatou domain leads to a fixed point inside the Fatou set and inverse iteration to its boundary. The boundaries of the Fatou domains combine to form the Julia set. In the case of polynomials, the Fatou domains are labeled by the n-1 solutions of dP/dz= P1 +zdP1/dz=0.
  3. The Julia set is the closure of infinitely many repelling periodic orbits. The limit of inverse iteration leads towards these orbits. These points are fixed points of the powers Pn of P.
  4. For rational functions the Julia set is the boundary of the set consisting of points whose iteration diverges to infinity. For polynomials the Julia set is the boundary of the so-called filled Julia set consisting of points for which the iterates remain finite.
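The repeller criterion of point 1 can be verified directly for the simplest example: for P(z)=z2 the fixed point z=1 has |dP/dz|=2>1 and lies on the Julia set (the unit circle for this map), while z=0 with |dP/dz|=0 is an attractor inside a Fatou domain. A minimal check:

```python
# P(z) = z^2 has fixed points z = 0 and z = 1 (solutions of z^2 = z).
# |dP/dz| = |2z| decides their character: 0 at z = 0 (attractor in a
# Fatou domain), 2 at z = 1 (repeller, lying on the Julia set, which
# for this map is the unit circle).
def P(z):
    return z * z

def dP(z):
    return 2 * z

for z_fix in (0.0, 1.0):
    assert P(z_fix) == z_fix                     # fixed point check
    kind = "repeller" if abs(dP(z_fix)) > 1 else "attractor"
    print(z_fix, kind)
```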
Chaos theory also studies the dependence of the Julia set on the parameters of the polynomial. The Mandelbrot fractal is associated with the polynomial Q(z)= a+z2, for which the origin is a critical point, and corresponds to the boundary of the region in the a-plane containing the origin: outside the boundary the iteration of the critical point leads to infinity, inside it the iteration remains bounded.

The critical points of P with dP/dz=0 at z= zcr, located inside Fatou domains, are analogous to the point z=0 for Q(z), and the quadratic polynomial a+b(z-zcr)2, b>0, would serve as an approximation. The variation of a is determined by the variation of the coefficients of P required to leave zcr invariant.

Feigenbaum studied the iteration of the polynomial a-x2, for which the origin is an unstable critical point, and found that the variation of a leads to a period doubling sequence in which a sequence of 2n-cycles is generated (see this). The origin would correspond to an unstable critical point dP(z)/dz=0 belonging to the Julia set.
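The period doubling sequence for the map x → a-x2 can be reproduced with a few lines of code by iterating past the transient and detecting the period of the attracting cycle; for this map the first doublings occur at a=3/4 and a=5/4. A sketch (the parameter values are chosen to land in the 1-, 2- and 4-cycle windows):

```python
def cycle_length(a, n_transient=2000, n_max=64, tol=1e-6):
    """Iterate x -> a - x^2 past the transient and return the
    period of the attracting cycle (None if none is detected)."""
    x = 0.1
    for _ in range(n_transient):
        x = a - x * x
    x0 = x
    for n in range(1, n_max + 1):
        x = a - x * x
        if abs(x - x0) < tol:
            return n
    return None

# Period doubling of a - x^2: fixed point below a = 3/4,
# 2-cycle between 3/4 and 5/4, 4-cycle just above 5/4.
print(cycle_length(0.5))   # 1 (stable fixed point)
print(cycle_length(1.0))   # 2
print(cycle_length(1.3))   # 4
```

Increasing a further produces 8-, 16-, ... cycles whose bifurcation values converge geometrically with the Feigenbaum constant as the ratio.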

About physical implications

The physical implications of this picture are highly interesting.

  1. For a large number of interacting quarks, the mass squared spectrum of quarks as roots of the iterate of P in the interaction region would approach the Julia set as infinite inverse iterates of the roots of P. This conforms with the idea that the complexity increases with the particle number.

    Galois confinement forces the mass squared spectrum to be integer valued when one uses as a unit the p-adic mass scale defined by the largest ramified prime for the iterate. The complexity manifests itself only as an increase of the number of microscopic states in the interaction regions.

  2. The Julia set contains a dense set consisting of repelling n-cycles, which are fixed points of the powers Pn of P, and the natural expectation is that the mass spectrum decomposes into n-multiplets. Whether all values of n are allowed is not clear to me. The limit of a large quark number would also mean an approach to (quantum) criticality.
Two objections

There is a useful objection against this picture. M8-H duality requires the same momentum and mass spectrum for quarks in M8 and H. However, the mass spectrum of color partial waves for quark spinors in H is very simple and characterized by 2 integers labeling triality t=1 representations of SU(3) (see this). How can these pictures be consistent with each other? Do the quarks in M8 and H differ from each other and what does this mean?

To answer this question, one must ask what one means by a quark in H and in M8.

  1. There are good reasons to assume that the quark spinor modes in H are annihilated by the H d'Alembertian but not by the H Dirac operator (see this). This allows different M4 chiralities to propagate separately, solves problems related to the notion of the right-handed neutrino νR, and also conforms with the right- and left-handed character of the standard model couplings.
  2. Apart from νR, all quark color partial waves have CP2 mass scale and also the correlation between color and electroweak quantum numbers is "wrong" (see this). Therefore the physical quarks cannot correspond to the solutions of the H spinor d'Alembertian.

    The M4 Kähler structure forced by the twistor lift of TGD (see this) is part of the solution. It predicts that νR, if modeled as a mode of an H spinor with a Kähler coupling giving correct leptonic charges, has a tachyonic mass. The first guess is that the physical states contain an appropriate number of right-handed neutrinos to build a tachyonic ground state from which one can construct a massless state. A more general approach allows roots of P with a negative real part as tachyonic virtual quarks. The virtual particles of standard QFT would correspond to quarks with masses coming as roots of P, and they can also be tachyonic. Galois singlets would be analogous to on-mass-shell particles.

  3. How to construct quark states which are physical in the sense that they are massless and the color-electroweak correlation is correct? The reduction of quark masses to zero requires a tachyonic ground state in p-adic mass calculations (see this). Also colored operators are required to make all quark states color triplets.

    The solution of the problem is provided by the identification of physical quarks as states of super-symplectic representations. Also the generalized Kac-Moody algebra assignable to the light-like partonic orbits, or both of these representations, can be considered. These representations could correspond to inertial and gravitational representations realized at the "objective" embedding space level and the "subjective" space-time level.

    Supersymplectic generators are characterized by a conformal weight h completely analogous to mass squared. The conformal weights naturally correspond to algebraic integers associated with P. The mass squared values for the "physical" quarks are algebraic integers and Galois confinement forces integer-valued conformal weights for the physical states consisting of quarks. This conforms with the earlier picture about conformal confinement.

  4. These "physical" quarks constructed as states of super-symplectic representation, as opposed to modes of H spinor field, would correspond to quarks in M8. The complex momenta and mass squared values would be generated by supersymplectic generators with conformal weights h coming as algebraic integers associated with P. Most importantly, the modes of H-spinors would have integer-valued momenta and mass spectrum of the spinor d'Alembertian.
A second useful objection relates to unitarity. Complex energies and mass squared values are not allowed in the standard picture based on unitary time evolution.
  1. Here several new concepts lend a hand. Galois confinement could solve the problems if one considers only Galois singlets as physical particles. ZEO replaces quantum states with entangled pairs of positive and negative energy states at the boundaries of CD and entanglement coefficients define transition amplitudes.

    The notion of the unitary time evolution is replaced with the Kähler metric in quark degrees of freedom and its components correspond to transition amplitudes. The analog of the time evolution operator assignable to SSFRs corresponds naturally to a scaling rather than time translation and mass squared operator corresponds to an infinitesimal scaling.

  2. The complex eigenvalues of mass squared as roots of P can be allowed when unitarity at the quark level is not required to achieve probability conservation. For complex mass squared values, the entanglement coefficients for quarks would be proportional to mass squared exponentials exp(im2λ), λ the scaling parameter analogous to the duration of time evolution. For Galois singlets the exponents would sum up to purely imaginary ones so that probability conservation would hold true.
To sum up, it would seem that chaos (or rather complexity) theory could be an essential part of the fundamental physics of many-quark systems rather than a mere source of pleasures of mathematical aesthetics.

See the article About TGD counterparts of twistor amplitudes or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD. 


Wednesday, January 26, 2022

Quantum Statistical Brain

The following considerations were inspired by a popular article (see this) telling about the findings (see this) of Li et al supporting the view that neural noise carries information in the sense that it represents the uncertainty of visual short term memories, so that both the content of a memory and its uncertainty are represented. Thanks to Jouko Alanko for the link.

Does neural noise carry information about the uncertainty of visual short term memories?

The highlights of Li et al are the following:

  • Humans know the uncertainty of their working memory and use it to make decisions.
  • The content and the uncertainty of working memory can be decoded from so-called BOLD signals.
  • Decoding errors predict memory errors at the single-trial level.
  • Decoded uncertainty correlates with behavioral reports of working memory uncertainty.
It is not surprising that the states of feature detector neurons should obey a statistical distribution. It is however not obvious that the reliability of the memory should correlate with the width of this distribution and that even the subjective estimate for the reliability should reflect this width.

Does the distribution in the feature space reflect quantum non-determinism?

Could the distribution in the feature space reflect quantum non-determinism rather than the uncertainty of sensory perceptions, and somehow also the uncertainty of memories?

  1. If features as states of feature detector neurons or groups of them correspond to the outcomes of quantum measurements, they have a probability distribution. The real input to these neurons would have produced this distribution and could be estimated from it.

    The outcomes are eigenstates of the density matrix determined by the entanglement, and they are determined only apart from phase factors. For instance, in the measurement of the spin of a spin 1/2 particle the probabilities of the spin 1/2 and spin -1/2 states can be deduced for an ensemble of identical particles but the relative phase of the spin 1/2 and spin -1/2 states cannot be deduced.

  2. The interpretation of quantum measurement would differ from the classical one. Classically, and according to recent neuroscience, sensory perception means that the brain, system A, detects the state of a system B in the external world. Quantum mechanically, the entanglement between A and B is reduced in the measurement and the entangled state becomes a tensor product of eigenstates of the density matrix. The relationship between A and B is what is "measured". For an ensemble of outcomes, the probabilities of the outcomes allow one to deduce information about the entanglement before the measurement.
  3. If the reduction of the entanglement between the sensory organ and the external world can be repeated, it gives rise to a distribution of outcomes coding also the uncertainty caused by the quantum measurement. This however requires that the entanglement is regenerated between these measurements. Is this possible?
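The remark in point 1 about the unmeasurable relative phase can be illustrated with a simulated spin measurement: the outcome statistics depend only on |α|2 and |β|2, so two ensembles of states differing only by a relative phase are indistinguishable. A toy sketch (an ideal projective σz measurement is assumed):

```python
import cmath
import random

def measure_sz(alpha, beta, n, seed=0):
    """Simulate n ideal sigma_z measurements on the state
    alpha|up> + beta|down>; return the fraction of 'up' outcomes."""
    rng = random.Random(seed)
    p_up = abs(alpha)**2 / (abs(alpha)**2 + abs(beta)**2)
    return sum(rng.random() < p_up for _ in range(n)) / n

n = 100_000
# Two states differing only in the relative phase of the spin -1/2
# component give identical outcome statistics: the ensemble fixes
# the probabilities but not the phase.
p1 = measure_sz(0.6, 0.8, n)
p2 = measure_sz(0.6, 0.8 * cmath.exp(1.234j), n)
print(p1, p2)   # both ~ 0.36
```

Recovering the phase would require measuring in a rotated basis, i.e. a different reduction of the entanglement.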
The distribution of features would not reflect the uncertainty of memories but the non-determinism of the outcome in the reduction of entanglement. Interestingly, in quantum computation this kind of ensemble is produced, and the outcome of the quantum computation is deduced from the distribution of outcomes of the measurement halting the computation. The method is essentially statistical.

In the TGD framework the notion of the magnetic body (MB), using the biological body as a sensory receptor and motor instrument, emerges as a new notion. The entanglement between the magnetic body and sensory organs could be reduced in sensory perception. There is a hierarchy of levels and entanglements at them, and SFR is replaced with a cascade of SFRs proceeding from long to short scales.

Is the feature distribution realized as a temporal ensemble?

In sensory perception, the distribution of features should correspond to a distribution of states of feature detector neurons or their groups. How is this distribution realized? How does this distribution relate to the distribution of memories?

Let us consider the questions about sensory perceptions.

  1. The neuroscience based answer to the question in the case of sensory perceptions would be "As a spatial ensemble consisting of feature neurons". But how does this distribution relate to the distribution of memories?
  2. In the TGD framework, the answer would be "As a temporal ensemble". Zero energy ontology (ZEO) leads to a new view about quantum states as superpositions of deterministic time evolutions and modifies the view about quantum measurements, allowing one to circumvent the basic paradox of quantum measurement theory leading to various interpretations.

    The outcome is the notion of the 4-D brain, which suggests a temporal ensemble formed by memory mental images of the feature. In ZEO, the sequences of "small" state function reductions (SSFRs) as counterparts of so-called weak measurements would form temporal ensembles of memory mental images so that the connection with short term memory would be direct. The spatial ensemble would be replaced by a temporal ensemble experienced consciously as memories.

TGD based view about sensations and short term memories

To develop a more detailed model based on the proposed ideas, one must answer several questions in the TGD framework. What are sensory experiences, perceptions, and features in the TGD Universe? What could the phrase "statistical ensemble of features" mean? What do sensory perception as a quantum measurement and the quantum measurement itself correspond to?

The notions of sensation, perception, and feature

Sensation as the core of sensory experience must be distinguished from perception. Sensation is just the sensory awareness with nothing added. Perception involves a cognitive representation providing an interpretation of the sensory input and consists of objects and the associations and memories associated with them.

The brain is believed to analyze the sensory input from the sensory organs into features. Features are just those aspects of the input that are relevant to survival or are the target of attention. Neurons serve as feature detectors (see this).

This deconstruction process is followed by a reconstruction which proceeds upwards from features to the objects of the perceptive field so that the perceptive field decomposes into standardized mental images representing objects with various attributes (orientation and motion are such attributes). This is basically pattern recognition. Features are the basic building bricks of the sensory mental images and are not necessarily conscious to us.

The reconstruction process is analogous to first drawing a simple drawing consisting of lines and then gradually filling the picture by adding colors with varying intensities. Something analogous happens also when the sound-scape of a movie is constructed. One starts from the actual sound-scape but the outcome is quite different and very far from the original. One could say that sensory perception is essentially an artwork.

In mathematical modeling, one can speak of a feature space. Features have attributes, and the claim of the article discussed is that one can assign to features a probability distribution. The brain would not only build features but also represent this probability distribution, making it possible to estimate the reliability of the visual short term memory. It is however not clear how the distribution gives rise to a conscious experience of reliability and how the short term memory relates to the sensory perception.

Ensemble of features as temporal ensemble of memory mental images?

The probability distribution for features should be realized somehow as a statistical ensemble. One can consider two alternative options.

  1. In the standard physics framework a spatial ensemble seems to be the only possible realization. The perception would be represented as a large number of copies. The fact that the inputs in the retina are mapped in a topographic manner to various parts of the visual cortex poses strong constraints on the number and location of the copies. If there is a spatial ensemble, its neurons should form groups of nearby neurons. The problem is how the distribution of features in this ensemble can code for the reliability of sensory or memory mental images; answering this requires a theory of consciousness.
  2. In the TGD framework, the brain is 4-D and it makes sense to speak of a temporal ensemble of memory mental images. These temporal ensembles would correspond to temporal sequences of memory mental images and the distribution aspect would be automatically realized. The variance of this distribution would provide conscious experience about the reliability of the mental images. The natural interpretation would be in terms of short term memory.
For the TGD option, the sensory input to the sensory organ, say retina, would generate a temporal ensemble of visual mental images making possible short term memory. This ensemble would be characterized by a probability distribution. The probability distribution for the states of feature neurons would be a neuronal level example of this kind of distribution. Variance would be one characteristic of this distribution and characterize the reliability of short term memory. Sensory perceptions would give rise to short term memories.
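The idea that the variance of a temporal ensemble could code the reliability of a memory can be stated as a toy estimator: repeated noisy samples of one feature value yield both a best estimate (the mean) and its reliability (the variance). All names and numbers below are illustrative, not taken from the article:

```python
import math
import random

def temporal_ensemble(true_value, noise, n, seed=0):
    """Toy temporal ensemble: n noisy 'memory mental images' of a
    single feature value (all names here are illustrative)."""
    rng = random.Random(seed)
    return [true_value + rng.gauss(0, noise) for _ in range(n)]

samples = temporal_ensemble(true_value=5.0, noise=0.5, n=200)
mean = sum(samples) / len(samples)
var = sum((s - mean)**2 for s in samples) / (len(samples) - 1)

# The mean recovers the feature; the spread of the same temporal
# ensemble is the system's own estimate of the memory's reliability.
print(round(mean, 2), round(math.sqrt(var), 2))
```

The point is that a single ensemble carries both pieces of information, which is what the decoding results of Li et al would require.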

Many questions remain to be answered. How are these memory mental images generated in quantum measurements? How does the memory recall of long term memory generate a short term memory represented as a temporal ensemble of visual mental images?

  1. For instance, in the memory recall of a phone number, long term memory is involved. Somehow the memory recall creates "almost" sensory, that is virtual, perception, which suggests that a virtual sensory input from MB is involved and creates a virtual sensory perception giving rise to a visual short term memory.
  2. In the TGD framework, these virtual sensory perceptions would also make possible imagination. The virtual sensory input would come from MB to cortex and proceed to the lower levels of the brain but would not reach sensory organs except during dreams, hallucinatory states, and sensory memories (memory feats of idiot savants).
  3. The sensation associated with the sensory experience would correspond to a state function reduction (SFR) occurring in quantum measurement. But what does SFR correspond to in TGD?

    In the zero energy ontology (ZEO), the notion of SFR generalizes. There are two kinds of SFRs: "big" SFRs (BSFRs) as analogs of ordinary quantum measurements in which a large change is possible, and "small" SFRs (SSFRs) as analogs of so-called weak measurements, which are assumed in quantum optics but are not well defined in the standard quantum theory and do not appear in textbooks.

    SSFRs relate closely to the Zeno effect which states that the state of the physical system remains unaffected if the same measurement is repeated. In reality this is not quite true, and the sequence of SSFRs represents a generalization of a repeated quantum measurement allowing us to understand what really happens.

    Sensory perception would be a repetition of SSFRs following analogs of unitary time evolutions and would produce a temporal ensemble of sensory mental images giving rise to short term memory. The system would be measured, would return to almost its original state, and would be measured again. SSFR is almost a classical measurement.

See the article Quantum Statistical Brain or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.