Wednesday, February 23, 2022

Connection between dark nucleon code and bioharmony

The model of the genetic code based on bioharmony has evolved through many sidetracks (see this), but the version discussed recently (see this, this, and this) is roughly consistent with the original model and also provides a connection with the model of the dark nuclear code.

Bioharmony and resonance mechanism for dark photon communications

The faces of the icosahedron and tetrahedron (and also of the octahedron), appearing in the model of the genetic code as an icosa-tetrahedral tessellation of hyperbolic space H3 (see this), are triangles. The proposal is that they somehow correspond to 3-chords made of dark photons, which in turn represent genetic codons.

Dark 3-photons represent codons, and dark 3N-photons in turn represent genes. The communications rely on cyclotron 3N-resonance, so the vertices of the faces of the icosa-tetrahedron must contain charged particles coupling to a magnetic field. The magnetic field strengths at the flux tubes associated with the charged particles would determine the cyclotron frequencies.

Information is encoded in the frequency modulation of the cyclotron frequencies. The chords serve as addresses, much like in the programming language LISP. If the modulations of the 3N frequencies are identical and in synchrony, the outcome at the receiver, consisting of 3N charged particles, is a sequence of 3N-resonances giving rise to a 3N-pulse sequence. Nerve pulse patterns could emerge by this mechanism.

One can also consider 3N-signals for which only M<3N modulations are identical and in synchrony. In this manner communications to subsets of the receiver are possible. For instance, some subset of codons of a dark gene or dark protein can be selected as a receiver, possibly in a controlled manner. This selection could de-entangle the receiver into disentangled coherent pieces.
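A toy sketch of this addressing idea (everything here, including the discrete representation of modulations and the matching rule, is an illustrative assumption, not part of the model): a receiver responds only through the subset of its channels whose frequency modulation is identical to the sender's pattern.

```python
# Toy model of 3N-resonance addressing: each receiver channel carries a
# frequency-modulation pattern, and only channels whose modulation matches
# the sender's pattern (identical and in synchrony) respond.
# The discrete pattern representation is purely illustrative.

def resonant_channels(channels, pattern):
    """Indices of receiver channels in resonance with the sent pattern."""
    return [i for i, mod in enumerate(channels) if mod == pattern]

pattern = (0, 1, 0, -1)                       # sender's modulation sequence
receiver = [pattern, pattern, (1, 1, 0, 0),   # channels 0, 1, 3 match;
            pattern, (0, -1, 1, 0)]           # channels 2 and 4 stay silent
print(resonant_channels(receiver, pattern))   # → [0, 1, 3]
```

Only the matching subset contributes pulses, so the same signal can address different coherent pieces of the receiver depending on which channels share the modulation.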

There is a direct connection with empirical findings. Biophotons, whose origin remains poorly understood, can be identified as ordinary photons resulting from the decay of dark 3N-photons into ordinary photons.

The realization in terms of dark nucleons looks more plausible if also DtRNA and DAAs are realized in terms of the icosa-tetrahedral picture (the dark counterpart of an information molecule X will be denoted by DX). This is because amino acids are often neutral, unlike DNA nucleotides, which are negatively charged. The dark charge assignable to the icosa-tetrahedron can be controlled by pionic bonds with charges 0 and ±1, so that it can be 3 units for DDNA and vanish for amino acids. A natural proposal is that the charge of the icosa-tetrahedron compensates the charge of the amino acid and tRNA.

There are pairings of type DX-DY. The pairings DDNA-DRNA, DRNA-DtRNA and DtRNA-DAA induce the biochemical dynamics of transcription and translation. There are also pairings DX-X. DDNA-DNA and DRNA-RNA pairings are unique, whereas the DtRNA-tRNA pairing is 1-to-many and relates to the wobble phenomenon. The pairings between dark nucleon variants of biomolecules and the corresponding dark 3N-photons make possible biocommunications and control.

Details of the bioharmony model

Consider now a more detailed bioharmony model of the genetic code based on the geometries of icosahedron and tetrahedron.

  1. The icosahedron has 12 vertices and 20 faces, which are triangles. The idea is that the 12 vertices correspond to the notes of the 12-note scale. The tetrahedron has 4 vertices and 4 faces and is self-dual, whereas the dual of the icosahedron is the dodecahedron, having 20 vertices and 12 faces.
  2. The 12-note scale can be represented as a Hamiltonian cycle on the icosahedron going once through all vertices. The frequencies at neighboring vertices connected by an edge of the cycle are related by a frequency scaling of 3/2: this gives rise to the Pythagorean variant of the quint cycle.

    Octave equivalence means the identification of frequencies differing by a multiple of octaves. Octave equivalence can be used to reduce all frequencies to a single octave. If the scaling is exactly 3/2 at all steps, there is a slight breaking of octave equivalence since (3/2)^12 does not quite correspond to an integer number (7) of octaves. Pythagoras was well aware of this.

    A given cycle assigns 3-chords to the faces, defining a harmony with 20 chords assignable to the faces of the icosahedron. For the dodecahedron there is only a single harmony, with 12 chords and a 20-note scale, which could correspond to Eastern scales. For the tetrahedron the Hamiltonian cycle is unique.

  3. Icosahedral Hamiltonian cycles can be classified by their symmetries: the group of symmetries can be Z6, Z4, or Z2 (the latter acting either as a rotation by π or as a reflection).
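The slight failure of octave closure for the quint cycle is easy to check numerically (a quick illustration only):

```python
# Twelve quint steps (frequency scaling 3/2) almost close on 7 octaves;
# the mismatch is the Pythagorean comma, known already to Pythagoras.
quint_cycle = (3 / 2) ** 12        # ≈ 129.746
seven_octaves = 2 ** 7             # = 128
comma = quint_cycle / seven_octaves
print(f"(3/2)^12 / 2^7 = {comma:.5f}")  # → 1.01364, about 1.4% off closure
```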
The connection with the genetic code emerges in the following manner.
  1. The natural idea is that the faces of the icosa-tetrahedron correspond to both 3-chords and genetic DNA/RNA codons. If the orbits of faces could correspond to amino acids (AAs), the DNA codon would code for amino acid AA if the corresponding face is at the orbit corresponding to AA.
  2. One wants 64 DNA codons: the Z6, Z4 and Z2 cycles give rise to 20+20+20 = 60 DNA codons. The tetrahedron gives the remaining 4 codons.
  3. Does one obtain a correct number of AAs? Do the numbers of faces at the orbits correspond to numbers of DNAs coding for the corresponding AA?
    1. Z6 decomposes the faces into 3 6-orbits and 1 2-orbit (3×6 + 2 = 20). There are 3 AAs coded by 6 DNA codons. The 2-orbit corresponds to an AA coded by two codons.
    2. Z4 decomposes the faces into 5 4-orbits. There are 5 AAs coded by 4 codons.
    3. Z2 corresponds to 10 2-orbits, predicting 10 AAs coded by 2 codons. Together with the Z6 2-orbit there would be 11 2-orbits altogether, whereas there are only 9 AAs coded by 2 codons.

      Some kind of symmetry breaking is present, as in the case of the dark nucleon code: 2 AA doublets must split into singlets. The quadruplet (ile,ile,ile,met) coded by AUX could correspond to the doublets (ile,ile) and (met,met) such that (met,met) splits into (ile,met). In the absence of symmetry breaking one would have 11 doublets, as predicted.

  4. There are also 4 tetrahedral codons.

    There is a (stop,stop) doublet (UAA,UAG) and a (stop,trp) doublet (UGA,UGG). These doublets could correspond to the faces of the tetrahedron. Only one face would code for an amino acid in the vertebrate code. Would the other faces lack a corresponding tRNA?

    For bacterial codes the situation can be different: pyl and sec appear as exotic amino acids. Could (UAA,UAG) code for (stop,pyl) and (UGA,UGG) for (sec,trp) instead of (stop,trp)? Orientation preserving rotations of the tetrahedron form a 12-element group having Z2 and Z3 as subgroups. For Z2 the orbits consist of 2 vertices and for Z3 of 3 vertices (a face) and 1 vertex. Z3 symmetry could correspond to trp as a singlet and the vertebrate stop codons as a triplet. For bacterial pyl and sec, Z2 with symmetry breaking is suggestive.
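The counting above can be cross-checked with a short script (the orbit decompositions are taken directly from the text):

```python
# Orbit decompositions of the 20 icosahedral faces under the symmetry
# groups of the three Hamiltonian cycles, plus the 4 tetrahedral faces.
orbit_sizes = {
    "Z6": [6, 6, 6, 2],      # 3 AAs coded by 6 codons + 1 doublet
    "Z4": [4, 4, 4, 4, 4],   # 5 AAs coded by 4 codons
    "Z2": [2] * 10,          # 10 doublets
}
for group, sizes in orbit_sizes.items():
    assert sum(sizes) == 20, f"{group} must cover all 20 faces"

codons = sum(sum(s) for s in orbit_sizes.values()) + 4  # + tetrahedron
doublets = sum(s.count(2) for s in orbit_sizes.values())
print(codons, doublets)  # → 64 codons, 11 doublets before symmetry breaking
```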

Bioharmony, dark nucleon code, and icosa-tetrahedral code as a tessellation of H3

The bioharmony model involves the icosahedron and the tetrahedron. This looks ugly unless there is some really deep reason for their emergence. One can also ask why the octahedron, which also has triangular faces, is not included.

Hyperbolic 3-space H3 has interpretations as a mass shell of Minkowski space M4 at the level of M8 and as a light-cone proper-time constant surface at the level of H. The 4-surface X4 in M8 contains mass shells of M4 corresponding to the roots of the polynomial P defining X4. Hence one expects that H3 plays a key role in quantum TGD, with discretized momenta defining a cognitive representation: the momenta are algebraic integers associated with the extension of rationals defined by P. H3 has infinite discrete subgroups of the Lorentz group as isometries, analogous to the discrete groups of translations in E3, and H3 allows an infinite number of tessellations (lattices).

Perhaps the simplest tessellation is the icosa-tetrahedral tessellation, which involves also octahedra and thus all Platonic solids with triangular faces. This tessellation could give rise to the genetic code by induction of the tessellation to 3-surfaces or lower-dimensional objects such as linear biomolecules and cell membranes (see this). I do not however understand the mathematical details well enough, but the following discussion is general.

Consider first the model for DDNA and DRNA allowing us to understand the connection between dark nucleon and dark photon realization of the genetic code physically.

  1. The realization of DDNA/DRNA/DtRNA/DAA could correspond to a sequence of icosahedron-tetrahedron pairs at H3, contained by the 4-surface X4 ⊂ M8 and its image in H, which is also an H3.
  2. Each icosa-tetrahedron would contain a dark codon realized both as a face and dark nucleon triplet associated with it. The dark photon chord associated with the face must be the same as the codon defined by dark nucleon triplet. The dark nucleon triplets correspond to cyclotron frequency triplets, which in turn correspond to dark photon 3-chords associated with the Hamiltonian cycles.
  3. The cyclotron frequencies are determined by magnetic fields at flux tubes so that Hamilton cycles must correspond to flux tube patterns. The simplest hypothesis is that the Hamilton cycle is a closed flux tube connecting all vertices of the icosahedron. Dark codon triplet corresponds to a face with 3 flux tube edges.

    The simplest option is that the flux tubes at the edges of a face determine the cyclotron frequencies defining the dark codon in bioharmony. The variation of the flux tube thickness implies the frequency modulation crucial for communications.

    The realization of the Hamilton cycle requires that the magnetic field strength along the cycle is scaled by a factor of 3/2 at each step to give a quint cycle.

  4. An interesting question relates to the relation between a DDNA strand and its conjugate. The change of the orientation of the Hamiltonian cycle changes the chords of the harmony. For the ordinary 8-note scale one can roughly say that major and minor chords are transformed to each other. The orientation reversal could correspond to time reversal. The fact that the orientations of the two DNA strands are opposite suggests that DNA and conjugate DNA are related by the orientation reversal of the Hamiltonian cycle, inducing the map G→C, U→A at the level of DNA letters. The conjugation does not imply any obvious symmetry for the corresponding amino acids, as inspection of the code table demonstrates.
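For reference, the cyclotron frequency of a particle of charge q and mass m in a field B is f_c = qB/(2πm), linear in B, so a quint cycle of flux tube field strengths maps directly to a quint cycle of cyclotron frequencies. A minimal numeric sketch (the choice of a proton and the reference field value are illustrative assumptions, not values taken from the text):

```python
import math

Q = 1.602176634e-19   # elementary charge (C)
M = 1.67262192e-27    # proton mass (kg) -- illustrative choice of ion

def cyclotron_frequency(B):
    """f_c = qB / (2*pi*m), linear in the magnetic field strength B."""
    return Q * B / (2 * math.pi * M)

B0 = 2e-5  # reference field of 0.2 gauss, chosen purely for illustration
f0 = cyclotron_frequency(B0)
f1 = cyclotron_frequency(1.5 * B0)  # one quint step along the cycle
print(f"f0 = {f0:.0f} Hz, f1/f0 = {f1 / f0:.2f}")  # the ratio is exactly 3/2
```

Because f_c is linear in B, modulating the flux tube thickness (and hence B) modulates the frequency directly, which is the mechanism the text invokes for communications.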
How could the Hamiltonian cycle determine the DtRNA codons?
  1. DRNA codons pair with 32 DtRNA codons, and DtRNA codons pair with tRNA codons in a 1-to-many manner. Therefore the DRNA-DtRNA pairing could be universal and 2-to-1, although not in a codon-wise manner. This pairing should be the same for both bioharmony and dark nucleon triplets.
  2. The pairing by 3-resonances requires that the DtRNA icosa-tetrahedron contains the DRNA codons which pair with the DtRNA codon. There would be 2 DRNA codons in the DtRNA icosahedron for most DtRNA codons and 1 codon for the DtRNAs pairing with the DAAs corresponding to met and trp. The number 32 of DtRNAs implies in the case of the icosa-tetrahedral code that there are 10+10+10 = 30 icosahedral DtRNAs and only 2 tetrahedral DtRNAs, so that two faces of the tetrahedron cannot correspond to a DtRNA codon and the corresponding DRNAs must serve as stop codons.

    One of the DtRNAs could correspond to trp. The second one would correspond to a stop codon in the vertebrate code: either the DtRNA codon is not present at all or it does not pair with tRNA. TAG and TGA can code for pyl and sec in some bacterial versions of the code, and in this case the corresponding dark DRNA codon would be represented at the DtRNA tetrahedron.

  3. For bioharmony DDNA-DAA correspondence means that AAs correspond to orbits of the faces of icosahedron under the subgroup Z6,Z4, or Z2 which could correspond to reflection or to a rotation by π.

    Since the DRNA-DtRNA correspondence is 2-to-1, although not codon-wise, the natural first guess is that Z2 orbits of the faces define the DRNA codons at the DtRNA icosahedron, so that it would contain 2 codons for most DtRNAs. At the DtRNA tetrahedron the only option is Z1, so there is a symmetry breaking.

    If Z2 corresponds to a reflection, the orbit always contains 2 codons. If Z2 corresponds to a rotation by π, it might happen that a face is invariant under the π rotation, so that the orbit would consist of a single face. Could this explain why one has (ile,ile,ile,met) instead of (ile,ile) and (met,met)? The rotation axis should go through the invariant face, and since the face is a triangle, π rotations lead out of the icosahedron. Therefore the answer is negative.

Ile-met problem deserves a separate discussion.
  1. The pairing of Z2-related DRNA faces with two different DtRNAs coding for ile and met, rather than two mets, means Z2 symmetry breaking at the level of bioharmony. Could the fact that AUG acts as a start codon relate to this? Could it be that both AUG and AUA cannot act as start codons? It is difficult to invent any reason for this.
  2. The symmetry breaking could occur in DtRNA-DAA pairing and replace Dmet with Dile. Is it possible that the 3-chords coding for ile and the second met are nearly identical, so that the resonance mechanism selects ile instead of met? Could the situation be similar for the codons coding for (stop,stop) and (stop,trp) and cause the coding of pyl or sec in some situations? The quint cycle model with octave equivalence does not quite close. Could this have some role in the problem?
  3. Since a similar ambivalence occurs for the stop codons assigned to the tetrahedral Hamiltonian cycle, one can look at that cycle more closely. In this case a given edge of the cycle corresponds to a scaling by (3/2)^3, so that 4 steps give (3/2)^12, which is slightly more than 7 octaves. For the quint scale in the Pythagorean sense, one obtains 4 notes in the same octave.

    Exact octave equivalence, corresponding to the equally tempered scale in which a half-note step corresponds to a frequency scaling by 2^{1/12}, implies that there is only one 3-chord CEG#: this would explain why there are 3 stop codons in the vertebrate code!

    If bacterial codes correspond to the Pythagorean scale, there would be two different 3-chords, since CEG# and EG#C are not quite the same. The reason is that the frequency ratios of the chords are powers of 3/2 and (3/2)^12 does not quite equal 2^7. This situation is completely exceptional.

    In the quint scale there are small differences between the 4 chords. Could this explain why only one of these 3-chords codes for an AA (trp) in the vertebrate code, and pyl or sec is coded instead of stop in bacterial codes? Amusingly, the chord CEG# ends many Finnish tangos and therefore acts like a stop codon!

    Could bacteria have perfect pitch and live in a Pythagorean world? Could the transition to multicellulars mean the emergence of an algebraic extension of rationals containing 2^{1/12} ≈ 1.0595, which is considerably larger than (3/2)^12/2^7 ≈ 1.0136? Could people with perfect pitch have in their dark genome parts using the Pythagorean scale, or can they tune the magnetic flux tube radii to realize the Pythagorean scale?

  4. Could the ile-met problem have a similar solution? The chords associated with ile and met would differ by a scaling with (3/2)^3 or (3/2)^6 using octave equivalence. These chords are not quite the same: could it happen that the 3-chord associated with the second met is nearer to that for ile? These 3-chords do not contain quint scaling and should correspond to the special chords for which no edge belongs to a Hamiltonian cycle.
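One way to make the near-degeneracy of the tetrahedral chords concrete (an illustrative calculation, not part of the argument above): reduce the four notes generated by repeated (3/2)^3 scalings to a single octave and compare their spacings with the exactly even equal-tempered case.

```python
import math

def reduce_to_octave(r):
    """Octave equivalence: bring a frequency ratio into [1, 2)."""
    return r / 2 ** math.floor(math.log2(r))

# Four notes of the tetrahedral cycle: steps of (3/2)^3, reduced to one octave.
pyth = sorted(reduce_to_octave((3 / 2) ** (3 * k)) for k in range(4))
temp = [2 ** (k / 4) for k in range(4)]  # equal temperament: exactly even

def gaps(notes):
    """Adjacent frequency ratios, wrapping around the octave."""
    return [notes[k + 1] / notes[k] for k in range(3)] + [2 / notes[3]]

print("Pythagorean:", [round(g, 4) for g in gaps(pyth)])  # one gap is wider
print("tempered:   ", [round(g, 4) for g in gaps(temp)])  # all gaps 2^(1/4)
```

In the tempered case all four gaps equal 2^(1/4), so the cyclically shifted chords coincide; in the Pythagorean case one gap is wider by the comma, so the four chords differ slightly, as claimed above.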
Also DtRNA-DAA pairing is based on the 3-resonance.
  1. The DAA icosahedron must contain the DtRNA codons pairing with the DAA. This raises the question of whether DDNAs could have a direct resonant coupling to DAAs. Could this pairing occur in transcription (see this), so that pieces of DDNA and DAA associated with an enzyme involved could pair with each other by 3N-resonance at the DDNA-DAA level? At the chemical level the base-amino acid interactions are extremely complex, involving stereochemistry and the formation of hydrogen bonds (see this), so that the reduction of these interactions to 3N-resonance would mean a huge simplification.
  2. Could this resonance pairing serve as a universal mechanism of bio-catalysis and take place for various enzymes and ribozymes? Examples are the promoters and enhancers involved in transcription. Enhancers and promoters induce a highly non-local process generating a chromosome loop in which two portions of DNA become parallel and close to each other; dark 3N-photons could explain the non-locality as an outcome of quantum coherence in long length scales.
  3. Why would DDNA-DAA pairing not occur? 3N-resonance relies on cyclotron frequencies and therefore on the magnetic field strengths determined by the radii of the monopole flux tubes. One explanation would be that the frequency scales of DAA and DDNA are slightly different. Could the attachment of DRNA to the translation machinery scale the magnetic field strengths of the flux tubes and their cyclotron frequencies so that only DRNA-DtRNA and DtRNA-DAA couplings are possible?
See the article The realization of genetic code in terms of dark nucleon and dark photon triplets or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, February 16, 2022

The realization of genetic code in terms of dark nucleon triplets

I have worked for more than 10 years with a proposal for a realization of the genetic code in terms of dark proton or nucleon triplets forming closed or open strings. I have considered several variants of the code but the details have remained poorly understood and I have spent a considerable time on wrong tracks. Also the contents of this chapter reflect this wandering.

It however seems that the dust is finally settling (I am writing this at the beginning of 2022). One can see the model as a generalization of the quark model of the nucleon and Δ baryons, obtained by replacing u and d quarks with dark nucleons. The statistics problem, solved for the Δ baryon by the color group, is in the recent case solved by Galois confinement involving the Galois group Z3 assignable to the codons.

The nucleons are connected by pionic flux tubes to form a closed string-like entity carrying angular momentum 0, 1, or 2. The dark variants DDNA, DRNA, DtRNA, DAA of DNA, RNA, tRNA, and amino acids (AA) follow as a prediction. AAs correspond to non-rotating analogs of N and Δ, DNA and RNA to rotating analogs of Δ, and tRNA to rotating analogs of N.

Also the pairings between dark information molecules can be understood to a high degree: the differences between DNA and RNA could reduce to the difference between DDNA and DRNA due to the violation of the weak isospin symmetry. The almost exact T-C and A-G symmetries with respect to the third letter of the genetic codon could also be seen as a reflection of an almost exact isospin symmetry (see this). The number of DtRNAs is the minimal 32, and this predicts the 1-to-many character of the DtRNA-tRNA pairing, which would induce wobble base pairing.

See the article The realization of genetic code in terms of dark nucleon triplets or the chapter About the Correspondence of Dark Nuclear Genetic Code and Ordinary Genetic Code.


Sunday, February 13, 2022

Homology of "world of classical worlds" in relation to Floer homology and quantum homology

One of the mathematical challenges of TGD is the construction of the homology of the "world of classical worlds" (WCW). With my rather limited mathematical skills, I had regarded this challenge as a mission impossible. The popular article in Quanta Magazine with the title "Mathematicians transcend the geometric theory of motion" (see this) however stimulated attempts to think about whether it might be possible to say something interesting about WCW homology.

The article told about a generalization of Floer homology by Abouzaid and Blumberg (see this), published as a 400-page article with the title "Arnold Conjecture and Morava K-theory". This theory transcends my mathematical skills, but the article stimulated the idea that WCW homology might be obtained by an appropriate generalization of the basic ideas of Floer homology (see this).

The construction of WCW homology as a generalization of Floer homology looks rather straightforward in the zero energy ontology (ZEO) based view of quantum TGD. The notions of ZEO and causal diamond (CD) (see this and this), the notion of preferred extremal (PE) (see this and this), and the intuitive connection between the failure of strict determinism and criticality pose strong conditions on the possible generalization of Floer homology.

WCW homology group could be defined in terms of the free group formed by preferred extremals PE(X3,Y3) for which X3 is a stable maximum of Kähler function K associated with the passive boundary of CD and Y3 associated with the active boundary of CD is a more general critical point.

The stability of X3 conforms with the TGD view of state function reductions (SFRs) (see this). The sequence of "small" SFRs (SSFRs) at the active boundary of the CD, as the locus of Y3, increases the size of the CD and gradually leads to a PE connecting X3 with a stable 3-surface Y3. Eventually a "big" SFR (BSFR) occurs and changes the arrow of time, and the roles of the boundaries of the CD change. The sequence of SSFRs is analogous to the decay of an unstable state to a stable final state.

The identification of PEs as minimal surfaces with lower-dimensional singularities as loci of instabilities implying non-determinism makes it possible to assign to the set PE(X3,Y3i) the numbers n(X3,Y3i→Y3j), counting the instabilities of singularities leading from Y3i to Y3j, and to define the analog of the criticality index (the number of negative eigenvalues of the Hessian of a function at a critical point) as the number n(X3,Y3i) = ∑j n(X3,Y3i→Y3j). The differential d defining WCW homology is defined in terms of n(X3,Y3i→Y3j) for pairs Y3i, Y3j such that n(X3,Y3j) − n(X3,Y3i) = 1 is satisfied. What is nice is that WCW homology would have direct relevance for the understanding of quantum criticality.
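In a more compact notation, the verbal definition above reads (a restatement only; the ordering convention in the sum is my guess at the intended Morse-theoretic analogy):

```latex
n(X^3, Y^3_i) = \sum_j n(X^3, Y^3_i \to Y^3_j), \qquad
d\,[Y^3_i] = \sum_{\substack{j \\ n(X^3,Y^3_j) - n(X^3,Y^3_i) = 1}}
  n(X^3, Y^3_i \to Y^3_j)\, [Y^3_j] .
```

This mirrors the Morse-Floer differential, which counts flow lines between critical points whose indices differ by one.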

The proposal for WCW homology also involves a generalization of the notion of quantum connectivity, crucial for the definition of Gromov-Witten invariants: the idea that two surfaces (say branes) can be said to intersect if there is a string world sheet connecting them generalizes. In ZEO, quantum connectivity translates to the existence of a preferred extremal (PE), which by the weak form of holography is almost unique, connecting the 3-surfaces at the opposite boundaries of the causal diamond (CD).

See the article Homology of "world of classical worlds" in relation to Floer homology and quantum homology or the chapter with the same title.


Wednesday, February 09, 2022

Are we living in the past?

There was an inspiring popular article published in Science Times (see this) with a long title "Are We Living In the Past? New Study Shows Brain Acts Like A Time Machine That Brings Us 15 Seconds Back". It caught my attention because the basic prediction of TGD inspired theory of consciousness is that the perceptive field is 4-dimensional rather than 3-D time=constant snapshot as in standard neuroscience.

The research article by Mauro Manassi and David Whitney (see this) with title " Illusion of visual stability through active perceptual serial dependence" suggests that visual perception is a kind of temporal average over a time interval, which can be even longer than 15 seconds.

1. The findings of Manassi and Whitney

1.1 Motivating question

"Why do the objects in the world appear to be so stable despite constant changes in their retinal images?" was the question that motivated the work of Manassi and Whitney. Retinal images continuously fluctuate because of sources of internal and external noise. Retinal image motion, occlusions and discontinuities, lighting changes, and perspective changes and many other sources of noise are present. However, the objects do not appear to jitter, fluctuate, or change identity from moment to moment. Why does the perceived world change smoothly over time although the real world does not?

This problem is also encountered in quantum consciousness theories. If conscious experience consists of a sequence of non-deterministic quantum jumps as moments of consciousness, it is not at all clear how a smooth stream of consciousness is possible.

One modern explanation for the smoothness of conscious experience is some kind of change blindness or inattentional blindness. The finite capacity of visual short-term memory is certainly a fact and forces a finite perceptive resolution and effectively eliminates too fast temporal gradients. This finite resolution poses limits in perceptual, decisional and memory processing. This would naturally apply also to other sensory memories.

In the standard view sensory percept corresponds to a time=constant snapshot of the physical world. The basic prediction is that the object at a given moment of time is the real object but in a finite perceptive resolution.

The alternative hypothesis studied in the article is that the visual system, and presumably also other sensory systems, use an active stabilization mechanism, which manifests as a serial dependence in perceptual judgments. Serial dependence causes objects at any moment to be misperceived as being more similar to those in the recent past. The serial dependence has been reported in the appearance of objects, perceptual decisions about objects, and the memories about objects. In all of these examples, serial dependence is found for random or unpredictable sequential images.

This raises the question of whether one can understand the serial dependence by identifying the conscious perception at a given time as a weighted temporal average of the preceding time=constant percepts over some time interval T, and what additional assumptions are needed to understand the other findings related to the phenomenon.

1.2 The experiments demonstrating the serial illusion

The article describes 5 experiments related to the serial illusion. The experiments are described in detail in the article with illustrations (see this), and in the sequel I summarize them only very briefly. The reader is strongly encouraged to read the original article, which provides illustrations and references to the literature on the serial illusion.

Experiment 1: shift of the perception to past

In Experiment 1 the shift of the perception to the past was demonstrated.

  1. 2 separate groups of 44 and 45 participants rated the age of a young or old static face embedded in a blue frame (13 and 25.5 years, respectively). This gave a distribution of ratings around some mean identified as the real age of the face. The rating of the static face alone is referred to as the reference face.
  2. A third group of 47 independent participants were presented with a movie of a face that morphed gradually, aging from young to old. These observers then rated the age of the old face. The rating of the static face preceded by the movie is referred to as the test face. The last frame of the video was identical to the reference face.
  3. The age ratings between physically identical static faces, either alone (reference face) or with a preceding video (test face) were compared. Although the test and reference faces were identical, the old test face, seen after the video, was rated as 5 years younger than the old reference face, seen without the video (20.2 versus 25.5 years).
  4. One can argue that the stability illusion is due to a simple unidirectional bias in age ratings. Therefore a fourth group of 45 new participants watched a movie of a face that gradually morphed from old to young. Following the movie, observers rated the age of a young static test face embedded in a blue frame. The young face was rated as 5 years older than its actual age (18.4 versus 13 years). Therefore the stability illusion can cause faces to appear younger or older depending on the previously seen faces.
These findings are consistent with the temporal averaging hypothesis.

Experiment 2: the effect of noise

Noise is known to increase serial dependence. Whether this is the case also for the stability illusion was tested. Stimuli with and without noise were presented to separate groups of observers. As a measure of the stability illusion strength, the attraction index, defined as the bias in age ratings toward the beginning of the movie, was introduced.

  1. A measure of the stability illusion strength, the attraction index, was introduced. The attraction index is defined as ΔT/T, ΔT = |Tr − Tp|, where Tr is the real and Tp the perceived age of the test face, and T is the total age range. Real age refers to the average perceived age in the experiment without the preceding video.
  2. When the movie and test face were presented alone or with superimposed dynamic noise, the static test face ratings were attracted toward the movie by 28% and 42%, respectively.
  3. When the movie was presented with increasing dynamic noise and the test face with high noise, the attraction was around 48%.

    The results conform with the earlier finding that serial dependence in perception increases with noise and uncertainty. As the increasing dynamical noise yielded the strongest illusory effect, it was used across subsequent experiments.

Why should the increase of the noise increase the strength of the illusion stability? Suppose that the perception is average over time=constant perceptions from a time interval T. For instance, one could think of a Gaussian distribution for the weights of the contributions over the interval T. It would seem that T gets longer in the presence of noise in order to achieve reliability.
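The averaging hypothesis can be turned into a toy model (a sketch only: the Gaussian weighting, frame rate, and window widths are illustrative assumptions, not values fitted to the experiments). The perceived age of the final face is a weighted average over the preceding time=constant percepts, and widening the window T, as noise is assumed to do, strengthens the attraction toward the past:

```python
import math

def perceived(ages, T):
    """Gaussian-weighted average over past time=constant percepts.

    ages[-1] is the current stimulus; a percept lagging by `lag` frames
    gets weight exp(-lag**2 / (2 * T**2)). T plays the role of the
    averaging window discussed in the text.
    """
    n = len(ages)
    w = [math.exp(-((n - 1 - i) ** 2) / (2 * T ** 2)) for i in range(n)]
    return sum(wi * a for wi, a in zip(w, ages)) / sum(w)

# A 30-frame movie morphing linearly from 13 to 25.5 years (1 frame/s).
movie = [13 + (25.5 - 13) * t / 29 for t in range(30)]

for T in (5, 15):  # a wider window models stronger noise
    print(f"T = {T:2d} s: perceived {perceived(movie, T):.1f} y "
          f"(true {movie[-1]} y)")  # perceived age lags behind the true age
```

With the wider window the rating is pulled further toward the young start of the movie, qualitatively matching the stronger attraction index found under noise.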

Experiment 3: Central tendency bias not involved

It might be argued that the results are due to a central tendency bias, i.e., the tendency to rate test faces as being close to middle age, independent of movie content.

To test this, Experiment 3 replicated the same conditions Experiment 1 but linear increase/decrease in the age of the face was replaced with a more complex increase/decrease using staircase functions leaving intact the starting and ending points of the movies (young and old).

  1. The attraction index gradually decreased with a decreasing number of age steps in the movie, thus showing that our illusion is not only due to a simple response or central tendency bias but also strongly depends on the whole content of the face morphing movie.
  2. The attraction index was computed with the last 6, 18, and 30 seconds of the video preceding the test face. Attraction linearly increased with increasing video duration, thus showing that the attraction effect involves all parts of the preceding video.
These results seem to be consistent with the averaging hypothesis. If a Gaussian distribution can be used to model the averaging, the parameter T characterizing the width of the distribution was at least of order T = 30 seconds, and the distribution was rather flat in this range.

Experiment 4: Temporal strength/range of illusion

If our illusion is due to the proposed active mechanism of perceptual serial dependence, it should occur on a broad temporal range in accordance with previous findings.

In experiment 4 the temporal strength of the stability illusion with an interstimulus interval (I.S.I.) of 0, 1, 5, 10, and 15 seconds between the movie and test face was measured.

Test face age ratings were attracted toward the movie at all intervals, thus showing that the stability illusion extends across a large period of time. These results also show that, without intervening trials, the serial dependence magnitude extends over a larger period of time than previously shown.

Experiment 5: Face feature similarity

The previous serial dependence literature on face stimuli suggests that stability illusion should be determined by face feature similarity and should occur only when the face morphing movie and test face are similar (belong to the same person, and even more, have very nearly the same age).

Unlike previous passive, change-blindness based explanations, any modulation of the illusion by feature similarity would be consistent with serial dependence and would make it possible to predict the perceived age Tp of the test face.

In Experiment 5, a movie of a face morphing from young to old was presented, and after an interval of 1 second, the age of the static test face was varied by making it younger or older than the original old test face. On the basis of the known tuning of serial dependence for face similarity, three predictions were formulated.

  1. The stability illusion should occur only with faces similar in age to the test face, not between dissimilar faces. It was found that the old test face was rated as younger (attraction effect) only for the few identities most similar to the old face; the attraction disappeared for more dissimilar identities.
  2. As the old test face was perceived as being ≈ 20 years old after watching the movie, it was predicted that, when a reference face that is 20 years old is used as a test face after the movie, the degree of attraction for that face should be zero. No attraction for a test face of 20 years of age was found.
  3. Test faces younger than ≈ 20 years old should be perceived as older, because the movie content contains older identities across the duration of the morph movie and, hence, should bias test face perception toward older ages. When the test face was younger, it was rated as older than it actually was.
The results and predictions were well captured by a two-parameter derivative-of-Gaussian model, in accordance with previous results and with the ideal observer models proposed in the serial dependence literature.

2. TGD based explanation for the findings

TGD inspired quantum theory of consciousness is a generalization of quantum measurement theory that overcomes its basic problem: the conflict between the determinism of unitary time evolution and the non-determinism of state function reduction (see for instance this). Zero energy ontology (ZEO) as an ontology of quantum theory (see this and this) plays a crucial role and leads to the proposal that the perceptive field is 4-dimensional so that one can speak of a 4-D brain. This leads to a general vision about sensory perception and memory.

In the TGD framework, the question of why the perceived world looks smooth is encountered already at the quantum level. ZEO predicts two kinds of state function reductions (SFRs).

  1. In "Big" SFRs (BSFRs) the arrow of time changes. In ZEO this explains, in all scales, why the world looks classical to an observer whose arrow of time is opposite to that of a system produced in a BSFR (see this).
  2. Sensory perceptions correspond naturally to "small" SFRs (SSFRs). SSFRs are the TGD counterparts of the weak measurements of quantum optics, and their sequences define what in wave mechanics would correspond to a repetition of the same measurement (Zeno effect). Therefore one can hope that the problem disappears at the quantum level.

    One must however understand why the perceived world seems to evolve smoothly although it does not.

The TGD based explanation for the stability illusion and serial dependence relies on the general assumptions of the TGD inspired theory of consciousness.
  1. TGD inspired theory of consciousness predicts the notion of a self hierarchy (see this). A self has subselves, which in turn have their own subselves: the sub-subselves of the original self. Self experiences its subselves as separate mental images determined as averages of their subselves. There are therefore three levels involved: self, subself, and sub-subself. The self hierarchy is universal and appears in all scales, and one can ask whether the super-ego--ego--Id triple of Freud could be interpreted in terms of this hierarchy.

    The correspondences are therefore: "We" ↔ self; mental image ↔ subself; subself as a mental image of self ↔ average over sub-subselves.

  2. In accordance with the vision of the 4-D brain, ZEO makes possible the temporal ensemble of mental images as a basic element of quantum consciousness. No separate neural mechanism for forming the temporal ensemble is needed: its generation is a basic aspect of the quantum world.
  3. The perception (subself) as a mental image is identified as a kind of temporal average over time=constant perceptions (sub-subselves), which basically correspond to quantum measurements and can be identified as "small" state function reductions (SSFRs) in ZEO. A continuous stream of consciousness would replace the Zeno effect.

    The averaging smooths out various fluctuations (to which also SSFRs contribute at the quantum level), and subselves as temporal averages over sub-subselves give rise to an experience of a smoothly changing world. The conscious sensory perception at "our" level is not a time=constant snapshot but an average over such snapshots.

Consider now a model for the stability illusion and various aspects of serial dependence. In the following, Tr resp. Tp denotes the real resp. perceived age (after seeing the video) of the face. T denotes the total age range. tk denotes the time associated with the kth video picture and tmax the total duration of the video.
  1. Sub-subselves in the experiments of Manassi and Whitney correspond to video snapshots at t = tk < tmax. The subself at t = tk corresponds to a statistical average Mk of the video snapshots at tr, 0 < r < k. At t = tk, "we" experience Mk. The averaging over time gives rise to an experience biased towards earlier perceptions. The averaging smooths the perception and generates the illusion that the perceived mental image is shifted to the past.

    If the perceived ages Tp,k, to be distinguished from the times tk corresponding to the real ages Tr,k = T0 + kΔTr, contribute with the same weight over the age interval T, the average corresponds to the central value T0 + T/2. In the general case, the average depends on the details of the distribution of Tr,k and on the distribution of weights for tk, in accordance with the results of Experiment 3.

  2. The higher the noise level, the longer the maximal time interval tM over which the averaging must take place in order to gain reliability. This requires an active response: tM for Mk must increase with the noise level. For instance, if the weights in the average are Gaussian, the width of the Gaussian distribution must increase with the noise level. This explains the findings of Experiment 2 relating to the effects of noise.
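The temporal-averaging model above can be illustrated with a minimal numerical sketch. The Gaussian weighting, the one-second frame spacing, and the linear 20-to-50-year morph are purely illustrative assumptions, not parameters taken from the experiments:

```python
import math

def perceived_age(frame_ages, t_M, dt=1.0):
    """Weighted average of the ages shown in the frames, with Gaussian
    weights of width t_M over the time before the last frame.  A wider
    window (larger t_M) pulls the percept further toward earlier frames."""
    n = len(frame_ages)
    weights = [math.exp(-(((n - 1 - k) * dt) ** 2) / (2 * t_M ** 2))
               for k in range(n)]
    return sum(w * a for w, a in zip(weights, frame_ages)) / sum(weights)

# A linear young-to-old morph: ages 20..50 shown over 31 one-second frames.
ages = [20 + k for k in range(31)]
print(perceived_age(ages, t_M=5.0))     # narrow window: percept near the final age
print(perceived_age(ages, t_M=1000.0))  # near-flat weights: percept near the midpoint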
Experiment 5 provides the information needed to formulate a model for what could happen when a new face is added at t = tN.
  1. The test face FN+1 is first experienced as a different person. After that it is checked whether FN+1 corresponds to any memory mental image Mk in the set {Mk | k = 1,...,N}. This involves memory recall besides the time=constant snapshot perception.

    If FN+1 is similar to some Mk in the set {Mk | k = 1,...,N}, it is added to MN and defines a new memory mental image MN+1, and there is a stability illusion.

    If it does not correspond to any Mk, it is not recognized as an already perceived face, and is not added to MN as a new memory MN+1 so that there is no stability illusion.

  2. This model explains the results of the 3 sub-experiments of Experiment 5 relating to face feature similarity. The second sub-experiment however deserves a detailed comment since it involves criticality in the sense that a small variation of the real age of FN+1 should lead to a disappearance of the stability illusion.

    Let TA,p be the perceived age of the test face in experiment A and TB,r the real age in the next experiment. For TB,r = TA,p the stability illusion is absent, whereas for TB,r < TA,p it is present. The situation is therefore critical.

    The proposed model explains the presence of the illusion. One can however argue that TB,r > TA,p rather than TB,r = TA,p should actually hold true, or more precisely, that there was no memory mental image Mk with Tp ≤ Tr. A small variation of TB,r makes it possible to test whether the situation is really critical.
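The similarity-gated memory update described in the two cases above can be written as a small sketch. The numeric ages and the similarity threshold below are purely illustrative assumptions:

```python
def update_memories(memories, new_age, threshold=10.0):
    """If the new face's age is within `threshold` years of some stored
    memory Mk, merge it into the running average (stability illusion);
    otherwise the face is treated as unrecognized and nothing is stored.
    The threshold value is an illustrative assumption."""
    if any(abs(new_age - m) <= threshold for m in memories):
        merged = (sum(memories) + new_age) / (len(memories) + 1)
        return memories + [merged], True   # illusion present
    return memories, False                 # illusion absent

mems = [20.0, 25.0, 30.0]
_, illusion = update_memories(mems, 33.0)   # similar face: merged, illusion
_, illusion2 = update_memories(mems, 70.0)  # dissimilar face: no illusion
print(illusion, illusion2)  # True False
```

The merged value is pulled toward the past percepts, reproducing the attraction effect; a face too dissimilar to every stored memory leaves the percept untouched.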

See the article Quantum Statistical Brain or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD. 

Sunday, February 06, 2022

Gene tectonics and TGD

"Gene tectonics" (see this and this) represents a remarkable step of progress in genetics. The study of the evolution of chromosomes involves a few basic mechanisms, such as the mixing of genes within a chromosome, the fusion of chromosomes along their ends, the insertion of a chromosome inside a chromosome, and fusion followed by permutations of genes within the composite chromosome. It allows one to study evolution at the level of the entire genome and to understand what the differentiation of lineages and species corresponds to at the level of the genome. It has been found that the mixing of genes occurs often and does not have drastic effects, so that one can speak of chromosome conservation, whereas mutations involving several chromosomes are rare.

These findings represent a challenge for the TGD point of view of genetics and, together with the recent progress in the number theoretical vision about physics, inspire fresh questions and ideas about genes and chromosomes. In particular, the question of how genes could code for biological functions reduces to space-time dynamics at the number-theoretical level.

In the number-theoretical vision about TGD, biological functions would correspond to polynomials, and genes would correspond to compositions of the polynomials assignable to them. In zero energy ontology (ZEO), a given polynomial would define a space-time region as an analog of a deterministic classical computation, and quantum computation would involve superpositions of such regions.
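The composition of polynomials mentioned above can be made concrete with a short sketch that composes two polynomials given as ascending coefficient lists (a generic illustration of polynomial composition, not TGD-specific machinery):

```python
def poly_mul(a, b):
    """Multiply polynomials given as ascending coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def poly_add(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def poly_compose(p, q):
    """Coefficients of p(q(x)), computed with Horner's scheme."""
    result = [p[-1]]
    for c in reversed(p[:-1]):
        result = poly_add(poly_mul(result, q), [c])
    return result

# x^2 composed with (x + 1) gives (x + 1)^2 = 1 + 2x + x^2
print(poly_compose([0, 0, 1], [1, 1]))  # [1, 2, 1]
```

Composition is associative but not commutative, so a composite polynomial remembers the order of its factors, much as a sequence of genes has an order.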

See the article Gene tectonics and TGD or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, February 03, 2022

Arnold's conjecture , generalization of Floer's theory, and TGD

There is a highly interesting popular article in Quanta Magazine with the title "Mathematicians transcend the geometric theory of motion" (see this). The article explains how the work of mathematicians Abouzaid and Blumberg generalizes Floer homology, which, in popular terms, allows one to "count holes" in infinite-D spaces. The loop space is replaced with a more general space.
  1. The starting point is a conjecture by Arnold related to Hamiltonian systems. These systems are defined in phase space, whose points are pairs of the position and momentum of a particle. This notion is extremely general in classical physics. Arnold's question was whether there exist closed orbits, kinds of islands of order, in the middle of oceans of chaos consisting of non-closed chaotic orbits. His conjecture was that they indeed exist and that homology theory allows us to understand them. These closed orbits would be minimal representatives for the homology equivalence classes of curves. These orbits are also critical.
  2. A 2-D example helps to understand the idea. How does one understand the homology of the torus? Morse theory is the tool.

    Consider the embedding of the torus in 3-space. The height coordinate H defines a function on the torus. It has 4 critical points. H has a maximum resp. minimum at the top resp. bottom of the torus. H has saddle points at the top and bottom of the "hole" of the torus. At the saddle points the level set consists of two touching circles: the topology of the level set changes. The situation is topologically critical, and the criticality tells about the appearance of the "hole" in the torus. The critical points code for the homology. Outside these points the level set is a circle or two disjoint circles.

  3. One can deform the torus and also add handles to it to get topologies with a higher genus, and the reader is encouraged to see how the height function now codes for the homology and the appearance of "holes".
  4. This situation is finite-D and too simple to apply to the space of orbits of a Hamiltonian system. Now the point of the torus is replaced with a single orbit in phase space. This space is infinite-dimensional and Morse theory does not generalize as such. The work of Abouzaid and Blumberg changes the situation.
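The torus example above can be checked numerically. The sketch below places the torus "sideways" (the height is taken along an axis lying in the plane of the hole), so that the height function has the four Morse critical points: a maximum, two saddles, and a minimum. The radii R and r are illustrative values:

```python
import math

R, r = 2.0, 1.0  # illustrative torus radii, R > r

def height(u, v):
    # Height on a "sideways" torus: the y-coordinate of the standard
    # embedding ((R + r cos u) cos v, (R + r cos u) sin v, r sin u).
    return (R + r * math.cos(u)) * math.sin(v)

def grad(u, v, h=1e-6):
    # Numerical gradient via central differences.
    du = (height(u + h, v) - height(u - h, v)) / (2 * h)
    dv = (height(u, v + h) - height(u, v - h)) / (2 * h)
    return du, dv

# The four critical points and their critical values:
# top of the torus (max), top of the hole (saddle),
# bottom of the hole (saddle), bottom of the torus (min).
critical_points = {
    (0.0, math.pi / 2): R + r,
    (math.pi, math.pi / 2): R - r,
    (math.pi, -math.pi / 2): -(R - r),
    (0.0, -math.pi / 2): -(R + r),
}
for (u, v), value in critical_points.items():
    print(value, grad(u, v))  # gradient vanishes at each critical point
```

The two saddle values ±(R − r) are exactly the heights at which the level set degenerates to two touching circles, signaling the hole.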
I do not understand the technical aspects of the result, but it might have direct relevance for TGD.
  1. In the TGD Universe, space-time is a 4-surface in H=M4× CP2, in a loose sense an orbit of a 3-surface. General Coordinate Invariance (GCI) requires that the dynamics associates to a given 3-surface a highly unique 4-surface on which the 4-D general coordinate transformations act. This 4-surface is a preferred extremal of the action principle determining space-time surfaces in H and is analogous to a Bohr orbit. GCI gives Bohr orbitology as an exact part of quantum theory and also holography.

    These preferred extremals as 4-surfaces are analogous to the closed orbits in Hamiltonian systems about which Arnold speculated. In the TGD Universe, only these preferred extremals would be realized, which would make TGD an integrable theory. The theorem of Abouzaid and Blumberg allows one to prove Arnold's conjecture in homologies based on cyclic groups Zp. Maybe it could also find use in the TGD framework.

  2. WCW generalizes the loop space considered in Floer's approach. Very loosely, a loop or string is replaced by a 3-D surface, which by holography is more or less equivalent to a 4-surface. In TGD, just these minimal representatives for homology, as counterparts of the closed orbits, would matter.
  3. Symplectic structure and Hamiltonian are central notions also in TGD. Symplectic (or rather, contact) transformations assignable to the product δ M4+× CP2 of the light-cone boundary and CP2 act as the isometries of the infinite-D "world of classical worlds" (WCW) consisting of these preferred extremals or, more or less equivalently, the corresponding 3-surfaces. Hamiltonian flows as 1-parameter subgroups of isometries of WCW are symplectic flows in WCW, which has both a symplectic structure and a Kähler structure.
  4. The space-time surfaces are 4-D minimal surfaces in H with singularities analogous to the frames of soap films. Minimal surfaces are known to define representatives for homological equivalence classes of surfaces. This has inspired the conjecture that TGD could be seen as a topological/homological quantum theory in the sense that space-time surfaces serve as unique representatives of their homology classes.
  5. There is also a completely new element involved. TGD can also be seen as a number theoretic quantum theory. M8-H duality can be seen as a duality between a geometric vision, in which space-times are 4-surfaces in H, and a number theoretic vision, in which one considers 4-surfaces in octonionic complexified M8 determined by polynomials, with dynamics reducing to the condition that the normal space of the 4-surface is associative (quaternionic). M8 is analogous to momentum space, so that a generalization of the momentum-position duality of wave mechanics is in question.
A generalization of Floer's theory allowing one to generalize Arnold's conjecture is needed, and the approach of Abouzaid and Blumberg might make such a generalization possible.
  1. The preferred extremals would correspond to the critical points of an analog of a Morse function in the infinite-D context. In TGD the Kähler function K defining the Kähler geometry of WCW is the unique candidate for the analog of the Morse function.

    The space-time surfaces for which the exponent exp(-K) of the Kähler function is stationary (so that the vacuum functional is maximal) would define the preferred extremals. Also other space-time surfaces could be allowed, and it seems that the continuity of WCW requires this. However, the maxima, or perhaps extrema, would provide an excellent approximation, and the number theoretic vision would give an explicit realization of this approximation.

  2. A stronger condition would be that only the maxima are allowed. Since the WCW Kähler geometry has an infinite number of zero modes, which do not appear in the line element as coordinate differentials but only as parameters of the metric tensor, one expects an infinite number of maxima, and this might be enough.
  3. These maxima, or possibly also more general preferred extremals, would correspond by M8-H duality to roots of polynomials P in the complexified octonionic M8, so that a connection with number theory emerges. M8-H duality strongly suggests that exp(-K) is equal to the image of the discriminant D of P under the canonical identification I: ∑ xnpn → ∑ xnp-n mapping p-adic numbers to reals. The prime p would correspond to the largest ramified prime dividing D (see this and this).
  4. The number theoretic vision could apply only to the maxima/extrema of exp(-K) and give rise to what I call a hierarchy of p-adic physics as correlates of cognition. Everything would be discrete, and one could speak of a generalization of computationalism allowing the hierarchy of extensions of rationals instead of only the rationals, as in Turing's approach. The real-number based physics would also include the non-maxima via a perturbation theory involving a functional integral around the maxima. Here Kähler geometry allows one to get rid of ill-defined metric and Gaussian determinants.
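The canonical identification I: ∑ xnpn → ∑ xnp-n mentioned above is easy to illustrate for a finite p-adic expansion, i.e. a non-negative integer written in base p:

```python
def canonical_identification(n, p):
    """Map a non-negative integer, read as a finite p-adic expansion
    sum x_k p^k, to the real number sum x_k p^(-k)."""
    value, k = 0.0, 0
    while n > 0:
        n, digit = divmod(n, p)
        value += digit * p ** (-k)
        k += 1
    return value

# 6 = 0*2^0 + 1*2^1 + 1*2^2  ->  0*1 + 1*(1/2) + 1*(1/4) = 0.75
print(canonical_identification(6, 2))  # 0.75
```

Note that the map is continuous in the p-adic sense but not injective on the reals, which is why it can relate p-adically small (high-power-of-p) corrections to real numbers close to each other.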
For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Quantum friction in the flow of water through nanotube

The popular article "Quantum friction slows water flow" (see this) explains the work of Lydéric Bocquet and collaborators on quantum friction, published in Nature (see this).

In the experiments considered, water flows through very smooth carbon nanotubes. Water molecules have a diameter of 0.3 nm. The radius of the nanotube varies in the range [20,100] nm. A small friction has been measured. The surprising finding is that the friction increases with the radius of the nanotube, although large tubes are as smooth as small tubes.

In classical hydrodynamics the wall is just a wall. Now one must define this notion more precisely. The wall is made of mono-atomic graphene layers. The layers are smooth, which reduces drag, and water molecules are not adsorbed on the walls. Therefore the friction is very small but non-vanishing.

The reason is that the electrons of graphene interact with the polar water molecules, form bound states with them, and follow the flow. Catching up with the flow however takes some time, which causes resistance. In the Born-Oppenheimer approximation this is not taken into account, and electrons are assumed to adapt to molecular configurations instantaneously. For thin nanotubes the graphene layers are not so well-ordered due to geometric constraints, and the number of layers, and therefore also of co-moving electrons, is smaller. This reduces the friction effect.

Could TGD help to understand the findings?

  1. I wrote an article about quantum hydrodynamics in the TGD Universe some time ago (see this). The model for turbulence would involve the notion of dark matter as phases of ordinary matter with effective Planck constant heff= nh0>h even in macroscales. heff would characterize the "magnetic body" (MB) associated with the flow.
  2. The quantum scale L associated with the flow is proportional to heff and could characterize the MB. L could be larger than the system size but would be determined by it. One could say that the MB to some degree controls the ordinary matter, and its quantum coherence induces ordinary coherence at the level of the ordinary matter. Quantum effects at the level of the MB are suggested to be present even for ordinary hydrodynamic flow. The detailed mechanism is however not considered.
  3. The outcome is the prediction that kinematic viscosity is proportional to heff/m, where m is the mass of the unit of flow, now a water molecule.
  4. What could the quantum scale L be now? The scale of classically forced coherence would be the radius R of the pipe or, as the study suggests, the size scale of the system formed by the water flow and the ordered graphene layers. The scale L of quantum coherence associated with the MB could be larger. The larger the number of layers, the larger the size L of the MB.

    From L ∝ heff, one has ν ∝ ℏeff/m ∝ L. In conflict with classical intuition, the friction would be proportional to L and decrease as the pipe radius decreases. This conforms with the proposal if the magnetic body associated with the electron system is the boss.
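The proportionality chain above amounts to a trivial scaling law, which can be written out as a sketch; the proportionality constant k is a purely illustrative assumption:

```python
def kinematic_viscosity(L, m, k=1.0):
    """Sketch of nu ∝ heff/m with heff ∝ L (coherence length of the
    magnetic body, of the order of the pipe radius); k is an
    illustrative proportionality constant, not a fitted value."""
    heff = k * L
    return heff / m

# Doubling the coherence length (≈ pipe radius) doubles the friction,
# opposite to the classical expectation:
print(kinematic_viscosity(2.0, 1.0) / kinematic_viscosity(1.0, 1.0))  # 2.0
```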

See the article TGD and hydrodynamics or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, February 02, 2022

A finding challenging the standard theory of electrolytes

I received a link to an interesting article "Double-layer structure of the Pt(111) aqueous electrolyte interface" about the findings of Ojha et al (see this). The reader can also consult the popular representation of the finding (see this).

The experiments demonstrate that the physics of electrolytes is not completely understood.

  1. The Pt(111)-aqueous electrolyte interface is studied in a situation in which there is a circulation of H2 molecules, part of which decay to H ions and electrons at the interface of the first electrode.
  2. The electrons give rise to a current flowing to the second electrode, which also involves a Pt(111) interface. There is also a proton transfer between the electrodes. At the second interface there is a circulation of O2 molecules: part of them transform into water molecules at the interface.
  3. A double layer of positive and negative charges of some thickness, acting like a capacitor, is formed at the first interface. Two plates of this kind plus the electrolyte between them form an analog of a continually loaded battery, and an electron current runs when a wire connects the plates.
  4. The prediction of the standard theory is that when the salt concentration of the electrolyte is lowered, the current should eventually stop running at some critical salt concentration determined by the potential between the electrodes. There would be no free electrons anymore. This critical potential is called the potential of zero charge.
  5. The experimental findings produced a surprise. The potential of zero charge did not appear at the predicted salt ion concentration. A reduction of the ion concentration by a factor of 1/10 was needed to achieve this. It would seem that the actual concentration of ions is 10 times higher than expected! What are these strange, invisible, salt ions?
I have confessed to myself, and also publicly (see this and this), that I do not really understand how ionization takes place in electrolytes. The electrostatic energies in atomic scales associated with the electrolyte potential are quite too small to induce ionization. I might of course be incredibly stupid, but I am also incredibly stubborn and wanted to understand this in my own way.

The attempt to do something about this situation, and also the fact that "cold fusion" involves electrolysis, which no nuclear physicist in his right mind would associate with nuclear fusion, led to a totally crazy sounding proposal: electrolysis might involve some new physics predicted by TGD and making possible "cold fusion" (see this, this, and this). Electrolytes actually involve myriads of anomalous effects (see this, this, and this). Theoretical physicists of course do not take them seriously, since chemistry is thought to be an ancient, primitive precursor of modern physics.

Part of the ions of the electrolyte would be dark in the TGD sense, having effective Planck constant heff> h, so that their local interactions (describable using Feynman diagrams) with ordinary matter with heff= h would be very weak. Therefore these ions behave like dark matter, so the term "dark ion" is well-motivated. This does not however mean that galactic dark matter would be dark matter in this sense. The TGD based explanation for galactic dark matter could actually be in terms of the dark energy assignable to cosmic strings thickened to magnetic flux tubes carrying monopole flux (see this, this, and this).

  1. The presence of dark ions in water would explain the ionization in electrolytes. Water would be a very special substance in that the magnetic body of water carrying dark matter would give rise to the hundreds of thermodynamic anomalies characterizing water (see this).
  2. Biology is full of electrolytes, and biologically important ions are proposed to be dark ions (see this). As a matter of fact, I ended up with the TGD based notion of dark matter from the anomalies of biology and neuroscience. This notion emerged from the number theoretic vision about TGD much later (see this, this, and this). The Pollack effect would involve dark protons and would play a key role in biology. The realizations of genetic codons in terms of dark proton and dark photon triplets would also be central.
  3. "Cold fusion" is one application of the TGD view about dark matter (see this). The formation of dark proton sequences gives rise to dark nuclei, perhaps even heavier ones, for which the binding energies would be much smaller than for ordinary nuclei. The subsequent transformation to ordinary nuclei would liberate essentially the ordinary nuclear binding energy.
The notion of dark matter also leads to concrete ideas about what happens in electrolysis (see this). In the TGD framework, the finding of Ojha et al would suggest that 90 percent of the ions are dark in the electrolyte considered.

See the article TGD and condensed matter or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD. 

Tuesday, February 01, 2022

Do we live in a steady state Einsteinian Universe or expanding TGD Universe?

The title of the popular article "Universe is Not Expanding After All, Controversial Study Suggests" is quite provocative. The article tells about the findings of Lerner et al described in the article "UV surface brightness of galaxies from the local universe to z ≈ 5". These findings challenge the notion of cosmic expansion.

First some basic concepts. Luminosity P is the total power of the electromagnetic radiation emitted by the source. The power dP/dΩ× ΔΩ measured by an instrument spanning a solid angle ΔΩ weakens like 1/d2 with distance d. Bolometric surface brightness (SB) refers to the radiation power per unit area and solid angle at the source: SB= d2P/(dS dΩ)= (dP/dΩ)/S, that is, dP/dΩ divided by the area S of the source.

General relativity (GRT) based cosmology predicts that the surface brightness decreases as (1+z)-4 and therefore very rapidly. One factor of (1+z)-1 comes from the time dilation reducing the rate at which photons arrive. A second factor (1+z)-1 comes from the cosmic redshift of the photon energies. A factor (1+z)-2 comes from the fact that the apparent angular area spanned by the source is a factor (1+z)2 larger than it would be in a static universe, since the source was nearer at the moment of emission. If the cosmic redshift is caused by some other mechanism instead of expansion, so that one has a steady state cosmology, one has the much weaker (1+z)-1 dependence.

The findings of Lerner et al however suggest that the SB of identical spiral galaxies depends only weakly on the distance of the source. In the Einsteinian Universe this favors the steady state Universe, which is in conflict with the recent view of cosmology enjoying strong empirical support.

In the TGD Universe, galaxies are nodes of a network formed from cosmic strings thickened to flux tubes. The light from a galaxy travels from A to B only along the flux tubes connecting the source to the receiver. These flux tubes can stretch, but the amount of light arriving at B remains the same irrespective of distance. In Einsteinian space-time this kind of channeling would not happen and the intensity would decrease like 1/distance squared.

This mechanism would give rise to a compensating factor (1+z)2, so that the dependence of the SB on redshift would be (1+z)-2, while the SB in the static Einsteinian Universe would behave as (1+z)-1. For z ≈ 5, the TGD prediction for the SB is a factor of 1/6 smaller than for the static Universe, whereas the GRT prediction is a factor of 1/216 smaller. The TGD prediction is nearer to the steady state Universe than to the expanding Universe based on the Einsteinian view about space-time.
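The redshift factors can be checked numerically. Relative to the static (1+z)-1 behavior at z = 5, the (1+z)-2 and (1+z)-4 laws give factors of 1/6 and 1/216 respectively:

```python
def surface_brightness_factor(z, n):
    """SB dimming factor (1+z)**(-n): n=4 for the expanding GRT cosmology,
    n=2 for the TGD flux-tube channeling, n=1 for a static universe."""
    return (1 + z) ** (-n)

z = 5
static = surface_brightness_factor(z, 1)  # 1/6
tgd = surface_brightness_factor(z, 2)     # 1/36
grt = surface_brightness_factor(z, 4)     # 1/1296
print(tgd / static)  # 1/6  ≈ 0.1667
print(grt / static)  # 1/216 ≈ 0.00463
```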

The article TGD view of the engine powering jets from active galactic nuclei provides a model for how galactic jets would correspond to this kind of flux tube connections. See also the articles Cosmic string model for the formation of galaxies and stars and TGD view about quasars.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.