Monday, June 25, 2018

About Comorosan effect in the clustering of RNA II polymerase proteins

The time scales τ equal to 5, 10, and 20 seconds appear in the clustering of RNA II polymerase proteins and Mediator proteins (see this and the previous posting). What is intriguing is that the so-called Comorosan effect involves the time scale of 5 seconds and its multiples, claimed by Comorosan long ago to be universal time scales in biology. The origin of these time scales has remained more or less a mystery, although I have considered several TGD inspired explanations; the explanation discussed here is based on the notion of gravitational Planck constant (see this).

One can consider several starting point ideas, which need not be mutually exclusive.

  1. The time scales τ associated with RNA II polymerase and perhaps more general bio-catalytic systems could correspond to the durations of processes ending with a "big" state function reduction. In zero energy ontology (ZEO) there are two kinds of state function reductions. "Small" reductions - analogs of weak measurements - leave the passive boundary of the causal diamond (CD) unaffected and thus give rise to self as a generalized Zeno effect. The states at the active boundary change by a sequence of unitary time evolutions followed by measurements inducing also a time localization of the active boundary of CD. The size of CD increases and gives rise to a flow of time defined as the temporal distance between the tips of CD. "Big" reductions change the roles of the passive and active boundaries and mean the death of self. The process with duration τ could correspond to the lifetime of a self assignable to CD.

    Remark: It is not quite clear whether CD can disappear and be generated from vacuum. In principle this is possible, and the generation of mental images as sub-selves and sub-CDs could correspond to this kind of process.

  2. I have proposed (see this) that Josephson junctions are formed between reacting molecules in bio-catalysis. These could correspond to the shortened flux tubes. The difference EJ=ZeV of the Coulomb energy of a Cooper pair over the flux tube defining a Josephson junction between molecules would correspond to the Josephson frequency fJ= 2eV/heff. If the corresponding period τJ=1/fJ equals 5 seconds, heff should be rather large, since EJ is expected to be above thermal energy at physiological temperatures.

    Could Josephson radiation serve as a kind of synchronizing clock for the state function reductions so that its role would be analogous to that of EEG in the case of brain? A more plausible option is that Josephson radiation is a reaction to the presence of cyclotron radiation generated at the magnetic body (MB) and performing control actions at the biological body (BB) defined in a very general sense. In the case of brain, dark cyclotron radiation would generate EEG rhythms responsible for control via the genome, and dark generalized Josephson radiation modulated by nerve pulse patterns would mediate sensory input to the MB at EEG frequencies.

    A good guess is that the energy in question corresponds to the Josephson energy for a protein through the cell membrane acting as a Josephson junction and giving rise to an ionic channel or pump. This energy could be universal and therefore the same also in the molecular reactions. The flux tubes themselves have universal properties.

  3. The hypothesis ℏeff= ℏgr= GMm/(β0c) of Nottale for the value of gravitational Planck constant gives large ℏgr. Here v0 = β0c has dimensions of velocity. For dark cyclotron photons this gives large energy Ec∝ ℏgr and for dark Josephson photons small frequency fJ∝ 1/ℏgr. The Josephson time scale τJ would be proportional to the mass m of the charged particle and therefore to the mass number of the ion involved. The cyclotron time scale does not depend on the mass of the charged particle at all, and now sub-harmonics of τc are natural.

The time scales assignable to CD or the lifetime of the self in question could correspond to either the cyclotron or the Josephson time scale τ.
  1. If one requires that multiples of the time scale 5 seconds are possible, Josephson radiation is favoured, since the Josephson time scale is proportional to hgr ∝ m ∝ A, where A is the mass number of the ion.

    The problem is that the values A= 2,3,4,5 are not plausible for ordinary nuclei in living matter. Dark nuclei at magnetic flux tubes consisting of dark proton sequences could however have an arbitrary number of dark protons, and if dark nuclei appear at the flux tubes defining the Josephson junctions, one would have the desired hierarchy.

  2. Although cyclotron frequencies do not have sub-harmonics naturally, MB could adapt to the situation by changing the thickness of its flux tubes and, by flux conservation, the magnetic field strength to which fc is proportional. This would allow MB to produce cyclotron radiation with the same frequency as the Josephson radiation, so that MB and BB would be in resonant coupling.
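The tuning mechanism of item 2 can be made concrete: flux conservation means B × d^2 = constant for a flux tube of thickness d, and fc ∝ B. A minimal sketch under these assumptions (function and variable names are mine, not from the text):

```python
# How MB could tune f_c by changing flux-tube thickness (illustrative sketch).
def scaled_cyclotron_frequency(f_c0, d0, d):
    """Flux conservation: B * d^2 = const, and f_c is proportional to B,
    so f_c scales as (d0/d)^2 when the tube thickens from d0 to d."""
    return f_c0 * (d0 / d) ** 2

# Thickening the tube by sqrt(1500) ≈ 39 lowers the proton f_c from 300 Hz
# to the 0.2 Hz Josephson frequency of the 5 second time scale.
f_c0, d0 = 300.0, 1.0
d = (f_c0 / 0.2) ** 0.5 * d0
print(round(scaled_cyclotron_frequency(f_c0, d0, d), 3))   # 0.2
```

The thickness ratio needed for resonance with a given Josephson frequency is simply the square root of the frequency ratio.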

Consider now the model quantitatively.
  1. For ℏeff= ℏgr one has

    r = ℏgr/ℏ = GMDm/(ℏβ0c) = 4.5 × 10^14 × (m/mp) × (y/β0) .

    Here y=MD/ME denotes the ratio of the dark mass MD to the Earth mass ME. One can consider two favoured values for m, corresponding to the proton mass mp and the electron mass me.

  2. E= heff f gives the concrete relationship f = (E/eV) × 2.4 × 10^14 × (h/heff) Hz between frequencies and energies. This gives

    x = E/eV = 0.4 × r × (f/10^14 Hz) .

  3. If the cyclotron frequency fc=300 Hz of the proton for Bend=.2 Gauss corresponds to a bio-photon energy of x eV, one obtains the condition

    r = GMDmp/(ℏβ0c) ≈ .83 × 10^12 x .

    Note that the cyclotron energy does not depend on the mass of the charged particle. For the relation between the Josephson energy and the Josephson frequency one obtains the condition

    EJ/eV = 0.4 × .83 × 10^-2 × (m/mp) × (x fJ/Hz) , EJ = ZeV .

    One should not confuse the eV appearing in EJ=ZeV (V denoting voltage) with the unit of energy. Note also that the value of the Josephson energy does not depend on heff, so that there is no actual mass dependence involved.
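The numbers used in the list above can be cross-checked with a few lines of Python. This is only a sketch with variable names of my own choosing:

```python
# Cross-check of the quantitative relations above (a sketch; names are mine).
import math

e   = 1.602176634e-19   # elementary charge, C
m_p = 1.67262192e-27    # proton mass, kg
h   = 6.62607015e-34    # Planck constant, J s

def cyclotron_frequency(B_tesla, mass_kg, charge=e):
    """f_c = qB/(2*pi*m) for a non-relativistic charge in field B."""
    return charge * B_tesla / (2 * math.pi * mass_kg)

# B_end = 0.2 Gauss = 2e-5 Tesla gives the quoted f_c(p) ≈ 300 Hz.
B_end = 0.2e-4
f_c_p = cyclotron_frequency(B_end, m_p)
print(f"f_c(p) = {f_c_p:.0f} Hz")

# E = h f for an ordinary photon: 1 eV corresponds to ≈ 2.4e14 Hz,
# the conversion factor used in x = E/eV = 0.4 * r * (f/1e14 Hz).
f_per_eV = e / h
print(f"1 eV <-> {f_per_eV:.3e} Hz")

# Requiring that f_c(p) = 300 Hz maps to energy x eV for h_eff = r*h
# reproduces the condition r ≈ 0.83e12 * x.
r_per_x = f_per_eV / f_c_p
print(f"r/x = {r_per_x:.2e}")
```

The computed values (≈ 305 Hz, ≈ 2.42 × 10^14 Hz per eV, r/x ≈ 0.8 × 10^12) agree with the figures quoted in the text to the stated accuracy.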

The proton option would give a hierarchy of time scales as A-multiples of τ(p) and is therefore the more natural one, so it is natural to consider this case first.
  1. For fJ=.2 Hz corresponding to the Comorosan time scale τ= 5 seconds this would give ZeV= .66x meV. This is above the thermal energy Eth= T=27.5 meV at T=25 Celsius for x> 42. For an ordinary photon (heff= h) the proton cyclotron frequency fc(p) would correspond for x>42 to EUV energy E>42 eV and to wavelength λ<31 nm.

    The energy scale of the Josephson junctions formed by proteins through the cell membrane of thickness L(151)=10 nm is slightly above thermal energy, which suggests x≈ 120, allowing one to identify L(151)=10 nm as the length scale of the flux tube portion connecting the reactants. This would give E≈ 120 eV - the upper bound of the EUV range. For x=120 one would have GMEmp y/(ℏv0) ≈ 10^14, requiring β0/y≈ 2.2. The earlier estimates (see this) give for the mass MD the estimate y∼ 2× 10^-4, giving β0∼ 4.4× 10^-4. This is rather near to β0= 2^-11∼ me/mp obtained also in the model for the orbits of inner planets as Bohr orbits.

  2. For an ion with mass number A this would predict τA= A× τ(p)= A× 5 seconds, so that also multiples of the 5 second time scale would appear. These multiples were indeed found by Comorosan and appear also in the case of RNA II polymerase.

  3. For the proton one would thus have two biological extremes - the EUV energy scale associated with cyclotron radiation and the thermal energy scale assignable to Josephson radiation. Both would be assignable to dark photons with heff=hgr and with very long wavelength. Dark and ordinary photons of both kinds would be able to transform to each other, meaning a coupling between the very long length scales assignable to MB and the short wavelengths/time scales assignable to BB.

    The energy scale of dark Josephson photons would be that assignable to junctions of length 10 nm, with long wavelengths and energies slightly above Eth at physiological temperature. The EUV energy scale would be 120 eV for the dark cyclotron photons of highest energy.

    For the lower cyclotron energies suggested by the presence of bio-photons in the range containing visible and UV light, obtained for Bend below .2 Gauss, the Josephson photons would have energies ≤ Eth. That the possible values of Bend are below the nominal value Bend=.2 Gauss deduced from the experiments of Blackman does not conform with the earlier ad hoc assumption that Bend represents a lower bound. This does not however change the earlier conclusions.

    Could the 120 eV energy scale have some physical meaning in the TGD framework? What is interesting is that dark nuclear physics with the dark nucleon Compton length scaled up to atomic length scale, corresponding to the p-adic prime k=137, has a dark nuclear binding energy scale of .1 keV. Dark DNA having dark proton triplets as codons could correspond to k=137 and would have been realized in water even during the prebiotic period (see this). This energy is not far from the 120 eV estimate obtained above.

    It is indeed natural to assume that Bend corresponds to an upper bound, since the values of the magnetic field are expected to weaken farther from Earth's surface: the weakening could correspond to a thickening of flux tubes reducing the field intensity by flux conservation. The model for hearing (see this) requires cyclotron frequencies considerably above the proton's cyclotron frequency in Bend=.2 Gauss. This requires that audible frequencies are mapped to the electron's cyclotron frequency having the upper bound fc(e) = (mp/me) fc(p) ≈ 6× 10^5 Hz. This frequency is indeed above the range of audible frequencies even for bats.
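The Josephson-side figures of item 1 follow from the relation EJ/eV = 0.4 × .83 × 10^-2 × (m/mp) × (x fJ/Hz) derived above. A minimal sketch reproducing them (function and variable names are mine):

```python
# Josephson energy from the relation E_J/eV = 0.4*0.83e-2*(m/m_p)*x*f_J derived
# in the text; returns E_J in meV (a sketch, names are mine).
def josephson_energy_meV(x, f_J_hz, A=1):
    """E_J for cyclotron-energy parameter x, frequency f_J, mass number A = m/m_p."""
    return 0.4 * 0.83e-2 * A * x * f_J_hz * 1e3

# f_J = 0.2 Hz (the 5 second Comorosan time scale) gives E_J = 0.66*x meV.
print(josephson_energy_meV(x=1, f_J_hz=0.2))      # ≈ 0.664

# E_J above the thermal energy 27.5 meV requires x > ~42, as stated above.
x_min = 27.5 / josephson_energy_meV(x=1, f_J_hz=0.2)
print(round(x_min))                                # 41

# Ions with mass number A give A-multiples of the 5 second time scale.
print([A * 5.0 for A in (1, 2, 4)])                # [5.0, 10.0, 20.0]
```

The threshold comes out at x ≈ 41-42, consistent with the x > 42 condition quoted in item 1.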

For the electron one has hgr(e)= (me/mp) hgr(p) ≈ 5.3 × 10^-4 hgr(p), with ℏgr(p)/ℏ = 4.5× 10^14 × (y/β0). Since the Josephson energy remains invariant, the Josephson time scales up from τ(p)=5 seconds to τ(e)=(me/mp) τ(p)≈ 2.5 milliseconds, which is the time scale assignable to nerve pulses (see this).
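The electron scalings follow directly from the electron/proton mass ratio; a minimal check under the stated scaling laws (variable names are mine):

```python
# Electron scalings from the proton values via the mass ratio (a sketch).
m_e_over_m_p = 5.446e-4        # electron/proton mass ratio

# Josephson time scale: E_J is invariant and f_J ∝ 1/h_gr ∝ 1/m,
# so tau(e) = (m_e/m_p) * tau(p) - the millisecond nerve pulse scale.
tau_p = 5.0                    # s, Comorosan time scale for the proton
tau_e = m_e_over_m_p * tau_p
print(f"tau(e) = {tau_e * 1e3:.1f} ms")

# The cyclotron frequency scales oppositely, f_c ∝ 1/m, giving the
# upper bound f_c(e) quoted above.
f_c_p = 300.0                  # Hz, proton in B_end = 0.2 Gauss
f_c_e = f_c_p / m_e_over_m_p
print(f"f_c(e) = {f_c_e:.2e} Hz")
```

The raw mass ratio gives τ(e) ≈ 2.7 ms and fc(e) ≈ 5.5 × 10^5 Hz, i.e. the millisecond and the ≈ 6 × 10^5 Hz scales of the text to within rounding.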

To sum up, the model suggests the idealization of flux tubes as a kind of universal Josephson junction. The model is consistent with the bio-photon hypothesis. The constraints on hgr= GMDm/v0 are consistent with the earlier views and allow one to assign the Comorosan time scale of 5 seconds to the proton and the nerve pulse time scale to the electron as Josephson time scales. This inspires the question whether the dynamics of bio-catalysis and nerve pulse generation could be seen as scaled variants of each other at the quantum level. This would not be surprising if MB controls the dynamics. The earlier assumption that Bend=0.2 Gauss is the minimal value of Bend must be replaced with the assumption that it is the maximal value.

See the article Clustering of RNA polymerase molecules and Comorosan effect or the chapter Quantum Criticality and dark matter.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Why do RNA polymerase molecules cluster?

I received a link to a highly interesting popular article (see this) telling about the work of Ibrahim Cisse at MIT and colleagues on the clustering of proteins in the transcription of RNA. Similar clustering has been observed already earlier and interpreted as a phase separation (see this). The experimenters do not propose this interpretation now; they say that it is quite possible but that they cannot prove it.

I have already earlier discussed the coalescence of proteins into droplets as this kind of process in the TGD framework. The basic TGD based idea is that proteins - and biomolecules in general - are connected by flux tubes characterized by the value of Planck constant heff=n× h0 for the dark particles at the flux tube. The higher the value of n, the larger the energy of a given state. For instance, the binding energies of atoms decrease like 1/n^2. Therefore the formation of the molecular cluster liberates energy usable as metabolic energy.

Remark: h0 is the minimal value of heff. The best guess is that ordinary Planck constant equals to h=6h0 (see this and this).
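The liberation mechanism can be illustrated with the hydrogen-like 1/n^2 scaling mentioned above. In the toy sketch below the energy scale E0 and the n values are purely illustrative choices of mine, not figures from the text:

```python
# Toy illustration of metabolic energy liberation when h_eff = n*h0 decreases.
# Atomic binding energies scale like 1/n^2, so smaller n means deeper binding.
E0 = 13.6  # eV, illustrative hydrogen-like scale at n = n_ordinary

def binding_energy(n, n_ordinary=6):
    """Binding energy for h_eff = n*h0, normalized so n = n_ordinary gives E0."""
    return E0 * (n_ordinary / n) ** 2

# A flux-tube contraction n = 12 -> 6 liberates the binding-energy difference.
liberated = binding_energy(6) - binding_energy(12)
print(f"liberated ≈ {liberated:.1f} eV")   # 13.6 - 3.4 = 10.2 eV
```

Halving n quadruples the binding energy, so the contraction releases three quarters of the final binding energy as usable metabolic energy in this toy model.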

TGD view about the findings

Gene control switches - such as RNA II polymerases in the DNA transcription to RNA - are found to form clusters called super-enhancers. Also so-called Mediator proteins form clusters. In both cases the number of members is in the range 200-400. The clusters are stable although individual molecules spend only a very brief time in them. Clusters have an average lifetime of 5.1±.4 seconds.

Why should the clustering take place? Why is a large number of these proteins present although a single one would be enough in the standard picture? In the TGD framework one can imagine at least the following reasons.

  1. One explanation could relate to the non-determinism of state function reduction. The transcription and its initiation should be a deterministic process at the level of a single gene. Suppose that the initiation of transcription is one particular outcome of state function reduction. If there is only a single RNA II polymerase, which can make only a single trial, the chances to initiate the transcription are low. This would be the case if the molecule provides the metabolic energy to initiate the process and becomes too "tired" to try again. In nerve pulse transmission there is an analogous situation: after the generation of a nerve pulse the neuron has a dead time period. As a matter of fact, it turns out that the analogy could be much deeper.

    How to achieve the initiation with certainty in this kind of situation? Suppose that the other outcomes do not affect the situation appreciably. If one particular RNA polymerase fails to initiate the transcription, the others can try. If the number of RNA polymerase molecules is large enough, the transcription is bound to begin eventually! This is much like in fairy tales about the princess and the suitors trying to kill the dragon to get the hand of the princess. Eventually the penniless swineherd enters the stage.

  2. If the initiation of transcription requires a large amount of metabolic energy, then only some minimal number N of RNA II polymerase molecules might be able to provide it collectively. The collective formed by N molecules could correspond to the formation of a magnetic body with a large value of heff=n×h0. The molecules would be connected by magnetic flux tubes.

  3. If the rate for the occurrence is determined by an amplitude which is a superposition of amplitudes assignable to the individual proteins, then the rate is proportional to N^2, N the number of RNA polymerase molecules.

    The process in the case of a cluster is indeed reported to be surprisingly fast as compared to the expectations - something like 20 seconds. The earlier studies have suggested that a single RNA polymerase stays at the DNA for minutes to hours. This would be a possible mechanism for speeding up bio-catalysis, besides the mechanism allowing the reactants to find each other by a reduction of heff/h= n for the bonds connecting them, with the associated liberation of metabolic energy allowing to kick the reactants over the potential wall hindering the reaction.
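The N^2 dependence is just constructive interference of N equal single-molecule amplitudes; a minimal sketch of the counting (function names are mine):

```python
# Coherent vs incoherent rates for N equal single-molecule amplitudes (sketch).
def coherent_rate(N, a=1.0):
    """Rate ∝ |sum of N equal amplitudes|^2 = N^2 |a|^2."""
    return abs(N * a) ** 2

def incoherent_rate(N, a=1.0):
    """Independent molecules: the rates add, giving only N |a|^2."""
    return N * abs(a) ** 2

N = 300  # cluster size in the observed 200-400 range
print(coherent_rate(N) / incoherent_rate(N))   # N-fold speed-up: 300.0
```

A coherent cluster of 300 molecules would thus act 300 times faster than the same molecules acting independently, which is the kind of speed-up the observations suggest.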

Concerning the situation before clustering there are two alternative options, both relying on the model of the liquid phase explaining Maxwell's rule by assuming the presence of flux tube bonds in liquid, and on the model of water explaining its numerous anomalies in terms of flux tubes which can be also dark (see this).
  1. Option I: The molecules could be in a phase analogous to a vapour phase with very few flux tube bonds between them. The phase transition would create a liquid phase as the flux tube loops assignable to the molecules reconnect to form flux tube pairs connecting the molecules to a tensor network giving rise to a quantum liquid phase. The larger the value of n, the longer the bonds between the molecules would be.

  2. Option II: The molecules are in the initial state connected by flux tubes and form a kind of liquid phase, and the clustering reduces the value of n and therefore the lengths of the flux tubes. This would liberate dark energy as metabolic energy going to the initiation of the transcription. One could indeed argue that connectedness in the initial state with a large enough value of n is necessary, since the protein cluster must have a high enough "IQ" to perform intelligent intentional actions.

Protein blobs are said to be drawn together by the "floppy" bits (pieces) of intrinsically disordered proteins. What could this mean in the proposed picture? Disorder suggests an absence of correlations between the building bricks of the floppy parts of the proteins.
  1. Could floppiness correspond to the low string tension assignable to long flux loops with large heff/h=n assignable to the building bricks of the "floppy" pieces? Could reconnection of these loops give rise to pairs of flux tubes connecting the proteins in the transition to the liquid phase? Floppiness could also make it possible to scan the environment by flux loops for the flux loops of other molecules and, in the case of a hit (cyclotron resonance), induce reconnection.

  2. In spite of floppiness in this sense, one could have quantum correlations between the internal quantum numbers of the building bricks of the floppy pieces. This would also increase the value of n serving as molecular IQ and provide the molecule with a higher metabolic energy liberated in the catalysis.

What about the interpretation of the time scales 5, 10, and 20 seconds? What is intriguing is that the so-called Comorosan effect involves the time scale of 5 seconds and its multiples, claimed by Comorosan long ago to be universal time scales in bio-catalysis. This question will be considered in a separate posting.

See the article Clustering of RNA polymerase molecules and Comorosan effect or the chapter Quantum Criticality and dark matter.

For a summary of earlier postings see Latest progress in TGD.


Friday, June 22, 2018

"Invisible visible" matter identified

That 30 per cent of visible matter has remained invisible is a not so well-known problem related to dark matter. This matter is now identified and assigned to the network of filaments in intergalactic space. The reader can consult the popular article "Researchers find last of universe's missing ordinary matter" (see this). The article "Observations of the missing baryons in the warm-hot intergalactic medium" by Nicastro et al (see this) describes the finding at a technical level. Note that warm-hot refers to the temperature range 10^5-10^6 K.

In the TGD framework one can interpret the filament network as a signature of the flux tube/cosmic string network to which one can assign dark matter and dark energy. The interpretation could be that the "invisible visible" matter emerges from the network of cosmic strings as part of the dark energy is transformed to ordinary matter. This is the TGD variant of the inflationary scenario, with inflaton vacuum energy replaced with cosmic strings/flux tubes carrying dark energy and matter.

This inspires more detailed speculations about pre-stellar physics according to TGD. The questions are the following. What preceded the formation of stellar cores? What heated the matter to the needed temperatures? The TGD inspired proposal is that it was dark nuclear physics (see the article Cold fusion, low energy nuclear reactions, or dark nuclear synthesis?). Dark nuclei with heff=n× h0 were formed first, and these decayed to ordinary nuclei or to dark nuclei with a smaller value of heff and heated the matter so that ordinary nuclear fusion became possible.

Remark: h0 is the minimal value of heff. The best guess is that ordinary Planck constant equals to h=6h0 (see this and this).

  1. The temperature of the recently detected missing baryonic matter is around 10^6 K, roughly 1/10th of the temperature 10^7 K at the solar core. This serves as a valuable guideline.

    I already earlier realized that the temperature at the solar core, where fusion occurs, happens to be the same as the temperature estimated from the binding energy of dark nuclei identified as dark proton sequences with dark nucleon size equal to electron size. The estimate is obtained by scaling down the typical nuclear binding energy for low mass nuclei by the ratio 2^-11 of the sizes of ordinary and dark nuclei (the electron/proton mass ratio; a dark proton has the same size as an ordinary electron). This led to the idea that nuclear fusion in the solar core creates first dark nuclei, which then decay to ordinary nuclei and liberate essentially all of the nuclear binding energy. After that ordinary nuclear fusion at the resulting high enough temperature would take the lead.

  2. Dark nuclear strings can correspond to several values of heff=n× h0 with the size scale scaled up by n. p-Adic length scales L(k)= 2^((k-151)/2) L(151), L(151)≈ 10 nm, define favoured values of n, in good approximation proportional to 2^(k/2). The binding energy scale for dark nuclei is proportional to 1/n (to the inverse of the p-adic length scale). Could 10^6 K correspond to the p-adic length scale k=137 - the atomic length scale of 1 Angstrom?

    Could dark cold fusion start at this temperature and first give rise to "pre-nuclear physics" generating dark nuclei as dark proton sequences with dark nuclear binding energy about .1 keV, with these dark nuclei decaying to k=127 dark nuclei with binding energy about 1 keV, leading to a heating of the matter and eventually to cold fusion at k=127 and after that to ordinary fusion? Also the values intermediate in the range [137,127] can be considered as intermediate steps. Note that also k=131 is prime.

  3. Interestingly, the temperature at the solar corona is about 1 million degrees, by a factor 140-150 hotter than the solar surface below it. The heating of the solar corona has remained a mystery, and the obvious question is whether dark nuclear fusion giving rise to "pre-nuclear" fusion for k=137 generates the energy needed.

  4. If this picture makes sense, the standard view about the nuclear history of astrophysical objects - stating that the nuclei in stars come from supernovas - would change radically. Even planetary cores might be formed by a sequence of dark nuclear fusions ending with ordinary fusion, and the iron in the Earth's core could be an outcome of dark nuclear fusion. The temperature at Earth's core is about 6× 10^3 K. This corresponds to k=151 in reasonable approximation.

    Remark: What is amusing is that the earlier fractal analogy of Earth as a cell would make sense in that k=151 corresponds to the p-adic length scale of the cell membrane.

    I have also considered the possibility that dark nuclear fusion could have provided metabolic energy for prebiotic lifeforms in the underground oceans of Earth and that life came to the surface in the Cambrian explosion (see this). The proposal would solve the hen-and-egg question of which came first: metabolism or genetic code, since dark proton sequences provide a realization of the genetic code (see this).

  5. One can imagine also a longer sequence of p-adic length scales, starting at lower temperatures and longer p-adic length scales characterized by integer k, for which prime values are the primary candidates. k=139 corresponding to T=.5× 10^6 K is one possibility. For k= 149 and k=151 (the thicknesses of the lipid layer of the cell membrane and of the cell membrane) one would have T ≈ 2× 10^4 K and T ≈ 10^4 K - roughly the temperature at the surface of the Sun - and the biologically important energies E= 2 eV of red light and E=1 eV of infrared light (quite recently it was found that also IR light can serve as metabolic energy in photosynthesis).

    Could the dark nuclear fusion process occur at the surface of the Sun? Could one imagine that a sequence of dark phase transitions proceeding in opposite directions as k=137 ← 139 ← 149 ← 151 → 149 → 139 → 137 → 131 → 127 between dark nuclear physics corresponding to p-adic length scales L(k) takes place as one proceeds from the surface of the Sun upwards to the solar corona and downwards to the core? Of course, also other values of k can be considered: the k:s in this sequence are primes. The ends of the warm-hot temperature range 10^5-10^6 K correspond roughly to k=143 = 13× 11 and k=137.
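The correspondences between temperatures and energy scales used in this list all amount to the conversion E = k_B T. A quick Python check of the quoted figures (a sketch; the function name is mine):

```python
# Thermal energy <-> temperature conversions used above (a sketch).
k_B = 8.617e-5  # Boltzmann constant, eV/K

def thermal_energy_eV(T_kelvin):
    """Thermal energy E = k_B * T in eV."""
    return k_B * T_kelvin

# 1e6 K <-> ~0.1 keV, the k=137 dark nuclear binding scale.
print(thermal_energy_eV(1e6))      # ≈ 86 eV
# 1e7 K (solar core) <-> ~1 keV, the k=127 scale.
print(thermal_energy_eV(1e7))      # ≈ 862 eV
# Surface-of-Sun scales: 2e4 K and 1e4 K give roughly the biologically
# important ~2 eV (red) and ~1 eV (infrared) photon energies.
print(thermal_energy_eV(2e4))      # ≈ 1.7 eV
print(thermal_energy_eV(1e4))      # ≈ 0.86 eV
```

The conversions land within the order-of-magnitude accuracy at which the correspondences above are stated.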

For the TGD view about "cold fusion" and for comments about its possible role in star formation see the article Cold fusion, low energy nuclear reactions, or dark nuclear synthesis?.

For a summary of earlier postings see Latest progress in TGD.


Wednesday, June 20, 2018

Interesting result about binocular rivalry

Science Alert reported an interesting result from neuroscience. The title of the popular article was A New Brain Experiment Just Got Closer to The Origins of Consciousness. The original article Human single neuron activity precedes emergence of conscious perception is published in Nature.

The researchers at Tel Aviv University studied people suffering from epilepsy: the epilepsy as such is however not relevant for the research interests. During more than 20 sessions the volunteers stared at a pair of images, each image located in front of one eye. Because each eye saw only one image, the brain couldn't fuse the images into a single picture. Instead, the brain chose one image to deal with at a time. This process is known as binocular rivalry. The article claims that this process allows one to separate visual stimulation and conscious seeing from each other. I would however argue that the outcome of the experiment relates to binocular rivalry rather than to the generation of the conscious percept itself.

The finding was that the medial frontal lobe becomes active two seconds before the subject sees the picture. A second zone, in the medial temporal lobe, becomes active a second later (that is, 1 second before the conscious visual percept). These time scales are rather long as compared to the time scale of 0.08-.1 seconds associated with sensory mental images - one might call this time scale the duration of a sensory chronon.

As the article explains, these experiments differ from the usual experiments studying the behavior of medial temporal neurons in response to various modifications of the sensory input (flashing a different image to the other eye; backward masking, in which a briefly presented image is suppressed by the immediate presentation of a mask image; and the attentional blink, in which the second of two target stimuli appearing in close succession is often not perceived). Also these experiments study what happens at the brain level as the visual percept changes, but the change is now induced externally rather than internally as in binocular rivalry. The response of MTL neurons started about .2-.3 seconds after the external manipulation. There was no activation before the change.

If I understood correctly, the interpretation of the finding was based on the computational paradigm. According to this interpretation it takes about 2 seconds to compute the new visual percept once the decision about the new percept is made. One might however argue that this computation should take 2 seconds also in the case of an externally induced change of percept. Actually the time for the emergence of the percept is then .2-.3 seconds, and there is no activation before the change.

In the TGD framework this longer time scale would naturally correspond to a higher level in the self hierarchy. In the self hierarchy mental images correspond to sub-selves, and self is a sub-self of a self at the higher level of the hierarchy. Each level is characterized by a time scale, and the higher the level in the hierarchy, the longer the time scale.

  1. Could a higher level self direct its attention to the alternative percepts in binocular rivalry in a more or less random manner? Could this re-directing of attention be seen as a motor action at some level of the self hierarchy? This is the case when I turn my gaze from one object of the perceptive field to another.

  2. In the TGD picture based on zero energy ontology (ZEO), motor actions are identified as sensory percepts in the opposite time direction: a signal is sent to the geometric past and initiates the neural processing leading to the motor action. This explains Libet's finding that a motor action is preceded by neural activity beginning a fraction of a second before the conscious decision about the motor action. Could the situation be the same now, except that the time scale would be longer? The longer time scale would suggest that the decision maker is not "me", characterized by a fraction-of-a-second time scale, but some higher level in the hierarchy of selves associated with my biological and magnetic body.

For a summary of earlier postings see Latest progress in TGD.


Monday, June 18, 2018

Multiple personality disorder as a Golden Road to a theory of consciousness?

Sabine gave an interesting link to a popular article in Scientific American: Could multiple personality disorder explain life, the universe and everything. At least the title is media sexy.

Side remark: Times are changing: nowadays even physicists allow for themselves the luxury of talking about things like life and consciousness.

The article discussed two basic problems encountered in theories of consciousness.

Hard problem

The first one is the hard problem: the understanding of qualia. In the standard materialistic approach - physicalism as it is called - there is no hope of this, since physicalism cannot distinguish between a zombie and a conscious entity. Also the dualistic approach, requiring consistency with deterministic laws of physics, leads to materialism and an impasse. One should understand the physical correlates of qualia, but this is impossible. There are of course many other problems: in the world of the physicalist there are no ethics and morals.

The materialism-based Turing test for consciousness, relying on behaviorism describing us as deterministic automatons, is a bad joke: the ability of a programmer to cheat someone into believing that a computer is conscious does not make the computer conscious.

Physics of course does not require materialism, so that "physicalism" contains a lot of political ideology and even more: a lack of thinking about the basics, labelled as "philosophizing". Here the basic problem of quantum measurement theory - the non-determinism of state function reduction is inconsistent with the determinism of unitary time evolution - could serve as a guideline, but the materialistic vision has made the problem a taboo. The Copenhagen interpretation says that the wave function does not represent anything real but only the knowledge of the observer. No ontology, just epistemology. Period. Also the other interpretations do their best to expel the conscious observer from the physical world. These conscious observers are indeed disgusting creatures since they seem to have a free will not describable in the Newtonian world. What one cannot describe, does not exist.

Combination problem

The article describes an alternative approach to consciousness called constitutive panpsychism. In this view every physical entity is conscious - even elementary particles have a rudimentary consciousness. Each physical entity has its own view about the Universe. Now one encounters the combination problem: how do these different views integrate to larger views?

This problem closely relates also to the problem of how to understand the idea that the universe is a single conscious entity - Self or God - when it is obvious that it decomposes to separate conscious entities.

Could dissociative identity disorder help?

The experiences of DID patients could help here. The article begins by telling about multiple personality disorder or DID (dissociative identity disorder). The same person has very different kinds of personalities and is also conscious about the existence of the "others". In the example considered, one of the "others" was blind!

A single personality seems to dominate at a time. This kind of splitting could occur in situations in which the person meets a psychically insurmountable situation. Continually fighting authoritative parents might induce DID. The child believes that both parents are always right and simultaneously sees that they cannot both be right. This could lead to a dissociation as a manner to survive psychically. One self is on the side of the father and the other on the side of the mother.

The idea proposed in the article is that the entire Universe suffers from DID. This however does not solve the problem of how the Universe decomposes into separate conscious entities while being a single conscious entity at the same time.

TGD view about consciousness and DID

TGD inspired theory of quantum consciousness could be loosely classified as a quantum version of constitutive panpsychism. The important distinction is that not every system can be conscious. A system must de-entangle from its environment in order to have a self identity: this corresponds to the fundamental division into "me" and the environment - separation, as eastern thinkers call it. The purpose of meditative practices is to get rid of this separation and experience cosmic consciousness.

Unitary time evolution however immediately entangles an unentangled system with its environment, so that something else is needed. In standard quantum measurement theory, repeated quantum measurements of the same observable would keep the system un-entangled. Nothing would happen to the conscious system. Conscious entities however experience the flow of time and sensory input inducing thoughts and motor actions.

The TGD inspired solution to the problem comes from zero energy ontology (ZEO).

  1. Quantum states are analogous to events rather than time=constant snapshots: pairs of initial and final states having opposite conserved net quantum numbers - this guarantees that the conservation laws are satisfied. A zero energy state can be seen as a quantum superposition of deterministic time evolutions connecting the initial and final states. The new element is that the time evolutions in question need not be parts of an infinitely long time evolution as in positive energy ontology. Zero energy states can be created from vacuum. A concrete realization is in terms of a scale hierarchy of causal diamonds, which I will not discuss now.

  2. During the generalized Zeno effect one member - the passive one - is unaffected and de-entangled from the environment, and represents the permanent part of selfness. The other member - the active one - moves towards the future, entangles during a sequence of unitary evolutions followed by state function reductions, and changes. Every state function reduction locates this member with respect to geometric time. Geometric time grows and this is experienced as subjective time, basically identifiable as the sequence of state function reductions. This view explains the difference between geometric time and experienced time and also resolves the problem caused by the conflict between the determinism of unitary evolution and state function reductions. There are two times and two causalities. The changes of the active member give rise to the various components of experience.

  3. An important subtlety would be that consciousness is not a property of a physical system: the conscious experience resides in the state function reduction, in which the quantum state in the sense of ZEO is replaced with a new one. Hence the "-ness" word "consciousness" is misleading: "nous" or the Finnish word "tajunta" would be more appropriate. Conscious existence is conscious recreation in the 4-D sense: a quantum superposition of 4-D deterministic time evolutions is recreated. I think that here TGD differs dramatically from competing theories of consciousness.

In the ZEO framework the combination problem ceases to be a problem. A self hierarchy is predicted. A self has sub-selves, which it experiences as mental images, and each self is a sub-self of some higher-level self. Selves can entangle to form a larger self: this solves the combination problem. For instance, the integration of the visual fields of the right and left eye involves entanglement. Also the left and right brain entangle. Here it might be better to talk about the magnetic bodies of the left and right brain.

What about DID? One can imagine several explanations but they all rely on the same basic idea. Here is one explanation.

  1. A self could tend to decompose into two separate selves due to de-entanglement for long periods of time. Only a single self would use the biological body at a given time. Both selves would however gradually become aware of the fact that someone else lives in the same house. "Someone has eaten from my plate!", said the little bear!

    For instance, a decomposition of the magnetic bodies of the brain hemispheres into two separate parts could take place. These two persons would use the same biological body, and if the biological body is indeed controlled by the magnetic body, dramatic changes in the functioning of the biological body, even inducing blindness, might take place as the owner changes.

  2. The two separate selves could normally correspond to two sub-selves, say the magnetic bodies assignable to the left and right hemispheres. In the normal situation these magnetic bodies would entangle most of the time to form a single self. In dissociative identity disorder this would not take place.

    Consider the example in which endlessly fighting parents induce DID: the magnetic body of the child would contain the mental images (sub-selves) of the parents continuing their fight. The situation would be so intolerable that a divorce of the mental images leading to DID would be unavoidable!

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Sunday, June 17, 2018

Expanding Earth hypothesis, Platonic solids, and plate tectonics as symplectic flow

An FB discussion inspired by the recent findings of NASA suggesting the presence of life under the surface of Mars raised the question of whether the TGD based Expanding Earth model is consistent with plate tectonics and with the motivating claim of Adams that the continents would fit together nicely to cover the entire surface of the Earth if its radius were one half of the recent radius. The outcome was what one might call Platonic plate tectonics.

  1. The expansion would have started from, or generated, a decomposition of the Earth's crust into an icosahedral lattice with 20 faces, which contain analogs of what are known as cratons and have a total area equal to that of the Earth before the expansion. The predicted recent land area fraction is 25 per cent, which is 4.1 percentage points too low. The simplest explanation is that the expansion still continues but very slowly.

  2. The craton like objects (hereafter cratons) would move like 2-D rigid bodies and would fuse to form continents.

  3. The memory of the initial state should be preserved: otherwise there would exist no simple manner to reproduce the observation of Adams by simple motions of continents combined with a downwards scaling. This might be achieved if the cratons are connected by flux tubes to form a network. For maximal connectivity a given triangular face is connected by a flux tube to all 3 nearest neighbour faces. Minimal connectivity corresponds to an essentially unique dodecahedral Hamiltonian cycle connecting the cratons into a single closed string. At least for maximal connectivity this memory would make it possible to understand the claim of Adams stating that the reduction of the radius by a factor 1/2 plus simple motions of the continents allow the continents to be transformed into a single continent covering the entire surface of the scaled down Earth.

  4. The dynamics in scales longer than that of a craton would naturally be a generalization of incompressible liquid flow to an area preserving dynamics defined by a symplectic flow. The assumption that the Hamiltonian satisfies the Laplace equation and is thus a real or imaginary part of an analytic function implies an additional symmetry: the area preserving flow has a dual.
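The land-area arithmetic behind item 1 can be checked in a few lines. This is only a sketch of the counting: the present-day land fraction of about 29.1 per cent is my assumed input, not a number taken from the text above.

```python
# Sketch of the expanding-Earth land-area arithmetic.
# Assumption: the present land fraction is about 29.1 per cent.
r_ratio = 0.5                        # claimed ancient radius / present radius
predicted = 100 * r_ratio ** 2       # ancient surface area as a fraction of the present one
observed = 29.1                      # approximate present-day land fraction, per cent
shortfall = observed - predicted     # how much the prediction undershoots
print(predicted)                     # 25.0
print(round(shortfall, 1))           # 4.1
```

Since surface area scales as the square of the radius, halving the radius gives exactly one quarter of the present area, i.e. the 25 per cent prediction quoted in the text.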

For details and links see the article Expanding Earth hypothesis, Platonic solids, and plate tectonics as a symplectic flow or the longer article Expanding Earth Model and Pre-Cambrian Evolution of Continents, Climate, and Life.


Monday, June 11, 2018

New results in the geometric model of bio-harmony

Some years ago I developed a model of music harmony. As a surprising side product a model of genetic code emerged, predicting correctly the number of codons coding for a given amino-acid. Since music expresses and creates emotions, one can ask whether genes could have "moods" characterized by these bio-harmonies. The fundamental realization could be in terms of dark photon triplets replacing the phonon triplets of ordinary music.

  1. The model relies on the geometries of the icosahedron and tetrahedron and a representation of the 12-note scale as a so-called Hamiltonian cycle on the icosahedron, going through all 12 of its vertices. The 20 faces correspond to the allowed 3-chords for the harmony defined by a given Hamiltonian cycle. This brings to mind the 20 amino-acids (AAs).

  2. One has three basic types of harmonies depending on whether the symmetry group of the icosahedron leaving the shape of the Hamiltonian cycle invariant is Z6, Z4 or Z2. For Z2 there are two options: Z2,rot is generated by a rotation of π and Z2,refl by a reflection with respect to a median of an equilateral triangle.

  3. Combining one harmony from each type, one obtains a union of 3 harmonies, and if there are no common chords between the harmonies, one has 20+20+20=60 3-chords and a strong resemblance with the code table. To a given AA one assigns the orbit of a given face under the icosahedral isometries, so that codons correspond to the points of the orbit and the orbit corresponds to the AA. 4 chords are however missing from 64. These one obtains by adding a tetrahedron: one can glue it to the icosahedron along a chosen face or keep it disjoint.

  4. The model in its original form predicts 256 different harmonies, with 64 3-chords defining each harmony. DNA codon sequences would be analogous to sequences of chords, pieces of music. The same applies to mRNA. Music expresses and creates emotions, and the natural proposal is that these bio-harmonies correlate with moods that would appear already at the molecular level. They could be realized in terms of dark photon triplets, that is in terms of light and perhaps even music (living matter is full of piezo-electrets). In fact, also the emotions generated by other art forms could be realized using the music of dark light.
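The codon bookkeeping in items 1-4 can be sketched as a few lines of arithmetic: three fused harmonies of 20 chords each, plus the 4 faces of the added tetrahedron, give the 64 codons. The dictionary of icosahedron counts is standard polyhedral data, included only as a sanity check.

```python
# Codon counting in the bio-harmony model (a bookkeeping sketch only).
icosahedron = {"vertices": 12, "edges": 30, "faces": 20}

# Sanity check: Euler's formula V - E + F = 2 for a convex polyhedron.
assert icosahedron["vertices"] - icosahedron["edges"] + icosahedron["faces"] == 2

harmonies = 3                          # one harmony of each symmetry type Z6, Z4, Z2
chords_per_harmony = icosahedron["faces"]
tetrahedron_faces = 4                  # supplies the 4 missing chords
total_chords = harmonies * chords_per_harmony + tetrahedron_faces
print(total_chords)                    # 64, the number of DNA codons
```

The 12 vertices of the Hamiltonian cycle match the 12 notes, the 20 faces the 20 amino-acids, and the total chord count the 64 codons, which is the numerical coincidence the model builds on.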

The model of music harmony is separate from the model of genetic code based on dark proton triplets, and one of the challenges has been to demonstrate that they are equivalent. This inspires several questions.
  1. Could the number of harmonies actually be larger than the 256 the original model predicts? One could rotate the 3 fused Hamiltonian cycles with respect to each other by icosahedral rotations leaving the face shared by the icosahedron and tetrahedron invariant. There are however conditions to be satisfied.

    1. There is a purely mathematical restriction. If the 3 fused harmonies have no common 3-chords, the number of coded AAs is 20. Can one give up the condition of having no common 3-chords and only require that the number of coded AAs is 20?

    2. There is also the question of the chemical realizability of the harmony. Is it possible to have DNA and RNA molecules to which the 3-chords of several harmonies couple resonantly? This could leave only very few realizable harmonies.

  2. The model predicts the representation of DNA and RNA codons as 3-chords. Melody is also an important aspect of music. Could AAs couple resonantly to the sums of the frequencies (modulo octave equivalence) of the 3-chords for the codons coding for a given AA? Could coding by the sum of frequencies appear in the coupling of tRNA with mRNA by codewords, and coding by separate frequencies in the letterwise coupling of DNA and RNA nucleotides to DNA during replication and transcription?

  3. What about tRNA? Could tRNA correspond to pairs of harmonies with 20+20+4=44 codons? What about a single 20+4=24 codon representation as a kind of pre-tRNA?

  4. What is the origin of the 12-note scale? Does the genetic code force it? The affirmative answer to this question relies on the observation that a 1-1 correspondence between codons and triplets of photons requires that the frequency assignable to a letter must depend on its position. This gives just 12 notes altogether. Simple symmetry arguments fix the correspondence between codons and 3-chords almost uniquely: only 4 alternatives are possible, so that it would be possible to listen to what a DNA sequence sounds like in a given mood characterized by the harmony.

  5. What could disharmony mean? A possible answer comes from the 6 Hamiltonian cycles having no symmetries. These disharmonies could express "negative" emotions.
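The counting in item 4 - position-dependent letter frequencies giving exactly 12 notes - can be sketched directly. The concrete note assignment below is my own illustrative choice, not the one fixed by the symmetry arguments in the text.

```python
# Why position-dependent letter frequencies give a 12-note scale (a sketch).
letters = ["A", "U", "C", "G"]
positions = 3                      # three letters per codon
notes = len(letters) * positions
print(notes)                       # 12: 4 letters times 3 positions

# A hypothetical assignment of a note index to each (letter, position) pair;
# any bijection onto the 12 notes would do for this counting argument.
note_of = {(l, p): p * len(letters) + i
           for p in range(positions)
           for i, l in enumerate(letters)}

# Each codon then maps to a 3-chord, one note per letter position.
chord = [note_of[(l, p)] for p, l in enumerate("ACG")]
print(chord)                       # [0, 6, 11]
```

The key point is the bijection: with position-independent frequencies only 4 notes would exist and distinct codons could not map to distinct 3-chords.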

See the article New results in the model of bio-harmony or the new chapter Geometric theory of bio-harmony.


Friday, June 08, 2018

Your eyes are the mirrors of my soul!

A fascinating finding again: neuroscientist Giovanni Caputo reports that staring into someone's eyes for 10 minutes induces an altered state of consciousness.

This finding seems to provide direct support for one of the most radical predictions of the TGD based quantum view about the brain (see this). Neuroscientists assume that nerve pulse patterns generate sensory mental images, in particular visual mental images, in the brain. In the TGD framework the brain would build cognitive representations and decompose the perceptive field into standard objects in this manner but would not produce sensory qualia. The sensory mental images would be realized at the level of the sensory organs. This would involve repeated feedback using virtual sensory input from the brain (or even the magnetic body of the brain) to build standardized sensory mental images giving rise to pattern cognition. During REM sleep the virtual sensory input would form the entire sensory input. Nerve pulses are quite too slow to achieve this, and they would only generate sensory pathways, kinds of wave guides, along which dark photons with a non-standard value heff=n×h0 of Planck constant would propagate forth and back.

This view makes it possible to avoid the problem due to the fact that the neuronal networks in various sensory areas look very much the same, so that it is difficult to understand why they give rise to so different sensory qualia. The obvious objection is the phantom limb phenomenon, which could however be understood if the pain in the phantom limb is a sensory memory of pain. It is indeed possible to produce sensory memories by an electrical stimulation of the brain. In TGD the perceptive field would be 4-D and only sensory percepts would be localized to an approximate time=constant snapshot having actually a finite duration of about .1 seconds. Memories (as distinguished from learned skills and conditionings) would correspond to contributions from the geometric past.

The staring-into-eyes experience provides an opportunity to test the idea about virtual sensory input. A fusion of two conscious entities, call them A and B, at some level of the self hierarchy might occur. This would involve entanglement, which in the TGD framework would accompany the generation of magnetic flux tubes, or actually flux tube pairs (by reconnection of flux loops), connecting the eyes of the experiencers, and the propagation of dark photons along the flux tubes between the brains of A and B, so that visual consciousness would be shared. For instance, A could see the virtual sensory input representing her own face at the face of B. This indeed happened! Volunteers also had out of body experiences (OBEs), had hallucinations of monsters, and saw besides themselves their relatives.

One particularly fascinating question is what seeing one's own relatives could mean. The answer depends on whether the subject persons knew each other or not. If not, then the information about the relatives of, say, A would have been transferred from A to B and then returned as virtual sensory input via the eyes of B to the eyes of A. This is of course possible also when the persons know each other. A would be looking into a consciousness mirror defined by B! This experiment would be the first direct realization of the fusion of two selves by quantum entanglement. The revolution in neuroscience is now in full swing!

See the article DMT, pineal gland, and the new view about sensory perception or the chapter Quantum Model for Nerve Pulse.


Tuesday, June 05, 2018

When big dream changes to its diametric opposite

In Quanta Magazine there was an article by Robbert Dijkgraaf with the title "There Are No Laws of Physics. There's Only the Landscape". Lubos Motl's aggressive rant reflects the deep frustration of superstring fanatics in a situation in which superstring theories are for all practical purposes dead.

The title of the article tells the message. The article repeats the same old wisdom: string model is the only viable model to describe physics. The problem is of course that string model fails to predict anything - as the title indeed tells. The natural reaction would be the simple question "What went wrong?", but this question is not asked. Instead one just gives up the very idea behind physics - that of finding principles that predict the universe as we observe it. Of course, such a theory will predict also many other things, but this universe inhabited by us must come out naturally.

In order to understand this it is good to notice that there are two views about mathematics and also physics.

  1. In the first view one tries to identify fundamental structures: classical number fields represent an excellent example in this respect. There are only 4 continuous number fields with real topology: reals, complex numbers, quaternions, and octonions. There is an infinite number of p-adic number fields plus finite fields.

  2. In second view one tries to be as generic as possible and allows all imaginable structures.

The first view has traditionally dominated in physics, and the very idea of a unified theory crystallizes this view. Indeed, in the beginning the Big Good News in superstring theory was that we might have finally found the unique Theory Of Everything! It however turned out to be hype. Now we are told that the Big Good News is that there is no such theory, only an infinite landscape! The view about what is desirable has changed completely! This is actually familiar to psychologists: the left brain builds the narrative which best suits the situation, and if all went wrong, the new narrative represents the catastrophe as the initial goal!

I must admit being old-fashioned. Taking a sharper look, one begins to realize that this turning of the coat is basically about the infinite vanity of human egos. Around 1984, when the superstring mania began, there was a horrible hurry to write epoch-changing papers in the hope of even receiving a Nobel. No-one had time to think whether the basic idea that 2-D string world sheets are fundamental is realistic. This idea would be realistic if we lived in a 2-D space-time. We however live in a 4-D space-time, and it would be natural to take 3-D surfaces to be the basic objects: they would represent either a particle or 3-space depending on the size scale of the observer.

The idea was that spontaneous compactification would give 4-D space-time effectively. Soon it began to become clear that string model does not work as expected, but it was quite too embarrassing to admit this: when too many people are wrong, they decide collectively that they are right. Branes were introduced in the hope that 4-D space-time could correspond to a brane. Around 1993 or so, M-theory emerged as the last desperate attempt to make something out of superstring models. The outcome was the landscape and an almost complete loss of predictivity. There were however some general predictions: SUSY at LHC energies, which was not found, and the wrong sign of the cosmological constant. This eventually led to a complete re-evaluation of what the great dream should have been: it was the landscape rather than the unification of fundamental interactions and a quantum theory of gravity!

I was lucky and managed to avoid falling into this collective trap. I had started towards the end of 1977 with the basic idea of TGD about space-time as a 4-D surface in a higher-dimensional space-time of the form H=M4×S. This was to solve the energy problem of general relativity due to the loss of Poincare symmetries: now they were lifted to H. It soon turned out that TGD can also be seen as a generalization of the hadronic string model obtained by replacing strings with 3-surfaces. I published my thesis in 1982, 2 years before the first superstring revolution.

In the middle of the superstring hysteria the obvious idea about the replacement of strings with 3-surfaces was missed, despite the fact that even the fundamental conformal invariance of 2-D string world sheets generalizes to 3-D light-like surfaces by their metric 2-dimensionality. Particles would have as basic building bricks 3-D light-like 3-surfaces at which the signature of the induced metric changes from Minkowskian to Euclidian. This makes 4-D space-time and the decomposition of the higher-D imbedding space as H=M4×S unique.

S must be chosen so that H is completely unique both physically and mathematically. S=CP2 provides a geometrization of standard model symmetries, quantum numbers, and classical gauge fields plus gravitation. S=CP2 is unique also mathematically: only CP2, S4 and E4 (and M4 in a generalized sense) allow twistor spaces with Kähler structure. For this choice of H one also has so-called M8-H duality. M8 provides a number theoretical description using classical number fields as a dual for the geometric description in terms of H (one could speak of a number theoretical compactification, which is not the dynamical compactification that was hoped to give 4-D space-time in superstring models).

I am proud to confess that the TGD view represents the old-fashioned view about unification: extreme local simplicity of dynamics at the fundamental level but topological complexity in all scales for the many-sheeted space-time. At the QFT limit (standard model plus GRT) the many-sheeted space-time is replaced with a single-sheeted one in long length scales. Topological information is lost and the local dynamics becomes complex. TGD also changes the world view: length scale reductionism is replaced with fractal hierarchies; p-adic physics for various p-adic number fields and real numbers fused to adelic physics brings cognition to the realm of physics; and a quantum theory of consciousness based on zero energy ontology and quantum biology emerge as applications or rather essential parts of fundamental physics.


Monday, June 04, 2018

Did animals emerge only 100,000-200,000 y ago or did the mitochondrial mutation rate increase dramatically at that time?

I encountered an interesting popular article telling about findings challenging Darwin's evolutionary theory. The original article of Stoeckle and Thaler is here.

The conclusion of the article is that almost all animals, 9 out of 10 animal species on Earth today, including humans, would have emerged about 100,000-200,000 years ago. According to Wikipedia, all animals are assumed to have emerged about 650 million years ago from a common ancestor. The Cambrian explosion began around 542 million years ago. According to Wikipedia, Homo Sapiens would have emerged 300,000-800,000 years ago.

On the basis of Darwin's theory, based on survival of the fittest and adaptation to a new environment, one would expect that species such as ants and humans with large populations distributed around the globe become genetically more diverse over time than species living in the same environment. The study of so-called neutral mutations, not relevant for survival and assumed to occur with some constant rate, however finds that this is not the case. The study of so-called mitochondrial DNA barcodes across 100,000 species showed that the variation of neutral mutations became very small about 100,000-200,000 years ago. One could say that the evolution differentiating between them began (or effectively began) after this time. As if the mitochondrial clocks of these species had been reset to zero at that time, as the article states it. This is taken as support for the conclusion that all animals emerged about the same time as humans.

The proposal of (at least) the writer of the popular article is that life was almost wiped out by a great catastrophe and extraterrestrials could have helped to start the new beginning. This brings to mind the Noah's Ark scenario. But can one argue that humans and the other animals emerged at that time: were they only survivors of a catastrophe? One can also argue that the rate of mitochondrial mutations increased dramatically for some reason at that time.

Could one think that a great evolutionary leap initiated the differentiation of mitochondrial genomes at that time and that before it the differentiation was very slow for some reason? Why would this change have occurred simultaneously in almost all animals? Something should have happened to the mitochondria, and what kind of external evolutionary pressure could have caused it?

  1. To me the idea about ETs performing large scale genetic engineering does not sound very convincing. That only a small fraction of animals survived the catastrophe sounds like a more plausible idea. Was it a great flood? One can argue that animals living in water would have survived in this case. Could some cosmic event such as a nearby supernova have produced radiation killing most animals? But is mass extinction really necessary? Could some evolutionary pressure without extinction have caused the apparent resetting of the mitochondrial clock?

  2. In TGD based quantum biology the great leaps could be caused by quantum criticality, perhaps induced by some evolutionary pressure due to some kind of catastrophe. The value of heff=nh0 (h0 is the minimal value of Planck constant) - a kind of IQ in a very general sense - in some part of the mitochondria could have increased, and also its value would have fluctuated. Did a new, longer length scale relevant to the functioning of the mitochondria emerge? Did the mitochondrial size increase? Here I meet the boundaries of my knowledge about evolutionary biology!

  3. Forget for a moment the possibility of mass extinction. Could the rate of mutations, in particular the rate of neutral mutations, have increased as a response to evolutionary pressure? Just the increased ability to change helps to survive. This rate would become high at quantum criticality due to the presence of large quantum fluctuations (variations of heff). If the mitochondria were far from quantum criticality before the catastrophe, the rate of mutations would have been very slow. The animal kingdom would have lived through a period of stagnation. The emerging quantum criticality - forced by a catastrophe but not involving an extinction - could have increased the rate dramatically.


Saturday, June 02, 2018

Replication of sequences of RNA codons is possible also in the presence of folding!

There was a very interesting popular article in Spacedaily with the title "Scientists crack how primordial life on Earth might have replicated itself" (see this). The research paper "Ribozyme-catalysed RNA synthesis using triplet building blocks" is here.

It is possible to replicate unfolded RNA strands in the lab by using enzymes known as ribozymes, which are the RNA counterparts of enzymes, which are amino-acid sequences. In the presence of folding the replication is however impossible. Since ribozymes are in general folded, they cannot catalyze their own replication in this manner. The researchers however discovered that replication using RNA triplets - genetic codons - as the basic unit can be carried out in the laboratory even for folded RNA strands and with a rather low error rate. Also the ribozyme involved can thus replicate. For units longer than 3 nucleotides the replication becomes prone to errors.

These findings are highly interesting in the TGD framework. In TGD the chemical realization of the genetic code is not fundamental. Rather, the dark matter level would provide the fundamental realizations of the analogs of DNA, RNA, tRNA, and amino-acids as dark proton sequences giving rise to dark nuclei at magnetic flux tubes. Also ordinary nuclei correspond in the TGD Universe to sequences of protons and neutrons forming string like entities assignable to magnetic flux tubes.

The basic unit representing a DNA, RNA or tRNA codon or an amino-acid would consist of 3 entangled dark protons. The essential aspect is that by entanglement the dark codons do not decompose into products of letters. This is like the words of some languages, which do not allow decomposition into letters. This representation is holistic. As we learn to read and write, we learn the more analytic western view about words as letter sequences. Could the same hold true in evolution, so that RNA triplets would have come first as entities pairing as a whole with dark RNA codons formed from dark proton triplets? Later DNA codons would have emerged and paired with dark DNA codons. Now the coupling would have been letter by letter in DNA replication and transcription to mRNA.

It is intriguing that tRNA consists of RNA triplets - analogs of mRNA triplets - combined with amino-acids! The translation of mRNA to amino-acids having no 3-letter decomposition of course forces the holistic view, but one can ask whether something deeper is involved. This might be the case. I have been wondering whether during the RNA era RNA replicated using a prebiotic form of the translational machinery, which replicated mRNA rather than translating RNA to protein formed from amino-acids (AAs).

  1. During the RNA era the amino-acids associated with pre-tRNA molecules would have served as catalysts for the replication of RNA codons. The linguistic mode would have been "holistic" during the RNA era, in accordance with the findings of the above experiments. The RNA codon would have been the basic unit.

  2. This would have led to a smaller number of RNAs since RNA and the RNA like molecules in tRNA are not in 1-1 correspondence. A more realistic option could have been the replication of the subset of RNA molecules appearing in tRNA in this manner.

  3. Then a great evolutionary leap leading from the RNA era to the DNA era would have occurred. The AA catalyzed replication of RNA would have transformed into a translation of RNA to proteins, and the roles of RNA and AA in tRNA would have changed. [Perhaps an increase of heff in some relevant structure as quantum criticality was reached led to the revolution?]

  4. At this step also (a subset of) DNA and its transcription to (a subset of) mRNA corresponding to tRNA had to emerge to produce mRNA in transcription. In recent biology DNA replicates and is transcribed nucleotide by nucleotide rather than using the codon as a unit, so that the DNA and RNA polymerases catalyzing replication and transcription should have emerged at this step. An alternative option would involve "tDNA" as the analog of "tRNA" and the emergence of polymerases later: this does not however look attractive if one accepts the idea about the transition from the holistic to the analytic mode.

    The ability of DNA to unwind is essential for the emergence of the "analytic linguistic mode" as an analog of written language (DNA) decomposing codons into triplets of letters. This must have been a crucial step in evolution, comparable to the emergence of written language based on letters. Also the counterpart of RNA polymerase and separate RNA nucleotides for transcription should have emerged, if not already present.

    The minimal picture would be the emergence of a subset of DNA codons corresponding to the RNAs associated with pre-tRNA and the emergence of the analogs of DNA and RNA polymerases as the roles of the amino-acid and the RNA codon in tRNA were changed.

  5. How could DNA have emerged from RNA? The chemical change would have been essentially the replacement of ribose with deoxyribose to get DNA from RNA, plus U→T. A single O-H in ribose was replaced with H. O forms hydrogen bonds with water, and this had to change the hydrogen bonding characteristics of RNA.

    If a change of heff = n×h0 (one has h = 6×h0 in the most plausible scenario, see this and this) was involved, could it have led to the stabilization of DNA? Did the cell membrane emerge to make this possible? I have proposed (see this) that the emergence of the cell membrane meant the emergence of a new representation of the dark genetic code based on dark nuclei with a larger value of heff.

The communication between dark and ordinary variants of biomolecules involves a resonance mechanism and would also involve the genetic code represented as 3-chords, music of light. It is interesting to see whether this model provides additional insights.
  1. The proposal is that 3-chords assignable to nucleotides as music of light, with the 64 allowed chords defining what I have called bio-harmony, are essential for the resonance (see this, this, and this). The 3 frequencies must be identical in the resonance: this is like turning 3 knobs on a radio. This 3-fold resonance would correspond to the analytic mode. The second mode could be holistic in the sense that it would involve only the sum of the 3 frequencies modulo octave equivalence, assigning a melody to a sequence of 3-chords.

  2. The proposal is that amino-acids, having no triplet decomposition, are holistic and couple to the sum of the 3 frequencies assignable to tRNA and mRNA in this manner. Also the RNAs in tRNA could couple to mRNA in this manner. One could perhaps say that tRNA, mRNA, and amino-acids sing whereas DNA provides the accompaniment proceeding as 3-chords. The couplings of DNA nucleotides to RNA nucleotides would rely on the frequencies assignable to the nucleotides.

  3. If the sum of the 3 frequencies associated with an mRNA codon is different for all codons except those coding for the same amino-acid, the representation of 3-chords by the sum of the notes is faithful. The frequencies assigned to DNA and RNA nucleotides cannot however be independent of the codon, since codons differing only by a permutation of letters would correspond to the same frequency sum and would therefore code for the same amino-acid. Hence the information about the entire codon would be needed also in transcription and translation, and could be provided either by the dark DNA strand associated with the DNA strand or by the interactions between the nucleotides of the DNA codon.

  4. The DNA codon itself would know that it is associated with a dark codon, and the frequencies assignable to the nucleotides would be determined by the dark DNA codon. It would be enough that the frequency of a letter depends on its position in the codon, so that there would be 3 frequencies for every letter: 12 frequencies altogether.

    What puts bells ringing is that this is the number of notes in the 12-note scale, for which the model of bio-harmony (see this and this), based on the fusion of icosahedral (12 vertices and 20 triangular faces) and tetrahedral geometries obtained by gluing icosahedron and tetrahedron along one face, provides a model as a Hamiltonian cycle and produces the genetic code as a by-product. Different Hamiltonian cycles define different harmonies identified as correlates for molecular moods.

    Does each DNA nucleotide respond to 3 different frequencies coding for its position in the codon, and do the 4 nucleotides give rise to the 12 notes of the 12-note scale? There are many choices for the triplets, but a good guess is that the intervals between the notes of a triplet are the same and that a fourth note added to the triplet would be the first one to realize octave equivalence. This gives uniquely CEG#, C#FA, DF#Bb, and D#GB as the triplets assignable to the nucleotides. The emergence of the 12-note scale in this manner would be a new element in the model of bio-harmony.

    There are 4! = 24 options for the correspondence between the nucleotides {A, T, C, G} and the triads with first notes {C, C#, D, D#}. One can reduce this number by a simple argument.

    1. Letters and their conjugates form the pyrimidine-purine pairs (T, A) and (C, G). The square of conjugation is the identity transformation. The replacement of a note with the note at the distance of half-octave satisfies this condition (the half-octave - tritonus - was a cursed interval in ancient music; the sound of an ambulance siren realizes it).
      Conjugation could correspond to a transformation of 3-chords defined as

      CEG# ↔ DF#Bb , C#FA ↔ D#GB .

    2. One could have

      {T, C} ↔ {CEG#, C#FA} , {A, G} ↔ {DF#Bb, D#GB}

      or

      {T, C} ↔ {DF#Bb, D#GB} , {A, G} ↔ {CEG#, C#FA} .

    3. One can permute T and C, and A and G, in these correspondences. This leaves 8 alternative options. Fixing the order of the image of (T, C) to, say, (C, C#) fixes the order of the image of (A, G) to (D, D#) by the half-octave conjugation. This leaves 4 choices. Given the bio-harmony, and having chosen one of these 4 options, one could therefore check how a given DNA sequence sounds as a sequence of 3-chords (see this).

      Anyone willing to do this kind of experimentation can obtain from me the program modules, used with the Garage Band program, to produce a sequence of chords. A further interesting experiment would be to check what kind of melodies come out if one assigns to a chord a note as the sum of the frequencies of the chord, reduced by octave equivalence to the basic octave.

    That the frequency associated with a nucleotide depends on its position in the codon would also reflect the biochemistry of the codon, and this kind of dependence would be natural. In particular, the different frequencies associated with the first and third nucleotides would reflect the parity breaking defining an orientation for DNA.
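The codon-to-chord mapping can be made concrete with a minimal sketch. The code below assumes the option {T, C} ↔ {CEG#, C#FA}, {A, G} ↔ {DF#Bb, D#GB} and, as a further assumption not fixed by the argument, the convention that the letter in position p of the codon contributes the p-th note of its augmented triad.

```python
# Sketch of the codon -> 3-chord mapping; the nucleotide -> triad assignment
# and the position convention are assumptions, not unique choices.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "Bb", "B"]

# Each nucleotide owns one augmented triad (semitone steps 0, 4, 8).
TRIAD = {
    "T": [0, 4, 8],    # C  E  G#
    "C": [1, 5, 9],    # C# F  A
    "A": [2, 6, 10],   # D  F# Bb
    "G": [3, 7, 11],   # D# G  B
}

def codon_chord(codon):
    """Analytic mode: the letter in position p contributes the p-th note
    of its triad, so the 4 nucleotides span all 12 notes (4 x 3)."""
    return [TRIAD[letter][pos] for pos, letter in enumerate(codon)]

def melody_note(chord):
    """Holistic mode: the sum of the notes reduced mod 12 (octave equivalence)."""
    return sum(chord) % 12

for codon in ["ATG", "GAT", "TAG"]:
    chord = codon_chord(codon)
    print(codon, [NOTE_NAMES[n] for n in chord], "->", NOTE_NAMES[melody_note(chord)])
```

Notably, with this symmetric choice of position offsets (+0, +4, +8 semitones) the mod-12 sum comes out the same for all permutations of a codon's letters, which concretely illustrates why the sum of frequencies alone cannot give a faithful representation of the codon.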

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

LSND anomaly is here again!

Sabine Hossenfelder told about the finding of the MiniBooNE collaboration described in the preprint Observation of a Significant Excess of Electron-Like Events in the MiniBooNE Short-Baseline Neutrino Experiment.

The findings give strong support for the old and forgotten LSND anomaly - forgotten because it is in so blatant a conflict with the standard model wisdom. The significance level of the anomaly is 6.1 sigma in the new experiment; 5 sigma is regarded as the threshold for a discovery. It is nice to see this old fellow again: anomalies are the theoretician's best friends.

To me this seems like a very important event from the point of view of the standard model and even theoretical particle physics: this anomaly, together with other anomalies, raises hopes that the patient could leave the sickbed after an illness that has lasted for more than four decades, ever since it became a victim of the GUT infection.

LSND, like the other experiments, is consistent with the neutrino mixing model. LSND however produces an electron excess as compared to other neutrino experiments. The anomaly means that the parameters of the neutrino mixing matrix (masses, mixing angles, phases) are not enough to explain all the experiments.

One manner to explain the anomaly would be a fourth "inert" neutrino having no couplings to electroweak bosons. TGD predicts both right- and left-handed neutrinos, and the right-handed ones would not couple electroweakly. In massivation they would however combine to a single massive neutrino, just as in Higgs massivation the Higgs gives components to the massive gauge bosons and only the neutral Higgs, having no coupling to the photon, remains. Therefore this line of thought does not seem terribly promising in the TGD framework.

Many years ago I explained the LSND neutrino anomaly in the TGD framework as being due to the fact that neutrinos can correspond to several p-adic mass scales. The p-adic mass scale coming as a power of 2^(1/2) would bring in the needed additional parameter. The new particles could be ordinary neutrinos with different p-adic mass scales. The neutrinos used in an experiment would have a p-adic length scale depending on their origin: lab, Earth's atmosphere, Sun, ... It is also possible that the neutrinos transform during their travel to less massive neutrinos.

What is intriguing is that the p-adic length scale range that can be considered as a candidate for neutrino Compton lengths is biologically extremely interesting. This range could correspond to the p-adic length scales L(k) ∼ 2^((k-151)/2) L(151), k = 151, 157, 163, 167, varying from the cell membrane thickness 10 nm to 2.5 μm. These length scales correspond to Gaussian Mersennes M_G,k = (1+i)^k - 1. The appearance of 4 Gaussian Mersennes in such a short length scale interval is a number theoretic miracle. Could neutrinos or their dark variants with heff = n×h0 (h = 6×h0 is the most plausible option at this moment, see this and this), together with dark variants of weak bosons effectively massless below their Compton length, have a fundamental role in quantum biology?
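These numbers are easy to check with a small Python sketch. The only assumption beyond the text is the normalization L(151) = 10 nm (the cell membrane thickness quoted above); the Gaussian Mersenne (1+i)^k - 1 is computed with exact Gaussian-integer arithmetic.

```python
# p-Adic length scales L(k) = 2^((k-151)/2) * L(151), with L(151) = 10 nm
# (cell membrane thickness, taken from the text as the normalization).

L151_NM = 10.0  # nm

def padic_length_nm(k):
    """Length scale L(k) in nanometres."""
    return L151_NM * 2 ** ((k - 151) / 2)

def gaussian_mersenne_norm(k):
    """Norm |(1+i)^k - 1|^2, computed exactly with integer pairs (a, b) = a + b*i."""
    a, b = 1, 0                # (1+i)^0 = 1
    for _ in range(k):
        a, b = a - b, a + b    # multiply by (1+i)
    a -= 1                     # subtract 1
    return a * a + b * b

for k in (151, 157, 163, 167):
    print(f"k={k}: L(k) = {padic_length_nm(k):7.1f} nm, "
          f"norm of M_G,k has {gaussian_mersenne_norm(k).bit_length()} bits")
```

For k = 151, 157, 163, 167 this reproduces the quoted range: 10 nm, 80 nm, 640 nm, and 2560 nm ≈ 2.5 μm.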

For the TGD based new physics and also for LSND anomaly see chapter New Particle Physics Predicted by TGD: Part I of "p-Adic physics".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.