Saturday, January 20, 2018

About the Correspondence of Dark Nuclear Genetic Code and Ordinary Genetic Code

The idea about the realization of the genetic code in terms of dark proton sequences giving rise to dark nuclei is one of the key ideas of TGD inspired quantum biology (see this). This vision was inspired by the totally unexpected observation that the states of three dark protons (or quarks) can be classified into 4 classes in which the numbers of states are the same as for DNA, RNA, tRNA, and amino-acids. Even more, it is possible to identify the genetic code as a natural correspondence between the dark counterparts of DNA/RNA codons and dark amino-acids, and the numbers of DNAs/RNAs coding for a given amino-acid are the same as in the vertebrate code. What is new is that the dark codons do not reduce to ordered products of letters.
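To make the combinatorics being matched concrete, the following small Python tally lists the degeneracies of the standard vertebrate nuclear code, that is how many amino-acids are coded by 1, 2, 3, 4 or 6 codons. This is a textbook fact quoted here only for orientation; the dark codon counting itself is not reproduced here.

    # degeneracies of the standard vertebrate nuclear genetic code:
    # codons-per-amino-acid : number of amino-acids with that degeneracy
    degeneracies = {1: 2, 2: 9, 3: 1, 4: 5, 6: 3}

    amino_acids = sum(degeneracies.values())                      # 20
    coding_codons = sum(d * m for d, m in degeneracies.items())   # 61
    print(amino_acids, coding_codons, coding_codons + 3)          # 20 61 64 (3 stop codons included)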

During the years I have considered several alternatives for the representations of the genetic code. For instance, one can consider the possibility that the letters of the genetic code correspond to the four spin-isospin states of a nucleon or quark, or to the spin states of an electron pair. An ordering of the letters as states is required, and this is problematic from the point of view of the tensor product unless the ordering reflects a spatial ordering of the positions of the particles representing the letters. One representation in terms of 3-chords formed from 3-photon states of dark photons emerges from the model of music harmony (see this). By octave equivalence the ordering of the notes is not needed.

Insights

The above observations inspire several speculative insights.

  1. The emergence of dark nuclei identified as dark proton sequences would relate to Pollack's effect, in which irradiation of water in the presence of a gel phase bounding the water generates what Pollack calls exclusion zones (EZs). EZs are negatively charged and the water in them has the effective stoichiometry H1.5O. EZs deserve their name: somehow they manage to get rid of various impurities, which might be very important if EZs serve as regions carrying biologically important information. The protons of the water molecules must go somewhere, and the proposal is that they go to the magnetic body of some system consisting of flux tubes. The flux tubes contain the dark protons as sequences identifiable as dark nuclei.

  2. Since nuclear physics precedes chemistry, one can argue that prebiotic life is based on these dark biomolecules serving as a template for ordinary biomolecules. To some degree biochemistry would be shadow dynamics and dark dynamics would be extremely simple as compared to the biochemistry induced by it. In particular, DNA replication, transcription, and translation would be induced by their dark variants. One can even extend this vision: perhaps also ordinary nuclear physics and its scaled up counterpart explaining "cold fusion" are parts of evolutionary hierarchy of nuclear physics in various scales.

  3. Nature could have a kind of R&D lab allowing it to test various new candidates for genes by using transcription and translation at the level of the dark counterparts of the ordinary basic biomolecules.

Conditions on the model

The model must satisfy stringent conditions.

  1. Both the bases A, T, C, G of DNA and A, U, C, G of RNA must have emerged as basic chemical building bricks without the help of enzymes and ribozymes. It is known that the biochemical pathway known as the pentose phosphate pathway generates both ribose and ribose-5-phosphate defining the basic building brick of RNA. In DNA ribose is replaced with deoxyribose obtained by removing one oxygen.

    Pyrimidines U, T and C having a single aromatic ring are reported by NASA to be generated under outer space conditions (see this). Carell et al have identified a mechanism leading to the generation of purines A and G, which besides the pyrimidines C, T (U) are the basic building bricks of DNA and RNA. The crucial step is to make the solution involved slightly acidic by adding protons. The TGD inspired model for the mechanism involves dark protons (see this).

    Basic amino-acids are generated in Miller-Urey type experiments. Also nucleobases have been generated in Miller-Urey type experiments.

    Therefore the basic building bricks can emerge without help of enzymes and ribozymes so that the presence of dark nuclei could lead to the emergence of the basic biopolymers and tRNA.

  2. Genetic code as a correspondence between RNA and the corresponding dark proton sequences must emerge. The same holds true for DNA, and also for amino-acids and their dark counterparts. The basic idea is that metabolic energy transfer between biomolecules and their dark variants must be possible. This requires transitions with the same transition energies so that resonance becomes possible. This is also essential for the pairing of DNA and dark DNA, and for the pairing of, say, dark DNA and dark RNA. The resonance condition could explain why just the known basic biomolecules are selected from the huge variety of candidates possible in ordinary biochemistry, and there would be no need to assume that life as we know it emerged as a random accident.

  3. Metabolic energy transfer between molecules and their dark variants must be possible by the resonance condition. The dark nuclear energy scale associated with a biomolecule could correspond to the metabolic energy scale of 0.5 eV. This condition fixes the model to a high extent, but also other dark nuclear scales with their own metabolic energy quanta are possible.

Vision

The basic problem in the understanding of prebiotic evolution is how DNA, RNA, amino-acids and tRNA, and perhaps even the cell membrane and microtubules, emerged. The individual nucleotides and amino-acids emerge without the help of enzymes or ribozymes, but the mystery is how their polymers emerged. If the dark variants of these molecules served as templates for their generation, one avoids this hen-and-egg problem. The problem of how just the biomolecules were picked from the huge variety of candidates allowed by chemistry could be solved by the resonance condition making possible metabolic energy transfer between biomolecules and dark nuclei.

A simple scaling argument shows that the assumption that the ordinary genetic code corresponds to heff/h = n = 2^18, and therefore to the p-adic length scale L(141) ≈ 0.3 nm corresponding to the distance between DNA and RNA bases, predicts that the scale of dark nuclear excitation energies is 0.5 eV, the nominal value of the metabolic energy quantum. This extends and modifies the vision about how prebiotic evolution led via the RNA era to recent biology. Unidentified infrared bands (UIBs) from interstellar space, identified in terms of transition energies of dark nuclear physics, support this vision and one can compare it to the PAH world hypothesis.

p-Adic length scale hypothesis and thermodynamical considerations lead one to ask whether the cell membrane and microtubules could correspond to 2-D analogs of RNA strands associated with dark RNA codons forming lattice like structures. Thermal constraints allow the cell membrane of thickness about 5 nm as a realization of the k=149 level with n = 2^22 in terms of lipids as analogs of RNA codons. The metabolic energy quantum is predicted to be 0.04 eV, which corresponds to the membrane potential. The thickness of the neuronal membrane is in the range 8-10 nm and could correspond to k=151 and n = 2^23, in accordance with the idea that it corresponds to a higher level in the cellular evolution reflecting that of dark nuclear physics.

Also microtubules could correspond to a k=151 realization for which the metabolic energy quantum is 0.02 eV, slightly below the thermal energy at room temperature: this could relate to the inherent instability of microtubules. Also a proposal is made for how microtubules could realize the genetic code with the 2 conformations of tubulin dimers and the 32 charges associated with ATP and ADP accompanying the dimer, thus realizing analogs of the 64 RNA codons.
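As a rough numerical cross-check of the scales quoted above, here is a small Python sketch. It assumes the p-adic length scale convention L(k) = 2^((k-151)/2) × L(151) with L(151) ≈ 10 nm, as usually stated in TGD texts, and, as a further simplifying assumption of my own, that the dark excitation energy scale scales like 1/n from the 0.5 eV quantum assigned to n = 2^18; the intent is only to reproduce the orders of magnitude mentioned in the text.

    # p-adic length scales L(k) = 2**((k-151)/2) * L(151), with L(151) ~ 10 nm
    L151_nm = 10.0
    def L(k):
        return 2 ** ((k - 151) / 2) * L151_nm

    for k in (141, 149, 151):
        print(k, round(L(k), 2), "nm")   # 141: ~0.31 nm, 149: 5.0 nm, 151: 10.0 nm

    # crude guess: excitation energy scale ~ 1/n, anchored to 0.5 eV at n = 2**18
    E_ref_eV = 0.5
    for m in (18, 22, 23):               # n = 2**m
        print(2 ** m, round(E_ref_eV * 2 ** (18 - m), 3), "eV")  # 0.5, ~0.031, ~0.016 eV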

See the chapter About the Correspondence of Dark Nuclear Genetic Code and Ordinary Genetic Code or the article with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Sunday, January 14, 2018

About heff/h=n as the number of sheets of Galois covering

The following considerations were motivated by the observation of a very stupid mistake that I have made repeatedly in some articles about TGD. The ratio heff/h = n of the effective Planck constant to the ordinary one corresponds naturally to the number of sheets of the covering space defined by the space-time surface.

I have however claimed that one has n=ord(G), where ord(G) is the order of the Galois group G associated with the extension of rationals assignable to the sector of "world of classical worlds" (WCW) and the dynamics of the space-time surface (what this means will be considered below).

This claim of course cannot be true, since a generic point of the covering has some subgroup H of G leaving it invariant, and one has n = ord(G)/ord(H), which divides ord(G). Equality holds true for instance for Abelian extensions with cyclic G. For singular points the isotropy group is H1 ⊃ H, so that ord(H1)/ord(H) sheets of the covering touch each other. I do not know how I ended up with a conclusion which is so obviously wrong, and how I managed for so long not to notice my blunder.

This observation forced me to consider more precisely what the idea about Galois group acting as a number theoretic symmetry group really means at space-time level and it turned out that M8-H correspondence gives a precise meaning for this idea.

Consider first the action of Galois group (see this and this).

  1. The action of Galois group leaves invariant the number theoretic norm characterizing the extension. The generic orbit of Galois group can be regarded as a discrete coset space G/H, H⊂ G. The action of Galois group is transitive for irreducible polynomials so that any two points at the orbit are G-related. For the singular points the isotropy group is larger than for generic points and the orbit is G/H1, H1⊃ H so that the number of points of the orbit divides n.
    Since rationals remain invariant under G, the orbit of any rational point contains only single point. The orbit of a point in the complement of rationals under G is analogous to an orbit of a point of sphere under discrete subgroup of SO(3).

    n = ord(G)/ord(H) divides the order ord(G) of the Galois group G (a minimal orbit-stabilizer check of this counting is sketched after this list). The largest possible Galois group for an n-D algebraic extension is the permutation group Sn. A theorem of Frobenius states that this can be achieved for n=p, p prime, if there is only a single pair of complex roots (see this). Prime-dimensional extensions with heff/h=p would have maximal number theoretical symmetries and could be very special physically: p-adic physics again!

  2. The action of G on a point of the space-time surface with imbedding space coordinates in an n-D extension of rationals gives rise to an orbit containing n points, except when the isotropy group leaving the point invariant is larger than for a generic point. One therefore obtains a singular covering with the sheets of the covering touching each other at singular points. Rational points are maximally singular points at which all sheets of the covering touch each other.

  3. At the QFT limit of TGD the n dynamically identical sheets of the covering are effectively replaced with a single one, and this effectively replaces h with heff = n × h in the exponent of the action (Planck constant is still the familiar h at the fundamental level). n is naturally the dimension of the extension and thus satisfies n ≤ ord(G). n = ord(G) is satisfied for instance when G is a cyclic group.
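As a minimal illustration of n = ord(G)/ord(H), consider the textbook example of x^3 - 2: the degree of Q(2^(1/3)) is n = 3, while the Galois group of the splitting field is S3 of order 6 and the isotropy group of a single root has order 2. The following small orbit-stabilizer check (my own sketch, not part of the original argument) verifies the counting:

    from itertools import permutations

    # S_3 acting on the three roots of x^3 - 2, labelled 0, 1, 2
    G = list(permutations(range(3)))
    root = 0
    orbit = {g[root] for g in G}             # orbit of the chosen root
    H = [g for g in G if g[root] == root]    # isotropy (stabilizer) subgroup
    print(len(G), len(H), len(orbit))        # 6 2 3
    assert len(orbit) == len(G) // len(H)    # n = ord(G)/ord(H) = 3, dividing ord(G)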

The challenge is to define what the space-time surface as a Galois covering really means!
  1. The surface considered can be a partonic 2-surface, a string world sheet, a space-like 3-surface at the boundary of CD, a light-like orbit of a partonic 2-surface, or a space-time surface. What one actually has is only the data given by discrete points having imbedding space coordinates in a given extension of rationals. One considers an extension of rationals determined by an irreducible polynomial P, but in the p-adic context also roots of e define finite-D extensions since e^p is an ordinary p-adic number.

  2. Somehow this data should give rise to a possibly unique continuous surface. At the level of H = M4 × CP2 this is impossible unless the dynamics satisfies, besides the action principle, also a huge number of additional conditions reducing the initial value data and/or boundary data to the condition that the surface contains a discrete set of algebraic points.

    This condition is horribly strong, much more stringent than holography and even strong holography (SH) implied by general coordinate invariance (GCI) in the TGD framework. However, the preferred extremal property at the level of M4 × CP2, following basically from GCI in the TGD context, might be equivalent with the reduction of boundary data to discrete data if M8-H correspondence is accepted. These data would be analogous to the discrete data characterizing a computer program, so that an analogy of computationalism would emerge (see this).

One can argue that somehow the action of discrete Galois group must have a lift to a continuous flow.
  1. A linear superposition in the extension of the field of rationals does not extend uniquely to a linear superposition in the field of reals, since the expression of a real number as a sum of units of the extension with real coefficients is highly non-unique. Therefore the naive extension of the action of the Galois group to all points of the space-time surface fails.

  2. The old idea, already due to Riemann, is that the Galois group is represented as the first homotopy group of the space. A space with homotopy group π1 has coverings for which points remain invariant under a subgroup H of the homotopy group. For the universal covering the number of sheets equals the order of π1. For the other coverings there is a subgroup H ⊂ π1 leaving the points invariant. For instance, for the homotopy group π1(S1) = Z the subgroup is nZ, and one has Z/nZ = Zn as the group of the n-sheeted covering. For physical reasons it seems reasonable to restrict to finite-D Galois extensions and thus to finite homotopy groups.

    π1-G correspondence would allow to lift the action of Galois group to a flow determined only up to homotopy so that this condition is far from being sufficient.

  3. A stronger condition would be that π1 and therefore also G can be realized as a discrete subgroup of the isometry group of H=M4× CP2 or of M8 (M8-H correspondence) and can be lifted to continuous flow. Also this condition looks too weak to realize the required miracle. This lift is however strongly suggested by Langlands correspondence (see this).

The physically natural condition is that the preferred extremal property fixes the surface or at least space-time surface from a very small amount of data. The discrete set of algebraic points in given extension should serve as an analog of boundary data or initial value data.
  1. M8-H correspondence could indeed realize this idea. At the level of M8 space-time surfaces would be algebraic varieties whereas at the level of H they would be preferred extremals of an action principle which is sum of Kähler action and minimal surface term.

    They would thus satisfy the partial differential equations implied by the variational principle and an infinite number of gauge conditions stating that the classical Noether charges vanish for a subgroup of the symplectic group of δM4+/- × CP2. For the twistor lift there is the additional condition that the induced twistor structure of the 6-D surface, represented as a surface in the 12-D Cartesian product of the twistor spaces of M4 and CP2, reduces to the twistor space of the space-time surface and is thus an S2 bundle over the 4-D space-time surface.

    The direct map M8→ H is possible in the associative space-time regions of X4⊂ M8 with quaternionic tangent or normal space. These regions correspond to external particles arriving into causal diamond (CD). As surfaces in H they are minimal surfaces and also extremals of Kähler action and do not depend at all on coupling parameters (universality of quantum criticality realized as associativity). In non-associative regions identified as interaction regions inside CDs the dynamics depends on coupling parameters and the direct map M8→ CP2 is not possible but preferred extremal property would fix the image in the interior of CD from the boundary data at the boundaries of CD.

  2. At the level of M8 the situation is very simple since space-time surfaces would correspond to zero loci for RE(P) or IM(P) (RE and IM are defined in quaternionic sense) of an octonionic polynomial P obtained from a real polynomial with coefficients having values in the field of rationals or in an extension of rationals. The extension of rationals would correspond to the extension defined by the roots of the polynomial P.

    If the coefficients are not rational but belong to an extension of rationals with Galois group G0, the Galois group of the extension defined by the polynomial has G0 as normal subgroup and one can argue that the relative Galois group Grel=G/G0 takes the role of Galois group.

    It seems that M8-H correspondence could allow to realize the lift of discrete data to obtain continuous space-time surfaces. The data fixing the real polynomial P and therefore also its octonionic variant are indeed discrete and correspond essentially to the roots of P.

  3. One of the elegant features of this picture is that at the level of M8 there are highly unique linear coordinates of M8 consistent with the octonionic structure, so that the notion of an M8 point belonging to an extension of rationals does not lead to a conflict with GCI. Linear coordinate changes of M8 coordinates not respecting the property of being a number in the extension of rationals would define a moduli space, so that GCI would be achieved.

Whether this option implies the lift of G to π1, or even to a discrete subgroup of isometries, is not clear. The Galois group should have a representation as a discrete subgroup of the isometry group in order to realize the latter condition, and Langlands correspondence supports this, as already noticed. Note that only a rather restricted set of Galois groups can be lifted to the subgroups of SU(2) appearing in McKay correspondence and in the hierarchy of inclusions of hyper-finite factors of type II1 labelled by these subgroups, forming the so-called ADE hierarchy in 1-1 correspondence with ADE type Lie groups (see this). One must notice that there are additional complexities due to the possibility of quaternionic structure, which brings in SO(3) acting as the analog of Galois group for quaternions.

See the short article About heff/h=n as the number of sheets of space-time surface as Galois covering or the article Does M8-H duality reduce classical TGD to octonionic algebraic geometry? or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Friday, January 12, 2018

Condensed matter simulation of 4-D quantum Hall effect from TGD point of view

There is an interesting experimental work related to the condensed matter simulation of physics in space-times with D=4 spatial dimensions, meaning that one would have a D=1+4=5-dimensional space-time (see this and this). What is simulated is the 4-D quantum Hall effect (QHE). In M-theory D=1+4-dimensional branes would have 4 spatial dimensions and also 4-D QHE would be possible, so that the simulation allows one to study this speculative higher-D physics but of course does not prove that 4 spatial dimensions are there.

In this article I try to understand the simulation, discuss the question whether 4 spatial dimensions and even 4+1 dimensions are possible in the TGD framework in some sense, and also consider the general idea of simulating higher-D physics using 4-D physics. This possibility is suggested by the fact that it is possible to imagine higher-dimensional spaces and physics: maybe this ability requires a simulation of higher-D physics using 4-D physics.

See the article Condensed matter simulation of 4-D quantum Hall effect from TGD point of view or the chapter Quantum Hall effect and Hierarchy of Planck Constants of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, January 11, 2018

Does the action of anesthetics prevent the formation of cognitive mental images?

I encountered an interesting popular article Scientists Just Changed Our Understanding of How Anaesthesia Messes With The Brain telling about the finding that anesthetics weaken the communications between neurons. It is found that an anesthetic known as propofol restricts the movement of the protein syntaxin 1a, which is needed for neurotransmitter release at the synapses between neurons.

The TGD inspired explanation for the loss of consciousness would be the following. Nerve pulse activity is needed to generate neurotransmitters attaching to the receptors of the post-synaptic neuron and in this manner forming connections between pre- and post-synaptic neurons, giving rise to networks of active neurons. The transmitter would be like a relay in an old-fashioned telephone network. Propofol would prevent the formation of the bridges and therefore of the networks of active neurons serving as correlates for mental images. No mental images, no higher level consciousness.

The earlier TGD inspired proposal was that anesthetics induce a hyperpolarization reducing the nerve pulse activity. How anesthetics could induce hyperpolarization is discussed here: the model involves microtubules in an essential manner. Hyperpolarization would have the same effect as the restriction of the movement of syntaxin 1a. This mechanism might be at work during sleep, and also some anesthetics (but not propofol) could use it.

The TGD based interpretation relies on a profound re-interpretation of the function of transmitters and information molecules in general (see this). The basic idea is that connected networks of neurons correspond to mental images at the neuronal level and that the effect of anesthetics is to prevent the formation of these networks.

  1. In the TGD based model neither nerve pulses nor information molecules represent signals in intra-brain communications. Rather, they build communication channels: as they attach to receptors, they act as relays fusing existing disjoint flux tubes associated with axons to network-like connected structures.

    Flux tube networks make possible classical signalling by dark photons with heff = n × h. Dark photons make their presence manifest by occasionally transforming to ordinary photons identified as bio-photons with energies in the visible and UV range. This signalling takes place with light velocity and is therefore optimal for communication and information processing purposes.

    Quantum mechanically flux tube networks correspond to so called tensor networks. Due to quantum coherence in the scale of network, quantum entanglement between the neurons of connected sub-networks is possible and networks serve as correlates for mental images.

    Nerve pulse patterns frequency modulate generalized Josephson radiation from neuronal membrane acting as a generalized Josephson junction. This radiation identifiable as EEG gives rise to sensory input to magnetic body (MB). MB in turn controls biological body (say brain) via dark cyclotron radiation, at least through genome, where it induces gene expression.

  2. All mental images at the level of the brain are cognitive representations, but they generate dark photon signals as virtual sensory inputs to the sensory organs and in this manner give rise to sensory percepts as a kind of artwork, resulting from an iteration-like process involving signalling back and forth using dark photons. This would make possible pattern recognition and the formation of the objects of the perceptive field as cognitive representations, in turn mapped to sensory percepts at the sensory organs.

  3. In the case of hearing of speech the objects of the perceptive field are linear and represent words and sentences. In the case of written language the words decompose to linear sequences of syllables and these in turn into letters. In the case of sensory perception the sub-networks are 2-D or even 3-D and represent objects of the perceptive field. The topological dynamics of this network represents the dynamics of sensory perception and verbal and sensorily represented cognition (idiot savants).

See the article DMT, pineal gland, and the new view about sensory perception.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.




Wednesday, January 10, 2018

Strange spin asymmetry at RHIC

The popular article Surprising result shocks scientists studying spin tells about a peculiar effect in p-p and p-N (N for nucleus) scattering observed at the Relativistic Heavy Ion Collider (RHIC). In p-p scattering with a polarized incoming proton there is an asymmetry in the sense that protons with vertical polarization with respect to the scattering plane give rise to more neutrons slightly deflected to the right than to the left (see the figure of the article). In p-N scattering of vertically polarized protons the effect is also observed for neutrons but is stronger and has opposite sign for heavier nuclei! The effect came as a total surprise and is not understood. It seems however that the effects for proton and nuclear targets must have different origins, since otherwise it is difficult to understand the change of the sign.

The abstract of the original article summarizes what has been observed.

During 2015 the Relativistic Heavy Ion Collider (RHIC) provided collisions of transversely polarized protons with Au and Al nuclei for the first time, enabling the exploration of transverse-single-spin asymmetries with heavy nuclei. Large single-spin asymmetries in very forward neutron production have been previously observed in transversely polarized p+p collisions at RHIC, and the existing theoretical framework that was successful in describing the single-spin asymmetry in p+p collisions predicts only a moderate atomic-mass-number (A) dependence. In contrast, the asymmetries observed at RHIC in p+A collisions showed a surprisingly strong A dependence in inclusive forward neutron production. The observed asymmetry in p+Al collisions is much smaller, while the asymmetry in p+Au collisions is a factor of three larger in absolute value and of opposite sign. The interplay of different neutron production mechanisms is discussed as a possible explanation of the observed A dependence.

Since a diffractive effect in the forward direction is in question, one can ask whether strong interactions have anything to do with the effect. The effect can take place at the level of nucleons and at the quark level, and these two effects should have different signs. Could electromagnetic spin-orbit coupling cause the effect both at the level of nucleons in p-N collisions and at the level of quarks in p-p collisions?

  1. The spin-orbit interaction is a relativistic effect: the magnetic field of the target nucleus in the reference frame of the projectile proton is non-vanishing: B = -γ v × E, γ = 1/(1-v^2)^{1/2}. The spin-orbit interaction Hamiltonian is

    H_{L-S} = -μ·B ,

    where

    μ = g_p μ_N S ,  μ_N = e/2m_p

    is the magnetic moment of the polarized proton, proportional to the spin S, which now has a definite direction due to the polarization of the incoming proton beam. For the proton the gyromagnetic factor equals g_p = 2.79284734462(82).

  2. Only the component of E orthogonal to v is involved, and the coordinates in this direction are unaffected by the Lorentz transformations. One can express the transversal component of the electric field as a gradient

    E_r = -∂_r V r/r .

    The velocity v can be expressed as v = p/m_p, so that the spin-orbit interaction Hamiltonian reads as

    H_{L-S} = γ g_p (e/2m_p) (1/m_p) L·S [∂_r V/r] .

    For a polarized proton this interaction could cause the left-right asymmetry. The reason is that the sign of the interaction Hamiltonian is opposite on the left and right sides of the target, since the sign of L = r × p is opposite on the left- and right-hand sides. One can argue, as in the non-relativistic case, that this potential generates a force which is radial and proportional to ∂_r[(∂_r V(r))/r].

Consider first the scattering on nucleus.
  1. Inside the target nucleus one can assume that the potential is of the form V = kr^2/2: the force vanishes! Hence the effect must indeed come from peripheral collisions. At the periphery, responsible for almost forward scattering, one has V(r) = Ze/r and ∂_r[(∂_r V(r))/r] = 3Ze/r^4 at r = R, R the nuclear radius. One has R = k A^{1/3} for a constant density nucleus, so that ∂_r[(∂_r V(r))/r] = 3 k^{-4} e Z A^{-4/3}.

    The force decreases with A roughly like A^{-1/3}, but the scattering proton can give its momentum to a larger number of nucleons inside the target nucleus. If all neutrons get their share of the transversal momentum, the effect is proportional to the neutron number N = A-Z, and one obtains the dependence Z(A-Z) A^{-4/3} ∼ A^{2/3}. If no other effects are involved, one has for the ratio r of the Al and Au asymmetries

    r = Al/Au ∼ [Z(Al) N(Al)/Z(Au) N(Au)] × [A(Au)/A(Al)]^{4/3} .

    Using (Z,A) = (13,27) for Al and (Z,A) = (79,197) for Au one obtains the prediction r = 0.28. The actual value, r ≈ 0.3 as estimated from Fig. 4 of the article, is not far from this (a short numerical check is given after this list).

  2. This effect takes place only for protons, but it deflects the proton on either side towards the interior of the nucleus. One expects that the proton gives its transversal momentum components to other nucleons - also neutrons. This implies that the sign of the effect is the same as it would be for the spin-orbit coupling when the projectile is a neutron. This could be the basic reason for the strange sign of the effect.
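The ratio quoted above can be checked with a few lines of Python (my own back-of-the-envelope check of the formula in the text, with N = A - Z for the neutron numbers):

    # r = [Z(Al) N(Al) / (Z(Au) N(Au))] * [A(Au)/A(Al)]**(4/3)
    Z_Al, A_Al = 13, 27
    Z_Au, A_Au = 79, 197
    N_Al, N_Au = A_Al - Z_Al, A_Au - Z_Au
    r = (Z_Al * N_Al) / (Z_Au * N_Au) * (A_Au / A_Al) ** (4 / 3)
    print(round(r, 2))    # 0.28, to be compared with the ~0.3 read off from Fig. 4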

Consider next what could happen in p-p scattering.
  1. One must explain why neutrons with R-L asymmetry with respect to the scattering axis are created. This requires quark level consideration.

  2. The first guess is that one must consider the spin-orbit interaction for the quarks of the polarized proton scattering from the quarks of the unpolarized proton. What comes to mind is that one could, in a reasonable approximation, treat the unpolarized proton as a single coherent entity. In this picture the u and d quarks of the polarized proton would have asymmetric diffractive scattering, tending to go to opposite sides of the scattering axis.

  3. The effect for d quarks would be opposite to that for u quarks. Since one has n=udd and p=uud, the side which has more d quarks gives rise to a neutron excess in the recombination of quarks to hadrons. This effect would have the opposite sign compared to the effect in the case of a nuclear target. This quark level effect would be present also for nuclear targets.

See the chapter New Particle Physics Predicted by TGD: Part II of "p-Adic Physics".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Saturday, January 06, 2018

Exciton-polariton Bose-Einstein condensate at room temperature and heff hierarchy

Ulla gave in my blog a link to a very interesting work about Bose-Einstein condensation of quasi-particles known as exciton-polaritons. The popular article tells about a research article published in Nature by IBM scientists.

Bose-Einstein condensation happens for exciton-polaritons at room temperature, which is four orders of magnitude higher than the corresponding temperature for crystals. This puts bells ringing. Could heff/h = n be involved?

One learns from Wikipedia that exciton-polaritons are quasiparticles in which photons couple to excitons: a photon kicks an electron to a higher energy state and an electron-hole pair, an exciton, is created. These quasiparticles would form a Bose-Einstein condensate with a large number of particles in the ground state. The critical temperature corresponds to the divergence of the occupation number given by Bose-Einstein statistics.

  1. The energy of the excitons must be of the order of the thermal energy at room temperature: IR photons are in question. The membrane potential happens to correspond to this energy. That the material is organic might be of relevance. Living matter involves various Bose-Einstein condensates and one can consider also excitons.

    As noticed, the critical temperature is surprisingly high. For crystal BECs it is of order 0.01 K. Now it is a factor of about 30,000 higher!

  2. Does a large value of heff = n×h make the critical temperature so high?

    Here I must look at Wikipedia for the BEC of quasiparticles. Unfortunately the formula for n^{1/3} is copied from the source and contains several errors: the dimensions are completely wrong.

    It should read n^{1/3} = ℏ^{-1} (m_eff k T_cr)^x , x = 1/2

    [not x = -1/2, and 1/ℏ rather than ℏ as in the Wikipedia formula. This is usual: it would be important to have Wikipedia contributors who understand at least something about what they are copying from various sources].

  3. The correct formula for the critical temperature T_cr reads as

    T_cr = (dn/dV)^y ℏ^2/m_eff , y = 2/3

    [T_cr replaces T_c and y = 2/3 replaces y = 2 of the Wikipedia formula. Note that in the Wikipedia formula dn/dV is denoted by n, which is now reserved for heff = n×h].


  4. In TGD one can generalize by replacing ℏ with ℏ_eff = n × ℏ so that one has

    T_cr → n^2 T_cr .

    The critical temperature would behave like n^2 and the high critical temperature (room temperature) could be understood. In crystals the critical temperature is very low, but in organic matter a large value of n ≈ 100 could change the situation. n ≈ 100 would scale up the atomic scale of 1 Angstrom, as the coherence length of valence electron orbitals, to the cell membrane thickness of about 10 nm. There would be one dark electron-hole pair per volume taken by a dark valence electron: this would look reasonable. A small numerical sketch of this scaling follows below.
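A minimal sketch of the scaling, using the corrected formula T_cr ∝ (dn/dV)^(2/3) ℏ_eff^2/m_eff with ℏ_eff = n × ℏ. Prefactors and absolute units are dropped, so only ratios are meaningful; the numbers below are my own illustration, not part of the original estimate.

    def Tcr_ratio(n_heff=1, density_ratio=1.0):
        """Critical temperature relative to a reference system with n = 1 and the
        same effective mass, using T_cr ~ (dn/dV)**(2/3) * hbar_eff**2 / m_eff."""
        return density_ratio ** (2 / 3) * n_heff ** 2

    # same exciton density, dark Planck constant with n ~ 100:
    print(Tcr_ratio(n_heff=100))   # 10000.0, lifting a ~0.01 K crystal scale to the ~100 K range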

One must consider also the conservative option n=1. T_cr is also proportional to (dn/dV)^2, where dn/dV is the density of excitons, and to the inverse of the effective mass m_eff. m_eff must be of the order of the electron mass, so that the density dn/dV is the critical parameter. In standard physics so high a critical temperature would require a large density dn/dV, about a factor 10^6 higher than in crystals.

Is this possible?

  1. The Fermi energy E_F is given by an almost identical formula but with a factor 1/2 appearing on the right hand side. Using the density dn_e/dV for electrons instead of dn/dV gives an upper bound T_cr ≤ 2E_F. E_F varies in the range 2-10 eV. The actual value of T_cr in crystals is of order 10^-6 eV, so that the density of quasiparticles must be very small for crystals: dn_cryst/dV ≈ 10^-9 dn_e/dV.

  2. For a crystal the size scale L_cryst of the volume taken by a quasiparticle would be 10^3 times larger than that taken by an electron, which varies in the range 10^{1/3}-10^{2/3} Angstrom, giving the range (220-460) nm for L_cryst.

  3. On the other hand, the thickness of the plastic layer is L_layer = 35 nm, roughly 10 times smaller than L_cryst. One can argue that L_plast ≈ L_layer is a natural order of magnitude for the size scale taken by a quasiparticle in the plastic layer. If so, the density of quasiparticles is roughly 10^3 times higher than for crystals. The (dn/dV)^2-proportionality of T_cr would give the factor T_cr,plast ≈ 10^6 T_cr,cryst, so that there would be no need for a non-standard value of heff!

    But is the assumption L_plast ≈ L_layer really justified in the standard physics framework? Why would this be the case? What would make the dirty plastic different from a super pure crystal?

The question which option is correct remains open: a conservative would of course argue that the no-new-physics option is correct, and might be right.

For background see the chapter Criticality and dark matter of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, January 04, 2018

What could idiot savants teach to us about Natural Intelligence?

Recently a humanoid robot known as Sophia has gained a lot of attention in the net (see the article by Ben Goertzel, Eddie Monroe, Julia Moss, David Hanson and Gino Yu titled "Loving AI: Humanoid Robots as Agents of Human Consciousness Expansion (summary of early research progress)").

This led me to ask questions about the distinctions between Natural and Artificial Intelligence and about how to model Natural Intelligence. One might think that idiot savants could help in answering this kind of questions, and so it indeed turned out to be!

Mathematical genii and idiot savants seem to have something in common

It is hard to understand the miraculous arithmetical abilities of both some mathematical genii and idiot savants completely lacking conceptual thinking and conscious information processing based on algorithms. I have discussed these number theoretical feats here.

Not all individuals capable of memory and arithmetic feats are idiot savants. These mathematical feats are not those of an idiot savant and involve high level mathematical conceptualization. How the Indian self-taught number-theoretical genius Ramanujan discovered his formulas remains a mystery suggesting a totally different kind of information processing. Ramanujan himself told that he got his formulas from his personal God.

Ramanujan's feats lose some of their mystery if higher level selves are involved. I have considered a possible explanation based on ZEO, which allows one to consider the possibility that quantum computation type processing could be carried out in both time directions alternately. The mental image representing the computation would experience several deaths followed by re-incarnations with the opposite direction of clock time (the time direction in which the size of CD increases). A process requiring a very long time in the usual positive energy ontology would take only a short time when measured as the total shift for the tip of either boundary of CD - the duration of the computations at the opposite boundary would be much longer!

Sacks tells about idiot savant twins with an intelligence quotient of 60 having amazing numerical abilities despite the fact that they could not understand even the simplest mathematical concepts. For instance, the twins "saw" that the number of matches scattered on the floor was 111 and also "saw" the decomposition of integers to factors and their primality. A mechanism explaining this based on the formation of wholes by quantum entanglement is proposed here. The model does not however involve any details.

Flux tube networks as basic structures

One can build a more detailed model for what the twins did by assuming that the information processing is based on 2-dimensional discrete structures formed by neurons (one can also consider 3-D structures consisting of 2-D layers, and the cortex indeed has this kind of cylindrical structures consisting of 6 layers). For simplicity one can assume a large enough plane region forming a square lattice and defined by a neuron layer in the brain. The information processing should involve a minimal amount of linguistic features.

  1. A natural geometric representation of a number N is as a set of active points (neurons) of a 2-D lattice. A neuron is active if it is connected by a flux tube to at least one other neuron. The connection is formed/strengthened by nerve pulse activity creating small neurotransmitter-induced bridges between neurons. Quite generally, information molecules would serve the same purpose (see this and this).

    Active neurons would form a collection of connected sets of the plane region in question. Any set of this kind with a given number N of active neurons would give an equivalent representation of the number N. At the quantum level the N neurons could form a union of K connected sub-networks consisting of Nk neurons with ∑ Nk = N.

  2. There is a large number of representations distinguished by the detailed topology of the network and a particular union of sub-networks would carry much more information than the mere numbers Nk and N code. Even telling, which neurons are active (Boolean information) is only part of the story.

    The subsets of Nk points would have large number of representations since the shape of these objects could vary. A natural interpretation would be in terms of objects of a picture. This kind of representation would naturally result in terms of virtual sensory input from brain to retina and possibly also other sensory organs and lead to a decomposition of the perceptive field to objects.

    The representation would thus contain both geometric information - interpretation as image - and number theoretic information provided by the decomposition. The K subsets would correspond to one particular element of a partition algebra generalizing Boolean algebra for which one has partition to set and its complement (see this).

  3. The number N provides the minimum amount of information about the situation and can be regarded as a representation of number. One can imagine two extremes for the representations of N.

    1. The first extreme corresponds to K linear structures. This would correspond to linear linguistic representation mode characteristic for information processing used in classical computers. One could consider interpretation as K words of language providing names for say objects of an image. The extreme is just one linear structure representing single word. Cognition could use this kind of representations.

    2. The second extreme corresponds to a single square lattice like structure with each neuron connected to, say, the 4 nearest neighbors. This lattice has one incomplete layer: a string with some neurons missing. This kind of representation would be optimal for the representation of images representing a single object.

      For N active neurons one can consider a representation as a pile of linear strings containing p^k neurons each, where p is prime. If N is divisible by p^k, N = M p^k, one obtains an M × p^k lattice. If not, one has an M × p^k lattice connected to a subset of neurons along an additional string with p^k neurons. One would have a representation of the notion of divisibility by a given power of prime as a rectangle (a small sketch of this rectangle representation is given below)! If N is prime this representation does not exist!
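A minimal sketch of this rectangle representation (my own illustration): N active nodes piled into rows of length p^k close into a complete M × p^k rectangle exactly when p^k divides N.

    def rectangle(N, p, k=1):
        """Pile N nodes into rows of length p**k; the rectangle is complete
        exactly when the remainder vanishes, i.e. when p**k divides N."""
        rows, remainder = divmod(N, p ** k)
        return rows, remainder

    print(rectangle(111, 3))   # (37, 0): 111 = 37 x 3, a complete rectangle
    print(rectangle(111, 7))   # (15, 6): incomplete top row, 7 does not divide 111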


Flux tube dynamics

The classical topological dynamics for the flux tube system induced by nerve pulse activity building temporary bridges between neurons would allow phase transitions changing the number of sub-networks, the numbers of neurons in them, and the topology of individual networks. This topological dynamics would generalize Boolean dynamics of computer programs.

  1. Flux tube networks as sets of all active neurons can also be identified as elements of the Boolean algebra defined by the subsets of a discretized planar or even 3-D region (layer of neurons). This would allow one to project flux tube networks and their dynamics to Boolean algebra and its dynamics. In this projection the topology of the flux tube network does not matter much: it is enough that each neuron is connected to some neuron (bit 1). One might therefore think of a (highly non-unique) lifting of computer programs to nerve pulse patterns activating the corresponding subsets of neurons. If the dynamics of the flux tube network determined by the space-time dynamics is consistent with the Boolean projection, the topological flux tube dynamics induced by the space-time dynamics would define a computer program.

  2. At the next step one could take into account the number of connected sub-networks: this suggests a generalization of Boolean algebra to partition algebras, so that one does not consider only a subset and its complement but a decomposition into n subsets, which one can think of as having different colors (see this). This leads to a generalization of Boolean (2-adic) logic to p-adic logic, and to a possible generalization of computer programs as Boolean dynamical evolutions (a crude counting sketch is given after this list).

  3. At the third step also the detailed topology of each connected sub-network is taken into account and brings in further structure. Even higher-dimensional structures could be represented in discretized form by allowing the representation of higher-dimensional simplexes as connected sub-networks. Here many-sheeted space-time suggests a possible manner to add artificial dimensions.
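As a crude counting illustration of this step (my own, and only about counting states, not about the actual partition algebra structure): marking each of N nodes as active or inactive gives the 2^N Boolean states, while allowing p colors, one per sub-network label, gives p^N states, with the Boolean case recovered for p = 2.

    # Boolean (p = 2) versus p-colored state counting for N nodes
    N = 4
    for p in (2, 3, 5):
        print(p, p ** N)   # 2 16, 3 81, 5 625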

This dynamics would also allow one to realize basic arithmetics. In the case of summation the initial state of the network would be a collection of K disjoint networks with Nk elements, and the final state a single connected set with N = ∑ Nk elements. The simplest representation is as a pile of K strings with Nk elements. The product M × N could be reduced to a sum of M sets with N elements: this could be represented as a pile of M linear strings.

Number theoretical feats of twins and flux tube dynamics

Flux tube dynamics suggests a mechanism for how the twins managed to see the number of the matches scattered on the floor and also how they managed to see the decomposition of a number into primes or prime powers. Sacks indeed tells that the eyes of the twins were rolling wildly during their feats. What is required is that the visual perception of the matches on the floor was subject to a dynamics allowing deformations of the topology of the associated network. Suppose that some preferred network topology or topologies allowed the twins to recognize the number of matches and to tell it using language (therefore also linear language is involved). The natural assumption is that the favored network topology is connected.

The two extremes in which the network is connected are favored modes for this representation.

  1. Option I corresponds to any linear string giving a linguistic representation as the number of neurons (which would be activated by seeing the matches scattered on the floor). A large number of equivalent representations is possible. This representation might be optimal for associating to N its name. The verbal expression of the name could be a completely automatic association without any conceptual content. The different representations also carry geometric information about the shape of the string: a melody in music could be this kind of curve, whereas the words of speech would be represented by straight lines.

  2. Option II corresponds to a maximally connected lattice like structure formed as a pile of strings with p^k neurons for a given prime p: N = M1 × p^k + M2, 0 ≤ M2 < p^k. The highest string in the pile misses some neurons. This representation would be maximally connected. It contains more information than just the value of N.

Option II provides also number theoretical information allowing a model for the feats of the twins.
  1. As far as checking the primeness of N is considered, one can assume k=1. For the primes pi dividing N one would find a representation of N as a rectangle. If N is prime, one finds no rectangles of this kind (or finds only the degenerate 1 × p rectangle). This serves as a geometric signature of primeness. The twins would have tried to find all piles of strings with p neurons, p=2,3,5,... A slower procedure checks for divisibility by n=2,3,4,...

  2. The decomposition into prime factors would proceed in a similar manner by starting from p=2 and proceeding to larger primes p=3,5,7,.... When a prime factor pi is found, only a single vertical string from the pile is taken, and the process is repeated for this string considering only primes p ≥ pi. The process would have been completely visual and would not involve any verbal thinking.
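In ordinary arithmetic terms the procedure described above amounts to trial division; here is a minimal sketch of my own rendering of it. A candidate row length that closes the rectangle (divides N) is recorded as a prime factor, one vertical string is kept, and the search continues upwards from the same candidate so that repeated factors are also caught.

    def factor_by_rectangles(N):
        """Find prime factors by looking for complete rectangles: try row
        lengths p = 2, 3, ...; when a rectangle closes (p divides N), keep
        one vertical string (N // p) and continue from the same p upwards."""
        factors, p = [], 2
        while N > 1:
            if N % p == 0:
                factors.append(p)
                N //= p
            else:
                p += 1
        return factors

    print(factor_by_rectangles(111))   # [3, 37]
    print(factor_by_rectangles(37))    # [37]: prime, only the degenerate 1 x 37 pile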

For the storage of memories the 2-D (or possibly 3-D) representation is non-economical, and the use of a 1-D representation replacing images with their names is much more economical. For information processing, such as the decomposition into primes, the 2-D or even 3-D representations are much more powerful.

See the article Artificial Intelligence, Natural Intelligence, and TGD or the chapter of "TGD based view about living matter and remote mental interactions" with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.


Artificial Intelligence, Natural Intelligence, and TGD

Recently a humanoid robot known as Sophia has gained a lot of attention in the net (see the article by Ben Goertzel, Eddie Monroe, Julia Moss, David Hanson and Gino Yu titled "Loving AI: Humanoid Robots as Agents of Human Consciousness Expansion (summary of early research progress)").

Sophia uses AI, visual data processing, and facial recognition. Sophia imitates human gestures and facial expressions and is able to answer questions and make simple conversations on predefined topics. The AI program used analyzes conversations, extracts data, and uses it to improve responses in the future. To a skeptic Sophia looks like a highly advanced version of ELIZA.

Personally I have a rather skeptical view about strong AI relying on a mechanistic view about intelligence. This view leads to transhumanism and notions such as mind uploading. It is however good to air out one's thinking sometimes.

Computers should have a description also in the quantal Universe of TGD, and this forces one to look more precisely at the idealizations of AI. This process led to a change of my attitudes. The fusion of human consciousness and the presumably rather primitive computer consciousness, correlating however with the program running in the computer, might be possible in the TGD Universe, and TGD inspired quantum biology and the recent ideas about prebiotic systems provide rather concrete ideas in attempts to realize this fusion.

TGD also strongly suggests that there is what might be called Natural Intelligence relying on 2-D cognitive representations defined by networks consisting of nodes (neurons) and flux tubes (axons with nerve pulse patterns) connecting them, rather than on the linear 1-D representations used by AI. The topological dynamics of these networks has the Boolean dynamics of computer programs as a projection but is much more general and could allow one to represent the objects of the perceptive field and number theoretic cognition.

See the article Artificial Intelligence, Natural Intelligence, and TGD or the chapter of "TGD based view about living matter and remote mental interactions" with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.