Tuesday, January 30, 2018

Maxwell's demon from TGD viewpoint

On Facebook I received a link to an interesting popular Science News article titled A New Information Engine is Pushing the Boundaries of Thermodynamics. The article described progress in generalizing the conventional second law of thermodynamics to take information into account as an additional parameter.

The Carnot engine is the standard practical application. One has two systems A and B, each in thermal equilibrium but at different temperatures TA and TB ≥ TA. By the second law heat Q flows from the hotter system B to the colder system A, and Carnot's engine transforms some of this heat to work. Carnot's law gives an upper bound for the efficiency of the engine: η = W/Q ≤ (TB-TA)/TB. The possibility to transform information to work forces one to generalize Carnot's law.
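As a quick numerical illustration of the bound (my own sketch, not from the article), the Carnot efficiency can be computed for any pair of reservoir temperatures:

```python
def carnot_efficiency(t_cold, t_hot):
    """Upper bound eta = W/Q <= 1 - t_cold/t_hot for a heat engine
    working between reservoirs at absolute temperatures t_cold < t_hot."""
    if not 0 < t_cold < t_hot:
        raise ValueError("require 0 < t_cold < t_hot (in Kelvin)")
    return 1.0 - t_cold / t_hot

# Engine between room temperature (293 K) and boiling water (373 K):
print(carnot_efficiency(293.0, 373.0))  # about 0.214
```

Even for this respectable temperature difference only about a fifth of the heat can be turned into work.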

Since information is basically conscious information, this generalization is highly interesting from the point of view of quantum theories of consciousness and quantum biology. The generalization is certainly highly non-trivial, especially in the standard physics framework, where only entropy is defined at the fundamental level: it is regarded as ensemble entropy and has very little to do with conscious information. Therefore the argumentation is a kind of artwork.

1. Maxwell's demon in its original form


Maxwell's demon appears in a thought experiment in which one considers a system consisting of two volumes A and B of gas in thermal equilibrium at the same temperature. At the boundary between A and B, which has a small hole, sits a demon checking whether a molecule coming from A has velocity above some threshold: if so, it allows the molecule to go to B. The demon also monitors the molecules coming from B, and if the velocity is below the threshold it allows the molecule to continue to A. As a consequence, temperature and pressure differences develop between A and B. The pressure difference can do work, much like the voltage between the cathode and anode of a battery. One can indeed add a tube, analogous to a wire, between the ends of the entire system, and the pressure difference causes a flow of mass doing work: one has a pump.
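The sorting step of the thought experiment is easy to simulate. The following toy Monte Carlo (my own illustration; speeds are drawn from a simple |Gaussian| model rather than a true Maxwell-Boltzmann distribution) shows how the demon's velocity filter makes B hotter than A:

```python
import random

random.seed(0)

# Molecular speeds in the two volumes, drawn from the same distribution.
A = [abs(random.gauss(0.0, 1.0)) for _ in range(5000)]
B = [abs(random.gauss(0.0, 1.0)) for _ in range(5000)]
threshold = 1.0

# The demon's filter: fast molecules may pass A -> B, slow ones B -> A.
fast_from_A = [v for v in A if v > threshold]
slow_from_B = [v for v in B if v < threshold]
A = [v for v in A if v <= threshold] + slow_from_B
B = [v for v in B if v >= threshold] + fast_from_A

# Mean squared speed is proportional to temperature.
mean_kinetic = lambda vs: sum(v * v for v in vs) / len(vs)
print(mean_kinetic(A) < mean_kinetic(B))  # True: B has become "hotter" than A
```

The temperature difference appears without any work being done on the gas, which is exactly why the demon's measurement and memory must be brought into the bookkeeping.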

The result is in conflict with the second law and one can ask what goes wrong. From the Wikipedia article one learns that a lot of arguments have been presented pro and con Maxwell's demon. A biologist might answer immediately. The demon must measure the states of molecules, and this requires cognition and memory, which in turn require metabolic energy. When this is taken into account, the paradox should disappear and the second law should remain true in a generalized form taking into account the needed metabolic energy.

2. Experimental realization of Maxwell's demon

The popular article describes an experiment actualizing Maxwell's demon carried out by Govind Paneru, Dong Yun Lee, Tsvi Tlusty, and Hyuk Kyu Pak. Below is the abstract of their article Lossless Brownian Information Engine published in Phys. Rev. Letters (see this).

We report on a lossless information engine that converts nearly all available information from an error-free feedback protocol into mechanical work. Combining high-precision detection at a resolution of 1 nm with ultrafast feedback control, the engine is tuned to extract the maximum work from information on the position of a Brownian particle. We show that the work produced by the engine achieves a bound set by a generalized second law of thermodynamics, demonstrating for the first time the sharpness of this bound. We validate a generalized Jarzynski equality for error-free feedback-controlled information engines.

Unfortunately, the article is behind a paywall and I failed to find it in arXiv. The popular article uses notions like "particle trapped by light at room temperature" and photodiode as "light trap" without really defining what these expressions mean. For instance, it is said that the light trap would follow particles moving in a definite direction (from A to B in Maxwell's thought experiment). I must admit that I am not at all sure what the precise meaning of this statement is.

3. TGD view about the situation

TGD inspired theory of consciousness can be regarded as a quantum measurement theory based on zero energy ontology (ZEO), and it is interesting to try to analyze the experiment in this conceptual framework.

3.1 TGD view about the experiment

The natural quantum interpretation is that the photodiode following the particle is performing repeated quantum measurements, which in standard quantum theory do not affect the state of the particle after the first measurement. From the viewpoint of TGD inspired consciousness, which can be regarded as a generalization of quantum measurement theory forced by zero energy ontology (ZEO), the situation could be as follows.

  1. A photodiode following the particle would be like a conscious entity directing its attention to the particle and keeping it in focus. In the TGD Universe directed attention has as classical space-time correlates flux tubes connecting the attendee and the target of attention: in ER-EPR correspondence the flux tubes are replaced with wormholes, which fit better the GRT based framework. Flux tubes also make possible entanglement between the attendee and the target. The two systems become a single system during the period of attention, and one could say that the attention separates the particle from the rest.

  2. Directed attention costs metabolic energy. The same would be true also now - the photodiode indeed requires energy feed. Directed attention creates a mental image: the conscious entity associated with the mental image can be regarded as a generalized Zeno effect or as a sequence of weak measurements.

    Tracking would thus mean that particle's momentum is measured repeatedly so that the particle is forced to continue with the same momentum. Gradually this would affect the thermal distribution and generate temperature and pressure gradients. Directed attention could be also seen as a mechanism of volition in quantum biology.

  3. This looks nice, but one can ask what about the collisions of the particle with the other molecules of the gas: don't they interfere with the Zeno effect? If the period between repeated measurements is shorter than the average time between the collisions of particles, this is not a problem. But is there any effect in this case? The directed attention or a sequence of quantum measurements could separate the particle from the environment by de-entangling it from the environment. Could it be that collisions would not occur during this period, so that attendee and target would form a subsystem de-entangled from the rest of the world?

3.2 ZEO variant of Maxwell's demon

Zero energy ontology (ZEO) forces one to consider a different arrangement producing energy somewhat like a perpetuum mobile but not breaking the conservation of energy in any obvious manner. The idea pops into my mind occasionally and I reject it every time, and will do so again.

  1. Zero energy states (ZESs) are like physical events: pairs of positive and negative energy states with energies E and -E: this codes for energy conservation.

  2. One can have a quantum superposition of ZESs with different values of energy E and with average value <E> of energy. In state function reduction <E> can change, and in principle this does not break conservation of energy since one still has a superposition of pairs with energies E and -E.

  3. For instance, the probabilities for states with energy E could be given by a thermal distribution parameterized by temperature parameter T: one would have the "square root" of a thermodynamic distribution for energies. The "square root" of thermodynamics is indeed forced by ZEO. One would have essentially entanglement in the time direction. Single particle states would realize the square root of a thermodynamical ensemble, which would not be a fictive notion anymore.

    The coefficients for the state pairs would also have phases, and these phases would bring in something new and very probably very important in living matter. A system characterized by temperature T would not be as uninteresting as we think: there could be hidden phase information.
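A minimal numerical sketch of the "square root" of thermodynamics (my own toy model, with all phases set to zero) represents the ZES superposition by amplitudes whose squared moduli form a Boltzmann distribution; <E> then grows with the temperature parameter T:

```python
import math

def thermal_amplitudes(energies, T):
    """Coefficients of the ZES superposition: |c_E|^2 = exp(-E/T)/Z is the
    Boltzmann distribution; the phases (here set to zero) are the extra
    degree of freedom mentioned in the text."""
    weights = [math.exp(-E / T) for E in energies]
    Z = sum(weights)
    return [math.sqrt(w / Z) for w in weights]

def mean_energy(energies, T):
    return sum(E * c * c for E, c in zip(energies, thermal_amplitudes(energies, T)))

levels = [0.0, 1.0, 2.0, 3.0]
print(mean_energy(levels, 0.5) < mean_energy(levels, 2.0))  # True: <E> grows with T
```

The point of the sketch is that a reduction changing the effective T changes <E> even though each pair in the superposition has vanishing total energy.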

If T increases in a state function reduction, then <E> increases. Reduction could also measure the value of E. Could the system increase its <E> in state function reductions? My proposal for an answer is "No".

In ordinary thermodynamics energy should be fed from the environment to increase <E>: how would the environment enter into the game now?

  1. State function reduction always reduces the entanglement of the system S with the environment, call it Senv. Could the increase of <E> be compensated by a compensating change -<E> in Senv? Indeed, the conservation of energy for a single state is expected to have a statistical counterpart: energy would come from the environment as a kind of metabolic energy. Therefore also the "square root" of thermodynamics would prevent a perpetuum mobile.

  2. This would be the case if the reduction measures the energy of the entire system Stot = S + Senv - so that Stot is always in an energy eigenstate with eigenvalue Etot, and Etot does not change in reductions or in the unitary evolutions between them. Can one pose this condition?

3.3 Time reversal and apparent breaking of second law in zero energy ontology (ZEO)

ZEO based theory of consciousness (see this) forces to consider also a genuine breaking of the second law.

  1. In ZEO self as a conscious entity corresponds to a generalized Zeno effect or, equivalently, to a sequence of analogs of weak measurements as "small" state function reductions. The state at the passive boundary of the causal diamond (CD) is unaffected, as are the members of state pairs at it.

    The second boundary of the CD (active boundary) shifts farther away from the passive one and the members of state pairs at it change, giving rise to the conscious experience of self. Clock time, identified as the temporal distance between the tips of the CD, increases. This gives rise to the correspondence between clock time and subjective time identified as the sequence of weak reductions.

  2. Also "large" state function reductions are possible and even unavoidable. The roles of the active and passive boundary are changed and time reversal occurs for the clock time. One can say that the self dies and re-incarnates as a time-reversed self.

    At the next re-incarnation a self with the original arrow of clock time would be reborn and continue life from a time value shifted towards the future from the moment of death: its identity as a physical system could however be very different. One can of course wonder whether sleep could mean a life in the opposite direction of clock time and wake-up a reincarnation in the usual sense.

    The time-reversed self need not have conscious memories about its former life cycle: only the collections of un-entangled subsystems at passive boundary carry information about this period. A continuation of conscious experience could however take place in different sense: the contents of consciousness associated with the magnetic body of self could survive the death as near-death-experiences indeed suggest.

  3. The time reversed system obeys the second law but with the time direction opposite to the normal one. Already the Italian physicist Fantappiè proposed that this occurs routinely in living matter, and he christened the entropy for time reversed systems syntropy. Processes like the spontaneous assembly of complex molecules from their building bricks could be controlled by time reversed selves.

    In TGD inspired biology motor actions could be seen as the generation of a signal propagating backwards in time and defining a sub-system with a reversed arrow of time, inducing the activity preceding motor activity before the conscious decision leading to it is made: this with respect to geometric time. There are many effects supporting the occurrence of these time reversals.

  4. How does the possibility of time reversals relate to the second law? One might argue that the second law emerges from the non-determinism of state function reduction alone. The second law would transform to its temporal mirror image when one looks at the system from outside with an unchanged arrow of clock time.

    But does the second law continue to hold in a statistical sense as one takes an average over several incarnations? One might think that this is the case, since the generalized Zeno effect generalizes the ordinary Zeno effect, and at the limit of positive energy ontology one would effectively have a sequence of ordinary state function reductions leading to the second law.

3.4 Negentropy Maximization Principle (NMP)

TGD also predicts what I call Negentropy Maximization Principle (NMP).

  1. Entanglement coefficients belong to extension of rationals allowing interpretation as both real and p-adic numbers in the extension of p-adics induced by the extension of rationals defining the adele.

    One can assign ordinary entanglement entropy to the real sector of the adele and entanglement negentropy to the p-adic sectors of adelic physics: for the latter the analog of ordinary Shannon entropy is negative and thus the interpretation as conscious information is possible. The information is assigned to the pairing defined by entanglement, whereas the entropy is associated with the loss of precise knowledge about the state of a particle in an entangled state.

  2. One can also consider the difference of sum of p-adic entanglement negentropies and real entanglement entropy as the negentropy. This quantity can be positive for algebraic extensions of rationals and its maximal value increases with the complexity of the extension and with p-adic prime.

    Also the information defined in this manner would increase during evolution, assignable to the gradual increase of the dimension of the algebraic extension of rationals, which can take place in "large" state function reductions (re-incarnations of self): if the eigenvalues of the density matrix are algebraic numbers in an extension of the extension of rationals, the "large" state function reduction must take place.

  3. NMP would hold true in a statistical sense - being mathematically very much analogous to the second law - and would relate to evolution. In particular, one can understand why the emergence of intelligent systems is - rather paradoxically - accompanied by the generation of entropy. To have large entanglement negentropy in the p-adic sectors one must have large entanglement entropy in the real sector, since the same entanglement defines both.
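The interplay between real entropy and p-adic negentropy can be made concrete with a toy computation (my own sketch, assuming the standard p-adic norm and the Shannon formula with log |q|_p in place of log q): for maximally entangled probabilities 1/4 the real entropy is log 4 while the 2-adic entropy is -log 4, so the very same entanglement is entropic in the real sector and negentropic in a p-adic sector.

```python
from fractions import Fraction
from math import log

def p_adic_norm(x, p):
    """p-adic norm of a nonzero rational: |x|_p = p**(-k) when
    x = p**k * a/b with a and b prime to p."""
    num, den, k = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        k += 1
    while den % p == 0:
        den //= p
        k -= 1
    return Fraction(p) ** (-k)

def real_entropy(probs):
    """Ordinary Shannon entropy."""
    return -sum(q * log(q) for q in probs)

def p_adic_entropy(probs, p):
    """Shannon formula with log q replaced by log |q|_p."""
    return -sum(q * log(p_adic_norm(q, p)) for q in probs)

# Maximally entangled state of two 4-dimensional systems: probabilities 1/4.
probs = [Fraction(1, 4)] * 4
print(real_entropy(probs))       # log 4 > 0: entropic in the real sector
print(p_adic_entropy(probs, 2))  # -log 4 < 0: negentropic in the 2-adic sector
```

Note that the p-adic formula is well defined only when the probabilities are rational (or belong to an extension of rationals), which is exactly the adelic setting assumed above.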

3.5 Dark matter as phases of matter labelled by the hierarchy of Planck constants

The hierarchy of Planck constants heff/h=n is a further key notion in TGD inspired quantum biology.

  1. The hierarchy of Planck constants heff/h=n implied by adelic physics as the physics of both sensory experience (real numbers) and cognition (p-adic number fields) is a basic prediction of TGD (see this). Planck constant characterizes the dimension of the algebraic extension of rationals characterizing the cognitive representations, and is bound to increase since the number of extensions with dimension larger than a given dimension is infinite whereas the number of those with smaller dimension is finite.

  2. The ability to generate negentropy increases during evolution. The system need not however generate negentropy and can even reduce it. In a statistical sense negentropic resources however increase: things get better in the long run. In biology metabolic energy feed brings to the system molecules having valence bonds with heff/h=n larger than that for atoms (see this), and this increases the ability of the system to generate negentropy; in a statistical sense this leads to the increase of negentropy.

See the article Maxwell's demon from TGD viewpoint.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Saturday, January 27, 2018

Emotions as sensory percepts about the state of magnetic body?

What are emotions? How are emotions created? How are they represented: in the brain, in the body, or perhaps somewhere else? One can consider these questions from the point of view of neuroscience, endocrinology, and quantum physics. Emotions can be divided into lower level emotions accompanied by intention/need/desire (hunger is accompanied by the need to eat), which distinguishes them from sensory qualia, and higher level emotions like catharsis and the experience of beauty not accompanied by any desire. What does this division correspond to?

  1. The TGD inspired answer to the questions is that emotions are sensory percepts about the state of the magnetic body (MB). The sensory-motor loop generalizes: various glands secreting hormones into the blood stream, where they bind to receptors, give rise to the analog of motor output.

  2. Consider first the neuronal level. Neural transmitters binding to receptors serve as bridges allowing one to build connected networks of neurons from existing building bricks. They are accompanied by flux tube networks giving rise to tensor networks as quantum coherent entangled structures (tensor nets) serving as correlates of mental images and allowing classical signalling with light velocity using dark photons. These tensor networks represent our mental images only if they correspond to our sub-selves (see this).

    In a similar manner hormones give rise to networks of ordinary cells, implying in particular that emotional memories are realized in the (biological) body (BB). The nervous system gives information about the state of these networks to the brain, and the hypothalamus serves as the analog of the motor cortex, sending hormones controlling the excretion of hormones by lower level glands.

  3. The hierarchy of Planck constants defines a hierarchy of dark matters and heff=n×h defines a kind of IQ. The levels of MB corresponding to large/small values of n would correspond to higher/lower emotions.

MB decomposes into two basic parts: the part in the scale of BB, formed by networks having cells and larger structures as nodes (forming a fractal hierarchy), and the part in scales larger than BB.
  1. In the scale of BB the flux tube networks are highly dynamical: information molecules such as transmitters and hormones serve as bridges connecting existing pieces of network, so that the topology of the network changes easily.

  2. In the scales larger than that of BB (long scales) changing the topology is not easy, and the dynamics involves oscillations of MB - analogs of Alfvén waves - and analogs of ordinary motor actions changing the shape of flux tubes but leaving their topology unaffected (these actions might represent or serve as templates for ordinary motor actions in body scale, see this).

    Alfvén waves with cyclotron frequencies and generalized Josephson frequencies assignable to the cell membrane as a Josephson junction would be involved (see this). The size scale of a particular onion-like layer of MB corresponds to the wavelength scale for cyclotron frequencies and is proportional to heff/h=n for dark photons. For instance, the alpha band in EEG corresponds to the scale of Earth but the energy scale of the dark photons is that of bio-photons.

    The TGD inspired model of music harmony (see this) gives as a side product a model of genetic code predicting correctly the numbers of codons coding for amino-acids for the vertebrate code. The model allows one to see sensory percepts about the dynamics in large scales as an analog of music experience. The notes of the 3-chords of the harmony correspond to dark photons with frequencies defining the notes of the chord: cyclotron radiation and generalized Josephson radiation from the cell membrane would represent examples of dark light. Music expresses and creates emotions, and music harmonies would correspond to various emotional states/moods realized at the level of DNA and its dark counterpart (dark nuclei represented as dark proton sequences). MB would be like a musical instrument with flux tubes serving as strings. It is difficult to assign any specific desire to large scale sensory percepts about MB, and the interpretation as higher emotions - or rather feelings - makes sense.
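The statement that the EEG alpha band corresponds to the scale of Earth can be checked with a back-of-the-envelope estimate (my own check; the 2 eV bio-photon energy used below is a nominal assumed value):

```python
c = 2.998e8      # speed of light, m/s
f_alpha = 10.0   # EEG alpha band frequency, Hz
wavelength = c / f_alpha
earth_circumference = 4.0e7  # m, rough value
print(wavelength / earth_circumference)  # about 0.75: same order as Earth's size

# Value of n = heff/h needed for a 10 Hz dark photon to carry a nominal
# 2 eV bio-photon energy, assuming E = n*h*f:
h = 6.626e-34    # Planck constant, J s
eV = 1.602e-19   # J
n_needed = 2.0 * eV / (h * f_alpha)
print(n_needed)  # roughly 5e13
```

So a 10 Hz wave indeed has a wavelength of the order of the Earth's circumference, and the required n is huge, of the order 10^13-10^14.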

See the article Emotions as sensory percepts about the state of magnetic body? or the chapter with the same title.


Friday, January 26, 2018

Getting memories by eating those who already have them

While writing an article about emotions as sensory percepts about the state of the magnetic body I learned about extremely interesting findings. I have already earlier written about the finding that both pieces of a split planaria have the memories (identified as learned skills or conditionings) of the original planaria (see this). The news this time was that planaria get the memories of planaria that they have eaten!

To begin with, one must carefully distinguish between genuine memories and memories as behavioral patterns (conditionings, skills).

  1. Memories as behavioral patterns are assumed to be due to the strengthening of synaptic contacts (long term potentiation, LTP) giving rise to nerve circuits, which are active or easily activated. In TGD framework activation means the formation of a flux tube network giving rise to a quantum entangled state with neurons at the nodes: neural activity generates transmitters serving as bridges between flux tubes associated with axons and creates a flux tube network carrying a conscious mental image. A quantum coherent entangled tensor network is formed and also classical communications using dark photons are possible in this state. These neurons fire synchronously. Nerve pulses would not be signals between neurons but would induce communications to the magnetic body in scales even larger than body.

  2. Genuine memories - say episodic memories - would in TGD (zero energy ontology, ZEO) correspond to neural activities in the geometric past: a kind of seeing in the time direction. These are typically verbal memories, but also sensory memories are possible and can be induced by electric stimulation of the brain.

Consider now the experiments discussed in the popular article Somewhere in the brain is a storage device for memories. They all relate to the identification of memory as a behavioral pattern induced by conditioning and are therefore emotional memories.
  1. In one experiment sea slugs learned to avoid a painful stimulus. This led to a generation of synaptic contacts between neurons involving increased synaptic strength - long term potentiation (LTP). Then a drug was used to destroy the LTP. The problem was that the lost contacts were not those formed when the memory was formed!

  2. In the second experiment mice were used. A conditioned fear (LTP) was induced in mice and again the generation of synaptic contacts was observed. Then the contacts - the long term potentiation - were destroyed completely. The memories as conditioned fear however remained!

It was an amusing accident to learn about this just when I was building a model for emotions as sensory percepts about the state of the magnetic body (MB), fundamental in TGD inspired quantum biology.
  1. MB consists of a part formed from highly dynamical flux tube tensor networks having cells and also other structures with other size scales (fractality) as nodes. MB also has a part outside the body involving rather large values of heff=n×h and having a higher cognitive IQ. The corresponding emotions would be higher level emotions (like the experience of beauty), whereas bodily emotions are primitive and involve a positive/negative coloring inducing a desire to preserve/change the situation, in turn inducing an emotional counterpart of motor activity as excretion of hormones from the emotional brain, with the hypothalamus in the role of the highest motor areas and lower glands (both in brain and in body) in the role of lower motor areas.

  2. In the recent case the memories are definitely emotional memories and in TGD framework they would be naturally at the level of body and generated as mental images associated with large numbers of ordinary cells appearing as nodes of quantum entangled flux tube networks giving rise to tensor networks (see this). Hormones would be the tool to modify and generate these networks.

  3. Emotional memories would be represented by the conditioning and the analog of LTP at the level of the body rather than at the level of the brain! Hormones, like other information molecules, would act as relays connecting existing pieces of network into larger ones! Neural activity would be involved only with the generation of memories and would induce the hypothalamus to generate the fear network using the hormones controlling the hormonal activities of lower level glands.

  4. The model could also explain the finding that in the splitting of a flatworm both new flatworms inherit the memories, and that even non-trained flatworms eating trained flatworms get their memories (defined as behavioral patterns involving emotional conditioning).

See the article Emotions as sensory percepts about the state of magnetic body? or the chapter with the same title.


Saturday, January 20, 2018

About the Correspondence of Dark Nuclear Genetic Code and Ordinary Genetic Code

The idea about the realization of the genetic code in terms of dark proton sequences giving rise to dark nuclei is one of the key ideas of TGD inspired quantum biology (see this). This vision was inspired by the totally unexpected observation that the states of three dark protons (or quarks) can be classified into 4 classes in which the numbers of states are the same as those of DNA, RNA, tRNA, and amino-acids. Even more, it is possible to identify the genetic code as a natural correspondence between the dark counterparts of DNA/RNA codons and dark amino-acids, and the numbers of DNAs/RNAs coding for a given amino-acid are the same as in the vertebrate code. What is new is that the dark codons do not reduce to ordered products of letters.

During the years I have considered several alternatives for the representations of the genetic code. For instance, one can consider the possibility that the letters of the genetic code correspond to the four spin-isospin states of a nucleon or quark, or to the spin states of an electron pair. Ordering of the letters as states is required, and this is problematic from the point of view of the tensor product unless the ordering reflects a spatial ordering for the positions of the particles representing the letters. One representation in terms of 3-chords formed from 3-photon states of dark photons emerges from the model of music harmony (see this). By octave equivalence the ordering of the notes is not needed.
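The degeneracies of the vertebrate code referred to above can be tallied explicitly (standard textbook numbers for the nuclear code, nothing TGD-specific):

```python
# Amino-acid degeneracies in the standard (vertebrate nuclear) genetic code:
# {codons per amino-acid: number of such amino-acids}.
degeneracies = {1: 2, 2: 9, 3: 1, 4: 5, 6: 3}
n_amino_acids = sum(degeneracies.values())
n_codons = sum(d * count for d, count in degeneracies.items()) + 3  # + 3 stop codons
print(n_amino_acids)  # 20
print(n_codons)       # 64 = 4**3
```

Any proposed dark realization of the code must reproduce exactly this degeneracy structure, which is what makes the three-dark-proton observation non-trivial.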

Insights

The above observations inspire several speculative insights.

  1. The emergence of dark nuclei identified as dark proton sequences would relate to Pollack's effect, in which irradiation of water in the presence of a gel phase bounding the water generates what Pollack calls exclusion zones (EZs). EZs are negatively charged and the water has the effective stoichiometry H1.5O. EZs deserve their name: somehow they manage to get rid of various impurities, which might be very important if EZs serve as regions carrying biologically important information. The protons of the water molecules must go somewhere, and the proposal is that they go to the magnetic body of some system consisting of flux tubes. The flux tubes contain the dark protons as sequences identifiable as dark nuclei.

  2. Since nuclear physics precedes chemistry, one can argue that prebiotic life is based on these dark biomolecules serving as templates for ordinary biomolecules. To some degree biochemistry would be a shadow dynamics, and the dark dynamics would be extremely simple as compared to the biochemistry induced by it. In particular, DNA replication, transcription, and translation would be induced by their dark variants. One can even extend this vision: perhaps also ordinary nuclear physics and its scaled up counterpart explaining "cold fusion" are parts of an evolutionary hierarchy of nuclear physics in various scales.

  3. Nature could have a kind of R&D lab allowing it to test various new candidates for genes by using transcription and translation at the level of the dark counterparts of the ordinary basic biomolecules.

Conditions on the model

The model must satisfy stringent conditions.

  1. Both the bases A, T, C, G and A, U, C, G as basic chemical building bricks of DNA and RNA must have emerged without the help of enzymes and ribozymes. It is known that the biochemical pathway known as the pentose-phosphate pathway generates both ribose and ribose-5-phosphate defining the basic building brick of RNA. In DNA ribose is replaced with deoxyribose obtained by removing one oxygen.

    Pyrimidines U, T and C having a single aromatic ring are reported by NASA to be generated under outer space conditions (see this). Carell et al have identified a mechanism leading to the generation of purines A and G, which besides the pyrimidines C, T (U) are the basic building bricks of DNA and RNA. The crucial step is to make the solution involved slightly acidic by adding protons. The TGD inspired model for the mechanism involves dark protons (see this).

    Basic amino-acids are generated in Miller-Urey type experiments. Also nucleobases have been generated in Miller-Urey type experiments.

    Therefore the basic building bricks can emerge without the help of enzymes and ribozymes, so that the presence of dark nuclei could lead to the emergence of the basic biopolymers and tRNA.

  2. The genetic code as a correspondence between RNA and the corresponding dark proton sequences must emerge. The same is true for DNA and also for amino-acids and their dark counterparts. The basic idea is that metabolic energy transfer between biomolecules and their dark variants must be possible. This requires transitions with the same transition energies so that resonance becomes possible. This is also essential for the pairing of DNA and dark DNA and also for the pairing of, say, dark DNA and dark RNA. The resonance condition could explain why just the known basic biomolecules are selected from a huge variety of candidates possible in ordinary biochemistry, and there would be no need to assume that life as we know it emerged as a random accident.

  3. Metabolic energy transfer between molecules and their dark variants must be possible by the resonance condition. The dark nuclear energy scale associated with a biomolecule could correspond to the metabolic energy scale of .5 eV. This condition fixes the model to a high extent, but also other dark nuclear scales with their own metabolic energy quanta are possible.
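For orientation (my own conversion, standard E = hc/λ, nothing TGD-specific), the .5 eV metabolic energy quantum corresponds to an infrared photon:

```python
hc = 1239.84      # Planck constant times speed of light, in eV*nm
E_quantum = 0.5   # eV, the metabolic energy quantum
wavelength_nm = hc / E_quantum
print(wavelength_nm)  # about 2480 nm: infrared light
```

A resonance at this energy thus lies in the infrared, which is relevant when comparing with infrared spectra.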

Vision

The basic problem in the understanding of prebiotic evolution is how DNA, RNA, amino-acids and tRNA - and perhaps even the cell membrane and microtubules - emerged. The individual nucleotides and amino-acids emerge without the help of enzymes or ribozymes, but the mystery is how their polymers emerged. If the dark variants of these molecules served as templates for their generation, one avoids this hen-and-egg problem. The problem of how just the biomolecules were picked up from a huge variety of candidates allowed by chemistry could be solved by the resonance condition making possible metabolic energy transfer between biomolecules and dark nuclei.

A simple scaling argument shows that the assumption that the ordinary genetic code corresponds to heff/h=n=2^18 and therefore to the p-adic length scale L(141)≈ .3 nm corresponding to the distance between DNA and RNA bases predicts that the scale of dark nuclear excitation energies is .5 eV, the nominal value of the metabolic energy quantum. This extends and modifies the vision about how prebiotic evolution led via the RNA era to recent biology. Unidentified infrared bands (UIBs) from interstellar space identified in terms of transition energies of dark nuclear physics support this vision and one can compare it to the PAH world hypothesis.

p-Adic length scale hypothesis and thermodynamical considerations lead one to ask whether the cell membrane and microtubules could correspond to 2-D analogs of RNA strands associated with dark RNA codons forming lattice-like structures. Thermal constraints allow the cell membrane of thickness about 5 nm as a realization of the k=149 level with n=2^22 in terms of lipids as analogs of RNA codons. The metabolic energy quantum is predicted to be .04 eV, which corresponds to the membrane potential. The thickness of the neuronal membrane is in the range 8-10 nm and could correspond to k=151 and n=2^23, in accordance with the idea that it corresponds to a higher level in cellular evolution reflecting that of dark nuclear physics.

Also microtubules could correspond to a k=151 realization, for which the metabolic energy quantum is .02 eV, slightly below the thermal energy at room temperature: this could relate to the inherent instability of microtubules. Also a proposal is made for how microtubules could realize the genetic code, with the 2 conformations of tubulin dimers and the 32 charges associated with ATP and ADP accompanying the dimer realizing the analogs of the 64 RNA codons.
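The comparison with thermal energy is simple to check; a small sketch using the standard value of the Boltzmann constant in eV/K:

```python
K_B_EV_PER_K = 8.617333e-5   # Boltzmann constant in eV/K

def thermal_energy_ev(T_kelvin):
    """Thermal energy k_B * T in eV."""
    return K_B_EV_PER_K * T_kelvin

kT_room = thermal_energy_ev(300.0)   # ~0.026 eV
# The proposed microtubule quantum (.02 eV) lies below kT_room and the
# membrane quantum (.04 eV) above it, consistent with the instability remark.
assert 0.02 < kT_room < 0.04
```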

See the chapter About the Correspondence of Dark Nuclear Genetic Code and Ordinary Genetic Code or the article with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Sunday, January 14, 2018

About heff/h=n as the number of sheets of Galois covering

The following considerations were motivated by the observation of a very stupid mistake that I have made repeatedly in some articles about TGD. The effective Planck constant heff/h=n corresponds naturally to the number of sheets of the covering space defined by the space-time surface.

I have however claimed that one has n=ord(G), where ord(G) is the order of the Galois group G associated with the extension of rationals assignable to the sector of "world of classical worlds" (WCW) and the dynamics of the space-time surface (what this means will be considered below).

This claim of course cannot be true, since the generic point of the extension is left invariant by some subgroup H of G, so that one has n= ord(G)/ord(H), which divides ord(G). Equality holds true only for Abelian extensions with cyclic G. For singular points the isotropy group is H1⊃ H so that ord(H1)/ord(H) sheets of the covering touch each other. I do not know how I ended up with a conclusion which is so obviously wrong, and how I managed for so long not to notice my blunder.

This observation forced me to consider more precisely what the idea about Galois group acting as a number theoretic symmetry group really means at space-time level and it turned out that M8-H correspondence gives a precise meaning for this idea.

Consider first the action of Galois group (see this and this).

  1. The action of Galois group leaves invariant the number theoretic norm characterizing the extension. The generic orbit of Galois group can be regarded as a discrete coset space G/H, H⊂ G. The action of Galois group is transitive for irreducible polynomials so that any two points at the orbit are G-related. For the singular points the isotropy group is larger than for generic points and the orbit is G/H1, H1⊃ H so that the number of points of the orbit divides n.
    Since rationals remain invariant under G, the orbit of any rational point contains only a single point. The orbit of a point in the complement of the rationals under G is analogous to the orbit of a point of a sphere under a discrete subgroup of SO(3).

    n=ord(G)/ord(H) divides the order ord(G) of the Galois group G. The largest possible Galois group for an n-D algebraic extension is the permutation group Sn. A theorem of Frobenius states that this can be achieved for n=p, p prime, if there is only a single pair of complex roots (see this). Prime-dimensional extensions with heff/h=p would have maximal number theoretical symmetries and could be very special physically: p-adic physics again!

  2. The action of G on a point of the space-time surface with imbedding space coordinates in an n-D extension of rationals gives rise to an orbit containing n points, except when the isotropy group leaving the point invariant is larger than for a generic point. One therefore obtains a singular covering with the sheets of the covering touching each other at singular points. Rational points are maximally singular points at which all sheets of the covering touch each other.

  3. At the QFT limit of TGD the n dynamically identical sheets of the covering are effectively replaced with a single one, and this effectively replaces h with heff=n× h in the exponent of the action (Planck constant is still the familiar h at the fundamental level). n is naturally the dimension of the extension and thus satisfies n≤ ord(G). n= ord(G) is satisfied only if G is a cyclic group.
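The relation n = ord(G)/ord(H) is just the orbit-stabilizer theorem; a minimal illustration with S3, the largest Galois group of a cubic, acting on three points standing in for the roots:

```python
from itertools import permutations

# S3 acting on {0,1,2}: orbit-stabilizer gives |orbit| = |G| / |stabilizer|.
G = list(permutations(range(3)))          # all 6 elements of S3

def orbit(point, group):
    return {g[point] for g in group}

def stabilizer(point, group):
    return [g for g in group if g[point] == point]

pt = 0
orb, stab = orbit(pt, G), stabilizer(pt, G)
assert len(orb) * len(stab) == len(G)     # 3 * 2 = 6
print(len(orb), len(stab), len(G))        # 3 2 6
```

The orbit has n=3 points although ord(G)=6: n divides ord(G) but equals it only when the stabilizer is trivial, as for a cyclic group acting on itself.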

The challenge is to define what space-time surface as a Galois covering really means!
  1. The surface considered can be a partonic 2-surface, a string world sheet, a space-like 3-surface at the boundary of CD, the light-like orbit of a partonic 2-surface, or a space-time surface. What one actually has is only the data given by discrete points having imbedding space coordinates in a given extension of rationals. One considers an extension of rationals determined by an irreducible polynomial P, but in the p-adic context also e defines a finite-D extension, since e^p is an ordinary p-adic number.

  2. Somehow these data should give rise to a possibly unique continuous surface. At the level of H=M4× CP2 this is impossible unless the dynamics satisfies, besides the action principle, also a huge number of additional conditions reducing the initial value data and/or boundary data to the condition that the surface contains a discrete set of algebraic points.

    This condition is horribly strong, much more stringent than holography and even the strong holography (SH) implied by general coordinate invariance (GCI) in the TGD framework. However, the preferred extremal property at the level of M4× CP2, following basically from GCI in the TGD context, might be equivalent with the reduction of boundary data to discrete data if M8-H correspondence is accepted. These data would be analogous to the discrete data characterizing a computer program, so that an analogy of computationalism would emerge (see this).

One can argue that somehow the action of the discrete Galois group must have a lift to a continuous flow.
  1. The linear superposition in the extension of the field of rationals does not extend uniquely to a linear superposition in the field of reals, since the expression of a real number as a sum of units of the extension with real coefficients is highly non-unique. Therefore the naive extension of the action of the Galois group to all points of the space-time surface fails.

  2. An old idea, already due to Riemann, is that the Galois group is represented as the first homotopy group of the space. A space with homotopy group π1 has coverings for which points remain invariant under a subgroup H of the homotopy group. For the universal covering the number of sheets equals the order of π1. For the other coverings there is a subgroup H⊂ π1 leaving the points invariant. For instance, for the homotopy group π1(S1)= Z the subgroup is nZ and one has Z/nZ=Zn as the group of the n-sheeted covering. For physical reasons it seems reasonable to restrict to finite-D Galois extensions and thus to finite homotopy groups.

    π1-G correspondence would allow the lifting of the action of the Galois group to a flow determined only up to homotopy, so this condition is far from sufficient.

  3. A stronger condition would be that π1, and therefore also G, can be realized as a discrete subgroup of the isometry group of H=M4× CP2 or of M8 (M8-H correspondence) and can be lifted to a continuous flow. Also this condition looks too weak to realize the required miracle. This lift is however strongly suggested by the Langlands correspondence (see this).
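The circle example can be made completely explicit; a small sketch of the n-fold covering p(z) = z^n of the unit circle, with its deck group Z/nZ permuting the fiber:

```python
import cmath

def fiber(w, n):
    """The n preimages of w under the covering p(z) = z**n on the unit circle."""
    theta = cmath.phase(w)
    return [cmath.exp(1j * (theta + 2 * cmath.pi * k) / n) for k in range(n)]

n = 5
w = cmath.exp(1j * 0.7)
pts = fiber(w, n)
# deck transformation of the covering: multiply by a primitive n-th root of unity
zeta = cmath.exp(2j * cmath.pi / n)
moved = [zeta * z for z in pts]
# each preimage still maps to w; the fiber is permuted cyclically (group Z/nZ)
assert all(abs(z ** n - w) < 1e-12 for z in moved)
print(len(pts))  # 5 sheets
```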

The physically natural condition is that the preferred extremal property fixes the surface or at least space-time surface from a very small amount of data. The discrete set of algebraic points in given extension should serve as an analog of boundary data or initial value data.
  1. M8-H correspondence could indeed realize this idea. At the level of M8 space-time surfaces would be algebraic varieties whereas at the level of H they would be preferred extremals of an action principle which is sum of Kähler action and minimal surface term.

    They would thus satisfy partial differential equations implied by the variational principle and an infinite number of gauge conditions stating that classical Noether charges vanish for a subgroup of the symplectic group of δ M4+/-× CP2. For the twistor lift one has in addition the condition that the induced twistor structure for the 6-D surface, represented as a surface in the 12-D Cartesian product of the twistor spaces of M4 and CP2, reduces to the twistor space of the space-time surface and is thus an S2 bundle over the 4-D space-time surface.

    The direct map M8→ H is possible in the associative space-time regions of X4⊂ M8 with quaternionic tangent or normal space. These regions correspond to external particles arriving into the causal diamond (CD). As surfaces in H they are minimal surfaces and also extremals of Kähler action, and do not depend at all on coupling parameters (universality of quantum criticality realized as associativity). In the non-associative regions identified as interaction regions inside CDs the dynamics depends on coupling parameters and the direct map M8→ H is not possible, but the preferred extremal property would fix the image in the interior of CD from the boundary data at the boundaries of CD.

  2. At the level of M8 the situation is very simple, since space-time surfaces would correspond to the zero loci of RE(P) or IM(P) (RE and IM defined in the quaternionic sense) for an octonionic polynomial P obtained from a real polynomial with coefficients in the field of rationals or in an extension of rationals. The extension of rationals would correspond to the extension defined by the roots of the polynomial P.

    If the coefficients are not rational but belong to an extension of rationals with Galois group G0, the Galois group of the extension defined by the polynomial has G0 as normal subgroup and one can argue that the relative Galois group Grel=G/G0 takes the role of Galois group.

    It seems that M8-H correspondence could allow the realization of the lift of discrete data to continuous space-time surfaces. The data fixing the real polynomial P, and therefore also its octonionic variant, are indeed discrete and correspond essentially to the roots of P.

  3. One of the elegant features of this picture is that at the level of M8 there are highly unique linear coordinates consistent with the octonionic structure, so that the notion of an M8 point belonging to an extension of rationals does not lead to a conflict with GCI. Linear changes of M8 coordinates not respecting the property of being a number in the extension of rationals would define a moduli space, so that GCI would be achieved.

Whether this option implies the lift of G to π1, or even to a discrete subgroup of isometries, is not clear. The Galois group should have a representation as a discrete subgroup of the isometry group in order to realize the latter condition, and the Langlands correspondence supports this, as already noticed. Note that only a rather restricted set of Galois groups can be lifted to the subgroups of SU(2) appearing in the McKay correspondence and in the hierarchy of inclusions of hyper-finite factors of type II1 labelled by these subgroups, forming the so-called ADE hierarchy in 1-1 correspondence with ADE type Lie groups (see this). One must notice that there are additional complexities due to the possibility of quaternionic structure, which brings in the automorphism group SO(3) of the quaternions.

See the short article About heff/h=n as the number of sheets of space-time surface as Galois covering or the article Does M8-H duality reduce classical TGD to octonionic algebraic geometry? or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Friday, January 12, 2018

Condensed matter simulation of 4-D quantum Hall effect from TGD point of view

There is interesting experimental work related to the condensed matter simulation of physics in space-times with D=4 spatial dimensions, meaning that one would have a D=1+4=5-dimensional space-time (see this and this). What is simulated is the 4-D quantum Hall effect (QHE). In M-theory, D=1+4-dimensional branes would have 4 spatial dimensions and also 4-D QHE would be possible, so that the simulation allows one to study this speculative higher-D physics but of course does not prove that 4 spatial dimensions are there.

In this article I try to understand the simulation, discuss the question whether 4 spatial dimensions and even 4+1 dimensions are possible in the TGD framework in some sense, and also consider the general idea of simulating higher-D physics using 4-D physics. This possibility is suggested by the fact that it is possible to imagine higher-dimensional spaces and physics: maybe this ability requires the simulation of higher-D physics using 4-D physics.

See the article Condensed matter simulation of 4-D quantum Hall effect from TGD point of view or the chapter Quantum Hall effect and Hierarchy of Planck Constants of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, January 11, 2018

Does the action of anesthetics prevent the formation of cognitive mental images?

I encountered an interesting popular article, Scientists Just Changed Our Understanding of How Anaesthesia Messes With The Brain, telling about the finding that anesthetics weaken the communications between neurons. It is found that an anesthetic known as propofol restricts the movement of the protein syntaxin 1a, which is required for neurotransmitter release at synapses.

The TGD inspired explanation for the loss of consciousness would be the following. Nerve pulse activity is needed to generate neurotransmitters attaching to the receptors of the post-synaptic neuron, in this manner forming connections between pre- and post-synaptic neurons and giving rise to networks of active neurons. The transmitter would be like a relay in an old-fashioned telephone network. Propofol would prevent the formation of these bridges and therefore of the networks of active neurons serving as correlates for mental images. No mental images, no higher-level consciousness.

The earlier TGD inspired proposal was that anesthetics induce a hyperpolarization reducing the nerve pulse activity. How anesthetics could induce hyperpolarization is discussed here: the model involves microtubules in an essential manner. Hyperpolarization would have the same effect as the restriction of the movement of syntaxin 1a. This mechanism might be at work during sleep, and also some anesthetics (but not propofol) could use it.

The TGD based interpretation relies on a profound re-interpretation of the function of transmitters and information molecules in general (see this). The basic idea is that connected networks of neurons correspond to mental images at the neuronal level and that the effect of anesthetics is to prevent the formation of these networks.

  1. In the TGD based model neither nerve pulses nor information molecules represent signals in intra-brain communications; rather, they build communication channels, acting as relays that fuse existing disjoint flux tubes associated with axons into network-like connected structures as they attach to receptors.

    Flux tube networks make possible classical signalling by dark photons with heff=n× h. Dark photons make their presence manifest by occasionally transforming to ordinary photons identified as bio-photons with energies in the visible and UV range. This signalling takes place at light velocity and is therefore optimal for communication and information processing purposes.

    Quantum mechanically flux tube networks correspond to so called tensor networks. Due to quantum coherence in the scale of network, quantum entanglement between the neurons of connected sub-networks is possible and networks serve as correlates for mental images.

    Nerve pulse patterns frequency modulate generalized Josephson radiation from neuronal membrane acting as a generalized Josephson junction. This radiation identifiable as EEG gives rise to sensory input to magnetic body (MB). MB in turn controls biological body (say brain) via dark cyclotron radiation, at least through genome, where it induces gene expression.

  2. All mental images at the level of the brain are cognitive representations, but they generate dark photon signals as virtual sensory inputs to sensory organs and in this manner give rise to sensory percepts as kinds of artworks, resulting in an iteration-like process involving signalling forth and back using dark photons. This would make possible pattern recognition and the formation of the objects of the perceptive field as cognitive representations, in turn mapped to sensory percepts at sensory organs.

  3. In the case of hearing speech, the objects of the perceptive field are linear and represent words and sentences. In the case of written language the words decompose into linear sequences of syllables and these in turn into letters. In the case of sensory perception the sub-networks are 2-D or even 3-D and represent objects of the perceptive field. The topological dynamics of this network represents the dynamics of sensory perception and of verbally and sensorily represented cognition (idiot savants).

See the article DMT, pineal gland, and the new view about sensory perception.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.




Wednesday, January 10, 2018

Strange spin asymmetry at RHIC

The popular article Surprising result shocks scientists studying spin tells about a peculiar effect in p-p and p-N (N for nucleus) collisions observed at the Relativistic Heavy Ion Collider (RHIC). In p-p scattering with a polarized incoming proton there is an asymmetry in the sense that protons with vertical polarization with respect to the scattering plane give rise to more neutrons slightly deflected to the right than to the left (see the figure of the article). In p-N scattering of vertically polarized protons the effect is also observed for neutrons but is stronger and has opposite sign for heavier nuclei! The effect came as a total surprise and is not understood. It seems however that the effects for proton and nuclear targets must have different origins, since otherwise it is difficult to understand the change of the sign.

The abstract of the original article summarizes what has been observed.

During 2015 the Relativistic Heavy Ion Collider (RHIC) provided collisions of transversely polarized protons with Au and Al nuclei for the first time, enabling the exploration of transverse-single-spin asymmetries with heavy nuclei. Large single-spin asymmetries in very forward neutron production have been previously observed in transversely polarized p+p collisions at RHIC, and the existing theoretical framework that was successful in describing the single-spin asymmetry in p+p collisions predicts only a moderate atomic-mass-number (A) dependence. In contrast, the asymmetries observed at RHIC in p+A collisions showed a surprisingly strong A dependence in inclusive forward neutron production. The observed asymmetry in p+Al collisions is much smaller, while the asymmetry in p+Au collisions is a factor of three larger in absolute value and of opposite sign. The interplay of different neutron production mechanisms is discussed as a possible explanation of the observed A dependence.

Since a diffractive effect in the forward direction is in question, one can ask whether strong interactions have anything to do with the effect. The effect can take place both at the level of nucleons and at the level of quarks, and these two effects should have different signs. Could electromagnetic spin-orbit coupling cause the effect both at the level of nucleons in p-N collisions and at the level of quarks in p-p collisions?

  1. The spin-orbit interaction is a relativistic effect: the magnetic field of the target nucleus in the reference frame of the projectile proton is non-vanishing: B= -γ v× E , γ= 1/(1-v^2)^{1/2}. The spin-orbit interaction Hamiltonian is


    H_{L-S} = -μ⋅B ,


    where


    μ= g_p μ_N S , μ_N= e/2m_p

    is the magnetic moment of the polarized proton, proportional to the spin S, which now has a definite direction due to the polarization of the incoming proton beam. The factor g_p = 2.79284734462(82) is the proton magnetic moment in units of the nuclear magneton μ_N.

  2. Only the component of E orthogonal to v is involved, and the coordinates in this direction are unaffected by the Lorentz transformation. One can express the transversal component of the electric field as a gradient

    E_r= - (∂_r V) r/r .

    The velocity v can be expressed as v=p/m_p so that the spin-orbit interaction Hamiltonian reads as

    H_{L-S}= γ g_p (e/2m_p) (1/m_p) L⋅S [∂_r V/r] .

    For a polarized proton the effect of this interaction could cause the left-right asymmetry. The reason is that the sign of the interaction Hamiltonian is opposite on the left and right sides of the target, since the sign of L=r× p is opposite on the left- and right-hand sides. One can argue as in the non-relativistic case that this potential generates a force which is radial and proportional to ∂_r[(∂_r V(r))/r].
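The sign flip of L = r × p between the two sides of the target is elementary kinematics; a minimal check with straight-line trajectories (not the full dynamics):

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

p = (0.0, 0.0, 1.0)                   # projectile momentum along z
b = 1.0                               # impact parameter
L_right = cross((b, 0.0, 0.0), p)     # projectile passing on the right of the target
L_left = cross((-b, 0.0, 0.0), p)     # projectile passing on the left

# orbital angular momentum, and hence H ~ L.S, flips sign from side to side
assert L_right == tuple(-x for x in L_left)
```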

Consider first the scattering on a nucleus.
  1. Inside the target nucleus one can assume that the potential is of the form V= kr^2/2: the force vanishes! Hence the effect must indeed come from peripheral collisions. At the periphery, responsible for almost forward scattering, one has V(r)=Ze/r and ∂_r[(∂_r V(r))/r]= 3Ze/r^4 at r=R, R the nuclear radius. One has R = kA^{1/3} for a constant density nucleus so that one has ∂_r[(∂_r V(r))/r]= 3k^{-4}eZA^{-4/3}.

    The force decreases with A roughly like A^{-1/3}, but the scattering proton can give its momentum to a larger number of nucleons inside the target nucleus. If all neutrons get their share of the transversal momentum, the effect is proportional to the neutron number N=A-Z, and one obtains the dependence Z(A-Z)A^{-4/3} ∼ A^{2/3}. If no other effects are involved one would have for the ratio r of the Al and Au asymmetries

    r=Al/Au ∼ [Z(Al)N(Al)/Z(Au)N(Au)] × [A(Au)/A(Al)]^{4/3} .

    Using (Z,A)=(13,27) for Al and (Z,A)=(79,197) for Au one obtains the prediction r=.28. The actual value r≈ .3, estimated from Fig. 4 of the article, is not far from this.

  2. This effect takes place only for protons, but it deflects the proton on either side towards the interior of the nucleus. One expects that the proton gives its transversal momentum components to other nucleons - also neutrons. This implies that the sign of the effect is the same as it would be for the spin-orbit coupling if the projectile were a neutron. This could be the basic reason for the strange sign of the effect.
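The prediction r = .28 follows by direct substitution of the nuclear data into the ratio formula:

```python
# r = [Z(Al)*N(Al) / (Z(Au)*N(Au))] * [A(Au)/A(Al)]**(4/3)
Z_Al, A_Al = 13, 27
Z_Au, A_Au = 79, 197
N_Al, N_Au = A_Al - Z_Al, A_Au - Z_Au   # neutron numbers 14 and 118

r = (Z_Al * N_Al) / (Z_Au * N_Au) * (A_Au / A_Al) ** (4 / 3)
# r ~ 0.28, to be compared with the measured value r ~ .3
```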

Consider next what could happen in p-p scattering.
  1. One must explain why neutrons with R-L asymmetry with respect to the scattering axis are created. This requires quark level considerations.

  2. The first guess is that one must consider the spin-orbit interaction for the quarks of the polarized proton scattering from the quarks of the unpolarized proton. What comes to mind is that one could, in a reasonable approximation, treat the unpolarized proton as a single coherent entity. In this picture the u and d quarks of the polarized proton would have asymmetric diffractive scattering, tending to go to opposite sides of the scattering axis.

  3. The effect for d quarks would be opposite to that for u quarks. Since one has n=udd and p=uud, the side which has more d quarks gives rise to a neutron excess in the recombination of quarks to hadrons. This effect would have opposite sign to the effect in the case of a nuclear target. This quark level effect would be present also for nuclear targets.

See the chapter New Particle Physics Predicted by TGD: Part II of "p-Adic Physics".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Saturday, January 06, 2018

Exciton-polariton Bose-Einstein condensate at room temperature and heff hierarchy

Ulla gave in my blog a link to a very interesting work about Bose-Einstein condensation of quasi-particles known as exciton-polaritons. The popular article tells about a research article published in Nature by IBM scientists.

Bose-Einstein condensation happens for exciton-polaritons at room temperature; this temperature is four orders of magnitude higher than the corresponding temperature for crystals. This sets bells ringing. Could heff/h=n be involved?

One learns from Wikipedia that exciton-polaritons involve electron-hole pairs: a photon kicks an electron to a higher energy state and an exciton is created. These quasiparticles would form a Bose-Einstein condensate with a large number of particles in the ground state. The critical temperature corresponds to the divergence of the occupation number given by Bose-Einstein statistics.

  1. The energy of the excitons must be of the order of the thermal energy at room temperature: IR photons are in question. The membrane potential happens to correspond to this energy. That the material is organic might be of relevance. Living matter involves various Bose-Einstein condensates and one can consider also excitons.

    As noticed, the critical temperature is surprisingly high. For crystal BECs it is of order .01 K. Now it is a factor of about 30,000 higher!

  2. Does a large value of heff =n×h make the critical temperature so high?

    Here I must look at Wikipedia for the BEC of quasiparticles. Unfortunately the formula for n^{1/3} there is copied from a source and contains several errors: the dimensions are completely wrong.

    It should read n^{1/3}= ℏ^{-1} (m_eff kT_cr)^{1/2} .

    [That is, the exponent is 1/2 rather than -1/2, and 1/ℏ appears rather than ℏ as in the Wikipedia formula. This is usual: it would be important to have Wikipedia contributors who understand at least something about what they are copying from various sources.]

  3. The correct formula for the critical temperature T_cr reads as

    T_cr= ℏ^2 (dn/dV)^{2/3}/(k m_eff) .

    [T_cr replaces the T_c of the Wikipedia formula and the exponent 2/3 replaces 2. Note that in the Wikipedia formula dn/dV is denoted by n, which is now reserved for heff=n×h.]


  4. In TGD one can generalize by replacing ℏ with ℏ_eff= n×ℏ so that one has

    T_cr→ n^2 T_cr .

    The critical temperature would behave like n^2 and the high critical temperature (room temperature) could be understood. In crystals the critical temperature is very low, but in organic matter a large value of n≈ 100 could change the situation. n≈ 100 would scale up the atomic scale of 1 Angstrom, as the coherence length of valence electron orbitals, to the cell membrane thickness of about 10 nm. There would be one dark electron-hole pair per volume taken by a dark valence electron: this looks reasonable.
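Up to numerical factors of order one the corrected formula, together with the replacement ℏ → nℏ, is easy to put in code (the density below is an illustrative value, not taken from the experiment):

```python
# BEC critical temperature up to O(1) numerical factors:
#   k_B * T_cr ~ (n_eff * hbar)**2 * (dn/dV)**(2/3) / m_eff
HBAR = 1.0545718e-34    # reduced Planck constant, J s
K_B = 1.380649e-23      # Boltzmann constant, J/K
M_E = 9.1093837e-31     # electron mass, kg

def t_cr(density_per_m3, m_eff=M_E, n_eff=1):
    """Critical temperature in K; n_eff = heff/h is the proposed dark scaling."""
    return (n_eff * HBAR) ** 2 * density_per_m3 ** (2 / 3) / (m_eff * K_B)

T_ordinary = t_cr(1e24)           # illustrative exciton density, n_eff = 1
T_dark = t_cr(1e24, n_eff=100)    # heff/h = 100 as suggested in the text
assert abs(T_dark / T_ordinary - 100 ** 2) < 1e-6
```

The point is only the scaling: whatever T_cr is for n=1, heff/h=100 multiplies it by 10^4.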

One must consider also the conservative option n=1. T_cr is also proportional to (dn/dV)^2, where dn/dV is the density of excitons, and to the inverse of the effective mass m_eff. m_eff must be of the order of the electron mass, so that the density dn/dV is the critical parameter. In standard physics so high a critical temperature would require a density dn/dV about a factor 10^6 higher than in crystals.

Is this possible?

  1. The Fermi energy E_F is given by an almost identical formula but with a factor 1/2 appearing on the right hand side. Using the density dn_e/dV for electrons instead of dn/dV gives the upper bound T_cr ≤ 2E_F. E_F varies in the range 2-10 eV. The actual value of T_cr in crystals is of order 10^{-6} eV, so that the density of quasiparticles must be very small for crystals: dn_cryst/dV≈ 10^{-9} dn_e/dV .

  2. For a crystal the size scale L_cryst of the volume taken by a quasiparticle would be 10^3 times larger than that taken by an electron, which varies in the range 10^{1/3}-10^{2/3} Angstroms, giving the range (220-460) nm for L_cryst.

  3. On the other hand, the thickness of the plastic layer is L_layer= 35 nm, roughly 10 times smaller than L_cryst. One can argue that L_plast ≈ L_layer is a natural order of magnitude for L_cryst for a quasiparticle in the plastic layer. If so, the density of quasiparticles is roughly 10^3 times higher than for crystals. The (dn/dV)^2-proportionality of T_cr would give the factor T_cr,plast≈ 10^6 T_cr,cryst, so that there would be no need for a non-standard value of heff!

    But is the assumption L_plast ≈ L_layer really justified in the standard physics framework? Why would this be the case? What would make the dirty plastic different from a super pure crystal?
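The chain of estimates in items 1-2 can be spelled out numerically (using the (dn/dV)^{2/3} scaling of T_cr for the density step, as in item 1):

```python
# Item 1: T_cr/E_F ~ 1e-6 and T_cr ~ (dn/dV)**(2/3) give the density ratio
density_ratio = (1e-6) ** (3 / 2)        # ~1e-9, quasiparticles vs electrons

# Item 2: the volume per quasiparticle is 1/density_ratio times larger,
# so the linear scale is larger by the cube root, ~1e3
length_ratio = (1 / density_ratio) ** (1 / 3)

# electron scale 10**(1/3)..10**(2/3) Angstrom  ->  L_cryst range in nm
L_cryst_nm = (10 ** (1 / 3) * length_ratio / 10,
              10 ** (2 / 3) * length_ratio / 10)
# roughly (215, 464) nm, matching the quoted 220-460 nm range
```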

The question which option is correct remains open: a conservative would of course argue that the no-new-physics option is correct, and might be right.

For background see the chapter Criticality and dark matter of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, January 04, 2018

What could idiot savants teach us about Natural Intelligence?

Recently a humanoid robot known as Sophia has gained a lot of attention in the net (see the article by Ben Goertzel, Eddie Monroe, Julia Moss, David Hanson and Gino Yu titled "Loving AI: Humanoid Robots as Agents of Human Consciousness Expansion (summary of early research progress)").

This led me to ask about the distinctions between Natural and Artificial Intelligence and about how to model Natural Intelligence. One might think that idiot savants could help in answering this kind of question, and so it turned out to be!

Mathematical geniuses and idiot savants seem to have something in common

It is hard to understand the miraculous arithmetical abilities of some mathematical geniuses as well as of idiot savants completely lacking conceptual thinking and conscious information processing based on algorithms. I have discussed these number theoretical feats here.

Not all individuals capable of memory and arithmetic feats are idiot savants. The feats of mathematical geniuses are not those of an idiot savant and involve high level mathematical conceptualization. How the Indian self-taught number-theoretical genius Ramanujan discovered his formulas remains a mystery, suggesting a totally different kind of information processing. Ramanujan himself told that he got his formulas from his personal God.

Ramanujan's feats lose some of their mystery if higher level selves are involved. I have considered a possible explanation based on ZEO, which allows one to consider the possibility that quantum computation type processing could be carried out in both time directions alternately. The mental image representing the computation would experience several deaths followed by re-incarnations with the opposite direction of clock time (the time direction in which the size of CD increases). A process requiring a very long time in the usual positive energy ontology would take only a short time when measured as the total shift of the tip of either boundary of CD - the duration of the computations at the opposite boundary would be much longer!

Sacks tells about idiot savant twins with an intelligence quotient of 60 having amazing numerical abilities despite the fact that they could not understand even the simplest mathematical concepts. For instance, the twins "saw" that the number of matches scattered on the floor was 111, and also "saw" the decomposition of integers into factors and their primality. A mechanism explaining this, based on the formation of wholes by quantum entanglement, is proposed here. The model does not however involve any details.

Flux tube networks as basic structures

One can build a more detailed model for what the twins did by assuming that information processing is based on 2-dimensional discrete structures formed by neurons (one can also consider 3-D structures consisting of 2-D layers; the cortex indeed has cylindrical structures consisting of 6 layers). For simplicity one can assume a large enough plane region forming a square lattice, defined by a neuron layer in the brain. The information processing should involve a minimal amount of linguistic features.

  1. A natural geometric representation of a number N is as a set of active points (neurons) of a 2-D lattice. A neuron is active if it is connected by a flux tube to at least one other neuron. The connection is formed/strengthened by nerve pulse activity creating small neurotransmitter-induced bridges between neurons. Quite generally, information molecules would serve the same purpose (see this and this).

    Active neurons would form a collection of connected subsets of the plane region in question. Any set of this kind with a given number N of active neurons would give an equivalent representation of the number N. At the quantum level the N neurons could form a union of K connected sub-networks consisting of Nk neurons with ∑ Nk=N.

  2. There is a large number of representations distinguished by the detailed topology of the network, and a particular union of sub-networks would carry much more information than the mere numbers Nk and N encode. Even telling which neurons are active (Boolean information) is only part of the story.

    The subsets of Nk points would have a large number of representations since the shapes of these objects could vary. A natural interpretation would be in terms of the objects of a picture. This kind of representation would naturally result in terms of virtual sensory input from the brain to the retina and possibly also to other sensory organs, and would lead to a decomposition of the perceptive field into objects.

    The representation would thus contain both geometric information - interpretation as an image - and number theoretic information provided by the decomposition. The K subsets would correspond to one particular element of a partition algebra generalizing the Boolean algebra, for which one has a partition into a set and its complement (see this).

  3. The number N provides the minimum amount of information about the situation and can be regarded as a representation of the number. One can imagine two extremes for the representations of N.

    1. The first extreme corresponds to K linear structures. This would correspond to the linear linguistic representation mode characteristic of the information processing used in classical computers. One could consider an interpretation as K words of language providing names for, say, the objects of an image. The extreme case is just one linear structure representing a single word. Cognition could use this kind of representations.

    2. The second extreme corresponds to a single square-lattice-like structure with each neuron connected to, say, the 4 nearest neighbors. This lattice has one incomplete layer: a string with some neurons missing. This kind of representation would be optimal for images representing a single object.

      For N active neurons one can consider a representation as a pile of linear strings containing pk neurons each, where p is prime. If N is divisible by pk, N= M× pk, one obtains an M× pk lattice. If not, one can have an M× pk lattice connected to a subset of neurons along a string with pk neurons. One would have a representation of the notion of divisibility by a given power of prime as a rectangle! If N is prime this representation does not exist!
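The rectangle criterion above can be put in a minimal computational sketch (function names are my own, chosen for illustration): N active neurons pile into a full rectangle of a given width exactly when the width divides N.

```python
def rectangle_shape(N, width):
    """Try to pile N active neurons into full rows of `width` neurons.
    Return (rows, width) if the rectangle is complete, i.e. width divides N;
    return None if the top row would be incomplete."""
    if N % width == 0:
        return (N // width, width)
    return None

# 12 neurons pile into a full 3 x 4 rectangle, but not into rows of 5:
assert rectangle_shape(12, 4) == (3, 4)
assert rectangle_shape(12, 5) is None
# For a prime N only the degenerate 1 x N rectangle exists:
assert rectangle_shape(13, 13) == (1, 13)
```

The geometric signature of divisibility is thus the existence of a complete rectangle of the given width.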


Flux tube dynamics

The classical topological dynamics of the flux tube system, induced by nerve pulse activity building temporary bridges between neurons, would allow phase transitions changing the number of sub-networks, the numbers of neurons in them, and the topology of the individual networks. This topological dynamics would generalize the Boolean dynamics of computer programs.

  1. Flux tube networks as sets of all active neurons can also be identified as elements of the Boolean algebra defined by the subsets of discretized planar or even 3-D regions (layers of neurons). This would allow one to project flux tube networks and their dynamics to Boolean algebra and its dynamics. In this projection the topology of the flux tube network does not matter much: it is enough that each neuron is connected to some neuron (bit 1). One might therefore think of a (highly non-unique) lifting of computer programs to nerve pulse patterns activating corresponding subsets of neurons. If the dynamics of the flux tube network determined by the space-time dynamics is consistent with the Boolean projection, the topological flux tube dynamics induced by the space-time dynamics would define a computer program.

  2. At the next step one could take into account the number of connected sub-networks: this suggests a generalization of Boolean algebra to partition algebras so that one considers not only a subset and its complement but a decomposition into n subsets, which one can think of as having different colors (see this). This leads to a generalization of Boolean (2-adic) logic to p-adic logic, and a possible generalization of computer programs as Boolean dynamical evolutions.

  3. At the third step also the detailed topology of each connected sub-network is taken into account and brings in further structure. Even higher-dimensional structures could be represented in discretized form by allowing the representation of higher-dimensional simplices as connected sub-networks. Here many-sheeted space-time suggests a possible manner to add artificial dimensions.
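The first two steps above can be illustrated with a minimal sketch (all names are hypothetical, introduced only for illustration): the Boolean projection keeps one bit per neuron, forgetting the network topology, and the partition generalization replaces the 2 values per element of Boolean algebra by n colors.

```python
def boolean_projection(neurons, flux_tubes):
    """Step 1: project a flux-tube network to one bit per neuron.
    A neuron maps to 1 iff at least one flux tube attaches to it;
    the detailed topology of the network is forgotten."""
    connected = {end for tube in flux_tubes for end in tube}
    return {n: int(n in connected) for n in neurons}

def n_colorings(n_elements, n_colors):
    """Step 2: a decomposition into n labelled subsets assigns one of
    n colors to each element, giving n**n_elements states;
    n_colors = 2 recovers the Boolean case of a set and its complement."""
    return n_colors ** n_elements

neurons = ["n1", "n2", "n3", "n4"]
tubes = [("n1", "n2"), ("n2", "n3")]   # n4 has no flux tube: bit 0
assert boolean_projection(neurons, tubes) == {"n1": 1, "n2": 1, "n3": 1, "n4": 0}
assert n_colorings(3, 2) == 8          # Boolean algebra: 2^3 subsets
assert n_colorings(3, 3) == 27         # 3-coloring generalization
```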

This dynamics would also allow one to realize basic arithmetic. In the case of summation the initial state of the network would be a collection of K disjoint networks with Nk elements, and the final state a single connected set with N=∑ Nk elements. The simplest representation is as a pile of K strings with Nk elements. The product M× N could be reduced to a sum of M sets with N elements each: this could be represented as a pile of M linear strings.
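As a toy sketch of this arithmetic (counting only the numbers of neurons, not the actual network topology), summation merges disjoint sub-networks and the product becomes a pile of M strings of N neurons:

```python
def lattice_sum(parts):
    """Summation: K disjoint sub-networks with N_k neurons merge into
    one connected network with N = sum of the N_k."""
    return sum(parts)

def lattice_product(M, N):
    """Product M*N as a pile of M linear strings of N neurons each;
    counting the active neurons in the pile gives the product."""
    pile = [[1] * N for _ in range(M)]    # M rows, N neurons per row
    return sum(len(row) for row in pile)

assert lattice_sum([3, 4, 5]) == 12
assert lattice_product(3, 4) == 12
```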

Number theoretical feats of twins and flux tube dynamics

Flux tube dynamics suggests a mechanism for how the twins managed to see the number of matches scattered on the floor and also how they managed to see the decomposition of a number into primes or prime powers. Sacks indeed tells that the eyes of the twins were rolling wildly during their feats. What is required is that the visual percept of the matches on the floor was subject to a dynamics allowing deformation of the topology of the associated network. Suppose that some preferred network topology or topologies allowed the twins to recognize the number of matches and tell it using language (therefore also linear language is involved). The natural assumption is that the favored network topology is connected.

The two extremes in which the network is connected are favored modes for this representation.

  1. Option I corresponds to any linear string giving a linguistic representation as the number of neurons (which would be activated by seeing the matches scattered on the floor). A large number of equivalent representations is possible. This representation might be optimal for associating to N its name. The verbal expression of the name could be a completely automatic association without any conceptual content. The different representations also carry geometric information about the shape of the string: a melody in music could be this kind of curve, whereas the words of speech would be represented by straight lines.

  2. Option II corresponds to a maximally connected lattice-like structure formed as a pile of strings with pk neurons for a given prime: N= M1× pk+M2, 0≤ M2 < pk. The highest string in the pile misses some neurons. This representation would be maximally connected. It contains more information than just the value of N.
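The Option II pile is just integer division: given N neurons and a row width pk, one has M1 full rows and an incomplete top row of M2 neurons. A minimal sketch (the function name is my own):

```python
def pile_shape(N, p, k):
    """Option II pile for N neurons: M1 full rows of p**k neurons plus an
    incomplete top row of M2 neurons, N = M1*p**k + M2 with 0 <= M2 < p**k."""
    width = p ** k
    M1, M2 = divmod(N, width)
    return M1, M2

# 111 neurons in rows of 5: 22 full rows and 1 neuron left over
assert pile_shape(111, 5, 1) == (22, 1)
# 111 in rows of 3: 37 full rows, none left over, so 3 divides 111
assert pile_shape(111, 3, 1) == (37, 0)
```

A vanishing M2 is exactly the complete rectangle signalling divisibility by pk.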

Option II also provides number theoretical information, allowing a model for the feats of the twins.
  1. As far as checking the primality of N is concerned, one can assume k=1. For the primes pi dividing N one would find a representation of N as a rectangle. If N is prime, one finds no rectangles of this kind (or finds only the degenerate 1× N rectangle). This serves as a geometric signature of primality. The twins would have tried to find all piles of strings with p neurons, p=2,3,5,... A slower procedure checks divisibility by n=2,3,4,....

  2. The decomposition into prime factors would proceed in a similar manner by starting from p=2 and proceeding to larger primes p=3,5,7,.... When a prime factor pi is found, only a single vertical string from the pile is taken and the process is repeated for this string, now considering only primes p ≥ pi (the same prime can divide N several times). The process would have been completely visual and would not involve any verbal thinking.
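The geometric procedure translates into trial division. The sketch below checks widths n=2,3,4,... (the "slower procedure" mentioned above); this subsumes the prime search, since a composite width never divides once its smaller prime factors have been removed. Finding a full rectangle of width p means keeping one vertical string (N/p neurons) and repeating; if no rectangle beyond the degenerate 1× N one exists, N is prime.

```python
def prime_factors_by_rectangles(N):
    """Factor N by the geometric procedure: look for a full pile of
    strings (rectangle) of width p starting from p = 2. When one is
    found, keep a single vertical string of the pile (N // p neurons)
    and repeat, never returning to smaller widths. If no non-degenerate
    rectangle exists, N itself is prime."""
    factors = []
    p = 2
    while N > 1:
        if N % p == 0:      # a full M x p rectangle exists
            factors.append(p)
            N //= p         # keep one vertical string of the pile
        else:
            p += 1          # try the next candidate width
    return factors

assert prime_factors_by_rectangles(111) == [3, 37]   # the twins' 111 matches
assert prime_factors_by_rectangles(13) == [13]       # prime: only 1 x 13
assert prime_factors_by_rectangles(12) == [2, 2, 3]  # repeated prime factor
```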

For the storage of memories the 2-D (or possibly 3-D) representation is uneconomical, and the use of a 1-D representation replacing images with their names is much more economical. For information processing, such as the decomposition into primes, the 2-D or even 3-D representations are much more powerful.

See the article Artificial Intelligence, Natural Intelligence, and TGD or the chapter of "TGD based view about living matter and remote mental interactions" with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.


Artificial Intelligence, Natural Intelligence, and TGD

Recently a humanoid robot known as Sophia has gained a lot of attention in the net (see the article by Ben Goertzel, Eddie Monroe, Julia Moss, David Hanson and Gino Yu titled "Loving AI: Humanoid Robots as Agents of Human Consciousness Expansion (summary of early research progress)").

Sophia uses AI, visual data processing, and facial recognition. Sophia imitates human gestures and facial expressions and is able to answer questions and make simple conversations on predefined topics. The AI program used analyzes conversations, extracts data, and uses it to improve responses in the future. To a skeptic Sophia looks like a highly advanced version of ELIZA.

Personally I have a rather skeptical view about strong AI relying on a mechanistic view about intelligence. This view leads to transhumanism and notions such as mind uploading. It is however good to air out one's thinking sometimes.

Computers should have a description also in the quantal Universe of TGD, and this forces one to look more closely at the idealizations of AI. This process led to a change in my attitudes. The fusion of human consciousness and the presumably rather primitive computer consciousness correlating with the program running in the computer might be possible in the TGD Universe, and TGD inspired quantum biology and the recent ideas about prebiotic systems provide rather concrete ideas for attempts to realize this fusion.

TGD also strongly suggests that there is what might be called Natural Intelligence relying on 2-D cognitive representations defined by networks consisting of nodes (neurons) and flux tubes (axons with nerve pulse patterns) connecting them, rather than the linear 1-D representations used by AI. The topological dynamics of these networks has the Boolean dynamics of computer programs as a projection but is much more general and could allow representation of the objects of the perceptive field and of number theoretic cognition.

See the article Artificial Intelligence, Natural Intelligence, and TGD or the chapter of "TGD based view about living matter and remote mental interactions" with the same title.
