Saturday, May 22, 2021

Chemistry revolution

Thanks to Moore Thaung for a very interesting article about new chemistry. Unfortunately, a subscription to New Scientist is required. One can however find on the web several popular articles describing the changing views of chemical bonds.

This weird chemical bond acts like a mash-up of hydrogen and covalent bonds tells about hybrids of hydrogen and covalent bonds. At short bond lengths these bonds become strong covalent bonds, and at long bond lengths they become weak hydrogen bonds, which can even have lengths of 3 Angstrom.

Strange bonds entirely new to chemists predicted in ammonia hydrides tells that ammonia (NH3) can form, in the presence of hydrogen at very high pressure, an exotic compound NH7, which can decay to NH4+ + H2 + H-, where charge balance requires the hydride ion H-. Also NH4+ is exotic.

Sticking together: Another look at chemical bonds and bonding discusses the theory of chemical bonds proposed by Prof. David Brown, which has turned out to be very successful. His article Another look at bonds and bonding was published in Structural Chemistry 31(1), 2019.

The bond theory of David Brown

The bond theory of David Brown is of special interest.

  1. The theory involves the notion of electric flux as a purely classical element. The delocalization of valence electrons is of course a non-classical element and one can argue that this aspect is not well-understood in standard chemistry.

    In the TGD framework, the counterpart of the electric flux is a flux tube carrying a magnetic flux, which can be a monopole flux. A simple modification of purely magnetic flux tubes gives tubes carrying also an electric flux.

  2. The key concept, besides the valence defined as the number Nv of valence electrons belonging to bonds and the number Nb of valence bonds, is the valence strength defined as Nv/Nb. The total electric flux is the sum of the fluxes assignable to the bonds and equals the total electric charge -Nve of the valence electrons (see the sketch after this list).

    By flux conservation, the electric fluxes at the two ends of a given bond are opposite, and this gives a strong constraint on the model. This condition is new from the point of view of standard bond theory and is purely classical.

  3. The configurations with minimum energy are expected to be symmetric. In this case, the electric fluxes for the bonds are expected to be identical and proportional to the common bond strength.
    1. An important implication of flux conservation in the symmetric case is that the valence strengths must be the same for bonded atoms. This condition excludes a large number of candidates.
    2. If Nb is larger than Nv, the flux is fractional. This would represent an exotic situation. An interesting question is whether the flux could correspond to a quark pair or to two quark pairs, possible in the TGD framework even in long scales: in this case the flux would be 1/3 or 2/3 of the flux associated with a single valence electron.
  4. The model works for many kinds of bonds, is claimed to work even for hydrogen bonds, and can be used to predict possible bonding structures. What is remarkable is that the notion of a conserved electric flux assignable to chemical bonds resonates with the TGD view that the non-trivial space-time topology behind the notion of flux tube is directly visible at the level of chemistry.
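
The bookkeeping of the model is simple enough to spell out in a few lines of code. The following is a minimal sketch in Python (the function names and example atoms are my own illustration, not Brown's notation): it computes the valence strength Nv/Nb as an exact fraction, checks the flux-matching condition for a symmetric bond, and exhibits the fractional case of item 3.

```python
from fractions import Fraction

def valence_strength(n_v: int, n_b: int) -> Fraction:
    """Valence strength S = Nv/Nb: the total flux -Nv*e is shared
    equally by the Nb bonds in the symmetric minimum-energy case."""
    return Fraction(n_v, n_b)

def flux_match(atom_a, atom_b) -> bool:
    """Flux conservation: the fluxes at the two ends of a bond are
    opposite, so bonded atoms must have equal valence strengths."""
    return valence_strength(*atom_a) == valence_strength(*atom_b)

carbon = (4, 4)   # Nv = 4 valence electrons shared by Nb = 4 bonds: S = 1
exotic = (2, 6)   # Nb > Nv: fractional flux S = 1/3

print(valence_strength(*carbon))      # 1
print(valence_strength(*exotic))      # 1/3, matching one quark pair
print(flux_match(carbon, (2, 2)))     # True: S = 1 at both ends
```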

TGD view about chemical bonds

I remember the time when I realized that TGD suggests a description of the chemical bond in terms of space-time topology. Could the chemistry books be wrong? That was a question I barely dared to articulate.

Gradually I learned that chemistry books do not really allow any deeper understanding of chemical bonds. One just says that bonds follow from the Schrödinger equation, but computational complexity prevents demonstrating this.

TGD indeed implies a revolution in chemistry. Some chemical bonds are accompanied by flux tubes carrying dark particles with effective Planck constant heff > h = 6h0. The valence electrons of the less electronegative atom would transfer to the flux tube and become dark. This leads to a model of valence bonds in which the value of n = heff/h0 increases as one moves to the right along a row of the periodic table. This implies delocalization of the valence electrons to a longer length scale, which in the Bohr model scales like heff^2. This delocalization would be essential for the chemistry of valence bonds and for biochemistry in particular.
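
As a back-of-the-envelope illustration (my own, using the Bohr model scaling stated above and the TGD identification h = 6h0), the delocalization scale grows quadratically in heff:

```python
A_BOHR = 5.29e-11   # ordinary Bohr radius in meters (heff = h = 6*h0)

def bohr_radius(n: int) -> float:
    """Bohr radius for heff = n*h0: scales as heff^2, i.e. as (n/6)^2."""
    return A_BOHR * (n / 6) ** 2

# n = 6 corresponds to ordinary chemistry; larger n delocalizes
# the dark valence electron to longer length scales.
for n in (6, 12, 18):
    print(f"heff = {n}*h0: radius = {bohr_radius(n):.2e} m")
```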

The article also mentions bonds without electrons. The hydrogen bond is of course an example of such: now it would be a proton that becomes dark and has heff > h. In water one could have a spectrum of heff values with varying bond lengths, and this would give water its very special properties. Even flux tubes without any particles, creating correlations and serving as correlates of entanglement between the atoms involved, are possible.

Also heff < h bonds are possible. Randell Mills has found evidence for a variant of hydrogen for which energies are scaled by a factor 1/4: this would mean heff = h/2.

An interesting possibility is that scaled-down atoms with heff = h/2 have existed in the past. Could they correspond to most of the dark matter, the primordial dark matter? The strange disappearance of the valence electrons of some transition metals upon heating has also been known for decades: heating could provide the energy needed to increase heff for the valence electrons so that they become dark relative to us.

In biology, metabolic energy would be used to increase heff, which serves as a kind of universal IQ, a measure of algebraic complexity.

For background, see this, this, and this.

For the topics discussed, see the article Revolution in chemistry.

For a summary of the earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, May 13, 2021

Has AI hit a dead end?

95 per cent of brain activity has been found to consist of fluctuations seemingly unrelated to the conscious activities involving sensory perception, motor actions, and cognition. In the neuroscience framework they are interpreted as noise. Since fluctuations are poison for deterministic computation, the finding poses a serious problem for the model of the brain as a deterministic classical computer.

In this article the TGD based interpretation of the long range fluctuations as quantum fluctuations is discussed. These would be characterized by the value of the effective Planck constant heff = nh0 labelling the phases of ordinary matter identified as dark matter and residing at the magnetic body (MB) of the system. n has a number theoretic interpretation and can be regarded as a universal IQ, so that fluctuations are a prerequisite for intelligence. According to the TGD based view of neuroscience, primary sensory percepts reside at the sensory organs, which requires back and forth communications between the brain and the sensory organs to build sensory perceptions as standardized mental images. These communications must be fast, and the proposal is that they use dark photon signals.

In this view nerve pulses do not represent signals inside the brain but act as neural relays at synaptic junctions making possible long range dark photon communications inside the brain. Part of the metabolic energy associated with the fluctuations could be used for building mental images in the proposed manner. Nerve pulse patterns generate Josephson radiation communicating sensory information to MB and also require metabolic energy. Dark cyclotron radiation from MB represents control signals to the brain. In both cases, long range fluctuations at the brain level are involved.

See the article Has AI hit a dead end? or the chapter Artificial Intelligence, Natural Intelligence, and TGD.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Updated version of Expanding Earth Model

I wrote an updated version of the Expanding Earth Model (EEM) based on the assumption that during the Cambrian Explosion (CE), about 0.5 billion years ago, the radius of Earth increased by a factor of 2.

The recent findings demonstrating that the Earth's mantle contains water, and even pockets of fluid water, plus a detailed discussion of various objections against EEM led to an updated version of the model. The new key element is that before CE the value of heff at the atomic level was heff = 3h0 = h/2 for Earth: Earth consisted of matter which would be dark relative to us. In CE the transition heff = 3h0 → 6h0 = h took place and induced the scaling by a factor 2. This transition also initiated biological evolution.
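
Written out (my own spelling-out of the arithmetic, with length scales taken to be proportional to heff, as the factor-2 claim implies):

```latex
h_{eff}:\; 3h_0 \;\longrightarrow\; 6h_0 = h,
\qquad
\frac{R_{after}}{R_{before}} \;=\; \frac{6h_0}{3h_0} \;=\; 2 .
```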

The finding that Earth was covered by water already billions of years ago suggests that this water had heff = h so that it could leak almost freely into the interior of the Earth; being dark relative to the heff = h/2 matter around it, it could have a much lower temperature and pressure. Therefore life could evolve in Mother Gaia's womb, shielded from cosmic rays and meteoric bombardment.

See the article Updated version of Expanding Earth model or the chapter Expanding Earth Model and Pre-Cambrian Evolution of Continents, Climate, and Life.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Friday, May 07, 2021

A chordate able to regenerate copies of itself when dissected into 3 parts

The popular article Polycarpa mytiligera can regrow all of its organs if dissected into three pieces tells about an extraordinary biological discovery.

The creature known as Polycarpa mytiligera is a marine animal commonly found in the Gulf of Eilat that is capable of regenerating its organs. The surprising discovery was that the animal can regenerate all of its organs even when dissected into three fragments.

Such a high regenerative capacity has not been detected earlier in a chordate animal that reproduces only by sexual reproduction. In the experiment, the researchers dissected specimens in a way that left part of the body without a nerve center, a heart, and part of the digestive system. Not only did each part of the creature survive the dissection on its own, but all of the organs also regenerated in each of the three sections.

This is a highly interesting challenge for TGD. Information about the full animal body was needed for full regeneration. How was it preserved in the dissection? Was genetic information, as it is understood in standard biology, really enough to achieve this?

  1. In TGD inspired quantum biology the magnetic body (MB), carrying dark matter as heff/h0 = n phases, is the key notion. heff is an effective Planck constant defining the scale of quantum coherence. n is the dimension of an extension of rationals defined by a polynomial defining a space-time region, and serves as a measure of algebraic complexity and as a kind of IQ (see the sketch after this list). MB, with its high IQ defined by n, serves as the master of the biological body (BB), controlling it and receiving information from it. The layers of MB also define abstracted representations of BB.
  2. If BB suffers damage, the information about BB is not lost at the level of MB, and MB, which carries abstracted representations of BB and is able to control it, could restore BB at least partially. The healing of wounds would be the basic example. A more dramatic example of healing was discovered by Peoch: the neurons of the salamander brain can be shuffled like cards in a deck, but the animal recovers.

    Indeed, since nothing happens to the MB of the salamander or of Polycarpa mytiligera, recovery is in principle possible. The new finding gives additional support for MB as a carrier of the biological information.
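
To make the number theoretic statement concrete: n is the degree of an irreducible polynomial, which equals the dimension of the extension of rationals generated by its root. A minimal sketch with sympy (the example polynomials are my own choices):

```python
from sympy import Poly, Symbol, QQ

x = Symbol('x')

# For an irreducible polynomial P the dimension of Q(root)/Q equals
# deg(P); in TGD this n would give heff = n*h0 for the region.
for p in (x**2 - 2, x**3 - x - 1, x**5 - x - 1):
    poly = Poly(p, x, domain=QQ)
    assert poly.is_irreducible
    print(f"{p}: n = {poly.degree()}, heff = {poly.degree()}*h0")
```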

One can also ask questions about the recovery process itself. Could recovery be seen as a self-organization process of some kind?
  1. In the TGD framework, quantum measurement theory relies on zero energy ontology (ZEO), which solves its basic problem. The basic prediction is that in the TGD counterparts of ordinary state function reductions ("big" SFRs or BSFRs) time reversal takes place. In small SFRs (SSFRs), identifiable as analogs of "weak" measurements, the arrow of time is preserved. ZEO also makes it possible to understand why the Universe looks classical in all scales although BSFRs occur in all scales at the dark onion-like layers of MB, which control the lower layers, with ordinary biomatter at the bottom of the hierarchy.
  2. Time reversed dissipation after BSFR looks like self-organization from the perspective of an outsider with the standard arrow of time, call it briefly O, and would be a basic self-organization process in living systems. In dissipation gradients disappear, but in time-reversed dissipation they appear from the perspective of O.
  3. This also makes possible self-organized quantum criticality (SOQC), which is impossible in standard thermodynamics because criticality by definition means instability. The change of the arrow of time changes the situation from the perspective of O, since the time reversed system tends to approach criticality. Homeostasis would rely on SOQC rather than on extremely complex deterministic control programs as in the computerism based picture. Change the arrow of time for a subsystem and let it happen. A very Buddhist approach to healing!
  4. The change of the arrow of time would also be central in healing processes and in regeneration.
For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Sunday, May 02, 2021

AI research may have hit a dead end

I found a link to a very interesting article titled "Artificial intelligence research may have hit a dead end", followed by the comment "'Misfired' neurons might be a brain feature, not a bug — and that's something AI research can't take into account" (see this).

Also Philip K. Dick's 1968 sci-fi novel "Do Androids Dream of Electric Sheep?" is mentioned. Would an intelligent robot (if it were still a robot) dream?

AI models the brain as a deterministic computer. A computer does not dream: it does just what is needed to solve a highly specialized problem (just what a top specialist does in his job; the computer is the idol of every professional highflier).

Computerism assumes physicalism, denying such things as genuine free will, but this is not seen as a problem. Mainstream neuroscientists also believe in physicalism. Some computational imperialists even claim that physics reduces to computation.

1. Is 95 per cent of brain activity mere noise?

What might be called the neuroscience of fluctuations has however led to a strange conclusion: 95 per cent of the brain's activity, and therefore of its metabolic energy, seems to go to generating fluctuations, which in standard neuroscience represent noise. Neuroscientists have routinely averaged out this "noise" and concentrated on the study of what can be regarded as motor actions and sensory input. These contributions seem to represent only ripples in a vast sea of activity.

[Amusingly, junk DNA corresponds to 95 per cent of DNA in the case of humans, as the article observes.]

By the way, EEG is still often regarded as mere noise. This represents a similar puzzle: why would the brain use a lot of metabolic energy to send information to outer space? Coding information about the contents of consciousness and the brain state indeed requires a lot of metabolic energy. To sum up, the brain seems to be the diametric opposite of a computer in the sense that spontaneous fluctuations are poison for a computer but food for the brain.

What the article suggests is that this 95 per cent could correspond to "dreaming", that is, imagination. The ability to imagine, rather than the property of being a dead automaton, would give rise to intelligence. Dreams would be freely associating cognitive fluctuations - whatever that might mean physically. Interestingly, it is mentioned that newborns dream twice as much as adults: they must learn. One can learn by imagining, not merely by making all possible mistakes in the real world.

What can one say about these findings in the TGD framework?

2. Could fluctuations be induced by quantum fluctuations in the quantum critical Universe of TGD?

Consider first the TGD interpretation of quantum fluctuations.

  1. The TGD Universe is quantal in all scales. Zero energy ontology (ZEO) allows one to overcome the basic objection that the universe looks classical in long scales: the ZEO view of quantum jumps forces the Universe to look classical for the outsider. The experiments of Minev et al indeed demonstrated this concretely.
  2. The TGD Universe is also quantum critical in all scales. Quantum criticality means that the system is maximally complex and sensitive to perturbations. Complexity means that the system is ideal for representing the external world via sensory inputs. Since criticality implies maximal sensitivity, the system is also an ideal sensory receptor and motor instrument.
  3. The basic characteristics of criticality are long range fluctuations. They are not random noise but are highly correlated. Could the fluctuations in the brain correspond to quantum fluctuations?

Long range quantum fluctuations are not possible for the ordinary value of Planck constant.
  1. The number theoretical view of TGD, generalizing the ordinary physics of sensory experience to a physics of both sensory experience and cognition, leads to the prediction that there is an infinite hierarchy of phases of ordinary matter, identifiable as dark matter and labelled by the values of the effective Planck constant heff = nh0, where n is the dimension of an extension of rationals defined by a polynomial determining a space-time region.
  2. The value of n serves as a measure of complexity and therefore defines a kind of IQ. The longer the scale of the quantum fluctuations, the higher the value of n, the larger the heff, and the longer the scale of quantum coherence. Fluctuations would make the brain intelligent. Their absence would make the brain a complete idiot - an ideal computer.
  3. The higher the value of heff, the larger the energy of the particle when the other parameters are kept constant. This means that intelligence requires a metabolic energy feed to increase heff and to maintain its value, since heff tends to be reduced spontaneously (see the estimate after this list).
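
For a photon this energy scaling is simply E = heff × f. A small sketch (my own illustrative numbers: a 10 Hz EEG-range frequency and a 2 eV target in the visible, biophoton range) gives the order of magnitude of heff/h involved:

```python
H_PLANCK_EV = 4.136e-15   # Planck constant in eV*s

def photon_energy_eV(heff_ratio: float, f_hz: float) -> float:
    """Photon energy E = heff*f with heff = heff_ratio*h."""
    return heff_ratio * H_PLANCK_EV * f_hz

f = 10.0      # EEG alpha-band frequency in Hz
target = 2.0  # visible (biophoton) energy in eV

heff_ratio = target / (H_PLANCK_EV * f)
print(f"heff/h ~ {heff_ratio:.1e}")     # ~ 5e13
print(photon_energy_eV(heff_ratio, f))  # back to ~2 eV
```
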
One can however argue that since the brain consists of ordinary matter, brain fluctuations cannot be quantal.
  1. In TGD they would be induced by quantum fluctuations at the level of the magnetic body (MB), which has a hierarchical onion-like structure. The dark matter would consist of ordinary particles with heff = nh0 at the MB, and since heff/h0 serves as a measure of IQ, it would be higher for dark matter than for ordinary biomatter. MB containing dark matter would be the "boss" controlling the biological body (BB).
  2. The quantum coherence of MB would force ordinary coherence of biomatter. Ordinary matter would be like soldiers obeying orders and in this manner behaving apparently like a larger coherent unit.
MB would receive sensory input from BB and control it by using EEG realized as dark photons. This would explain EEG and its probably existing scaled variants.

3. TGD view about sensory perception, motor actions, and dreaming and imagination

The proposal of the article was that most of the brain activity goes to "dreaming". Dreaming, hallucinations, and imagination are poorly understood notions in neuroscience. TGD provides a rather detailed view of these notions.

  1. What distinguishes TGD from neuroscience is that sensory receptors are assumed to serve as the carriers of sensory percepts. Zero energy ontology (ZEO), providing a new view of time and memory, makes it possible to answer the basic objections, related for instance to the phantom limb phenomenon: pain in the phantom limb would be a sensory memory.
  2. The assumption that sensory percepts are artworks rather than passive records of sensory input requires virtual sensory input from the brain to the sensory organs and the build-up of the final percept by pattern recognition - an iterative procedure involving very many forth-and-back signals. Nerve pulse transmission is much too slow a process to allow this, and signals propagating with the maximal signal velocity are suggestive (see the estimate after this list).
  3. Nerve pulses and neurotransmitters would not represent real communication but would give rise to temporary intra-brain communication lines along which communications would take place with the maximal signal velocity as dark photon signals (characterized by heff/h0 = n), with the dark photons transforming to biophotons in an energy conserving manner.

    Neurotransmitters and also other information molecules (hormones, messengers) attached to receptors would serve as bridges fusing permanent but disjoint communication lines along axons into a connected temporary communication line along which dark photons can propagate. Nerve pulses would also generate generalized Josephson radiation allowing communications between the biological body (BB) and the magnetic body (MB) using EEG. The meridian system could be a permanently connected system of communication lines.

    This picture leads to a concrete proposal about the roles of DMT and the pineal gland in imagination, dreams, and hallucinations.
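
To see why nerve pulses are too slow for an iterative forth-and-back build-up of percepts, one can compare round-trip times; the numbers below are my own rough illustration:

```python
V_NERVE = 50.0    # typical nerve pulse conduction velocity, m/s
V_LIGHT = 3.0e8   # maximal signal velocity (light), m/s
D = 0.1           # brain to sensory organ distance, meters

for name, v in (("nerve pulse", V_NERVE), ("light-speed", V_LIGHT)):
    t = 2 * D / v   # duration of one forth-and-back iteration
    print(f"{name}: round trip = {t:.1e} s")

# A nerve pulse round trip takes ~4 ms, so only tens of iterations fit
# into a ~0.1 s percept even before counting synaptic delays; signals at
# light speed allow practically unlimited iterations.
```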

Returning to the original topic, the natural question is the following: how large a fraction of the 95 per cent of brain activity goes to feedback not present in the brain of standard neuroscience? This would include the construction of feedback to the sensory organs as virtual sensory input to build standardized mental images. Dreams are a special case of this. There is also virtual sensory input which does not reach the sensory organs and gives rise to imagination, in particular internal speech.

A similar picture applies to virtual motor input and the construction of motor output as "standardized motor patterns" - a notion that makes sense only in ZEO. Note that the feedback loop could extend from the brain to the MB.

There is also an interesting finding related to motor activities. In experiments on rats it was found that spontaneous brain activity increases dramatically as the rat moves. This brings to mind a lecturer who paces back and forth as he talks. This rhythmic motion could give rise to a brain/body rhythm coupling the lecturer to a layer of MB with large heff. The quantum coherence of MB would induce ordinary coherence of BB in body scale and raise the "IQ" of the lecture. Thinking requires motion!

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD. 


The notion of holography in TGD framework

Thanks to Bob Tang for the link to Sabine Hossenfelder's article about holography. I will not comment on the content of the link but on the TGD view of holography.

What "Universe as a hologram" does really mean must be first defined. In pop physics this notion has remained very loose. In the following I summarize the TGD based view about what holography means in the geometric sense.

In TGD, 3-D surfaces are the basic objects and replace 3-space. Holography is not a new principle but reduces to general coordinate invariance.

1. "Ordinary" holography

General coordinate invariance in the 4-D sense requires that 3-surfaces correspond to a single 4-D surface - space-time - at which general coordinate transformations act. The space-time surface is like a Bohr orbit, a preferred extremal of the action defining the space-time surface.

This is nothing but holography in the standard sense and leads to zero energy ontology (ZEO), meaning that quantum states are superpositions of 3-surfaces or, equivalently, of 4-D surfaces.

[The alternative to ZEO would be the path integral approach, which is mathematically ill-defined and makes no sense in the TGD framework due to horrible divergence difficulties.]

ZEO has profound implications for quantum theory itself: it solves the measurement problem and also implies that the arrow of time changes in "big" (ordinary) state function reductions as opposed to "small" SFRs ("weak" measurements). Also the question of at which length scale quantum behavior transforms to classical behavior becomes obsolete.

2. Strong form of holography (SH)

Besides the space-like 3-surfaces at the boundaries of the causal diamond (CD), serving as the ends of the space-time surface (initial value problem), there are light-like surfaces at which the signature of the metric changes from Minkowskian to Euclidian (boundary value problem). Euclidian regions correspond to fundamental particles from which elementary particles are made.

If either space-like or light-like 3-surfaces are assumed to be enough as data for holography (the initial value problem is equivalent to the boundary value problem), the conclusion is that their intersections - partonic 2-surfaces - are enough as data. This would give rise to the strong form of holography, SH.

Intuitive arguments suggest several alternative mathematical realizations of the holography for space-time surfaces in H = M4×CP2. They should be equivalent.

  1. Space-time surfaces are extremals of both the volume action (minimal surfaces), which has an interpretation in terms of a length scale dependent cosmological constant, and of the Kähler action. This double extremal property reduces the conditions to purely algebraic ones with no dependence on coupling parameters, and corresponds to the universality of quantum critical dynamics. Space-time surfaces are analogs of complex sub-manifolds of a complex imbedding space (see the schematic action after this list).
  2. The second realization is in terms of analogs of Kac-Moody and Virasoro gauge conditions for a sub-algebra of the super-symplectic algebra (SSA) isomorphic to the entire SSA and acting as isometries of the "world of classical worlds" (WCW). SSA has non-negative conformal weights and generalizes Kac-Moody algebras in that there are two variables instead of a single complex coordinate z: the complex coordinate z of the sphere S2 (to which the light-cone boundary reduces metrically) and the light-like radial coordinate r of the light-cone boundary. Also the Kac-Moody type algebra assignable to the isometries of H at the light-like partonic orbits involves counterparts of z and r. A huge generalization of the symmetries of string theory is in question.
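
Schematically, and as my own reconstruction with normalization conventions assumed, the action of item 1 is a sum of the Kähler term and a volume term, and the preferred extremals extremize both terms separately, so that the coupling parameters g_K and Λ drop out of the conditions:

```latex
S \;=\; \frac{1}{4 g_K^{2}} \int_{X^4} J_{\mu\nu} J^{\mu\nu}\, \sqrt{g}\, d^4x
\;+\; \Lambda \int_{X^4} \sqrt{g}\, d^4x ,
\qquad
\delta S_K = 0 \;\;\text{and}\;\; \delta \mathrm{Vol} = 0 \;\;\text{separately}.
```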

3. Number theoretic holography

M8-H duality leads to number theoretic holography, which is even more powerful than SH.

  1. In complexified M8 - complexified octonions - the space-time surfaces would be "roots" of octonionic polynomials, which guarantees that the normal space of the space-time surface is associative/quaternionic. Associativity in this sense would fix the dynamics. These surfaces would be algebraic, whereas at the level of H the surfaces satisfy partial differential equations, which reduce to algebraic equations by the analogy with complex surfaces.
  2. M8 would be analogous to momentum space, and the space-time surface in it would be analogous to the Fermi ball. M8-H duality would generalize the q-p duality of wave mechanics, which has no generalization in quantum field theories and string models.
  3. M8-H duality would map the 4-surfaces in M8 to H = M4×CP2. A given region of the space-time surface would be determined by the coefficients of a rational polynomial. Number theoretic holography would reduce the data to a finite number of rational numbers - or to n points of the space-time region (n is the degree of the polynomial). The polynomials would give rise to an evolutionary hierarchy with n as a measure of complexity, with an interpretation in terms of the effective Planck constant heff/h0 = n (see the toy example below).
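
As a toy version of this reduction of data (my own illustration with sympy): a monic rational polynomial of degree n is fixed by its n rational coefficients, or equivalently by its n roots:

```python
from sympy import Poly, Rational, Symbol, roots

x = Symbol('x')

# Degree n = 3: the region would be coded by 3 rational coefficients.
P = Poly(x**3 - Rational(3, 2)*x + Rational(1, 2), x)

print(P.degree())      # 3: the complexity measure, heff/h0 = n
print(P.all_coeffs())  # [1, 0, -3/2, 1/2]: the finite rational data
print(roots(P))        # equivalently, the n roots fix the monic P
```
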
For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.