Friday, May 07, 2021

A chordate able to regenerate copies of itself when dissected into 3 parts

The popular article "Polycarpa mytiligera can regrow all of its organs if dissected into three pieces" describes an extraordinary biological discovery.

The creature known as Polycarpa mytiligera is a marine animal commonly found in the Gulf of Eilat that is capable of regenerating its organs. The surprising discovery was that the animal can regenerate all of its organs even when dissected into three fragments.

Such a high regenerative capacity has not been detected earlier in a chordate that reproduces only sexually. In the experiment, the researchers dissected specimens in a way that left part of the body without a nerve center, a heart, and part of the digestive system. Not only did each part of the creature survive the dissection on its own; all of the organs regenerated in each of the three sections.

This is a highly interesting challenge for TGD. Information about the full animal body was needed for a full regeneration. How was it preserved in the dissection? Was genetic information, as it is understood in standard biology, really enough to achieve this?

  1. In TGD inspired quantum biology the magnetic body (MB) carrying dark matter as heff/h0=n phases is the key notion. heff is an effective Planck constant defining the scale of quantum coherence. n is the dimension of the extension of rationals defined by the polynomial defining a space-time region, and serves as a measure of algebraic complexity and as a kind of IQ. MB with a high IQ defined by n serves as the master of the biological body (BB), controlling it and receiving information from it. The layers of MB also define abstracted representations of BB.
  2. If BB suffers damage, the information about BB is not lost at MB, and MB, which carries abstracted representations of BB and is able to control it, could restore BB at least partially. The healing of wounds would be the basic example. A more dramatic example of healing was discovered by Peoch: the neurons of the salamander brain can be shuffled like cards in a pack, but the animal recovers.

    Indeed, since nothing happens to the MB of the salamander or of Polycarpa mytiligera, recovery is in principle possible. The new finding gives additional support for MB as a carrier of the biological information.

One can also ask questions about the recovery process itself. Could recovery be seen as a self-organization process of some kind?
  1. In the TGD framework, quantum measurement theory relies on zero energy ontology (ZEO), which solves its basic problem. The basic prediction is that in the TGD counterparts of ordinary state function reductions ("big" SFRs or BSFRs) time reversal takes place. In small SFRs (SSFRs), identifiable as analogs of "weak" measurements, the arrow of time is preserved. ZEO also makes it possible to understand why the Universe looks classical in all scales although BSFRs occur in all scales at the dark onion-like layers of MB controlling the lower layers, with ordinary biomatter at the bottom of the hierarchy.
  2. Time-reversed dissipation after BSFR looks like self-organization from the perspective of an outsider with the standard arrow of time, call him briefly O, and would be the basic self-organization process in living systems. In dissipation gradients disappear, but in time-reversed dissipation they appear from the perspective of O.
  3. This also makes possible self-organized quantum criticality (SOQC), which is impossible in standard thermodynamics because criticality by definition means instability. The change of the arrow of time changes the situation from the perspective of O since the time-reversed system tends to approach criticality. Homeostasis would rely on SOQC rather than on extremely complex deterministic control programs as in the computerism-based picture. Change the arrow of time for a subsystem and let it happen. A very Buddhist approach to healing!
  4. The change of the arrow of time would also be central in healing processes and in regeneration.
For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Sunday, May 02, 2021

AI research may have hit a dead end

I found a link to a very interesting article titled "Artificial intelligence research may have hit a dead end", followed by the comment "'Misfired' neurons might be a brain feature, not a bug — and that's something AI research can't take into account" (see this).

Also Philip K. Dick's 1968 sci-fi novel, "Do Androids Dream of Electric Sheep?" is mentioned.  Would an intelligent robot  (if it were still a robot) dream?

AI models the brain as a deterministic computer. A computer does not dream: it does just what is needed to solve a highly specialized problem (just what a top specialist does in his job; the computer is the idol of every professional highflier).

Computerism assumes physicalism, denying such things as genuine free will, but this is not seen as a problem. Also mainstream neuroscientists believe in physicalism. Some computational imperialists even claim that physics reduces to computation.

1. Is 95 per cent of brain activity mere noise?

What might be called the neuroscience of fluctuations has however led to a strange conclusion: 95 per cent of the brain's activity, and therefore of its metabolic energy, seems to go to generating fluctuations, which in standard neuroscience represent noise. Neuroscientists have routinely averaged out this "noise" and concentrated on the study of what can be regarded as motor actions and sensory input. These contributions seem to represent only ripples in a vast sea of activity.

[Amusingly, junk DNA corresponds to 95 per cent of DNA in the case of humans, as the article observes.]

By the way, EEG is still often regarded as mere noise. This represents a similar puzzle: why would the brain use a lot of metabolic energy to send information to outer space? Coding information about the contents of consciousness and the brain state indeed requires a lot of metabolic energy. To sum up, the brain seems to be diametrically opposite to a computer in the sense that spontaneous fluctuations are poison for a computer but food for the brain.

What the article suggests is that this 95 per cent could correspond to "dreaming", that is imagination. The ability to imagine would give rise to intelligence, rather than the property of being a dead automaton. Dreams would be freely associating cognitive fluctuations - whatever that might mean physically. Interestingly, it is mentioned that newborns dream twice as much as adults: they must learn. One can learn by imagining, not merely by making all possible mistakes in the real world.

What can one say about these findings in the TGD framework?

2. Could fluctuations be induced by quantum fluctuations in the quantum critical Universe of TGD?

Consider first the TGD interpretation of quantum fluctuations.

  1. The TGD Universe is quantal in all scales. Zero energy ontology (ZEO) makes it possible to overcome the basic objection that the universe looks classical in long scales: the ZEO view of quantum jumps forces the Universe to look classical to an outsider. The experiments of Minev et al indeed demonstrated this concretely.
  2. The TGD Universe is also quantum critical in all scales. Quantum criticality means that the system is maximally complex and sensitive to perturbations. Complexity means that the system is ideal for representing the external world via sensory inputs. By criticality, implying maximal sensitivity, it is also an ideal sensory receptor and motor instrument.
  3. The basic characteristic of criticality is long range fluctuations. These are not random noise but are highly correlated. Could the fluctuations in the brain correspond to quantum fluctuations?
Long range quantum fluctuations are not possible for the ordinary value of the Planck constant.
  1. The number theoretical view of TGD, generalizing the ordinary physics of sensory experience to a physics of both sensory experience and cognition, leads to the prediction that there is an infinite hierarchy of phases of ordinary matter, identifiable as dark matter and labelled by the values of the effective Planck constant heff= nh0, where n is the dimension of an extension of rationals defined by the polynomial determining a space-time region.
  2. The value of n serves as a measure of complexity and therefore defines a kind of IQ. The longer the scale of quantum fluctuations, the higher the value of n, the larger the heff, and the longer the scale of quantum coherence. Fluctuations would make the brain intelligent. Their absence would make the brain a complete idiot - an ideal computer.
  3. The higher the value of heff, the larger the energy of the particle when other parameters are kept constant. This means that intelligence requires a metabolic energy feed to increase heff and maintain its value, since heff tends to be spontaneously reduced.
One can however argue that since the brain consists of ordinary matter,  brain fluctuations cannot be quantal. 
  1. In TGD they would be induced by quantum fluctuations at the level of the magnetic body (MB), which has a hierarchical onion-like structure. The dark matter would consist of ordinary particles with heff=nh0 at MB, and since heff/h0 serves as a measure of IQ, it would be higher for dark matter than for ordinary biomatter. MB containing dark matter would be the "boss" controlling the biological body (BB).
  2. The quantum coherence of MB would force ordinary coherence of ordinary biomatter. Ordinary matter would be like soldiers obeying orders and in this manner behaving apparently like a larger coherent unit.
MB would receive sensory input from BB and control it by using EEG realized as dark photons. This would explain EEG and its probably existing scaled variants.

3. TGD view about sensory perception, motor actions, and dreaming and imagination

The proposal of the article was that most of the brain activity goes to "dreaming". Dreaming, hallucinations, and imagination are poorly understood notions in neuroscience. TGD provides a rather detailed view of these notions.

  1. What distinguishes TGD from neuroscience is that sensory receptors are assumed to serve as the carriers of sensory percepts. Zero energy ontology (ZEO), providing a new view of time and memory, makes it possible to answer the basic objection based on the phantom limb phenomenon: the pain in a phantom limb would be a sensory memory.
  2. The assumption that sensory percepts are artworks rather than passive records of sensory input requires virtual sensory input from the brain to the sensory organs and the build-up of the final percept by pattern recognition - an iterative procedure involving very many back-and-forth signals. Nerve pulse transmission is far too slow a process to allow this, and signals propagating with maximal signal velocity are suggestive.
  3. Nerve pulses and neurotransmitters would not represent real communication but would give rise to temporary intra-brain communication lines along which communications would take place with maximal signal velocity using dark photons (characterized by heff/h0=n) transforming to biophotons in an energy conserving manner.

    Neurotransmitters and also other information molecules (hormones, messengers) attached to receptors would serve as bridges fusing permanent but disjoint communication lines along axons to a connected temporary communication line for dark photons to propagate along. Nerve pulses would also generate generalized Josephson radiation allowing communications between the biological body (BB) and the magnetic body (MB) using EEG. The meridian system could be a permanently connected system of communication lines.

    This picture leads to a concrete proposal about the roles of DMT and the pineal gland in imagination, dreams, and hallucinations.

Returning to the original topic, the natural question is the following: how large a fraction of the 95 per cent of brain activity goes to feedback not present in the brain of standard neuroscience? This would include the construction of the feedback to sensory organs as virtual sensory input to build standardized mental images. Dreams are a special case of this. There is also the virtual sensory input which does not reach the sensory organs and gives rise to imagination, in particular internal speech.

A similar picture applies to virtual motor input and the construction of motor output as "standardized motor patterns" - this notion makes sense only in ZEO. Note that the feedback loop could extend from the brain to MB.

There is also an interesting finding related to motor activities. In experiments on rats it was found that spontaneous brain activity increases dramatically as the rat moves. This brings to mind a lecturer who moves back and forth as he talks. This rhythmic motion could give rise to a brain/body rhythm coupling the lecturer to a layer of MB with large heff. The quantum coherence of MB would induce ordinary coherence of BB in the body scale and raise the "IQ" of the lecture. Thinking requires motion!


The notion of holography in TGD framework

Thanks to Bob Tang for the link to Sabine Hossenfelder's article about holography. I will not comment on the content of the link but on the TGD view of holography.

What "Universe as a hologram" really means must first be defined. In pop physics this notion has remained very loose. In the following I summarize the TGD based view of what holography means in the geometric sense.

In TGD, 3-D surfaces are the basic objects and replace 3-space. Holography is not a new principle but reduces to general coordinate invariance.

1. "Ordinary" holography

General coordinate invariance in the 4-D sense requires that 3-surfaces correspond to a single 4-D surface - space-time - at which general coordinate transformations act. The space-time surface is like a Bohr orbit, a preferred extremal of the action defining the space-time surface.

This is nothing but holography in the standard sense and leads to zero energy ontology (ZEO), meaning that quantum states are superpositions of 3-surfaces or, equivalently, of 4-D surfaces.

[The alternative to ZEO would be the path integral approach, which is mathematically ill-defined and makes no sense in the TGD framework due to horrible divergence difficulties.]

ZEO has profound implications for quantum theory itself: it solves the measurement problem and also implies that the arrow of time changes in "big" (ordinary) state function reductions as opposed to "small" SFRs ("weak" measurements). Also the question at which length scale quantum behavior transforms to classical becomes obsolete.

2. Strong form of holography (SH)

Besides the space-like 3-surfaces at the boundaries of the causal diamond (CD) serving as the ends of the space-time surface (initial value problem), there are light-like surfaces at which the signature of the metric changes from Minkowskian to Euclidian (boundary value problem). The Euclidian regions correspond to fundamental particles from which elementary particles are made.

If either space-like or light-like 3-surfaces are assumed to be enough as data for holography (the initial value problem is equivalent with the boundary value problem), the conclusion is that their intersections as partonic 2-surfaces are enough as data. This would give rise to a strong form of holography, SH.

Intuitive arguments suggest several alternative mathematical realizations for the holography for space-time surfaces in H=M4×CP2. They should be equivalent.

  1. Space-time surfaces are extremals of both the volume action (minimal surfaces), having an interpretation in terms of a length scale dependent cosmological constant, and of the Kähler action. This double extremal property reduces the conditions to purely algebraic ones with no dependence on coupling parameters. This corresponds to the universality of quantum critical dynamics. Space-time surfaces are analogs of complex sub-manifolds of a complex imbedding space.
  2. The second realization is in terms of analogs of Kac-Moody and Virasoro gauge conditions for a sub-algebra of the super-symplectic algebra (SSA) isomorphic with the entire SSA and acting as isometries of the "world of classical worlds" (WCW). SSA has non-negative conformal weights and generalizes Kac-Moody algebras in that there are two variables instead of a single complex coordinate z: the complex coordinate z of the sphere S2 (to which the light-cone boundary reduces metrically) and the light-like radial coordinate r of the light-cone boundary. Also the Kac-Moody type algebra assignable to the isometries of H at the light-like partonic orbits involves the counterparts of z and r. A huge generalization of the symmetries of string theory is in question.

3. Number theoretic holography

M8-H duality leads to number theoretic holography, which is even more powerful than SH.

  1. In complexified M8 - complexified octonions - space-time surfaces would be "roots" of octonionic polynomials, guaranteeing that the normal space of the space-time surface is associative/quaternionic. Associativity in this sense would fix the dynamics. These surfaces would be algebraic, whereas at the level of H the surfaces satisfy partial differential equations reducing to algebraic equations due to the analogy with complex surfaces.
  2. M8 would be analogous to momentum space and the space-time surface in it analogous to a Fermi ball. M8-H duality would generalize the q-p duality of wave mechanics, which has no generalization in quantum field theories and string models.
  3. M8-H duality would map the 4-surfaces in M8 to H=M4×CP2. A given region of the space-time surface would be determined by the coefficients of a rational polynomial. Number theoretic holography would reduce the data to a finite number of rational numbers - or n points of the space-time region (n is the degree of the polynomial). The polynomials would give rise to an evolutionary hierarchy with n as a measure of complexity, having an interpretation in terms of the effective Planck constant heff/h0=n.
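A toy illustration of how little data this is: assuming, purely for illustration, that the holographic data are the n rational roots of a monic polynomial, the polynomial is reconstructed exactly from those n rationals (the function name and the monic normalization are my own choices, not part of TGD):

```python
from fractions import Fraction

def monic_from_roots(roots):
    """Expand prod_i (x - r_i) into exact rational coefficients,
    highest power first: n rational roots fix a monic degree-n polynomial."""
    coeffs = [Fraction(1)]                                  # the constant polynomial 1
    for r in roots:
        r = Fraction(r)
        shifted = coeffs + [Fraction(0)]                    # multiply current polynomial by x
        scaled = [Fraction(0)] + [-r * c for c in coeffs]   # multiply current polynomial by -r
        coeffs = [a + b for a, b in zip(shifted, scaled)]
    return coeffs

# roots 1, 2, 3 give x^3 - 6x^2 + 11x - 6: four rational numbers of data
print(monic_from_roots([1, 2, 3]))  # coefficients 1, -6, 11, -6
```

Exact rational arithmetic (fractions.Fraction) is used so that the finite rational data determine the polynomial with no rounding at all.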

Thursday, April 22, 2021

The rational and intuitive modes of problem solving from the TGD point of view

Reza Rastmanesh sent me a link to an article titled "The Impact of the Mode of Thought in Complex Decisions: Intuitive Decisions are Better". The following musings are inspired by this article.

As one learns from the article, it seems that problem solving and decision making rely on two basic approaches which correspond to the right-left brain dichotomy.

  1. Rational thinking is in the ideal case error free, provided the basic assumptions are correct and the data are reliable. It is however mechanical and cannot lead to "eurekas". Computers can nowadays do it more reliably than humans. In mathematics, Goedel's theorem tells that mere rational deduction is not enough even for basic arithmetic: in a given axiomatic system there is an infinite number of non-provable truths.
  2. The intuitive approach is less reliable but can be much faster; it is holistic, based on affect rather than cold rationality, and can lead to new insights which only afterwards can be deduced, possibly only by adding some new basic assumption. In this case one can speak of a discovery.

What looks paradoxical is that besides the induction of an affective mood favoring intuitive problem solving, distraction is one way to induce intuitive thought. In the TGD framework, the interpretation would be that distraction forces one to give up the attempt to solve the problem at the level conscious to me - I am simply too stupid - and delegates the problem to a higher level of the hierarchy (layers of the magnetic body) representing a higher level of abstraction (see this) and a more holistic view. This would make it possible to solve the problem.

A real life example of the connection with distraction is in order. In a problem solving mood, I find that simple tasks of everyday life become difficult. I decide to do something, start to do it, but decide to do also something else at the same time, do it, and then realize that I do not remember what I had decided to do primarily, or even that I had decided to do something. I have seriously asked myself whether these are the first signs of dementia. The fact however is that this has always been the case - more or less.

My friends however tell me that there is no reason to worry, I am just what is called "absent minded professor". Perhaps I am indeed just absent minded - or almost permanently distracted - but certainly never a professor if this depends on colleagues.

I have many times experienced in real life that the intuitive approach is more reliable than rational thinking when one must make decisions. I still find it difficult to confess that I have been cheated many times, but I must do it now. I have felt from the beginning that this was happening, but my rational mind has forced me to believe that this is not the case. I have not wanted to insult the swindlers by somehow suggesting that I am not quite sure about their real motives.

Sleeping over night would be a basic example of this delegation of the problem to a higher intelligence. From personal experience sleeping over night is for me almost the only manner to get new ideas and solve problems which do not reduce to mere mechanical calculations. Often the problem and its solution pop up simultaneously during morning hours and going to the computer makes it possible to write out the details. The attempt to solve a problem by hard thinking later during the day does not lead anywhere.

An example of this relates to my own work. When some new idea has emerged, I have sometimes given it up after some rational thought. Later it has however turned out that the idea made sense after all, but for different reasons than I had thought.

A concrete example relates to dark matter identified as heff=n×h0≥h phases of ordinary matter at the magnetic body in the recent TGD based model. The problem was the following.

Blackman and many others observed in the seventies that ELF radiation in the EEG range has strange effects on the behavior of vertebrates, visible also physiologically. These effects looked quantal. This however does not make sense in standard quantum theory, since the energies involved are incredibly small and far below thermal energies. For this reason the mainstream refused to take the effects seriously and they were forgotten.

  1. My first proposal was based on the notion of many-sheeted space-time. Perhaps the photons and ions responding to them were at space-time sheets at which the temperature is extremely low so that the thermal objection does not bite.
  2. Then I entered a different idea. Perhaps the value of the Planck constant varies and one has a very large value heff=n×h0 of the effective Planck constant. n would correspond to the number of identical space-time sheets of the space-time surface as a covering space. This led to a nice theory, and later I could deduce it from a number theoretic vision unifying real and various p-adic physics to adelic physics describing the correlates of both sensory experience and cognition.
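The thermal objection can be quantified with a back-of-the-envelope estimate. The 10 Hz frequency and the physiological temperature below are my illustrative choices, not values fixed by the experiments:

```python
# Photon energy E = heff*f versus thermal energy kT: how far below the
# thermal scale an ordinary ELF photon is, and how large heff/h would have
# to be to reach it. Numbers are illustrative, not TGD predictions.
h = 6.626e-34   # Planck constant, J*s
k = 1.381e-23   # Boltzmann constant, J/K
f = 10.0        # ELF frequency in the EEG range, Hz
T = 310.0       # physiological temperature, K

E_photon = h * f                   # energy of an ordinary ELF photon
E_thermal = k * T                  # thermal energy scale
n_needed = E_thermal / E_photon    # heff/h required to reach thermal energy
print(f"E_photon/E_thermal = {E_photon / E_thermal:.1e}")
print(f"heff/h needed      = {n_needed:.1e}")
```

For 10 Hz the ordinary photon energy is roughly 12 orders of magnitude below kT at body temperature, so a value of heff/h of order 10^11-10^12 or more is needed before the quantal effects can compete with thermal noise.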

As I thought about this last night, a question popped up. Could the original approach be correct after all? Could the heff approach be wrong? This would destroy 15 years of work: horrible! Or could these two approaches be consistent? This turned out to be the case!

  1. The temperature at the flux tubes and flux quanta of the magnetic body (MB) is in general below the Hagedorn temperature TH dictated by the flux tube thickness: the reason is that the number of geometric degrees of freedom is infinite. A flux tube behaves in a good approximation like a string, and the notion of TH emerged in string models. For instance, in living matter TH corresponds to the physiological temperature, around 37 degrees Celsius for humans.
  2. TH is associated with dark matter with heff=n×h0, where n is the number of space-time sheets of the covering. TH characterizes the n-sheeted structure. What is the temperature at a single sheet of the covering?
  3. Thermal energy is proportional to the temperature. For an n-sheeted structure, the additivity of the thermal energies of the n identical sheets gives TH = n×TH(sheet), implying

    TH(sheet) = TH/n.

    For the huge values of heff and thus of n, TH(sheet) is indeed extremely small! The original explanation is consistent with the number theory based explanation! Trust your intuition! But be cautious!
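As a sanity check of the arithmetic, with the physiological value TH = 310 K and an illustrative magnitude n = 10^12 for the number of sheets (the argument itself does not fix n):

```python
# Sheet temperature TH(sheet) = TH/n from additivity of thermal energy over
# n identical sheets. TH = 310 K is the physiological value from the text;
# n = 1e12 is an illustrative magnitude for heff/h0, not a derived value.
TH = 310.0    # Hagedorn temperature, K (about 37 degrees Celsius)
n = 1e12      # number of sheets of the covering, heff = n*h0
TH_sheet = TH / n
print(f"TH(sheet) = {TH_sheet:.1e} K")  # of order 1e-10 K: extremely low
```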


Tuesday, April 13, 2021

Does muon's anomalous anomalous magnetic moment imply new physics?

Lepton universality predicts that the magnetic moments of leptons should be the same apart from corrections due to the different masses. Besides the magnetic moment predicted by the Dirac equation, leptons also have an anomalous magnetic moment, which is predicted to come from various radiative corrections.

The standard model predictions for the anomalous magnetic moments are ae = (ge-2)/2 = .00115965218091 for the electron and aμ = (gμ-2)/2 = .00116591804 for the muon.

The anomalous magnetic moments of the electron and the muon differ by about .5 per cent. This breaking of universality is however due to the different masses of the electron and the muon rather than to different interactions.
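The figure can be checked directly from the two values quoted above:

```python
# Relative difference of the electron and muon anomalies quoted above.
a_e = 0.00115965218091    # (g_e - 2)/2 for the electron
a_mu = 0.00116591804      # (g_mu - 2)/2 for the muon
rel = (a_mu - a_e) / a_e
print(f"relative difference = {100 * rel:.2f} per cent")  # about 0.54
```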

1. The finding of the Fermilab experiment

The breaking of universality could also come from interactions, and the Fermilab experiment (see this) together with earlier experiments suggests this. The experiment shows that in the case of the muon the magnetic moment differs from the predicted one: the deviation from the standard model prediction is about 2.5×10-4 per cent. This indicates that there might be interactions violating lepton universality. Besides the problem with the muon's magnetic moment, which differs from that of the electron, there is also a second problem. The decays of B mesons seem to break the universality of fermion interactions: indications for the breaking of universality have emerged over the years, so this is not new.

The measurement result involves various sources of error, and one can estimate the probability that the outcome is due to this kind of random fluctuation. The number of standard deviations tells how far the measurement result is from the maximum of the probability distribution, the deviation being expressed using the standard deviation - essentially the width of the distribution - as a unit. For instance, 4 standard deviations means that the probability that the result is a random fluctuation is about .006 per cent. For 5 standard deviations the probability is about .00006 per cent, which is regarded as the discovery limit.
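For the record, the correspondence between standard deviations and probabilities follows from the Gaussian distribution (the two-sided convention is assumed here; the one-sided convention gives values smaller by a factor of two):

```python
import math

def two_sided_p(sigma):
    """Two-sided probability that a Gaussian fluctuation exceeds
    the given number of standard deviations."""
    return math.erfc(sigma / math.sqrt(2))

for s in (4, 5):
    print(f"{s} sigma: p = {two_sided_p(s):.1e}")
# 4 sigma: p ~ 6.3e-05 (about .006 per cent)
# 5 sigma: p ~ 5.7e-07 (about .00006 per cent, the discovery limit)
```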

2. Theoretical uncertainties

There are also theoretical uncertainties related to the calculation of the magnetic moment. There are 3 contributions: electroweak, QCD, and hadronic. The electroweak and QCD corrections are "easily" calculable. The hadronic contributions are difficult to estimate since perturbative QCD does not apply at hadronic energies. There are groups which claim that their estimate of the hadronic contributions produces a prediction consistent with the Fermilab finding and with the earlier findings.

The prediction based on the experimentally deduced R ratio, characterizing the rate for the decay of a virtual photon to a quark pair, allows one to estimate the hadronic contribution and gives a prediction which is in conflict with the experimental findings. On the other hand, calculations based on lattice QCD give a result consistent with the experimental value (see this). Should one trust experiment or theory?

3. Is a wider perspective needed?

In my opinion, one should see the problem from a wider perspective than as a question about how accurate the standard model is.

  1. The Standard Model does not explain fermion families. Also GUTs fail in this respect: the mass ratios of fermions vary in a range spanning 11 orders of magnitude. This is not a small gauge symmetry breaking but something totally different: mass scale is the appropriate notion, and the p-adic length scale hypothesis provides it.
  2. One must also challenge the belief that lattice QCD can describe low energy hadron physics. There might be much deeper problems than the inability to compute the hadronic contributions to g-2. Perturbative QCD describes only high energy interactions, and QCD might exist only in the perturbative sense. The fact is that a theory of low energy hadron physics is virtually non-existent. Saying this aloud of course irritates lattice QCD professionals, but the reduction of QCD to thermodynamics in Euclidian space-time looks to me implausible. There are deep problems with the Wick rotation.

    For instance, the massless dispersion relation E2-p2=0 in M4 translates to E2+p2=0 in E4: massless fields disappear completely since one has only the E=0, p=0 zero mode. There are similar problems with the massless Dirac equation. For the massive case the situation is not quite as bad. There is the strong CP problem caused by instantons, and a problem with the multiplication of spinor degrees of freedom, since the 4-D cube has the topology of the 4-torus and allows 16 spinor structures.

    Quarks explain only a few per cent of the hadron mass, just as ordinary matter explains only a few per cent of the mass in cosmology. Hadron physics might therefore involve something totally new, and the color interaction could differ from a genuine gauge interaction.

4. What can TGD say about the family replication phenomenon?

In the TGD framework, the topological explanation of the family replication phenomenon, identifying partonic 2-surfaces as the fundamental building blocks of elementary particles, provides the needed understanding and predicts 3 different fermion generations corresponding to the 3 lowest genera: sphere, torus, and sphere with two handles (see this).

Conformal Z2 symmetry of the partonic 2-surfaces is present for the lowest 3 genera but not for the higher ones, for which one must talk about many-handle states with a continuous mass spectrum. p-Adic thermodynamics allows one to estimate the masses of the new bosons by simple scaling arguments and the Mersenne prime hypothesis.

In the TGD framework the two findings can be seen as indications for the failure of lepton universality. Besides 3 light fermion generations, TGD also predicts 3 light generations for electroweak bosons, gluons, and Higgs. These generations are more massive than the weak bosons, and the p-adic length scale hypothesis also allows one to estimate their masses.

The couplings of the lightest generation to the gauge bosons obey fermion universality (they are identical) but the couplings of the 2 higher generations cannot do so, since the charge matrices of the 3 generations must be orthogonal to each other. This predicts a breaking of fermion universality, which in the quantum field theory approximation comes from the loops coupling fermions to the 2 higher boson generations.
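The orthogonality argument can be made concrete with diagonal charge matrices in the (e, μ, τ) basis. The specific matrices below are a standard orthonormal choice assumed here for illustration; only their mutual orthogonality is dictated by the argument:

```python
import math

# Illustrative diagonal charge matrices for the 3 boson generations,
# orthonormal in the trace inner product Tr(Q_i Q_j). Only the diagonals
# are listed, since the matrices are diagonal in the (e, mu, tau) basis.
Q1 = [1 / math.sqrt(3)] * 3                       # universal couplings
Q2 = [1 / math.sqrt(2), -1 / math.sqrt(2), 0.0]   # necessarily non-universal
Q3 = [1 / math.sqrt(6), 1 / math.sqrt(6), -2 / math.sqrt(6)]

def trace_ip(a, b):
    """Trace inner product Tr(A B) for diagonal matrices."""
    return sum(x * y for x, y in zip(a, b))

for a, b in [(Q1, Q2), (Q1, Q3), (Q2, Q3)]:
    assert abs(trace_ip(a, b)) < 1e-12
# Orthogonality to Q1 forces Q2 and Q3 to be traceless, so their couplings
# to e, mu, tau cannot all be identical: universality breaks for them.
```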

This prediction provides a test for the TGD based topological view of the family replication phenomenon in terms of the genus of the partonic 2-surface: the partonic 2-surface can be a sphere, a torus, or a sphere with two handles. TGD also explains why higher generations are experimentally absent.

5. What does TGD say about low energy hadron physics?

There is also the question of whether QCD captures all aspects of strong interactions. In TGD, color magnetic flux tubes carry Kaehler magnetic energy and volume energy parametrized by a length scale dependent cosmological constant, so that a connection with cosmology indeed emerges. The reconnections of U-shaped flux tubes give rise to the TGD counterparts of the meson exchanges of old-fashioned hadron physics. See this.

    The color group need not be a gauge group but can be analogous to a Kac-Moody group or a Yangian (only non-negative conformal weights). In the TGD framework, SU(3) at the level of M4xCP2 is not a gauge symmetry but acts as isometries of CP2, and fermions carry color not as an analog of spin but as an angular-momentum-like quantum number. At the level of complexified M8, SU(3) is a subgroup of G2 acting as octonion automorphisms and defines a Yangian replacing the local gauge group.

    For the TGD based model see this and this.

    For a summary of earlier postings see Latest progress in TGD.

    Articles and other material related to TGD.

Friday, April 09, 2021

EEG and the structure of magnetosphere

Roughly 15 years ago I proposed the idea that Earth's magnetosphere (MS) could serve as a sensory canvas in the sense that biological systems, in particular the vertebrate brain, could have sensory representations realized at the "personal" magnetic body (MB) closely associated with the MS of the Earth. EEG would make communication with and control by the MB possible.

At that time I did not yet have the idea about the number theoretical realization of the hierarchy of Planck constants heff=nh0 in the framework of adelic physics, which fuses the physics of sensory experience and cognition. This hierarchy is crucial for understanding the basic aspects of living matter such as metabolism, coherence in long scales, correlates of cognition, and even evolution.

Also the concept of zero energy ontology (ZEO), which now forms the basis of quantum TGD, was missing, although there was already the idea about communication to the past using negative energy signals. ZEO is now in a central role in the understanding of self-organization - not only the biological one. The new view about time, predicting that time reversal occurs in ordinary state function reductions (SFRs), allows one to understand homeostasis as self-organized quantum criticality.

For these reasons it is interesting to consider the notion of sensory canvas from the new perspective. This article discusses, besides the earlier ideas about the MS, also the proposal that it is possible to associate EEG bands with regions of the MS via a correspondence between EEG frequency and the distance of the region from Earth. Also the idea that the structure of the MS could be a fractal analog of the vertebrate body is tested quantitatively by comparing the various scales involved.
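Assuming - as a simplifying guess of mine, not a statement from the article - that the frequency-distance correspondence is simply the electromagnetic wavelength λ = c/f, the familiar EEG bands map to magnetospheric distances roughly as follows:

```python
# Back-of-the-envelope sketch: EEG band frequency -> distance scale in Earth radii,
# under the assumed correspondence λ = c/f (my assumption, for illustration only).
C = 2.998e8        # speed of light, m/s
R_EARTH = 6.371e6  # Earth radius, m

# Representative frequencies for the standard EEG bands, Hz.
EEG_BANDS_HZ = {"delta": 2, "theta": 6, "alpha": 10, "beta": 20, "gamma": 40}

for band, f in EEG_BANDS_HZ.items():
    wavelength = C / f
    print(f"{band:>5}: f = {f:5.1f} Hz, lambda = {wavelength / R_EARTH:6.1f} Earth radii")
```

On this guess the alpha band (10 Hz) corresponds to roughly 4.7 Earth radii, while delta frequencies reach scales of tens of Earth radii, i.e. the outer magnetosphere and tail.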

See the article EEG and the structure of magnetosphere or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Three alternative generalizations of Nottale's hypothesis in TGD framework

Nottale's gravitational Planck constant ℏgr = GMm/v0 contains the velocity parameter v0 as its only parameter. In the perturbative expansion of the scattering amplitudes, β0=v0/c appears in the role of the fine structure constant.
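The orbital consequence of the formula can be sketched numerically (my own illustration; v0 ≈ 144.7 km/s is Nottale's fitted value for the inner solar system):

```python
# Gravitational Bohr orbits from hbar_gr = GMm/v0 (illustrative sketch).
# The Bohr radius for the potential -GMm/r with hbar -> hbar_gr is
# r_n = n^2 * hbar_gr^2 / (GM m^2) = n^2 * GM / v0^2: the mass m drops out,
# as required by the Equivalence Principle.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m
V0 = 1.447e5         # Nottale's velocity parameter, m/s

def bohr_radius(n: int) -> float:
    """Radius of the n:th gravitational Bohr orbit, r_n = n^2 GM/v0^2."""
    return n**2 * G * M_SUN / V0**2

for n in (3, 4, 5):
    print(f"n = {n}: r = {bohr_radius(n) / AU:.2f} AU")
# Nottale's fit assigns Mercury, Venus and the Earth to n = 3, 4, 5.
```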

There is however a problem.

  1. The model for the effects of ELF radiation on the vertebrate brain, inspired by a generalization of Nottale's hypothesis in which the total mass M is replaced in the case of Earth by MD ≈ 10^-4 ME, suggests that in this case the dark particles involved couple only to a part of the mass, identifiable as the dark mass MD.
  2. Since only GM appears in the basic formulas, an alternative option is that the value of G is reduced to GD. This conforms with the fact that in the TGD framework the CP2 length is the fundamental parameter: G is a prediction of the theory and can therefore vary.
  3. A further option is that the parameter β0=v0/c ≤ 1 is variable and equals either β0=1 or a value not much smaller than 1, say β0=1/2.
These three options are critically discussed and compared. The cautious conclusion is that the third option is the most plausible one.

See the article Three alternative generalizations of Nottale's hypothesis in TGD framework or the chapter About the Nottale's formula for hgr and the relation between Planck length and CP2 length.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD. 

Friday, April 02, 2021

Does Goedel's incompleteness theorem hold true for reals?

I have many times wondered whether the incompleteness theorem extends to real numbers, which are usually the stuff used by physics. There is a very nice discussion of this point here. Strongly recommended.

The first-order theories of the real numbers (as a real closed field) and of algebraically closed number fields, such as the complex numbers and the algebraic numbers, are complete: all truths are provable. If physics is based on complex numbers or algebraic numbers, Goedel's theorem has no direct implications for physics. This however implies that the integers cannot be characterized using the axiomatics of these number fields, since if this were the case, Goedel's incompleteness theorem would not hold true for integer arithmetic. One can also say that the Goedel numbers of unprovable theorems are not expressible as natural numbers but are more general real or complex numbers.

Since the theory of the algebraic numbers is complete, a good guess is that algebraic numbers label all true statements about integer arithmetic and also about the arithmetic of algebraic integers for extensions of rationals.

In TGD, adelic physics defines the correlates for cognition. The adeles form a hierarchy labelled by algebraic extensions of rationals (perhaps also extensions involving roots of e, since e^p is a p-adic number). These are not complete, and Goedel's incompleteness theorem applies to them. Only at the never-achievable limit of algebraic numbers does the system become complete. This strongly suggests a generalization of Turing's view about computation by replacing integer arithmetic with a hierarchy of arithmetics of algebraic integers associated with extensions of rationals. See this article.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, March 30, 2021

Does the quantal gravitational force vanish below critical radius in average sense?

Nottale's gravitational Planck constant hbargr = GMDm/v0 contains the dark mass MD as a parameter. At the surface of Earth, MD is much smaller than the Earth's mass ME, and for the planets one has MD=MSun. It turns out that in the average sense MD must grow to M. This is required by the condition that the Bohr radii correspond to the classical radii in the average sense. The actual dependence of MD on r is expected to be a staircase-like function.

At the quantum level, this effectively eliminates the average gravitational force in scales below the critical radius rcr above which MD=M holds. Indeed, due to the average MD ∝ r dependence, the gravitational potential would be constant on the average.
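The claim can be written out explicitly, taking the simplest linear interpolation MD(r) = M r/rcr for the average staircase (my choice of interpolation for illustration):

```latex
V(r) \;=\; -\frac{G\,M_D(r)\,m}{r}
      \;=\; -\frac{G M m}{r_{cr}} \;=\; \mathrm{const},
      \qquad r < r_{cr},
\qquad\Longrightarrow\qquad
\langle F_r \rangle \;=\; -\frac{dV}{dr} \;=\; 0,
```

while for r ≥ rcr one has MD = M and the ordinary potential V(r) = -GMm/r, matching continuously at r = rcr.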

Could one regard this effective elimination of the gravitational force as a kind of Quantum Equivalence Principle, or as an analog of asymptotic freedom?

See the article Two alternative generalizations of Nottale's hypothesis or the chapter About the Nottale's formula for hgr and the relation between Planck length and CP2 length.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.