Friday, May 07, 2021

A chordate able to regenerate copies of itself when dissected into 3 parts

The popular article Polycarpa mytiligera can regrow all of its organs if dissected into three pieces describes an extraordinary biological discovery.

The creature known as Polycarpa mytiligera is a marine animal commonly found in the Gulf of Eilat that is capable of regenerating its organs. The surprising discovery was that the animal can regenerate all of its organs even when dissected into three fragments.

Such a high regenerative capacity has not been detected earlier in a chordate that reproduces only sexually. In the experiment, the researchers dissected specimens in a manner that left part of the body without a nerve center, heart, and part of the digestive system. Not only did each part of the creature survive the dissection on its own, all of the organs regenerated in each of the three sections.

This is a highly interesting challenge for TGD. Information about the full animal body was needed for full regeneration. How was it preserved in the dissection? Was genetic information, as it is understood in standard biology, really enough to achieve this?

  1. In TGD inspired quantum biology the magnetic body (MB) carrying dark matter as h_eff/h_0=n phases is the key notion. h_eff is an effective Planck constant defining the scale of quantum coherence. n is the dimension of the extension of rationals defined by the polynomial defining a space-time region, and serves as a measure of algebraic complexity - a kind of IQ. The MB with a high IQ defined by n serves as the master of the biological body (BB), controlling it and receiving information from it. The layers of the MB also define abstracted representations of the BB.
  2. If the BB suffers damage, the information about the BB is not lost at the MB, and the MB, which carries abstracted representations of the BB and is able to control it, could restore the BB at least partially. Healing of wounds would be the basic example. A more dramatic example of healing was discovered by Peoch: the neurons of the salamander brain can be shuffled like cards in a package but the animal recovers.

    Indeed, since nothing happens to the MB of the salamander or Polycarpa mytiligera, recovery is in principle possible. The new finding gives additional support for the MB as a carrier of the biological information.

One can also ask questions about the recovery process itself. Could recovery be seen as a self-organization process of some kind?
  1. In the TGD framework, quantum measurement theory relies on zero energy ontology (ZEO) and solves its basic problem. The basic prediction is that in the TGD counterparts of ordinary state function reductions ("big" SFRs or BSFRs) time reversal takes place. In small SFRs (SSFRs), identifiable as analogs of "weak" measurements, the arrow of time is preserved. ZEO also makes it possible to understand why the Universe looks classical in all scales although BSFRs occur in all scales at the dark onion-like layers of the MB controlling the lower layers, with ordinary biomatter at the bottom of the hierarchy.
  2. Time reversed dissipation after a BSFR looks like self-organization from the perspective of an outsider with the standard arrow of time, call it briefly O, and would be the basic self-organization process in living systems. In dissipation gradients disappear, but in time-reversed dissipation they appear from the perspective of O.
  3. This also makes possible self-organized quantum criticality (SOQC), which is impossible in standard thermodynamics because criticality by definition means instability. The change of the arrow of time changes the situation from the perspective of O since the time reversed system tends to approach criticality. Homeostasis would rely on SOQC rather than on extremely complex deterministic control programs as in the computerism based picture. Change the arrow of time for a subsystem and let it happen. A very Buddhist approach to healing!
  4. The change of the arrow of time would also be central in healing processes and in regeneration.
For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Sunday, May 02, 2021

AI research may have hit a dead end

I found a link to a very interesting article titled "Artificial intelligence research may have hit a dead end", followed by the comment ""Misfired" neurons might be a brain feature, not a bug - and that's something AI research can't take into account" (see this).

Also Philip K. Dick's 1968 sci-fi novel, "Do Androids Dream of Electric Sheep?" is mentioned.  Would an intelligent robot  (if it were still a robot) dream?

AI models the brain as a deterministic computer. A computer does not dream: it does just what is needed to solve a highly specialized problem (just what a top specialist does in his job; the computer is the idol of every professional highflier).

Computerism assumes physicalism, denying such things as genuine free will, but this is not seen as a problem. Also the mainstream neuroscientist believes in physicalism. Some computational imperialists even claim that physics reduces to computerism.

1. Is 95 per cent of brain activity mere noise?

What might be called the neuroscience of fluctuations has however led to a strange conclusion: 95 per cent of the brain's activity, and therefore of its metabolic energy, seems to go to generating fluctuations, which in standard neuroscience represent noise. Neuroscientists have routinely averaged out this "noise" and concentrated on the study of what can be regarded as motor actions and sensory input. These contributions seem to represent only ripples in a vast sea of activity.

[Amusingly, junk DNA corresponds to 95 per cent of DNA in the case of humans, as the article observes.]

By the way, EEG is still often regarded as mere noise. This represents a similar puzzle: why would the brain use a lot of metabolic energy to send information to outer space? Coding of information about the contents of consciousness and the brain state indeed requires a lot of metabolic energy. To sum up, the brain seems to be diametrically opposite to a computer in the sense that spontaneous fluctuations are poison for a computer but food for the brain.

What the article suggests is that this 95 per cent could correspond to "dreaming", that is, imagination. The ability to imagine would give rise to intelligence, rather than the property of being a dead automaton. Dreams would be freely associating cognitive fluctuations - whatever that might mean physically. Interestingly, it is mentioned that newborns dream twice as much as adults: they must learn. One can learn by imagining, not merely by making all possible mistakes in the real world.

What can one say about these findings in the TGD framework?

2. Could fluctuations be induced by quantum fluctuations in quantum critical Universe of TGD?

Consider first the TGD interpretation of quantum fluctuations.

  1. The TGD Universe is quantal in all scales. Zero energy ontology (ZEO) makes it possible to overcome the basic objection that the universe looks classical in long scales: the ZEO view about quantum jumps forces the Universe to look classical for an outsider. The experiments of Minev et al indeed demonstrated this concretely.
  2. The TGD Universe is also quantum critical in all scales. Quantum criticality means that the system is maximally complex and sensitive to perturbations. Complexity means that the system is ideal for representing the external world via sensory inputs. By criticality, implying maximal sensitivity, it is also an ideal sensory receptor and motor instrument.
  3. The basic characteristic of criticality is long range fluctuations. They are not random noise but highly correlated. Could the fluctuations in the brain correspond to quantum fluctuations?
Long range quantum fluctuations are not possible for the ordinary value of the Planck constant.
     
  1. The number theoretical view about TGD, generalizing the ordinary physics of sensory experience to the physics of both sensory experience and cognition, leads to the prediction that there is an infinite hierarchy of phases of ordinary matter identifiable as dark matter and labelled by the values of the effective Planck constant heff = n×h0, where n is the dimension of an extension of rationals defined by a polynomial determining a space-time region.
  2. The value of n serves as a measure of complexity and therefore defines a kind of IQ. The longer the scale of quantum fluctuations, the higher the value of n, the larger the heff, and the longer the scale of quantum coherence. Fluctuations would make the brain intelligent. Their absence would make the brain a complete idiot - an ideal computer.
  3. The higher the value of heff, the larger the energy of the particle when the other parameters are kept constant. This means that intelligence requires a metabolic energy feed to increase heff and keep its value unchanged, since heff tends to be reduced spontaneously.
One can however argue that since the brain consists of ordinary matter,  brain fluctuations cannot be quantal. 
  1. In TGD they would be induced by quantum fluctuations at the level of the magnetic body (MB) having a hierarchical onion-like structure. The dark matter at the MB would consist of ordinary particles with heff=n×h0, and since heff/h0 serves as a measure of IQ, it would be higher for dark matter than for ordinary biomatter. The MB containing dark matter would be the "boss" controlling the biological body (BB).
  2. The quantum coherence of the MB would force the ordinary coherence of ordinary biomatter. Ordinary matter would be like soldiers obeying orders and in this manner behaving apparently like a larger coherent unit.
The MB would receive sensory input from the BB and control it by using EEG realized as dark photons. This would explain EEG and its probably existing scaled variants.
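As a rough numerical illustration of why a large heff matters here (a sketch; the value heff/h = 5×10^13 is an assumption chosen for illustration, not a value quoted in the text):

    # Energy of an EEG-frequency photon for ordinary h and for a large h_eff = n*h.
    # The value n = 5e13 is an illustrative assumption, not a prediction quoted here.
    h = 6.626e-34          # Planck constant, J*s
    eV = 1.602e-19         # joules per electron volt
    f = 10.0               # alpha band EEG frequency, Hz
    n = 5e13               # assumed h_eff/h
    print(h * f / eV)      # ~4.1e-14 eV: hopelessly below the thermal energy ~0.025 eV
    print(n * h * f / eV)  # ~2 eV: visible range, of the order of biophoton energies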

3. TGD view about sensory perception, motor actions, and dreaming and imagination

The proposal of the article was that most of the brain activity goes to "dreaming". Dreaming, hallucinations,  and imagination are poorly understood notions in neuroscience.  TGD provides a rather detailed view about these notions.

  1. What distinguishes TGD from neuroscience is that sensory receptors are assumed to serve as carriers of sensory percepts. Zero energy ontology (ZEO), providing a new view about time and memory, makes it possible to overcome the basic objection related to the phantom limb phenomenon: pain in the phantom limb would be a sensory memory.
  2. The assumption that sensory percepts are artworks rather than passive records of sensory input requires a virtual sensory input from the brain to the sensory organs and the build-up of the final percept by pattern recognition - an iterative procedure involving very many back-and-forth signals. Nerve pulse transmission is far too slow a process to allow this, and signals propagating with maximal signal velocity are suggestive.
  3. Nerve pulses and neurotransmitters would not represent real communication but would give rise to temporary intra-brain communication lines along which communications would take place with maximal signal velocity as dark photon signals (characterized by heff/h0=n) transforming to biophotons in an energy conserving manner.

    Neurotransmitters and also other information molecules (hormones, messengers) attached to receptors would serve as bridges fusing permanent but disjoint communication lines along axons into a connected temporary communication line for dark photons to propagate along. Nerve pulses would also generate generalized Josephson radiation allowing communications between the biological body (BB) and the magnetic body (MB) using EEG. The meridian system could be a permanently connected system of communication lines.

    This picture leads to a concrete proposal about the roles of DMT and the pineal gland concerning imagination, dreams, and hallucinations.

Returning to the original topic, the natural question is the following: how large a fraction of the 95 per cent of brain activity goes to feedback not present in the brain of standard neuroscience? This would include the construction of the feedback to sensory organs as virtual sensory input to build standardized mental images. Dreams are a special case of this. There is also the virtual sensory input which does not reach the sensory organs and gives rise to imagination, in particular internal speech.

A similar picture applies to virtual motor input and the construction of motor output as "standardized motor patterns" - a notion that makes sense only in ZEO. Note that the feedback loop could extend from the brain to the MB.

There is also an interesting finding related to motor activities. In experiments on rats it has been found that spontaneous brain activity increases dramatically as the rat moves. This brings to mind a lecturer who moves back and forth as he talks. This rhythmic motion could give rise to a brain/body rhythm coupling the lecturer to a layer of the MB with large heff. The quantum coherence of this MB layer would induce ordinary coherence of the BB at body scale and raise the "IQ" of the lecture. Thinking requires motion!

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD. 


The notion of holography in TGD framework

Thanks to Bob Tang for the link to Sabine Hossenfelder's article about holography. I will not comment about the content of the link but about TGD view of holography.

What "Universe as a hologram" does really mean must be first defined. In pop physics this notion has remained very loose. In the following I summarize the TGD based view about what holography means in the geometric sense.

In TGD 3-D surfaces are basic objects and replace 3-space. Holography is not a new principle but reduces to general coordinate invariance.

1. "Ordinary" holography

General coordinate invariance in the 4-D sense requires that 3-surfaces correspond to a single 4-D surface - space-time - on which general coordinate transformations act. The space-time surface is like a Bohr orbit, a preferred extremal of the action defining the space-time surface.

This is nothing but holography in the standard sense and leads to zero energy ontology (ZEO), meaning that quantum states are superpositions of 3-surfaces or, equivalently, of 4-D surfaces.

[The alternative to ZEO would be the path integral approach, which is mathematically ill-defined and makes no sense in the TGD framework due to horrible divergence difficulties.]

ZEO has profound implications for quantum theory itself: it solves the measurement problem and also implies that the arrow of time changes in "big" (ordinary) state function reductions as opposed to "small" SFRs ("weak" measurements). Also the question at which length scale quantum behavior transforms to classical becomes obsolete.

2. Strong form of holography (SH)

Besides space-like 3-surfaces at the boundaries of the causal diamond CD serving as the ends of the space-time surface (initial value problem), there are light-like surfaces at which the signature of the metric changes from Minkowskian to Euclidian (boundary value problem). Euclidian regions correspond to fundamental particles from which elementary particles are made.

If either space-like or light-like 3-surfaces are assumed to be enough as data for holography (the initial value problem is equivalent to the boundary value problem), the conclusion is that their intersections as partonic 2-surfaces are enough as data. This would give rise to a strong form of holography, SH.

Intuitive arguments suggest several alternative mathematical realizations for the holography for space-time surfaces in H=M4×CP2. They should be equivalent.

  1. Space-time surfaces are extremals of both the volume action (minimal surfaces), having an interpretation in terms of a length scale dependent cosmological constant, and of the Kähler action. This double extremal property reduces the conditions to purely algebraic ones with no dependence on coupling parameters. This corresponds to the universality of quantum critical dynamics. Space-time surfaces are analogs of complex sub-manifolds of a complex imbedding space.
  2. The second realization is in terms of analogs of Kac-Moody and Virasoro gauge conditions for a sub-algebra of the super-symplectic algebra (SSA) isomorphic with the entire SSA and acting as isometries of the "world of classical worlds" (WCW). SSA has non-negative conformal weights and generalizes Kac-Moody algebras in that there are two variables instead of a single complex coordinate z: the complex coordinate z of the sphere S2 (to which the light-cone boundary reduces metrically) and the light-like radial coordinate r of the light-cone boundary. Also the Kac-Moody type algebra assignable to the isometries of H at the light-like partonic orbits involves the counterparts of z and r. A huge generalization of the symmetries of string theory is in question.

3. Number theoretic holography

M8-H duality leads to number theoretic holography, which is even more powerful than SH.

  1. In complexified M8 - complexified octonions - space-time surfaces would be "roots" of octonionic polynomials guaranteeing that the normal space of the space-time surface is associative/quaternionic. Associativity in this sense would fix the dynamics. These surfaces would be algebraic, whereas at the level of H the surfaces satisfy partial differential equations reducing to algebraic equations due to the analogy with complex surfaces.
  2. M8 would be analogous to momentum space and the space-time surface in it analogous to the Fermi ball. M8-H duality would generalize the q-p duality of wave mechanics, which has no generalization in quantum field theories and string models.
  3. M8-H duality would map the 4-surfaces in M8 to H=M4×CP2. A given region of the space-time surface would be determined by the coefficients of a rational polynomial. Number theoretic holography would reduce the data to a finite number of rational numbers - or n points of the space-time region (n is the degree of the polynomial). The polynomials would give rise to an evolutionary hierarchy with n as a measure of complexity, having an interpretation in terms of the effective Planck constant heff/h0=n, as illustrated by the toy sketch below.
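To make the "finite rational data" idea concrete, here is a toy sketch (my own illustration, standard-library Python only, not TGD machinery): the exact rational coefficients of a polynomial are recovered from finitely many rational sample points by Lagrange interpolation, so a finite set of rational numbers indeed fixes the whole function.

    from fractions import Fraction

    def lagrange_coeffs(points):
        """Exact coefficients (lowest degree first) of the unique polynomial
        of degree < len(points) through the given rational points."""
        n = len(points)
        coeffs = [Fraction(0)] * n
        for i, (xi, yi) in enumerate(points):
            # Build the i-th Lagrange basis polynomial as a coefficient list.
            basis = [Fraction(1)]
            denom = Fraction(1)
            for j, (xj, _) in enumerate(points):
                if j == i:
                    continue
                # Multiply the basis polynomial by (x - xj).
                basis = [Fraction(0)] + basis
                for k in range(len(basis) - 1):
                    basis[k] -= xj * basis[k + 1]
                denom *= (xi - xj)
            for k in range(len(basis)):
                coeffs[k] += yi * basis[k] / denom
        return coeffs

    # P(x) = x^3 - 2x + 1 sampled at four rational points:
    pts = [(Fraction(x), Fraction(x)**3 - 2*Fraction(x) + 1) for x in (0, 1, 2, 3)]
    print(lagrange_coeffs(pts))  # [1, -2, 0, 1] as exact fractions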
For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD. 


Thursday, April 22, 2021

The rational and intuitive modes of problem solving from the TGD point of view

Reza Rastmanesh sent me a link to an article titled "The Impact of the Mode of Thought in Complex Decisions: Intuitive Decisions are Better". The following musings are inspired by this article.

As one learns from the article, it seems that problem solving and decision making rely on two basic approaches which correspond to right-left brain dichotomy.

  1. Rational thinking is in the ideal case error free, provided the basic assumptions are correct and the data are reliable. It is however mechanical and cannot lead to "eurekas". Computers can nowadays do it more reliably than humans. In mathematics, Goedel's theorem tells that mere rational deduction is not enough even for basic arithmetic: in a given axiomatic system there is an infinite number of non-provable truths.
  2. The intuitive approach is less reliable but can be much faster, is holistic, based on affect rather than cold rationality, and can lead to new insights which can be deduced only afterwards, possibly only by adding some new basic assumption. In this case one can speak of a discovery.

What looks paradoxical is that besides the induction of an affective mood favoring intuitive problem solving, distraction is one way to induce intuitive thought. In the TGD framework, the interpretation would be that distraction forces one to give up the attempt to solve the problem at the level conscious to me - I am simply too stupid - and delegates the problem to a higher level of the hierarchy (layers of the magnetic body) representing a higher level of abstraction (see this) and a more holistic view. This would make it possible to solve the problem.

A real life example about the connection with distraction is in order. In problem solving mood, I find that simple tasks of everyday life become difficult. I decide to do something, start to do it, but decide to also do something else at the same time, do it, and then realize that I do not remember what I had decided to do primarily - or even that I had decided to do something. I have seriously asked myself whether these are the first signs of dementia. The fact however is that this has always been the case - more or less.

My friends however tell me that there is no reason to worry, I am just what is called "absent minded professor". Perhaps I am indeed just absent minded - or almost permanently distracted - but certainly never a professor if this depends on colleagues.

I have many times experienced in real life that the intuitive approach is more reliable than rational thinking when one must make decisions. I still find it difficult to confess that I have been cheated many times, but I must do it now. I have felt from the beginning that this is happening, but my rational mind has forced me to believe that this is not the case. I have not wanted to insult the swindlers by somehow suggesting that I am not quite sure about their real motives.

Sleeping over night would be a basic example of this delegation of the problem to a higher intelligence. From personal experience sleeping over night is for me almost the only manner to get new ideas and solve problems which do not reduce to mere mechanical calculations. Often the problem and its solution pop up simultaneously during morning hours and going to the computer makes it possible to write out the details. The attempt to solve a problem by hard thinking later during the day does not lead anywhere.

An example about this relates to my own work. As some new idea has emerged, I have sometimes given it up after some rational thought. Later it has however turned out that the idea made sense after all, but for different reasons than I had thought.

A concrete example relates to dark matter identified as heff=n×h0≥h phases of ordinary matter at the magnetic body in the recent TGD based model. The problem was the following.

Blackman and many others observed in the seventies that ELF radiation in the EEG range has strange effects on the behavior of vertebrates, visible also physiologically. These effects looked quantal. This however does not make sense in standard quantum theory, since the energies involved are incredibly small and far below thermal energies. For this reason the mainstream refused to take the effects seriously and they were forgotten.

  1. My first proposal was based on the notion of many-sheeted space-time. Perhaps the photons and ions responding to them were at space-time sheets at which the temperature is extremely low so that the thermal objection does not bite.
  2. Then I arrived at a different idea. Perhaps the value of the Planck constant varies and one has a very large value heff=n×h0 of the effective Planck constant. n would correspond to the number of identical space-time sheets for the space-time surface as a covering space. This led to a nice theory, and later I could deduce it from a number theoretic vision unifying real and various p-adic physics to adelic physics describing the correlates of both sensory experience and cognition.

As I thought about this during last night, a question popped up. Could this original approach be correct after all? Could the heff approach be wrong? This would destroy 15 years of work: horrible! Or could these two approaches be consistent? This turned out to be the case!

  1. The temperature at the flux tubes and flux quanta of the magnetic body (MB) is in general below the Hagedorn temperature TH dictated by the flux tube thickness: the reason is that the number of geometric degrees of freedom is infinite. A flux tube behaves in good approximation like a string, and the notion of TH emerged in string models. For instance, in living matter TH corresponds to the physiological temperature, around 37 degrees Celsius for humans.
  2. TH is associated with dark matter with heff=n×h0, where n is the number of space-time sheets of the covering. TH characterizes the n-sheeted structure. What is the temperature at a single sheet of the covering?
  3. Thermal energy is proportional to temperature. For an n-sheeted structure one has, by the additivity of the thermal energies of the identical sheets, TH = n×TH(sheet), implying

    TH(sheet) = TH/n.

    For huge values of heff and thus of n, TH(sheet) is indeed extremely small! The original explanation is consistent with the number theory based explanation! Trust your intuition! But be cautious!
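A quick numerical check of the scaling law (the value of n below is an illustrative assumption, not a value from the text):

    # T_H(sheet) = T_H/n by additivity of the thermal energies of n identical sheets.
    T_H = 310.0     # physiological temperature ~37 C in Kelvin
    n = 1e12        # assumed huge value of h_eff/h_0, for illustration only
    print(T_H / n)  # 3.1e-10 K: effectively zero, as the original proposal required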

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, April 13, 2021

Does muon's anomalous anomalous magnetic moment imply new physics?

Lepton universality predicts that the magnetic moments of leptons should be the same apart from corrections due to their different masses. Besides the magnetic moment predicted by the Dirac equation, leptons also have an anomalous magnetic moment, which is predicted to come from various radiative corrections.

The standard model predictions for the anomalous magnetic moments are ae = (ge-2)/2 = 0.00115965218091 for the electron and aμ = (gμ-2)/2 = 0.00116591804 for the muon.

The anomalous magnetic moments of the electron and muon differ by about 0.5 per cent. This breaking of universality is however due to the different masses of the electron and muon rather than different interactions.
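The difference is easy to check from the values quoted above (a sketch):

    # Relative difference of the quoted anomalous magnetic moments.
    a_e = 0.00115965218091
    a_mu = 0.00116591804
    print(100 * (a_mu - a_e) / a_e)  # ~0.54 per cent, from the mass dependence of the corrections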

1. The finding of the Fermilab experiment

The breaking of universality could also come from interactions, and the Fermilab experiment (see this) and earlier experiments suggest this. The experiment shows that in the case of the muon the magnetic moment differs from the predicted one: the deviation from the standard model prediction is about 2.5×10^-4 per cent. This indicates that there might be interactions violating lepton universality. Besides the problem with the muon's magnetic moment, which differs from that of the electron, there is also a second problem. The decays of B mesons seem to break the universality of fermion interactions: indications for the breaking of universality have emerged over the years, so this is not new.

The measurement result involves various sources of error, and one can estimate the probability that the measurement outcome is due to this kind of random fluctuation. The number of standard deviations tells how far the measurement result is from the maximum of the probability distribution. The deviation is expressed using the standard deviation as a unit; the standard deviation is essentially the width of the distribution. For instance, 4 standard deviations tells that the probability that the result is a random fluctuation is about .006 per cent. For 5 standard deviations the probability is about .00006 per cent, and this is regarded as the discovery limit.
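These tail probabilities follow directly from the Gaussian distribution; a small sketch (two-sided tails assumed):

    import math

    def tail_probability(k):
        """Two-sided probability of a Gaussian fluctuation beyond k standard deviations."""
        return math.erfc(k / math.sqrt(2.0))

    for k in (3, 4, 5):
        print(k, 100 * tail_probability(k), "per cent")
    # 3 sigma: ~0.27 per cent
    # 4 sigma: ~0.0063 per cent
    # 5 sigma: ~0.00006 per cent, the discovery limit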

2. Theoretical uncertainties

There are also theoretical uncertainties related to the calculation of the magnetic moment. There are 3 contributions: electroweak, QCD, and hadronic. The electroweak and QCD corrections are "easily" calculable. The hadronic contributions are difficult to estimate, since perturbative QCD does not apply at hadronic energies. There are groups which claim that their estimate of the hadronic contributions produces a prediction consistent with the Fermilab finding and the earlier findings.

The prediction based on the experimentally deduced R ratio, characterizing the rate for the decay of a virtual photon to a quark pair, allows one to estimate the hadronic contribution and gives a prediction which is in conflict with the experimental findings. On the other hand, calculations based on lattice QCD give a result consistent with the experimental value (see this). Should one trust experiment or theory?

3. Is a wider perspective needed?

In my opinion, one should see the problem from a bigger perspective than as a question about how accurate the standard model is.

  1. The Standard Model does not explain fermion families. Also GUTs fail in this respect: the fermion mass ratios vary in a range spanning 11 orders of magnitude. This is not a small gauge symmetry breaking but something totally different: mass scale is the appropriate notion, and the p-adic length scale hypothesis provides it.
  2. One must also challenge the belief that lattice QCD can describe low energy hadron physics. There might be much deeper problems than the inability to compute the hadronic contributions to g-2. Perturbative QCD describes only high energy interactions, and QCD might exist only in the perturbative sense. The fact is that a theory of low energy hadron physics is virtually non-existent. Saying this aloud of course irritates lattice QCD professionals, but the reduction of QCD to thermodynamics in Euclidian space-time looks implausible to me. There are deep problems with Wick rotation.

    For instance, the massless dispersion relation E^2 - p^2 = 0 in M4 translates to E^2 + p^2 = 0 in E4: massless fields disappear completely, since one has only the E=0, p=0 zero mode. There are similar problems with the massless Dirac equation. For the massive case the situation is not quite as bad. There is the strong CP problem caused by instantons and a problem with the multiplication of spinor degrees of freedom, since the 4-D cube has the topology of the 4-torus and allows 16 spinor structures.

    Quarks explain only a few per cent of the hadron mass, just as ordinary matter explains only a few per cent of the mass in cosmology. Hadron physics might therefore involve something totally new, and color interaction could differ from a genuine gauge interaction.

4. What can TGD say about the family replication phenomenon?

In the TGD framework, the topological explanation of the family replication phenomenon, identifying partonic 2-surfaces as fundamental building blocks of elementary particles, provides the needed understanding and predicts 3 different fermion generations corresponding to the 3 lowest genera: sphere, torus, and sphere with two handles (see this).

Conformal Z2 symmetry for partonic 2-surfaces is present for the lowest 3 genera but not for the higher ones, for which one must talk about many-handle states with a continuous mass spectrum. p-Adic thermodynamics allows one to estimate the masses of the new bosons by simple scaling arguments and the Mersenne prime hypothesis.

In the TGD framework the two findings can be seen as indications for the failure of lepton universality. Besides 3 light fermion generations, TGD also predicts 3 light generations for electroweak bosons, gluons, and Higgs. These generations are more massive than the weak bosons, and the p-adic length scale hypothesis allows one to estimate their masses.

The couplings of the lightest boson generation obey fermion universality (they are identical for all fermion generations), but the couplings of the 2 higher generations cannot do so, since the charge matrices of the 3 generations must be orthogonal to each other. This predicts a breaking of fermion universality, which in the quantum field theory approximation comes from the loops coupling fermions to the 2 higher boson generations.

This prediction provides a test for the TGD based topological view about the family replication phenomenon in terms of the genus of the partonic 2-surface: the partonic 2-surface can be a sphere, a torus, or a sphere with two handles. TGD also explains why the higher generations are experimentally absent.

5. What does TGD say about low energy hadron physics?

There is also the question about whether QCD catches all aspects of strong interactions. In TGD, color magnetic flux tubes carry Kähler magnetic energy and volume energy parametrized by a length scale dependent cosmological constant, so that a connection with cosmology indeed emerges. The reconnections of U-shaped flux tubes give rise to the TGD counterparts of the meson exchanges of old-fashioned hadron physics. See this.

The color group need not be a gauge group but could be analogous to a Kac-Moody group or a Yangian group (only non-negative conformal weights). In the TGD framework, SU(3) at the level of M4×CP2 is not a gauge symmetry but acts as isometries of CP2, and fermions do not carry color as an analog of spin but as an angular momentum like quantum number. At the level of complexified M8, SU(3) is a subgroup of G2 acting as octonion automorphisms and defines a Yangian replacing the local gauge group.

For the TGD based model see this and this.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Friday, April 09, 2021

EEG and the structure of magnetosphere

Roughly 15 years ago I proposed the idea that Earth's magnetosphere (MS) could serve as a sensory canvas in the sense that biological systems, in particular the vertebrate brain, could have sensory representations realized at the "personal" magnetic body (MB) closely associated with the MS of the Earth. EEG would make communication with, and control by, the MB possible.

At that time I did not yet have the idea about the number theoretical realization of the hierarchy of Planck constants heff=nh0 in the framework of adelic physics fusing the physics of sensory experience and cognition. This hierarchy is crucial for understanding the basic aspects of living matter such as metabolism, coherence in long scales, correlates of cognition, and even evolution.

Also the concept of zero energy ontology (ZEO), now forming the basis of quantum TGD, was missing, although there was already the idea about communication with the past using negative energy signals. ZEO is now in a central role in the understanding of self-organization - not only the biological one. The new view about time, predicting that time reversal occurs in ordinary state function reductions (SFRs), makes it possible to understand homeostasis as self-organized quantum criticality.

For these reasons it is interesting to consider the notion of sensory canvas from the new perspective. This article discusses, besides the earlier ideas about the MS, also the proposal that it is possible to associate EEG bands with regions of the MS via a correspondence of EEG frequency with the distance of the region from Earth. Also the idea that the structure of the MS could be a fractal analog of the vertebrate body is tested quantitatively by comparing the various scales involved.
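As an order-of-magnitude illustration of the frequency-distance correspondence (my own sketch: the identification of the distance with the wavelength c/f is an assumption made here only to get the scales; the band boundaries are rounded):

    # Map EEG band frequencies to magnetospheric length scales via d = c/f.
    c = 3.0e8      # speed of light, m/s
    R_E = 6.4e6    # Earth radius, m
    bands = {"delta": 2.0, "theta": 6.0, "alpha": 10.0, "beta": 20.0, "gamma": 40.0}
    for name, f in bands.items():
        print(name, round(c / f / R_E, 1), "Earth radii")
    # alpha ~4.7 R_E, delta ~23 R_E: the distances indeed span magnetospheric scales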

See the article EEG and the structure of magnetosphere or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.


Three alternative generalizations of Nottale's hypothesis in TGD framework

Nottale's gravitational Planck constant ℏgr = GMm/v0 contains the velocity parameter v0 as the only parameter. In the perturbative expansion of the scattering amplitudes, β0=v0/c appears in the role of the fine structure constant.

There is however a problem.

  1. The model for the effects of ELF radiation on the vertebrate brain, inspired by a generalization of Nottale's hypothesis replacing the total mass M in the case of Earth by MD ≈ 10^-4 ME, suggests that in this case the dark particles involved couple only to a part of the mass, identifiable as the dark mass MD.
  2. Since only GM appears in the basic formulas, the alternative option is that the value of G is reduced to GD. This conforms with the fact that in the TGD framework the CP2 length is the fundamental parameter, and G is a prediction of the theory and can therefore vary.
  3. A further option is that the parameter β0=v0/c ≤ 1 is variable and equals β0=1 or a value not much smaller than 1, say β0=1/2.
These three options are critically discussed and compared. The cautious conclusion is that the third option is the most plausible one.
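To get a feeling for the orders of magnitude involved, here is a sketch combining the numbers appearing above (MD = 10^-4 ME and β0 = 1/2 are the illustrative choices; the proton as test particle is my own choice):

    # hbar_gr = G*M*m/v_0 compared to hbar, for a proton and the Earth's dark mass.
    G = 6.674e-11        # m^3 kg^-1 s^-2
    hbar = 1.055e-34     # J*s
    c = 3.0e8            # m/s
    M_E = 5.972e24       # Earth mass, kg
    m_p = 1.673e-27      # proton mass, kg
    M_D = 1e-4 * M_E     # dark mass of option 1
    v_0 = 0.5 * c        # beta_0 = 1/2 of option 3
    print(G * M_D * m_p / (v_0 * hbar))  # ~4e9: the implied quantum coherence scales are huge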

See the article Three alternative generalizations of Nottale's hypothesis in TGD framework or the chapter About the Nottale's formula for hgr and the relation between Planck length and CP2 length.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD. 


Friday, April 02, 2021

Does Goedel's incompleteness theorem hold true for reals?

I have many times wondered whether the incompleteness theorem extends to real numbers, which are usually the stuff used by physics. There is a very nice discussion of this point here. Strongly recommended.

Real numbers and all algebraically closed number fields, such as complex numbers and algebraic numbers, are complete: all truths are provable. If physics is based on complex numbers or algebraic numbers, Goedel's theorem has no direct implications for physics. This however implies that integers cannot be characterized using the axiomatics of these number fields, since if this were the case, Goedel's incompleteness theorem would not hold true for integer arithmetic. One can also say that the Goedel numbers of unprovable theorems are not expressible as natural numbers but are more general reals or complex numbers.

Since the algebraic numbers are complete, a good guess is that algebraic numbers label all true statements about integer arithmetic and also about the arithmetic of algebraic integers in extensions of rationals.

In TGD, adelic physics defines the correlates for cognition. Adeles form a hierarchy labelled by algebraic extensions of rationals (perhaps also extensions involving roots of e, since e^p is a p-adic number). These are not complete, and Goedel's incompleteness theorem applies to them. Only at the never achievable limit of algebraic numbers does the system become complete. This strongly suggests a generalization of Turing's view about computation by replacing integer arithmetic with a hierarchy of arithmetics of algebraic integers associated with extensions of rationals. See this article.
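As a toy illustration of what an "arithmetic of algebraic integers" could mean computationally (my own minimal sketch, for the extension Q(sqrt(2)); not anything from the cited article):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ZSqrt2:
        """Algebraic integer a + b*sqrt(2) of the extension Q(sqrt(2))."""
        a: int
        b: int
        def __add__(self, other):
            return ZSqrt2(self.a + other.a, self.b + other.b)
        def __mul__(self, other):
            # (a + b*s)(c + d*s) = (ac + 2bd) + (ad + bc)*s, since s^2 = 2
            return ZSqrt2(self.a * other.a + 2 * self.b * other.b,
                          self.a * other.b + self.b * other.a)

    x = ZSqrt2(1, 1)   # 1 + sqrt(2)
    print(x * x)       # ZSqrt2(a=3, b=2), i.e. 3 + 2*sqrt(2)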

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, March 30, 2021

Does the quantal gravitational force vanish below critical radius in average sense?

Nottale's gravitational Planck constant hbar_gr = GMDm/v0 contains the dark mass MD as a parameter. At the surface of Earth MD is much smaller than ME, whereas for the planets one has MD=MSun. It turns out that in the average sense MD must grow to M. This is required by the condition that the Bohr radii correspond to the classical radii in the average sense. The actual dependence of MD on r is expected to be a staircase like function.

At the quantum level, this effectively eliminates the average gravitational force in scales below the critical radius rcr above which MD=M holds true. Indeed, due to the average MD ∝ r dependence, the gravitational potential would be constant on the average.
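The step from the MD ∝ r dependence to a constant potential is elementary and worth writing out: with MD(r) = kr one has

    V(r) = -G MD(r) m/r = -G k m = constant,

so that the average radial force F = -dV/dr indeed vanishes below rcr.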

Could one regard this effective elimination of the gravitational force as a kind of Quantum Equivalence Principle or as an analog of asymptotic freedom?

See the article Two alternative generalizations of Nottale's hypothesis or the chapter About the Nottale's formula for hgr and the relation between Planck length and CP2 length.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Friday, March 19, 2021

The idea of Connes about inherent time evolution of certain algebraic structures from TGD point of view

Alain Connes has proposed that certain mathematical structures known as hyperfinite factors contain in their structure an inherent time evolution. This time evolution is determined only modulo a unitary automorphism, analogous to a time evolution determined by a Hamiltonian, so that it seems to be too general for the purposes of a physicist.

Zero energy ontology of TGD combined with adelic physics leads to a vision that the sequence of state function reductions implies a mathematical evolution in the sense that the extension of rationals characterizing a space-time region gradually increases. This induces an increase of algebraic complexity, implying a time evolution as the analog of biological evolution.

The dimension of extension corresponds to an effective Planck constant assumed to label dark matter as phases of ordinary matter. Therefore quantum coherence lengths increase in this evolution.

This generalization of the idea of Connes is discussed in the framework provided by the recent view about TGD. In particular, the inclusion hierarchies of hyper-finite factors, the extension hierarchies of rationals, and fractal inclusion hierarchies of subalgebras of supersymplectic algebra isomorphic with the entire algebra are proposed to be more or less one and the same thing in TGD framework.

See the article The idea of Connes about inherent time evolution of certain algebraic structures from TGD point of view.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Pomeron and Odderon as direct support for the notion of color magnetic body

The following comments were inspired by a popular article telling about the empirical support for a particle christened Odderon. As the name tells, Odderon is not well understood in the QCD framework.

Odderon is a cousin of the Pomeron, which emerged already about half a century ago in the so-called Regge theory to explain the logarithmically rising (rather than decreasing) cross sections in proton-proton and proton-antiproton collisions. Pomeron is part of low energy phenomenology, and perturbative QCD cannot say much about it. Since the charge parity is C=1 for Pomeron and C=-1 for Odderon, these states are analogous to the pion with spin 0 and the ρ meson with spin 1.

Pomeron and Odderon have not been in the interests of the frontier of theoretical physics: for an M-theorist they represent totally uninteresting and primitive low energy phenomenology - as does all that we used to call physics before the first superstring revolution - and do not therefore deserve the attention of an ambitious superstring theorist more interested in the marvels of brane worlds, landscape, swampland, and multiverse.

I have written about the Pomeron years ago. The following is something different, since the TGD view about low energy strong interactions (see this) has developed considerably (see for instance this and this).

One can go first to Wikipedia to learn about Pomeron.

  1. Pomeron exchange in the t-channel was postulated to explain the slowly (logarithmically) rising scattering cross sections in proton-proton and proton-antiproton collisions. For quarks and gluons the scattering cross sections fall rather rapidly with energy (by a dimensional argument, like 1/s, the inverse of the cm energy squared) so that something else must be in question.
  2. The cross sections did not depend on the charges of the colliding baryons. The usual shower of Cerenkov radiation was missing from Pomeron exchange events, and the absence of the pions usually present was interpreted as absence of color charge. This suggests that quarks and gluons do not participate in the Pomeron events. There is often also a large rapidity gap in which no outgoing particles are observed.
  3. In the Regge theory, which was later concretized in terms of the hadronic string model, the Pomeron would correspond to a Regge trajectory for which the Reggeon has the quantum numbers of the vacuum except for mass and angular momentum. The Regge trajectory would satisfy the formula M^2 = M_0^2 + J/α', where M is the mass, J the angular momentum, and α' the Regge slope. Odderon would be a Pomeron like state with an odd charge parity C=-1 instead of C=1.
  4. In the QCD picture Pomeron and Odderon are assumed to be associated with the gluonic exchanges. Pomeron would be a many-gluon state.
In the many-sheeted space-time of TGD, hadrons are many-sheeted objects.
  1. There is a hadronic space-time sheet, and quark and gluon space-time sheets are glued to it. There is a magnetic body (MB) of the hadron having a layered structure. In particular, there are em/color/weak MBs consisting of flux tubes and "tentacles", which are U-shaped flux tubes.

    Low energy hadron physics would be described in terms of reconnections of these tentacles. This is a rather new element in the picture. In a reasonable approximation, flux tubes are strings and the reconnection of closed strings appears as a basic reaction vertex for closed strings. This gives a connection with the hadronic string model. TGD indeed emerged as a generalization of the hadronic string model 43 years ago (and also as a solution of the energy problem of GRT).

  2. Most of the energy of the hadron is assumed to be carried by the color MB: quarks and gluons carry only a small part of the energy. In QCD, space-time dynamics is not present, and the analog of the hadron as a space-time surface would be a gluon condensate of some kind.
  3. Low energy hadron reactions would consist of reconnections of the U-shaped flux tubes of the colliding color MBs. Besides this there are also the collisions of quarks and gluons having approximate description in terms of QCD. The already mentioned connection with hadronic string model suggests a connection with Regge and string model descriptions of Pomeron/Odderon.
  4. Hadrons have U-shaped flux tubes acting like tentacles, which reconnect to form a bridge of two flux tubes between the colliding hadrons. This topological interaction mechanism would be universal and occur in all scales. In biology, the ability of reacting biomolecules to magically find each other in the dense molecular soup would rely on this mechanism. It would also be a mechanism of high Tc superconductivity and of biological superconductivity.
Could this explain the basic properties of the Pomeron?
  1. Charge independence and the absence of pion emission assignable to quark-gluon reactions can be understood. Gluons and quarks of the colliding hadrons would not meet each other at all. The two colliding hadrons would just touch each other with their "tentacles", which would transfer some momentum between them in elastic collisions. This would explain the rapidity gap.
  2. What about the slow dependence on collision energy? Why would the cross section describing the probability of the formation of a reconnection not depend on the collision energy?
    1. One could visualize the cross section in the cm frame geometrically as the area of a 2-D cylindrical surface with axis parallel to the line connecting the colliding particles. The area of this cylinder would tell the probability for the formation of a reconnection. If I try to touch some object in darkness, its area tells how probable the success is.
    2. In elastic scattering, the t-channel momentum exchange would be orthogonal to this cylinder and have a vanishing energy component. It would not change in the Lorentz boosts increasing the cm collision energy. If the contribution to the cross section depends only on t, it would be independent of the collision energy.
The TGD view about this finding is described in the article Some unexpected findings in hadron and nuclear physics from TGD point of view and in a chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, March 16, 2021

MeshCODE theory from TGD point of view

Benjamin Goult has made an interesting proposal in the article The Mechanical Basis of Memory - the MeshCODE Theory, published in Frontiers in Molecular Neuroscience (see this).

The proposal is that the cell - or at least synaptic contacts - realizes mechanical computation in terms of adhesive structures consisting of hundreds of proteins known as talins, which act as force sensors. Talins are connected to integrins in the extracellular matrix, to each other, and to the actins in the cell interior. This proposal has far reaching consequences for understanding the formation of memories as behaviors at the synaptic level.

This proposal does not conform with the TGD vision, but it inspires a series of questions leading to a rather detailed general vision for how the magnetic body (MB) receives sensory input from the biological body (BB), coded into dark 3N-photons representing genes with N codons, and as a response activates the corresponding genes, RNA, or proteins as a reaction. The sensory input and the response to it would be coded by the same dark genes.

See the article MeshCODE theory from TGD point of view or the chapter An Overall View about Models of Genetic Code and Bio-harmony.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD. 


Saturday, March 13, 2021

Zero energy states as scattering amplitudes and subjective time evolution as sequence of SSFRs

Zero energy states code for the ordinary time evolution in the QFT sense described by the S-matrix. The construction of zero energy states is reasonably well understood (see this, this, and this).

This is not yet the whole story. One should also understand the subjective time evolution defined by a sequence of "small" state function reductions (SSFRs), analogous to "weak" measurements, followed now and then by BSFRs. How does the subjective time evolution fit with the QFT picture in which single particle zero energy states are plane waves associated with a fixed CD?

  1. The size of the CD increases at least in the statistical sense during the sequence of SSFRs. This increase cannot correspond to an M4 time translation in the sense of QFTs. A single unitary step followed by an SSFR can be identified as a scaling of the CD leaving the passive boundary of the CD invariant. One can assume the formation of an intermediate state which is a quantum superposition over different size scales of the CD: the SSFR means a localization selecting a single size for the CD. The subjective time evolution would correspond to a sequence of scalings of the CD.
  2. This view about subjective time evolution conforms with the picture of string models, in which the Lorentz invariant scaling generator L0 takes the role of the Hamiltonian, identifiable in terms of the mass squared operator, allowing one to overcome the problems with Poincare invariance. It also conforms with the super-symplectic and Kac-Moody symmetries of TGD.

    One could perhaps say that the Minkowski time T as the distance between the tips of the CD corresponds to exponentiated scaling: T = exp(L0t). If t has constant ticks, the ticks of T increase exponentially.

The precise dynamics of the unitary time evolutions preceding SSFRs has remained open.
  1. The intuitive picture is that the scalings of the CD gradually reveal the entire 4-surface determined by the polynomial P in M8: the roots of P as "very special moments in the life of self" would correspond to the values of the time coordinate for which SSFRs occur as a new root emerges. These moments, as roots of the polynomial defining the space-time surface, would correspond to scalings of the size of both half-cones, for which the space-time surfaces are mirror images. Only the upper half-cone would be dynamical in the sense that mental images as sub-CDs appear at the "geometric now" and drift to the geometric future.
  2. The scaling of the size of the CD does not affect the momenta associated with the fermions at the points of the cognitive representation in X4 ⊂ M8, so the scaling is not a genuine scaling of M4 coordinates, which would not commute with momenta. Also the fact that L0 for super-symplectic representations corresponds to the mass squared operator means that it commutes with the Poincare algebra, so that an M4 scaling cannot be in question.
  3. The Hamiltonian defining the time evolution preceding an SSFR could correspond to an exponentiation of the sum of the generators L0 for the super-symplectic and super-Kac-Moody representations, with the parameter t in the exponential corresponding to the scaling of the CD assignable to the replacement of the root rn with the root rn+1 as the value of M4 linear time (or energy in M8). L0 has a natural representation at the light-cone boundaries of the CD as scalings of the light-like radial coordinate.
  4. Does the unitary evolution create a superposition over all scalings of the CD, and does the SSFR measure the scale parameter and select just a single CD?

    Or does the time evolution correspond to a scaling? Is it perhaps determined by the increase of the CD from the size determined by the root rn as the "geometric now" to the root rn+1, so that one would have a complete analogy with Hamiltonian evolution? The scaling would be the ratio rn+1/rn, which is an algebraic number.

    Hamiltonian time evolution is certainly the simplest option and predicts a fixed arrow of time during the SSFR sequence. L0, identifiable essentially as a mass squared operator, acts like the conjugate of the logarithm of the light-cone proper time for a given half-cone.

    One can assume that L0 is the sum of the generators associated with the upper and lower half-cones, and that the fixed state at the lower half-cone is an eigenstate of L0 not affected by the time evolution by SSFRs.
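A toy illustration of the roots as "very special moments" (the cubic below is hypothetical, chosen only to make the scalings visible; it is not TGD machinery):

    # Roots of a polynomial P as moments of SSFRs; each SSFR scales the CD by the
    # ratio of consecutive roots. P(t) = (t-1)(t-2)(t-6) is an arbitrary example.
    roots = [1.0, 2.0, 6.0]
    scalings = [roots[i + 1] / roots[i] for i in range(len(roots) - 1)]
    print(scalings)  # [2.0, 3.0]: the CD size first doubles, then triples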

How does this picture relate to p-adic thermodynamics, in which thermodynamics is determined by a partition function which would in the real sector be regarded as a vacuum expectation value of the exponential exp(iL0t) of a Hamiltonian for the imaginary time t = iβ, β = 1/T defined by the temperature? Here L0 is proportional to the mass squared operator.
  1. In p-adic thermodynamics the temperature T is a dimensionless parameter and β = 1/T is integer valued. The partition function as the exponential exp(-H/T) is replaced with p^(βL0), β = n, which has the desired behavior if L0 has an integer spectrum. The equivalent real exponential form e^(-βRL0), βR = n×log(p), does not make sense p-adically, since the p-adic exponential function has p-adic norm 1 if it exists p-adically.
  2. The time evolution operator exp(-iL0t) for SSFRs (t would be the scaling parameter) makes sense for extensions of p-adic numbers if the phase factors for the eigenstates are roots of unity belonging to the extension: t = 2πk/n, since L0 has an integer spectrum. SSFRs would define a clock. The scaling exp(t) = exp(2πk/n) is however not consistent with the scaling by rn+1/rn.

    Both the temperature and scaling parameter for time evolution by SSFRs would be quantized by number theoretical universality. p-Adic thermodynamics could have its origins in the subjective time evolution by SSFRs.

  3. In standard thermodynamics it is possible to unify temperature and time by introducing a complex time variable τ = t + iβ, where β = 1/T is the inverse temperature. For the space-time surface in complexified M8, M4 time is complex, and the real projection defines the 4-surface mapped to H. Could thermodynamics correspond to the imaginary part of the time coordinate?

    Could one unify thermodynamics and quantum theory, as I have indeed proposed? This proposal states that quantum TGD can be seen as a "complex square root" of thermodynamics. The exponentials U = exp(iτL0/2) would define this complex square root, and the thermodynamical partition function would be given by UU* = exp(-βL0).
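For the record, the algebra behind the last formula (with τ* = t - iβ the complex conjugate of τ):

    UU* = exp(iτL0/2) exp(-iτ*L0/2) = exp(i(τ - τ*)L0/2) = exp(-βL0),

since τ - τ* = 2iβ. The real time dependence drops out and only the thermodynamical exponential remains.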

See the article Is M8-H duality consistent with Fourier analysis at the level of M4× CP2?.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, March 11, 2021

Still some questions about M8-H duality

There are still open questions about M8-H duality to be answered.
  1. The map p^k → m^k = ℏeff p^k/p^2 defining M8-H duality is consistent with the Uncertainty Principle, but this is not quite enough. Momenta in M8 should correspond to plane waves in H.

    Should one demand that a momentum eigenstate, as a point of the cognitive representation associated with X4 ⊂ M8 carrying quark number, corresponds to a plane wave with the same momentum at the level of H=M4×CP2? This does not make sense, since X4 ⊂ CD contains a large number of momenta assignable to fundamental fermions, and one does not know which of them to select.

  2. One can however weaken the condition by assigning to CD a 4-momentum, call it P. Could one identify P as
    1. the total momentum assignable to either half-cone of CD
    2. or the sum of the total momenta assignable to the half-cones?
The first option does not seem to be realistic. The problem with the latter option is that the sum of the total momenta is assumed to vanish in ZEO. One would automatically have a zero momentum plane wave. What goes wrong?
  1. Momentum conservation for a single CD is an ad hoc assumption in conflict with the Uncertainty Principle, and does not follow from Poincare invariance. The sum of momenta vanishes for a non-vanishing plane wave only when it is defined in the entire M4, as in QFT, not for plane waves inside finite CDs. Number theoretic discretization allows the vanishing in finite volumes, but this involves a finite measurement resolution.
  2. Zero energy states represent scattering amplitudes, and at the limit of infinite size for the large CD the zero energy state is proportional to a momentum conserving delta function, just as S-matrix elements are in QFT. If the plane wave is restricted within a large CD defining the measurement volume of the observer, four-momentum is conserved in the resolution defined by the large CD, in accordance with the Uncertainty Principle.
  3. Note that the momenta of fundamental fermions inside the half-cones of CD should be determined at the level of H by the state of a super-symplectic representation as a sum of the momenta of fundamental fermions assignable to the discrete images of momenta in X4⊂ H.
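
The consistency of the inversion map with the Uncertainty Principle can be checked in a few lines. The sketch below is only an illustration: the values of ℏeff and of the momentum are arbitrary choices, and the check is the weak condition m·p = ℏeff:

    # A sketch of the inversion map p^k -> m^k = hbar_eff p^k / p·p (assumption:
    # p·p is the Minkowski norm squared with signature (+,-,-,-); values arbitrary).
    import numpy as np

    eta = np.diag([1.0, -1.0, -1.0, -1.0])    # Minkowski metric
    hbar_eff = 6.0                            # illustrative hbar_eff = n*h0

    def mdot(a, b):
        return a @ eta @ b

    def inversion(p):
        return hbar_eff * p / mdot(p, p)      # M8 momentum -> M4 point

    p = np.array([5.0, 3.0, 0.0, 0.0])        # massive momentum, p·p = 16
    m = inversion(p)
    print(mdot(m, p))                         # -> 6.0: m·p = hbar_eff, UP-compatible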

M8-H-duality as a generalized Fourier transform

This picture provides an interpretation for M8-H duality as a generalization of Fourier transform.

  1. The map would be essentially a Fourier transform mapping the momenta of the zero energy state, represented as points of X4⊂ CD⊂ M8, to plane waves in H, with position interpreted as the position of the CD in H. The CD and the superposition of space-time surfaces inside it would generalize the ordinary Fourier transform. A wave function localized to a point would be replaced with a superposition of space-time surfaces inside the CD having an interpretation as the perceptive field of a conscious entity.
  2. M8-H duality would realize the momentum-position duality of wave mechanics. In QFT this duality is lost since space-time coordinates become parameters and quantum fields replace position and momentum as the fundamental observables. Momentum-position duality would have much deeper content than believed since its realization in TGD would bring number theory into physics. A schematic numerical dictionary follows below.
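
The dictionary can be caricatured numerically: a single momentum point on the M8 side is mapped to a plane wave on the H side. All numbers in the sketch below are illustrative assumptions:

    # A caricature of the M8 -> H dictionary (assumption: a single momentum
    # point maps to the wave exp(i p·x / hbar_eff); all numbers illustrative).
    import numpy as np

    hbar_eff = 6.0
    p0 = 2.0                                     # energy component of an M8 point
    t = np.linspace(0.0, 10.0, 5)                # sample of the CD's M4 time axis

    plane_wave = np.exp(1j * p0 * t / hbar_eff)  # the H-side image of the point
    print(np.round(plane_wave, 3))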

How to describe interactions of CDs?

Any quantum coherent system corresponds to a CD. How can one describe the interactions of CDs? The overlap of CDs is a natural candidate for the interaction region.

  1. CD represents the perceptive field of a conscious entity and CDs form a kind of conscious atlas for M8 and H. CDs can have CDs within CDs and CDs can also intersect. CDs can have shared sub-CDs identifiable as shared mental images.
  2. The intuitive guess is that the interactions occur only when the CDs intersect. A milder assumption is that interactions are observed only when CDs intersect.
  3. How to describe the interactions between overlapping CDs? The fact that quark fields are induced from second quantized spinor fields in H resp. M8 solves this problem. At the level of H, the propagators between points of space-time surfaces belonging to different CDs are well defined, and the systems associated with overlapping CDs have well-defined quark interactions in the intersection region. At the level of M8, the momenta as discrete quark-carrying points in the intersection of the CDs can interact.

Zero energy states as scattering amplitudes and subjective time evolution as sequence of SSFRs

This is not yet the whole story. Zero energy states code for the ordinary time evolution in the QFT sense described by the S-matrix. What about the subjective time evolution defined by a sequence of "small" state function reductions (SSFRs) as analogs of "weak" measurements, followed now and then by BSFRs? How does the subjective time evolution fit with the QFT picture in which single particle zero energy states are plane waves associated with a fixed CD?

  1. The size of the CD increases at least in the statistical sense during the sequence of SSFRs. This increase cannot correspond to an M4 time translation in the sense of QFTs. A single unitary step followed by an SSFR can be identified as a scaling of the CD leaving the passive boundary of the CD invariant. One can assume the formation of an intermediate state which is a quantum superposition over different size scales of the CD: the SSFR means a localization selecting a single size for the CD. The subjective time evolution would correspond to a sequence of scalings.

    The crucial point is that scalings commute with Poincare symmetries. Subjective and Poincare time evolutions commute.

  2. The view about subjective time evolution conforms with the picture of string models in which the Lorentz invariant scaling generator L0 takes the role of the Hamiltonian, identifiable in terms of the mass squared operator, and allows one to overcome the problems with Poincare invariance. This view about subjective time evolution also conforms with the super-symplectic and Kac-Moody symmetries of TGD.

    One could perhaps say that the Minkowski time T as the distance between the tips of the CD corresponds to an exponentiated scaling: T = exp(L0t). If t has constant ticks, the ticks of T increase exponentially, as the small sketch below illustrates.
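
A trivial numerical illustration (assuming the quantized ticks t = 2πk/n suggested by the root-of-unity condition discussed above, with the L0 eigenvalue set to 1 so that T = exp(t); both choices are illustrative assumptions):

    # Uniform ticks of subjective time map to exponentially growing ticks of
    # Minkowski time T = exp(t); n and the L0 eigenvalue (= 1) are assumptions.
    import math

    n = 12
    for k in range(5):
        t = 2 * math.pi * k / n
        print(f"t = {t:.3f}  ->  T = {math.exp(t):.3f}")
    # the successive differences T_{k+1} - T_k grow exponentially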

See the article Is M8-H duality consistent with Fourier analysis at the level of M4× CP2?.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, March 03, 2021

Three unexpected findings in hadron and nuclear physics from TGD point of view

During the same week I learned about 3 unexpected findings related to hadron and nuclear physics. This inspired 3 articles and a chapter in one of the books about TGD.

The asymmetry of antimatter in proton

The recent experiments of Dove et al (see this and this) confirm that the antiquark sea is asymmetric in the sense that the ratio anti-d/anti-u is larger than unity. A model assuming that the proton spends part of the time in a state consisting of a neutron and a virtual pion seems to fit the picture at a qualitative level.

The TGD based model relies on the already existing picture developed by taking seriously the so-called X boson as a 17.5 MeV particle and the empirical evidence for scaled-down variants of the pion predicted by TGD. Virtual mesons are replaced with real on-mass-shell mesons but with p-adically scaled-down masses, and low energy strong interactions at the hadronic and nuclear level are described topologically in terms of reconnections of flux tubes.

See the article The asymmetry of antimatter in proton from TGD point of view.

The strange decays of heavy nuclei

That final state nuclei from the fission of heavy nuclei possess a rather high spin has been known since the discovery of nuclear fission 80 years ago but has remained poorly understood. The recent surprising finding by Wilson et al (see this) was that the angular momenta of the final state nuclei are uncorrelated and must therefore emerge after the decays.

The TGD proposal is that the generation of angular momentum is a kind of self-organization process. Zero energy ontology (ZEO) and the heff hierarchy indeed predict self-organization in all scales. Self-organization involves an energy feed needed to increase heff/h0= n serving as a measure for algebraic complexity and as a kind of universal IQ in the number theoretical vision about cognition based on adelic physics.

The final state nuclei have angular momenta of 6-7 hbar. This suggests that self-organization increases the values of heff to nh, n∈ {6,7}. Quantization of angular momentum with the new unit of spin would force the generation of large spins. Zero energy ontology (ZEO) provides a new element to the description of self-organization and a model for the quantum tunnelling phenomenon.

See the article The decays of heavy nuclei as support for nuclear string model .

The strange findings of Eric Reiter challenging basic quantum measurement theory

Eric Reiter (see this) has studied the behavior of gamma-rays emitted by heavy nuclei going through a beam splitter splitting the photon beam into two beams. Quantum theory predicts that only one detector fires. Therefore the pulses in the two detectors occur at different times. This has been verified for photons of visible light. The experiment studied the same situation for gamma-rays, and the surprise was that one observes mostly half pulses in both detectors and in some cases also full pulses. Reiter has made analogous experiments also with alpha particles, with the same conclusion. Also these findings pose a challenge for TGD. The toy simulation below contrasts the two predictions.
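
To make the contrast concrete, here is a toy Monte Carlo sketch (my illustration, not Reiter's analysis; it assumes an ideal 50/50 splitter and perfect detectors):

    # Standard QM for single quanta: the whole quantum fires one detector only,
    # so full-height coincidences never occur (ideal splitter, ideal detectors).
    import random

    def single_quantum_trial():
        return (1, 0) if random.random() < 0.5 else (0, 1)

    trials = [single_quantum_trial() for _ in range(10_000)]
    print(sum(a and b for a, b in trials))   # -> 0 coincidences
    # Reiter's reported coincident half-height pulses would instead resemble a
    # classical wave splitting its energy 50/50 between the two detectors.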

See the article TGD based interpretations for the strange findings of Eric Reiter.

The TGD view about these 3 findings is described in the article Three unexpected findings in hadron and nuclear physics from TGD point of view or in the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, March 02, 2021

TGD based interpretation for the strange findings of Eric Reiter

I learned of rather interesting findings claimed by Eric Reiter, who hosts the public group "A serious challenge to quantum mechanics" (see this). There is a published article (see this) about the behavior of gamma-rays emitted by heavy nuclei going through two detectors in tandem.

Quantum theory predicts that only one detector fires. It is however found that both detectors fire with the same pulse height and that the firings are causally related. The pulse height depends on the wavelength and on the distance between the source and the detector, and also on the chemistry of the source, which does not conform with the assumption that nuclear physics and chemistry decouple from each other. Reiter has made analogous experiments also with alpha particles, with the same conclusion. These findings pose a challenge for TGD, and in this article a TGD based model for the findings is developed.

See the article TGD based interpretations for the strange findings of Eric Reiter.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD. 


Monday, March 01, 2021

Support for the quantization of Planck constant from the decays of heavy nuclei

That final state nuclei from the fission of heavy nuclei possess a rather high spin has been known since the discovery of nuclear fission 80 years ago but has remained poorly understood.

The recent surprising finding published in the Nature article "Angular momentum generation in nuclear fission" (see this) was that the angular momenta of the final state nuclei are uncorrelated and must therefore emerge after the decays. This represents a challenge for the TGD inspired model of nuclei as nuclear strings, and one ends up with a rather detailed model for what happens in the fissions.

The TGD proposal is that the generation of angular momentum is a kind of self-organization process. Zero energy ontology (ZEO) and the heff hierarchy indeed predict self-organization in all scales. Self-organization involves an energy feed needed to increase heff/h0= n serving as a measure for algebraic complexity and as a kind of universal IQ in the number theoretical vision about cognition based on adelic physics.

The observation that the final state nuclei have angular momenta of 6-7 hbar suggests that self-organization increases the values of heff to nh, n∈ {6,7}. Quantization of angular momentum with the new unit of spin forces the generation of large spins. Also zero energy ontology (ZEO) is involved: ZEO provides a new element to the description of self-organization and a model for the quantum tunnelling phenomenon.

See the article The decays of heavy nuclei as support for nuclear string model .

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Saturday, February 27, 2021

TGD based explanation for the asymmetry between anti-u and anti-d sea quarks in proton

I encountered in FB a highly interesting popular article "Decades-Long Experiment Finds Strange Mix of Antimatter in The Heart of Every Proton" (see this).

The popular article tells about the article "The asymmetry of antimatter in the proton" of Dove et al published in Nature (see this). This article is behind the paywall but the same issue of Nature has an additional article "Antimatter in the proton is more down than up" (see this) explaining the finding.

What is found is an asymmetry for the sea antiquarks in the sense that there are slightly more d-type antiquarks (anti-d) than u-type antiquarks (anti-u) in the quark sea. This asymmetry does not seem to depend on the longitudinal momentum fraction of the antiquark: the ratio of anti-down and anti-up distribution functions is larger than one and roughly constant.

A model assuming that the proton spends part of the time in a state consisting of a neutron and a virtual pion seems to fit the picture at a qualitative level. Unfortunately, the old-fashioned strong interaction theory based on nucleons and pions does not converge due to the quite too large value of the proton-pion coupling constant.

I looked at the situation in more detail and developed a simple TGD based model based on the already existing picture developed by taking seriously the so-called X boson as a 17.5 MeV particle and the empirical evidence for scaled-down variants of the pion predicted by TGD (see this). What TGD can give is the replacement of virtual mesons with real on-mass-shell mesons but with p-adically scaled-down masses, and a concrete topological description of strong interactions at the hadronic and nuclear level in terms of reconnections of flux tubes.

1. Basic data about quark and nucleon masses

To get a quantitative grasp of the situation, one can first see what is known about the masses of u and d quarks.

  1. One estimate for u and d quark masses (one must take the proposals very cautiously) can be found here.

    The mass ranges are 1.7-3.3 MeV for u and 4.1-5.8 MeV for d.

  2. In the first approximation the n-p mass difference 1.3 MeV would be just the d-u mass difference, varying in the range 1.2 MeV-4.1 MeV, and it has the correct sign and the correct order of magnitude. 4.1 MeV for d and 3.3 MeV for u would produce the n-p mass difference correctly.
  3. Coulomb interactions give a contribution which is vanishing for the proton and negative for the neutron:

    Ec(n) = -α×ℏ/(3Re),

    where Re is the proton's electromagnetic size scale.

    This contribution reduces the neutron mass. If Re is taken to be the proton Compton radius, this gives about Ec ≈ -3.2 MeV. This would predict an n-p mass difference in the range -1.1 MeV to 0.9 MeV. This favors the maximal d-u mass difference with m(u) = 1.7 MeV and m(d) = 5.8 MeV: the d-u mass difference would be 4.1 MeV, roughly 4 times the electron mass. A back-of-envelope check of the Coulomb term follows after this list.
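
For concreteness, the Coulomb term can be checked with a few lines of Python. This is a back-of-envelope sketch in natural units, assuming Re = 1/m_p; the prefactor conventions may differ from those above, so only the order of magnitude is meaningful:

    # Order-of-magnitude check of E_c = -alpha/(3*R_e) in natural units (hbar=c=1),
    # assuming R_e = 1/m_p (proton Compton radius); conventions may differ.
    alpha = 1.0 / 137.036      # fine structure constant
    m_p = 938.272              # proton mass in MeV
    R_e = 1.0 / m_p            # Compton radius in MeV^-1

    E_c = -alpha / (3.0 * R_e) # = -alpha * m_p / 3
    print(f"E_c ≈ {E_c:.2f} MeV")  # ≈ -2.3 MeV, same order as the value quoted above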

2. TGD based picture about hadronic and nuclear interactions

Consider first the TGD inspired topological model for hadronic and nuclear interactions.

  1. The notion of the magnetic body (MB) assignable to color and electroweak interactions is essential. In the quantum field theory framework interactions are described by virtual particle exchanges. In TGD they are described by reconnections of U-shaped flux tubes, which are like tentacles.

    In an interaction these tentacles reconnect and give rise to a pair of flux tubes connecting the particles. The flux tubes would carry monopole flux so that a single flux tube cannot be split. These flux tube pairs serve also as correlates of entanglement, replacing the wormholes serving as their correlates in the ER-EPR picture.

    This picture looks rather biological and was developed first as a model of bio-catalysis. The picture should apply quite generally to short range interactions at least.

  2. The U-shaped flux tubes of the color MB replace the virtual pion and rho meson exchanges of the old-fashioned picture about strong interactions. In the TGD framework they represent real particles but with p-adically scaled-down masses. For instance, pions are predicted to have scaled-down variants with masses differing from the pion mass by a negative power of 2; the same is true for rho (see the scaling sketch after this list). Now the masses would be below the MeV range, which is the energy scale of nuclear strong interactions.

    Also nuclear strong interactions would occur in this manner. The fact that the flux tubes are much longer than the nuclear size would explain the mysterious finding that in nuclear decay the fragments manage to generate their angular momenta after the reaction: the flux tubes would make possible the exchange of angular momentum required by angular momentum conservation.
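
The powers-of-2 mass scaling is easy to tabulate. The sketch below is purely illustrative: which scalings k are actually realized is model dependent:

    # Masses m_pi / 2^k of the conjectured scaled-down pion variants; which k
    # are realized is model dependent -- the table is purely illustrative.
    m_pi = 135.0   # neutral pion mass in MeV
    for k in range(1, 9):
        print(k, round(m_pi / 2**k, 2), "MeV")
    # k = 3 gives ~17 MeV, the scale of the reported X boson;
    # k = 8 gives ~0.5 MeV, below the MeV scale of nuclear strong interactions.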

3. A model for the anti-quark asymmetry

Consider now a model for the anti-quark asymmetry of the sea quarks.

  1. Quarks and antiquarks would appear at these flux tubes. The natural first guess is that meson-like states are in question.

    The generation of a u-anti-d type pion or rho would transform the proton to a neutron if the valence u transforms to a valence d by emitting a W boson with scaled-down mass.

    Note that the scaling down would make the weak interaction stronger since the weak boson exchange amplitude is proportional to 1/mW^2.

    This would give the analog of a neutron plus a charged virtual pion. Taking two sea quark pairs would lead to trouble: the Coulomb interaction energy of the negatively charged sea with the positively charged valence part of the proton, about -10 MeV, would be too large if the sea is of the same size as the proton.

  2. Does the scaled-down W decay to u-anti-d forming a scaled-down meson? Or should one regard u-anti-d as a scaled-down W having also the spin zero state analogous to the pion, since it is massive?
  3. Here comes a connection with old-fashioned and long-ago-forgotten hadron physics. The partially conserved axial current hypothesis (PCAC) gives a connection between strong and weak interactions, forgotten when QCD emerged as the final theory. PCAC says that the divergences of the axial weak currents associated with weak bosons are proportional to pion fields.

    Are the two pictures more or less equivalent? Virtual pion exchange could be regarded as a weak interaction! Also the conserved vector current hypothesis (CVC) is part of this picture. This is not new: I have developed this picture earlier in an attempt to understand what the reported X boson with 17.5 MeV mass is in the TGD framework. A scaled-down pion would be in question (see this).

  4. What about masses? Since the flux loop would have a considerably greater size than the proton, the mass scale of the u-anti-d state would be smaller than, say, an MeV, and the contribution to the mass of the proton would be small.
  5. Why the asymmetry for the sea anti-quarks? The generation of a d-anti-u loop would require increasing the charge of the core region by 2 units relative to the neutron option and transforming it to Δ. This looks neither plausible nor probable. The proton would be a superposition consisting mostly of the proton of good old QCD and of a neutron plus a flux loop with the quantum numbers of a scaled-down pion.
  6. Also the presence of scaled-down ρ meson loops can be considered. Their presence would turn the spin of the core part of the proton opposite for some fraction of the time. One can wonder whether this could relate to the spin puzzle of the proton.
For the TGD based model of X boson see the article "X boson as evidence for nuclear string model".

See the article The asymmetry of antimatter in proton from TGD point of view.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, February 25, 2021

Is analysis as a cognitive process analogous to particle reaction?

The latest work with the number theoretical aspects of M8-H duality (see this) was related to the question whether it allows, and in fact implies, Fourier analysis in a number theoretically universal manner at the level of H= M4× CP2.

The problem is that at the level of M8, which is analogous to momentum space, polynomials define the space-time surfaces, and periodic functions are not a number theoretically natural or even possible basis. At the level of H the space-time surfaces can be regarded as orbits of 3-surfaces representing particles and are dynamical, so that Fourier analysis is natural.

That this is the case is suggested by the fact that M8-H duality maps the normal spaces of the space-time surface to points of CP2. The normal space is essentially the velocity space for surface deformations normal to the surface, which define the dynamical degrees of freedom by general coordinate invariance. Therefore dynamics enters the picture. It turns out that the conjecture finds support.

Consider now the topic of this posting. The number theoretic vision about TGD is forced by the need to describe the correlates of cognition. It is not totally surprising that these considerations lead to new insights related to the notion of cognitive measurement, involving a cascade of measurements in the group algebra of the Galois group as a possible model for analysis as a cognitive process (see this, this and this).

  1. The dimension n of the extension of rationals, given by the degree of the composite polynomial P=Pn1∘ Pn2∘ ..., is the product of the degrees ni: n=∏ini, and one has a hierarchy of Galois groups Gi associated with Pni∘.... Gi+1 is a normal subgroup of Gi so that the coset space Hi=Gi/Gi+1 is a group of order ni (the multiplicativity of the degrees is illustrated by the sketch after this list). The groups Hi are simple and have no further decomposition of this kind: the simple finite groups appearing as the building bricks of finite groups are classified. Simple groups are the primes of finite groups.
  2. The wave function in the group algebra L(G) of the Galois group G of P has a representation as an entangled state in the product of the simple group algebras L(Hi). Since the Galois groups act on the space-time surfaces in M8, they do so also in H. One obtains wave functions in the space of space-time surfaces. G has a decomposition to a product (not Cartesian in general) of simple groups. In the same manner, L(G) has a representation in terms of entangled states assignable to the L(Hi) (see this and this).
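
The multiplicativity of the degrees under composition is elementary but worth seeing concretely. Here is a minimal sympy sketch; the particular polynomials are arbitrary illustrative choices:

    # The degree of a composite P1∘P2 is the product of the degrees, mirroring
    # the decomposition n = n1*n2 of the extension; the polynomials are arbitrary.
    from sympy import symbols, compose, degree

    x = symbols('x')
    P1 = x**2 - 2            # degree n1 = 2
    P2 = x**3 - x - 1        # degree n2 = 3
    P = compose(P1, P2)      # P(x) = P1(P2(x))

    print(P)                 # the expanded degree-6 polynomial
    print(degree(P, x))      # -> 6 = n1*n2
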
This picture leads to a model of cognitive processes as cascades of "small state function reductions" (SSFRs) analogous to "weak" measurements.
  1. Cognitive measurement would reduce the entanglement between L(H1) and L(H2), then between L(H2) and L(H3), and so on. The outcome would be an unentangled product of wave functions in the L(Hi) in the product L(H1)× L(H2)× .... This cascade of cognitive measurements has an interpretation as a quantum correlate for analysis as the factorization of a Galois group into its prime factors defined by simple Galois groups. A similar interpretation applies in M4 degrees of freedom.
  2. This decomposition could correspond to a replacement of P with a product ∏i Pi of irreducible polynomials Pi with degrees ni, n= n1n2...; the product defines a union of separate surfaces without any correlations. This process is very much analogous to analysis.
  3. The analysis cannot occur for simple Galois groups associated with extensions having no decomposition to simpler extensions. They could be regarded as correlates for irreducible primal ideas. In Eastern philosophies the notion of a state empty of thoughts could correspond to these cognitive states in which SSFRs cannot occur.
  4. An analogous process should make sense also in the gravitational sector and would mean the splitting of K=nA, appearing as a factor in ngr=Kp, into prime factors so that the sizes of the CDs involved with the resulting structure would be reduced. Note that e^(1/K) is the root of e defining the transcendental infinite-dimensional extension of rationals, which has the finite dimension Kp for the p-adic number field Qp. This process would reduce to a simultaneous measurement cascade in hyperbolic and trigonometric Abelian extensions. The IR cutoffs, having an interpretation as coherence lengths, would decrease in the process as expected. Nature would be performing ordinary prime factorization in the gravitational degrees of freedom.
This cognitive process would also have a geometric description.
  1. For the algebraic extensions of rationals (EQs), the geometric description would be as a decay of the n-sheeted 4-surface with respect to M4 into a union of ni-sheeted 4-surfaces by SSFRs. This would take place for flux tubes mediating all kinds of interactions.

    In gravitational degrees of freedom, that is for transcendental EQs, the states with ngr=Kp having bundles of Kp flux tubes would decay to flux tube bundles with ngr,i=Kip, where Ki is a prime dividing K. The quantity log(K) would be conserved in the process; it is analogous to the corresponding conserved quantity in arithmetic quantum field theories and relates to the TGD inspired notion of infinite prime (a toy illustration of the cascade is given after this list).

  2. This picture leads one to ask whether one could speak of cognitive analogs of particle reactions representing interactions of "thought bubbles", i.e. space-time surfaces as correlates of cognition. The incoming and outgoing states would correspond to a Cartesian product of simple subgroups: G = ∏i Hi. In this composition the order of the factors does not matter and the situation is analogous to a many-particle system without interactions. The non-commutativity in the general case leads one to ask whether quantum groups might provide a natural description of the situation.
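
The conservation of log(K) in the decay cascade can be mimicked by a toy factorization; K and p below are arbitrary illustrative choices:

    # Toy model of the gravitational cascade: a bundle with n_gr = K*p decays
    # into bundles n_gr,i = K_i*p with K = product of the K_i; log(K) is
    # additive over the factors, hence conserved. K and p are arbitrary.
    import math
    from sympy import factorint

    K, p = 60, 5
    primes = [q for q, e in factorint(K).items() for _ in range(e)]  # [2, 2, 3, 5]
    print([q * p for q in primes])    # the n_gr,i of the decay products
    print(math.isclose(math.log(K), sum(math.log(q) for q in primes)))  # True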

See the article Is M8-H duality consistent with Fourier analysis at the level of M4× CP2? or the chapter Breakthrough in understanding of M8-H duality.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD. 


Is M8-H duality consistent with Fourier analysis at the level of M4× CP2?

Is M8-H duality consistent with Fourier analysis at the level of M4× CP2? M8-H duality predicts that space-time surfaces as algebraic surfaces in complexified M8 (complexified octonions) determined by polynomials can be mapped to H=M4× CP2.

The proposal (see this) is that the strong form of M8-H duality in M4 degrees of freedom is realized by the inversion map pk∈ M4→ ℏeff×pk/p2. This conforms with the Uncertainty Principle. However, the polynomials do not involve periodic functions typically associated with the minimal space-time surfaces in H. Since M8 is analogous to momentum space, the periodicity is not needed.

In contrast to this, the representation of the space-time surfaces in H obeys dynamics, and the H-images of X4⊂ M8 should involve periodic functions and Fourier analysis for CP2 coordinates as functions of M4 coordinates.

The Neper number, and therefore trigonometric and exponential functions, are p-adically very special. In particular, e^p is a p-adic number so that the roots of e define finite-dimensional extensions of p-adic numbers. As a consequence, Fourier analysis, extended to allow the exponential functions required in the case of Minkowskian signature, is a number theoretically universal concept making sense also for p-adic number fields. The p-adic convergence of e^p can be checked directly, as the sketch below illustrates.
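
A minimal check with exact rational arithmetic (p = 5 for concreteness; the criterion used is the textbook one, namely that the p-adic valuation of the terms p^k/k! grows without bound):

    # The series exp(p) = sum p^k/k! converges p-adically: the valuation of
    # the terms grows without bound. Exact rational arithmetic, p = 5.
    from fractions import Fraction

    def v_p(q, p):
        """p-adic valuation of a nonzero rational q."""
        v, a, b = 0, q.numerator, q.denominator
        while a % p == 0: a //= p; v += 1
        while b % p == 0: b //= p; v -= 1
        return v

    p = 5
    term = Fraction(1)
    for k in range(1, 11):
        term = term * p / k            # term = p^k / k!
        print(k, v_p(term, p))         # the valuation grows -> convergence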

The map of the tangent space of the space-time surface X4⊂ M8 to CP2 involves the analog of velocity missing at the level of M8 and brings in the dynamics of minimal surfaces. Therefore the expectation is that the expansion of CP2 coordinates as exponential and trigonometric functions of M4 coordinates emerges naturally.

The possible physical interpretation of this picture is considered. The proposal is that the dimension of the extension of rationals (EQ) resp. the dimension of the transcendental extension defined by the roots of the Neper number corresponds to the relatively small values of heff assignable to gauge interactions resp. to the very large value of the gravitational Planck constant ℏgr originally introduced by Nottale.

Also the connections with the quantum model for cognitive processes as cascades of cognitive measurements in the group algebra of the Galois group (see this and this), and with its counterpart for the transcendental extension defined by the roots of e, are considered. The geometrical picture suggests that an interpretation of the cognitive process as an analog of a particle reaction emerges.

See the article Is M8-H duality consistent with Fourier analysis at the level of M4× CP2? or the chapter Breakthrough in understanding of M8-H duality.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, February 17, 2021

Can one regard leptons as effectively local 3-quark composites?

The idea about leptons as local composites of 3 quarks (see this) is strongly suggested by the mathematical structure of TGD. Later it was realized that it is enough that leptons look like local composites in scales longer than the CP2 scale defining the size of the partonic 2-surface assignable to the particle.

The proposal has profound consequences. One might say that SUSY in the TGD sense has been right under our nose for more than a century. The proposal could also solve the matter-antimatter asymmetry, since the twistor-lift of TGD predicts the analog of a Kähler structure for Minkowski space and a small CP breaking, which could make possible a cosmological evolution in which quarks prefer to form baryons and antiquarks to form leptons.

The basic objection is that the leptonic analog of Δ might emerge. One must explain why this state is absent, at least experimentally, and also develop a detailed model. In the article Can one regard leptons as effectively local 3-quark composites? the construction of leptons as effectively local 3-quark states, allowing an effective description in terms of the modes of a leptonic spinor field in H=M4× CP2 having H-chirality opposite to that of quark spinors, is discussed in detail.

See the article Can one regard leptons as effectively local 3-quark composites? and the chapter The Recent View about SUSY in TGD Universe.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD. 


Monday, February 08, 2021

Quantum asymmetry between space and time

I received a link to a popular article about a test of the proposal of Joan Vaccaro that if time reversal symmetry T were exact, our Universe would be radically different (thanks to Reza Rastmanesh and Gary Ehlenberg). For instance, wave functions would be wave packets in the 4-D sense and conservation laws would be lost. The breaking of T would however come to the rescue and give rise to the world in which we live. This proposal does not make sense in standard quantum theory, but JV proposes a modification of the path integral of single particle wave mechanics leading to this result.

I found that I could not understand anything of the popular article. I however found the article "Quantum asymmetry between space and time" by Joan Vaccaro. I tried to get a grasp of the formula jungle of the article but only became confused. I expected clear logical arguments but found none.

My comments are based mostly on the abstract of the article.

Asymmetry between time and space

[JV] An asymmetry exists between time and space in the sense that physical systems inevitably evolve over time, whereas there is no corresponding ubiquitous translation over space. The asymmetry, which is presumed to be elemental, is represented by equations of motion and conservation laws that operate differently over time and space.

My comments:

  1. One might argue like JV does if one does not keep in mind that Lorentz invariance allows one to distinguish between timelike and spacelike directions and to base the notion of causality on their properties.

    In Euclidian geometry there would be no such special choice of time coordinate. But also now the field equations would define a slicing of space-time into 3-D slices since the initial values at them would fix the solution. Now, however, the slices could be chosen in very many manners - for instance as 3-spheres rather than hyperplanes as in Minkowski space.

  2. JV argues that there is an asymmetry in the quantum description of space and time in conventional quantum theory. The spatial coordinates of a particle are treated as operators but time is not represented as an operator.

    The first misunderstanding is to think that the position operator of the particle would be a space-time coordinate: it is not, it specifies the position of the point-like particle in space.

    The second misunderstanding is to think that the configuration space of the particle would be the 4-D space-time. The configuration space of the particle in non-relativistic wave mechanics is 3-space, and time is the evolution parameter of the unitary time evolution, not a space-time coordinate. In the relativistic picture it could correspond to the proper time along a world line.

    In quantum field theory (QFT) the spatial and temporal coordinates are in a completely symmetric position. Wave mechanics is an approximation in which one considers only a single non-relativistic particle. One should start from QFT or something more advanced to see whether the idea makes sense.

  3. JV identifies subjective and geometric time, as practically all colleagues do. In geometric time the time evolution is determined by field equations and conservation laws. In TGD, zero energy ontology (ZEO) does not identify these two times and resolves the problems caused by their identification. The counterpart of time evolution with respect to subjective time is a sequence of small state function reductions.
  4. The asymmetry between the two time directions appears in two manners.
    1. There is the thermodynamical arrow of time, usually assumed to be always the same. In TGD both arrows are possible and the arrow changes in a "big" (ordinary) state function reduction (BSFR). Subjective time correlates with geometric time but is not identical with it, and is closely related to the thermodynamical breaking of time reversal.
    2. The field equations (geometric time) have a slightly broken time reflection symmetry T. This breaking is quite different from the generation of the thermodynamical arrow of time.

Could we give up the conservation laws and unitary time evolution and could the breaking of time reversal symmetry bring them back?

[JV] If, however, the asymmetry was found to be due to deeper causes, this conventional view of time evolution would need reworking. Here we show, using a sum-over-paths formalism, that a violation of time reversal (T) symmetry might be such a cause. If T symmetry is obeyed, then the formalism treats time and space symmetrically such that states of matter are localized both in space and in time.

In this case, equations of motion and conservation laws are undefined or inapplicable. However, if T symmetry is violated, then the same sum over paths formalism yields states that are localized in space and distributed without bound over time, creating an asymmetry between time and space. Moreover, the states satisfy an equation of motion (the Schrödinger equation) and conservation laws apply. This suggests that the time-space asymmetry is not elemental as currently presumed, and that T violation may have a deep connection with time evolution.

My comments:

  1. JV is ready to give up symmetries and conservation laws altogether in the new definition of the path integral but of course brings them in implicitly by the choice of the Hamiltonian and by using basic concepts like momentum and energy, which are lost if one does not have Poincare symmetry.

    What remains is an attempt to repair the horrible damage done. The hope is that the tiny breaking of T invariance would be capable of this feat.

  2. The author uses a lot of formulas to show that T breaking can save the world. There are however ad hoc assumptions, such as coarse graining and assumptions about the difference between the Hamiltonian and the time reversed Hamiltonian, argued to lead to the basic formulas of standard quantum theory.

    The proposed formulas are based on single particle wave mechanics and do not generalize to the level of QFT. If one is really ready to throw away the basic conservation laws and therefore the corresponding symmetries, also the basic starting point formulas become nonsensical.

    Holistic mathematical thinking would help present-day theoretical physicists enormously, but it is given the label "philosophical", which has the same emotional effect on the average colleague as "homeopathic". What my colleague called formula disease has been the basic problem of theoretical physics for more than half a century.

  3. This modification of path integral formula looks rather implausible to me.
    1. Giving up the arrow of time in the sum over paths formalism breaks the interpretation as a representation of Hamiltonian time evolution (the path integral is actually not mathematically well-defined and is meant to represent just the Schrödinger equation).

      If there were no asymmetry between time and space, quantum states would be wave packets in the 4-D sense rather than in the 3-D sense. This is of course complete nonsense and in conflict with conservation laws: an entire galaxy could appear from nothing and disappear. The author notices this but does not seem to worry about the consequences. By the way, in the TGD inspired theory of consciousness the mental image of the galaxy can do this, but this does not mean the disappearance of the galaxy!

      The use of wave mechanics, which is not Lorentz invariant, hides the loss of Lorentz invariance implied by the formalism, whereas the ordinary Schrödinger equation, as a non-relativistic approximation of the Dirac equation, does not genuinely break Lorentz invariance.

    2. It is optimistically assumed that the tiny breaking of T symmetry could change the situation completely so that the predictions of standard quantum theory would emerge. Somehow the extremely small breaking of T would bring back the arrow of time and save the conservation laws and the unitary time evolution, and we could be confident that our galaxy exists also tomorrow.

      Why would the contributions to the modified path integral for which the arrow of time is not fixed magically interfere to zero due to a tiny breaking of T invariance?

      The proposal seems to be in conflict with elementary particle physics. The view is that the neutral kaon is a superposition of a state and its T mirror image, and this means a tiny T breaking. Neutrinos also break T symmetry slightly. In this framework all other particles would represent complete T breaking. Usually the interpretation is just the opposite. This does not make sense to me.

    3. The test of the proposal would be based on the idea that neutrinos from a nuclear reactor define a flux diminishing as 1/r^2, r the distance from the reactor. This should somehow cause an effect on clocks proportional to 1/r^2 due to the incomplete destructive interference of the contributions breaking the ordinary picture. I do not understand the details of how this was thought to take place.

      The small T violation of neutrinos would affect the maximal T violation in the environment, somehow affect the local physics, and be visible as a modification of clock time - maybe via a modification of the Hamiltonian modelling the clock. This is really confusing, since precisely the small T violation is assumed to induce the selection of the arrow of time, which means maximal T violation!

To sum up, I am confused.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.