Saturday, February 27, 2021

TGD based explanation for the asymmetry between anti-u and anti-d sea quarks in proton

I encountered on Facebook a highly interesting popular article "Decades-Long Experiment Finds Strange Mix of Antimatter in The Heart of Every Proton" (see this).

The popular article tells about the article "The asymmetry of antimatter in the proton" by Dove et al published in Nature (see this). This article is behind a paywall, but the same issue of Nature has an additional article "Antimatter in the proton is more down than up" (see this) explaining the finding.

What is found is an asymmetry between sea antiquarks in the sense that there are slightly more d-type antiquarks (anti-d) than u-type antiquarks (anti-u) in the quark sea. This asymmetry does not seem to depend on the longitudinal momentum fraction of the antiquark: the ratio of the anti-down and anti-up distribution functions is larger than one and roughly constant.

A model assuming that the proton spends part of its time in a state consisting of a neutron and a virtual pion seems to fit the picture at a qualitative level. Unfortunately, the old-fashioned strong interaction theory based on nucleons and pions does not converge due to the quite too large value of the pion-proton coupling constant.

I looked at the situation in more detail and developed a simple TGD based model building on the already existing picture developed by taking seriously the so-called X boson as a 17.5 MeV particle and the empirical evidence for the scaled-down variants of the pion predicted by TGD (see this). What TGD can give is the replacement of virtual mesons with real on-mass-shell mesons with p-adically scaled-down masses, and a concrete topological description of strong interactions at the hadronic and nuclear level in terms of reconnections of flux tubes.

1. Basic data about quark and nucleon masses

To get a quantitative grasp about the situation, one can first see what is known about masses of u and d quarks.

  1. One estimate for the u and d quark masses (one must take the estimates very cautiously) can be found here.

    The mass ranges are 1.7-3.3 MeV for u and 4.1-5.8 MeV for d.

  2. In the first approximation the n-p mass difference 1.3 MeV would be just the d-u mass difference, which according to the quoted ranges varies between 0.8 MeV and 4.1 MeV, and has the correct sign and correct order of magnitude.
  3. Coulomb interactions give a contribution which is vanishing for the proton and negative for the neutron:

    Ec(n) = -α × ℏc/(3Re),

    where Re is the proton's electromagnetic size scale.

    This contribution reduces the neutron mass. If Re is taken to be the proton Compton radius, this gives about Ec ≈ -3.2 MeV. This would predict the n-p mass difference in the range -1.1-0.9 MeV. This favors the maximal values m(u) = 1.7 MeV and m(d) = 5.8 MeV: the d-u mass difference would be 4.1 MeV, roughly 8 times the electron mass.
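The arithmetic above can be collected into a few lines; the quark mass ranges and the Coulomb estimate Ec ≈ -3.2 MeV are the inputs quoted in the text, not independent determinations:

```python
# Quark mass ranges quoted in the text (MeV)
m_u = (1.7, 3.3)
m_d = (4.1, 5.8)

# Range of the d-u mass difference implied by the quoted ranges
du_min = m_d[0] - m_u[1]   # 0.8 MeV
du_max = m_d[1] - m_u[0]   # 4.1 MeV

# Coulomb contribution to the neutron mass quoted in the text (MeV)
E_c = -3.2

# Predicted n-p mass difference: (d-u mass difference) + E_c
np_min = du_min + E_c
np_max = du_max + E_c

print(f"d-u difference range: {du_min:.1f} .. {du_max:.1f} MeV")
print(f"n-p mass difference range: {np_min:.1f} .. {np_max:.1f} MeV")
```

The upper end, 4.1 MeV - 3.2 MeV = 0.9 MeV, reproduces the 0.9 MeV quoted above.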

2. TGD based picture about hadronic and nuclear interactions

Consider first the TGD inspired topological model for hadronic and nuclear interactions.

  1. The notion of magnetic body (MB) assignable to color and electroweak interactions is essential. In the quantum field theory framework interactions are described by virtual particle exchanges. In TGD they are described by reconnections of U-shaped flux tubes, which act like tentacles.

    In interaction these tentacles reconnect and give rise to a pair of flux tubes connecting the particles. The flux tubes would carry monopole flux so that single flux tube cannot be split. These flux tube pairs serve also as correlates of entanglement replacing wormholes as their correlates in ER-EPR picture.

    This picture looks rather biological and was developed first as a model of bio-catalysis. The picture should apply quite generally to short range interactions at least.

  2. The U-shaped flux tubes of the color MB replace the virtual pion and rho meson exchanges of the old-fashioned picture of strong interactions. In the TGD framework they represent real particles but with p-adically scaled-down masses. For instance, pions are predicted to have scaled-down variants with masses differing by a negative power of 2 from the pion mass. The same is true for the rho. In the nuclear context the masses would be below the MeV range, which is the energy scale of nuclear strong interactions.

    Also nuclear strong interactions would occur in this manner. The fact that flux tubes are much longer than nuclear size would explain the mysterious finding that in nuclear decays the fragments manage to generate their angular momenta after the reaction: the flux tubes would make possible the exchange of angular momentum required by angular momentum conservation.
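The p-adically scaled-down pion variants mentioned in point 2 can be tabulated with a few lines of code. The charged pion mass is the standard value; the tentative matching of the k=3 scaling to the reported ~17.5 MeV X boson is the speculative part suggested by the text:

```python
m_pion = 139.57  # charged pion mass, MeV (standard value)

# p-adically scaled-down variants: mass scaled by a negative power of 2
for k in range(1, 10):
    m_scaled = m_pion / 2**k
    print(f"k={k}: {m_scaled:8.3f} MeV")

# k=3 gives about 17.4 MeV, close to the reported ~17.5 MeV X boson mass
```

For k around 8-9 the scaled masses drop below 1 MeV, the nuclear energy scale mentioned above.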

3. A model for the anti-quark asymmetry

Consider now a model for the antiquark asymmetry of the sea.

  1. Quarks and antiquarks would appear at these flux tubes. The natural first guess is that meson-like states are in question.

    The generation of a u-anti-d type pion or rho would transform the proton to a neutron if the valence u transforms to a valence d by emitting a W boson with scaled-down mass.

    Note that the scaling down would make the weak interaction stronger, since the weak boson exchange amplitude is proportional to 1/mW2.

    This would give the analog of a neutron plus a charged virtual pion. Adding a second sea quark pair would lead to trouble: the Coulomb interaction energy, about -10 MeV, between a negatively charged sea and the positively charged valence part of the proton would be too large if the sea is of the same size as the proton.

  2. Does the scaled-down W decay to u-anti-d, forming a scaled-down meson? Or should one regard u-anti-d as a scaled-down W which, being massive, also has a spin-zero state analogous to the pion?
  3. Here comes a connection with old-fashioned and long ago forgotten hadron physics. The partially conserved axial current hypothesis (PCAC) gives a connection between strong and weak interactions, forgotten when QCD emerged as the final theory. PCAC says that the divergences of the axial weak currents associated with the weak bosons are proportional to the pion field.

    Are the two pictures more or less equivalent? Virtual pion exchange could be regarded as a weak interaction! Also the conserved vector current hypothesis (CVC) is part of this picture. This is not new: I have developed this picture earlier in an attempt to understand what the reported X boson with 17.5 MeV mass is in the TGD framework. A scaled-down pion would be in question (see this).

  4. What about masses? Since the flux loop would have a considerably greater size than the proton, the mass scale of the u-anti-d state would be smaller than, say, an MeV, and its contribution to the proton mass would be small.
  5. Why the asymmetry between the antiquarks of the sea? The generation of a d-anti-u loop would increase the charge of the core region by one unit and transform it to Δ++. This looks neither plausible nor probable. The proton would be a superposition consisting mostly of the proton of good old QCD and of a neutron plus a flux loop with the quantum numbers of a scaled-down pion.
  6. Also the presence of scaled-down ρ meson loops can be considered. Their presence would turn the spin of the core part of the proton opposite for some fraction of time. One can wonder whether this could relate to the proton spin puzzle.
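The claim in point 1 that scaling down the W mass strengthens the weak interaction can be quantified: the low-energy exchange amplitude is proportional to 1/mW2, so a p-adic scaling mW → mW/2^k enhances the amplitude by 2^(2k). A minimal sketch (the values of k are purely illustrative):

```python
m_W = 80.4e3  # W boson mass in MeV

def amplitude_enhancement(k):
    """Enhancement factor of the weak exchange amplitude (prop. to 1/m_W^2)
    when the W mass is p-adically scaled down by 2^k."""
    return (2**k)**2

for k in (1, 10, 20):
    print(f"k={k}: m_W -> {m_W / 2**k:.3g} MeV, amplitude x {amplitude_enhancement(k):.3g}")
```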
For the TGD based model of X boson see the article "X boson as evidence for nuclear string model".

See the article The asymmetry of antimatter in proton from TGD point of view.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, February 25, 2021

Is analysis as a cognitive process analogous to particle reaction?

The latest work with the number theoretical aspects of M8-H duality (see this) was related to the question whether it allows, and in fact implies, Fourier analysis in a number theoretically universal manner at the level of H= M4× CP2.

The problem is that at the level of M8, which is analogous to momentum space, polynomials define the space-time surfaces, and number theoretically periodic functions are not a natural or even possible basis. At the level of H the space-time surfaces can be regarded as orbits of 3-surfaces representing particles and are dynamical, so that Fourier analysis is natural.

That this is the case is suggested by the fact that M8-H duality maps the normal spaces of the space-time surface to points of CP2. The normal space is essentially the velocity space for surface deformations normal to the surface, which define the dynamical degrees of freedom by general coordinate invariance. Therefore dynamics enters the picture. It turns out that the conjecture finds support.

Consider now the topic of this posting. The number theoretic vision about TGD is forced by the need to describe the correlates of cognition. It is not totally surprising that these considerations lead to new insights related to the notion of cognitive measurement, involving a cascade of measurements in the group algebra of the Galois group, as a possible model for analysis as a cognitive process (see this, this and this).

  1. The dimension n of the extension of rationals as the degree of the polynomial P=Pn1∘ Pn2∘ ... is the product of the degrees ni: n=∏ini, and one has a hierarchy of Galois groups Gi associated with Pni∘.... Gi+1 is a normal subgroup of Gi so that the coset space Hi=Gi/Gi+1 is a group of order ni. The groups Hi are simple and do not have this kind of decomposition: the simple finite groups appearing as building bricks of finite groups are classified. Simple groups are the primes of finite groups.
  2. The wave function in the group algebra L(G) of the Galois group G of P has a representation as an entangled state in the product of the simple group algebras L(Hi). Since the Galois groups act on the space-time surfaces in M8, they do so also in H. One obtains wave functions in the space of space-time surfaces. G has a decomposition into a product (not Cartesian in general) of simple groups. In the same manner, L(G) has a representation in terms of entangled states assignable to the L(Hi) (see this and this).
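The multiplicativity of degrees under functional composition, n = ∏ini, which underlies the hierarchy of Galois groups above, can be checked directly. A minimal sketch using plain coefficient-list polynomials (the example polynomials are arbitrary choices):

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_compose(p, q):
    """Compute p(q(x)) for coefficient lists (lowest degree first) via Horner-free expansion."""
    result = [0]
    power = [1]  # q(x)^0
    for a in p:
        result = [x + a * y for x, y in
                  zip(result + [0] * (len(power) - len(result)), power)]
        power = poly_mul(power, q)
    return result

def degree(p):
    return len(p) - 1

P2 = [1, 0, 1]      # x^2 + 1, degree 2
P3 = [-1, 0, 0, 1]  # x^3 - 1, degree 3
comp = poly_compose(P2, P3)
print(degree(comp))  # prints 6 = 2*3
```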
This picture leads to a model of cognitive processes as cascades of "small state function reductions" (SSFRs) analogous to "weak" measurements.
  1. Cognitive measurement would reduce the entanglement between L(H1) and L(H2), then between L(H2) and L(H3), and so on. The outcome would be an unentangled product of wave functions in the L(Hi) in the product L(H1)× L(H2)× .... This cascade of cognitive measurements has an interpretation as a quantum correlate for analysis as the factorization of a Galois group to its prime factors defined by simple Galois groups. A similar interpretation applies in M4 degrees of freedom.
  2. This decomposition could correspond to a replacement of P with a product ∏i Pi of polynomials with degrees ni, n= n1n2...; the product is reducible and defines a union of separate surfaces without any correlations. This process is very much analogous to analysis.
  3. The analysis cannot occur for simple Galois groups associated with extensions having no decomposition to simpler extensions. They could be regarded as correlates for irreducible primal ideas. In Eastern philosophies the notion of state empty of thoughts could correspond to these cognitive states in which SSFRs cannot occur.
  4. An analogous process should make sense also in the gravitational sector and would mean the splitting of K=nA, appearing as a factor in ngr=Kp, to prime factors, so that the sizes of the CDs involved with the resulting structure would be reduced. Note that e^(1/K) is a root of e defining a transcendental infinite-dimensional extension of rationals, which has finite dimension Kp for the p-adic number field Qp. This process would reduce to a simultaneous measurement cascade in hyperbolic and trigonometric Abelian extensions. The IR cutoffs, having an interpretation as coherence lengths, would decrease in the process as expected. Nature would be performing ordinary prime factorization in the gravitational degrees of freedom.
This cognitive process would also have a geometric description.
  1. For the algebraic EQs, the geometric description would be the decay of an n-sheeted 4-surface, with respect to M4, to a union of ni-sheeted 4-surfaces by SSFRs. This would take place for flux tubes mediating all kinds of interactions.

    In gravitational degrees of freedom, that is for transcendental EQs, the states with ngr=Kp having bundles of Kp flux tubes would decay to flux tube bundles with ngr,i=Kip, where Ki is a prime dividing K. The quantity log(K) would be conserved in the process and is analogous to the corresponding conserved quantity in arithmetic quantum field theories; it relates to the notion of infinite prime inspired by TGD.

  2. This picture leads one to ask whether one could speak of cognitive analogs of particle reactions representing interactions of "thought bubbles", i.e. space-time surfaces as correlates of cognition. The incoming and outgoing states would correspond to a Cartesian product of simple subgroups: G=∏i Hi. In this decomposition the order of the factors does not matter and the situation is analogous to a many-particle system without interactions. The non-commutativity in the general case leads one to ask whether quantum groups might provide a natural description of the situation.
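The conservation of log(K) in the proposed decay of flux tube bundles, ngr = Kp → ngr,i = Kip, is just the additivity of logarithms over the prime factorization K = ∏Ki. A minimal sketch (the value K = 12 is an illustrative choice):

```python
import math

def prime_factors(n):
    """Return the prime factorization of n as a list (with multiplicity)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

K = 12  # illustrative value of K in n_gr = K*p
Ks = prime_factors(K)
print(Ks)  # prints [2, 2, 3]

# log(K) is conserved in the decay: the sum of log(K_i) equals log(K)
assert math.isclose(sum(math.log(k) for k in Ks), math.log(K))
```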

See the article Is M8-H duality consistent with Fourier analysis at the level of M4× CP2? or the chapter Breakthrough in understanding of M8-H duality.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD. 

Is M8-H duality consistent with Fourier analysis at the level of M4× CP2?

M8-H duality predicts that space-time surfaces, realized as algebraic surfaces in complexified M8 (complexified octonions) determined by polynomials, can be mapped to H=M4× CP2.

The proposal (see this) is that the strong form of M8-H duality in M4 degrees of freedom is realized by the inversion map pk∈ M4→ ℏeff×pk/p2. This conforms with the Uncertainty Principle. However, the polynomials do not involve the periodic functions typically associated with the minimal space-time surfaces in H. Since M8 is analogous to momentum space, periodicity is not needed there.

In contrast, the representations of the space-time surfaces in H obey dynamics, and the H-images of X4⊂ M8 should involve periodic functions and Fourier analysis for the CP2 coordinates as functions of the M4 coordinates.

Neper's number, and therefore trigonometric and exponential functions, are p-adically very special. In particular, ep is a p-adic number so that roots of e define finite-dimensional extensions of p-adic numbers. As a consequence, Fourier analysis, extended to allow the exponential functions required in the case of Minkowskian signature, is a number theoretically universal concept making sense also for p-adic number fields.

The map of the tangent space of the space-time surface X4⊂ M8 to CP2 involves the analog of velocity, missing at the level of M8, and brings in the dynamics of minimal surfaces. Therefore the expectation is that the expansion of CP2 coordinates as exponential and trigonometric functions of M4 coordinates emerges naturally.

The possible physical interpretation of this picture is considered. The proposal is that the dimension of an extension of rationals (EQ), resp. the dimension of the transcendental extension defined by roots of Neper's number, corresponds to relatively small values of heff assignable to gauge interactions, resp. to the very large value of the gravitational Planck constant ℏgr originally introduced by Nottale.

Also the connections with the quantum model for cognitive processes as cascades of cognitive measurements in the group algebra of the Galois group (see this and this), and with its counterpart for the transcendental extension defined by roots of e, are considered. The geometric picture suggests that an interpretation of the cognitive process as an analog of a particle reaction emerges.

See the article Is M8-H duality consistent with Fourier analysis at the level of M4× CP2? or the chapter Breakthrough in understanding of M8-H duality.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, February 17, 2021

Can one regard leptons as effectively local 3-quark composites?

The idea about leptons as local composites of 3 quarks (see this) is strongly suggested by the mathematical structure of TGD. Later it was realized that it is enough that leptons look like local composites in scales longer than the CP2 scale defining the size of the partonic 2-surface assignable to the particle.

The proposal has profound consequences. One might say that SUSY in the TGD sense has been below our nose for more than a century. The proposal could also solve the matter-antimatter asymmetry, since the twistor lift of TGD predicts the analog of Kähler structure for Minkowski space and a small CP breaking, which could make possible a cosmological evolution in which quarks prefer to form baryons and antiquarks to form leptons.

The basic objection is that a leptonic analog of Δ might emerge. One must explain why this state is at least experimentally absent and also develop a detailed model. In the article Can one regard leptons as effectively local 3-quark composites? the construction of leptons as effectively local 3-quark states, allowing an effective description in terms of the modes of a leptonic spinor field in H=M4× CP2 having H-chirality opposite to quark spinors, is discussed in detail.

See the article Can one regard leptons as effectively local 3-quark composites? and the chapter The Recent View about SUSY in TGD Universe.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD. 

Monday, February 08, 2021

Quantum asymmetry between space and time

I received a link to a popular article about a test of the proposal of Joan Vaccaro that if time reversal symmetry T were exact, our Universe would be radically different (thanks to Reza Rastmanesh and Gary Ehlenberg). For instance, wave functions would be wave packets in the 4-D sense and conservation laws would be lost. The breaking of T would however come to the rescue and give rise to the world in which we live. This proposal does not make sense in standard quantum theory, but JV proposes a modification of the path integral of single particle wave mechanics leading to this result.

I found that I could not understand anything in the popular article. I however found the article "Quantum asymmetry between space and time" by Joan Vaccaro. I tried to get a grasp of the formula jungle of the article but became only confused. I expected clear logical arguments but found none.

My comments are based mostly on the abstract of the article.

Asymmetry between time and space

[JV] An asymmetry exists between time and space in the sense that physical systems inevitably evolve over time, whereas there is no corresponding ubiquitous translation over space. The asymmetry, which is presumed to be elemental, is represented by equations of motion and conservation laws that operate differently over time and space.

My comments:

  1. One might argue like JV does if one does not keep in mind that Lorentz invariance allows one to distinguish between timelike and spacelike directions and to base the notion of causality on their properties.

    In Euclidean geometry there would be no such special choice of time coordinate. But also then the field equations would define a slicing of space-time into 3-D slices, since initial values on them would fix the time evolution. Now however the slices could be chosen in very many manners - for instance 3-spheres rather than hyperplanes as in Minkowski space.

  2. JV argues that there is an asymmetry in the description of space and time in conventional quantum theory. The spatial coordinates of a particle are treated as operators but time is not represented as an operator.

    The first misunderstanding is that the position operator of the particle is not a space-time coordinate but specifies the position of a point-like particle in 3-space.

    The second misunderstanding is to think that the configuration space of the particle would be 4-D space-time. The configuration space of a particle in non-relativistic wave mechanics is 3-space, and time is the evolution parameter for unitary time evolution, not a space-time coordinate. In the relativistic picture it could correspond to the proper time along a world line.

    In quantum field theory (QFT) the spatial and temporal coordinates are in a completely symmetric position. Wave mechanics is an approximation in which one considers only a single non-relativistic particle. One should start from QFT or some more advanced framework to see whether the idea makes sense.

  3. JV identifies subjective and geometric time as practically all colleagues do. In geometric time the time evolution is determined by field equations and conservation laws. The TGD based zero energy ontology (ZEO) does not identify these times and resolves the problems caused by their identification. The counterpart of time evolution with respect to subjective time is a sequence of small state function reductions.
  4. The asymmetry between the two time directions appears in two manners.
    1. There is the thermodynamical arrow of time, usually assumed to be always the same. In TGD both arrows are possible and the arrow changes in a "big" (ordinary) state function reduction (BSFR). Subjective time correlates with geometric time but is not identical with it, and is closely related to the thermodynamical breaking of time reversal.
    2. The field equations (geometric time) have a slightly broken time reflection symmetry T. This breaking is quite different from the generation of the thermodynamical arrow of time.

Could we give up the conservation laws and unitary time evolution and could the breaking of time reversal symmetry bring them back?

[JV] If, however, the asymmetry was found to be due to deeper causes, this conventional view of time evolution would need reworking. Here we show, using a sum-over-paths formalism, that a violation of time reversal (T) symmetry might be such a cause. If T symmetry is obeyed, then the formalism treats time and space symmetrically such that states of matter are localized both in space and in time.

In this case, equations of motion and conservation laws are undefined or inapplicable. However, if T symmetry is violated, then the same sum over paths formalism yields states that are localized in space and distributed without bound over time, creating an asymmetry between time and space. Moreover, the states satisfy an equation of motion (the Schrödinger equation) and conservation laws apply. This suggests that the time-space asymmetry is not elemental as currently presumed, and that T violation may have a deep connection with time evolution.

My comments:

  1. JV is ready to give up symmetries and conservation laws altogether in the new definition of the path integral, but of course brings them in implicitly by the choice of Hamiltonian and by using basic concepts like momentum and energy, which are lost if one does not have Poincare symmetry.

    What remains is an attempt to repair the horrible damage done. The hope is that the tiny breaking of T invariance would be capable of this feat.

  2. The author uses a lot of formulas to show that T breaking can save the world. There are however ad hoc assumptions, such as coarse graining and assumptions about the difference between the Hamiltonian and the time reversed Hamiltonian, argued to lead to the basic formulas of standard quantum theory.

    The proposed formulas are based on single particle wave mechanics and do not generalize to the level of QFT. If one is really ready to throw away the basic conservation laws, and therefore the corresponding symmetries, also the basic starting point formulas become nonsensical.

    Holistic mathematical thinking would help present-day theoretical physicists enormously, but it is given the label "philosophical", which has the same emotional effect as "homeopathic" on the average colleague. What my colleague called formula disease has been the basic problem of theoretical physics for more than half a century.

  3. This modification of path integral formula looks rather implausible to me.
    1. Giving up the arrow of time in the sum over paths formalism breaks the interpretation as a representation of Hamiltonian time evolution (the path integral is mathematically actually not well-defined and is meant to represent just the Schroedinger equation).

      If there were no asymmetry between time and space, quantum states would be wave packets in the 4-D sense rather than in the 3-D sense. This is of course complete nonsense and in conflict with conservation laws: an entire galaxy could appear from nothing and disappear. The author notices this but does not seem to worry about the consequences. By the way, in the TGD inspired theory of consciousness the mental image of the galaxy can do this, but this does not mean the disappearance of the galaxy!

      The use of wave mechanics, which is not Lorentz invariant, hides the loss of Lorentz invariance implied by the formalism, whereas the ordinary Schrödinger equation, as a non-relativistic approximation of the Dirac equation, does not break Lorentz invariance at the fundamental level.

    2. It is optimistically assumed that the tiny breaking of T symmetry could change the situation completely so that the predictions of standard quantum theory would emerge. Somehow the extremely small breaking of T would bring back the arrow of time and save conservation laws and unitary time evolution, and we could be confident that our galaxy exists also tomorrow.

      Why would the contributions to the modified path integral for which the time arrow is not fixed magically interfere to zero by a tiny breaking of T invariance?

      The proposal seems to be in conflict with elementary particle physics. The standard view is that the neutral kaon is a superposition of a state and its T mirror image, and this means a tiny T breaking. Neutrinos also break T symmetry slightly. In JV's framework all other particles would represent complete T breaking. Usually the interpretation is just the opposite. This does not make sense to me.

    3. The test of the proposal would be based on the idea that neutrinos from a nuclear reactor define a flux diminishing as 1/r2, r being the distance from the reactor. This should somehow cause an effect on clocks proportional to 1/r2 due to the incomplete destructive interference of the contributions breaking the ordinary picture. I do not understand the details of how this was thought to take place.

      The small T violation of neutrinos would affect the maximal T violation in the environment and somehow affect the local physics, being visible as a modification of clock time - maybe via a modification of the Hamiltonian modelling the clock. This is really confusing, since just the small T violation is assumed to induce the selection of the arrow of time, meaning maximal T violation!

To sum up, I am confused.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

A test for the quantum character of gravity as a test for quantum TGD

The following represents comments on a popular article describing a proposed experiment possibly confirming the quantum character of gravity. Thanks to Gary Ehlenberg for the link.

The proposal in idealized form is the following.

  1. One has two diamonds which are originally in a superposition of two opposite spin directions.
  2. One applies a magnetic field with a gradient. We know that in an external magnetic field which is not constant, a particle with spin is deflected in a direction which is opposite for opposite spin directions. If one has a superposition of spins, one obtains a superposition of deflected spin states with different spatial positions. The diamond can be said to be at two positions simultaneously. This is the key effect and leads to the question whether and how it could be described in general relativity.
Consider now the situation in various approaches to quantum gravity.
  1. The situation is problematic from the point of view of classical GRT, since the gravitational fields created by the different spin components of the superposition would correspond to different distributions of mass. The classical GRT space-time determined by Einstein's equations would not be unique. In order to avoid this, one could argue that a state function reduction (SFR) occurs immediately so that the spin is well-defined and the space-time is unique. This looks very ugly, but to my surprise Dyson takes this possibility seriously.
  2. In quantum gravity based on the GRT view it seems that one should allow a superposition of space-times or at least of 3-spaces. The superposition of 3-geometries would conform with the Wheelerian superspace view but leads to problems with time and conservation laws quite generally. Wheeler's superspace did not work mathematically.
  3. In the description in terms of quantized gravitons one would give up the GRT picture altogether, and the string model view is something akin to this. Unfortunately, the attempts to formulate quantum gravitation as a field theory in empty Minkowski space using Einstein's action fail. The divergences are horrible and renormalization is not possible. String models in turn try to get space-time by spontaneous compactification, but this led to the landscape misery.
TGD based picture avoids the problems of these approaches.
  1. In zero energy ontology (ZEO) the quantum states are superpositions of space-time surfaces: not superpositions of general 4-geometries, which do not even make sense in standard QM, nor of 3-surfaces as they would be in a generalization of Wheeler's superspace approach. The space-time surfaces would be preferred extremals of the action defining the theory, and one obtains holography from general coordinate invariance: the 3-surface determines the space-time surface just as a point of a Bohr orbit determines the Bohr orbit.
  2. In this picture the superposition of states with the position of the particle depending on spin direction corresponds to a superposition of different space-time surfaces. Also the entanglement between the various components of the state, involving the two spin-superposed states of the two diamonds, is possible and is generated in the time evolution. Entanglement would have as a correlate magnetic flux tubes connecting the diamonds. One would have a superposition of space-time surfaces corresponding to different lengths of the flux tubes connecting the spin states at various positions. There is no reason to expect that entanglement would not be generated. The expected positive outcome of the experiment could be seen as a proof not only of the quantum character of gravitation but also as evidence for the superposition of space-time surfaces and for ZEO.
  3. Diamonds are rather large objects, and the small value of Planck constant is the trouble in the standard approach. It might be possible to overcome this problem in TGD. TGD predicts that dark matter corresponds to phases of ordinary matter with arbitrarily large values of heff at the MB. Flux tubes would have macroscopic quantum coherence in the scale of the flux tube and could induce coherence of the ordinary matter (the diamonds) at their ends by serving as controllers of the ordinary matter. This would not be quantum coherence of the diamonds themselves but would make visible the quantum coherence at the level of the MB.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.