https://matpitka.blogspot.com/2017/04/

Sunday, April 30, 2017

Phase transition from M107 hadron physics to M89 hadron physics as counterpart for de-confinement phase transition?

Quark gluon plasma assigned to the de-confinement phase transition predicted by QCD has turned out to be a problematic notion. The original expectation was that quark gluon plasma (QGP) would be created in heavy ion collisions. A candidate for QGP was discovered already at RHIC but did not quite have the expected properties such as a black body spectrum: instead it behaved like an ideal liquid with long range correlations between charged particle pairs created in the collision. Then LHC discovered that this phase is created even in proton-heavy nucleus collisions. Now this phase has been discovered even in proton-proton collisions. This is something unexpected and both a challenge and an opportunity for TGD.

In the TGD framework QGP is replaced with a quantum critical state appearing in the transition from ordinary hadron physics characterized by the Mersenne prime M107 to a dark variant of M89 hadron physics characterized by heff/h=n=512. At criticality partons are hybrids of M89 and M107 partons with the Compton length of ordinary partons and mass m(89) ≤ 512×m(107). The inequality follows from a possible 1/512 fractionization of mass and other quantum numbers. The observed strangeness enhancement can be understood as a violation of quark universality if the gluons of M89 hadron physics correspond to a second generation of gluons, whose couplings necessarily break quark universality.
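
The mass and Compton scale arithmetic above follows from the p-adic length scale hypothesis (mass scale ∝ 2^(-k/2) for a Mersenne prime Mk = 2^k-1). A minimal sketch checking the numbers quoted in the text; the function names are mine:

```python
# p-Adic mass scale ratio between M_k1 and M_k2 hadron physics: 2^((k2-k1)/2)
def mass_scale_ratio(k1, k2):
    return 2 ** ((k2 - k1) / 2)

ratio = mass_scale_ratio(89, 107)
print(ratio)          # 512.0, i.e. m(89) <= 512*m(107)

# With heff/h = n = 512 the Compton length hbar_eff/m(89) of a dark M89 parton
# equals hbar/m(107), the Compton length of the ordinary M107 parton:
n = 512
print(n / ratio)      # 1.0 -> Compton lengths match at quantum criticality
```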

The violation of quark universality would be the counterpart of the violation of lepton universality, and the simplest hypothesis that the charge matrices acting on family triplets are the same for quarks and leptons allows one to understand also the strangeness enhancement qualitatively.

See the chapter New Physics predicted by TGD: I of "p-Adic length scale hypothesis" and the article Phase transition from M107 hadron physics to M89 hadron physics as counterpart for de-confinement phase transition?.

For a summary of earlier postings see Latest progress in TGD.


Monday, April 24, 2017

Two steps towards understanding of the origins of life

Two highly interesting findings providing insights about the origins of life have emerged and it is interesting to see how they fit to the TGD inspired vision.

The group led by Thomas Carell has made an important step in the understanding of the origins of life (see this). They have identified a mechanism leading to the generation of the purines A and G, which besides the pyrimidines C and T (U) are the basic building bricks of DNA and RNA. The crucial step is to make the solution slightly acidic by adding protons. A year later I learned that a variant of the Urey-Miller experiment, using laser pulses to simulate shock waves perhaps generated by extraterrestrial impacts, generates formamide, and this in turn leads to the generation of all 4 RNA bases (see the popular article and article).

These findings represent a fascinating challenge for TGD inspired quantum biology. The proposal is that formamide is the unique amide, which can form stable bound states with dark protons and is crucial for the development of life as dark matter-visible matter symbiosis. The Pollack effect would generate electron-rich exclusion zones and dark protons at magnetic flux tubes. Dark protons would bind stably with this unique amide leaving its chemical properties intact. This would lead to the generation of purines and the 4 RNA bases. This would be the starting point of life as a symbiosis of ordinary matter and dark matter as large heff/h=n phases of ordinary matter generated at quantum criticality induced by say extraterrestrial impacts. The TGD based model for cold fusion and the recent results about a superdense phase of hydrogen, identifiable in the TGD framework as dark proton sequences giving rise to dark nuclear strings, provide support for this picture.

There is however a problem: a reductive environment (with ability to donate electrons) is needed in these experiments, and it seems that the early atmosphere was not reductive. In the TGD framework one can imagine two - not mutually exclusive - solutions to the problem. Either life evolved in underground oceans, where oxygen concentration was small, or the Pollack effect gave rise to negatively charged and thus reductive exclusion zones (EZs) as protons were transferred to dark protons at magnetic flux tubes. The function of UV radiation, catalytic action, and shock waves would be the generation of quantum criticality inducing the creation of EZs making possible dark heff/h=n phases.

For details and background see the article Two steps towards understanding of the origins of life or the chapter Evolution in Many-Sheeted Space-Time.

For a summary of earlier postings see Latest progress in TGD.


Sunday, April 23, 2017

Breaking of lepton universality seems to be real

The evidence for the violation of lepton universality is accumulating at LHC. I have written about the violation of lepton universality in the decays of B and K mesons already earlier, explaining it in terms of two higher generations of electroweak bosons. The three fermion generations, having a topological explanation in TGD, can be regarded formally as a triplet of SU(3). One can speak of family-SU(3).

Electroweak bosons and gluons belong to the singlet and octet of family-SU(3), and the natural assumption is that only the singlet (ordinary gauge bosons) and the two SU(3) neutral states of the octet are light. One would have effectively 3 generations of electroweak bosons and gluons. Their charge matrices would be orthogonal with respect to the inner product defined by trace, so that both quark and lepton universality would be broken in the same manner. The strongest assumption is that the charge matrices in flavor space are the same for all weak bosons. The CKM mixing for neutrinos complicates this picture by affecting the branching ratios of charged weak bosons.

Quite recently I noticed that a second generation of Z boson could explain the different values of the proton charge radius determined from hydrogen and muonic hydrogen as one manifestation of the violation of universality (see this). The concept of charge matrix is discussed in more detail in this post.

I learned quite recently about new data concerning the B meson anomalies. The experimental ideas are explained here. It is interesting to look at the results in more detail from the TGD point of view.

  1. There is about a 4.0 σ deviation from τ/l universality (l=μ,e) in b→ c transitions. In terms of branching ratios one has:

    R(D*) = Br(B→ D*τντ)/Br(B→ D*lνl) = 0.316 +/- 0.016 +/- 0.010 ,

    R(D) = Br(B→ Dτντ)/Br(B→ Dlνl) = 0.397 +/- 0.040 +/- 0.028 ,

    The corresponding SM values are R(D*)|SM = 0.252 +/- 0.003 and R(D)|SM = 0.300 +/- 0.008. My understanding is that the normalization factor in the ratio involves the total rate to D*lνl, l=μ, e, involving only a single neutrino in the final state, whereas the τντ decays involve 3 neutrinos due to the neutrino pair from the τ decay, implying a broad distribution for the missing mass. A quick numerical check of the quoted deviations is sketched after this list.

    The decays to τντ are clearly preferred, as if there were an exotic W boson preferring to decay to τν over lν, l=e,μ. In TGD it could be the second generation W boson. Note that CKM mixing of neutrinos could also affect the branching ratios.

  2. Since these decays are mediated at tree level in the SM, relatively large new physics contributions are necessary to explain these deviations. There is also a 2.6 σ deviation from μ/e universality in the dilepton invariant mass bin 1 GeV^2 ≤ q^2 ≤ 6 GeV^2 in b→ s transitions:

    R(K) = Br(B→ Kμ+μ-)/Br(B→ Ke+e-) = 0.745 +0.090/-0.074 +/- 0.038

    This deviates from the SM prediction R(K)|SM = 1.0003 +/- 0.0001.

    This suggests the existence of an analog of the Z boson preferring to decay to e+e- rather than μ+μ- pairs.

    If the charge matrices acting on the dynamical family-SU(3) fermion triplet do not depend on the electroweak boson, and neutrino CKM mixing is neglected for the decays of the second generation W, the data for the branching ratios R(D) and R(D*) imply that the decays to e+e- and τ+τ- should be favored over the decays to μ+μ-. Orthogonality of the charge matrices plus the above data could allow one to fix them rather precisely. It might be that one must take into account the CKM mixing.

  3. CMS recently also searched for the decay h→ τμ and found a non-zero result of Br(h→ τμ) = 0.84 +0.39/-0.37 per cent, which deviates by about 2.4 σ from the SM value of 0. I have proposed an explanation for this finding in terms of CKM mixing for leptons. h would decay to a W+W- pair, which would exchange a neutrino transforming to a τμ pair by neutrino CKM mixing.

  4. According to the reference, the lower bound for the mass of Z' is 2.9 TeV - just the TGD prediction if it corresponds to the Gaussian Mersenne MG,79 = (1+i)^79-1, so that the mass would be 32 times the mass of the ordinary Z boson! It seems that we are at the verge of the verification of one key prediction of TGD.
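
The deviations and the mass prediction quoted above are easy to check numerically. A minimal sketch; the significance estimates treat statistical and systematic errors as independent Gaussians and ignore the correlations used in the actual combinations, so they are indicative only:

```python
from math import hypot

def pull(measured, err_m, sm, err_sm):
    """Deviation of a measurement from its SM prediction in standard deviations."""
    return (measured - sm) / hypot(err_m, err_sm)

# R(D*) and R(D): statistical and systematic errors added in quadrature
print(pull(0.316, hypot(0.016, 0.010), 0.252, 0.003))    # ~3.4 sigma
print(pull(0.397, hypot(0.040, 0.028), 0.300, 0.008))    # ~2.0 sigma
# (the quoted ~4.0 sigma refers to the combination of both ratios)

# R(K): negative pull, measurement below the SM prediction
print(pull(0.745, hypot(0.090, 0.038), 1.0003, 0.0001))  # ~-2.6 sigma

# Z' mass from p-adic scaling: 32 times the ordinary Z boson mass
print(32 * 91.19, "GeV")                                 # ~2918 GeV ~ 2.9 TeV
```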

For background see the chapter New Physics predicted by TGD: I of "p-Adic length scale hypothesis".

For a summary of earlier postings see Latest progress in TGD.


Friday, April 21, 2017

Getting even more quantitative about CP violation

The twistor lift of TGD forces one to introduce the analog of the Kähler form for M4, call it J. J is a covariantly constant self-dual 2-form, whose square is the negative of the metric. There is a moduli space for these Kähler forms parametrized by the direction of the constant and parallel magnetic and electric fields defined by J. J partially characterizes the causal diamond (CD) - hence the notation J(CD) - and can be interpreted as a geometric correlate for fixing the quantization axes of energy (rest system) and spin.

The Kähler form defines a classical U(1) gauge field, and there are excellent reasons to expect that it gives rise to U(1) quanta coupling to the difference B-L of baryon and lepton number. There is a coupling strength α1 associated with this interaction. The first guess that it could be just the Kähler coupling strength leads to unphysical predictions: α1 must be much smaller. Here I do not yet completely understand the situation. One can however check whether the simplest guess is consistent with the empirical inputs from the CP breaking of mesons and from matter-antimatter asymmetry. This turns out to be the case.

One must specify the value of α1 and the scaling factor transforming J(CD), which has dimension length squared as a tensor square root of the metric, to a dimensionless U(1) gauge field F = J(CD)/S. This leads to a series of questions.

How to fix the scaling parameter S?

  1. The scaling parameter relating J(CD) and F is fixed by flux quantization implying that the flux of J(CD) equals the area of the sphere S2 of the twistor space M4× S2. The gauge field is obtained as F = J/S, where S = 4π R^2(S2) is the area of S2.

  2. Note that in Minkowski coordinates the length dimension is by convention shifted from the metric to the linear Minkowski coordinates, so that the U(1) magnetic field B1 has the dimension of inverse length squared and corresponds to J(CD)/SL^2, where L is naturally taken to be the size scale of CD defining the unit length in Minkowski coordinates. The U(1) magnetic flux would be the signed area using L^2 as a unit.

How does R(S2) relate to the Planck length lP? lP is either the radius lP = R(S2) of the twistor sphere S2 of the twistor space T = M4× S2 or the circumference lP = 2π R(S2) of a geodesic of S2. The circumference is the more natural identification since it can be measured within the Riemann geometry of S2, whereas the operational definition of the radius requires an imbedding to Euclidean 3-space.

How can one fix the value of the U(1) coupling strength α1? As a guideline one can use CP breaking in the K and B meson systems and the parameter characterizing matter-antimatter asymmetry.

  1. The recent experimental estimate for the so-called Jarlskog parameter characterizing the CP breaking in the kaon system is J ≈ 3.0×10^-5. For B mesons CP breaking is about 50 times larger than for kaons, and it is clear that the Jarlskog invariant does not distinguish between different mesons, so that it is better to talk about orders of magnitude only.

  2. Matter-antimatter asymmetry is characterized by the number r = nB/nγ ∼ 10^-10 telling the ratio of the baryon density after annihilation to the original density. There is about one baryon per 10 billion photons of CMB left in the recent Universe.

Consider now the identification of α1.
  1. Since the action is obtained by dimensional reduction from the 6-D Kähler action, one could argue that α1 = αK. This proposal leads to unphysical predictions in atomic physics since the neutron-electron U(1) interaction would scale up binding energies dramatically.

    The U(1) part of the action can however be regarded as a small perturbation characterized by the parameter ε = R^2(S2)/R^2(CP2), the ratio of the areas of the twistor spheres of T(M4) and T(CP2). One can argue that since the relative magnitude of the U(1) term and the ordinary Kähler action is given by ε, one has α1 = ε×αK, so that the coupling constant evolutions of α1 and αK would be identical.

  2. ε indeed serves in the role of a coupling strength at the classical level. αK disappears from the classical field equations at the space-time level and appears only in the conditions for the super-symplectic algebra, but ε appears in the field equations since the Kähler form J resp. the CP2 Kähler form is proportional to R^2(S2) resp. R^2(CP2) times the corresponding U(1) gauge field. R(S2) appears in the definition of the 2-bein for S2 and therefore in the modified gamma matrices and the modified Dirac equation. Therefore ε^(1/2) = R(S2)/R(CP2) appears in the modified Dirac equation as required by the CP breaking manifesting itself in the CKM matrix.

    Number theoretical universality (NTU) for the field equations in the regions, where the volume term and Kähler action couple to each other, demands that ε and ε^(1/2) are rational numbers, hopefully as simple as possible. Otherwise there is no hope for extremals with the parameters of the polynomials appearing in the solution in an arbitrary extension of rationals, and NTU is lost. Transcendental values of ε are definitely excluded. The most stringent condition ε = 1 is also unphysical. ε = 2^(-2r) is favoured number theoretically.

Concerning the estimate for ε it is best to use the constraints coming from p-adic mass calculations.
  1. p-Adic mass calculations predict electron mass as

    me = (hbar/R(CP2))×(5+Y)^(1/2) .

    Expressing me in terms of the Planck mass mP and assuming Y=0 (Y ∈ (0,1)) gives an estimate for lP/R(CP2) as

    lP/R(CP2) ≈ 2.0×10^-4 .

  2. From lP = 2π R(S2) one obtains an estimate for ε, α1, and g1 = (4πα1)^(1/2), assuming αK ≈ α ≈ 1/137 in the electron length scale.

    ε = 2^-30 ≈ 1.0×10^-9 ,

    α1 = ε×αK ≈ 6.8×10^-12 ,

    g1 = (4πα1)^(1/2) ≈ 9.24×10^-6 .

There are two options corresponding to lP = R(S2) and lP = 2π R(S2). Only the length of a geodesic of S2 has meaning in the Riemann geometry of S2, whereas the radius of S2 has operational meaning only if S2 is imbedded to E3. Hence lP = 2π R(S2) is the more plausible option.

For ε = 2^-30 the value of lP^2/R^2(CP2) is lP^2/R^2(CP2) = (2π)^2 × R^2(S2)/R^2(CP2) ≈ 3.7×10^-8. lP/R(S2) would be a transcendental number, but since it would not be a fundamental constant and would appear only at the QFT-GRT limit of TGD, this would not be a problem.

One can make order of magnitude estimates for the Jarlskog parameter J and the fraction r = n(B)/n(γ). Here it is however not clear whether one should use ε or α1 as the basis of the estimate. A numerical check of these estimates is sketched after the list below.

  1. The estimate based on ε gives

    J ∼ ε^(1/2) ≈ 3.2×10^-5 ,

    r ∼ ε ≈ 1.0×10^-9 .

    The estimate for J happens to be very near to the recent experimental value J ≈ 3.0×10^-5. The estimate for r is an order of magnitude larger than the empirical value.

  2. The estimate based on α1 gives


    J ∼ g1 ≈ 0.92×10^-5 ,

    r ∼ α1 ≈ 0.68×10^-11 .

    The estimate for J is excellent, but the estimate for r is by more than an order of magnitude smaller than the empirical value. One explanation is that αK has a discrete coupling constant evolution and increases in short scales, and could have been considerably larger in the scale characterizing the situation in which the matter-antimatter asymmetry was generated.
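
A minimal numerical sketch collecting the arithmetic of the above estimates; all inputs are from the text:

```python
from math import pi, sqrt

alpha_K = 1/137            # Kahler coupling ~ fine structure constant in electron scale
eps = 2**-30               # ratio R^2(S2)/R^2(CP2), ~1.0e-9
alpha_1 = eps * alpha_K    # U(1) coupling strength, ~6.8e-12
g_1 = sqrt(4*pi*alpha_1)   # U(1) gauge coupling, ~9.24e-6

print(alpha_1, g_1)
print((2*pi)**2 * eps)     # l_P^2/R^2(CP2) ~ 3.7e-8, so l_P/R(CP2) ~ 2.0e-4

# Order-of-magnitude estimates for the Jarlskog parameter J and r = n_B/n_gamma
print(sqrt(eps), eps)      # J ~ 3.1e-5 (vs. measured ~3.0e-5), r ~ 1.0e-9
print(g_1, alpha_1)        # J ~ 0.92e-5, r ~ 0.68e-11 (vs. empirical ~1e-10)
```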

Atomic nuclei have baryon number equal to the sum B = Z+N of proton and neutron numbers, so that a neutral atom has B-L = N. Only the hydrogen atom would be U(1) neutral. The dramatic prediction of the U(1) force is that neutrinos might not be so weakly interacting particles as has been thought. If the quanta of the U(1) force are not massive, a new long range force is in question. U(1) quanta could become massive via U(1) superconductivity causing a Meissner effect. As found, the U(1) part of the action can be regarded as a small perturbation characterized by the parameter ε = R^2(S2)/R^2(CP2), giving α1 = ε×αK.

The quantal U(1) force must also be consistent with atomic physics. The value of the parameter α1 consistent with the size of the CP breaking of K mesons and with matter-antimatter asymmetry is α1 = ε×αK = 2^-30×αK.

  1. Electrons and baryons would have an attractive interaction, which effectively transforms the em charge Z of the atom to Zeff = rZ, r = 1+(N/Z)ε1, where ε1 = α1/α = ε×αK/α ≈ ε for αK ≈ α predicted to hold true in the electron length scale. The parameter

    s = (1+(N/Z)ε)^2 - 1 = 2(N/Z)ε + (N/Z)^2ε^2

    would characterize the isotope dependent relative shift of the binding energy scale.

    The comparison of the binding energies of hydrogen isotopes could provide stringent bounds on the value of α1. For the lP = 2π R(S2) option one would have α1 = 2^-30×αK ≈ 0.68×10^-11 and s ≈ 1.4×10^-10. s is an order of magnitude smaller than the α^4 ≈ 2.9×10^-9 corrections from QED (see this). The predicted differences between the binding energy scales of isotopes of hydrogen might allow one to test the proposal.



  2. The charge B-L = N of a neutral atom would be neutralized by the neutrinos of the cosmic background. Could this occur even at the level of a single atom, or does one have a plasma like state? The ground state binding energy of neutrino atoms would be α1^2mν/2 ∼ 10^-24 eV for mν = 0.1 eV! This is very many orders of magnitude below the thermal energy of the cosmic neutrino background, estimated to be about 1.95×10^-4 eV (see this). The Bohr radius would be hbar/(α1mν) ∼ 10^6 meters, of the same order of magnitude as the Earth's radius. Matter should be U(1) plasma. A U(1) superconductor would be a second option.
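
A quick numerical check of the neutrino "atom" estimates in the last item, assuming mν = 0.1 eV as in the text (a sketch with standard constants, not a prediction):

```python
# Hydrogen-like scaling of Bohr binding energy and radius with coupling alpha_1
alpha_1 = 2**-30 / 137              # U(1) coupling from the text, ~6.8e-12
m_nu = 0.1                          # assumed neutrino mass in eV
hbar_c = 197.327e-9                 # hbar*c in eV*m

E_bind = alpha_1**2 * m_nu / 2      # ground state binding energy in eV
a_bohr = hbar_c / (alpha_1 * m_nu)  # Bohr radius in meters

print(E_bind)   # ~2.3e-24 eV, far below the ~2e-4 eV thermal energy
print(a_bohr)   # ~3e5 m, of the order of the Earth-radius scale quoted above
```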

See the new chapter Breaking of CP, P, and T in cosmological scales in TGD Universe of "Physics in Many-Sheeted Space-time" or the article with the same title.

For a summary of earlier postings see Latest progress in TGD.


Thursday, April 20, 2017

Breaking of CP, P, and T in cosmological scales in TGD Universe

The twistor lift of TGD forces the analog of the Kähler form for M4. The covariantly constant self-dual Kähler form J(CD) depends on the causal diamond of M4 and defines a rest frame and spin quantization axis. This implies a violation of CP, P, and T. By introducing a moduli space for the Kähler forms one avoids the loss of Poincare invariance. The natural question is whether J(CD) could relate to the CP breaking for K and B type mesons, to matter-antimatter asymmetry, and to the large scale parity breaking suggested by CMB data.

The simplest guess for the coupling strength of the U(1) interaction associated with J(CD) predicts a correct order of magnitude for the CP violation of K mesons and for the antimatter asymmetry, and inspires a more detailed discussion. A general mechanism for the generation of matter asymmetry is proposed, and a model for the formation of disk and elliptic galaxies is considered. The matter-antimatter asymmetry would be apparent in the sense that the CP asymmetry would force matter-antimatter separation: antimatter would reside as dark matter (in the TGD sense) inside magnetic flux tubes and matter outside them. Also the angular momenta of dark matter and matter would compensate each other.

See the new chapter Breaking of CP, P, and T in cosmological scales in TGD Universe of "Physics in Many-Sheeted Space-time" or the article with the same title.

For a summary of earlier postings see Latest progress in TGD.


Monday, April 17, 2017

How the QFT-GRT limit of TGD differs from QFT and GRT?

Yesterday evening I got an interesting idea related to both the definition and the conservation of gauge charges in non-Abelian theories. The idea first popped up in the QCD context but immediately generalized to the electroweak and gravitational sectors. It might not be entirely correct: I have not yet checked the calculations.

QCD sector

I have been working with possible TGD counterparts of the so-called chiral magnetic effect (CME) and chiral separation effect (CSE) proposed in QCD to describe observations at LHC and RHIC suggesting relatively large P and CP violations in hadronic physics associated with the de-confinement phase transition (see the recent article About parity violation in hadron physics).

The QCD based model for CME and CSE is not convincing as such. The model assumes that the theta parameter of QCD is non-vanishing and position dependent. It is however known that the theta parameter is extremely small and seems to be zero: this is the so-called strong CP problem of QCD caused by the possibility of instantons. The axion hypothesis would make θ(x) a dynamical field and the θ parameter would be eliminated from the theory. The axion has however not been found: various candidates have been gradually eliminated from consideration!

What is the situation in TGD? In TGD instantons are impossible at the fundamental space-time level. This is due to the induced space-time concept. What does this mean for the QFT limit of TGD?

  1. Obviously one must add to the action density a constraint term equal to a Lagrange multiplier θ times the instanton density. If θ is constant, the variation with respect to it gives just the vanishing of the instanton number.

  2. A stronger condition is local and states that the instanton density vanishes. This differs from the axion option in that there is no kinetic term for θ, so that θ does not propagate and does not appear in propagators.

Consider the latter option in more detail.
  1. The variation with respect to θ(x) gives the condition that the instanton density, rather than only the instanton number, vanishes for the allowed field configurations. This guarantees that the axial current, which has the instanton term as its divergence, is conserved if fermions are massless. There is no breaking of chiral symmetry at the massless limit and no chiral anomaly, which is mathematically problematic.

  2. The field equations are however changed. They reduce to the statement that the covariant divergence of the YM current - the sum of bosonic and fermionic contributions - equals the covariant divergence of the color current associated with the constraint term. The classical gauge potentials are affected by this source term, and they in turn affect the fermionic dynamics via the Dirac equation. Therefore also the perturbation theory is affected.

  3. The following is however still uncertain: this term seems to have a vanishing ordinary total divergence by the Bianchi identities - one has a topological color current proportional to the contraction of the gradient of θ and the gauge field with the 4-D permutation symbol! I have however not yet checked the details.

    If this is really true, then the sum of the fermionic and bosonic gauge currents, not conserved in the usual sense, equals a topological color current conserved in the usual sense! This would give conserved total color charges as topological charges - in the spirit of the "Topological" in TGD! This would also solve a problem of non-abelian gauge theories usually put under the rug: the total gauge current is not conserved and a rigorous definition of gauge charges is lost.

  4. What would the equations of motion of ordinary QCD mean in this framework? First of all, the color magnetic and electric fields can be said to be orthogonal with respect to the natural inner product. One can also have solutions for which θ is constant. This case gives just ordinary QCD but without instantons and strong CP breaking. The total color current vanishes and one would have local color confinement classically! This is true irrespective of whether the ordinary divergence of the color currents vanishes.

  5. This also allows one to understand CME and CSE believed to occur in the de-confinement phase transition. Now regions with non-constant θ(x) but vanishing instanton density are generated. The sum of the conserved color charges for these regions - droplets of quark-gluon plasma - however vanishes by the conservation of color charges. One would indeed have non-vanishing local color charge densities and de-confinement in accordance with physical intuition and experimental evidence. This could occur in proton-nucleus and nucleus-nucleus collisions at both RHIC and LHC and give rise to the CME and CSE effects. This picture is however essentially TGD based. QCD in standard form does not give it, and in QCD there is no motivation to demand that the instanton density vanishes. The structure of the constrained action is sketched below.
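
To make the structure of the proposal explicit, here is my reading of it in standard Yang-Mills notation (a sketch only, up to sign and normalization conventions, and not a checked result):

```latex
S = S_{YM} - \int d^4x\, \theta(x)\, \rho_I\,, \qquad
\rho_I = \frac{1}{32\pi^2}\,\epsilon^{\mu\nu\rho\sigma} F^a_{\mu\nu} F^a_{\rho\sigma}\,.
```

Variation with respect to θ gives the local constraint ρI = 0. Variation with respect to the gauge potential, using the Bianchi identity ε^{μνρσ}DνF^a_{ρσ} = 0, gives

```latex
D_\nu F^{a\,\mu\nu} = j^{a\,\mu}_{F}
 + \frac{1}{8\pi^2}\,\epsilon^{\mu\nu\rho\sigma}\,\partial_\nu\theta\, F^a_{\rho\sigma}\,,
```

so the source term on the right is the topological color current of item 3; whether its ordinary divergence vanishes in the non-abelian case is exactly the point left open above.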

Electroweak sector

The analog of θ(x) is present at the QFT limit of TGD also in the electroweak sector, since instantons must be absent also now. One would have conserved total electroweak currents - also the Abelian U(1) current - reducing to topological currents, which vanish for θ(x) = constant but are non-vanishing otherwise. In TGD the conservation of em charge and possibly also Z0 charge is understood if the strong form of holography (SH) is accepted: it implies that only the electromagnetic and possibly also the Z0 current are conserved and are assignable to the string world sheets carrying fermions. At the QFT limit one would obtain a reduction of the electroweak currents to topological currents if the above argument is correct. The proper understanding of W currents at the fundamental level is however still lacking.

It is however not necessary to demand the vanishing of the instanton term for the U(1) factor, and the chiral anomaly for the pion suggests that one cannot demand this. Also the TGD inspired model for so-called leptohadrons is based on a non-vanishing electromagnetic instanton density. In TGD also the M4 Kähler form J(CD) is present and the same would apply to it. If one applies the condition to it, empty Minkowski space ceases to be an extremal.

Gravitational sector

Could this generalize also to the GRT limit of TGD? In GRT momentum conservation is lost - this is one of the basic problems of GRT put under the rug. At the fundamental level Poincare charges are conserved in TGD by the hypothesis that space-time is a 4-surface in M4 × CP2. Space-time symmetries are lifted to those of M4.

What happens at the GRT limit of TGD? The proposal has been that the covariant conservation of the energy momentum tensor is a remnant of Poincare symmetry. But could one now obtain also ordinary conservation of 4-momentum currents by adding to the standard Einstein-YM action a Lagrange multiplier term guaranteeing that the gravitational analog of the instanton term vanishes?

  1. First objection: this makes sense only if the vielbein is defined in M4 coordinates, applying only at the GRT limit, for which the space-time surface is representable as a graph of a map from M4 to CP2.

  2. Second objection: if the metric tensor is regarded as the primary dynamical variable, one obtains a current which is a symmetric 2-tensor like T and G. This cannot give rise to conserved charges.

  3. Third objection: taking the vielbein vectors eAμ as the fundamental variables could give rise to a conserved vector with vanishing covariant divergence. Could this give rise to conserved currents labelled by A and having an interpretation as momentum components? This does not work. Since eAμ is only covariantly constant, one does not obtain a genuine conservation law except at the limit of empty Minkowski space, since in this case the vielbein vectors can be taken to be constant.

Despite this the addition of the constraint term changes the interpretation of GRT profoundly.
  1. The curvature tensor is indeed essentially a gauge field of the tangent space rotation group when contracted suitably with two vielbein vectors eAμ, and the instanton term is formally completely analogous to that in gauge theory.

  2. The situation is now more complex than in gauge theories due to the fact that second derivatives of the metric and - as it seems - also of the vielbein vectors are involved. They however appear linearly and do not give third order derivatives in Einstein's equations. Since the physics should not depend on whether one uses the metric or the vielbein as dynamical variables, the conjecture is that the variation states that the contraction of T-kG with a vielbein vector equals the topological current coming from the instanton term and proportional to the gradient of θ:

    (T-kG)μν eAν =j.

    The conserved current j would be the contraction of the instanton term with eAμ and with the gradient of θ, suitably covariantized. The variation of the action with respect to the gradient of eAμ would give it. The resulting current has only a vanishing covariant divergence, to which the vielbein contributes.

The multiplier term guaranteing the vanishing of the gravitational instanton density would have however highly non-trivial and positive consequences.
  1. The covariantly conserved energy momentum current would be a sum of parts corresponding to matter and the gravitational field, unlike in GRT, where the field equations say that the energy momentum tensors of the gravitational field and matter are identical. This conforms with the TGD view at the level of many-sheeted space-time.

  2. In GRT one has the problem that in the absence of matter (pure gravitational radiation) one obtains G=0 and thus a vacuum solution. This follows also from conformal invariance for solutions representing gravitational radiation. Thanks to LIGO we however now know that gravitational radiation carries energy! The situation at the TGD limit would be different: at the QFT limit one can have classical gravitational radiation with non-vanishing energy momentum density thanks to the vanishing of the instanton term.

See the article About parity violation in hadron physics.

For background see the chapter New Physics Predicted by TGD: Part I.

For a summary of earlier postings see Latest progress in TGD.


Thursday, April 13, 2017

About parity violation in hadron physics

Strong interactions involve a small CP violation revealing itself in the physics of neutral kaons and B mesons. An interesting question is whether CP violation and also P violation could be seen also in hadronic reactions. QCD allows strong CP violation due to instantons. No strong CP breaking is observed, and the Peccei-Quinn mechanism, involving the axion as a new but not yet detected particle, is hoped to save the situation.

The de-confinement phase transition is believed to occur in heavy nucleus collisions and to be accompanied by a phase transition in which chiral symmetry is restored. It has been conjectured that this phase transition involves a large P violation assignable to the so-called chiral magnetic effect (CME) involving separation of charge along the axis of the magnetic field generated in the collision, the chiral separation effect (CSE), and the chiral magnetic wave (CMW). There is some evidence for CME and CSE in heavy nucleus collisions at RHIC and LHC. There is however also evidence for CME in proton-nucleus collisions, where it should not occur.

In TGD instantons and strong CP violation are absent at the fundamental level. The twistor lift of TGD however predicts weak CP, T, and P violations in all scales, and it is tempting to model matter-antimatter asymmetry, the generation of the preferred arrow of time, and the parity breaking suggested by CMB anomalies in terms of these violations. The reason for the violation is the analog of a self-dual covariantly constant Kähler form J(CD) for causal diamonds CD ⊂ M4 defining parallel constant electric and magnetic fields. Lorentz invariance is not lost since one has a moduli space containing the Lorentz boosts of CD and J(CD). J(CD) induced to the space-time surface gives rise to a new U(1) gauge field coupling to fermion number. The correct order of magnitude for the violation for K and B mesons is predicted under natural assumptions. In this article the possible TGD counterparts of CME, CSE, and CMW are considered: the motivation is the presence of parallel E and B essential for CME.

See the article About parity violation in hadron physics.

For background see the chapter New Physics Predicted by TGD: Part I.

For a summary of earlier postings see Latest progress in TGD.


Saturday, April 08, 2017

Why would primes near powers of two (or small primes) be important?

The earlier posting What could be the role of complexity theory in TGD? was an abstract of an article about how complexity theory based thinking might help in attempts to understand the emergence of complexity in TGD. The key idea is that evolution corresponds to an increasing complexity of the Galois group for the extension of rationals inducing also the extension used at the space-time and Hilbert space levels. This leads to a rather concrete vision about what happens, and the basic notions of complexity theory help to articulate this vision more concretely.

New insights emerge also about how preferred p-adic primes, identified as ramified primes of the extension, arise. The picture suggests a strong resemblance with the evolution of the genetic code, with conserved genes having ramified primes as their analogs. Category theoretic thinking in turn suggests that the positions of fermions at partonic 2-surfaces correspond to singularities of the Galois covering, so that the number of sheets of the covering is not maximal, and that the singularities have as their analogs what happens for ramified primes.

The p-adic length scale hypothesis states that physically preferred p-adic primes come as primes near prime powers of two and possibly also of other small primes. Does this have some analog in complexity theory, period doubling, and the super-stability associated with period doublings?

Also ramified primes characterize the extension of rationals and would naturally define preferred primes for a given extension. For quadratic extensions their computation is simple, as the sketch below illustrates.
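
As a concrete illustration (my own, not from the text): for a quadratic extension Q(√d), d squarefree, the ramified primes are exactly the primes dividing the discriminant of the field, which is d for d ≡ 1 (mod 4) and 4d otherwise. A minimal sketch:

```python
def prime_factors(n):
    """Distinct prime factors of an integer by trial division."""
    n, p, out = abs(n), 2, []
    while p * p <= n:
        while n % p == 0:
            out.append(p)
            n //= p
        p += 1
    if n > 1:
        out.append(n)
    return sorted(set(out))

def ramified_primes_quadratic(d):
    """Ramified primes of Q(sqrt(d)) for squarefree d: primes dividing the discriminant."""
    disc = d if d % 4 == 1 else 4 * d
    return prime_factors(disc)

print(ramified_primes_quadratic(5))    # [5]       (discriminant 5)
print(ramified_primes_quadratic(2))    # [2]       (discriminant 8)
print(ramified_primes_quadratic(15))   # [2, 3, 5] (discriminant 60)
```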

  1. Any rational prime p can be decomposed into a product of powers Pi^ki of primes Pi of the extension: p = ∏i Pi^ki, ∑ ki = n. If one has ki ≠ 1 for some i, one has a ramified prime. The prime p is Galois invariant, but a ramified prime decomposes into lower-dimensional orbits of the Galois group formed by the subsets of the Pi^ki with the same index ki. One might say that ramified primes are more structured and informative than un-ramified ones. This could mean also representative capacity.

  2. Ramification has as its analog criticality leading to degenerate roots of a polynomial, or the lowering of the rank of the matrix defined by the second derivatives of a potential function depending on parameters. The graph of the potential function in the space defined by its arguments and parameters is an n-sheeted singular covering of this space, since the potential has several extrema for given parameters. At the boundaries of the n-sheeted structure some sheets degenerate and the dimension is reduced locally. The cusp catastrophe with 3 sheets in the catastrophe region is a standard example of this.

    Ramification also brings in mind the super-stability of an n-cycle in the iteration of functions, meaning that the derivative of the n:th iterate f^n(x) = f(f(...f(x)...)) vanishes at the cycle. Super-stability occurs for the iteration of the function f = ax+bx^2 for a=0 (see the iteration sketch after this list).

  3. I have considered the possibility that the n-sheeted coverings of the space-time surface are singular in that the sheets coincide at the ends of the space-time surface or at some partonic 2-surfaces. One can also consider the possibility that only some sheets or partonic 2-surfaces coincide.

    The extreme option is that the singularities occur only at the points representing fermions at partonic 2-surfaces. Fermions could in this case correspond to different ramified primes. The graph of w = z^(1/2), defining a 2-fold covering of the complex plane with a singularity at the origin, gives an idea about what would be involved. This option looks the most attractive one and conforms with the idea that the singularities of coverings in general correspond to isolated points. It also conforms with the hypothesis that fermions are labelled by p-adic primes, and the connection between ramifications and Galois singularities could justify this hypothesis.

  4. Category theorists love structural similarities and might ask whether there might be a morphism mapping these singularities of the space-time surfaces as Galois coverings to the ramified primes, so that sheets would correspond to the primes of the extension appearing in the decomposition of a prime into primes of the extension.

    Could the singularities of the covering correspond to the ramification of the primes of the extension? Could this degeneracy for a given extension be coded by a ramified prime? Could the quantum criticality of TGD favour ramified primes and singularities at the locations of fermions at partonic 2-surfaces?

    Could the fundamental fermions at the partonic 2-surfaces be quite generally localized at the singularities of the covering space, serving as markings for them? This also conforms with the assumption that fermions with the standard value of Planck constant correspond to 2-sheeted coverings.

  5. What could the ramification for a point of a cognitive representation mean algebraically? The covering orbit of a point is obtained as an orbit of the Galois group. For maximal singularity the Galois orbit reduces to a single point, so that the point is rational. Maximally ramified fermions would be located at rational points of the extension. For non-maximal ramifications the number of sheets would be reduced but there would be several of them, and one can ask whether only maximally ramified primes are realized. Could this relate at a deeper level to the fact that only rational numbers can be represented exactly in computers?

  6. Can one imagine a physical correlate for the singular points of the space-time sheets at the ends of the space-time surface? Quantum criticality as an analog of the criticality associated with super-stable cycles in chaos theory could be in question. Could the fusion of the space-time sheets correspond to a phenomenon analogous to Bose-Einstein condensation? Most naturally the condensate would correspond to a fractionization of fermion number allowing one to put n fermions at a point with the same M4 projection. The largest condensate would correspond to a maximal ramification p = Pi^n.
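
The super-stability mentioned in item 2 is easy to demonstrate numerically: a cycle of the iteration x → f(x) is super-stable when the product of f'(x) over the cycle points vanishes. A minimal sketch using the logistic map f(x) = rx(1-x), whose 2^k cycles become super-stable at specific parameter values (the choice of map and the parameter values are mine, not from the text):

```python
def cycle_multiplier(r, x0, n, burn_in=1000):
    """Derivative of the n:th iterate of the logistic map along the attractor:
    the product of f'(x) = r*(1-2x) over n successive points. Zero => super-stable."""
    f = lambda x: r * x * (1 - x)
    x = x0
    for _ in range(burn_in):          # relax onto the attractor first
        x = f(x)
    mult = 1.0
    for _ in range(n):
        mult *= r * (1 - 2 * x)
        x = f(x)
    return mult

# At r = 2 the fixed point x = 1/2 is super-stable (f'(1/2) = 0);
# at r = 1 + sqrt(5) ~ 3.2361 the 2-cycle contains x = 1/2 and is super-stable.
print(cycle_multiplier(2.0, 0.3, 1))           # ~0
print(cycle_multiplier(1 + 5**0.5, 0.3, 2))    # ~0
```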

Why would ramified primes tend to be primes near powers of two or of other small primes? An attempt to answer this question forces one to ask what it means to be a survivor in number theoretical evolution. One can imagine two kinds of explanations.
  1. Some extensions are winners in the number theoretic evolution, and so are the ramified primes assignable to them. These extensions would be especially stable against further evolution, representing analogs of evolutionary fossils. As proposed earlier, they could also allow exceptionally large cognitive representations, that is, a large number of points of the real space-time surface in the extension.

  2. Certain primes as ramified primes are winners in the sense that further extensions conserve the property of being ramified.

    1. The first possibility is that further evolution preserves these ramified primes and only adds new ramified primes. The preferred primes would be like genes, which are conserved during biological evolution. What kind of extensions of an existing extension preserve the already existing ramified primes? One could naively think that in an extension of the extension each Pi decomposes further into primes Qik of the larger extension as Pi = ∏k Qik^kik, so that the ramified primes would remain ramified primes.

    2. Surviving ramified primes could be associated with an exceptionally large number of extensions and thus with their Galois groups. In other words, some primes would have a strong tendency to ramify. They would be at criticality with respect to ramification, critical in the sense that multiple roots appear.

      Can one find any support for this purely TGD inspired conjecture from the literature? I am not a number theorist, so I can only search the web and try to understand what I find. A web search led to a thesis (see this) studying Galois groups with prescribed ramified primes.

      The thesis contained the statement that not every finite group can appear as a Galois group with prescribed ramification. The second statement was that as the number and size of the ramified primes increases, more Galois groups become possible for the given pre-determined ramified primes. This would conform with the conjecture. The number and size of the ramified primes would be a measure for the complexity of the system, and both would increase with the size of the system.

    3. Of course, both mechanisms could be involved.


Why would ramified primes near powers of 2 be winners? Do they correspond to ramified primes associated with especially many extensions, and are they conserved in evolution by subsequent extensions of the Galois group? But why? This brings in mind the fact that n=2^k cycles become super-stable and thus critical at certain critical values of the control parameter. Note also that ramified primes are analogous to prime cycles in iteration. The analogy with the evolution of the genome is also strongly suggestive.

For details see the chapter Unified Number Theoretic Vision or the article What could be the role of complexity theory in TGD?.

For a summary of earlier postings see Latest progress in TGD.


heff/h=n hypothesis and Galois group

The previous posting What could be the role of complexity theory in TGD? was an abstract of an article about how complexity theory based thinking might help in attempts to understand the emergence of complexity in TGD. The key idea is that evolution corresponds to an increasing complexity of the Galois group for the extension of rationals inducing also the extension used at the space-time and Hilbert space levels. This leads to a rather concrete vision about what happens, and the basic notions of complexity theory help to articulate this vision more concretely.

I ended up with a rather interesting information theoretic interpretation of the effective Planck constant assigned to flux tubes mediating gravitational/electromagnetic/etc. interactions. The real surprise was that this leads to a proposal for how mono-cellulars and multicellulars differ! The emergence of multicellulars would have meant the emergence of systems with mass larger than the critical mass making gravitational quantum coherence possible. Penrose's vision about the role of gravitation would be correct, although Orch-OR as such has little to do with reality!

The natural hypothesis is that heff/h=n equals the order of the Galois group in the case that it gives the number of sheets of the covering assignable to the space-time surfaces. The stronger hypothesis is that heff/h=n is associated with flux tubes and is proportional to the quantum numbers associated with their ends.

  1. The basic idea is that Mother Nature is theoretician friendly. As perturbation theory breaks down, the interaction strength, expressible as a product of appropriate charges divided by Planck constant, is reduced in the phase transition hbar → hbareff.

  2. In the case of gravitation GMm/hbar → GMm/hbareff. Equivalence Principle is satisfied if one has hbareff = hbargr = GMm/v0, where v0 is a parameter with dimensions of velocity and of the order of some rotation velocity associated with the system. If the masses move with relativistic velocities, the interaction strength is proportional to the inner product of the four-momenta and therefore to the Lorentz boost factors for the energies in the rest system of the entire system. In this case one must assume quantization of energies to satisfy the constraint, or a compensating reduction of v0. The interaction strength becomes equal to β0 = v0/c, having no dependence on the masses: this brings in mind the universality associated with quantum criticality.

  3. The hypothesis applies to all interactions. For electromagnetism one would have the replacements Z1Z2α → Z1Z2α×(h/hem) and hbarem = Z1Z2α×hbar/β0, giving a universal interaction strength. In the case of color interactions the phase transition would lead to the emergence of hadrons, and it could be that inside hadrons the valence quarks have heff/h=n>1. Also here one could consider a generalization in which the product of masses is replaced with the inner product of four-momenta; in this case quantization of energy at either or both ends is required. For astrophysical energies one would have an effective energy continuum. A numerical sketch of the gravitational case follows below.
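
A minimal numerical sketch of the gravitational case; the choices M = Earth, m = proton, and β0 = 10^-3 are my illustrative assumptions, not values from the text:

```python
G    = 6.674e-11      # m^3 kg^-1 s^-2
hbar = 1.0546e-34     # J s
c    = 2.998e8        # m/s

M, m  = 5.972e24, 1.673e-27   # assumed example pair: Earth and proton masses (kg)
beta0 = 1e-3                  # assumed v0/c

# hbar_gr/hbar = GMm/(v0*hbar): the number of sheets of the covering
n = G * M * m / (beta0 * c * hbar)
print(f"{n:.2e}")             # ~2e16 -> 'huge' Galois coverings, as stated in the text

# Gravitational Compton length GM/v0 (c=1 units -> GM/(beta0*c^2) in SI), independent of m
lambda_gr = G * M / (beta0 * c**2)
print(lambda_gr, "m")         # ~4.4 m for these assumptions
```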

This hypothesis suggests the interpretation of heff/h=n as either the dimension of the extension or the order of its Galois group. If the extensions have dimensions n1 and n2, then the composite system would be an n2-dimensional extension of an n1-dimensional extension and have dimension n1×n2. This could also be true for the orders of the Galois groups. This would be the case if the Galois group of the entire system is generated by G1 and G2 with the elements of G1 and G2 assumed to commute, giving G1×G2. Consider gravitation as an example.
  1. The order of the Galois group should coincide with hbareff/hbar = n = hbargr/hbar = GMm/(v0×hbar). The transition occurs only if the value of hbargr/hbar is larger than one. One can say that the order of the Galois group is proportional to the product of the masses, roughly in units of Planck mass. Rather large extensions are involved and the number of sheets in the Galois covering is huge.

    Note that it is difficult to say how large Planck constants are actually involved, since classical gravitational forces are additive and by Equivalence Principle the same potential is obtained as a sum of potentials when the masses are split into pieces. Also the gravitational Compton length λgr = GM/v0 does not depend on m at all, so that all particles have the same λgr = GM/v0 irrespective of mass (note that v0 is expressed using units with c=1).

    The maximally incoherent situation would correspond to the ordinary Planck constant and the usual view about the gravitational interaction between particles. Extreme quantum coherence would mean that both M and m behave as single quantum units. In many-sheeted space-time this could be understood in terms of a picture based on flux tubes. The interpretation of the degree of coherence is in terms of the flux tube connections mediating the gravitational flux.

  2. hgr/h would be the order of the Galois group, and there is a temptation to associate with the product of masses the product n = n1n2 of the orders ni of the Galois groups associated with the masses M and m. The order of the Galois group for both masses would have as its unit mP×β0^(1/2), β0 = v0/c, rather than the Planck mass mP. For instance, the reduction of the Galois group of the entire system to a product of the Galois groups of its parts would occur if the Galois groups for M and m are cyclic groups with orders having no common prime factors, but not generally.

    The problem is that the order of the Galois group associated with m would be smaller than 1 for masses m < mP×β0^(1/2). Planck mass is about 1.3×10^19 proton masses and corresponds to a blob of water with size scale 10^-4 meters - the size scale of a large neuron - so that only above this scale would gravitational quantum coherence be possible (see the numerical sketch after this list). For β0 < 1 it would seem that even in the case of large neurons one must have more than one neuron. Maybe pyramidal neurons could satisfy the mass constraint and would represent a higher level of consciousness as compared to other neurons and cells. The giant neurons discovered by the group led by Christof Koch in the brain of a mouse, having axonal connections distributed over the entire brain, might fulfil the constraint (see this).

  3. It is difficult to avoid the idea that macroscopic quantum gravitational coherence for multicellular objects with mass at least that of the largest neurons could be involved with biology. Multicellular systems can have mass above this threshold for some critical cell number. This might explain the dramatic evolutionary step distinguishing between prokaryotes (mono-cellulars consisting of archaea and bacteria, including also cellular organelles and cells with sub-critical size) and eukaryotes (multi-cellulars).

  4. I have proposed an explanation for the fountain effect appearing in superfluidity and apparently defying the law of gravity. In this case m was assumed to be the mass of the 4He atom. The above arguments however allow one to ask whether anything changes if one allows the blobs of superfluid to have masses coming as multiples of mP×β0^(1/2). One could check whether the fountain effect is possible for superfluid volumes with mass below mP×β0^(1/2).
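
A quick check of the mass threshold arithmetic quoted in item 2; the value β0 = 10^-3 is my illustrative assumption:

```python
G, hbar, c = 6.674e-11, 1.0546e-34, 2.998e8
m_P = (hbar * c / G) ** 0.5        # Planck mass, ~2.18e-8 kg
m_p = 1.673e-27                    # proton mass, kg

print(m_P / m_p)                   # ~1.3e19 proton masses, as stated in the text

# Size of a water blob of Planck mass (density 1000 kg/m^3): ~3e-4 m, large-neuron scale
print((m_P / 1000) ** (1/3), "m")

# Threshold mass m_P*beta0^(1/2) for an assumed beta0 = 1e-3, and the blob size
beta0 = 1e-3
m_crit = m_P * beta0 ** 0.5
print(m_crit, "kg ->", (m_crit / 1000) ** (1/3), "m")   # ~7e-10 kg, ~9e-5 m
```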

What about hem? In the case of superconductivity the interpretation of hem/h as a product of orders of Galois groups would allow one to estimate the number N = Q/2e of Cooper pairs of a minimal blob of superconducting matter from the condition that the order of its Galois group, 2N(α/β0)^(1/2) = n, is an integer. The condition is satisfied if one has α/β0 = q^2, with q = k/2l such that N is divisible by l. The number of Cooper pairs would be quantized as multiples of l. What is clear is that the em interaction would correspond to a lower level of cognitive consciousness, and that the step to gravitation dominated cognition would be huge if the dark gravitational interaction with the size scale of astrophysical systems is involved. Many-sheeted space-time allows this in principle.

These arguments support the view that quantum information theory indeed closely relates not only to gravitation but also to the other interactions. Speculations revolving around blackholes, entropy, holography, and the emergence of space would be replaced with the number theoretic vision about cognition providing an information theoretic interpretation of the basic interactions in terms of entangled tensor networks (see this). Negentropic entanglement would have magnetic flux tubes (and fermionic strings at them) as topological correlates. The increase of the complexity of quantum states could occur by the "fusion" of the Galois groups associated with the various nodes of this network as macroscopic quantum states are formed. Galois groups and their representations would define the basic information theoretic concepts. The emergence of gravitational quantum coherence, identified as the emergence of multi-cellulars, would mean a major step in biological evolution.

For details see the chapter Unified Number Theoretic Vision or the article What could be the role of complexity theory in TGD?.

For a summary of earlier postings see Latest progress in TGD.


What could be the role of complexity theory in TGD?

Chaotic (or actually extremely complex and only apparently chaotic) systems seem to be the diametrical opposite of completely integrable systems, of which TGD is a possible example. There is however also something common: in completely integrable classical systems all orbits are cyclic, and in chaotic systems the cyclic orbits form a dense set in the space of orbits. Furthermore, in chaotic systems the approach to chaos occurs via steps as a control parameter is changed. The same would take place in adelic TGD fusing the descriptions of matter and cognition.

In the TGD Universe the hierarchy of extensions of rationals inducing finite-dimensional extensions of p-adic number fields defines a hierarchy of adelic physics and provides a natural correlate for evolution. Galois groups and ramified primes appear as characterizers of the extensions. The sequences of Galois groups could characterize an evolution by phase transitions increasing the dimension of the extension associated with the coordinates of the "world of classical worlds" (WCW), in turn inducing the extension used at the space-time and Hilbert space levels. WCW decomposes into sectors characterized by the Galois groups G3 of the extensions associated with the 3-surfaces at the ends of the space-time surface at the boundaries of the causal diamond (CD), and by G4 characterizing the space-time surface itself. G3 (G4) acts on the discretization and induces a covering structure of the 3-surface (space-time surface). If the state function reduction to the opposite boundary of CD involves a localization into a sector with fixed G3, evolution is indeed mapped to a sequence of G3s.

Also the cognitive representation defined by the intersection of real and p-adic surfaces, with the coordinates of its points in an extension of rationals, evolves. This representation becomes increasingly complex during evolution as the number of its points grows. Fermions at partonic 2-surfaces connected by fermionic strings define a tensor network, which also evolves since the number of fermions can change.

The points of the space-time surface invariant under a non-trivial subgroup of the Galois group define singularities of the covering, and the positions of fermions at partonic 2-surfaces could correspond to these singularities - maybe even the maximal ones, in which case the singular points would be rational. There is a temptation to interpret the p-adic prime characterizing an elementary particle as a ramified prime of the extension having a decomposition similar to that of the singularity, so that a category theoretic view suggests itself.

One also ends up asking how the number theoretic evolution could select preferred p-adic primes satisfying the p-adic length scale hypothesis as survivors in number theoretic evolution, and ends up with a vision bringing strongly in mind the notion of conserved genes as an analogy for the conservation of ramified primes in extensions of extensions. heff/h=n has a natural interpretation as the order of the Galois group of the extension. The generalization of the hbargr = GMm/v0 = hbareff hypothesis to other interactions is discussed in terms of number theoretic evolution as an increase of G3, and one ends up with a surprisingly concrete vision for what might happen in the transition from prokaryotes to eukaryotes.

For details see the chapter Unified Number Theoretic Vision or the article What could be the role of complexity theory in TGD?.

For a summary of earlier postings see Latest progress in TGD.


Wednesday, April 05, 2017

Missing dark matter

One problem of the ΛCDM scenario is missing matter and dark matter in some places (see this). There is missing dark matter in the scale of R = 0.2 Gly and also in the vicinity of the solar system in the scale 1.5-4 kpc.

In the work titled "Missing Dark Matter in the Local Universe", Igor D. Karachentsev studied a sample of 11,000 galaxies in the local Universe around the Milky Way (see this). He summed up the masses of individual galaxies and galaxy groups and used this to test a very fundamental prediction of ΛCDM.

  1. Standard cosmology predicts the average fraction of matter density to be Ωm,glob = 28 +/- 3 per cent of the critical mass density (83 per cent of this would be dark and 17 per cent visible matter). 72 per cent would be dark energy, 23 per cent dark matter, and 4.8 per cent visible matter.

    To test this one can simply sum up all the galactic masses in some volume. Karachentsev chose the volume to be a sphere of radius R = 0.2 Gly surrounding the Milky Way and containing 11,000 galaxies. In this scale the density is expected to fluctuate only by 10 per cent. Note that the horizon radius is estimated to be about RH = 14 Gly, giving RH = 70 R.

  2. The visible galactic mass in a certain large enough volume of space was estimated, as was the sum of the galactic dark masses estimated as so-called virial masses (see this). The sum of these masses gave the estimate for the total mass.

  3. The estimate for the total mass (dark matter plus visible matter assuming the halo model) in a volume of radius 0.2 Gly gives Ωm,glob = 8 +/- 3 per cent, which is only 28 per cent of the predicted fraction. The predicted fraction of visible matter is 4.8 per cent and marginally consistent with 8 +/- 3 per cent, but it seems plausible that also dark matter is present, although its amount is much smaller than expected. The total contribution of dark matter could be at most of the same size as that of visible matter.

  4. One explanation is that all matter has not been included. A second, not very plausible, explanation is that the measurement region corresponds to a region with abnormally low density.

Can one understand the finding in the TGD framework?
  1. In the TGD based model part of the dark energy/matter would reside at long flux tubes with which galaxies form bound states. Constraints come from accelerated expansion and from galactic velocity curves, which allow one to determine the string tension for a given galaxy. Let us assume that the GRT limit of TGD and its predictions hold true.

    The estimate for the virial mass assumes that the galaxy's dark mass forms a halo. The basic observation is that in TGD the flux tubes carry the dark energy and mass, so that the virial mass would underestimate the dark mass of the galaxy.

  2. How long a piece of flux tube effectively corresponds to the dark and visible mass of a disk galaxy? This length should be roughly the length containing the dark mass and energy estimated from cosmology: L= Mdark/T. If the GRT limit of TGD makes sense, one has L= xMvis/T, where Mdark= xMvis is the amount of dark energy plus matter associated with the flux tube segment, Mvis is the visible mass, x≈ ρdark/ρvis≈ 83/17, and T is the string tension deduced from the asymptotic rotation velocity (see the sketch after this list).

    If these segments do not cover the entire flux tubes containing the galaxies along them, the amount of dark matter and energy will be underestimated. By the above argument elliptic galaxies would not have a considerable amount of dark matter and energy, so that only disk galaxies should contribute, unless there are flux tubes in shorter scales inside elliptic galaxies.

    Also larger and smaller scale flux tube structures contribute to the dark energy plus dark matter. Fractality suggests the presence of both larger and smaller flux tube structures than those associated with spiral galaxies (even stars could be associated with flux tubes).

    One should have estimates for the lengths of the various flux tubes involved. Unfortunately such estimates are not available.

  3. If the GRT limit makes sense, the dark energy and matter obtained in this manner should give 95 per cent of the critical mass density. The fraction of dark matter included would be at most 5/28≈ 18 per cent of the total dark matter; 82 per cent of the dark matter and energy would be missed in the estimate. This could allow one to get some idea about the lengths of the flux tubes and the density of galaxies along them.
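A minimal sketch of the estimate L= xMvis/T. The string tension relation T= v_inf^2/(2G) is the Newtonian result for a straight line mass and is my reading of "string tension deduced from the asymptotic rotation velocity"; the values Mvis= 10^11 solar masses and v_inf= 220 km/s are illustrative, not from the text.

    # Minimal sketch, assuming the Newtonian line-mass relation
    # v_inf^2 = 2*G*T for the asymptotic rotation velocity; M_vis and
    # v_inf are illustrative values.
    G     = 6.674e-11        # m^3 kg^-1 s^-2
    M_sun = 1.989e30         # kg
    kpc   = 3.086e19         # m

    M_vis = 1e11 * M_sun     # illustrative visible mass of a disk galaxy
    v_inf = 220e3            # m/s, illustrative asymptotic rotation velocity
    x     = 83.0 / 17.0      # rho_dark/rho_vis from the cosmological fractions

    T = v_inf**2 / (2 * G)   # string tension (linear mass density), kg/m
    L = x * M_vis / T        # flux tube length carrying M_dark = x*M_vis
    print("T = %.2e kg/m, L = %.0f kpc" % (T, L / kpc))

With these inputs the sketch gives L of the order of 100 kpc, that is a galactic length scale, which at least does not contradict the pearls-on-a-string picture.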

The amount of dark matter in the solar neighborhood was investigated in the work "Kinematical and chemical vertical structure of the Galactic thick disk II. A lack of dark matter in the solar neighborhood" by Christian Moni Bidin and collaborators (see this). Moni Bidin et al studied a sample of 400 red giants in the vicinity of the solar system at vertical distances of 1.5 to 4 kpc and deduced the 3-D kinematics of these stars. From these data they estimated the surface mass density of the Milky Way within this range of heights above the disk. This surface density should be the sum of both the visible and the dark mass.

According to their analysis, the visible mass is enough to explain the data: no additional mass is needed. Only a highly flattened dark matter halo would be consistent with the findings. This conforms with the TGD prediction that dark mass/energy is associated with magnetic flux tubes.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, April 04, 2017

New pseudoscalar meson at LHC?

This posting is a good example of the blunders that one cannot avoid when targeted by huge information torrents! The article telling about the bump was one year old. Thanks to "Mitchell"! I however want to leave the posting here, since I have a strong suspicion that the M89 physics is indeed there. It might serve as a reminder!

An extremely interesting finding at LHC. Not a 5 sigma finding, but it might be something real. There is evidence for the existence of a meson with mass 750 GeV decaying to a gamma pair. The only reasonable candidate is a pseudoscalar or scalar meson.

What does TGD say? M89 hadron physics is the basic "almost prediction" of TGD. The mass scale is scaled up from that of ordinary hadron physics, characterized by M107, by a factor 2^((107-89)/2)= 2^9= 512, as the p-adic length scale hypothesis suggests.

About two handfuls of bumps with masses identifiable in the TGD framework as scaled-up masses of mesons of ordinary hadron physics have been reported (see the article). The postings of Lubos trying to interpret the bumps as Higgses predicted by SUSY have been extremely helpful. No-one in the hegemony has of course taken this proposal seriously, and the bumps have been forgotten since people have been trying to find SUSY and dark matter particles, certainly not TGD!

What about this new bump? It has a mass of about 750 GeV. Scaling down by 1/512 gives a mass of about 1.465 GeV for the corresponding meson of ordinary hadron physics. It should be flavorless, have spin 0, and would most naturally be a pseudoscalar.

By going to the Particle Data Tables, clicking "Mesons" and looking at "Light Unflavored Mesons", one finds several unflavored mesons with mass near 1.465 GeV. Vector mesons do not decay to a gamma pair, and most pseudoscalar mesons also decay mostly via strong interactions. There is however one decaying also to a gamma pair: η(1475)! The predicted mass 750/512≈ 1.465 GeV deviates from 1.475 GeV by about 0.7 per cent.

There are many other ordinary mesons decaying to gamma pairs, and LHC might make history of science by looking for them at masses scaled up by 512 (see the sketch below).
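A minimal sketch of the scaling exercise in Python; the meson masses below are approximate PDG values entered by hand and should be checked against the Particle Data Tables.

    # Minimal sketch: scale light unflavored meson masses up by 512 to get
    # candidate M89 meson masses. Masses in GeV are approximate PDG values.
    mesons = {
        "pi0":       0.1350,
        "eta":       0.5479,
        "eta'(958)": 0.9578,
        "eta(1475)": 1.475,
    }
    for name, m107 in mesons.items():
        m89 = 512 * m107
        print("%-10s %6.3f GeV -> M89 candidate at %6.1f GeV" % (name, m107, m89))
    # e.g. eta(1475) -> 755.2 GeV, to be compared with the 750 GeV bump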

See the article M89 Hadron Physics and Quantum Criticality or the chapter New Particle Physics Predicted by TGD: Part I of p-Adic Physics.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, April 03, 2017

Zwicky paradox and models of galactic dark matter

The anomalies of the halo model of dark matter have begun to accumulate rapidly. The problems of the halo model are discussed in detail in the blog "Dark matter crisis" of Prof. Pavel Kroupa and Marcel S. Pawlowski (see this). MOND is the best-known competitor of the halo model but has its own problems. TGD is a less known alternative to the halo model. In the following I make brief comments about the Zwicky paradox (see this), which implies that neither cold nor warm dark matter particles in the usual sense (different from that of the TGD based model) can play a significant role in cosmology.

The standard/concordance model of dark matter relies on two assumptions: a) GRT is correct in all scales and b) all matter was created in the Big Bang. Within these assumptions Zwicky formulated two hypotheses (for references see the article) leading to the halo model of dark matter and also to the Zwicky paradox.

  1. Zwicky noticed (1937) that galaxies in the Coma galaxy cluster must be about 500 times heavier than judged from their light emission: a cold or hot dark matter halo must exist. Note that this does not actually require that the dark matter consists of some exotic particles or that the dark matter forms halos. To get a historical perspective, note that Vera Rubin published in 1976 an article about the constancy of the velocity curves of distant stars in Andromeda, which is a spiral galaxy.

  2. Zwicky noticed (1956) that when galaxies collide, the expelled matter can condense in new regions and form new, smaller dwarf galaxies. These so-called tidal dwarf galaxies are thus formed from the collisional debris of other galaxies.

From these observations one ends up with a computer model allowing one to simulate the formation of galaxies (for a detailed discussion see this). The basic elements of the model are collisions of galaxies, possibly leading to fusion, and the formation of tidal dwarf galaxies. The model assumes a statistical distribution of dark matter lumps defining the halos of the dwarf galaxies formed in the process.

The model predicts a lot of dark matter dominated dwarf galaxies formed around the dark matter lumps: their velocity spectrum should approach a constant. There are also tidal dwarf galaxies formed from the collision debris of other galaxies. Unless condensation around a dark matter lump is involved also in this case, these should not contain dark matter, and the velocity spectrum of tidal dwarfs should be declining. It turns out that tidal dwarfs alone are able to explain the observed dwarf galaxies, which are typically elliptic. Furthermore, there is no empirical manner to distinguish between tidal dwarfs and other dwarfs.

Do the elliptic galaxies contain dark matter? What does one know about the rotation curves of elliptic galaxies? There is an article "The rotation curves of elliptic galaxies" by J. Binney, published around 1979, about the determination of the rotation curves of elliptic galaxies, giving also some applications (see this). The velocity curves decline as if no dark matter were present. Therefore dark matter would not be present in dwarf galaxies, so that the prediction of the halo model would be wrong.

Could this finding be a problem also for MOND? If the laws governing gravitation are modified for small accelerations, shouldn't elliptic and spiral galaxies have similar velocity curves?

What about TGD?

  1. In the TGD Universe dark energy and matter reside at flux tubes, along which disk galaxies condense like pearls on a string.

  2. The observation about velocity curves suggests a TGD based explanation for the difference between elliptic and spiral galaxies. Elliptic galaxies - in particular tidal dwarfs - are not associated with a flux tube containing dark matter. A spiral galaxy could form from an elliptic galaxy if it becomes bound with a flux tube, as the recent finding about declining velocity curves for galaxies with age about 10 Gy suggests. Dark matter would not be present in dwarf galaxies, so that the prediction of the halo model is wrong. This also conforms with the fact that the stars in elliptic galaxies are much older than those in spiral galaxies (see this).

  3. Dwarf galaxies produced from the collision debris contain only ordinary matter. Elliptic galaxies can later condense around magnetic flux tubes, so that the velocity spectrum approaches a constant at large distances (see the sketch below). The breaking of spherical symmetry to cylindrical symmetry might allow one to understand why the oblate spheroidal shape is flattened to that of a disk.
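A minimal sketch contrasting the two velocity profiles discussed above: a Keplerian, declining curve for a galaxy without a flux tube versus a curve flattening to v_inf= sqrt(2*G*T) when a straight flux tube with string tension T contributes; all parameter values are illustrative.

    # Minimal sketch: Keplerian (declining) vs flux tube (flat) rotation
    # curves. v^2 = G*M/r for a point mass; a straight flux tube of linear
    # mass density T adds 2*G*T, so v -> sqrt(2*G*T) at large r. All values
    # are illustrative.
    from math import sqrt

    G     = 6.674e-11              # m^3 kg^-1 s^-2
    M_sun = 1.989e30               # kg
    kpc   = 3.086e19               # m

    M = 1e11 * M_sun               # illustrative visible mass
    T = 3.6e20                     # kg/m, illustrative string tension

    for r_kpc in (5, 10, 20, 40, 80):
        r = r_kpc * kpc
        v_kepler = sqrt(G * M / r)              # no dark matter: declining
        v_tube   = sqrt(G * M / r + 2 * G * T)  # with flux tube: flattens
        print("r = %3d kpc: v = %3.0f km/s (Kepler), %3.0f km/s (flux tube)"
              % (r_kpc, v_kepler / 1e3, v_tube / 1e3))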

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Saturday, April 01, 2017

Is conscious experience without definite causal order possible?

The exciting question is what the superposition of causal orders could mean from the point of view of conscious experience. What seems obvious is that in a superposition of selves with opposite arrows of clock time there should be no experience of time flowing in a definite direction. Dissipation is associated with the thermodynamical arrow of time. Therefore also the sensory experience of dissipation, expected to have an unpleasant emotional color, should be absent. This brings to mind the reports of meditators about experiences of timelessness. These states are also characterized by words like "bliss" and "enlightenment".

The reason why I find this aspect so interesting is a personal experience from about 32 years ago. I of course know that this kind of personal reminiscence, in an article intended to be scientific, is like writing one's own academic death sentence. But I also know that I did this long ago, so that I have nothing to lose! The priests of the materialistic church will never bother to take seriously anything that I have written, so it does not really matter! This experience - I dared to talk about an enlightenment experience - changed my personal life profoundly and led to the decision to continue the work with TGD instead of doing a full-day job to make money and keeping TGD as a kind of hobby. The experience also forced me to realize that our normal conscious experience is only a dim shadow of what it can be, and it stimulated the passion to understand consciousness.

In this experience my body went into a kind of light, flowing state: liquid is what comes to mind. All the unpleasant sensations in the body characterizing everyday life (at least mine!) suddenly disappeared as this phase transition propagated through my body. As a physicist I characterized this as an absence of dissipation, and I talked to myself about a state of whole-body consciousness.

There was also the experience of moving in space in cosmic scales and the experience of the presence of realities very different from the familiar one. Somehow I saw these different worlds from above, from a bird's eye view. I also experienced what I would call time travel and re-incarnation in some other world.

Decades later I would ask whether my sensory consciousness could have been replaced with one about my magnetic body only. In the beginning of the experience there was indeed a concrete feeling that my body size had increased by some factor. I even had the feeling that the factor was about 137 (the inverse of the fine structure constant), but this interpretation was probably forced by my attempt to associate the experience with something familiar to a physicist! Although I did my best all the time to understand what I was experiencing, I did not direct my attention to my time experience, and I cannot say whether I experienced the presence or the absence of time or time flow.

Towards the end of the experience I was clinically unconscious for about a day or so. I was however conscious. For instance, I experienced quite concretely how the arrow of time flow started to fluctuate back and forth. I somehow knew that a permanent change would mean death, and I was fighting to preserve the usual arrow of time. My childhood friend, who certainly did not know much about physics, told about an alternation of the arrow of time during a state that psychiatrists classified as an acute psychosis.

See the chapter Topological Quantum Computation in TGD Universe and the article Quantum computations without definite causal structure: TGD view.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.