Tuesday, March 21, 2017

Early galactic collision gives support for TGD based model of galactic dark matter

Discoveries related to galaxies and dark matter emerge at an accelerating pace, and from the TGD point of view it seems that the puzzle of galactic dark matter is now solved.

The newest finding is described in the popular article This Gigantic Ring of Galaxies Could Bring Einstein's Gravity Into Question. What has been found is that in a local group of 54 galaxies, with Milky Way and Andromeda near its center, the other dwarf galaxies recede outwards as a ring. The local group lies in a good approximation in a plane, and the situation is said to look like a spinning umbrella from which water droplets fly radially outwards.

The authors of the article Anisotropic Distribution of High Velocity Galaxies in the Local Group argue that the finding can be understood if Milky Way and Andromeda had a nearly head-on collision about 10 billion years ago. Milky Way and Andromeda would have lost the radially moving dwarf galaxies in this collision during the rapid acceleration turning the direction of motion of both. A Coulomb collision is a good analog.

There are however problems. The velocities of the dwarfs are far too high, and the colliding Milky Way and Andromeda should have fused together by the friction caused by the dark matter halo.

What does TGD say? In TGD galactic dark matter (actually also dark energy) resides at cosmic strings thickened to magnetic flux tubes, with galaxies like pearls along a necklace. The finding could perhaps be explained if the galaxies in the same plane make a near hit and generate the dwarf galaxies in the collision by the spinning umbrella mechanism.

In the TGD Universe dark matter is at cosmic strings, and this automatically predicts a constant velocity distribution. The friction created by a dark matter halo is absent, and scattering in the proposed manner could be possible. The scattering event could basically be a scattering of approximately parallel cosmic strings, with Milky Way and Andromeda each forming one pearl in their respective cosmic necklaces.

But were Milky Way and Andromeda already associated with cosmic strings at that time? The time would be about 10 billion years ago. One cannot exclude this possibility. Note however that the binding to strings might have helped to avoid the fusion. The recent finding about the effective absence of dark matter about 10 billion years ago - velocity distributions decline at large distances - suggests that galaxies formed bound states with cosmic strings only later. This would be like the formation of neutral atoms from ions once energies are no longer too high! How fast things develop becomes clear from the fact that I posted the TGD explanation to my blog yesterday and replaced it with a corrected version this morning!

See the chapter TGD and Astrophysics of "Physics in Many-Sheeted Space-time" or the article TGD interpretation for the new discovery about galactic dark matter.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, March 20, 2017

Velocity curves of galaxies flatten for large redshifts

Sabine Hossenfelder gave a link to a popular article "Declining Rotation Curves at High Redshift" (see this) telling about a strange new finding about galactic dark matter. The rotation curves are declining in the early Universe, meaning distances of about 10 billion light years (see this). In other words, the rotation velocity of distant stars decreases with radius rather than approaching a constant - as if dark matter were absent and galaxies were baryon dominated. This challenges the halo model of dark matter. For illustrations of the rotation curves see the article. Of course, the conclusions of the article are uncertain.

Some time ago also a finding about the correlation of baryonic mass density with the density of dark matter emerged: the ScienceDaily article "In rotating galaxies, distribution of normal matter precisely determines gravitational acceleration" can be found here. The original article can be found in arXiv.org (see this). The TGD explanation (see this) involves only the string tension of cosmic strings and predicts the dependence of the behavior of baryonic matter on the distance from the center of the galaxy.

In standard cosmology, based on single-sheeted GRT space-time, large redshifts mean a very early cosmology at the counterpart of a single space-time sheet, and the findings are very difficult to understand. What about the interpretation of the results in the TGD framework? Let us first summarize the basic assumptions behind TGD inspired cosmology and the view about galactic dark matter.

  1. The basic difference between TGD based and standard cosmology is that many-sheeted space-time brings in fractality and length scale dependence. In zero energy ontology (ZEO) one must specify in what length scale the measurements are carried out. This means specifying the causal diamond (CD) parameterized by moduli including its size. The larger the size of CD, the longer the length scale of the physics involved. This is of course not new for quantum field theorists. It is however news for cosmologists. The twistorial lift of TGD allows one to formulate this vision quantitatively.

  2. The TGD view resolves the paradox due to the huge value of the cosmological constant in very small scales. Kähler action and volume energy compensate each other, so that the effective cosmological constant decreases like the inverse of the p-adic length scale squared. The effective cosmological constant thus suffers a huge reduction in cosmic scales, and this solves the greatest (the "most gigantic" would be a better attribute) quantitative discrepancy that physics has ever encountered. The smaller value of the Hubble constant in long length scales also finds an explanation (see this): the acceleration of cosmic expansion due to the effective cosmological constant decreases in long scales.

  3. In the TGD Universe galaxies are located along cosmic strings, which have thickened to magnetic flux tubes, like pearls in a necklace. The string tension of cosmic strings is proportional to the effective cosmological constant. There is no dark matter halo: dark matter and energy are at the magnetic flux tubes and automatically give rise to a constant velocity spectrum for the distant stars of galaxies, determined solely by the string tension. The model also allows one to understand the above mentioned finding about the correlation of baryonic and dark matter densities (see this).

What could be the explanation for the new findings about galactic dark matter?
  1. The idea of the first day is that the string tension of cosmic strings depends on the scale of observation, and this means that the asymptotic velocity of stars decreases in long length scales. The asymptotic velocity would be constant but smaller than for galaxies in smaller scales. The velocity graphs show that in the velocity range considered the velocity decreases. One cannot of course exclude the possibility that the velocity is asymptotically constant.

    The grave objection is that the scale in question is the galactic scale and the same for all galaxies irrespective of distance: the scale characterizes the object rather than its distance from the observer. Fractality suggests a hierarchy of string like structures such that the string tension in long scales decreases and the asymptotic velocity associated with them decreases with the scale.

  2. The idea of the next day is that the galaxies at very early times have not yet formed bound states with cosmic strings, so that the velocities of stars are determined solely by the baryonic matter and approach zero at large distances. Only later do the galaxies condense around cosmic strings - somewhat like water droplets around a blade of grass. The formation of these gravitationally bound states would be analogous to the formation of bound states of ions and electrons below the ionization temperature, or to the formation of hadrons from quarks, but taking place in a much longer scale. The early galaxies are indeed baryon dominated, and the decline of the rotation velocities would be real.
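A toy numerical illustration (my own sketch, not from the text): for an idealized infinite straight string with tension T, the radial acceleration is g = 2GT/r, giving a constant rotation velocity v = sqrt(2GT), while a point-like baryonic mass alone gives a Keplerian decline. All numbers are invented for illustration.

```python
import math

# Toy rotation curves in G = 1 units. A straight cosmic string of tension
# (linear mass density) T gives radial acceleration g = 2*G*T/r, hence a
# constant rotation velocity v_inf = sqrt(2*G*T). The baryonic mass is
# crudely treated as a central point mass. All values are illustrative.
G = 1.0
M_baryon = 1.0
T = 0.02

def v_baryon_only(r):
    # Keplerian: v^2 = G*M/r, declining with distance
    return math.sqrt(G * M_baryon / r)

def v_with_string(r):
    # baryonic term plus the constant string contribution 2*G*T
    return math.sqrt(G * M_baryon / r + 2 * G * T)

for r in (5.0, 20.0, 80.0):
    print(r, round(v_baryon_only(r), 3), round(v_with_string(r), 3))
```

The baryon-only curve falls off as 1/sqrt(r), while the string contribution makes v(r) flatten to sqrt(2GT) at large r, the kind of behavior described above.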

See the chapter TGD and Astrophysics of "Physics in Many-Sheeted Space-time" or the article TGD interpretation for the new discovery about galactic dark matter.


Getting quantitative about violations of CP, T, and P

The twistor lift of TGD led to the introduction of the Kähler form also in the M4 factor of the imbedding space M4×CP2. The moduli space of causal diamonds (CDs), introduced already early on, allows one to save Poincare invariance at the level of WCW. One of the very nice things is that the self-duality of J(M4) leads to a new mechanism of breaking for P, CP, and T in long scales, where these breakings indeed take place. P corresponds to chirality selection in living matter, CP to matter antimatter asymmetry, and T could correspond to the preferred arrow of clock time. TGD allows both arrows, but T breaking could make one arrow dominant. Also the hierarchy of Planck constants is expected to be important.

Can one say anything quantitative about these various breakings?

  1. J(M4) is proportional to Newton's constant G in the natural scale of Minkowski coordinates defined by the twistor sphere of T(M4). Therefore CP breaking is expected to be proportional to lP^2/R^2 or to its square root lP/R. The estimate for lP/R is X ≡ lP/R ≈ 2^-12 ≈ 2.5×10^-4.

    The determinant of the CKM matrix is equal to a phase factor by unitarity (UU^† = 1), and its imaginary part characterizes CP breaking. The imaginary part of the determinant should be proportional to the Jarlskog invariant J = ± Im(V_us V_cb V*_ub V*_cs) characterizing the CP breaking of the CKM matrix (see this).

    The recent experimental estimate is J ≈ 3.0×10^-5. One has J/X ≈ 0.1, so that there is an order of magnitude deviation. The earlier experimental estimate used in p-adic mass calculations was almost an order of magnitude larger, consistent with the value of X. For B mesons CP breaking is about 50 times larger than for kaons, and it is clear that the Jarlskog invariant does not distinguish between different mesons, so that it is better to talk about orders of magnitude only.

    The parameter used to characterize matter antimatter asymmetry (see this) is the ratio R = [n(B) - n(B̄)]/n(γ) ≈ 9×10^-11 of the difference of baryon and antibaryon densities to the photon density in cosmological scales. One has X^3 ≈ 1.4×10^-11, which is an order of magnitude smaller than R.

  2. What is interesting is that P is badly broken in long length scales, as is CP. The same could be true for T. Could this relate to the thermodynamical arrow of time? In ZEO state function reductions to the opposite boundary change the direction of clock time. Most physicists believe that the arrow of thermodynamical time, and thus also of clock time, is always the same. There is evidence that in living matter both arrows are possible. For instance, Fantappie has introduced the notion of syntropy as time reversed entropy. This suggests that the thermodynamical arrow of time could correspond to the dominance of one of the two arrows of time and be due to the self-duality of J(M4) leading to the breaking of T. For instance, the clock time spent in the time reversed phase could be considerably shorter than in the dominant phase. A quantitative estimate for the ratio of these times might be given by some power of the ratio X = lP/R.
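The order-of-magnitude comparisons quoted above are easy to restate numerically; the following sketch only repeats the numbers from the text:

```python
# X = l_P/R ~ 2^-12 is the TGD estimate, J the experimental Jarlskog
# invariant, R_B the baryon-to-photon asymmetry ratio (values from the text).
X = 2.0 ** -12      # ~ 2.4e-4
J = 3.0e-5
R_B = 9.0e-11

print(J / X)        # ~ 0.1: J is an order of magnitude below X
print(X ** 3)       # ~ 1.5e-11: an order of magnitude below R_B
print(R_B / X ** 3) # ~ 6
```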
For background see chapter Some questions related to the twistor lift of TGD of "Towards M-matrix" or the article with the same title.


Saturday, March 18, 2017

Is there a duality between associative and co-associative space-time surfaces?

A more appropriate title of this posting would be "A new duality or an old duality seen from a number theoretic perspective?". The original proposal turned out to be partially wrong and I can only blame myself for breaking the rule "Wait for a week before posting!".

M8-H duality maps the preferred extremals in M8 to those in H = M4×CP2 and vice versa. The tangent spaces of an associative space-time surface in M8 would be quaternionic (Minkowski) spaces.

In M8 one can consider also co-associative space-time surfaces having an associative normal space. Could the co-associative normal spaces of associative space-time surfaces in the case of preferred extremals form an integrable distribution, and therefore define a space-time surface in M8 mappable to H by M8-H duality? This might be possible, but the associative tangent space and the normal space correspond to the same CP2 point, so that an associative space-time surface in M8 and its possibly existing co-associative companion would be mapped to the same surface in H.

This dead idea however inspires an idea about a duality mapping Minkowskian space-time regions to Euclidian ones. This duality would be analogous to inversion with respect to the surface of a sphere, which is a conformal symmetry. Maybe this inversion could be seen as the TGD counterpart of the finite-D conformal inversion at the level of space-time surfaces. There is also an analogy with the method of images used in some 2-D electrostatic problems to reflect the charge distribution outside a conducting surface to its virtual image inside the surface. The 2-D conformal invariance would generalize to its 4-D quaternionic counterpart. Euclidian/Minkowskian regions would be kind of Leibniz monads, mirror images of each other.

  1. If the strong form of holography (SH) holds true, it would be enough to have this duality at the informational level, relating only the 2-D surfaces carrying the holographic information. For instance, Minkowskian string world sheets would have duals at the level of space-time surfaces in the sense that their 2-D normal spaces in X4 form an integrable distribution defining the tangent spaces of a 2-D surface. This 2-D surface would have an induced metric with Euclidian signature.

    The duality could relate either a) Minkowskian and Euclidian string world sheets or b) Minkowskian/Euclidian string world sheets and partonic 2-surfaces common to Minkowskian and Euclidian space-time regions. Having both a) and b) is apparently the most powerful option information theoretically, but a) is actually implied by b) due to the transitivity of the duality: Minkowskian string world sheets are dual with partonic 2-surfaces, which in turn are dual with Euclidian string world sheets.

    1. Option a): The dual of a Minkowskian string world sheet would be a Euclidian string world sheet in a Euclidian region of the space-time surface, most naturally in the Euclidian "wall neighbour" of the Minkowskian region. At the parton orbits defining the light-like boundaries between the Minkowskian and Euclidian regions the signature of the 4-metric is (0,-1,-1,-1) and the induced 3-metric has signature (0,-1,-1) allowing light-like curves. Minkowskian and Euclidian string world sheets would naturally share these light-like curves as common parts of their boundaries.

    2. Option b): Minkowskian/Euclidian string world sheets would have partonic 2-surfaces as duals. The normal space of the partonic 2-surface at the intersection of string world sheet and partonic 2-surface would be the tangent space of the string world sheet, so that this duality could make sense locally. The different topologies for string world sheets and partonic 2-surfaces force one to challenge this option globally, but it might hold in some finite region near the partonic 2-surface. The weak form of electric-magnetic duality could be closely related to this duality.
    In the case of elementary particles, regarded as pairs of wormhole contacts connected by flux tubes and associated strings, this would give a rather concrete space-time view about the stringy structure of an elementary particle. One would have a pair of relatively long (Compton length) Minkowskian string world sheets at parallel space-time sheets, completed to a parallelepiped by adding Euclidian string world sheets connecting the two space-time sheets at two extremely short (CP2 size scale) Euclidian wormhole contacts. These parallelepipeds would define lines of scattering diagrams analogous to the lines of Feynman diagrams.
This duality looks new, but as already noticed it is actually just the old electric-magnetic duality seen from a number-theoretic perspective.

For background see chapter Some questions related to the twistor lift of TGD of "Towards M-matrix" or the article with the same title.

About the generalization of dual conformal symmetry and Yangian in TGD

The discovery of the dual of the conformal symmetry of gauge theories was crucial for the development of the twistor Grassmannian approach. The D=4 conformal generators acting on twistors have a dual representation in which they act on momentum twistors: one has dual conformal symmetry, which becomes manifest in this representation. These two separate symmetries extend to a Yangian symmetry providing a powerful constraint on the scattering amplitudes in the twistor Grassmannian approach for N=4 SUSY.

In TGD the conformal Yangian extends to the super-symplectic Yangian - actually, all symmetry algebras have a Yangian generalization, with locality generalized to multi-locality with respect to partonic 2-surfaces. The generalization of the dual conformal symmetry has however remained obscure. In the following I describe what the generalization of the two conformal symmetries and of the Yangian symmetry would mean in the TGD framework.

One also ends up with a proposal of an information theoretic duality between Euclidian and Minkowskian regions of the space-time surface, inspired by number theory: one might say that the dynamics of Euclidian regions is a mirror image of the dynamics of Minkowskian regions. What is in question is a generalization of the conformal reflection on a sphere and of the method of image charges in 2-D electrostatics to the level of space-time surfaces, allowing a concrete construction recipe for both Euclidian and Minkowskian regions of preferred extremals. One might say that Minkowskian and Euclidian regions are analogous to Leibnizian monads reflecting each other in their internal dynamics.

See the chapter Some Questions Related to the Twistor Lift of TGD of "Towards M-matrix" or the article with the same title.


Tuesday, March 14, 2017

Could second generation of weak bosons explain the reduction of proton charge radius?

The discovery by Pohl et al (2010) that the charge radius of the proton deduced from the muonic version of the hydrogen atom is .842 fm - about 4 per cent smaller than the charge radius .875 fm deduced from the ordinary hydrogen atom - is in complete conflict with the cherished belief that atomic physics belongs to the museum of science (for details see the Wikipedia article). The title of the article published in Nature - Quantum electrodynamics: a chink in the armour? - expresses well the possible implications, which might actually extend well beyond QED.

Quite recently (2016) new, more precise data has emerged from Pohl et al (see this): now the charge radius of the muonic variant of deuterium has been measured. The charge radius is reduced from 2.1424 fm to 2.1256 fm, a reduction of .0168 fm, which is about .8 per cent (see this). The charge radius of the proton deduced from it is reported to be consistent with the charge radius deduced from muonic hydrogen. The anomaly therefore seems to be real. The deuterium data provide a further challenge for various models.
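A quick arithmetic check of the quoted muonic deuterium numbers:

```python
# Deuteron charge radii quoted in the text (fm)
r_electronic = 2.1424   # from electronic measurements
r_muonic = 2.1256       # from muonic deuterium

diff = r_electronic - r_muonic
pct = 100 * diff / r_electronic
print(round(diff, 4))   # 0.0168 fm
print(round(pct, 2))    # ~ 0.78, i.e. about .8 per cent
```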

The finding is a problem either for QED or for the standard view about what the proton is. Lamb shift is the effect distinguishing between states of the hydrogen atom having otherwise the same energy but different angular momentum. The effect is due to the quantum fluctuations of the electromagnetic field. The energy shift factorizes into a product of two expressions. The first one describes the effect of these zero point fluctuations on the position of the electron or muon, and the second one characterizes the average of the nuclear charge density as "seen" by the electron or muon. The latter should be the same as in the case of ordinary hydrogen atom, but it is not. Does this mean that the presence of the muon reduces the charge radius of the proton as determined from the muon wave function? This of course looks implausible, since the radius of the proton is so small. Note that a compression of the muon's wave function would have the same effect.

Before continuing it is good to recall that QED, and quantum field theories in general, have difficulties with the description of bound states: something which has not received too much attention. For instance, the van der Waals force at molecular scales is a problem. A possible TGD based explanation, and a possible solution of these difficulties proposed two decades ago, is that for bound states the two charged particles (say nucleus and electron, or two atoms) correspond to two 3-D surfaces glued together by flux tubes rather than being idealized to points of Minkowski space. This would make the non-relativistic description based on the Schrödinger amplitude natural and replace the description based on the Bethe-Salpeter equation, which has horrible mathematical properties.

The basic idea of the original model of the anomaly (see this) is that the muon has some probability to end up at the magnetic flux tubes assignable to the proton. In this state it would not contribute to the ordinary Schrödinger amplitude. The effect would be a reduction of |Ψ|^2 near the origin and an apparent reduction of the charge radius of the proton. The weakness of the model is that it cannot make a quantitative prediction for the size of the effect. Even the sign is questionable. Only the S-wave binding energy is affected considerably, but does the binding energy really increase by the interaction of the muon with the quarks at the magnetic flux tubes? Is the average of the charge density seen by the muon in the S-wave state larger; in other words, does the muon spend more time near the proton, or do the quarks spend more time at the flux tubes?

In the following a new model for the anomaly will be discussed.

  1. The model is inspired by data about the breaking of universality of weak interactions in neutral B decays, possibly manifesting itself also in the anomaly in the magnetic moment of the muon. Also the different values of the charge radius deduced from hydrogen atom and muonic hydrogen could reflect the breaking of universality. In the original model the breaking of universality is only effective.

  2. TGD indeed predicts a dynamical U(3) gauge symmetry whose 8+1 gauge bosons correspond to pairs of fermion and anti-fermion at the opposite throats of a wormhole contact. Throats are characterized by genus g = 0,1,2, so that bosons are superpositions of states labelled by (g1,g2). Fermions correspond to a single wormhole throat carrying fermion number and behave as a U(3) triplet labelled by g.

    The charged gauge bosons with different genera for the wormhole throats are expected to be very massive. The 3 neutral gauge bosons, superpositions of states (g,g) with the same genus at both throats, are expected to be lighter. Their charge matrices are orthogonal and necessarily break the universality of electroweak interactions. For the lowest boson family - the ordinary gauge bosons - the charge matrix is proportional to the unit matrix. The exchange of second generation bosons Z01 and γ1 would give rise to a Yukawa potential increasing the binding energies of S-wave states. Therefore the Lamb shift, defined as the difference between the energies of S and P waves, is increased, and the charge radius deduced from the Lamb shift becomes smaller.

  3. The model thus predicts the correct sign for the effect, but the size of the effect from a naive estimate assuming only γ1 exchange with α1 = α for M = 2.9 TeV is almost an order of magnitude too small. The values of the gauge couplings α1 and αZ,1 are free parameters, as are the mixing angles between the states (g,g). The effect is also proportional to the ratio (mμ/M(boson))^2. It turns out that the inclusion of the Z01 contribution and the assumption that α1 and αZ,1 are near the color coupling strength αs gives a correct prediction.

Motivations for the breaking of electroweak universality

The anomaly of the charge radius could be explained also as a breaking of the universality of weak interactions. Also other anomalies challenging universality exist. By universality, the decays of the neutral B-meson to lepton pairs should be identical apart from corrections coming from the different lepton masses, but this does not seem to be the case (see this). There is also the anomaly in the muon's magnetic moment, discussed briefly here. This leads one to ask whether these anomalies could be due to a failure of the universality of electroweak interactions.

The proposal for the explanation of the muon's anomalous magnetic moment and of the anomaly in the decays of the B-meson is inspired by a recent very special di-electron event, and involves higher generations of weak bosons predicted by TGD leading to a breaking of lepton universality. Both Tommaso Dorigo (see this) and Lubos Motl (see this) tell about a spectacular 2.9 TeV di-electron event not observed in previous LHC runs. A single event of this kind is of course most probably just a fluctuation, but the human mind is such that it tries to see something deeper in it - even if practically all trials of this kind are chasing of mirages.

Since the decay is leptonic, the typical question is whether the dreamed-for state could be an exotic Z boson. This is also the reaction in the TGD framework. The first question to ask is whether the weak bosons assignable to the Mersenne prime M89 have scaled up copies assignable to the Gaussian Mersenne M79. The scaling factor for mass would be 2^((89-79)/2) = 32. When applied to the Z mass of about .09 TeV one obtains 2.88 TeV, not far from 2.9 TeV. Eureka!? Looks like a direct scaled up version of Z!? W should have a similar variant around 2.6 TeV.
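The scaling estimate can be checked in two lines (the Z mass value 0.0912 TeV is the standard value, inserted by me; the text rounds it to .09 TeV):

```python
# p-adic mass scaling between Mersenne prime M89 and Gaussian Mersenne M79:
# masses scale by 2^((89-79)/2) = 2^5 = 32.
scale = 2 ** ((89 - 79) / 2)
m_Z = 0.0912                   # TeV, ordinary Z boson mass
print(scale)                   # 32.0
print(round(scale * m_Z, 2))   # ~ 2.92 TeV, close to the 2.9 TeV event
```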

TGD indeed predicts exotic weak bosons and also gluons.

  1. The TGD based explanation of the family replication phenomenon in terms of the genus-generation correspondence forces one to ask whether gauge bosons, identifiable as pairs of fermion and antifermion at the opposite throats of a wormhole contact, could exhibit a bosonic counterpart of family replication. The dynamical SU(3) assignable to the three lowest fermion generations, labelled by the genus of the partonic 2-surface (wormhole throat), means that fermions are combinatorially SU(3) triplets. Could the 2.9 TeV state - if it exists - correspond to this kind of state in the tensor product of triplet and antitriplet? The mass of the state should depend, besides the p-adic mass scale, also on the structure of the SU(3) state, so that the mass would be different. This difference should be very small.

  2. The dynamical SU(3) could be broken so that wormhole contacts with different genera for the throats would be more massive than those with the same genera. This would give an SU(3) singlet and two neutral states, which are analogs of η' and of η and π0 in Gell-Mann's quark model. The analogs of η and π0 and the analog of η', which I have identified as the standard weak boson, would have different masses. But how large is the mass difference?

  3. These 3 states are expected to have identical masses for the same p-adic mass scale, if the mass comes mostly from the analog of the hadronic string tension assignable to the magnetic flux tube connecting the two wormhole contacts associated with any elementary particle in the TGD framework (this is forced by the condition that the flux tube carrying monopole flux is closed and makes a very flattened square shaped structure with the long sides of the square at different space-time sheets). p-Adic thermodynamics would give a very small genus dependent contribution to the mass if the p-adic temperature is T = 1/2, as one must assume for gauge bosons (T = 1 for fermions). Hence the 2.9 TeV state could indeed correspond to this kind of state.

The sign of the effect is predicted correctly and the order of magnitude comes out correctly

Could the exchange of the massive MG,79 photon and Z0 give rise to an additional electromagnetic interaction inducing the breaking of universality? The first observation is that the binding energy of the S-wave state increases, but there is practically no change in the energy of the P-wave state. Hence the effective charge radius rp, as deduced from the parameterization of the binding energy in terms of the proton charge radius, indeed decreases.

Also the order of magnitude for the effect must come out correctly.

  1. The additional contribution to the effective Coulomb potential is a Yukawa potential. In the S-wave state this gives, in a good approximation, a contribution to the binding energy equal to the expectation value of the Yukawa potential, which can be parameterized as

    V(r) = g^2 e^(-Mr)/r ,  g^2 = 4πkα .

    The expectation value differs from zero significantly only in S-wave states, characterized by the principal quantum number n. Since the exponential goes to zero in the p-adic length scale associated with the 2.9 TeV mass, which is roughly a factor of 32 shorter than the intermediate boson length scale, the hydrogen atom wave function is constant in an excellent approximation over the effective integration volume. This gives for the energy shift

    Δ E = g^2 |Ψ(0)|^2 × I ,

    |Ψ(0)|^2 = [2^2/n^2] × (1/a0^3) ,

    a0 = 1/(mα) ,

    I = ∫ (e^(-Mr)/r) r^2 dr dΩ = 4π/(3M^2) .

    For the energy shift and its ratio to the ground state energy

    En = (α^2/2n^2) × m

    one obtains the expression

    Δ En = (64π^2/3) × (α^4/n^2) (m/M)^2 × m ,

    Δ En/En = (2^7/3) π^2 α^2 k^2 (m/M)^2 .

    For k = 1 and M = 2.9 TeV one has Δ En/En ≈ 3×10^-11 for the muon.
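The estimate can be restated numerically (the muon mass 105.66 MeV and the fine structure constant are standard values inserted by me; k and M are as in the text):

```python
import math

# Delta E_n / E_n = (2^7/3) * pi^2 * alpha^2 * k^2 * (m/M)^2 for the muon
alpha = 1 / 137.036
m_mu = 0.10566e-3       # muon mass in TeV
M = 2.9                 # boson mass in TeV
k = 1.0

ratio = (2 ** 7 / 3) * math.pi ** 2 * alpha ** 2 * k ** 2 * (m_mu / M) ** 2
print(ratio)            # ~ 3e-11, as quoted in the text
```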

Consider next the Lamb shift.

  1. The Lamb shift, as the difference of the energies of the S and P wave states (see this), is approximately given by

    Δn(Lamb)/En = 13α^3/2n .

    For n=2 this gives Δ2(Lamb)/E2 = 4.9×10^-7.

  2. The parameterization for the Lamb shift reads as

    Δ E(rp) = a - b rp^2 + c rp^3
    = 209.968(5) - 5.2248 × rp^2 + 0.0347 × rp^3 meV ,

    where the charge radius rp = .8750 is expressed in femtometers and the energy in meV.

  3. The reduction of rp by 3.3 per cent allows one to estimate the reduction of the Lamb shift (the attractive additional potential reduces it). The relative change of the Lamb shift is

    x = [Δ E(rp) - Δ E(rp(exp))]/Δ E(rp)

    = [- 5.2248 × (rp^2 - rp(exp)^2) + 0.0347 × (rp^3 - rp(exp)^3)]/[209.968(5) - 5.2248 × rp^2 + 0.0347 × rp^3] .

    The estimate gives x = 1.2×10^-3.
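As a check, the estimate of item 3 can be reproduced directly from the parameterization (the 3.3 per cent reduction is the figure quoted in the text):

```python
# Lamb shift parameterization from the text (energy in meV, r in fm)
def lamb_shift(r):
    return 209.968 - 5.2248 * r ** 2 + 0.0347 * r ** 3

r_p = 0.8750
r_exp = (1 - 0.033) * r_p   # 3.3 per cent smaller charge radius

x = (lamb_shift(r_p) - lamb_shift(r_exp)) / lamb_shift(r_p)
print(abs(x))               # ~ 1.2e-3, as quoted
```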

This value can be compared with the prediction. For n=2 the ratio Δ En/Δ En(Lamb) is

x = Δ En/Δ En(Lamb) = k^2 × [2^9 π^2/(3×13×α)] × (m/M)^2 .

For M = 2.9 TeV the numerical estimate gives x ≈ (1/3)×k^2×10^-4. The value of x deduced from the experimental data is x ≈ 1.2×10^-3. There is a discrepancy of one order of magnitude. For k ≈ 5 the correct order of magnitude is obtained. There are thus good hopes that the model works.
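One can also invert the prediction to see what value of k the experimental x ≈ 1.2×10^-3 would require (a sketch, with standard values of α and the muon mass inserted by me):

```python
import math

# x = k^2 * [2^9 * pi^2 / (3*13*alpha)] * (m/M)^2, from the text
alpha = 1 / 137.036
m_mu = 0.10566e-3       # muon mass in TeV
M = 2.9                 # TeV

x_pred_k1 = (2 ** 9 * math.pi ** 2 / (3 * 13 * alpha)) * (m_mu / M) ** 2
k_needed = math.sqrt(1.2e-3 / x_pred_k1)
print(x_pred_k1)        # ~ (1/3) * 1e-4, as quoted for k = 1
print(k_needed)         # a k of order 5-10 is required
```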

The contribution of Z01 exchange was neglected in the above estimate. Is it present, and can it explain the discrepancy?

  1. In the case of deuterium the weak isospins of the proton and the neutron are opposite, so that their contributions to the Z01 vector potential cancel. If the Z01 contribution for the proton could be neglected, one would have Δ rp = Δ rd.

    One however has Δ rp ≈ 2.75 Δ rd. Hence the Z01 contribution to Δ rp should satisfy Δ rp(Z01) ≈ 1.75×Δ rp(γ1). This requires αZ,1 > α1, which is true also for the ordinary gauge bosons. The weak isospins of the electron and the proton are opposite, so that the atom is a weak isospin singlet in the Abelian sense, and one has I3p I3μ = -1/4 and an attractive interaction. The condition relating the two contributions suggests

    αZ,1/α1 ≈ 28/6 = 4 + 2/3 .

    In the standard model one has αZ/α = 1/[sin^2(θW)cos^2(θW)] = 5.6 for sin^2(θW) = .23. One has the lower bound αZ,1/α1 ≥ 4, saturated for sin^2(θW,1) = 1/2. The Weinberg angle can be expressed as

    sin^2(θW,1) = (1/2)[1 - (1 - 4(α1/αZ,1))^(1/2)] .

    αZ,1/α1 ≈ 28/6 gives sin^2(θW,1) = (1/2)[1 - (1/7)^(1/2)] ≈ .31.

    The contribution to the axial part of the potential depending on spin need not cancel and could give a spin dependent contribution for both proton and deuteron.

  2. If the scale of α_1 and α_Z,1 is that of α_s, and if the factor 2.75 emerges in the proposed manner, one has k^2 ≈ 2.75×10 = 27.5, rather near to the rough estimate k^2 ≈ 27 from the data for proton.

    Note however that there are mixing angles involved, corresponding to the diagonal hermitian family charge matrix Q = (a,b,c) satisfying a^2+b^2+c^2 = 1 and the condition a+b+c = 0 expressing orthogonality with the electromagnetic charge matrix (1,1,1)/3^(1/2), which in turn expresses electroweak universality for the ordinary electroweak bosons. For instance, one could have (a,b,c) = (0,1,-1)/2^(1/2) for the second generation and (a,b,c) = (2,-1,-1)/6^(1/2) for the third generation. In this case the above estimate would be scaled down: α_1 → 2α_1/3 ≈ 1/20.5.
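The electroweak numbers quoted above, and the stated properties of the illustrative family charge matrices, are easy to verify with a few lines:

```python
import math

# Checks of the numbers above (a sketch; the charge matrices are the
# illustrative examples given in the text, not a derivation).

# Standard model: alpha_Z/alpha = 1/(sin^2 cos^2) ≈ 5.6 for sin^2 = .23
s2 = 0.23
ratio_sm = 1 / (s2 * (1 - s2))

# Second generation: alpha_Z,1/alpha_1 ≈ 28/6 gives sin^2(theta_W,1) ≈ .31
r = 28 / 6
s2_1 = 0.5 * (1 - math.sqrt(1 - 4 / r))

# Family charge matrices: unit norm, and zero trace = orthogonality with
# the electromagnetic charge matrix (1,1,1)/sqrt(3)
Q2 = [x / math.sqrt(2) for x in (0, 1, -1)]
Q3 = [x / math.sqrt(6) for x in (2, -1, -1)]
norm2, norm3 = sum(q * q for q in Q2), sum(q * q for q in Q3)
trace2, trace3 = sum(Q2), sum(Q3)
print(ratio_sm, s2_1)  # ≈ 5.65 and ≈ 0.311
```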

To sum up, the proposed model is quantitatively successful, allowing one to understand the different changes of the charge radius for proton and deuteron and to estimate the values of the electroweak couplings of the second generation of weak bosons, apart from the uncertainty due to the family charge matrix. Muon's magnetic moment anomaly and the decays of neutral B mesons allow one to test the model and perhaps fix the remaining two mixing angles.

See the article Could second generation of weak bosons explain the reduction of proton charge radius?

For background see the chapters New Physics Predicted by TGD: Part I and New Physics Predicted by TGD: Part II.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, March 13, 2017

What about actual realization of Lorentz invariant synchronization?

I wrote one day ago about the synchronization of clocks and found that clocks distributed over the hyperboloids of the light-cone assignable to CD can in principle be synchronized in a Lorentz invariant manner (see this). But what about the actual Lorentz invariant synchronization of the clocks? Could TGD say something non-trivial about this problem? I received an interesting link related to this (see this). The proposed theory deals with the fundamental uncertainty of clock time due to quantum-gravitational effects. There are of course several uncertainties involved since a quantum theory of gravity does not (officially) exist yet!

  1. An operationalist definition of time is adopted in the spirit of the empiricist tradition. Einstein was also an empiricist and talked about networks of synchronized clocks. Nowadays particle physicists do not talk much about them: symmetry based thinking dominates, and Special Relativity is taken as a postulate about symmetries.

  2. In quantum gravity the situation becomes even more complex. If the quantization attempt tries to realize quantum states as superpositions of 3-geometries, one loses time totally. If the GRT space-time is taken to be a small deformation of Minkowski space, one has a path integral, and classical solutions of Einstein's equations define the background.

    The difficult problem is the identification of the Minkowski coordinates unless one regards GRT as a QFT in Minkowski space. In the QFT picture one must, in astrophysical scales, consider solutions of Einstein's equations representing astrophysical objects. For the basic solutions of Einstein's equations the identification of the Minkowski coordinates is obvious, but in the general case, such as a many-particle system, this is no longer so. This is a serious obstacle in the interpretation of the classical limit of GRT and in its application to planetary systems.

What about the situation in TGD? The particle physicist inside me trusts symmetry based thinking and has been somewhat reluctant to fill space-time with clocks, but I am ready to start the job if necessary! Since I am lazy, I of course hope that Nature might have done this already, and the following argument suggests that this might be the case!
  1. Quantum states can be regarded as superpositions of space-time surfaces inside a causal diamond of the imbedding space H = M^4 × CP_2 in quantum TGD. This raises the question of how one can define a universal time coordinate for them. Some kind of absolute time seems to be necessary.

  2. In TGD the introduction of zero energy ontology (ZEO) and causal diamonds (CDs) as perceptive fields of conscious entities certainly brings in something new, which might help. CD is the intersection of future and past directed light-cones, analogous to a big bang followed by a big crunch. This is however only an analogy since CD represents only the perceptive field, not the entire Universe.

    The imbeddability of space-time surfaces to CD × CP_2 ⊂ H = M^4 × CP_2 allows the proper time coordinate a, a^2 = t^2 - r^2, near either CD boundary as a universal time coordinate, "cosmic time". On the a = constant hyperboloids Lorentz invariant synchronization is possible. The coordinate a is a kind of absolute time near a given boundary of CD representing the perceptive field of a particular conscious observer, and serves as a common time for all space-time surfaces in the superposition. Newton would not have been so wrong after all.

    Also the adelic vision involving number theoretic arguments selects a as a unique time coordinate. In the p-adic sectors of the adele, number theoretic universality (NTU) forces discretization since the coordinates of the hyperboloid consist of a hyperbolic angle and ordinary angles. p-Adically one can realize neither the angles nor their hyperbolic counterparts. This demands discretization in terms of roots of unity (phases) and roots of e (exponents of hyperbolic angles) inducing a finite-dimensional extension of p-adic number fields, in accordance with the finiteness of cognition. As a Lorentz invariant, a would be a genuine p-adic coordinate, which can in principle be continuous in the p-adic sense. Measurement resolution however discretizes also a.

    This discretization leads to tessellations of the a = constant hyperboloid, having an interpretation in terms of a cognitive representation in the intersection of the real and various p-adic variants of the space-time surface, with points having coordinates in the extension of rationals involved. There are two choices for a. The correct choice corresponds to the passive boundary of CD, unaffected in state function reductions.

  3. Clearly, the vision about space-time as a 4-surface of H and NTU show their predictive power. Even more, adelic physics itself might solve the problem of Lorentz invariant synchronization in terms of a clock network assignable to the nodes of the tessellation!

    Suppose that the tessellation defines a clock network. What could synchronization mean? Certainly strong correlations between the nodes of the network. Could the correlation be due to maximal quantum entanglement (maximal at least in the p-adic sense) so that the network of clocks would behave like a single quantum clock? A Bose-Einstein condensate of clocks, as one might say? Could quantum entanglement in astrophysical scales, predicted by TGD via the h_gr = h_eff = n×h hypothesis, help to establish synchronized clock networks even in astrophysical scales? Could Nature guarantee Lorentz invariant synchronization automatically?

    What would be needed is not only a 3-D lattice but also oscillatory behaviour in time. This is more or less a time crystal (see this and this)! Time crystal like states have been observed, but they require a feed of energy, in contrast to what Wilczek proposed. In the TGD Universe this would be due to the need to generate large h_eff/h = n phases, since the energy of states increases with n. In biological systems this requires metabolic energy feed. Can one imagine even a cosmic 4-D lattice for which there would be an analog of metabolic energy feed?

    I already have a model for tensor networks, and also here a appears naturally (see this). Tensor networks would correspond at the imbedding space level to tessellations of the hyperboloid t^2 - r^2 = a^2, analogous to 3-D lattices but with recession velocity taking the role of the quantized position of a lattice point. They would induce tessellations of the space-time surface: the space-time surface would go through the points of the tessellation (having also a CP_2 counterpart). The number of these tessellations is huge. Clocks would be at the nodes of these lattice like structures. Maximal entanglement would be a key feature of this network. It would make the clocks at the nodes one big cosmic clock.

    If astrophysical objects serving as clocks tend to be at the nodes of the tessellation, a quantization of cosmic redshifts is predicted! What is fascinating is that there is evidence for this: for the TGD based model see this and this! Maybe the dark matter fraction of the Universe has taken care of the Lorentz invariant synchronization so that we need not worry about it!
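The property the whole argument rests on - that the a = constant hyperboloids are Lorentz invariant, so a boosted synchronized network stays synchronized - can be demonstrated in a few lines (units with c = 1):

```python
import math

# A minimal check that a^2 = t^2 - r^2 is invariant under Lorentz boosts,
# so clocks on an a = constant hyperboloid stay on it after a boost.
def boost(t, x, v):
    g = 1 / math.sqrt(1 - v * v)   # gamma factor, c = 1
    return g * (t - v * x), g * (x - v * t)

t, x = 5.0, 3.0                    # a point on the hyperboloid a = 4
a = math.sqrt(t * t - x * x)
tb, xb = boost(t, x, 0.6)
ab = math.sqrt(tb * tb - xb * xb)
print(a, ab)  # both equal 4: the same hyperboloid
```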

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Sunday, March 12, 2017

Is Lorentz invariant synchronization of clocks possible?

I participated in an FB discussion with several anti-Einsteinians. As a referee I have expressed my opinion about numerous articles claiming that Einstein's special or general relativity contains a fatal error not noticed by anyone before. I have tried to tell that colleagues are extremely eager to find a mistake in the work of a colleague (unless they can silence the colleague), so that logical errors can be safely excluded: if something goes wrong, it is at the level of the basic postulates. In vain.

Once I had a long email discussion with a professor of logic who claimed to have found a logical mistake in the deduction of the time dilation formula. It was easy to find that he thought in terms of Newtonian space-time, which of course is in conflict with the relativistic view. The logical error was his, not Einstein's. I tried to tell this. In vain again.

This time I was asked to explain what is wrong with the 2-page article of Stephen Crothers (see this). This article was a good example of one's own logical error projected onto Einstein. The author assumed, besides the basic formulas for the Lorentz transformation, also a synchronization of clocks so that they show the same time everywhere (about how this is achieved, see this).

Even more: Crothers assumes that Einstein assumed this synchronization to be Lorentz invariant. Lorentz invariant synchronization of clocks is however not possible for the linear time coordinate of Minkowski space, as also Crothers demonstrates. Einstein was wrong! Or was he? No: Einstein of course did not assume Lorentz invariant synchronization!

The assumption that the synchronization of a clock network is invariant under Lorentz transformations is of course in conflict with SR: in a Lorentz boosted system the clocks are not in synchrony. This expresses just Einstein's basic idea about the relativity of simultaneity. The basic message of Einstein is misunderstood - the Newtonian notion of absolute time again!

The basic predictions of SR - time dilation and Lorentz contraction - do not depend on the model for the synchronization of clocks. Time dilation and Lorentz contraction follow extremely easily from the basic geometry of Minkowskian space-time.

Draw a system K and a system K' moving with constant velocity with respect to K. The t' and x' axes of K' make an angle smaller than π/2 with the t and x axes and lie in the first quadrant.

  1. Assume first that K corresponds to the rest system of the particle. You see that the projection of the segment (0,t') of the t'-axis to the t-axis is shorter than the segment (0,t'): time dilation.

  2. Take K to be the system of the stationary observer. Project the segment L = (0,x') of the x'-axis to a segment on the x-axis. It is shorter than L: Lorentz contraction.

There is therefore no need to build synchronized networks of clocks to deduce time dilation and Lorentz contraction. They follow from Minkowskian geometry.
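The two projection arguments can be made explicit with a boost (a toy check in units c = 1; the velocity v and rod length L below are arbitrary illustrative values):

```python
import math

# Time dilation and Lorentz contraction from the boost formulas (c = 1).
v, L = 0.8, 1.0
g = 1 / math.sqrt(1 - v * v)           # gamma = 5/3 for v = 0.8

def to_K(tp, xp):                      # inverse boost: K' coordinates -> K
    return g * (tp + v * xp), g * (xp + v * tp)

# Time dilation: one tick of a clock at rest at x' = 0 takes gamma units of t.
t1, _ = to_K(1.0, 0.0)

# Lorentz contraction: rod endpoints at x' = 0 and x' = L, sampled at
# events simultaneous in K (t'_A = v*L, t'_B = 0).
tA, xA = to_K(v * L, 0.0)
tB, xB = to_K(0.0, L)
assert abs(tA - tB) < 1e-12            # same K-instant
length = xB - xA                       # comes out as L/gamma
print(t1, length)
```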

This however raises a question: is it possible to find a system in which synchronization is possible in a Lorentz invariant manner? The quantity a^2 = t^2 - x^2 defines the proper time coordinate a along timelike geodesics as a Lorentz invariant time coordinate of the light-cone. The a = constant hyper-surfaces are now hyperboloids. If you have a synchronized network of clocks, its Lorentz boost is also synchronized. General coordinate invariance of course allows this choice of time coordinate.

For Robertson-Walker cosmologies with sub-critical mass density the time coordinate a is Lorentz invariant, so that one can have Lorentz invariant synchronization of clocks. General Coordinate Invariance allows infinitely many choices of the time coordinate, and the condition of Lorentz invariant synchronization fixes the time coordinate to cosmic time (or a function of it, to be precise). In my opinion this is a rather interesting fact.

What about TGD? In TGD space-time is a 4-D surface in H = M^4 × CP_2. a^2 = t^2 - r^2 defines a Lorentz invariant time coordinate a in the future light-cone M^4_+ ⊂ M^4, which can be used as a time coordinate also for space-time surfaces.

Robertson-Walker cosmologies can be imbedded as 4-surfaces to H = M^4 × CP_2. The empty cosmology would be just the light-cone M^4_+ imbedded in H by putting the CP_2 coordinates constant. If the CP_2 coordinates depend on the M^4_+ proper time a, one obtains more general expanding RW cosmologies. One can also have critical and over-critical cosmologies, for which Lorentz transformations are not isometries of the a = constant section. Also in this case clocks are synchronized in a Lorentz invariant manner. The duration of these cosmologies is finite: the mass density diverges after a finite time.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Saturday, March 11, 2017

Are viruses fragments of topological quantum computer code?

I was listening to a highly interesting talk about viruses in Helsinki by Dr. Matti Jalasvuori, a molecular biologist working as a researcher at the University of Jyväskylä (see this). He has published a book about viruses in Finnish, titled "Virus. Elämän synnyttäjä, kuoleman kylväjä, ajatusten tartuttaja" (roughly: "Virus. Begetter of life, sower of death, infector of thoughts") (see this).

I learned an extremely interesting new-to-me fact about viruses. They might be far from a mere nuisance: in the TGD Universe they could be quantum memes, short pieces of quantum computer code, wandering around and attaching to the existing quantum computer code represented by DNA! Replication of viruses would be replication of memes. If the infected organism survives the virus attack by taming the virus and making it part of its non-coding DNA, it will gain more strength! If my computer survives the updating of the operating system, it works better!

Some basic facts

Viruses are very small: the size scale is a few tens of nanometers. A virus contains a short piece of RNA or DNA coding for the virus, in particular for the protein shell around it, which the virus must have in the "non-living" state outside the host cell into which it can penetrate. Inside its host this shell melts, and the virus attaches to DNA and is able to replicate. The copies of the virus leave the host cell to search for their own host cells.

Usually viruses are regarded as a nuisance. But a new, more holistic vision is evolving about viruses and their actual role. Viruses have perhaps been present even before the cell existed in its present form; they might have been crucial for the emergence of life as we know it, and would be also now. The system would consist of various kinds of cells, not necessarily those of a single organism. They contain several kinds of DNA and RNA: the cell nucleus and mitochondria contain their own genomes; there are circular plasmids, and also viruses.

There is a continual exchange of information between cells, with viruses as one form of information exchange. In this framework the virus represents a meme, represented by the part of its DNA which does not code for the protein shell. This meme wants to replicate and must use the genetic machinery to achieve this. But does the virus do this only to replicate and produce more nuisance?

The organism manages to survive the virus attack if it is able to transform the virus so that it cannot replicate. One manner to achieve this would be the transformation of the DNA portion due to the attached virus DNA (possibly reverse transcribed from the RNA of the virus) to non-coding DNA, often referred to as "junk" DNA. Non-coding DNA includes both intragenic regions - introns - and intergenic regions containing for instance promoters and enhancers crucial for the control of gene expression as proteins (see this). Introns are portions of genes whose contribution to mRNA is spliced away before translation to proteins. The decomposition into introns and translated regions is dynamical, which gives rise to a rich spectrum of different translations of the gene.
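The dynamical decomposition can be illustrated with a toy computation (the exon sequences below are hypothetical, chosen only to count splice variants):

```python
from itertools import combinations

# Toy illustration of alternative splicing: a dynamical choice of which
# segments count as introns yields many mRNAs from a single gene.
exons = ["ATGGCC", "GGTACT", "TTCGAA", "CCATAG"]   # hypothetical segments

def splice(keep):
    # keep: indices of retained exons; the rest are spliced away as introns
    return "".join(exons[i] for i in sorted(keep))

# every subset retaining the first exon (carrying the start codon) is a variant
variants = {splice(c)
            for n in range(1, len(exons) + 1)
            for c in combinations(range(len(exons)), n)
            if 0 in c}
print(len(variants))  # 8 distinct mRNAs from one 4-exon gene
```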

In fact, most of the non-coding DNA might be due to viruses! The portion of non-coding DNA increases for species at higher evolutionary levels. For our species it is estimated to be 98 per cent! Most of our genome is "junk", as many biologists would still put it. But can this really be the case? One might think that the immune system would have invented some mechanism to prevent the infection of DNA by junk DNA. The size of the trash bin cannot be a measure of evolutionary level! It is also known that virus infections force the organism to change and in some cases to become a better survivor. Viruses would drive evolution.

One can speculate that during the very early period of evolution there were only viruses and proto-cells. There is no need for the latter to be coded by genes: self-organization can produce cell membrane like structures; soap films represent an example. DNA fragments could survive inside these proto-cells, but according to simulations done by the Jyväskylä group in which Matti Jalasvuori is working, evolution would eventually lead to the emergence of parasitic DNA strands, which would soon begin to dominate and kill the proto-cell.

Viruses might solve the problem. Viruses would attract DNA fragments and replicate with them to build a protein wall around the fragment, containing also a piece of DNA of the proto-cell. Viruses would leave the proto-cell before its death and find another proto-cell. Gradually the genome would be formed as viruses would steal pieces of DNA fragments from proto-cells. One step in the later evolution could be the elimination of the part of the virus coding for the protein shell and the use of the rest as protein coding DNA. For eukaryotes the transformation to non-coding DNA, including intronic and intergenic DNA, becomes possible.

Viruses as pieces of quantum computer code?

Computational thinking suggests that viruses might make possible the emergence of new biological program modules allowing the existing program modules coding for proteins to be used more effectively. The different splicings of mRNA, dropping some pieces away, would correspond to different manners of transforming DNA sequences to proteins. But what about the intragenic portions of DNA: are they just junk?

Could the non-coding DNA and viruses have a much deeper purpose of existence than mere replication? In the TGD Universe this kind of purpose is easy to imagine if the system formed by DNA - say the intragenic portions of DNA - and the nuclear membrane (or cell membrane) serves as a topological quantum computer. DNA codons would be connected to the lipids of the lipid layer of the cell nucleus by magnetic flux tubes carrying dark charged particles. These connections could exist also to the cell membrane and even to the cell membranes of other cells.

The braiding of the flux tubes would define the space-time realization of a quantum computer program. This would represent a new expression of DNA and would explain why so small differences between our DNA and that of our cousins give rise to so huge differences. What is important is that the genetic code would not be terribly important: it is the braiding that matters now. The realization as quantum computer programs would give rise to cultural evolution, the realization as proteins to biological evolution. There would be a transition from the level of genes to that of memes.
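A crude way to see that the temporal order of crossings itself defines a program is to project braids to permutations (this forgets the over/under information of real braids, so it is only a cartoon of the idea, not the topological quantum computation itself):

```python
from functools import reduce

# Cartoon of braiding: elementary crossings of n strands as permutations.
def crossing(i, n):
    """Elementary braid generator projected to a swap of strands i, i+1."""
    p = list(range(n))
    p[i], p[i + 1] = p[i + 1], p[i]
    return p

def compose(p, q):
    # apply q first, then p
    return [p[q[k]] for k in range(len(p))]

n = 3
braid1 = reduce(compose, [crossing(0, n), crossing(1, n)])
braid2 = reduce(compose, [crossing(1, n), crossing(0, n)])
print(braid1, braid2)  # different crossing orders, different "computations"
```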

Viruses would correspond to pieces of quantum computer code - memes. They would wander between cells, infecting them to get fused to the DNA. If the DNA is able to transform them to introns, it gets the code. Otherwise it dies. Infection is the necessary price for achieving meme replication. Living cells could be seen as quantum computer programs updating themselves continually. Sounds somehow familiar!

See the chapters DNA as topological quantum computer, Three new physics realizations of the genetic code and the role of dark matter in bio-systems, and More Precise TGD Based View about Quantum Biology and Prebiotic Evolution of "Genes and Memes".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, March 09, 2017

About double slit experiments of Dean Radin

Dean Radin and his team have done a very interesting experiment (see this and this) testing the idea that the observer induces state function reduction.

Experiment

The experiment is a modified double slit experiment. In a double slit experiment a laser beam arrives at the screen via two slits, and an interference pattern is generated as if the photons behaved like waves which become localized at the screen. If one adds detectors at the slits, either detector fires and detects the passing photon, and with optimal detection efficiency the interference pattern disappears.

The idea is to add a subject person (S) at a distance of two meters. S imagines measuring that the photon passes through either slit. One can say that S intends to add a "detector" to either slit or both of them so that a state function reduction selecting either slit occurs. This experiment differs from the experiments in which S tries to affect the ratio of the frequencies of 0s and 1s in a random series of bits: S does not try to force the photons to pass through either slit. There is a feedback represented as sound/yellow light, whose pitch/intensity codes for the amount of the reduction of the height of the peak. There are two kinds of participants: meditators and those who have no experience in meditation.

The results of the experiment are thoroughly discussed in the Youtube lecture of Radin (see this). In my opinion the results are amazing. In one experiment it was found that the height of the peak of the Fourier transform of the intensity distribution of the diffraction pattern was reduced. In a second experiment the depth of the trough of the distribution was reduced. As if the intention would, with some probability, induce the measurement selecting the photon path. The effect was small but appeared systematically for the group consisting of meditators. For persons without experience in meditation the effect averaged out; also in this case it was present in the beginning of the experiment, when the subject persons were not yet bored by the repetitive character of the experiment. The longer attention span of meditators could partially explain this.

An even more amazing finding was that in a variant of the experiment realized over the internet the results were also positive, although the persons intending to induce the measurement were far away from the apparatus.

Arguments of skeptics

The standard argument of the skeptic is that the statistics is poor, that the experiment is even a fraud, etc. One can however consider more refined and more imaginative objections. Let us digress from the usual behavior of the skeptic and assume that the effect was real.

If the meditators could induce the measurement by intention, one expects that also the experimenter could have done it. To how high a degree was the outcome due to the experimenters, and how much due to the meditators? The experimenter also had the theoretical expectation that meditators are better at inducing the slit detection. Could the wish that the theory is correct have caused a subconscious intention to perform the detection in the case of meditators, and not to perform it in the case of non-meditating subject persons?

In the case of the net experiment the situation becomes even more problematic. One can imagine that also in this case the intention of the experimenter could induce the detection - at least if the experimenter is near the system. Should the experimenters have spent the period of the experiments on Mars, or at least in a distant holiday resort? Experimenters studying remote mental interactions are usually not rich people, and presumably they did not do this.

The experimenter effect is well-known in parapsychology. Some experimenters are extremely successful. Could one think that they have strong intentional powers? Ironically, this would demonstrate the reality of paranormal effects of this kind, but in a manner that can never convince the skeptics. There is evidence for this kind of effect in the testing of new medicines: good results are obtained when the testers are enthusiastic and dream of a positive result; when they repeat the same tests after some years, the results are worse.

TGD based model

The challenge is to understand how S, imagining a measurement telling that the photon went through either slit, could realize this intention. What does the detection mean and what does it demand?

  1. The measurement should involve a state function reduction selecting between the slits, entangled with the observer. In principle it is enough to have an interaction of the photon in either slit localizing the path of the photon to that slit. It is enough that the photon interacts with charged particles in either slit with some probability. This measurement is of course not optimal since the interference diagram is only partially changed. Only some fraction of these measurements takes place and produces the single slit pattern, so that the observed pattern is a weighted average of the double slit and single slit patterns. In principle one can estimate the probability for the single slit pattern from the data.

  2. Quantum classical correspondence requires that in order for the intention to detect to be realized, one must have a physical connection from S to both slits, or at least to either of them. Also charged particles assignable to the connection should be involved to make the scattering of the photon possible. Also entanglement of the states "detector fires"/"detector does not fire" with corresponding states of some other system, say S, would be needed.
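The weighted-average remark in item 1 can be turned into a toy estimate: if a fraction p of the photons gets which-path detected, the fringe visibility drops to 1-p, so p can in principle be read off from the data. The two-beam model below is an idealization of mine, not Radin's actual analysis:

```python
import math

# Toy model: mixture of coherent double-slit fringes and a flat
# which-path-detected background, weighted by detection probability p.
def intensity(phi, p):
    coherent = 1 + math.cos(phi)   # idealized double-slit fringes
    incoherent = 1.0               # flat single-slit background (toy)
    return (1 - p) * coherent + p * incoherent

p_true = 0.1
imax = intensity(0.0, p_true)
imin = intensity(math.pi, p_true)
visibility = (imax - imin) / (imax + imin)
p_est = 1 - visibility             # recovers the detection probability
print(p_est)  # ≈ 0.1
```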

How could one realize these connections in TGD?

  1. In the TGD framework the magnetic flux tubes serve as correlates of entanglement and directed attention. To direct attention to a system means being connected with it by flux tubes. Flux tubes carry dark charged particles essential for the TGD view about quantum biology.

  2. Every system has U-shaped flux tubes emanating from it and acting as a kind of tentacles scanning the environment. As a U-shaped flux tube from system A encounters a similar flux tube from system B, a reconnection takes place if the quantized fluxes are the same. The outcome is a pair of flux tubes connecting A and B. The flux tube pair can carry Cooper pairs with the members of the pair at the two flux tubes. The photons from the laser could scatter from the charged particles.

  3. The particles at the flux tubes are dark with h_eff/h = n satisfying an additional condition implying that n is proportional to the mass of the charged particle, in turn implying that the cyclotron energies E_c = hbar_eff × eB/m are universal and assumed to correspond to energies in the range of visible and UV light.

    In order for a photon to scatter from the charged particles, it must have the same value of h_eff as the particles at the magnetic flux tubes emanating from S. Some fraction of the laser photons could satisfy this condition. Note that if perturbative quantum theory applies, the classical predictions are the same as the lowest order quantum predictions, so that h_eff makes itself visible only in higher orders, assuming that perturbation theory works when h_eff/h = n holds true. Unfortunately, it is not possible to estimate the probability that a photon enters the flux tube. Note that the probability depends also on the density of the flux tubes.
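The universality claim in item 3 is easy to check: if n is proportional to the mass, the mass cancels in E_c. The proportionality constant k and the field value below are illustrative assumptions, not values from the text:

```python
# Sketch of the universality claim: with hbar_eff = n*hbar and n
# proportional to the particle mass, E_c = hbar_eff*eB/m loses its mass
# dependence. SI units; k and B are illustrative assumptions.
hbar = 1.054571817e-34
e = 1.602176634e-19
B = 0.2e-4                 # ~0.2 Gauss, an endogenous-field-like value
k = 1e38                   # assumed proportionality n = k*m (illustrative)

def cyclotron_energy(mass):
    n = k * mass           # h_eff/h = n grows with mass
    return n * hbar * e * B / mass

E_e = cyclotron_energy(9.109e-31)   # electron
E_p = cyclotron_energy(1.673e-27)   # proton
print(E_e, E_p)            # equal: the same universal cyclotron energy
```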

The effect is reported also in net experiments, for which the distances can be long and there is no visual contact. Can one understand this?
  1. If quantum entanglement between A and B already exists, one can increase the distance without spoiling the entanglement. But how to achieve the entanglement if the systems are at a large distance from the beginning?

  2. The length of the magnetic flux tubes is not a problem. The size scale of the flux tube layers corresponding to the EEG frequency 7.8 Hz is the circumference of Earth. The condition that the size of the flux tube is at least of the order of the cyclotron wavelength λ for cyclotron photons at the flux tube implies that the length of the flux tube is of the order of the size scale of Earth for EEG frequencies.

    In fact, our magnetic bodies (MBs) could have much larger layers if biological rhythms have cyclotron frequencies as counterparts. The size scales could be of the order of the distance traveled by light during a lifetime, or even longer. This changes totally the view about the role of length scales in biology and consciousness. There is some evidence that the galactic day defines the natural rhythm for precognitive phenomena: precognitive phenomena tend to occur at galactic midday. Galactic cyclotron frequencies (the galactic magnetic field is of the order of nT) could correspond to bio-rhythms up to 12 hours.
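The scale argument in item 2 can be checked in one line: the wavelength of 7.8 Hz radiation is indeed close to the circumference of Earth.

```python
# Wavelength of the 7.8 Hz fundamental versus the circumference of Earth.
c = 2.998e8                    # speed of light, m/s
f = 7.8                        # Hz
wavelength = c / f             # ≈ 3.8e7 m
earth_circumference = 4.0e7    # m
print(wavelength / earth_circumference)  # ≈ 0.96
```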

In the net experiment the problem is how to generate the connection to the correct target. The same problem is encountered in the attempts to explain the claimed results of remote viewing experiments. Could the density of the flux tubes of the personal magnetic body (MB) be so high that the connection is generated with high enough probability? S receives data through the web. Could this help to build the desired connection?
  1. A skeptic would explain the reported positive result in the web experiments by saying that the results were actually induced by the intention of the experimenter, who was near the system. This might of course be the case.

  2. The first possibility is that an entanglement involving flux tubes is generated between the camera monitoring the system and the slits. The communication of the image from the camera to the computer builds another flux tube bridge. The radiation reflected from a satellite to the computer at Earth involves propagation along flux tubes. At the receiving end similar bridges are built. There is therefore a flux tube connection with the computer used by S, who generates the last piece of the connection. This kind of flux tube connection would exist between all communicating systems. Also the experimenters would belong to this entanglement network.

  3. MB has layers with a size scale of the order of the Earth size. Could it be able to meet the challenge by using the information coming from the web? Could the U-shaped flux tubes be so dense as to be able to build a contact with the experimental arrangement with high enough probability? If they are to represent a Maxwellian magnetic field in good approximation, they should be dense. What is important is that these flux tubes correspond to different space-time sheets for distinct observers: this is actually the basic distinction between the field concepts of Maxwell and TGD.

    Could it be that the feedback from S at her computer, via the net to the computer at the other end, generates quantum correlated events, and that this correlation has as correlates magnetic flux tubes connecting the distant systems?

  4. The hyper-imaginative option is that S can delegate the problem to the collective consciousness assignable to the magnetosphere of Earth, having all the engineering knowledge that Earth has! Could we be neurons of a gigantic brain of Mother Gaia, which would help S to realize their intention? Can a single neuron realize its intention on a distant neuron in the brain in a similar manner? Could some kind of resonance mechanism be involved? Could MB detect correlations between distant events and generate a flux tube connection and entanglement between these places? Could the brain do the same for neurons?

See the article About the double slit experiment of Dean Radin or the chapter TGD inspired view about remote mental interactions and paranormal of "TGD based view about living matter and remote mental interactions".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, March 08, 2017

About unitarity of twistor amplitudes

The first question is what one means by S-matrix in ZEO. I have considered several proposals for the counterparts of S-matrix. Originally U-matrix, M-matrix and S-matrix were introduced, but it seems that U-matrix is not needed.

  1. The first question is whether the unitary matrix is between zero energy states or whether it characterizes zero energy states themselves as time-like entanglement coefficients between positive and negative energy parts of zero energy states associated with the ends of CD. One can argue that the first option is not sensible since positive and negative energy parts of zero energy states are strongly correlated rather than forming a tensor product: the S-matrix would in fact characterize this correlation partially.

    The latter option is simpler and is natural in the proposed identification of conscious entity - self - as a generalized Zeno effect, that is as a sequence of repeated state function reductions at either boundary of CD, shifting also the boundary of CD farther away from the second boundary so that the temporal distance between the tips of CD increases. Each shift of this kind is a step in which a superposition of states with different distances between the boundaries results, followed by a localization fixing the active boundary and inducing a unitary transformation for the states at the original boundary.

  2. The proposal is that the proper object of study for a given CD is the M-matrix. M-matrix is a product of a hermitian square root of a diagonalized density matrix ρ with positive elements and a unitary S-matrix S: M = ρ^{1/2}S. The density matrix ρ could be interpreted in this approach as a non-trivial Hilbert space metric. Unitarity conditions are replaced with the conditions MM^† = ρ and M^†M = ρ. For a single step in the sequence of reductions at the active boundary of CD one has M → MS(ΔT) so that one has S → SS(ΔT). S(ΔT) depends on the time interval ΔT measured as the increase in the proper time distance between the tips of CD assignable to the step.
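As a toy numerical illustration (my own sketch, not from the source), one can check the relation MM^† = ρ for M = ρ^{1/2}S with a randomly chosen diagonalized density matrix ρ and unitary S, and verify that a reduction step M → MS(ΔT) preserves it:

```python
import numpy as np

# Toy check of M = rho^{1/2} S and M M^dagger = rho for a random
# diagonal density matrix rho and random unitary S. All concrete
# numbers are illustrative, not part of any TGD formalism.
rng = np.random.default_rng(0)
n = 4

p = rng.random(n)
rho = np.diag(p / p.sum())          # diagonalized density matrix, positive elements

# a random unitary S via QR decomposition of a complex matrix
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
S, _ = np.linalg.qr(A)

M = np.sqrt(rho) @ S                # M = rho^{1/2} S

assert np.allclose(M @ M.conj().T, rho)        # M M^dagger = rho
assert np.allclose(S @ S.conj().T, np.eye(n))  # unitarity of S

# one "reduction step" M -> M S(dT) preserves M M^dagger = rho
S_dT, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
assert np.allclose((M @ S_dT) @ (M @ S_dT).conj().T, rho)
```

The step M → MS(ΔT) leaves MM^† = ρ invariant simply because S(ΔT) is unitary and cancels against its adjoint.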

What does unitarity mean in the twistorial approach?
  1. In accordance with the idea that a scattering diagram is a representation of a computation, suppose that the deformations of space-time surfaces defining a given topological diagram as a maximum of the exponent of Kähler function are the basic objects. They would define different quantum phases of a larger quantum theory, regarded as a square root of thermodynamics in ZEO and analogous to the phases appearing also in QFTs. Unitarity would hold true for each phase separately.

    The topological diagrams would not play the role of Feynman diagrams in unitarity conditions although their vertices would be analogous to those appearing in Feynman diagrams. This would reduce the unitarity conditions to those for fermionic states at partonic 2-surfaces at the ends of CDs, actually at the ends of fermionic lines assigned to the boundaries of string world sheets.

  2. The unitarity conditions can be interpreted as stating the orthonormality of the basis of zero energy states assignable to a given topological diagram. Since 3-surfaces as points of WCW appearing as arguments of the WCW spinor field are pairs consisting of 3-surfaces at the opposite boundaries of CD, the unitarity condition would state the orthonormality of the modes of the WCW spinor field. It might even be that no mathematically well-defined inner product assignable to either boundary of CD exists, since such a product would not conform with the view provided by WCW geometry. Perhaps this approach might help in identifying the correct form of S-matrix.

  3. If only tree diagrams constructed using the 4-fermion twistorial vertex are allowed, the unitarity relations would be analogous to those obtained using only tree diagrams. They should express the discontinuity of T in S = 1 + iT along the unitary cut as Disc(T) = TT^†. T and T^† would be the T-matrix and its time reversal.

  4. The correlation between the structure of the fermionic scattering diagram and topological scattering diagrams poses very strong restrictions on allowed scattering reactions for given topological scattering diagram. One can of course have many-fermion states at partonic 2-surfaces and this would allow arbitrarily high fermion numbers but physical intuition suggests that for given partonic 2-surface (throat of wormhole contact) the fermion number is only 0, 1, or perhaps 2 in the case of supersymmetry possibly generated by right-handed neutrino.

    The number of fundamental fermions both in initial and final states would be finite for this option. In a quantum field theory with only massive particles the total energy of the final state poses an upper bound on the number of particles in the final state. When massless particles are allowed there is no upper bound. Now the complexity of the partonic 2-surface poses an upper bound on the number of fermions.

    This would dramatically simplify the unitarity conditions but might also make it impossible to satisfy them. The finite number of conditions would be in the spirit of the general philosophy behind the notion of hyper-finite factor. The larger the number of fundamental fermions associated with the state, the higher the complexity of the topological diagram. This would conform with the idea about quantum classical correspondence (QCC). One can make non-trivial conclusions about the total energy at which the phase transitions changing the topology of the space-time surface defined by a topological diagram must take place.
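The tree-level cut condition Disc(T) = TT^† follows from unitarity of S = 1 + iT alone, and is easy to verify numerically. A small sketch (illustrative, not from the source):

```python
import numpy as np

# For any unitary S = 1 + iT, unitarity S S^dagger = 1 forces
# Disc(T) := -i (T - T^dagger) = T T^dagger.
rng = np.random.default_rng(1)
n = 5

A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
S, _ = np.linalg.qr(A)              # random unitary S
T = (S - np.eye(n)) / 1j            # T read off from S = 1 + iT

lhs = -1j * (T - T.conj().T)        # discontinuity of T across the cut
rhs = T @ T.conj().T
assert np.allclose(lhs, rhs)
```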

See the article About twistor lift of TGD.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, March 06, 2017

Kerr effect, breaking of T symmetry, and Kähler form of M4

I encountered in Facebook (thanks to Ulla) a link to a very interesting article. Here is the abstract.


We prove an instance of the Reciprocity Theorem that demonstrates that Kerr rotation, also known as the magneto-optical Kerr effect, may only arise in materials that break microscopic time reversal symmetry. This argument applies in the linear response regime, and only fails for nonlinear effects. Recent measurements with a modified Sagnac Interferometer have found finite Kerr rotation in a variety of superconductors. The Sagnac Interferometer is a probe for nonreciprocity, so it must be that time reversal symmetry is broken in these materials.

I had to learn some basic condensed matter physics. The magneto-optic Kerr effect occurs when a circularly polarized plane wave - often with normal incidence - reflects from a sample with a planar boundary. In the magneto-optic Kerr effect there are many options depending on the relative directions of the reflection plane (incidence is not normal in the general case, so that one can talk about a reflection plane) and the magnetization. Also the incoming polarization can be linear or circular. Reflected circularly polarized beams suffer a phase change in the reflection: as if they spent some time at the surface before reflecting. Linearly polarized light reflects as elliptically polarized light.

The Kerr angle θK is defined as half of the difference of the phase angle increments caused by reflection for oppositely circularly polarized plane wave beams. As the name tells, the magneto-optic Kerr effect is often associated with magnetic materials.
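The definition can be written out as a two-line sketch; the complex reflection coefficients below are made-up numbers, used only to illustrate the definition in the text:

```python
import numpy as np

# Kerr angle as half the difference of the reflection phase shifts
# for the two circular polarizations. r_plus and r_minus are
# illustrative values, not measured data.
r_plus = 0.8 * np.exp(1j * 0.30)    # reflection coefficient, one circular polarization
r_minus = 0.8 * np.exp(1j * 0.28)   # reflection coefficient, opposite polarization

theta_K = 0.5 * (np.angle(r_plus) - np.angle(r_minus))
assert np.isclose(theta_K, 0.01)    # (0.30 - 0.28)/2 rad
```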

The Kerr effect has however been observed also for high Tc superconductors, and this has raised controversy. As a layman in these issues I can naively wonder whether the controversy is created by the expectation that there are no magnetic fields inside the superconductor. Anti-ferromagnetism is however important for high Tc superconductivity. In the TGD based model for high Tc superconductors the supracurrents would flow along pairs of flux tubes, with the members of S=0 (S=1) Cooper pairs at parallel flux tubes carrying magnetic fields with opposite (parallel) magnetic fluxes. Therefore the magneto-optic Kerr effect could be in question after all.

The author claims to have proven that the Kerr effect in general requires breaking of microscopic time reversal symmetry. Time reversal symmetry breaking (TRSB) caused by the presence of a magnetic field in the case of unconventional superconductors is explained nicely here. A magnetic field is required. The magnetic field is generated by a rotating current, and by the right-hand rule time reversal changes the direction of the current and also of the magnetic field. For spin 1 Cooper pairs the analog of magnetization is generated, and this leads to T breaking.

This result is very interesting from the point of view of TGD. The reason is that the twistorial lift of TGD requires that the imbedding space M4× CP2 has a Kähler structure in a generalized sense. M4 has the analog of a Kähler form, call it J(M4). J(M4) is assumed to be self-dual and covariantly constant, as is also the CP2 Kähler form, and it contributes to the Abelian electroweak U(1) gauge field (electroweak hypercharge) and therefore also to the electromagnetic field.

J(M4) implies breaking of Lorentz invariance since it defines a decomposition M4= M2× E2 implying a preferred rest frame and a preferred spatial direction identifiable as the direction of the spin quantization axis. In zero energy ontology (ZEO) one has a moduli space of causal diamonds (CDs) and therefore also a moduli space of Kähler forms, and the breaking of Lorentz invariance cancels. Note that a similar Kähler form is conjectured in quantum group inspired non-commutative quantum field theories, where the problem is the breaking of Lorentz invariance.

What is interesting is that the action of P, CP, and T on the Kähler form transforms it from self-dual to anti-self-dual form and vice versa. If J(M4) is self-dual, as is also J(CP2), all these 3 discrete symmetries are broken in arbitrarily long length scales. On basis of the tensor property of J(M4) one expects P: (J(M2), J(E2)) → (J(M2), -J(E2)) and T: (J(M2), J(E2)) → (-J(M2), J(E2)). Under C one has (J(M2), J(E2)) → (-J(M2), -J(E2)). This gives CPT: (J(M2), J(E2)) → (J(M2), J(E2)) as expected.
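The sign bookkeeping above is easy to verify mechanically. The following sketch (my own transcription of the transformation rules just quoted) checks that CPT acts trivially on (J(M2), J(E2)) while P, C, and T separately do not:

```python
# Sign action of the discrete symmetries on the pair (J(M2), J(E2)),
# transcribed directly from the transformation rules in the text.
def P(j):
    return (j[0], -j[1])    # P flips the E2 part

def T(j):
    return (-j[0], j[1])    # T flips the M2 part

def C(j):
    return (-j[0], -j[1])   # C flips both

J = (1, 1)                  # signs of the self-dual (J(M2), J(E2))

assert C(P(T(J))) == J                           # CPT acts trivially
assert P(J) != J and T(J) != J and C(J) != J     # each symmetry alone is broken
```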

One can imagine several consequences at the level of fundamental physics.

  1. One implication is a first principle explanation for the mysterious CP violation and matter antimatter asymmetry not predicted by the standard model (see the recent blog post).

  2. A new kind of parity breaking is predicted. This breaking is separate from electroweak parity breaking and perhaps closely related to the chiral selection in living matter.

  3. The breaking of T might in turn relate to the Kerr effect if the argument of the authors is correct. It could occur in high Tc superconductors in macroscopic scales. Also large heff/h=n scaling up quantum scales in high Tc superconductors could be involved, as with the breaking of chiral symmetry in living matter. Strontium ruthenate, for which Cooper pairs are in S=1 state, is indeed found to exhibit TRSB (for references and explanation see this).

    In the TGD based model of high Tc superconductivity the members of the Cooper pair reside at parallel magnetic flux tubes, with spins in the direction of the magnetic field. The magnetic fields, and thus the spin components in this direction, change sign under T, causing TRSB. The breaking of T for S=1 Cooper pairs is not spontaneous but would occur at the level of the laws of physics: the time reversed system finds itself in the original self-dual J(M4) rather than in (-J(M2), J(E2)) demanded by T symmetry.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Key ideas related to the twistor lift of TGD

The generalization of the twistor approach from M4 to H=M4× CP2 involves the replacement of the twistor space of M4 with that of H. M8-H duality allows also an alternative approach in which one constructs the twistor space of octonionic M8. Note that M4, E4, S4, and CP2 are the unique 4-D spaces allowing a twistor space with Kähler structure. This makes TGD essentially unique.

Ordinary twistor approach has two problems.

  1. It applies only if the particles are massless. In TGD particles are massless in the 8-D sense, but the projection of the 8-momentum to a given M4 is in general massive in the 4-D sense. This solves the problem. Note that the 4-D M4 momenta can be light-like for a suitable choice of M4⊂ H. There exists even a choice of M2 for which this is the case. For a given M2 the choices of quaternionic M4 are parametrized by CP2.

  2. The twistor approach has a second problem: it works nicely in signature (2,2) rather than in the signature (1,3) of Minkowski space. For instance, the twistor Fourier transform cannot be defined as an ordinary integral. The very nice results by Nima Arkani-Hamed et al about the positive Grassmannian follow only in the signature (2,2).

    One can always find M2⊂ M8 in which the 8-momentum lies and is therefore light-like in the 2-D sense. Furthermore, the light-like 8-momenta and thus 2-momenta are predicted already at the classical level to be complex. M2 as a subspace of the momentum space M8 effectively extends to its complex version with signature (2,2)!

    At the classical space-time level the presence of a preferred M2 reflects itself in the properties of massless extremals with M4= M2× E2 decomposition such that the light-like momentum is in M2 and the polarization in E2.

    4-D conformal invariance is restricted to its 2-D variant in M2. The twistor space of M4 reduces to that of M2, which is SO(2,2)/SO(2,1)=RP3: the 3-D real variant of the twistor space CP3. Complexification of the light-like momenta replaces RP3 with CP3.
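As a quick sanity check on the coset identification (a standard dimension count, not from the source):

```latex
\dim \frac{SO(2,2)}{SO(2,1)}
  = \dim SO(2,2) - \dim SO(2,1)
  = 6 - 3 = 3
  = \dim_{\mathbb{R}} RP^{3},
\qquad
\dim_{\mathbb{C}} CP^{3} = 3 .
```

So RP3 has three real dimensions, matching the three complex dimensions of CP3 reached after complexification.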

Light-like M8-momenta are in question but they are not arbitrary.
  1. They must lie in some quaternionic plane containing a fixed M2, which corresponds to the plane spanned by the real octonion unit and some imaginary unit. This condition is analogous to the condition that the space-time surfaces as preferred extremals in M8 have quaternionic tangent planes.

  2. In particular, the wave functions can be expressed as products of plane waves in M2, wave functions in the plane of transverse momenta in E2⊂ M4, where M4 is the quaternionic plane containing M2, and wave functions in the space of the choices of M4, which is CP2. One obtains exactly the same result in M4× CP2 if delocalization in transversal E2 momenta, analogous to that of quarks inside hadrons, takes place. The transversal wave function can also concentrate on a single momentum value.

    It should be noticed that quaternionicity forces number theoretical spontaneous compactification. It would be very clumsy to realize directly the condition that the allowed 8-momenta are quaternionic. Instead, by going to the "spontaneously compactified" M4× CP2 description, everything becomes easy.

  3. What is amusing is that the geometric twistor space M4× S2 of M4, having bundle projections to M4 and to the ordinary twistor space, is nothing but the space of choices of causal diamonds with preferred M2 and fixed rest frame (time axis connecting the tips). The M4 point fixes the tip of the causal diamond (CD) and S2 the spatial direction fixing the M2 plane. In the case of CP2 the point of the twistor space fixes a point of CP2 as the analog of the tip of CD: the complex CP2 coordinates have their origin at this point. The point of the twistor sphere of SU(3)/U(1)× U(1) codes for the selection of the quantization axes for hypercharge Y and isospin I3. The corresponding subgroup U(1)× U(1) affects only the phases of the preferred complex coordinates transforming linearly under SU(2)× U(1).

    At the level of momentum space, the M4 twistor codes for the momentum and helicity of the particle. The CP2 twistor codes for the selection of M4⊂ M8 and for em charge as the analog of helicity. Now one actually has a wave function for the selections of the CP2 point, labelled by the color numbers of the particle.

Number theoretical vision inspires the idea that scattering amplitudes define representations for algebraic computations leading from an initial set of algebraic objects to a final set of objects. If so, the amplitudes should not depend on how the computation is done, and there should exist a minimal computation, possibly represented by a tree diagram. There would be no summation over the equivalent diagrams: one can choose any one of them, and the best choice is the simplest one.

To develop this idea one must understand what scattering diagrams are. The scattering diagrams involve two kinds of lines.

  1. There are topological "lines" corresponding to light-like orbits of partonic 2-surfaces playing the role of the lines of Feynman diagrams. The topological diagram formed by these lines gives boundary conditions for the 4-surface: at these light-like partonic orbits the Euclidian space-time region changes to a Minkowskian one. Vertices correspond to 2-surfaces at which these 3-D lines meet, just like lines meet at vertices in the case of Feynman diagrams.

  2. There are also fermion lines assignable to fundamental fermions serving as building bricks of elementary particles. They correspond to the boundaries of string world sheets at the orbits of partonic 2-surfaces. Fundamental fermion-fermion scattering takes place via classical interactions at partonic 2-surfaces: there is no 4-vertex in the usual sense (this would lead to non-renormalizable theory).

    The conjecture is that the 4-vertex is described by a twistor amplitude fixed apart from an overall scaling factor. Fermion lines run along parton orbits. Boson lines correspond to pairs of fermion and antifermion at the same parton orbit.

    As a matter of fact, the situation is more complex for elementary particles since they correspond to pairs of wormhole contacts connected by monopole magnetic flux tubes, and each wormhole contact has two wormhole throats - partonic 2-surfaces.

For the idea about diagrams as representations of computations to make sense, there should exist moves which allow one to glide a 4-fermion vertex and the associated flux tubes along the topological line of the scattering diagram to the vicinity of the second end of the loop. A second move should allow one to snip away the loop. Is this possible? The possibility to find an M2 for which the momentum is light-like is central in the argument claiming that this is indeed possible.

The basic problem is that the kinematics of the 4-fermion vertices need not be consistent with the gliding of a vertex past another one, so that this move is not possible.

  1. Clearly, one must assume something. If all momenta at the vertices along a fermion line are in the same M2, then they are parallel as light-like M2-momenta. Kinematical conditions allow the gliding of two vertices of this kind past each other, as is easy to show. The scattering would mean only a redistribution of parallel light-like momenta in this particular M2.

    This kind of scattering would be more general than the scattering in integrable quantum field theories in M2: there the scattering does not affect the momenta but induces phase shifts - particles spend some time in the vertex before continuing. What is crucial for having non-trivial scatterings is that in a general frame M2⊂ M4⊂ M8 the momenta would be massive and also different.

  2. The condition would be that all four-fermion vertices along a given fermion line correspond to the same preferred M2. The M2s can differ only for fermionic sub-diagrams which do not have common vertices.

    Note however that tree diagrams for which lines can have different M2s can give rise to non-trivial scattering. One can take a tree diagram and assign to each internal line a network with the same M2 as the internal line has. It is quite possible that for general graphs allowing different M2s in internal lines and loops, the reduction to a tree graph is not possible.

    At least this idea could define precisely what the equivalence of diagrams means, if vertices in which the M2s can differ are allowed. One can of course argue that there is no deep reason for not allowing more general loopy graphs in which the incoming lines can have arbitrary M2s.

One implication is that the BCFW recursion formula allowing one to generate loop diagrams from those with a lower number of loops must be trivial in TGD - this of course only if one accepts that the BCFW formula makes sense in TGD. This requires that the entangled removal appearing as the second term on the right hand side of the BCFW formula and adding a loop gives zero. One can develop an argument for why this must be the case in the TGD framework. Also the term corresponding to the removal of a BCFW bridge should give zero, so that allowed diagrams cannot have BCFW bridges.

In the TGD Universe allowed diagrams would represent closed objects in what one might call BCFW homology. The operation appearing on the right hand side of the BCFW recursion formula is indeed a boundary operation, whose square by definition gives zero.

For background see the article Some questions related to the twistor lift of TGD.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Sunday, March 05, 2017

Issues related to the precise formulation of twistor lift of TGD

During the last two weeks I have worked hard to deduce the implications of some observations relating to the twistor lift of Kähler action. Some of these observations were very encouraging, but some were a cold shower forcing a thorough criticism of the first view about the details of the twistor lift of TGD.

New formulation of Kähler action

The first observation was that the correct formulation of 6-D Kähler action in the framework of adelic physics implies that the classical physics of TGD does not depend on the overall scaling of Kähler action.

  1. Kähler form has dimension length squared. The Kähler form projected to the space-time surface defines a Maxwell field, which should however be dimensionless. I had assumed that one can just divide the Kähler form by CP2 radius squared to achieve this. The skeptic realizes immediately that this parameter is a free coupling parameter, although the CP2 radius is a good guess for it. The correct formulation of the action principle must keep the Kähler form dimensional and divide the Kähler action by a dimensional parameter with dimension four in length units: this is a new coupling-constant-type parameter besides αK. The classical field equations do not depend at all on this scaling parameter. The exponent of the action defining the vacuum functional however depends on it.

  2. What is so nice is that all couplings disappear from the classical field equations in the new formulation, and number theoretical universality (NTU) is automatically achieved. In particular, the preferred extremals need not be minimal surface extremals of Kähler action to achieve this, as in the original proposal for the twistor lift. It is enough that they are so asymptotically - near the boundaries of CDs, where they behave like free particles. In the interior they couple to the Kähler force. This also nicely conforms with the physical idea that they are 4-D generalizations of the orbits of particles in the induced Kähler field.

  3. I also realized that the exchange of conserved quantities between Euclidian and Minkowskian space-time regions is not possible in the original version of the twistor lift. This does not sound physical: quantal interactions should have classical correlates. The reason for the catastrophe is simple. The metric determinant appearing in the action integral is identified as g4^{1/2}. In Minkowskian regions it is purely imaginary but in Euclidian regions real. Boundary conditions lead to a decoupling of the Minkowskian and Euclidian regions.

    This forced a return to an old nagging question: whether one should use a) g4^{1/2} (imaginary in Minkowskian regions) or b) |g4^{1/2}| in the action. For real αK option a) is unavoidable, and the need to have an exponent of imaginary action in Minkowskian regions indeed motivated option a).

    For complex αK, forced by other considerations, the situation however changes - something that I had not noticed. Complex αK allows |g4^{1/2}|. The study of so called CP2 extremals, assuming that 1/αK = s, where s = 1/2 + iy is a zero of Riemann zeta, shows that NTU is realized in the sense that the exponent of the action exists in some extension of rationals, provided that the imaginary part y of the zero of zeta satisfies y = qπ, q rational, implying that exp(iy) is a root of unity. This possibility has been considered already earlier. This is a highly non-trivial hypothesis about the zeros of zeta.

  4. Option b) allows the transfer of conserved quantities between Minkowskian and Euclidian regions as required. Option a) also predicts separate conservation of Noether charges for the Kähler action and the volume term. This can make sense only asymptotically. Therefore only option b) remains under serious consideration. In the new picture the interaction region in particle physics experiments corresponds to the region where there is coupling between the volume and Kähler terms: external particles correspond to minimal surface extremals of Kähler action, and all known extremals indeed are such.

Realizing NTU

The independence of the classical physics on the scale of the action in the new formulation inspires a detailed discussion of the number theoretic vision.

  1. Quantum Classical Correspondence (QCC) breaks the invariance with respect to the scalings via fermionic anti-commutation relations and NTU can fix the spectrum of values of the over-all scaling parameter of the action. Fermionic anticommutation relations introduce the constraint removing the projective invariance.

  2. One ends up with a condition guaranteeing the NTU of the action exponential exp(S). One must have S = q1 + i q2 π, qi rational. This guarantees that exp(S) is in some extension of rationals and therefore number theoretically universal. S itself is however not number theoretically universal. The overall scaling parameter of the action, constrained by fermionic anticommutations, must have a value allowing one to satisfy the condition.

  3. The vision about the scattering amplitude as a representation of a computation however suggests that the action exponential disappears from the twistorial scattering amplitudes altogether, as it does in quantum field theories. This would require that one defines the scattering amplitude - actually a zero energy state - by allowing the functional integral only around a single maximum of the action. Whether this makes sense is not obvious, but ZEO might allow it. I have not yet seriously discussed the constraints from unitarity - or its generalization to ZEO - and these constraints might force a sum over several maxima.

    This looks at first like a catastrophe, but the scattering amplitudes depend on the preferred extremal in an implicit manner. For instance, heff/h= n depends on the extremal. Also quantum classical correspondence (QCC), realized as boundary conditions stating that the classical Noether charges are equal to the eigenvalues of fermionic charges in Cartan algebra, brings in the dependence of the scattering amplitudes on the preferred extremal. Furthermore, the maxima of Kähler function could correspond to the points of WCW for which the WCW coordinates are in the extension of rationals: if the exponent of the action is such a coordinate, this could be the case.

    One could see the situation in two manners: the standard view, in which preferred extremals are maxima of Kähler function whose exponentials however disappear from the scattering amplitudes, and the number theoretic view, in which maxima correspond to WCW points in the intersection of the real and various p-adic WCWs, defining a cognitive representation at the level of WCW similar to that provided by the discretization at the level of the space-time surface. Maybe there is a maximization of cognitive information (a classical correlate for NMP): say, in the sense that the number of points in the intersection of the real and p-adic space-time surfaces is maximal for the preferred extremals.

    This kind of connection would mean a deep connection between cognition and sensory perception, between p-adic physics and real physics, and between the geometric and number theoretic views about physics.
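The NTU condition S = q1 + i q2 π above can be checked numerically: exp(S) factorizes into e^{q1} times a root of unity, both of which lie in a finite extension of rationals for rational qi. A sketch with illustrative values:

```python
import cmath
import math
from fractions import Fraction

# NTU condition from the text: for S = q1 + i*q2*pi with rational q1, q2,
# exp(S) = e^{q1} * exp(i*q2*pi), where the phase factor is a root of unity.
q1 = Fraction(2, 3)     # illustrative rational values
q2 = Fraction(3, 5)

z = cmath.exp(1j * math.pi * float(q2))        # the phase factor exp(i*q2*pi)

# exp(i*q2*pi) is a (2*denominator)-th root of unity: here z**10 = 1
assert cmath.isclose(z ** (2 * q2.denominator), 1)

# the modulus of exp(S) is e^{q1}, carried entirely by the real part of S
S = float(q1) + 1j * math.pi * float(q2)
assert math.isclose(abs(cmath.exp(S)), math.exp(float(q1)))
```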

Trouble with cosmological constant

Also an unpleasant observation about cosmological constant forces to challenge the original view about twistor lift.

  1. The original vision for the p-adic evolution of cosmological constant assumed that αK(M4) and αK(CP2) are different for the twistor lift. This is definitely a somewhat ad hoc choice but in principle possible. If one assumes that the Kähler form has also an M4 part J(M4), this option becomes very artificial. In fact, if one assumes that the twistor space M4× S2 associated with M4 allows Kähler structure, J(M4) must be non-vanishing and is completely fixed. It is now clear that J(M4) allows one to understand both CP breaking and parity breaking (in particular chiral selection in living matter). The introduction of a moduli space for CDs means also the introduction of a moduli space for the choices of J(M4), which is nothing but the twistor space T(M4)!

  2. One indeed finds in the more geometric formulation of the 6-D Kähler action that a single value of αK is the only natural choice. The nice outcome guaranteeing NTU is that the preferred extremals do not depend on the coupling parameters at all. In the original version one had to assume that the extremals of Kähler action are also minimal surfaces in order to guarantee this.

  3. One however loses the original proposal for the p-adic length scale evolution of cosmological constant explaining why it is so small in cosmological scales. The solution to the problem would be that the entire 6-D action, decomposing into 4-D Kähler action and a volume term, is identified in terms of the cosmological constant. The cancellation of the Kähler electric contribution against the remaining contributions would explain why the cosmological constant is so small in cosmological scales and would also allow one to understand the p-adic coupling constant evolution of cosmological constant.

    One important implication is that there are two kinds of string like objects: those for which the string tension is very large and which are analogous to the strings of superstring theories, and those for which the string tension is small due to the cancellation between Kähler action and volume term. Strings of the latter kind appear in all scales and they also mediate the gravitational interaction. Also hadronic strings are strings of this kind, as are elementary particles as string like objects. In this framework one additional reason for the superstring tragedy becomes manifest: superstrings predict only the strings giving rise to a gigantic cosmological constant.

To sum up, it is fair to say that the twistor lift of TGD has now achieved a rather stable form. There are still a lot of details to be polished, but this requires only hard work and a lot of counter argumentation. What is so fascinating is that the formalism now produces rather precise predictions and detailed fresh insights into the basic problems of the standard model. The problems of cosmological constant and CP breaking represent only two examples in this respect. There is also an explicit proposal for twistorial four-fermion amplitudes, and one can understand how the QFT picture with a central role played by loops emerges although there are no loops at the fundamental level: when particles are approximated by point like objects, some tree diagrams are contracted to loop diagrams. Consider only an exchange between two particle lines replaced with a single line in this pointlike approximation.

See the article About twistor lift of TGD.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.