Wednesday, March 28, 2018

Rydberg polarons as a support for TGD view about space-time

I learned about a very weird-looking phenomenon involving a Bose-Einstein condensate (BEC) of strontium atoms at an ultralow temperature T = 1.5×10^-7 K, corresponding to a thermal energy of order 10^-11 eV. Experimenters create Rydberg atoms by applying a laser beam to the BEC of strontium atoms: the second valence electron of Sr is kicked to an orbital with a very large classical radius characterized by the principal quantum number n. This leads to the formation of "molecules" of BEC atoms inside the orbit of the Rydberg electron - Rydberg polarons, as they are called.

The phenomenon is an excellent challenge for TGD, and in this article I construct a TGD inspired model for it. The model relies on the notion of many-sheeted space-time distinguishing TGD from Maxwellian electrodynamics. The model assumes a pair of magnetic flux tubes between the electrons of opposite spin associated with the Rydberg atom. The flux tubes are parallel space-time sheets in M4× CP2 (with the same M4 projection) and are not distinguishable at the QFT limit of TGD. They carry monopole fluxes with opposite directions and are present in the region between the spins, where the sum of the dipole fields vanishes in Maxwellian theory. The members of the s2 electron pairs of BEC atoms are assumed to topologically condense at different parallel flux tubes of the pair, minimizing the ground state energy in this manner.

The model makes predictions surprisingly similar to those of the experimenters' model based on a Born-Oppenheimer potential, but there are also differences. An interesting possibility is that if the generation of the Rydberg state involves time reversal, as zero energy ontology suggests, then the energy spectrum involved can be positive rather than consisting of bound states. Also a possible interpretation for the "endogenous" magnetic fields central in TGD inspired biology emerges.

See the article Rydberg polarons as a support for TGD view about space-time or the chapter Quantum Model for Bio-Superconductivity: II.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, March 22, 2018

Connection between quaternionicity and causality

The notion of quaternionicity is a central element of M8-H duality. At the level of momentum space it means that 8-momenta - which by M8-H duality correspond to 4-momenta at the level of M4 and color quantum numbers at the level of CP2 - are quaternionic. Quaternionicity means that the time component of the 8-momentum, which is parallel to the real octonion unit, is non-vanishing. The 8-momentum itself must be time-like, in fact light-like. In this case one can always regard the momentum as a momentum in some quaternionic subspace. Causality requires a fixed sign for the time component of the momentum.

It must however be noticed that the 8-momentum can be complex: also the 4-momentum can be complex at the level of M4× CP2 already classically. A possible interpretation is in terms of a decay width as part of the momentum, as is indeed the case in the phenomenological description of unstable particles.

Remark: At the space-time level either the tangent space or the normal space of the space-time surface in M8 is quaternionic (equivalently associative) in the regions having an interpretation as external particles arriving inside the causal diamond (CD). Inside the CD this assumption is not made. The two options correspond to space-time regions with Minkowskian and Euclidian signatures of the induced metric.

Could one require that the quaternionic momenta form a linear space with respect to the octonionic sum? This is the case if the energy - that is, the time-like part parallel to the real octonionic unit - has a fixed sign. The sum of the momenta is quaternionic in this case, since the sum of light-like momenta is in general time-like and in special cases light-like. If momenta with opposite signs of energy are allowed, the sum can become space-like, and the sum of momenta is co-quaternionic.

This result is technically completely trivial as such but has a deep physical meaning. Quaternionicity at the level of 8-momenta implies the standard view about causality: only time-like, or at most light-like, momenta and a fixed sign of the time component of momentum.

Remark: The twistorial construction of the S-matrix in the TGD framework, based on a generalization of twistors, leads to a proposal allowing a unitary S-matrix with vanishing loop corrections and a number-theoretically determined discrete coupling constant evolution. Also the problems caused by non-planar diagrams disappear, and one can have particles which are massive in the M4 sense.

The proposal boils down to the condition that the 8-momenta of many-particle states are light-like (in the complex sense). One has however a superposition over states with different directions of the projection of the light-like 8-momentum to E4 in M8 = M4× E4. At the level of CP2 one has a massive state but in a color representation for which color spin and hypercharge vanish while the color Casimir operator can have a value of the order of the mass squared of the state. This prediction sharply distinguishes TGD from QCD.

See the article The Recent View about Twistorialization in TGD Framework or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

TGD based model for graphene superconductivity

A highly interesting new effect associated with graphene is discussed in a Phys.Org article (see this). The original research articles by Cao et al are published in Nature. There is also a popular article in Nature (see this). What is found is that a bilayer formed by parallel graphene sheets becomes superconducting for critical values of the twist angle θ. The largest critical value is θ=1.1 degrees.

Basic observations

Consider first the basic facts. The surprising discovery was that graphene becomes an unconventional superconductor at temperature 1.7 K. It was already discovered earlier that the coupling of graphene to a superconductor can also make graphene superconducting.

  1. The system studied consists of two graphene (see this) layers twisted by angle θ with respect to each other (a rotation of the second sheet by angle θ around the axis normal to the sheets). For a generic value of θ the graphene layers behave as separate conductors. For certain critical twist angles below 1.1 degrees the two-layered system however behaves like a single unit and a Mott insulator (see this): this is due to the increase of the conduction band gap. In an applied electric field the system becomes a superconductor. The electric field provides the energy needed to kick the current carriers to the conduction band, which for Mott insulators has a higher energy than for the corresponding conductor: at the top of the band Cooper pairs are formed as in ordinary superconductors.

  2. A kind of Moiré effect (see this) is involved. The twist creates a superlattice with a larger unit cell, and the electrons associated with periodically occurring pairs of C atoms above each other give rise to a narrow band where the superconducting electrons reside. The electric field would kick the electrons to this band.

  3. There are intriguing analogies with high-Tc superconductivity. The electron density as a function of temperature has a pattern similar to that for cuprates. Superconductivity occurs at an electron density which is 10^-4 times that for conventional superconductors at the same temperature. The pairing of electrons cannot be due to phonon exchange since the density is so low. An unidentified strong interaction between electrons is believed to be the reason.

TGD based view very briefly

The finding of Cao et al is believed to be highly significant for the understanding of high-Tc superconductivity, and it motivates the development of a model of Mott insulators based on the TGD view about valence bonds, inspired by the identification of dark matter as heff/h=n phases of ordinary matter emerging naturally in adelic physics (see this). Also a more detailed version of the earlier model of high-Tc superconductivity in the TGD Universe emerges.

The model starts from a model of elementary particles applied to electron.

  1. At the space-time level elementary particles are identified as two-sheeted structures involving a pair of wormhole contacts connecting the space-time sheets and magnetic flux tubes connecting the wormhole throats at the two sheets. At the second sheet the flux tubes are loop-like and define the magnetic body of the particle. These flux loops are associated with valence bonds, and the value of heff/h=n can be large for them, implying that the loops become long. In ohmic conductivity the reconnection of the valence loops would be the fundamental mechanism allowing the transfer of conduction electrons between neighboring lattice sites.

  2. An essential role is played by the TGD based model for valence bonds predicting that the value of n increases along the rows of the Periodic Table. For period 4 transition metals the n for a valence bond with O is largest for Ni (NiO is a Mott insulator) and for Cu (copper oxides are high-Tc superconductors), so that these are predicted to be excellent candidates for Mott insulators and even unconventional superconductors.

  3. TGD space-time is many-sheeted and flux tubes form a hierarchy. In high-Tc superconductivity also anti-ferromagnetic (AFM) flux loops with the shape of a flattened and elongated rectangle would be present. The reconnection of valence flux loops with AFM flux loops would allow the transfer of the precursors of Cooper pairs - appearing already in Mott insulators but not yet giving rise to superconductivity - to the AFM flux loops. In the phase transition leading to high-Tc superconductivity in macroscopic scales the AFM flux loops would reconnect to longer flux loops making possible macroscopic supra currents.

See the article TGD based model for graphene superconductivity or the chapter Quantum Model for Bio-Superconductivity: II.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, March 15, 2018

Four new strange effects associated with galaxies

Dark matter in the TGD sense corresponds to heff/h=n phases of ordinary matter associated with magnetic flux tubes carrying monopole flux. These flux tubes are n-sheeted covering spaces, and n corresponds to the dimension of the extension of rationals in which the Galois group acts. The evidence for this interpretation of dark matter is accumulating. Here I discuss the 4 latest galactic anomalies supporting the proposed view.

  1. The standard view about galactic dark matter strongly suggests that the stars moving around so-called low surface brightness galaxies should not have a flat velocity spectrum. The surprise has been that they do. It is demonstrated that this provides an additional piece of support for the TGD view about dark matter and energy assigning them to cosmic strings having galaxies as knots along them.

  2. The so-called 21-cm anomaly, meaning that there is unexpected absorption of this line, could be due to the transfer of energy from gas to dark matter leading to a cooling of the gas. This requires an em interaction of the ordinary matter with dark matter, but the allowed value of the electric charge must be much smaller than elementary particle charges. In the TGD Universe the interaction would be mediated by an ordinary photon transforming to a dark photon, implying that the em charge of the dark matter particle is effectively reduced.

  3. The unexpected migration of stars from the Milky Way halo would, in the pearl-in-necklace model for galaxies, be due to a cosmic traffic accident: a head-on collision with a galaxy arriving along the cosmic string having both the Milky Way and the arriving galaxy along it. The gravitational attraction of the arriving galaxy would strip a part of the stars from the galactic plane, and distributions of stripped stars located symmetrically at the two sides of the galactic plane would be formed.

  4. A further observation is that the rotation period of a galaxy, identified as the period of rotation at the edge of the galaxy, seems to be universal. In the TGD Universe the period could be assigned to dark matter. The model allows one to build a more detailed picture about the interaction of ordinary matter and dark matter identified as a knot in a long string containing galaxies as knots. This knot would have loop-like protuberances extending up to the edge of the galaxy and even beyond it. In the region of radius r of a few kpc the dark matter knot behaves like a rigid body and rotates with velocity vmax slightly higher than the velocity vrot of distant stars. The angular rotation velocity of the flux loops extending to larger distances slows down with distance, from its value ωmax at ρ=r to ωrot=vrot/R at ρ=R - roughly by a factor r/R. If the stars are associated with sub-knots of the galactic knot and have decayed partially (mostly) to ordinary matter, the rotational velocities of stars and dark matter are the same, and one can understand the peculiar features of the velocity spectrum.
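The claimed slow-down of ω by roughly a factor r/R can be illustrated with a toy angular-velocity profile (a sketch only; the numbers r, R, vmax, vrot below are illustrative placeholders, not fits):

```python
# Toy angular-velocity profile implied by the knot model: rigid rotation
# inside radius r, roughly constant linear velocity between r and R.
def omega(rho, r=1.0, R=100.0, v_max=250.0, v_rot=240.0):
    """Angular velocity (arbitrary units) as a function of distance rho."""
    if rho < r:
        return v_max / r       # rigid body region: omega is constant
    return v_rot / rho         # constant-velocity region: omega ~ 1/rho

# omega drops by roughly a factor r/R between rho = r and rho = R:
ratio = omega(100.0) / omega(0.5)   # close to r/R = 0.01
```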

See the article Four new strange effects associated with galaxies or the chapter TGD and astrophysics.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, March 14, 2018

TGD based model explains why the rotation periods of galaxies are same

I learned on FB about a very interesting finding about the angular rotation velocities of stars near the edges of galactic disks (see this). The rotation period is about one giga-year. The discovery was made by a team led by professor Gerhardt Meurer from the UWA node of the International Centre for Radio Astronomy Research (ICRAR). Also a population of older stars was found at the edges, besides young stars and interstellar gas. The expectation was that older stars would not be present.

The rotation periods are claimed to be, to a reasonable accuracy, the same for all spiral galaxies irrespective of size. The constant velocity spectrum for distant stars implies ω ∝ 1/r for r>R. It is important to identify the value of the radius R of the edge of the visible part of the galaxy precisely. I understood that outside the edge stars are not formed. According to Wikipedia, the size R of the Milky Way is in the range (1-1.8)×10^5 ly and the velocity of distant stars is v=240 km/s. This gives T ∼ R/v ∼ .23 Gy, which is by a factor 1/4 smaller than the proposed universal period of T=1 Gy at the edge. It is clear that the value of T is sensitive to the identification of the edge, and one can challenge the identification Redge = 4×R.
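The arithmetic can be checked with a few lines of code (a sketch using standard conversion constants and the convention of the text, T ∼ R/v without a 2π factor):

```python
# Check T ~ R/v for the Milky Way (convention of the text: no 2*pi factor).
LIGHT_YEAR = 9.4607e15   # meters
YEAR = 3.156e7           # seconds

def period_gy(R_ly, v_km_s):
    """Rotation time scale T ~ R/v in gigayears."""
    return (R_ly * LIGHT_YEAR) / (v_km_s * 1e3) / (1e9 * YEAR)

v_rot = 240.0                 # km/s, velocity of distant stars
T = period_gy(1.8e5, v_rot)   # upper bound R = 1.8e5 ly
# T comes out near 0.23 Gy, a factor ~1/4 below the proposed universal 1 Gy
```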

In the following I will consider two TGD inspired arguments. The first argument is classical, developed by studying the velocity spectrum of stars in the Milky Way, and leads to a rough view about the dynamics of dark matter. The second argument is quantal and introduces the notion of the gravitational Planck constant hbargr and the quantization of angular momentum in multiples of hbargr. It allows one to predict the value of T and to deduce a relationship between the rotation period T and the average surface gravity of the galactic disk.

In the attempts to understand how T could be universal in the TGD framework, it is best to look at the velocity spectrum of the Milky Way depicted in the Wikipedia article about the Milky Way (see this).

  1. The illustration shows that v(ρ) has a maximum around r=1 kpc. The maximum corresponds in a reasonable approximation to vmax = 250 km/s, which is only 4 per cent above the asymptotic velocity vrot = 240 km/s for distant stars, as deduced from the figure.

    Can this be an accident? This would suggest that the stars move under the gravitational force of the galactic string alone, apart from a small contribution from self-gravitation! The dominating force could be due to the straight portions of the galactic string, which also determine the velocity vrot of distant stars.

    It is known that there is also a rigid body part of the dark matter having radius r ∼ 1 kpc (3.3×10^3 ly) for the Milky Way, constant density, and rotating with a constant angular velocity ωdark to be identified as ωvis at r. The rigid body part could be associated with a separate closed string or correspond to a knot of a long cosmic string giving rise to most of the galactic dark matter.

    Remark: The existence of the rigid body part is a serious problem for the dark-matter-halo approach, known as the core-cusp problem.

    For ρ<r stars could correspond to sub-knots of a knotted galactic string, and vrot would correspond to the rotation velocity of dark matter at r when the self-gravitation of the knotty structure is neglected. Taking it into account would increase vrot by 4 per cent to vmax. One would have ωdark = vmax/r.

  2. The universal rotation period of a galaxy, call it T ∼ 1 Gy, is assigned to the edge of the galaxy and calculated as T = Redge/v(Redge). The first guess is that the radius of the edge is Redge = R, where R ∈ (1-1.8)×10^5 ly (30-54 kpc) is the radius of the Milky Way. For v(R) = vrot ∼ 240 km/s one has T ∼ .225 Gy, which is by a factor 1/4 smaller than T = 1 Gy. Taking the estimate T = 1 Gy at face value one should have Redge = 4R.

    One could understand the slowing down of the rotation if the dark matter at ρ>r corresponds to long - say U-shaped, as TGD inspired quantum biology suggests - non-rigid loops emanating from the rigid body part. Non-rigidity would be due to the thickening of the flux tube reducing the contribution of the Kähler magnetic energy to the string tension - the volume contribution would be extremely small due to the smallness of the cosmological constant like parameter multiplying it.

  3. The velocity spectrum of stars in the Milky Way is such that the rotation period Tvis = ρ/vvis(ρ) is quite generally considerably shorter than T=1 Gy. The discrepancy is from 1 to 2 orders of magnitude. vvis(ρ) varies by only 17 per cent at most, has two minima (200 km/s and 210 km/s), and eventually approaches vrot=240 km/s.

    The simplest option is that the rotation velocity v(ρ) of dark matter in the range [r,R] is in the first approximation the same as that of visible matter and in the first approximation constant. The angular rotation velocity ω would decrease roughly like r/ρ from ωmax to ωrot = 2π/T: for the Milky Way this would mean a reduction by a factor of order 10^-2.

    If the stars form sub-knots of the galactic knot, the rotational velocities of the dark matter flux loops and the visible matter are the same. This would explain why the spectrum of velocities is so different from that predicted by Kepler's law for visible matter, as the illustration of the Wikipedia article shows (see this). A second - less plausible - option is that visible matter corresponds to closed flux loops moving in the gravitational field of the cosmic string and its knotty part, possibly de-reconnected (or "evaporated") from the flux loops.

    What about the situation for ρ>R? Are stars sub-knots of the galactic knot having loops extending beyond ρ=R? If one assumes that the differentially rotating dark matter loops extend only up to ρ=R, one ends up with a difficulty, since vvis(ρ) must be determined by Kepler's law above ρ=R and would approach vrot from above rather than from below. This problem is circumvented if the loops can extend also to distances longer than R.

  4. The asymptotic constant rotation velocity vrot of visible matter at ρ>R is in a good approximation proportional to the square root of the string tension Ts defining the density per unit length of the dark matter and dark energy of the string: vrot = (2GTs)^1/2 as determined from Kepler's law in the gravitational field of the string. In the article R is identified as the size of the galactic disk containing stars and gas.

  5. The universality of T (no dependence on the size R of the galaxy) is guaranteed if the ratio R/r is universal for a given string tension Ts. This would correspond to scaling invariance. In my opinion one can however challenge the idea about the universality of T, since its identification is far from obvious. Rather, the period at r would be universal if the angular velocity ω and perhaps also r are universal in the sense that they depend only on the string tension Ts of the galactic string.
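The relation vrot = (2GTs)^1/2 above can be inverted to estimate the string tension; a minimal numerical sketch (vrot = 240 km/s is the Milky Way value quoted earlier, and the resulting tension is only an order-of-magnitude figure):

```python
# Invert v_rot = sqrt(2*G*T_s) to get the string tension T_s (mass per length).
G = 6.674e-11   # Newton's constant, m^3 kg^-1 s^-2

def string_tension(v_rot_m_s):
    """String tension T_s = v_rot**2 / (2*G) in kg/m."""
    return v_rot_m_s**2 / (2.0 * G)

T_s = string_tension(240e3)   # roughly 4e20 kg/m for v_rot = 240 km/s
```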

The above argument is purely classical. One can consider the situation also quantally.
  1. The notion of the gravitational Planck constant hbargr, introduced first by Nottale, is central in TGD, where dark matter corresponds to a hierarchy of Planck constants heff = n×h. One would have

    hbargr = GMm/v0

    for the magnetic flux tubes connecting masses M and m and carrying dark matter. For flux loops from M back to M one would have

    hbargr = GM^2/v0 .

    v0 is a parameter with dimensions of velocity. The first guess is v0 = vrot, where vrot corresponds to the rotation velocity of distant stars - roughly vrot = 0.8×10^-3 c. Distant stars would be associated with the knots of the flux tubes emanating from the rigid body part of dark matter, and T = .25 Gy is obtained from T ∼ R/vrot in the case of the Milky Way. The universality of r/R guaranteeing the universality of T would reduce to the universality of v0.

  2. Assume quantization of dark angular momentum with unit hbargr for the galaxy. Using L = Iω, where I = MR^2/2 is the moment of inertia, this gives

    MR^2 ω/2 = L = m×hbargr = m×GM^2/v0


    ω = 2m×hbargr/(MR^2) = 2m×GM/(R^2 v0) = m×2πggal/v0 , m=1,2,... ,

    where ggal = GM/(πR^2) is the average surface gravity of the galactic disk.

    If the average surface mass density of the galactic disk and the value of m do not depend on the galaxy, one obtains a constant ω as observed (m=1 is the first guess, but also other values can be considered).

  3. For the rotation period one obtains

    T = v0/(m×ggal) , m=1,2,...

    Does the prediction make sense for the Milky Way? M = 10^12 MSun represents a lower bound for the mass of the Milky Way (see this). The upper bound is roughly by a factor 2 larger. For M = 10^12 MSun the average surface gravity ggal of the Milky Way would be approximately ggal ≈ 5×10^-12 g for R = 10^5 ly and by a factor 1/4 smaller for R = 2×10^5 ly. Here g = 10 m/s^2 is the acceleration of gravity at the surface of the Earth. m=1 corresponds to the maximal period.

    For the upper bound M = 1.5×10^12 MSun of the Milky Way mass (see this) and the larger radius R = 2×10^5 ly one obtains T ≈ .23×10^9/m years using v0 = vrot(R/r), R = 180r and vrot = 240 km/s.

  4. One can criticize this argument since the rigid body approximation fails. Taking into account the dependence v = vrot(R/ρ) in the integral defining the total angular momentum, L = 2π(M/πR^2) ∫ v(ρ) ρ^2 dρ, gives MωR^2 rather than MωR^2/2, so that the value of ω is reduced by a factor 1/2 and the value of T increases by a factor 2 to T = .46/m Gy, which is rather near to the claimed value of 1 Gy.
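The two forms of the period used above, T = 2π/ω with ω = 2m×hbargr/(MR^2) and T = v0/(m×ggal), are algebraically identical; a minimal numerical check (the M, R, v0 values below are placeholders, not fits to any particular galaxy):

```python
import math

G = 6.674e-11  # Newton's constant, m^3 kg^-1 s^-2

def period_from_quantization(M, R, v0, m=1):
    """T = 2*pi/omega with M*R**2*omega/2 = m*hbar_gr, hbar_gr = G*M**2/v0."""
    hbar_gr = G * M**2 / v0
    omega = 2.0 * m * hbar_gr / (M * R**2)
    return 2.0 * math.pi / omega

def period_from_surface_gravity(M, R, v0, m=1):
    """Equivalent form T = v0/(m*g_gal) with g_gal = G*M/(pi*R**2)."""
    g_gal = G * M / (math.pi * R**2)
    return v0 / (m * g_gal)

M, R, v0 = 2.0e42, 1.0e21, 2.4e5   # placeholder values (kg, m, m/s)
assert math.isclose(period_from_quantization(M, R, v0),
                    period_from_surface_gravity(M, R, v0))
```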

To sum up, the quantization argument combined with the classical argument discussed first allows one to relate the value of T to the average surface gravity of the galactic disk and predicts the value of T reasonably well.

See the article Four new strange effects associated with galaxies or the chapter TGD and astrophysics.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, March 13, 2018

Strange finding about galactic halo as a possible further support for TGD based model of galaxies

A team led by Maria Bergemann from the Max Planck Institute for Astronomy in Heidelberg has studied a small population of stars in the halo of the Milky Way (MW) and found its chemical composition to closely match that of the Galactic disk (see this). This similarity provides compelling evidence that these stars originated within the disc rather than from merged dwarf galaxies. The reason for this stellar migration is thought to be theoretically proposed oscillations of the MW disk as a whole, induced by the tidal interaction of the MW with a passing massive satellite galaxy.

One can divide the stars in the MW into those in the galactic disk and those in the galactic halo. The halo has gigantic structures consisting of clouds and streams of stars rotating around the center of the MW. These structures have been identified as a kind of debris thought to reflect the violent past of the MW, involving collisions with smaller galaxies.

The scientists investigated 14 stars located in two different structures in the Galactic halo, the Triangulum-Andromeda (Tri-And) and the A13 stellar over-densities, which lie on opposite sides of the Galactic disc plane. Earlier studies of the motion of these two diffuse structures revealed that they are kinematically associated and could be related to the Monoceros Ring, a ring-like structure that twists around the Galaxy. The positions of the two stellar over-densities could be determined, each lying about 5 kiloparsec (14000 ly) above or below the Galactic plane. Chemical analysis of the stars, made possible by their spectral lines, demonstrated that they must originate from the MW itself, which was a complete surprise.

The proposed model for the findings is in terms of vertical vibrations of the galactic disk analogous to those of a drum membrane. In particular, the fact that the structures are above and below the Monoceros Ring supports this idea. The vibrations would be induced by the gravitational interaction of the ordinary and dark matter of the galactic halo with a passing satellite galaxy. The picture of the article (see this) illustrates what the pattern of these vertical vibrations would look like according to simulations.

In the TGD framework this model is modified since the dark matter halo is replaced with a cosmic string. Due to the absence of the dark matter halo, the motion along the cosmic string is free apart from the gravitational attraction caused by the galactic disk. The cosmic string forces the migrated stars to rotate around the cosmic string in a plane parallel to the galactic plane, and the stars studied indeed belong to ring-like structures: the prediction is that these rings rotate around the axis of the galaxy.

One can argue that if stars are very far from the galactic plane - say in a dwarf galaxy - the halo model of dark matter suggests that the orbital plane is arbitrary but goes through the galactic center, since the spherically symmetric dark matter halo dominates the mass density. TGD would predict that the orbital plane is parallel to the galactic plane.

Are the oscillations of the galactic plane necessary in TGD framework?

  1. The large size and ring shape of the migrated structures suggest that oscillations of the disk could have caused them. The model for the oscillations of the MW disk would be essentially that for a local interaction of a membrane (characterized by its tension) with its own gravitational field and with the gravitational field of the passing galaxy G. Some stars would be stripped off from the membrane during the oscillations.

  2. If the stars are local knots in a big knot (galaxy) formed by a long flux tube, as the TGD based model for galaxy formation suggests, one can ask whether reconnections of the flux tube could take place and split from it the ring-like structures with which the migrating stars are associated. This would reduce the situation to the single-particle level, and it is interesting to see whether this kind of model might work. One can also ask whether the stripping could be induced by the interaction with G without considerable oscillations of the MW.

The simplest toy model for the interaction of the MW with G would be the following. I have proposed this model of cosmic traffic accidents already earlier. Also the fusion of blackholes could be made probable if the blackholes are associated with the same cosmic string (stars would be sub-knots of galactic knots).
  1. G moves past the MW and strips off stars and possibly also larger structures from the MW: denote this kind of structure by O. Since the stripped objects at both sides of the MW are at the same distance, it seems that the only plausible direction of motion of G is along the cosmic string along which the galaxies are like pearls in a necklace. G would go through the MW! If the model works, it gives support for the TGD view about galaxies.

    One can of course worry about the dramatic implications of head-on collisions of galaxies, but it is interesting to look at whether the model might work at all. On the other hand, one can ask whether the galactic blackhole of the MW could have been created in the collision, possibly via the fusion of the blackhole associated with G with that of the MW, in analogy with the fusion of blackholes detected by LIGO.

  2. A reasonable approximation is that the motions of G and MW are not considerably affected in the collision. The MW is stationary and G arrives with a constant velocity v along the axis of the cosmic string above the MW plane. In the region between the galactic planes of G and MW the constant accelerations caused by G and MW have opposite directions, so that one has

    gtot= gG -gMW between the galactic planes and above MW plane

    gtot= -gG+gMW between the galactic planes and below MW plane ,

    gtot= -gG- gMW above both galactic planes ,

    gtot= gG+ gMW below both galactic planes .

    The situation is completely symmetric with respect to reflection in the galactic plane if one assumes that the situation in the galactic plane is not affected considerably. Therefore it is enough to look at what happens above the MW plane.

  3. If G is more massive, one can say that it attracts the material in the MW and can induce oscillatory wave motion, whose amplitude could however be small. This would induce the reconnections of the cosmic string, stripping objects O from the MW, and O would experience an upward acceleration gtot = gG - gMW towards G (note that O also rotates around the cosmic string). After O has passed by G, it continues its motion in the vertical direction, experiences deceleration gtot = -gG - gMW, and eventually begins to fall back towards the MW.

    One can parameterize the acceleration caused by G as gG = (1+x)×gMW, x>0, so that the acceleration felt by O in the middle region between the planes is gtot = gG - gMW = x×gMW. Above the planes of both G and MW the acceleration is gtot = -(2+x)×gMW.

  4. Denote by T the moment when O and G pass each other. One can express the vertical height h and velocity v of O in the two regions above the MW plane as

    h(t) = [(gG-gMW)/2] t^2 , v(t) = (gG-gMW) t for t<T ,

    h(t) = -[(gG+gMW)/2] (t-T)^2 + v(T)(t-T) + h(T) , v(T) = (gG-gMW) T ,

    h(T) = [(gG-gMW)/2] T^2 for t>T .

    Note that the time parameter T tells how long it takes for O to reach G after it has been stripped off from the MW. A naive estimate for the value of T is the time scale in which the gravitational field of the galactic disk begins to look like that of a point mass.

    This would suggest that h(T) is of the order of the radius R of the MW, so that one would have, using gG = (1+x)×gMW,

    T ∼ (1/x)^1/2 (2R/gMW)^1/2 .

  5. The direction of motion of O changes at Tmax defined by v(Tmax)=0. One has

    Tmax = [2gG/(gG+gMW)] T ,

    hmax = -[(gG+gMW)/2] (Tmax-T)^2 + v(T)(Tmax-T) + h(T) .

  6. For t>Tmax one has

    h(t) = -[(gG+gMW)/2] (t-Tmax)^2 + hmax ,

    hmax = [(gG+gMW)/2] (Tmax-T)^2 + h(T) .

    Expressing hmax in terms of T and the parameter x = (gG-gMW)/gMW one has

    hmax = y(x) gMW T^2/2 ,

    y(x) = 2x(1+x)/(2+x) ≈ x for small values of x .

  7. If one assumes that hmax > hnow, where hnow ∼ 1.2×10^5 ly is the recent height of the objects considered, one obtains an estimate for the time T from hmax > hnow giving

    T > [2/y(x)]^1/2 T0 , T0 = (hnow/gMW)^1/2 .

    Note that Tmax < 2T holds true.
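The piecewise-constant-acceleration solution above can be put into a few lines of code to obtain Tmax and hmax for a given x (a toy sketch in dimensionless units with gMW = 1 and T = 1; x = 0.1 is the illustrative upper bound used below):

```python
# Toy integration of the stripped object O: constant upward acceleration
# x*g_MW until O passes G at time T, deceleration (2+x)*g_MW afterwards.
def trajectory(x, T=1.0, g_mw=1.0):
    """Return (T_max, h_max): the turning time and maximal height of O."""
    a1 = x * g_mw                  # g_G - g_MW, between the planes
    a2 = (2.0 + x) * g_mw          # g_G + g_MW, above both planes
    v_T = a1 * T                   # velocity of O when it passes G
    h_T = 0.5 * a1 * T**2          # height of O when it passes G
    t_max = T + v_T / a2           # v vanishes here: 2*g_G*T/(g_G + g_MW)
    h_max = h_T + v_T**2 / (2.0 * a2)
    return t_max, h_max

t_max, h_max = trajectory(x=0.1)   # t_max ≈ 1.05*T, h_max ≈ 0.052*g_MW*T**2
```

For small x the result reproduces hmax ≈ x·gMW·T^2/2, as in the estimate above.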

It is interesting to see whether the model really works.
  1. It is easy to find (one can check the numerical factors here) that gMW can be expressed in the limit of an infinitely large galactic disk as

    gMW= 2π G (dM/dS)= 2GM/R^2 ,

    where R is the radius of the galactic disk and dM/dS= M/(π R^2) is the mass of the galactic disk per unit area. This expression is analogous to g= GME/RE^2 at the surface of Earth.

  2. One can express the estimate in terms of the acceleration g= 10 m/s^2 as

    gMW ≈ 2g (RE/R)^2 (M/ME) .

    Using the lower bound R= 10^5 ly for the MW radius, the MW mass M ∼ 10^12 MSun, MSun/ME= 3× 10^6, and RE ≈ 6× 10^6 m, one obtains gMW ∼ 2× 10^-10 g.

  3. Using the estimate for gMW one obtains T > [(2+x)/(2x(1+x))]^(1/2) T0 with

    T0 ∼ 3× 10^9 years .

    The estimate T ∼ (1/x)^(1/2) (2R/gMW)^(1/2) proposed above gives T > (1/x)^(1/2) × 10^8 years. The fraction of ordinary mass from the total mass is roughly 10 per cent of the contribution of the dark energy and dark particles associated with the cosmic string. Therefore x<.1 is a reasonable upper bound for the parameter x characterizing the mass difference of G and MW. For x ≈ .1 one obtains T in the range 1-10 Gy.
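
The order-of-magnitude arithmetic for gMW can be reproduced with a few lines; all input numbers are those quoted above.

```python
# Order-of-magnitude check of gMW ≈ 2 g (RE/R)^2 (M/ME), with all the
# numbers as quoted in the text.
LY_M = 9.46e15                 # one light year in meters
g = 10.0                       # m/s^2
R = 1e5 * LY_M                 # lower bound for the MW disk radius
RE = 6e6                       # Earth radius in meters
M_over_ME = 1e12 * 3e6         # (M/MSun) x (MSun/ME) as quoted
gMW_over_g = 2.0 * (RE / R) ** 2 * M_over_ME
print(f"gMW/g ~ {gMW_over_g:.1e}")   # ~ 2e-10, as stated in the text
```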

See the article Four new strange effects associated with galaxies or the chapter TGD and astrophysics.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, March 12, 2018

Dark matter and 21 cm line of hydrogen

Dark matter in the TGD sense corresponds to heff/h= n phases of ordinary matter associated with magnetic flux tubes. These flux tubes would be n-sheeted covering spaces, and n would correspond to the dimension of the extension of rationals in which its Galois group acts. The evidence for this interpretation of dark matter is accumulating. Here I discuss one of the latest anomalies: the 21-cm anomaly.

Sabine Hossenfelder told about an article discussing a possible interpretation of the so-called 21-cm anomaly associated with the hyperfine transition of the hydrogen atom and observed by the EDGES collaboration. The abstract of the article reads as follows.

The EDGES Collaboration has recently reported the detection of a stronger-than-expected absorption feature in the global 21-cm spectrum, centered at a frequency corresponding to a redshift of z ≈ 17. This observation has been interpreted as evidence that the gas was cooled during this era as a result of scattering with dark matter. In this study, we explore this possibility, applying constraints from the cosmic microwave background, light element abundances, Supernova 1987A, and a variety of laboratory experiments. After taking these constraints into account, we find that the vast majority of the parameter space capable of generating the observed 21-cm signal is ruled out. The only range of models that remains viable is that in which a small fraction, ≈ 0.3-2 per cent, of the dark matter consists of particles with a mass of ≈ 10-80 MeV and which couple to the photon through a small electric charge, ε ≈ 10-6-10-4. Furthermore, in order to avoid being overproduced in the early universe, such models must be supplemented with an additional depletion mechanism, such as annihilations through a Lμ-Lτ gauge boson or annihilations to a pair of rapidly decaying hidden sector scalars.

What has been found is an unexpectedly strong absorption feature in the 21-cm spectrum at redshift z ≈ 17, which corresponds to a distance of about 2.27× 10^11 ly. The dark matter interpretation would be in terms of scattering of the baryons of the gas from dark matter at a lower temperature. The anomalous absorption of the 21 cm line could be explained by the cooling of the gas caused by the flow of energy to a colder medium consisting of dark matter. If I have understood correctly, this would generate a temperature difference between the background radiation and the gas and a consequent energy flow from the radiation to the gas inducing the anomaly.
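
As a quick consistency check of the redshift, one can compute where the 21-cm line (rest frequency 1420.4 MHz) lands at z ≈ 17; the result falls in the low-frequency radio band probed by EDGES.

```python
# Where does the 21-cm line land at z ~ 17? The rest frequency is
# 1420.4 MHz, and the observed frequency scales as 1/(1+z).
f_rest_MHz = 1420.4
z = 17.0
f_obs_MHz = f_rest_MHz / (1.0 + z)
wavelength_obs_m = 0.2110 * (1.0 + z)    # rest wavelength about 21.1 cm
print(f"observed: {f_obs_MHz:.1f} MHz, {wavelength_obs_m:.2f} m")
```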

The article excludes a large part of the parameter space able to generate the observed signal. The idea is that the baryons of the gas interact with dark matter, the interaction being mediated by photons. The small em charge of the new particle is needed to make it "dark enough". My conviction is that tinkering with the quantization of electromagnetic charge is only a symptom of how desperate the situation concerning the interpretation of dark matter in terms of exotic particles is. Some genuinely new physics is involved, and the old recipes of particle physicists do not work.

In the TGD framework the dark matter at a lower temperature would consist of heff/h= n phases of ordinary matter residing at magnetic flux tubes. This kind of energy transfer between ordinary and dark matter is a general signature of dark matter in the TGD sense, and some experiments related to primordial life forms give indications for this kind of energy flow in the lab scale (see this).

The ordinary photon line appearing in the Feynman diagram describing the exchange of a photon would be replaced with a photon line containing a vertex in which the photon transforms to a dark photon. The coupling in the vertex - call it m^2 - would have dimensions of mass squared. This would transform the coupling e^2 associated with the photon exchange to e^2 m^2/p^2, where p^2 is the photon's virtual mass squared. The slow rate for the transformation of the ordinary photon to a dark photon could be seen as an effective reduction of the electromagnetic charge of the dark matter particle from its quantized value.

Remark: In biological systems dark cyclotron photons would transform to ordinary photons interpreted as bio-photons with energies in the visible and UV range.

To sum up, the importance of this finding is that it supports the view about dark matter as ordinary particles in a new phase. There are electromagnetic interactions but the transformation of ordinary photons to dark photons slows down the process and makes these exotic phases effectively dark.

See the article Four new strange effects associated with galaxies or the chapter TGD and astrophysics.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Sunday, March 11, 2018

Could functional equation and Riemann hypothesis generalize?

Number theoretical considerations lead to a modification of the zeta function in which the powers n^(-s)= exp(-log(n)s) are replaced with powers exp(-Log(n)s), where the rational-valued number theoretic logarithm Log(n) is defined as ∑p kp p/π(p), corresponding to the decomposition n= ∏p p^kp of n into a product of prime powers. For large primes Log(p) equals log(p) in good approximation. The point of the replacement is that Log(n) carries number theoretical information so that the definition is very natural. This number theoretic zeta will be denoted by Ζ to distinguish it from the ordinary zeta function denoted by ζ.
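
The definition of Log(n) is easy to make concrete. The sketch below computes π(p) by brute-force trial division (adequate only for small arguments) and returns Log(n) as an exact rational:

```python
# Concrete implementation of Log(n) = sum_p k_p * p/pi(p) over the prime
# factorization of n; pi(p) by trial division, so only for small n.
from fractions import Fraction

def primepi(x):
    """Number of primes <= x."""
    return sum(1 for m in range(2, x + 1)
               if all(m % d for d in range(2, int(m ** 0.5) + 1)))

def Log(n):
    """Rational-valued number theoretic logarithm."""
    result, p = Fraction(0), 2
    while n > 1:
        while n % p == 0:
            result += Fraction(p, primepi(p))
            n //= p
        p += 1
    return result

print(Log(2), Log(3), Log(5), Log(12))
# Log(2) = 2, Log(3) = 3/2, Log(5) = 5/3, Log(12) = 2 Log(2) + Log(3) = 11/2
```

Note that the crucial identity Log(mn)= Log(m)+Log(n) holds by construction.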

It is interesting to list the elementary properties of Ζ before trying to see whether the functional equation for ζ and the Riemann hypothesis generalize.

  1. The replacement log(n) → Log(n)= ∑p kp Log(p) implies that Ζ codes explicit number theoretic information. Note that Log(n) satisfies the crucial identity Log(mn)= Log(m)+ Log(n). Ζ is an analog of a partition function with the rational number valued Log(n) taking the role of energy and 1/s that of a complex temperature. In ZEO this partition-function-like entity could be associated with a zero energy state as a "square root" of the thermodynamical partition function: in this case complex temperatures are possible. |Ζ|^2 would be the analog of the ordinary partition function.

  2. The reduction of Ζ to a product of "prime factors" 1/[1-exp(-Log(p)s)] holds true by Log(n)= ∑p kp Log(p), Log(p)= p/π(p).

  3. Ζ is a combination of exponentials exp(-Log(n)s), which converge for Re(s)>0. For ζ one has the exponentials exp(-log(n)s), which also converge for Re(s)>0: the sum ∑ n^(-s) does not however converge in the region Re(s)<1. Presumably Ζ also fails to converge for Re(s) ≤ 1. The behavior of the terms exp(-Log(n)s) for large values of n is very similar to that in ζ.

  4. One can express ζ in terms of the η function defined as

    η(s)= ∑ (-1)^n n^(-s) .

    The factors (-1)^n guarantee that η converges (albeit not absolutely) inside the critical strip 0<Re(s)<1.

    By using a decomposition of integers to odd and even ones, one can express ζ in terms of η:

    ζ(s) = η(s)/(-1+2^(-s+1)) .

    This definition converges inside the critical strip. Note the pole at s=1 coming from this factor.

    One can define also Η as the counterpart of η:

    Η(s)= ∑ (-1)^n e^(-Log(n)s) .

    The formula relating ζ and η generalizes to Ζ and Η: 2^(-s) is replaced with exp(-2s) (since Log(2)=2):

    Ζ(s) = Η(s)/(-1+2e^(-2s)) .

    This definition of Ζ converges in the critical strip Re(s) ∈ (0,1) and also for Re(s)>1. Ζ(1-s) converges for Re(s)<1 so that in the Η representation both converge.

    Note however that the pole of ζ at s=1 has shifted to one at s=log(2)/2, which lies below the Re(s)=1/2 line. If a symmetrically positioned pole at s= 1-log(2)/2 is not present, the functional equation cannot hold true.

  5. Log(n) approaches log(n) for integers n containing no small prime factors p, for which π(p) differs strongly from p/log(p). This suggests that allowing in the sum defining Ζ only the terms exp(-Log(n)s) with n not divisible by primes p<pmax might give a cutoff Ζcut,pmax behaving very much like ζ from which the "prime factors" 1/(1-exp(-Log(p)s)), p<pmax, are dropped. This is just a division of Ζ by these factors and, at least formally, this does not affect the zeros of Ζ. An arbitrary number of factors can be dropped. Could this mean that Ζcut has the same or very nearly the same zeros as ζ at the critical line? This sounds paradoxical and might reflect my sloppy thinking: maybe the lack of absolute convergence implies that the conclusion is incorrect.
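
The slow approach of Log(p) to log(p) can be illustrated numerically; the sketch below (sieve limit chosen arbitrarily) prints the ratio Log(p)/log(p) for a few primes:

```python
# How fast does Log(p) = p/pi(p) approach log(p)? The approach is slow,
# consistent with pi(p) ~ p/log(p) of the prime number theorem.
from math import log

def primepi_table(limit):
    """pi(m) for all m <= limit via a sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for m in range(2, int(limit ** 0.5) + 1):
        if sieve[m]:
            sieve[m * m::m] = [False] * len(sieve[m * m::m])
    pi, count = [0] * (limit + 1), 0
    for m in range(limit + 1):
        count += sieve[m]
        pi[m] = count
    return pi

pi = primepi_table(10000)
for p in (101, 1009, 9973):
    print(p, (p / pi[p]) / log(p))   # ratio Log(p)/log(p), creeping toward 1
```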

The key questions are whether Ζ allows a generalization of the functional equation ξ(s)= ξ(1-s) with ξ(s)= (1/2) s(s-1) Γ(s/2) π^(-s/2) ζ(s) and whether the Riemann hypothesis generalizes. The derivation of the functional equation is quite a tricky task and involves an integral representation of ζ.
  1. One can start from the integral representation of ζ valid for Re(s)>0,

    ζ(s)= [1/((1-2^(1-s))Γ(s))] ∫0^∞ [t^(s-1)/(e^t+1)] dt , Re(s)>0 ,

    deducible from the expression in terms of η(s). The factor 1/(1+e^t) can be expanded as the geometric series 1/(1+e^t)= ∑ (-1)^n exp(-nt) converging inside the critical strip. One formally performs the integrations by taking nt as the integration variable. The integral gives the result ∑ (-1)^n n^(-s) Γ(s).

    The generalization of this would be obtained by a generalization of geometric series:

    1/(1+e^t)= ∑ (-1)^n exp(-nt) → ∑ (-1)^n e^(-exp(Log(n))t)

    in the integral representation. This would formally give Ζ: the only difference is that one takes u= exp(Log(n))t as the integration variable.

    One could try to prove the functional equation by using this representation. One proof (see this) starts from the alternative expression of ζ as

    ζ(s)= [1/Γ(s)] ∫0^∞ [t^(s-1)/(e^t-1)] dt , Re(s)>1 .

    One modifies the integration contour to a contour C coming from +∞ above the positive real axis, circling the origin, and returning back to +∞ below the real axis to get a modified representation of ζ:

    ζ(s)= 1/[2i sin(π s)Γ(s)] ∫C [(-w)^(s-1)/(e^w-1)] dw , Re(s)>1 .

    One modifies C further so that the origin is circled along a square with vertices at +/- (2n+1)π and +/- i(2n+1)π.

    One calculates the integral along C as a residue integral. The poles of the integrand proportional to 1/(e^w-1) are on the imaginary axis and correspond to w= i2π r, r ∈ Z. The residue integral gives the other side of the functional equation.

  2. Could one generalize this representation to the recent case? One must generalize the geometric series defined by 1/(e^w-1) to ∑ e^(-exp(Log(n))w). The problem is that one has only a generalization of the geometric series and no closed form for the counterpart of 1/(e^w-1), so that one does not know what the poles are. The naive guess is that one could compute the residue integrals term by term in the sum over n. An equally naive guess would be that at the poles the factors in the sum are equal to unity, as they would be for Riemann zeta. This would give for the poles of the n:th term the guess w(n,r)= i2π r/exp(Log(n)), r ∈ Z. This does not however allow one to deduce the residues at the poles. Note that the pole of Ζ at s= log(2)/2 suggests that the functional equation is not true.

There is however no need for a functional equation if one is only interested in F(s)= Ζ(s)+Ζ(1-s) at the critical line! Also the analog of the Riemann hypothesis follows naturally!
  1. In the representation using Η, F(s) converges in the critical strip and is real(!) at the critical line Re(s)=1/2, as follows from the fact that 1-s= s* holds for Re(s)=1/2! Hence F(s) is expected to have a large number of zeros at the critical line. Presumably their number is infinite, since Fcut,pmax(s) approaches 2ζcut,pmax at the critical line for large enough pmax.

  2. One can define a different kind of cutoff of Ζ for a given nmax: n<nmax in the sum over e^(-Log(n)s). Call this cutoff Ζcut,nmax. This cutoff must be distinguished from the cutoff Ζcut,pmax obtained by dropping the "prime factors" with p<pmax. The terms in the cutoff are of the form u^(∑p kp p/π(p)), u= exp(-s). The cutoff is analogous to a polynomial but with fractional powers of u. It can be made a polynomial by the change of variable u → v= exp(-s/a), where a is the product of all the π(p):s associated with the primes appearing in the integers n<nmax.

    One could solve numerically the zeros of Ζ(s)+Ζ(1-s) using program modules calculating π(p) for a given p and the roots of a complex polynomial of a given order. One can check whether all the zeros of Ζ(s)+Ζ(1-s) might reside at the critical line.

  3. One can define also Fcut,nmax(s), to be distinguished from Fcut,pmax(s). At the critical line it reduces to a sum of terms exp(-Log(n)/2) cos(Log(n)y), n<nmax, the cosines coming from combining the terms of Ζ(s) and Ζ(1-s). The function F(s) is not a sum of rational powers of exp(-iy), unlike Ζ(s). The existence of zeros could be shown by demonstrating that the sign of this function varies as a function of y. The functions cos(Log(n)y) have period Δy= 2π/Log(n). For small values of n the exponential factors exp(-Log(n)/2) are largest so that these terms dominate, and their finite periods Δy make one expect that the sign of both F(s) and Fcut,nmax(s) varies and forces the presence of zeros.

    One could perhaps interpret the system as a quantum critical system. The rather large oscillatory terms with small Log(n) give a periodic infinite set of approximate roots, and the exponentially smaller higher terms induce small perturbations of this periodic structure. Near Im(s)=0 the terms with large Log(n) however add up coherently, so that their effect is large there and badly destroys the periodic structure for the small roots of Ζ.
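
The reality of F(s) at the critical line is easy to verify numerically with a crude truncation of the Η representation; the truncation N and the test point s are arbitrary choices in this sketch.

```python
# Crude numerical check that F(s) = Z(s) + Z(1-s) is real on the critical
# line, using the representation Z = H(s)/(-1 + 2 exp(-2s)). The truncation
# N = 200 and the test point s = 1/2 + 3i are arbitrary choices.
from cmath import exp

def primepi(x):
    return sum(1 for m in range(2, x + 1)
               if all(m % d for d in range(2, int(m ** 0.5) + 1)))

def Log(n):
    result, p = 0.0, 2
    while n > 1:
        while n % p == 0:
            result += p / primepi(p)
            n //= p
        p += 1
    return result

N = 200
LOGS = [Log(n) for n in range(1, N + 1)]      # Log(1) = 0

def Zeta(s):
    H = sum((-1) ** n * exp(-LOGS[n - 1] * s) for n in range(1, N + 1))
    return H / (-1.0 + 2.0 * exp(-2.0 * s))

s = 0.5 + 3.0j
F = Zeta(s) + Zeta(1.0 - s)
print(F)    # the imaginary part vanishes up to floating point error
```

The reality follows because Log(n) is real: for Re(s)=1/2 one has 1-s= s*, so the two terms are complex conjugates.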

To sum up, the definition of the modified zeta and eta functions makes sense, as does the analog of the Riemann Hypothesis. It however seems that the counterpart of the functional equation does not hold true. This is not a problem, since one can define the symmetrized zeta so that it is well-defined in the critical strip.

See the article The Recent View about Twistorialization in TGD Framework or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Friday, March 09, 2018

Number theoretic vision about Riemann zeta and evolution of Kähler coupling strength

I have made several number theoretic speculations related to the possible role of the zeros of Riemann zeta in coupling constant evolution. The basic problem is that it is not even known whether the zeros of zeta are rationals, algebraic numbers or genuine transcendentals, or whether all these categories are represented. Also the question whether number theoretic analogs of ζ defined for p-adic number fields could make sense is interesting.

1. Is number theoretic analog of ζ possible using Log(p) instead of log(p)?

The definition of Log(n) based on the factorization Log(n)= ∑p kp Log(p) allows one to define the number theoretic version of Riemann zeta ζ(s)= ∑ n^(-s) via the replacement n^(-s)= exp(-log(n)s) → exp(-Log(n)s).

  1. In a suitable region of the plane the number theoretic zeta would have the usual decomposition into factors via the replacement 1/(1-p^(-s)) → 1/(1-exp(-Log(p)s)). p-Adically this makes sense for s= O(p) and thus only for a finite number of primes p for positive integer valued s: one obtains a kind of cutoff zeta. The number theoretic zeta would be sensitive only to a finite number of prime factors of the integer n.

  2. This might relate to the strong physical indications that only a finite number of cognitive representations characterized by p-adic primes are present in given quantum state: the ramified primes for the extension are excellent candidates for these p-adic primes. The size scale n of CD could also have decomposition to a product of powers of ramified primes. The finiteness of cognition conforms with the cutoff: for given CD size n and extension of rationals the p-adic primes labelling cognitive representations would be fixed.

  3. One can expand the region of convergence to larger p-adic norms by introducing an extension of p-adics containing e and some of its roots (e^p is automatically a p-adic number). By introducing roots of unity, one can define the phase factor exp(-iLog(n)Im(s)) for suitable values of Im(s). Clearly, exp(-ip Im(s)/π(p)) must be in the extension used for all primes p involved. One must therefore introduce the prime roots exp(i/π(p)) for the primes appearing in the cutoff. To define the number theoretic zeta for all p-adic integer values of Re(s) and all integer values of Im(s), one should allow all roots of unity exp(i2π/n) and all roots e^(1/n): this requires an infinite-dimensional extension.

  4. One can thus define a hierarchy of cutoffs of zeta: for a cutoff the factorization of zeta into a finite number of "prime factors" takes place in a genuine sense, and the points s= ikπ(p) give rise to poles of the cutoff zeta as poles of the prime factors. The cutoff zeta converges to zero for Re(s) → ∞ and exists along the angles corresponding to the allowed roots of unity. The cutoff zeta diverges at (Re(s)=0, Im(s)= kπ(p)) for the primes p appearing in it.

Remark: One could modify the definition of ζ also for complex numbers by replacing exp(-log(n)s) with exp(-Log(n)s), Log(n)= ∑p kp Log(p), to get the prime factorization formula. I will refer to this variant of zeta as the modified zeta below.

2. Could the values of 1/αK be given as zeros of ζ or of modified ζ?

I have discussed the possibility that the zeros s= 1/2+iy of Riemann zeta at the critical line correspond to the values of the complex valued Kähler coupling strength αK: s= i/αK (see this). The assumption that p^(iy) is a root of unity for some combinations of p and y [log(p)y= (r/s)2π] was made. This does not allow s to be complex rational. If the exponent of Kähler action disappears from the scattering amplitudes as M8-H duality requires, one could assume that s has rational values, but also algebraic values are allowed.

  1. If one combines the proposed idea about the logarithmic dependence of the coupling constants on the size of CD and the algebraic extension with the s= i/αK hypothesis, one cannot avoid the conjecture that the zeros of zeta are complex rationals. It is not known whether this is the case or not. The rationality would not have any strong implications for number theory, but the existence of irrational roots would have (see this). Interestingly, the rationality of the roots would have very powerful physical implications if the TGD inspired number theoretical conjectures are accepted.

    The argument discussed below however shows that complex rational roots of zeta are not favored by the observations about the Fourier transform of the characteristic function for the zeros of zeta. Rather, the findings suggest that the imaginary parts (see this) should be rational multiples of 2π, which does not conform with the vision that 1/αK is an algebraic number. The replacement of log(p) with Log(p) and of 2π with its natural p-adic approximation in an extension allowing roots of unity however allows 1/αK to be an algebraic number. Could the spectrum of 1/αK correspond to the roots of ζ or of the modified ζ?

  2. A further conjecture discussed was that there is a 1-1 correspondence between primes p ≈ 2^k, k prime, and the zeros of zeta, so that there would be an order preserving map k → sk. The support for the conjecture was the predicted rather reasonable coupling constant evolution for αK. Primes near powers of 2 could be physically special because Log(n) decomposes to a sum of Log(p):s and would increase dramatically at n= 2^k, slightly above these primes.

    In an attempt to understand why just the prime values of k are physically special, I have proposed that k-adic length scales correspond to the size scales of wormhole contacts, whereas particle space-time sheets would correspond to p ≈ 2^k. Could the logarithmic relation between Lp and Lk correspond to the logarithmic relation between p and π(p) in the case that π(p) is prime, and could this condition select the preferred p-adic primes p?

3. The argument of Dyson for the Fourier transform of the characteristic function for the set of zeros of ζ

Consider now the argument suggesting that the roots of zeta cannot be complex rationals. On the basis of numerical evidence Dyson (see this) has conjectured that the Fourier transform of the characteristic function for the critical zeros of zeta consists of multiples of logarithms log(p) of primes, so that one could regard the zeros as a one-dimensional quasi-crystal.

This hypothesis makes sense if the zeros of zeta decompose into disjoint sets such that each set corresponds to its own prime (and its powers) and one has p^(iy)= U(m/n)= exp(i2π m/n) (see the appendix of this). This hypothesis is also motivated by number theoretical universality (see this).

  1. One can re-write the discrete Fourier transform over zeros of ζ at critical line as

    f(x)= ∑y exp(ixy) , y= Im(s) .

    The alternative form reads as

    f(u)= ∑s u^(iy) , u= exp(x) .

    f(u) is concentrated at the powers p^n of primes defining ideals in the set of integers.

    For u= p^n one would have u^(iy)= p^(iny)= exp(in log(p)y). Note that k= n log(p) is analogous to a wave vector. If exp(in log(p)y) is a root of unity, as proposed earlier for some combinations of p and y, the Fourier transform becomes a sum over roots of unity for these combinations: this could make constructive interference possible for the roots of unity which are the same or at least have the same sign. For a given p there should be several values of y(p) with nearly the same value of exp(in log(p)y(p)), whereas the other values of y would interfere destructively.

    For general values u= x^n, x ≠ p, the sum would not be over roots of unity and constructive interference is not expected. Therefore the peaking at powers of p could take place. This picture does not support the hypothesis that the zeros of zeta are complex rational numbers and that the values of 1/αK would correspond to zeros of zeta and would therefore be complex rationals, as the simplest view about coupling constant evolution would suggest.

  2. What if one replaces log(p) with Log(p)= p/π(p), which is rational, and thus ζ with the modified ζ? For large enough values of p one has Log(p) ≈ log(p), and finite computational accuracy does not allow one to distinguish Log(p) from log(p). For Log(p) one could thus understand the finding in terms of constructive interference for the roots of unity if the roots of zeta are of the form s= 1/2+ i(m/n)2π. The value of y cannot then be a rational number: 1/αK would have a real part equal to y and proportional to 2π, which would require an infinite-D extension of rationals. In the p-adic sectors an infinite-D extension does not conform with the finiteness of cognition.

  3. Numerical calculations have however a finite accuracy, and allow also the possibility that y is an algebraic number approximating a rational multiple of 2π in some natural manner. In the p-adic sectors one would obtain the spectrum of y and 1/αK as algebraic numbers by replacing 2π in the formula 1/αK= -is= y - i/2, y= q× 2π, q= r/s, with its approximate value:

    2π → n sin(2π/n)= -i(n/2)(exp(i2π/n)- exp(-i2π/n))

    for an extension of rationals containing the n:th root of unity. The maximal value of n would give the best approximation. This approximation performed by fundamental physics should appear in the number theoretic scattering amplitudes in the expressions for 1/αK to make it an algebraic number.

    y can be approximated in the same manner in the p-adic sectors, and a natural guess is that n=p defines the maximal root of unity exp(i2π/p). The phase exp(i log(p)y) for y= q sin(2π/n(y)), q= r/s, is replaced with the approximation induced by log(p) → Log(p) and 2π → n sin(2π/n), giving

    exp(i log(p)y) → exp(iq(y) sin(2π/n(y)) p/π(p)) .

    If s in q= r/s does not contain higher powers of p, the exponent exists p-adically for this extension and can be expanded in positive powers of p as

    ∑n i^n q^n sin(2π/p)^n (p/π(p))^n/n! .

    This makes sense p-adically.

    Also the actual complex roots of ζ could be algebraic numbers:

    s= i/2+ q× n(y) sin(2π/n(y)) .

    If the proposed correlation between p-adic primes p ≈ 2^k, k prime, and zeros of zeta predicting a reasonable coupling constant evolution for 1/αK is true, one can naturally have n(y)= p(y), where p is the p-adic prime associated with y: the accuracy of the angle measurement would increase with the size scale of CD. For a given p there could be several roots y with the same p(y) but different q(y), giving the same phases or at least phases with the same sign of the real part.

    Whether the roots of the modified ζ are algebraic numbers and reside at the critical line Re(s)=1/2 is an interesting question.
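
The accuracy of the root-of-unity approximation of 2π used above can be quantified in a few lines; the error falls off like 1/n^2:

```python
# Error of the approximation 2*pi ≈ n*sin(2*pi/n); it falls off like
# (2*pi)**3/(6*n**2), so larger roots of unity give better approximations.
from math import pi, sin

def two_pi_approx(n):
    return n * sin(2 * pi / n)

for n in (5, 10, 100, 1000):
    print(n, abs(two_pi_approx(n) - 2 * pi))
```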

Remark: This picture allows many variants. For instance, if one assumes the standard zeta, one could consider the possibility that the roots y(p) associated with p and giving rise to constructive interference are of the form y= q×(Log(p)/log(p))× p sin(2π/p), q= r/s.

See the article The Recent View about Twistorialization in TGD Framework or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, March 08, 2018

Could Posner molecules and cortex realize a representation of genetic code?

They are now starting to get on the right track in quantum computation! See the popular article in Cosmos about an advance in quantum computing by an Australian research team led by Michelle Simmons, published in Nature Communications. The lifetime of qubits represented by phosphorus (P) nuclei having spin 1/2 is unexpectedly long, so that they are excellent candidates for qubits in quantum computation.

They have started to learn from biology! P is a key atom in metabolism, and Fisher already earlier suggested that Posner molecules containing 9 Ca atoms and 6 phosphates could be a central element of life. Just now I realized that the P atoms of a Posner molecule could serve as qubits, and the 6 qubits in a Posner molecule could realize the genetic code with 64 code words. Could our bone marrow be performing massive quantum computations utilizing the genetic code?!

Remark: A totally unrelated association: the magic number 6 appears also in the structure of the cortex: could the six layers represent qubits and realize the genetic code?

Posner molecules are the basic stuff of bones. What is required is however a non-standard value of heff= n×h giving a longer lifetime for the qubits realized as nuclear spins. The cyclotron frequency in the endogenous magnetic field Bend= 0.2 Gauss, central in TGD inspired biology, is 9.4 Hz, in the alpha band, and the Larmor frequency of the O nucleus is 10.9 Hz, in the alpha band again!
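
The cyclotron frequency scale can be checked from f= qB/(2π m). The text does not specify the ion, so the choice of a singly charged 31P ion below is an illustrative assumption:

```python
# Cyclotron frequency f = qB/(2 pi m) in Bend = 0.2 Gauss. The text does
# not name the ion; a singly charged 31P ion is an illustrative assumption.
from math import pi

E_CHARGE = 1.602176634e-19     # elementary charge, C
AMU = 1.66053906660e-27        # atomic mass unit, kg
B_END = 0.2e-4                 # 0.2 Gauss in tesla

def cyclotron_hz(mass_amu, charge=1):
    return charge * E_CHARGE * B_END / (2 * pi * mass_amu * AMU)

print(f"31P+ in Bend: {cyclotron_hz(31):.1f} Hz")   # about 10 Hz, alpha band
```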

See the earlier posting about Posner molecule and the article explaining TGD view about Posner molecule.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, March 07, 2018

General number-theoretical ideas about coupling constant evolution

The discrete coupling constant evolution would be associated with the scale hierarchy for CDs and the hierarchy of extensions of rationals.

  1. Discrete p-adic coupling constant evolution would naturally correspond to the dependence of coupling constants on the size of CD. For instance, I have considered a concrete but rather ad hoc proposal for the evolution of the Kähler coupling strength based on the zeros of Riemann zeta (see this). Number theoretical universality suggests that the size scale of CD, identified as the temporal distance between the tips of CD using a suitable multiple of the CP2 length scale as a length unit, is an integer, call it l. The prime factors of this integer could correspond to the preferred p-adic primes for a given CD.

  2. I have also proposed that the so-called ramified primes of the extension of rationals correspond to the physically preferred primes. Ramification is algebraically analogous to criticality in the sense that two roots, understood in a very general sense, coincide at criticality. Could the primes appearing as factors of l be ramified primes of the extension? This would give a strong correlation between the algebraic extension and the size scale of CD.

In quantum field theories coupling constants depend in good approximation logarithmically on the mass scale. In p-adic coupling constant evolution the mass scale would be replaced by an integer n characterizing the size scale of CD, or perhaps by the collection of prime factors of n (note that one cannot exclude rational numbers as size scales). Coupling constant evolution could also depend on the size of the extension of rationals characterized by its order and Galois group.

In both cases one expects approximate logarithmic dependence and the challenge is to define "number theoretic logarithm" as a rational number valued function making thus sense also for p-adic number fields as required by the number theoretical universality.

Coupling constant evolution associated with the size scale of CD

Consider first the coupling constant as a function of the length scale lCD(n)/lCD(1)=n.

  1. The number π(n) of primes p ≤ n behaves approximately as π(n)= n/log(n). This suggests the definition of what might be called the "number theoretic logarithm" as Log(n)= n/π(n). Also iterated logarithms such as log(log(x)) appearing in coupling constant evolution would have number theoretic generalizations.

  2. If the p-adic variant of Log(n) is mapped to its real counterpart by canonical identification involving the replacement p → 1/p, the behavior can be very different from that of the ordinary logarithm. Log(n) increases however very slowly, so that in the generic case one can expect Log(n)<pmax, where pmax is the largest prime factor of n, so that there would be no dependence on p for pmax and the image under canonical identification would be number theoretically universal.

    For n= p^k, where p is a small prime, the situation changes since Log(n) can be larger than the small prime p. Primes near powers of 2, and perhaps also primes near powers of 3 and 5 at least, seem to be physically special. For instance, for a Mersenne prime Mk= 2^k-1 there would be a dramatic change in the step Mk → Mk+1= 2^k, which might relate to its special physical role.

  3. One can consider also the analog of Log(n) as

    Log(n)= ∑p kp Log(p) ,

    where p^kp is a factor of n. Log(n) would be a sum of the number theoretic analogs for the prime factors and would carry information about them.

    One can extend the definition of Log(x) to rational values x= m/n of the argument. The logarithm Logb(x) in base b= r/s can be defined as Logb(x)= Log(x)/Log(b).

  4. For p ∈ {2,3,5} one has Log(p)>log(p), whereas for larger primes one has Log(p)<log(p). One has Log(2)= 2 > log(2)= .693..., Log(3)= 3/2= 1.5 > log(3) ≈ 1.099, and Log(5)= 5/3= 1.666.. > log(5) ≈ 1.609. For p=7 one has Log(7)= 7/4= 1.75 < log(7) ≈ 1.946. Hence these primes and CD size scales n involving large powers of p ∈ {2,3,5} ought to be physically special, as indeed conjectured on the basis of p-adic calculations and some observations related to music and biological evolution (see this).

    In particular, for Mersenne primes M_k = 2^k - 1 one would have Log(M_k) ≈ k log(2) for large enough k. For Log(2^k) the additive definition gives k × Log(2) = 2k > log(2^k) = k log(2): there would be a sudden increase in the value of Log(n) in the step n = M_k → 2^k. This jump in p-adic length scale evolution might relate to the very special physical role of Mersenne primes strongly suggested by p-adic mass calculations (see this).
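The numerical claims of points 4 and 5 above are easy to check; a minimal sketch (prime counting by trial division, adequate at these scales):

```python
from math import log

def prime_pi(n: int) -> int:
    """pi(n): the number of primes <= n."""
    def is_prime(m: int) -> bool:
        return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))
    return sum(1 for m in range(2, n + 1) if is_prime(m))

def Log(n: int) -> float:
    """Number theoretic logarithm Log(n) = n/pi(n)."""
    return n / prime_pi(n)

# Crossover: Log(p) > log(p) for p = 2, 3, 5 but Log(7) = 1.75 < log(7)
for p in (2, 3, 5, 7):
    print(p, Log(p), log(p))

# Jump at a Mersenne prime, e.g. M_7 = 127: Log(127) = 127/31 is of the
# order of log(127), while the additive definition gives
# Log(2^7) = 7*Log(2) = 14, far above log(2^7) = 7*log(2)
k = 7
print(Log(2 ** k - 1), k * Log(2), k * log(2))
```

The jump from Log(M_k) ~ log(M_k) to 2k at n = 2^k is exactly the discontinuity described in the text.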

  5. One can wonder whether one could replace the log(p) appearing as a unit in p-adic negentropy with the rational unit Log(p) = p/π(p) to gain number theoretical universality. One could then interpret the p-adic negentropy as a real or p-adic number for some prime. Interestingly, |Log(p)|_p = 1/p approaches zero for large primes p (the eye cannot see itself!), whereas |Log(p)|_q = 1/|π(p)|_q has large values for the prime power factors q^r of π(p).
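The p-adic norms quoted here follow directly from the p-adic valuation of the rational Log(p) = p/π(p); a minimal sketch using exact rationals (the function name is hypothetical):

```python
from fractions import Fraction

def padic_norm(x: Fraction, p: int) -> Fraction:
    """|x|_p = p^(-v), where v is the power of p in the rational x."""
    if x == 0:
        return Fraction(0)
    v = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:   # powers of p in the numerator raise the valuation
        num //= p; v += 1
    while den % p == 0:   # powers of p in the denominator lower it
        den //= p; v -= 1
    return Fraction(1, p) ** v

# Log(7) = 7/pi(7) = 7/4: |7/4|_7 = 1/7, while |7/4|_2 = 4 since pi(7) = 4 = 2^2
```

This makes concrete the asymmetry in the text: the p-adic norm of Log(p) is small in the p-adic sector itself but large in the q-adic sectors for primes q dividing π(p).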

Coupling constant evolution associated with the extension of rationals

Consider next the dependence on the extension of rationals. The natural algebraization of the problem is to consider the Galois group of the extension.

  1. Consider first the counterparts of primes and prime factorization for groups. The counterparts of primes are simple groups, which have no proper non-trivial normal subgroups H satisfying gH = Hg (normality implies invariance under the inner automorphisms of G). Simple groups have no decomposition into a product of subgroups. If a group G has a normal subgroup H, it can be built from H and the quotient G/H, and any finite group can in this manner be decomposed into simple composition factors.

    All finite simple groups have been classified (see this): there are the cyclic groups of prime order, the alternating groups, 16 families of simple groups of Lie type, and 26 sporadic groups. The sporadic groups include 20 groups occurring as subquotients of the Monster group and 6 groups, which for some reason are referred to as pariahs.

  2. Suppose that finite groups can be ordered so that one can assign a number N(G) to the group G. The roughest ordering criterion is based on ord(G). For a given order ord(G) = n one has all the groups that are products of cyclic groups associated with the prime factors of n, plus products involving non-Abelian groups, whose order is not prime. Hence N(G) > ord(G) holds true in general. For groups with the same order one needs additional ordering criteria, which could relate to the complexity of the group; the number of simple factors would serve as one such criterion.

    If it is possible to define N(G) in a natural manner, then for a given G one can define the number π1(N(G)) of simple groups (analogs of primes) not larger than G. The first guess is that π1(N(G)) varies slowly as a function of G. Since each Z_p for prime p is a simple group, one has π1(N(G)) ≥ π(N(G)).

  3. One can consider two definitions of the number theoretic logarithm for groups, call it Log1.

    a) Log1(N(G)) = N(G)/π1(N(G)) ,

    b) Log1(G) = ∑_i k_i Log1(N(G_i)) ,
    Log1(N(G_i)) = N(G_i)/π1(N(G_i)) .

    Option a) does not provide information about the decomposition of G into a product of simple factors. For Option b) one decomposes G into a product of simple groups G_i, G = ∏_i G_i^{k_i}, and defines the logarithm as above so that it carries information about the simple factors of G.

  4. One could organize the groups with the same order into the same equivalence class. In this case the above definitions would give

    a) Log1(ord(G)) = ord(G)/π1(ord(G)) < Log(ord(G)) ,

    b) Log1(ord(G)) = ∑_i k_i Log1(ord(G_i)) , Log1(ord(G_i)) = ord(G_i)/π1(ord(G_i)) .

    Besides groups with prime orders there are non-Abelian simple groups with non-prime orders. The occurrence of the same order for two non-isomorphic finite simple groups is very rare (see this). This suggests that π1(ord(G)) < ord(G), so that Log1(ord(G))/ord(G) < 1 would be true.

  5. For orders n(G) ∈ {2,3,5} one has Log1(n(G)) = Log(n(G)) > log(n(G)), so that orders n(G) involving large powers of p ∈ {2,3,5} would be special also for the extensions of rationals. S3 with order 6 is the smallest non-Abelian group (it is not simple, since A3 is a normal subgroup; the first non-Abelian simple group is A5 with order 60). One has π1(6) = 4, giving Log1(6) = 6/4 = 1.5 < log(6) ≈ 1.79, so that order 6 differs from the prime orders below it.
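A toy version of the counting function π1, as a function of group order alone, can be sketched using the well-known orders of the smallest non-Abelian finite simple groups (A5, PSL(2,7), A6, ...). Treating π1 as a function of the order alone is of course a simplification of the N(G)-based definition above:

```python
def prime_pi(n: int) -> int:
    """pi(n): number of primes <= n, counting one simple group Z_p per prime."""
    def is_prime(m: int) -> bool:
        return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))
    return sum(1 for m in range(2, n + 1) if is_prime(m))

# Orders of the smallest non-Abelian finite simple groups:
# A5, PSL(2,7), A6, PSL(2,8), PSL(2,11), PSL(2,13), PSL(2,17), A7
NONABELIAN_SIMPLE_ORDERS = [60, 168, 360, 504, 660, 1092, 2448, 2520]

def pi1(n: int) -> int:
    """Toy count of simple groups with order <= n."""
    return prime_pi(n) + sum(1 for o in NONABELIAN_SIMPLE_ORDERS if o <= n)

def Log1(n: int) -> float:
    return n / pi1(n)

# pi1(60) = pi(60) + 1 = 17 + 1 = 18, so Log1(60) = 60/18 = 10/3
```

Since non-Abelian simple orders are sparse, pi1(n) stays close to pi(n), in line with the expectation that Log1 behaves much like Log.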

To sum up, the number theoretic logarithm could provide an answer to the long-standing question of what makes Mersenne primes and also other small primes so special.

See the article The Recent View about Twistorialization in TGD Framework or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, March 05, 2018

Summary about twistorialization in TGD framework

Since this contribution represents in a well-defined sense a breakthrough in the understanding of the TGD counterparts of scattering amplitudes, it is useful to summarize the basic results deduced above as a polished answer to a Facebook question.

There are two diagrammatics: Feynman diagrammatics and twistor diagrammatics.

  1. A virtual state is an auxiliary mathematical notion related to Feynman diagrammatics coding for perturbation theory. Virtual particles in Feynman diagrammatics are off mass shell.

  2. In standard twistor diagrammatics one obtains counterparts of loop diagrams. Loops are replaced with diagrams in which particles in general have complex four-momenta, which are however light-like: on mass shell in this sense. The BCFW recursion formula provides a powerful tool to calculate the loop corrections recursively.

  3. The Grassmannian approach, in which the Grassmannians Gr(k,n) consisting of k-planes in n-D space play a central role, gives additional insights into the calculation and hints about the possible interpretation.

  4. There are two problems: the twistor counterparts of non-planar diagrams are not yet understood, and physical particles are not massless in the 4-D sense.

In the TGD framework the twistor approach generalizes.
  1. Massless particles in the 8-D sense can be massive in the 4-D sense, so that one can describe also massive particles. If loop diagrams are not present, also the problems produced by non-planarity disappear.

  2. There are no loop diagrams: radiative corrections vanish. ZEO does not allow one to define them, and they would spoil the number theoretical vision, which allows only scattering amplitudes that are rational functions of the data about external particles. Coupling constant evolution - something very real - is now discrete and dictated to a high degree by number theoretical constraints.

  3. This is nice but in conflict with unitarity if momenta are 4-D. But momenta are 8-D in the M8 picture (and satisfy quaternionicity as an additional constraint) and the problem disappears! There is a single pole at zero mass, but in the 8-D sense, and also many-particle states have vanishing mass in the 8-D sense: this gives all the cuts in 4-D mass squared for all many-particle states. For many-particle states not satisfying this condition scattering rates vanish: these states do not exist in any operational sense! This is certainly the most significant new discovery in the recent contribution.

    The BCFW recursion formula for the calculation of amplitudes trivializes and one obtains only tree diagrams: no recursion is needed. A finite number of steps is needed for the calculation, and these steps are well understood at least in the 4-D case - even I might be able to calculate them in the Grassmannian approach!

  4. To calculate the amplitudes one must be able to formulate the twistorialization explicitly in the 8-D case. I have made explicit proposals but have no clear understanding yet. In fact, BCFW makes sense also in higher dimensions, unlike the Grassmannian approach, and it might be that one can calculate the tree diagrams in the TGD framework using 8-D BCFW at the M8 level and then transform the results to M4× CP2.
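The arithmetic behind point 1 - masslessness in the 8-D sense permitting massiveness in the 4-D sense - reduces to splitting an 8-momentum into M4 and Euclidean parts; a minimal numeric sketch (the flat 4+4 component split is an illustrative assumption, not the full M8 geometry):

```python
def m4_mass_squared(p8):
    """For an 8-momentum p8 = (E, px, py, pz, q1, q2, q3, q4) that is
    light-like in the 8-D sense (E^2 - |p|^2 - |q|^2 = 0), the 4-D mass
    squared m^2 = E^2 - |p|^2 equals |q|^2 > 0: massive in the 4-D sense."""
    E, px, py, pz, q1, q2, q3, q4 = p8
    return E ** 2 - (px ** 2 + py ** 2 + pz ** 2)

# Light-like in 8-D: E^2 = |p|^2 + |q|^2, e.g. E = 5, |p|^2 = 9, |q|^2 = 16,
# so the 4-D mass squared is 16 although the 8-D mass squared vanishes.
p8 = (5.0, 3.0, 0.0, 0.0, 4.0, 0.0, 0.0, 0.0)
```

The 4-D mass squared thus varies continuously with the Euclidean components, which is why the single 8-D massless pole can reproduce a continuum of 4-D mass shells.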

What I said above does not yet contain anything about Grassmannians.
  1. The mysterious Grassmannians Gr(k,n) might have a beautiful interpretation in TGD: at the M8 level they could correspond to reduced WCWs - a highly natural notion also at the M4× CP2 level - obtained by fixing the numbers of external particles in diagrams and performing number theoretical discretization for the space-time surface in terms of a cognitive representation consisting of a finite number of space-time points.

    Besides Grassmannians also other flag manifolds - having Kähler structure and maximal symmetries and thus the structure of a homogeneous space G/H - can be considered and might be associated with the dynamical symmetries as remnants of the super-symplectic isometries of WCW.

  2. Grassmannian residue integration is a somewhat frustrating procedure: it gives the amplitude as a sum of contributions from a finite number of residues. Why does this work, when the outcome is determined by a finite number of points of the Grassmannian?!

    In the M8 picture of TGD, cognitive representations at the space-time level - finite sets of points determining the space-time surface completely as the zero locus of the real or imaginary part of an octonionic polynomial - would actually give the WCW coordinates of the space-time surface in a finite resolution.

    The residue integrals in twistor diagrams would be the way to realize quantum classical correspondence by associating a space-time surface to a given scattering amplitude via the cognitive representation determining it. This would also give the scattering amplitude.

    The cognitive representation would be highly unique: perhaps unique modulo the action of the Galois group of the extension of rationals. Symmetry breaking for the Galois representation would give rise to supersymmetry breaking. The interpretation of supersymmetry would however be different: many-fermion states created by fermionic oscillator operators at partonic 2-surfaces give rise to a representation of supersymmetry in the TGD sense.

See the article The Recent View about Twistorialization in TGD Framework or the chapter with the same title.


The Recent View about Twistorialization in TGD Framework

The twistorialization of TGD has now reached quite precise formulation and strong predictions are emerging.

  1. A proposal made already earlier is that scattering diagrams, as analogs of twistor diagrams, are constructible as tree diagrams for CDs connected by free particle lines. Loop contributions are not even well-defined in zero energy ontology (ZEO) and are in conflict with the number theoretic vision. The coupling constant evolution would be discrete and associated with the scale of CDs (p-adic coupling constant evolution) and with the hierarchy of extensions of rationals defining the hierarchy of adelic physics.

  2. The reduction of the scattering amplitudes to tree diagrams is in conflict with unitarity in the 4-D situation. The imaginary part of the scattering amplitude would have a discontinuity proportional to the scattering rate only for many-particle states with light-like total momenta; scattering rates would vanish identically for the physical momenta of many-particle states.

    In the TGD framework the states would however be massless in the 8-D sense. The massless pole corresponds now to a continuum for the M4 mass squared, and one would obtain the unitarity cuts from a pole at P^2 = 0! Scattering rates would be non-vanishing only for many-particle states having light-like 8-momentum, which would pose a powerful condition on the construction of many-particle states. This strong form of conformal symmetry has highly non-trivial implications concerning color confinement.

  3. The key idea is number theoretical discretization in terms of "cognitive representations": space-time points with M8-coordinates in an extension of rationals and therefore shared by both the real and the various p-adic sectors of the adele. Discretization realizes measurement resolution, which becomes an inherent aspect of physics rather than something forced by the observer as an outsider. This fixes the space-time surface completely as the zero locus of the real or imaginary part of an octonionic polynomial.

    This must imply the reduction of "world of classical worlds" (WCW) corresponding to a fixed number of points in the extension of rationals to a finite-dimensional discretized space with maximal symmetries and Kähler structure.

    The simplest identification of the reduced WCW would be as a complex Grassmannian; a more general identification would be as a flag manifold. More complex options can of course be considered. The Yangian symmetries of the twistor Grassmann approach, known to act as diffeomorphisms respecting the positivity of the Grassmannian and emerging also in its TGD variant, would have an interpretation as general coordinate invariance for the reduced WCW. This would give a completely unexpected connection between supersymmetric gauge theories and TGD.

  4. The M8 picture implies the analog of SUSY realized in terms of polynomials of super-octonions, whereas the H picture suggests that supersymmetry is broken in the sense that many-fermion states, as analogs of components of a super-field at partonic 2-surfaces, are not local. This requires breaking of SUSY. At the M8 level the breaking could be due to the reduction of the Galois group to its subgroup G/H, where H is the normal subgroup leaving the points of the cognitive representation defining the space-time surface invariant. As a consequence, a local many-fermion composite in M8 would be mapped to a non-local one in H by the M8-H correspondence.

See the article The Recent View about Twistorialization in TGD Framework or the chapter with the same title.
