
Tuesday, March 13, 2018

Strange finding about galactic halo as a possible further support for TGD based model of galaxies

A team led by Maria Bergemann from the Max Planck Institute for Astronomy in Heidelberg has studied a small population of stars in the halo of the Milky Way (MW) and found their chemical composition to closely match that of the galactic disk (see this). This similarity provides compelling evidence that these stars originated from within the disk rather than from merged dwarf galaxies. The proposed reason for this stellar migration is oscillation of the MW disk as a whole, induced by its tidal interaction with a passing massive satellite galaxy.

One can divide the stars in the MW into those in the galactic disk and those in the galactic halo. The halo contains gigantic structures consisting of clouds and streams of stars rotating around the center of the MW. These structures have been identified as a kind of debris thought to reflect the violent past of the MW, involving collisions with smaller galaxies.

The scientists investigated 14 stars located in two different structures in the galactic halo, the Triangulum-Andromeda (Tri-And) and the A13 stellar over-densities, which lie on opposite sides of the galactic disk plane. Earlier studies of the motion of these two diffuse structures revealed that they are kinematically associated and could relate to the Monoceros Ring, a ring-like structure that twists around the Galaxy. The two stellar over-densities were each determined to lie about 5 kiloparsec (roughly 16000 ly) above and below the galactic plane. Chemical analysis of the stars, made possible by their spectral lines, demonstrated that they must originate from the MW itself, which came as a complete surprise.

The proposed model explains the findings in terms of vertical vibrations of the galactic disk analogous to those of a drum membrane. In particular, the fact that the structures lie above and below the Monoceros Ring supports this idea. The vibrations would be induced by the gravitational interaction of the ordinary and dark matter of the galactic halo with a passing satellite galaxy. The figure of the article (see this) illustrates what the pattern of these vertical vibrations would look like according to simulations.

In the TGD framework this model is modified, since the dark matter halo is replaced with a cosmic string. Due to the absence of the dark matter halo, the motion along the cosmic string is free apart from the gravitational attraction caused by the galactic disk. The cosmic string forces the migrated stars to rotate around it in a plane parallel to the galactic plane, and the stars studied indeed belong to ring-like structures: the prediction is that these rings rotate around the axis of the galaxy.

One can argue that for stars very far from the galactic plane - say in a dwarf galaxy - the halo model of dark matter suggests that the orbital plane is arbitrary but goes through the galactic center, since the spherically symmetric dark matter halo dominates the mass density. TGD would predict that the orbital plane is parallel to the galactic plane.

Are the oscillations of the galactic plane necessary in TGD framework?

  1. The large size and ring shape of the migrated structures suggest that oscillations of the disk could have caused them. The model for the oscillations of the MW disk would be essentially that of a local interaction of a membrane (characterized by its tension) with its own gravitational field and with the gravitational field of a passing galaxy, denoted by G in the following. Some stars would be stripped off from the membrane during the oscillations.

  2. If the stars are local knots in a big knot (the galaxy) formed by a long flux tube, as the TGD based model for galaxy formation suggests, one can ask whether reconnections of the flux tube could take place and split off ring-like structures, to which the migrating stars would be associated. This would reduce the situation to the single particle level, and it is interesting to see whether this kind of model might work. One can also ask whether the stripping could be induced by the interaction with G without considerable oscillations of the MW.

The simplest toy model for the interaction of MW with G would be the following. I have proposed this model of cosmic traffic accidents already earlier. Also the fusion of blackholes could be made probable if the blackholes are associated with the same cosmic string (stars would be subknots of galactic knots).
  1. G moves past the MW and strips off stars and possibly also larger structures from the MW: denote such structures by O. Since the stripped objects on both sides of the MW are at the same distance, it seems that the only plausible direction of motion of G is along the cosmic string along which galaxies are like pearls in a necklace. G would go through the MW! If the model works, it gives support for the TGD view about galaxies.

    One can of course worry about the dramatic implications of head-on collisions of galaxies, but it is interesting to see whether the model might work at all. On the other hand, one can ask whether the galactic blackhole of the MW could have been created in the collision, possibly via the fusion of the blackhole associated with G with that of the MW, in analogy with the fusions of blackholes detected by LIGO.

  2. A reasonable approximation is that the motions of G and MW are not considerably affected in the collision. MW is stationary and G arrives with a constant velocity v along the axis of the cosmic string above the MW plane. In the region between the galactic planes of G and MW the constant accelerations caused by G and MW have opposite directions, so that one has

    gtot= gG - gMW between the galactic planes and above the MW plane ,

    gtot= -gG + gMW between the galactic planes and below the MW plane ,

    gtot= -gG - gMW above both galactic planes ,

    gtot= gG + gMW below both galactic planes .


    The situation is completely symmetric under reflection with respect to the galactic plane if one assumes that the situation in the galactic plane itself is not affected considerably. Therefore it is enough to look at what happens above the MW plane.

  3. If G is more massive, one can say that it attracts the material in MW and can induce oscillatory wave motion, whose amplitude could however be small. This would induce the reconnections of the cosmic string stripping objects O from the MW, and O would experience an upwards acceleration gtot= gG - gMW towards G (note that O also rotates around the cosmic string). After O has passed G, it continues its motion in the vertical direction, experiences the deceleration gtot= -gG - gMW, and eventually begins to fall back towards the MW.

    One can parameterize the acceleration caused by G as gG= (1+x) × gMW, x>0, so that the acceleration felt by O in the middle region between the planes is gtot= gG - gMW= x × gMW. Above the planes of both G and MW the acceleration is gtot= -(2+x) gMW.

  4. Denote by T the moment when O and G pass each other. One can express the vertical height h and velocity v of O in the two regions above the MW plane as

    h(t)= [(gG-gMW)/2] t^2 , v(t)= (gG-gMW) t for t<T ,

    h(t)= -[(gG+gMW)/2] (t-T)^2 + v(T)(t-T) + h(T) for t>T ,

    v(T)= (gG-gMW) T , h(T)= [(gG-gMW)/2] T^2 .

    Note that the time parameter T tells how long it takes for O to reach G after it has been stripped off from the MW. A naive estimate for the value of T is the time scale in which the gravitational field of the galactic disk begins to look like that of a point mass.

    This would suggest that h(T) is of the order of the radius R of MW, so that one would have, using gG= (1+x)gMW,

    T ∼ (1/x)^{1/2} (2R/gMW)^{1/2} .

  5. The direction of motion of O changes at the time Tmax defined by v(Tmax)=0. One has

    Tmax= [2gG/(gG+gMW)] T ,

    hmax= -[(gG+gMW)/2] (Tmax-T)^2 + v(T)(Tmax-T) + h(T) .


  6. For t>Tmax one has

    h(t)= -[(gG+gMW)/2] (t-Tmax)^2 + hmax ,

    hmax= [(gG+gMW)/2] (Tmax-T)^2 + h(T) ,

    where the latter form follows by using v(T)= (gG+gMW)(Tmax-T).

    Expressing hmax in terms of T and the parameter x= (gG-gMW)/gMW one has

    hmax= y(x) gMW (T^2/2) ,

    y(x)= 2x(1+x)/(2+x) ≈ x for small values of x .

  7. If one assumes that hmax>hnow, where hnow ∼ 1.2× 10^5 ly is the recent height of the objects considered, one obtains from hmax>hnow an estimate for the time T:

    T> [(2+x)/(2x(1+x))]^{1/2} T0 , T0= (2hnow/gMW)^{1/2} .

    Note that Tmax<2T holds true.
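The piecewise kinematics above can be cross-checked with a small numerical sketch (arbitrary units and parameter values of my own choosing). The script integrates the vertical motion of a stripped object O and compares the maximal height with the closed-form value hmax= h(T)+ v(T)^2/[2(gG+gMW)]:

```python
# Toy model: vertical motion of a stripped object O in the piecewise constant
# field of MW and the passing galaxy G (arbitrary units, illustrative only).
# Assumptions: O starts at rest in the MW plane at t=0; for t<T the
# acceleration is gG - gMW (between the planes), for t>T it is -(gG + gMW)
# (above both planes), as in the text.

def simulate(gMW, x, T, dt=1e-4):
    """Integrate h(t) until O returns to the MW plane; return the max height."""
    gG = (1.0 + x) * gMW
    h, v, t, hmax = 0.0, 0.0, 0.0, 0.0
    while h >= 0.0:
        a = (gG - gMW) if t < T else -(gG + gMW)
        v += a * dt
        h += v * dt
        t += dt
        hmax = max(hmax, h)
    return hmax

gMW, x, T = 1.0, 0.1, 1.0
gG = (1.0 + x) * gMW

# Closed-form values from the text:
hT = 0.5 * (gG - gMW) * T**2            # height when O passes G
vT = (gG - gMW) * T                     # velocity when O passes G
hmax_exact = hT + vT**2 / (2.0 * (gG + gMW))

hmax_num = simulate(gMW, x, T)
print(hmax_exact, hmax_num)
```

The simulated and closed-form maximal heights agree to the accuracy of the Euler step.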

It is interesting to see whether the model really works.
  1. It is easy to find (one can check the numerical factors here) that gMW can be expressed in the limit of an infinitely large galactic disk as

    gMW= 2π G (dM/dS)= 2GM/R^2 ,

    where R is the radius of the galactic disk and dM/dS= M/(π R^2) is the mass of the galactic disk per unit area. This expression is analogous to g= GME/RE^2 at the surface of Earth.

  2. One can express the estimate in terms of the acceleration g= 10 m/s^2 at the Earth's surface as

    gMW ≈ 2g (RE/R)^2 (M/ME) .

    Using the lower bound R= 10^5 ly for the MW radius, the MW mass M ∼ 10^12 MSun, MSun/ME ≈ 3.3× 10^5 and RE ≈ 6.4× 10^6 m, one obtains gMW ∼ 3× 10^{-11} g.

  3. Using the estimate for gMW one obtains T> [(2+x)/(2x(1+x))]^{1/2} T0 with

    T0 ∼ 10^8 years .

    The estimate T ∼ (1/x)^{1/2} (2R/gMW)^{1/2} proposed above likewise gives T> (1/x)^{1/2} × 10^8 years. The fraction of ordinary mass of the total mass is roughly 10 per cent of the contribution of the dark energy and dark particles associated with the cosmic string. Therefore x<.1 is a reasonable upper bound for the parameter x characterizing the mass difference of G and MW. For x in the range .01-.1 one obtains T in the range .3-1 Gy.
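As a rough numerical cross-check of the order of magnitude (SI constants rounded; a sketch, not a careful estimate), one can evaluate gMW= 2GM/R^2 and the free-fall time scale T0= (2hnow/gMW)^{1/2} directly:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # solar mass, kg
ly = 9.461e15          # light year, m

M = 1e12 * M_sun       # MW mass used in the text
R = 1e5 * ly           # MW disk radius used in the text

# Infinite-disk estimate gMW = 2GM/R^2 from the text
g_MW = 2 * G * M / R**2
print(g_MW)            # of order 3e-10 m/s^2, i.e. ~3e-11 g

# Time scale T0 = sqrt(2*h_now/g_MW) for h_now = 1.2e5 ly, in years
h_now = 1.2e5 * ly
T0_years = math.sqrt(2 * h_now / g_MW) / 3.156e7
print(T0_years)
```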

See the article Four new strange effects associated with galaxies or the chapter TGD and astrophysics.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, March 12, 2018

Dark matter and 21 cm line of hydrogen

Dark matter in TGD sense corresponds to heff/h=n phases of ordinary matter associated with magnetic flux tubes. These flux tubes would be n-sheeted covering spaces, and n would correspond to the dimension of the extension of rationals in which Galois group acts. The evidence for this interpretation of dark matter is accumulating. Here I discuss one of the latest anomalies - 21-cm anomaly.

Sabine Hossenfelder told about the article discussing the possible interpretation of the so called 21-cm anomaly associated with the hyperfine transition of the hydrogen atom, observed by the EDGES collaboration.

The EDGES Collaboration has recently reported the detection of a stronger-than-expected absorption feature in the global 21-cm spectrum, centered at a frequency corresponding to a redshift of z ≈ 17. This observation has been interpreted as evidence that the gas was cooled during this era as a result of scattering with dark matter. In this study, we explore this possibility, applying constraints from the cosmic microwave background, light element abundances, Supernova 1987A, and a variety of laboratory experiments. After taking these constraints into account, we find that the vast majority of the parameter space capable of generating the observed 21-cm signal is ruled out. The only range of models that remains viable is that in which a small fraction, ≈ 0.3-2 per cent, of the dark matter consists of particles with a mass of ≈ 10-80 MeV and which couple to the photon through a small electric charge, ε ≈ 10^{-6}-10^{-4}. Furthermore, in order to avoid being overproduced in the early universe, such models must be supplemented with an additional depletion mechanism, such as annihilations through a Lμ-Lτ gauge boson or annihilations to a pair of rapidly decaying hidden sector scalars.

What has been found is an unexpectedly strong absorption feature in the 21-cm spectrum at redshift z ≈ 17, corresponding to a comoving distance of about 3× 10^10 ly. The dark matter interpretation would be in terms of the scattering of the baryons of the gas from dark matter at a lower temperature. The anomalously strong absorption of the 21 cm line could be explained by the cooling of the gas caused by a flow of energy to a colder medium consisting of dark matter. If I have understood correctly, this would generate a temperature difference between the background radiation and the gas and a consequent energy flow to the gas, inducing the anomaly.
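For orientation: the 21-cm hyperfine line has rest frequency 1420.4 MHz, so at z ≈ 17 it is observed near 78 MHz, the band in which the EDGES instrument sees the absorption dip. A one-line check:

```python
# Observed frequency of the redshifted 21-cm line
f_rest = 1420.406          # MHz, rest frequency of the hydrogen hyperfine line
z = 17
f_obs = f_rest / (1 + z)   # redshift scales frequency by 1/(1+z)
print(f_obs)               # about 78.9 MHz
```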

The article excludes a large part of the parameter space able to generate the observed signal. The idea is that the baryons of the gas interact with the dark matter, the interaction being mediated by photons. The small em charge of the new particle is needed to make it "dark enough". My conviction is that tinkering with the quantization of electromagnetic charge is only a symptom of how desperate the situation concerning the interpretation of dark matter in terms of exotic particles is. Something genuinely new in physics is involved, and the old recipes of particle physicists do not work.

In the TGD framework the dark matter at a lower temperature would consist of heff/h=n phases of ordinary matter residing at magnetic flux tubes. This kind of energy transfer between ordinary and dark matter is a general signature of dark matter in the TGD sense, and some experiments relating to primordial life forms give indications for this kind of energy flow in the lab scale (see this).

The ordinary photon line appearing in the Feynman diagram describing the exchange of a photon would be replaced with a photon line containing a vertex in which the photon transforms to a dark photon. The coupling in the vertex - call it m^2 - would have dimensions of mass squared. This would transform the coupling e^2 associated with the photon exchange to e^2 m^2/p^2, where p^2 is the photon's virtual mass squared. The slow rate for the transformation of an ordinary photon to a dark photon could be seen as an effective reduction of the electromagnetic charge of the dark matter particle from its quantized value.

Remark: In biological systems dark cyclotron photons would transform to ordinary photons and would be interpreted as bio-photons with energies in visible and UV.

To sum up, the importance of this finding is that it supports the view about dark matter as ordinary particles in a new phase. There are electromagnetic interactions but the transformation of ordinary photons to dark photons slows down the process and makes these exotic phases effectively dark.

See the article Four new strange effects associated with galaxies or the chapter TGD and astrophysics.


Sunday, March 11, 2018

Could functional equation and Riemann hypothesis generalize?

Number theoretical considerations lead to a modification of the zeta function obtained by replacing the powers n^{-s}= exp(-log(n)s) with the powers exp(-Log(n)s), where the rational valued number theoretic logarithm Log(n) is defined as Log(n)= ∑p kp p/π(p), corresponding to the decomposition n= ∏p p^{kp} of n into a product of powers of primes; here π(p) denotes the number of primes not larger than p. For large primes Log(p)= p/π(p) equals log(p) in good approximation by the prime number theorem. The point of the replacement is that Log(n) carries number theoretical information, so that the definition is very natural. This number theoretic zeta will be denoted by Ζ to distinguish it from the ordinary zeta function denoted by ζ.
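The definition of Log(n) is straightforward to implement exactly with rational arithmetic. The sketch below (stdlib only; the helper names are my own) computes Log(n)= ∑p kp p/π(p) and checks the identity Log(mn)= Log(m)+Log(n):

```python
from fractions import Fraction

def primepi(x):
    """pi(x): the number of primes <= x, via a simple sieve."""
    if x < 2:
        return 0
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(x**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return sum(sieve)

def factorint(n):
    """Return {p: k} with n = prod p**k (trial division)."""
    factors = {}
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def Log(n):
    """Number theoretic logarithm Log(n) = sum_p k_p * p/pi(p), exact rational."""
    return sum(Fraction(p, primepi(p)) * k for p, k in factorint(n).items())

print(Log(2), Log(3), Log(12))   # 2, 3/2, 11/2
```

For instance Log(2)= 2/1= 2 (since π(2)=1) and Log(12)= 2·Log(2)+Log(3)= 11/2, illustrating the additivity that makes Ζ an analog of a partition function.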

It is interesting to list the elementary properties of Ζ before trying to see whether the functional equation for ζ and the Riemann hypothesis generalize.


  1. The replacement log(n)→ Log(n)== ∑p kp Log(p) implies that Ζ codes explicitly number theoretic information. Note that Log(n) satisfies the crucial identity Log(mn)= Log(m)+ Log(n). Ζ is an analog of a partition function, with the rational number valued Log(n) taking the role of energy and 1/s that of a complex temperature. In ZEO this partition function like entity could be associated with a zero energy state as a "square root" of a thermodynamical partition function: in this case complex temperatures are possible. |Ζ|^2 would be the analog of the ordinary partition function.

  2. The reduction of Ζ to a product of "prime factors" 1/[1-exp(-Log(p)s)] holds true by Log(n)== ∑p kp Log(p), Log(p)= p/π(p).

  3. Ζ is a sum of exponentials exp(-Log(n)s), whose individual terms tend to zero for Re(s)>0. For ζ the same holds for the terms exp(-log(n)s): the sum ∑ n^{-s} does not however converge in the region Re(s)<1. Presumably Ζ likewise fails to converge for Re(s)≤ 1. The behavior of the terms exp(-Log(n)s) for large values of n is very similar to that in ζ.

  4. One can express ζ in terms of the η function defined as

    η(s)= ∑ (-1)^n n^{-s} .

    The signs (-1)^n guarantee that η converges (albeit not absolutely) inside the critical strip 0<Re(s)<1.

    By using the decomposition of integers into odd and even ones, one can express ζ in terms of η:

    ζ = η(s)/(-1+2^{1-s}) .

    This definition converges inside the critical strip. Note the pole at s=1 coming from the vanishing of the denominator.

    One can define also Η as the counterpart of η:

    Η(s)= ∑ (-1)^n e^{-Log(n)s} .

    The formula relating ζ and η generalizes: 2^{-s} is replaced with e^{-2s} (since Log(2)=2):

    Ζ = Η(s)/(-1+2e^{-2s}) .

    This definition of Ζ converges in the critical strip Re(s) ∈ (0,1) and also for Re(s)>1.
    Ζ(1-s) converges for Re(s)<1, so that in the Η representation both converge.

    Note however that the pole of ζ at s=1 corresponds for Ζ to a pole at s= log(2)/2, which lies to the left of the critical line Re(s)=1/2. If a symmetrically positioned pole at s= 1-log(2)/2 is not present, the functional equation cannot be true.

  5. Log(n) approaches log(n) for integers n containing no small prime factors p, for which π(p) differs strongly from p/log(p). This suggests that allowing in the sum defining Ζ only the terms exp(-Log(n)s) with n not divisible by primes p<pmax might give a cutoff Ζcut,pmax behaving very much like ζ from which the "prime factors" 1/(1-exp(-Log(p)s)), p<pmax, are dropped. This is just a division of Ζ by these factors and, at least formally, does not affect the zeros of Ζ. An arbitrary number of factors can be dropped. Could this mean that Ζcut has the same or very nearly the same zeros as ζ at the critical line? This sounds paradoxical and might reflect my sloppy thinking: maybe the lack of absolute convergence implies that the conclusion is incorrect.
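As a sanity check on the conventions above, the classical relation ζ(s)= η(s)/(-1+2^{1-s}) with η(s)= ∑ (-1)^n n^{-s} can be verified numerically at s=2, where ζ(2)= π^2/6 (a quick sketch, not part of the argument):

```python
import math

def eta(s, N=200000):
    """Partial sum of eta(s) = sum_{n>=1} (-1)^n n^(-s), the sign convention
    used in the text (first term negative)."""
    return sum((-1)**n * n**(-s) for n in range(1, N + 1))

s = 2.0
# zeta(s) recovered from eta(s) via the relation in the text
zeta_via_eta = eta(s) / (-1 + 2**(1 - s))
print(zeta_via_eta, math.pi**2 / 6)
```

The alternating series converges fast enough at s=2 that the truncation error is negligible.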

The key questions are whether Ζ allows a generalization of the functional equation ξ(s)= ξ(1-s) with ξ(s)= (1/2) s(s-1) Γ(s/2) π^{-s/2} ζ(s), and whether the Riemann hypothesis generalizes. The derivation of the functional equation is quite a tricky task and involves an integral representation of ζ.
  1. One can start from the integral representation of ζ true for Re(s)>0:

    ζ(s)= [1/((1-2^{1-s})Γ(s))] ∫0^∞ [t^{s-1}/(e^t+1)] dt , Re(s)>0 ,

    deducible from the expression in terms of η(s). The factor 1/(e^t+1) can be expanded in the geometric series 1/(e^t+1)= ∑ (-1)^{n-1} e^{-nt}, converging for t>0. One formally performs the integrations by taking nt as the integration variable. The integral gives the result ∑ [(-1)^{n-1}/n^s] Γ(s).

    The generalization of this would be obtained by a generalization of the geometric series:

    1/(e^t+1)= ∑ (-1)^{n-1} e^{-nt} → ∑ (-1)^{n-1} e^{-exp(Log(n))t}

    in the integral representation. This would formally give Ζ: the only difference is that one takes u= exp(Log(n))t as the integration variable.

    One could try to prove the functional equation by using this representation. One proof (see this) starts from the alternative expression of ζ as

    ζ(s)= [1/Γ(s)] ∫0^∞ [t^{s-1}/(e^t-1)] dt , Re(s)>1 .

    One modifies the integration contour to a contour C coming from +∞ above the positive real axis, circling the origin, and returning back to +∞ below the real axis to get a modified representation of ζ:

    ζ(s)= 1/[2i sin(πs)Γ(s)] ∫C [(-w)^{s-1}/(e^w-1)] dw , Re(s)>1 .

    One modifies C further so that the origin is circled along a square with vertices at ± (2n+1)π and ± i(2n+1)π.

    One calculates the integral along C as a residue integral. The poles of the integrand, proportional to 1/(e^w-1), lie on the imaginary axis at w= i2πr, r ∈ Z. The residue integral gives the other side of the functional equation.

  2. Could one generalize this representation to the recent case? One must generalize the geometric series defined by 1/(e^w-1) to the sum ∑ e^{-exp(Log(n))w}. The problem is that one has only a generalization of the geometric series and no closed form for the counterpart of 1/(e^w-1), so that one does not know what the poles are. The naive guess is that one could compute the residue integrals term by term in the sum over n. An equally naive guess would be that at the poles the factors in the sum are equal to unity, as they would be for Riemann zeta. This would give for the poles of the n:th term the guess wn,r= r2π/exp(Log(n)), r ∈ Z. This does not however allow one to deduce the residues at the poles. Note that the pole of Ζ at s= log(2)/2 suggests that the functional equation is not true.

There is however no need for a functional equation if one is only interested in F(s)== Ζ(s)+Ζ(1-s) at the critical line! Also the analog of the Riemann hypothesis follows naturally!
  1. In the representation using Η, F(s) converges in the critical strip and is real (!) at the critical line Re(s)=1/2, as follows from the fact that 1-s= s* for Re(s)=1/2. Hence F(s) is expected to have a large number of zeros at the critical line. Presumably their number is infinite, since Fcut,pmax(s) approaches 2ζcut,pmax for large enough pmax at the critical line.

  2. One can define a different kind of cutoff of Ζ for a given nmax: only the terms with n<nmax are kept in the sum over e^{-Log(n)s}. Call this cutoff Ζcut,nmax. This cutoff must be distinguished from the cutoff Ζcut,pmax obtained by dropping the "prime factors" with p<pmax. The terms in the cutoff are of the form u^{∑ kp p/π(p)}, u= exp(-s). The cutoff is analogous to a polynomial but with fractional powers of u. It can be made a polynomial by the change of variable u→ v= exp(-s/a), where a is the product of the π(p):s associated with all the primes involved with the integers n<nmax.

    One could solve numerically for the zeros of Ζ(s)+Ζ(1-s) using program modules calculating π(p) for a given p and the roots of a complex polynomial of a given order. One could check whether all zeros of Ζ(s)+Ζ(1-s) might reside at the critical line.

  3. One can define also Fcut,nmax(s), to be distinguished from Fcut,pmax(s). At the critical line it reduces to a sum of terms proportional to exp(-Log(n)/2) cos(Log(n)y), n<nmax; the cosines come from combining the phases exp(± iLog(n)y). F(s) is not a sum of rational powers of exp(-iy), unlike Ζ(s). The existence of zeros could be shown by showing that the sign of this function varies as a function of y. The functions cos(Log(n)y) have the period Δy= 2π/Log(n). For small values of n the exponential factors exp(-Log(n)/2) are largest, so that these terms dominate. For them the periods Δy are largest, so that the dominating part of F varies slowly with a large amplitude, and one expects that the sign of both F(s) and Fcut,nmax(s) varies, forcing the presence of zeros.

    One could perhaps interpret the system as a quantum critical system. The rather large, slowly varying oscillatory terms with small Log(n) give a periodic infinite set of approximate roots, and the exponentially smaller, rapidly varying higher terms induce small perturbations of this periodic structure. Near Im(s)=0 all the terms are however in phase, so that the effect of the higher terms is large there and destroys the periodic structure badly for the smallest roots of Ζ.
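The proposed numerical study can be sketched using the Η representation Ζ(s)= Η(s)/(-1+2e^{-2s}) with a finite cutoff (the naming is my own; the brute-force truncation of the conditionally convergent sum is crude and only illustrative). The script verifies that F(s)= Ζ(s)+Ζ(1-s) is real at Re(s)=1/2 and counts sign changes along a stretch of the critical line:

```python
import cmath
from fractions import Fraction

def primepi(x):
    """pi(x) via a simple sieve."""
    if x < 2:
        return 0
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(x**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return sum(sieve)

def Log(n):
    """Log(n) = sum_p k_p * p/pi(p) for n = prod_p p^k_p."""
    total, p, m = Fraction(0), 2, n
    while p * p <= m:
        while m % p == 0:
            total += Fraction(p, primepi(p))
            m //= p
        p += 1
    if m > 1:
        total += Fraction(m, primepi(m))
    return total

# Precompute Log(n) as floats for the truncated sums
LOGS = [float(Log(n)) for n in range(1, 2001)]

def Zeta(s, N=2000):
    """Z(s) = H(s)/(-1 + 2 e^{-2s}) with H truncated at N terms."""
    H = sum((-1)**n * cmath.exp(-LOGS[n-1] * s) for n in range(1, N + 1))
    return H / (-1 + 2 * cmath.exp(-2 * s))

def F(y):
    """Symmetrized function F(s) = Z(s) + Z(1-s) at s = 1/2 + iy."""
    s = complex(0.5, y)
    return Zeta(s) + Zeta(1 - s)

vals = [F(0.5 * k) for k in range(1, 41)]    # scan y in (0, 20]
print(max(abs(v.imag) for v in vals))        # numerically zero: F is real
sign_changes = sum(1 for a, b in zip(vals, vals[1:]) if a.real * b.real < 0)
print(sign_changes)
```

The reality of F at the critical line holds exactly for the truncated sums, since 1-s is the complex conjugate of s there and the coefficients are real; the sign-change count depends on the cutoff and is only indicative.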

To sum up, the definition of the modified zeta and eta functions makes sense, as does the analog of the Riemann Hypothesis. It however seems that the counterpart of the functional equation does not hold true. This is not a problem, since one can define the symmetrized zeta so that it is well-defined in the critical strip.

See the article The Recent View about Twistorialization in TGD Framework or the chapter with the same title.


Friday, March 09, 2018

Number theoretic vision about Riemann zeta and evolution of Kähler coupling strength

I have made several number theoretic speculations related to the possible role of the zeros of Riemann zeta in coupling constant evolution. The basic problem is that it is not even known whether the zeros of zeta are rationals, algebraic numbers, or genuine transcendentals, or belong to all of these categories. Also the question whether number theoretic analogs of ζ defined for p-adic number fields could make sense is interesting.

1. Is number theoretic analog of ζ possible using Log(p) instead of log(p)?

The definition of Log(n) based on the factorization Log(n)== ∑p kp Log(p), Log(p)= p/π(p), allows one to define the number theoretic version of Riemann zeta ζ(s)= ∑ n^{-s} via the replacement n^{-s}= exp(-log(n)s) → exp(-Log(n)s).

  1. In a suitable region of the plane the number-theoretic zeta would have the usual decomposition into factors via the replacement 1/(1-p^{-s}) → 1/(1-exp(-Log(p)s)). p-Adically this makes sense for s= O(p) and thus only for a finite number of primes p for positive integer valued s: one obtains a kind of cutoff zeta. The number theoretic zeta would be sensitive only to a finite number of prime factors of the integer n.

  2. This might relate to the strong physical indications that only a finite number of cognitive representations characterized by p-adic primes are present in given quantum state: the ramified primes for the extension are excellent candidates for these p-adic primes. The size scale n of CD could also have decomposition to a product of powers of ramified primes. The finiteness of cognition conforms with the cutoff: for given CD size n and extension of rationals the p-adic primes labelling cognitive representations would be fixed.

  3. One can expand the region of convergence to larger p-adic norms by introducing an extension of p-adics containing e and some of its roots (e^p is automatically a p-adic number). By introducing roots of unity, one can define the phase factor exp(-iLog(n)Im(s)) for suitable values of Im(s). Clearly, exp(-ipIm(s)/π(p)) must be in the extension used for all primes p involved. One must therefore introduce the prime roots exp(i/π(p)) for the primes appearing in the cutoff. To define the number theoretic zeta for all p-adic integer values of Re(s) and all integer values of Im(s), one should allow all roots of unity exp(i2π/n) and all roots e^{1/n}: this requires an infinite-dimensional extension.

  4. One can thus define a hierarchy of cutoffs of zeta: for these the factorization of zeta into a finite number of "prime factors" takes place in a genuine sense, and the points s= i2πk π(p)/p, k ∈ Z, give rise to poles of the cutoff zeta as poles of the prime factors. The cutoff zeta converges to zero for Re(s)→ ∞ and exists along the angles corresponding to the allowed roots of unity. The cutoff zeta diverges at the points s= i2πk π(p)/p for the primes p appearing in it.

Remark: One could also modify the definition of ζ for complex numbers by replacing exp(-log(n)s) with exp(-Log(n)s), Log(n)= ∑p kp Log(p), to get the prime factorization formula. I will refer to this variant of zeta as modified zeta below.

2. Could the values of 1/αK be given as zeros of ζ or of modified ζ?

I have discussed the possibility that the zeros s= 1/2+iy of Riemann zeta at the critical line correspond to the values of the complex valued Kähler coupling strength αK: s= i/αK (see this). The assumption that p^{iy} is a root of unity for some combinations of p and y [log(p)y= (r/s)2π] was made. This does not allow s to be complex rational. If the exponent of Kähler action disappears from the scattering amplitudes, as M8-H duality requires, one could assume that s has rational values, but also algebraic values are allowed.

  1. If one combines the proposed idea about the Log-arithmic dependence of the coupling constants on the size of the CD and on the algebraic extension with the s= i/αK hypothesis, one cannot avoid the conjecture that the zeros of zeta are complex rationals. It is not known whether this is the case or not. Rationality would not have any strong implications for number theory, but the existence of irrational roots would have (see this). Interestingly, the rationality of the roots would have very powerful physical implications if the TGD inspired number theoretical conjectures are accepted.

    The argument discussed below however shows that complex rational roots of zeta are not favored by the observations about the Fourier transform of the characteristic function for the zeros of zeta. Rather, the findings suggest that the imaginary parts (see this) should be rational multiples of 2π, which does not conform with the vision that 1/αK is an algebraic number. The replacement of log(p) with Log(p) and of 2π with its natural p-adic approximation in an extension allowing roots of unity however allows 1/αK to be an algebraic number. Could the spectrum of 1/αK correspond to the roots of ζ or of modified ζ?


  2. A further conjecture discussed was that there is a 1-1 correspondence between primes p ≈ 2^k, k prime, and the zeros of zeta, so that there would be an order preserving map k→ sk. The support for the conjecture was the rather reasonable predicted coupling constant evolution for αK. Primes near powers of 2 could be physically special because Log(n) decomposes to a sum of Log(p):s and increases dramatically at n= 2^k and slightly above.

    In an attempt to understand why just prime values of k are physically special, I have proposed that k-adic length scales correspond to the size scales of wormhole contacts, whereas particle space-time sheets would correspond to p ≈ 2^k. Could the logarithmic relation between Lp and Lk correspond to the logarithmic relation between p and π(p) in the case that π(p) is prime, and could this condition select the preferred p-adic primes p?

3. The argument of Dyson for the Fourier transform of the characteristic function for the set of zeros of ζ

Consider now the argument suggesting that the roots of zeta cannot be complex rationals. On the basis of numerical evidence Dyson (see this) has conjectured that the Fourier transform of the characteristic function for the critical zeros of zeta consists of multiples of logarithms log(p) of primes, so that one could regard the zeros as a one-dimensional quasi-crystal.

This hypothesis makes sense if the zeros of zeta decompose into disjoint sets such that each set corresponds to its own prime (and its powers) and one has p^{iy}= Um/n= exp(i2πm/n) (see the appendix of this). This hypothesis is also motivated by number theoretical universality (see this).

  1. One can re-write the discrete Fourier transform over the zeros of ζ at the critical line as

    f(x)= ∑y exp(ixy) , y=Im(s) .

    The alternative form reads as

    f(u)= ∑s u^{iy} , u=exp(x) .

    f(u) is concentrated at the powers p^n of primes defining ideals in the set of integers.

    For u= p^n one would have p^{iny}= exp(in log(p)y). Note that k= n log(p) is analogous to a wave vector. If exp(i log(p)y) is a root of unity, as proposed earlier for some combinations of p and y, the Fourier transform becomes a sum over roots of unity for these combinations: this could make possible a constructive interference for the roots of unity which are the same or at least have the same sign. For a given p there should be several values of y(p) with nearly the same value of exp(in log(p)y(p)), whereas other values of y would interfere destructively.

    For general values u= x^n, x ≠ p, the sum would not be over roots of unity and constructive interference is not expected. Therefore the peaking at the powers of p could take place. This picture does not support the hypothesis that the zeros of zeta are complex rational numbers and that the values of 1/αK, corresponding to the zeros of zeta, would therefore be complex rationals, as the simplest view about coupling constant evolution would suggest.

  2. What if one replaces log(p) with Log(p)= p/π(p), which is rational, and thus ζ with the modified ζ? For large enough values of p one has Log(p) ≈ log(p), and finite computational accuracy does not allow one to distinguish Log(p) from log(p). For Log(p) one could thus understand the finding in terms of constructive interference for the roots of unity if the roots of zeta are of the form s= 1/2+i(m/n)2π. The value of y cannot then be a rational number, and 1/αK would have real part equal to y, proportional to 2π, which would require an infinite-dimensional extension of rationals. In the p-adic sectors an infinite-dimensional extension does not conform with the finiteness of cognition.

  3. Numerical calculations have however finite accuracy and allow also the possibility that y is an algebraic number approximating a rational multiple of 2π in some natural manner. In the p-adic sectors one would obtain the spectrum of y and 1/αK as algebraic numbers by replacing 2π in the formula s = 1/2 + iq× 2π, q = r/s, with its approximate value:

    2π → n sin(2π/n) = (n/2i)[exp(i2π/n) − exp(−i2π/n)]

    for an extension of rationals containing the n:th root of unity. The maximum value of n would give the best approximation. This approximation performed by fundamental physics should appear in the number theoretic scattering amplitudes in the expressions for 1/αK to make it an algebraic number.

    y can be approximated in the same manner in the p-adic sectors, and a natural guess is that
    n = p defines the maximal root of unity as exp(i2π/p). The phase exp(i log(p)y) for y = q sin(2π/n(y)), q = r/s, is replaced with the approximation induced by log(p) → Log(p) and 2π → n sin(2π/n), giving

    exp(i log(p)y) → exp(iq(y) sin(2π/n(y)) p/π(p)) .

    If the s in q = r/s does not contain higher powers of p, the exponent exists p-adically for this extension and can be expanded in positive powers of p as

    ∑n (1/n!) i^n q^n sin(2π/p)^n (p/π(p))^n .

    This makes sense p-adically.

    Also the actual complex roots of ζ could be algebraic numbers:

    s = 1/2 + iq× n(y)sin(2π/n(y)) .

    If the proposed correlation between p-adic primes p ≈ 2^k, k prime, and zeros of zeta, predicting a reasonable coupling constant evolution for 1/αK, holds true, one can naturally have n(y) = p(y), where p is the p-adic prime associated with y: the accuracy in angle measurement would increase with the size scale of CD. For a given p there could be several roots y with the same p(y) but different q(y), giving the same phases or at least phases with the same sign of real part.

    Whether the roots of the modified ζ are algebraic numbers and lie at the critical line Re(s) = 1/2 is an interesting question.

Remark: This picture allows many variants. For instance, if one assumes the standard zeta, one could consider the possibility that the roots yp associated with p and giving rise to constructive interference are of the form y = q×(Log(p)/log(p))× p sin(2π/p), q = r/s.
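The conjectured peaking of the Fourier sum at primes and prime powers can be illustrated numerically. The sketch below is only an illustration, not part of the argument: it sums the phases over the imaginary parts of the first ten nontrivial zeros of the standard ζ (tabulated values; the function name f follows the notation above, the sign convention is mine) and shows constructive interference at u = 2 and u = 3 but not at a generic point such as u = 2.5.

```python
import math

# Imaginary parts of the first ten nontrivial zeros of Riemann zeta
# (standard tabulated values, truncated to six decimals).
ZEROS = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062,
         37.586178, 40.918719, 43.327073, 48.005151, 49.773832]

def f(u):
    """Minus the real part of sum_y u^(iy) = sum_y exp(iy log u).

    The minus sign makes constructive interference at prime powers
    show up as positive peaks.
    """
    return -sum(math.cos(y * math.log(u)) for y in ZEROS)

# Clear peaks at the primes 2 and 3, nothing at a generic point:
for u in (2.0, 2.5, 3.0):
    print(f"u = {u}: f(u) = {f(u):+.2f}")
```

With more zeros included, the peaks sharpen and appear also at higher prime powers such as u = 4 and u = 8; this is essentially the explicit formula of prime number theory at work.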

See the article The Recent View about Twistorialization in TGD Framework or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, March 08, 2018

Could Posner molecules and cortex realize a representation of genetic code?

They are now starting to get onto the right track in quantum computation! See the popular article in Cosmos about an advance in quantum computing by an Australian research team led by Michelle Simmons, published in Nature Communications. The lifetime of qubits represented by phosphorus (P) nuclei having spin 1/2 is unexpectedly long, so that they are excellent candidates for qubits in quantum computation.

They have started to learn from biology! P is a key atom in metabolism, and Fisher already earlier suggested that Posner molecules, containing 9 Ca atoms and 6 phosphates, could be a central element of life. Just now I realized that the P atoms of a Posner molecule could serve as qubits, and 6 qubits in a Posner molecule could realize the genetic code with 64 code words. Could our bone marrow be performing massive quantum computations utilizing the genetic code?!

Remark: Totally unrelated association: the magic number 6 appears also in the structure of cortex: could the six layers represent qubits and realize genetic code?

Posner molecules are the basic stuff of bones. What is required is however a non-standard value of heff = n×h giving a longer lifetime for the qubits realized as nuclear spins. The relevant cyclotron frequency in the endogenous magnetic field Bend = 0.2 Gauss, central in TGD inspired biology, is 9.4 Hz, in the alpha band, and the Larmor frequency of the O nucleus is 10.9 Hz, in the alpha band again!

See the earlier posting about Posner molecule and the article explaining TGD view about Posner molecule.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, March 07, 2018

General number-theoretical ideas about coupling constant evolution

The discrete coupling constant evolution would be associated with the scale hierarchy for CDs and the hierarchy of extensions of rationals.

  1. Discrete p-adic coupling constant evolution would naturally correspond to the dependence of coupling constants on the size of CD. For instance, I have considered a concrete but rather ad hoc proposal for the evolution of the Kähler coupling strength based on the zeros of Riemann zeta (see this). Number theoretical universality suggests that the size scale of CD, identified as the temporal distance between the tips of CD using a suitable multiple of the CP2 length scale as a length unit, is an integer, call it l. The prime factors of this integer could correspond to the preferred p-adic primes for a given CD.

  2. I have also proposed that the so-called ramified primes of the extension of rationals correspond to the physically preferred primes. Ramification is algebraically analogous to criticality in the sense that two roots, understood in a very general sense, coincide at criticality. Could the primes appearing as factors of l be ramified primes of the extension? This would give a strong correlation between the algebraic extension and the size scale of CD.

In quantum field theories coupling constants depend in good approximation logarithmically on the mass scale. In p-adic coupling constant evolution the mass scale would be replaced with an integer n characterizing the size scale of CD, or perhaps with the collection of prime factors of n (note that one cannot exclude rational numbers as size scales). Coupling constant evolution could also depend on the size of the extension of rationals characterized by its order and Galois group.

In both cases one expects approximate logarithmic dependence, and the challenge is to define a "number theoretic logarithm" as a rational-number-valued function making sense also for p-adic number fields, as required by number theoretical universality.

Coupling constant evolution associated with the size scale of CD

Consider first the coupling constant as a function of the length scale lCD(n)/lCD(1)=n.

  1. The number π(n) of primes p ≤ n behaves approximately as π(n) ≈ n/log(n). This suggests the definition of what might be called a "number theoretic logarithm" as Log(n) ≡ n/π(n). Also iterated logarithms such as log(log(x)) appearing in coupling constant evolution would have a number theoretic generalization.

  2. If the p-adic variant of Log(n) is mapped to its real counterpart by canonical identification involving the replacement p → 1/p, the behavior can be very different from that of the ordinary logarithm. Log(n) however increases very slowly, so that in the generic case one can expect Log(n) < pmax, where pmax is the largest prime factor of n: there would then be no dependence on p for pmax, and the image under canonical identification would be number theoretically universal.

    For n = p^k, where p is a small prime, the situation changes, since Log(n) can be larger than the small prime p. Primes near powers of 2, and perhaps also primes near powers of 3 and 5, at least, seem to be physically special. For instance, for a Mersenne prime Mk = 2^k − 1 there would be a dramatic change in the step Mk → Mk + 1 = 2^k, which might relate to its special physical role.

  3. One can consider also the analog of Log(n) as

    Log(n) = ∑p kp Log(p) ,

    where p^(kp) is a factor of n. Log(n) would be the sum of the number theoretic logarithms of the prime factors and would carry information about them.

    One can extend the definition of Log(x) to rational values x = m/n of the argument, presumably via Log(m/n) = Log(m) − Log(n). The logarithm Logb(x) in base b = r/s can be defined as Logb(x) = Log(x)/Log(b).

  4. For p ∈ {2,3,5} one has Log(p) > log(p), whereas for larger primes one has Log(p) < log(p): Log(2) = 2/1 = 2 > log(2) ≈ 0.693, Log(3) = 3/2 = 1.5 > log(3) ≈ 1.099, and Log(5) = 5/3 ≈ 1.667 > log(5) ≈ 1.609, while for p = 7 one has Log(7) = 7/4 = 1.75 < log(7) ≈ 1.946. Hence these primes and CD size scales n involving large powers of p ∈ {2,3,5} ought to be physically special, as indeed conjectured on the basis of p-adic calculations and some observations related to music and biological evolution (see this).

    In particular, for Mersenne primes Mk = 2^k − 1 one would have Log(Mk) ≈ k log(2) for large enough k. For Log(2^k) one would have k × Log(2) = 2k > log(2^k) = k log(2): there would be a sudden increase in the value of Log(n) in the step n = Mk → n = 2^k. This jump in p-adic length scale evolution might relate to the very special physical role of Mersenne primes strongly suggested by p-adic mass calculations (see this).

  5. One can wonder whether one could replace the log(p) appearing as a unit in p-adic negentropy with the rational unit Log(p) = p/π(p) to gain number theoretical universality. One could then interpret the p-adic negentropy as a real number or as a p-adic number for some prime. Interestingly, |Log(p)|p = 1/p approaches zero for large primes p (the eye cannot see itself!), whereas |Log(p)|q = 1/|π(p)|q has large values for the prime power factors q^r of π(p).
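The properties of Log(n) = n/π(n) listed above are easy to verify numerically. A minimal sketch (function names are mine; fractions.Fraction keeps Log(p) exactly rational, as number theoretical universality requires):

```python
import math
from fractions import Fraction

def primepi(n):
    """pi(n): the number of primes <= n, via a simple sieve."""
    if n < 2:
        return 0
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return sum(sieve)

def Log(n):
    """Number theoretic logarithm Log(n) = n/pi(n), exactly rational."""
    return Fraction(n, primepi(n))

# Only p = 2, 3, 5 satisfy Log(p) > log(p):
for p in (2, 3, 5, 7, 11):
    print(p, Log(p), float(Log(p)) > math.log(p))

# Jump at the Mersenne prime M7 = 127: Log(127) vs the additive Log(2^7).
print(Log(127), 7 * Log(2))
```

Running this confirms Log(2) = 2, Log(3) = 3/2, Log(5) = 5/3 and Log(7) = 7/4 quoted above, and exhibits the jump from Log(127) = 127/31 ≈ 4.1 to the additive Log(2^7) = 14 in the step M7 → 2^7.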

Coupling constant evolution associated with the extension of rationals

Consider next the dependence on the extension of rationals. The natural algebraization of the problem is to consider the Galois group of the extension.

  1. Consider first the counterparts of primes and prime factorization for groups. The counterparts of primes are simple groups, which have no non-trivial normal subgroups H, that is subgroups satisfying gH = Hg and thus invariant under inner automorphisms of G. Simple groups have no decomposition into a product of subgroups. If a group has a normal subgroup H, it can be built from H and the quotient G/H, and by iterating this any finite group can be reduced to a collection of simple groups.

    All finite simple groups have been classified (see this): the cyclic groups of prime order, the alternating groups, 16 families of simple groups of Lie type, and 26 sporadic groups. Of the sporadic groups, 20 are subquotients of the monster group and the remaining 6 are referred to as pariahs.

  2. Suppose that finite groups can be ordered so that one can assign a number N(G) to the group G. The roughest ordering criterion is based on ord(G). For a given order ord(G) = n one has all groups which are products of cyclic groups associated with the prime factors of n, plus products involving non-Abelian groups, whose order is not prime. N(G) > ord(G) thus holds true. For groups with the same order one needs additional ordering criteria, which could relate to the complexity of the group; the number of simple factors would serve as one such criterion.

    If it is possible to define N(G) in a natural manner, then for a given G one can define the number π1(N(G)) of simple groups (analogs of primes) not larger than G. The first guess is that π1(N(G)) varies slowly as a function of G. Since each Zp with p prime is a simple group, one has π1(N(G)) ≥ π(N(G)).

  3. One can consider two definitions of number theoretic logarithm, call it Log1.

    a) Log1(N(G))= N(G)/π1(N(G)) ,

    b) Log1(G)= ∑i ki Log1(N(Gi)) ,
    Log1(N(Gi)) = N(Gi)/π1(N(Gi)) .

    Option a) does not provide information about the decomposition of G into a product of simple factors. For Option b) one decomposes G into a product of simple groups Gi: G = ∏i Gi^(ki), and the logarithm defined as in b) carries information about the simple factors of G.

  4. One could organize the groups with the same order into the same equivalence class. In this case the above definitions would give

    a) Log1(ord(G))= ord(G)/π1(ord(G)) < Log(ord(G)) ,

    b) Log1(ord(G))= ∑i ki Log(ord(Gi)) , Log1(ord(Gi)) = ord(Gi)/π1(ord(Gi)) .

    Besides groups with prime orders there are non-Abelian groups with non-prime orders. The occurrence of the same order for two non-isomorphic finite simple groups is very rare (see this). This suggests that one has π1(ord(G)) < ord(G), so that Log1(ord(G))/ord(G) < 1 would hold true.

  5. For orders n(G) ∈ {2,3,5} one has Log1(n(G)) = Log(n(G)) > log(n(G)), so that orders n(G) involving large powers of p ∈ {2,3,5} would be special also for the extensions of rationals. S3, with order 6, is the smallest non-Abelian group. One has π1(N(S3)) = 4, giving Log1(6) = 6/4 = 1.5 < log(6) ≈ 1.79, so that S3 differs from the simple groups below it.

To sum up, number theoretic logarithm could provide answer to the long-standing question what makes Mersenne primes and also other small primes so special.

See the article The Recent View about Twistorialization in TGD Framework or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, March 05, 2018

Summary about twistorialization in TGD framework

Since the contribution represents in a well-defined sense a breakthrough in the understanding of the TGD counterparts of scattering amplitudes, it is useful to summarize the basic results deduced above as a polished answer to a Facebook question.

There are two diagrammatics: Feynman diagrammatics and twistor diagrammatics.

  1. Virtual state is an auxiliary mathematical notion related to Feynman diagrammatics coding for the perturbation theory. Virtual particles in Feynman diagrammatics are off-mass-shell.

  2. In standard twistor diagrammatics one obtains counterparts of loop diagrams. Loops are replaced with diagrams in which particles in general have complex four-momenta, which are however light-like: on mass shell in this sense. The BCFW recursion formula provides a powerful tool for calculating the loop corrections recursively.

  3. The Grassmannian approach, in which the Grassmannians Gr(k,n) consisting of k-planes in n-D space are in a central role, gives additional insights into the calculation and hints about the possible interpretation.

  4. There are two problems: the twistor counterparts of non-planar diagrams are not yet understood, and physical particles are not massless in the 4-D sense.

In TGD framework twistor approach generalizes.
  1. Massless particles in 8-D sense can be massive in 4-D sense so that one can describe also massive particles. If loop diagrams are not present, also the problems produced by non-planarity disappear.

  2. There are no loop diagrams: radiative corrections vanish. ZEO does not allow one to define them, and they would spoil the number theoretical vision, which allows only scattering amplitudes that are rational functions of the data about external particles. Coupling constant evolution, something very real, is now discrete and dictated to a high degree by number theoretical constraints.

  3. This is nice but in conflict with unitarity if the momenta are 4-D. But the momenta are 8-D in the M8 picture (and satisfy quaternionicity as an additional constraint), and the problem disappears! There is a single pole at zero mass, but in the 8-D sense, and also many-particle states have vanishing mass in the 8-D sense: this gives all the cuts in 4-D mass squared for all many-particle states. For many-particle states not satisfying this condition the scattering rates vanish: these states do not exist in any operational sense! This is certainly the most significant new discovery in the recent contribution.

    The BCFW recursion formula for the calculation of amplitudes trivializes, and one obtains only tree diagrams. No recursion is needed. A finite number of steps suffices for the calculation, and these steps are well understood at least in the 4-D case: even I might be able to calculate them in the Grassmannian approach!

  4. To calculate the amplitudes one must be able to formulate the twistorialization explicitly in the 8-D case. I have made explicit proposals but have no clear understanding yet. In fact, BCFW makes sense also in higher dimensions, unlike the Grassmannian approach, and it might be that one can calculate the tree diagrams in the TGD framework using 8-D BCFW at the M8 level and then transform the results to M4× CP2.

What I said above does not yet contain anything about Grassmannians.
  1. The mysterious Grassmannians Gr(k,n) might have a beautiful interpretation in TGD: at the M8 level they could correspond to reduced WCWs, a highly natural notion at the M4× CP2 level obtained by fixing the number of external particles in the diagrams and performing number theoretical discretization for the space-time surface in terms of a cognitive representation consisting of a finite number of space-time points.

    Besides Grassmannians also other flag manifolds, which have a Kähler structure and maximal symmetries and thus the structure of a homogeneous space G/H, can be considered and might be associated with the dynamical symmetries as remnants of the super-symplectic isometries of WCW.

  2. Grassmannian residue integration is a somewhat frustrating procedure: it gives the amplitude as a sum of contributions from a finite number of residues. Why does this work, when the outcome is given by something at a finite number of points of the Grassmannian?!

    In M8 picture in TGD cognitive representations at space-time level as finite sets of points of space-time determining it completely as zero locus of real or imaginary part of octonionic polynomial would actually give WCW coordinates of the space-time surface in finite resolution.

    The residue integrals in twistor diagrams would be the manner to realize quantum classical correspondence by associating a space-time surface to a given scattering amplitude by fixing the cognitive representation determining it. This would also give the scattering amplitude.

    Cognitive representation would be highly unique: perhaps modulo the action of Galois group of extension of rationals. Symmetry breaking for Galois representation would give rise to supersymmetry breaking. The interpretation of supersymmetry would be however different: many-fermion states created by fermionic oscillator operators at partonic 2-surface give rise to a representation of supersymmetry in TGD sense.

See the article The Recent View about Twistorialization in TGD Framework or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

The Recent View about Twistorialization in TGD Framework

The twistorialization of TGD has now reached quite precise formulation and strong predictions are emerging.

  1. A proposal made already earlier is that scattering diagrams as analogs of twistor diagrams are constructible as tree diagrams for CDs connected by free particle lines. Loop contributions are not even well-defined in zero energy ontology (ZEO) and are in conflict with number theoretic vision. The coupling constant evolution would be discrete and associated with the scale of CDs (p-adic coupling constant evolution) and with the hierarchy of extensions of rationals defining the hierarchy of adelic physics.

  2. The reduction of the scattering amplitudes to tree diagrams is in conflict with unitarity in 4-D situation. The imaginary part of the scattering amplitude would have discontinuity proportional to the scattering rate only for many-particle states with light-like total momenta. Scattering rates would vanish identically for the physical momenta for many-particle states.

    In the TGD framework the states would however be massless in the 8-D sense. The massless pole corresponds now to a continuum for M4 mass squared, and one would obtain the unitarity cuts from a pole at P2=0! Scattering rates would be non-vanishing only for many-particle states having light-like 8-momentum, which would pose a powerful condition on the construction of many-particle states. This strong form of conformal symmetry has highly non-trivial implications concerning color confinement.

  3. The key idea is number theoretical discretization in terms of "cognitive representations" as space-time points with M8-coordinates in an extension of rationals and therefore shared by both the real and the various p-adic sectors of the adele. Discretization realizes measurement resolution, which becomes an inherent aspect of physics rather than something imposed by the observer as an outsider. This fixes the space-time surface completely as the zero locus of the real or imaginary part of an octonionic polynomial.

    This must imply the reduction of "world of classical worlds" (WCW) corresponding to a fixed number of points in the extension of rationals to a finite-dimensional discretized space with maximal symmetries and Kähler structure.

    The simplest identification of the reduced WCW would be as a complex Grassmannian; a more general identification would be as a flag manifold. More complex options can of course be considered. The Yangian symmetries of the twistor Grassmann approach, known to act as diffeomorphisms respecting the positivity of the Grassmannian and emerging also in its TGD variant, would have an interpretation as general coordinate invariance for the reduced WCW. This would give a completely unexpected connection between supersymmetric gauge theories and TGD.

  4. M8 picture implies the analog of SUSY realized in terms of polynomials of super-octonions whereas H picture suggests that supersymmetry is broken in the sense that many-fermion states as analogs of components of super-field at partonic 2-surfaces are not local. This requires breaking of SUSY. At M8 level the breaking could be due to the reduction of Galois group to its subgroup G/H, where H is normal subgroup leaving the point of cognitive representation defining space-time surface invariant. As a consequence, local many-fermion composite in M8 would be mapped to a non-local one in H by M8-H correspondence.

See the article The Recent View about Twistorialization in TGD Framework or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Sunday, February 25, 2018

More about Posner molecule

A couple of years ago I wrote a commentary about the work of Fisher on the so-called Posner molecule Ca9(PO4)6: the Ca:s are doubly charged ions and the phosphates triply charged ions. Bones have pairs of Posner molecules as building bricks. Fisher proposed that the Larmor frequencies of phosphates might be fundamental in biology. This might be true.

However, while writing I found that the PO4^(3-) ion has a cyclotron frequency of 9.5 Hz in the endogenous magnetic field Bend = 0.2 Gauss, which together with the heff/h = n hypothesis, inspired by the quantal effects of ELF em fields on the vertebrate brain and later reduced to a prediction of adelic physics, explains these effects. This frequency is in the alpha band defining the fundamental biorhythm. This of course sets bells ringing.

The Posner molecule would be ideal both for control purposes (Ca^(2+) and PO4^(3-) ions) and for metabolism (6 phosphates with high energy phosphate bonds): valence bonds involving P and O indeed have a nearly maximal metabolic energy content in the proposed model of valence bonds based on the heff/h = n hierarchy (see this). This suggests that bones might also serve as energy storages and, of course, as nutrients. Interestingly, in human evolution the discovery of stones as tools for breaking the bones of prey animals to get at the bone marrow has been seen as a critical step leading to the growth of the cortex, which requires a lot of metabolic energy (to generate large-n bonds providing the ability to generate negentropy).

It is interesting that the ATP molecule, the basic metabolic currency, has a triphosphate with total charge -4 as a building brick. The triphosphate is characterized by a cyclotron frequency of 4.8 Hz, one half of the alpha band frequency. The diphosphate in ADP has a cyclotron frequency of 5.2 Hz. Note that the cyclotron frequency of the Fe^(2+) ion, central in oxygen based metabolism, is 10.7 Hz and thus in the alpha band, as is also the Larmor frequency 10.96 Hz of P.

This suggests that the MB (magnetic body) uses spin flips for control and coordination purposes. The MB could control and coordinate all phosphate containing biomolecules using this Larmor transition of P. This includes ATP, DNA, RNA, the tubulins of microtubules containing GTP, and all biomolecules to which a phosphate is attached. This would conform with the frequencies in the alpha band serving as a universal biorhythm used by the magnetic body to keep metabolism in synchrony in body scale.

Note that in DNA the singly charged phosphates in XMPs, X = A, T, C, G, have a cyclotron frequency which is one third of this, that is 3.1 Hz. This frequency appears in EEG as a kind of resonance frequency during deep sleep. DNA nucleotides as a whole have cyclotron frequencies around 1 Hz. In microtubules the phosphate of GTP can have three different charge states allowing the frequencies 3.1, 6.2, and 9.4 Hz. I have proposed that these charge states, together with two different tubulin conformations, give rise to a realization of the genetic code.

The proton cyclotron frequency 300 Hz has been already earlier assigned with ATP and the models for the lifelike properties of a system consisting of plastic balls involved cyclotron frequency of Ar+ ion, which is same as that of Ca2+ ion and cyclotron frequency 300 Hz of proton (see this). Also the two important frequencies associated with honeybee dance correspond to the cyclotron frequencies of Ca2+ and proton (see this).
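The cyclotron frequencies quoted above all follow from f = qB/(2πm). A small sanity check (the function name and ion list are my own; masses are standard atomic masses, and the small differences from the quoted figures reflect rounding in Bend and in the masses used):

```python
import math

E = 1.602176634e-19      # elementary charge, C
U = 1.66053907e-27       # atomic mass unit, kg
B_END = 2e-5             # endogenous field Bend = 0.2 Gauss, in tesla

def cyclotron_hz(charge_e, mass_u, b_tesla=B_END):
    """Cyclotron frequency f = qB/(2*pi*m) in Hz."""
    return charge_e * E * b_tesla / (2 * math.pi * mass_u * U)

ions = {
    "proton":  (1, 1.00728),
    "PO4(3-)": (3, 94.97),   # P + 4*O
    "Ca(2+)":  (2, 40.08),
    "Fe(2+)":  (2, 55.85),
}
for name, (q, m) in ions.items():
    print(f"{name}: {cyclotron_hz(q, m):.1f} Hz")
```

The proton comes out near 300 Hz, PO4^(3-) near the alpha band, and Fe^(2+) close to 11 Hz, in reasonable agreement with the values used in the text.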

See the updated article Are lithium, phosphate, and Posner molecule fundamental for quantum biology? or the chapter Quantum model for nerve pulse of "TGD and EEG".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Friday, February 23, 2018

Low surface brightness galaxies as additional support for pearls-in-necklace model for galaxies

Sabine Hossenfelder had an inspiring post about the problems of the dark matter halo scenario. My attention was caught by the title "Shut up and simulate". It was really to the point. People first stopped thinking, then stopped calculating, and now they just simulate. Perhaps AI will replace them at the next step.

While reading I realized that Sabine mentioned a further strong piece of support for the TGD view of galaxies as knots along cosmic strings, which create a cylindrically symmetric gravitational field orthogonal to the string rather than the spherically symmetric field of the halo models. The string tension determines the rotation velocity of distant stars, predicted to be constant up to arbitrarily long distances (the finite size of the space-time sheet of course brings in a cutoff length).

To express it concisely: Sabine told about galaxies which have low surface brightness. In the halo model the density of both matter and the dark matter halo should be low for these galaxies, so that the velocity of distant stars should decrease, leading to a breakdown of the so-called Tully-Fisher relation. It doesn't. This is the message that the observational astrophysicist Stacy McGaugh is trying to convey in his blog, and this is what Sabine's post was mostly about.

I am not specialist in the field of astrophysics and it was nice to read the post and refresh my views about the problem of galactic dark matter.

  1. The Tully-Fisher relation (TFR) is an empirically well-established relation between the brightness of a galaxy and the velocity of its outermost stars. The luminosity L equals the apparent brightness (flux per unit area) of the galaxy multiplied by the area 4π d^2 of a sphere with radius equal to the distance d of the observed galaxy. The luminosity of a galaxy is also proportional to its mass M. TFR says that the luminosity of a spiral galaxy, or equivalently its mass, is proportional to a power of the emission line width, which is determined by the spectrum of rotation velocities of stars in the spiral galaxy. Apparent brightness and line width can be measured, and from these one can deduce the distance d of the galaxy: this is really elegant.

  2. It is easy to believe that the line width is determined by the rotation velocity of the galaxy, which is primarily determined by the mass of the dark matter halo. The observation that the rotational velocity is roughly constant for the distant stars of spiral galaxies, rather than decreasing like 1/ρ^(1/2) as for a Keplerian orbit around a central mass, led to the hypothesis that there is a dark matter halo around the galaxy. By fitting the density of the dark matter properly, one obtains a constant velocity. A flat velocity spectrum implies that the line width is the same for distant stars as for stars near the galactic center.

    To explain this in halo model, one ends up with complex model for the interactions of dark matter and ordinary matter and here simulations are the only manner to deduce the predictions. As Sabine tells, the simulations typically take months and involve huge amount of bits.

  3. Since the dark matter halo is finite, the rotation velocity should decrease at large enough distances like 1/R^(1/2), R the distance from the center of the galaxy. If one has a very dilute galaxy, a so-called low surface brightness galaxy, which is very dim, the rotational velocities of distant stars should be smaller, and so should their contribution to the average line width assignable to the galaxy. TFR would not be expected to hold anymore. The surprising finding is that it does!
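The distance determination described in the list above can be made concrete. The sketch below assumes the baryonic form of TFR, L = A v^4, with the normalization A left as a free parameter (both A and the function name are my assumptions, not from the post); the point is only the scaling d ∝ v^2 at fixed measured flux.

```python
import math

def tfr_distance(flux, line_width_v, A=1.0):
    """Distance from measured flux and rotation velocity, assuming L = A*v^4.

    flux = L / (4*pi*d^2)  =>  d = sqrt(A * v^4 / (4*pi*flux)).
    Units are arbitrary since the normalization A is left free.
    """
    luminosity = A * line_width_v ** 4
    return math.sqrt(luminosity / (4 * math.pi * flux))

# At fixed apparent brightness, doubling the line width quadruples
# the inferred distance (d scales as v^2):
d1 = tfr_distance(flux=1.0, line_width_v=1.0)
d2 = tfr_distance(flux=1.0, line_width_v=2.0)
print(d2 / d1)
```

This is why a measured flux plus a measured line width suffices to fix d once the TFR normalization is calibrated.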

The conclusion seems to be that there is something very badly wrong with the halo model.

Halo model of dark matter has also other problems.

  1. Too many dwarf galaxies tend to be predicted.

  2. There is also so called cusp problem: the density peak at the center of the galaxy tends to be too high. Observationally the density seems to be roughly constant in the center region, which behaves like rotating rigid body.

The excuses for the failures claim that the physics of normal matter is not well enough understood: the feedback from the physics of ordinary matter is believed to solve the problems. Sabine lists some possibilities.
  1. There is the pressure generated when stars go supernova, which can prevent the formation of the density peak. The simulations however show that practically 100 per cent of the energy liberated in supernova explosions should go into the creation of pressure preventing the development of the density peak.

  2. One can also claim that the dynamics of interstellar gas is not properly understood.

  3. Also the accretion and ejection of matter by the supermassive black holes, which reside at the centers of most galaxies, could reduce the density peak.

One can of course tinker with the parameters of the model and introduce new ones to get what one wants. This is why simulations are always successful!
  1. For instance, one can increase the relative portion of dark matter to overcome the problems, but one ends up with fine tuning. The finding that TFR holds true also for low surface brightness galaxies makes the challenge really difficult. A mere parameter fit is not enough: one should also identify the underlying dynamical processes producing the observed outcome, and this has turned out to be difficult.

  2. What strongly speaks against feedback from the ordinary matter is that the outcome should be the same irrespective of how the galaxies were formed: directly or through mergers of other galaxies. The weak dependence on the dynamics of ordinary matter strongly suggests that stellar feedback is not the correct manner to overcome the problem.

One can look at the situation also in TGD framework.
  1. In pearls-in-necklace model galaxies are knots of long cosmic strings (see this, this, and this). Knots have constant density and this conforms with the observation: the cusp problem disappears.

  2. The long string creates a gravitational field orthogonal to it and proportional to 1/ρ, ρ the orthogonal distance from the string. This cylindrically symmetric field creates correlations in much longer scales than the gravitational field of a spherical halo, which at long distances is proportional to 1/r^2, r the distance from the center of the galaxy.

    The pearls-in-necklace model automatically predicts a constant velocity spectrum at arbitrarily long(!) distances. The velocity spectrum is independent of the details of the distribution of the visible matter and is proportional to the square root of the string tension. There is an almost total independence of the velocity spectrum from the ordinary matter, as the example of low surface brightness galaxies also demonstrates. Also the history of the formation of the galaxy matters very little.

  3. From TFR one can conclude that the mass of the spiral galaxy (proportional to the luminosity, which in turn is proportional to a power of the line width) is also proportional to the string tension. Since the galactic mass varies, also the string tension must vary. This is indeed predicted. String tension is essentially the energy per unit length of the thickened cosmic string and would characterize the contributions of dark matter in the TGD sense (phases of ordinary matter with large heff/h=n) as well as dark energy, which contains both Kähler magnetic energy and a constant term proportional to the 3-volume of the flux tube.

    Cosmology suggests that the string thickness increases with time: this would reduce the Kähler magnetic contribution to the string tension but increase the contribution proportional to the 3-volume. There is also the dependence of the coefficient of the volume term (essentially the formal counterpart of the cosmological constant), which behaves like the inverse of the p-adic length scale squared, 1/L(k)^2 with L(k) ∝ 2^(k/2), where k must be a positive integer characterizing the size scale involved (this is something totally new and solves the cosmological constant problem) (see this). It is difficult to say which contribution dominates.

  4. Dwarf galaxies would require a small string tension; hence strings with small string tension should be rather rare.
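The flat rotation curve claimed in item 2 above follows from the 1/ρ field of a straight string by elementary mechanics: the acceleration 2Gμ/ρ balances v²/ρ, so v = sqrt(2Gμ) at every radius and v is proportional to the square root of the string tension μ. A minimal numerical sketch; the tension value below is an illustrative assumption chosen to give a Milky-Way-like speed, not a TGD prediction:

```python
import math

G = 6.674e-11   # Newton's constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light, m/s

def rotation_velocity(mu):
    """Circular-orbit speed around an idealized straight string.

    The transversal acceleration 2*G*mu/rho balances v^2/rho,
    so v = sqrt(2*G*mu): a flat rotation curve at every radius,
    set by the string tension mu (mass per unit length) alone.
    """
    return math.sqrt(2.0 * G * mu)

# Illustrative tension (assumption) giving roughly 220 km/s.
mu = 3.6e20  # kg/m
v = rotation_velocity(mu)
print(f"v = {v / 1e3:.0f} km/s, G*mu/c^2 = {G * mu / C**2:.1e}")
```

This yields about 219 km/s and a dimensionless Gμ/c² of a few times 10⁻⁷, the usual way cosmic string tensions are quoted.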

If this picture is correct, the standard views about dark matter are completely wrong, to put it bluntly. Dark matter corresponds to heff/h=n phases of ordinary matter rather than some exotic particle(s) having effectively only gravitational interactions, and there is no dark matter halo. TGD excludes also MOND. Dark energy and dark matter reside at the thickened cosmic strings, which belong to the simplest extremals of the action principle of TGD (see this and this). It should be emphasized that flux tubes are not ad hoc objects introduced to understand the galactic velocity spectrum: they are a basic prediction of TGD, by the fractality of the TGD Universe present in all scales, and fundamental also for the TGD view about biology and neuroscience.

Maybe it might be a good idea to start thinking again. Using brains instead of computers is also a much more cost-effective option: I have been thinking intensely for four decades, and this hasn't cost society a single coin! Recommended!

See the article Low surface brightness galaxies as additional support for pearls-in-necklace model for galaxies or the chapter TGD and astrophysics of "Physics in Many-sheeted Space-time".

For TGD based model of galaxies see for instance this.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.


Wednesday, February 21, 2018

Is time reversal involved with Pollack effect?

In the Pollack effect negatively charged exclusion zones (EZs) are formed. EZs have the very strange property that impurities are spontaneously removed from them. This seems to be in conflict with the second law of thermodynamics, according to which both temperature and concentration gradients should tend to disappear. Could one understand this as being due to a reversal of the arrow of time?

Indeed, TGD inspired theory of consciousness relying on zero energy ontology (ZEO) predicts the possibility of time-reversed selves (see this). When a conscious entity - self - dies, it reincarnates as a self with the opposite arrow of geometric time.

  1. In ZEO zero energy states replace the ordinary quantum states assigned with time=constant snapshots of time evolution in space-time. Zero energy states are pairs of ordinary quantum states at the opposite light-like boundaries of a causal diamond (CD), identifiable as counterparts of the initial and final states of a physical event. Conservation of quantum numbers translates to the mathematical statement that the quantum numbers associated with the members of the pairs are opposite. One can also say that a zero energy state is analogous to a deterministic computer program or a behavioral mode. The act of free will replaces this program/behavior with a new one, so that one avoids the paradox between the non-determinism of free will and the determinism of physics.

  2. Causal diamond (CD) defines the imbedding space correlate of self. One can assign to the opposite light-like boundaries the attributes active and passive. During the sequence of analogs of "small" state function reductions analogous to weak quantum measurements (resembling classical measurements), the passive boundary remains unaffected, as do the members of the state pairs defining the zero energy states associated with it. The active boundary recedes farther away from the passive boundary, and the members of the state pairs at it change. The size of CD thus increases and gives rise to the flow of geometric time as an increase of the temporal distance between the tips of CD.

  3. Eventually the first state function reduction to the opposite boundary of CD must occur, and the active and passive boundaries change their roles. Self dies and re-incarnates as a self with the opposite arrow of geometric time: the formerly passive boundary of CD becomes active and moves in the opposite time direction reduction by reduction. In the next re-incarnation the self continues almost from the moment of geometric time at which it died. It might be that we die repeatedly without noticing it at all!

  4. The many-sheeted space-time approximated with slightly curved regions of Minkowski space would certainly tend to mask the time reversals in given length scale. In elementary particle length scales the state function reductions would indeed change the arrow of time but this would occur so often that there would be no arrow of time in statistical sense: one would speak of microscopic reversibility. In time scales considerably longer than those of human consciousness the observed arrow of time would correspond to that associated with selves with very large CDs and with lifetime much longer than ours. The change of the arrow of time could be detectable in time scales relevant to living matter and human consciousness and just these scales are the scales where the anomalies occur!

Could the ghostly space-time regions - time reversed selves - have some physical signatures making it possible to prove their existence empirically?
  1. Second law would still hold true, but in the opposite direction of geometric time for the space-time sheets with a non-standard arrow of time. The effects implied by the second law would be present as their reversals. An observer with the standard direction of geometric time would see temperature and density gradients develop spontaneously.
    Also parameters describing dissipation rates, such as Ohmic resistance and viscosity, could in some situations have negative values.

    This indeed seems to take place in living matter. For instance, the building bricks of molecules spontaneously arrange themselves into molecules: DNA replication, transcription, and translation of RNA to proteins are basic examples of this. The development of concentration gradients is also clear in the strange ability of EZs to get rid of impurities. Also the charge separation creating EZs could be seen as the disappearance of charge separation in the reversed direction of time. Healing of a living organism could be a basic example of a process in which the arrow of time changes temporarily at some level of the hierarchy of space-time sheets.

  2. The generation of temperature gradients would be a clear signature for the reversal of the arrow of time. Water is the basic stuff of life, and the thermodynamics of water involves numerous anomalies summarized at Martin Chaplin's homepage "Water structure and science". The TGD based explanation could naturally be in terms of dark variants of protons at magnetic flux tubes and a possible change of the arrow of geometric time.

  3. There is a lot of anecdotal evidence for effects challenging our beliefs about the standard arrow of time. A spontaneous generation of temperature differences is a basic example. There is a nice popular document about this boundary region of science by Phie Ambo, which even a skeptic might enjoy as an art experience.

    It was a great surprise for me that one of the key personalities in the document is Holger B Nielsen, one of the pioneers of string models. I have had the honor of intense discussions with him in the past: he is one of the very few colleagues who has shown keen interest in the basic ideas of TGD. The document discusses strange phenomena associated with the physics of water possibly having an interpretation in terms of time reversal and the formation of EZs. From the document one also learns that in Denmark physics professionals are beginning to take these anomalies seriously.

    Unfortunately, the people who claim to have discovered this kind of effects - often not science professionals - are labelled as crackpots. The laws of science also tell what we are allowed to observe (and think), at least if we want to be called scientists!

  4. The ghost stories might also reflect something real - this real thing need of course not be a ghost but something deep about consciousness. Could it be that it is sometimes possible to consciously experience the presence of a space-time region - self - with an opposite arrow of geometric time? Ghost stories typically involve a claim about a reduction of the temperature of the environment in the presence of the ghost: could this be something real and a signature for the reversal of time at some level of the dark matter hierarchy, affecting also dark matter? As a matter of fact, in the TGD Universe our conscious experience could routinely involve sub-selves (mental images) with a non-standard arrow of time (see this): motor actions could be identified as sensory mental images with the opposite arrow of time.

See the article Pollack's Findings about Fourth phase of Water: TGD View or the chapter How to test TGD Based Vision about Living Matter and Remote Mental Interactions .


Tuesday, February 20, 2018

Dance of the honeybee and New Physics

More than two decades ago the mathematician Barbara Shipman made a rather surprising finding while working on her thesis. The 2-D projections of certain curves in the flag manifold F=SU(3)/U(1)× U(1), defined by the so-called momentum map, look like the waggle part of the dance of the honeybee. Shipman found that one could reproduce in this framework both the waggle dance and the circle dance (a special case of the waggle dance) and the transition between these, occurring as the distance of the food source from the nest falls below some critical distance. Shipman introduced a parameter, which she called α, and found that the variation of α allows one to integrate various forms of the honeybee dance into a bigger picture. Since SU(3) is the gauge group of color interactions, this unexpected finding led Shipman to ask whether there might be a profound connection between quantum physics at the quark level and macroscopic physics at the level of the honeybee dance.

The average colleague of course regards this kind of proposal as crackpottery: the argument is that there simply cannot be any interaction between degrees of freedom in so vastly different length scales. Personally I found this finding fascinating and wrote about its interpretation in the framework of TGD and TGD inspired consciousness. During more than two decades a lot of progress has taken place in TGD; in particular, I have learned that the flag manifold F has an interpretation as the twistor space of CP2 and plays a fundamental role in the twistor lift of TGD. Hence it is interesting to look at what this could allow one to say about the honeybee dance.

It turned out that one could understand the waggle parts of the honeybee dance at the space-time level in terms of the intersection of the space-time surface with the image of the Cartan sub-algebra of SU(3) represented in CP2 using the exponential map. This allows one to code the positional data about the food source. The frequencies assignable to the wing vibrations and the waggling turn could have an interpretation as cyclotron frequencies, as expected if the magnetic body of the bee controls the waggle dance utilizing a resonance mechanism. They could also correspond to the momenta (frequencies) defining constants of motion for a geodesic in U(1)× U(1) defining one particular point of the flag manifold F. Also a connection with the Chladni effect emerges: the waggle motion is along a time-like curve at which the Kähler force vanishes. Also the transition from the waggle dance to the circle dance can be understood in this framework.
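As a sanity check on the identification of F with the twistor space of CP2 mentioned above, the real dimensions do agree: SU(3) has dimension 8, the maximal torus U(1)×U(1) removes 2 directions, and the twistor space of CP2 is CP2 (real dimension 4) with an S² fiber (dimension 2). Pure bookkeeping, sketched in code:

```python
# Dimension bookkeeping for the flag manifold F = SU(3)/U(1)xU(1).
def dim_su(n: int) -> int:
    """Real dimension of the Lie group SU(n): n^2 - 1."""
    return n * n - 1

dim_torus = 2                  # U(1) x U(1), the maximal torus of SU(3)
dim_F = dim_su(3) - dim_torus  # the quotient removes the torus directions

# Twistor space of CP2: base CP2 (real dim 4) with an S^2 fiber (dim 2).
dim_twistor_CP2 = 4 + 2

assert dim_F == dim_twistor_CP2 == 6
print(dim_F)
```

Both counts give 6, consistent with the twistor-space interpretation of F.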

See the article Dance of the honeybee and New Physics or the chapter of "TGD based view about living matter and remote mental interactions" with the same title.


Friday, February 02, 2018

How brain selectively remembers new places?

There was a very interesting link in Minding Brain related to the storage of new memories. The title of the popular article is "How brain selectively remembers new places?". The following represents the TGD based view about what might happen.


  1. In the TGD framework brain/body corresponds classically to a 4-D geometric object - a space-time surface with complex topology (zero energy ontology, ZEO). Brain and biological body are accompanied by a magnetic body (MB) defining a topological time evolution of a flux tube network having neurons (and also body cells) as its nodes, and it is MB which seems to be of fundamental significance (see this and this). Memories are located in the 4-D brain (body), for the first time at the time and place where they were formed; later successful memory recalls form new copies of them.

  2. To remember is to see in the time direction to the geometric past. The signal sent from the hippocampus backwards in geometric time scatters back in the standard time direction: this is nothing but seeing in 4 dimensions. 4-D memory storage means that there are practically no limitations on memory storage, since new storage capacity is created all the time! Making a careful distinction between experienced and geometric time allows one both to avoid paradoxes and to solve the paradoxes of the existing theory.

    Remark: Also the possibility of quantum entanglement increases exponentially the memory storage capacity (and destroys the dreams of AI aficionados about copying human consciousness to a computer file as bits telling whether a neuron fires or not!).

  3. Brain is able to detect whether the sensory percept - say a completely new place - is indeed new. Brain acts as a novelty detector. This requires scanning of the 4-D brain to see whether there are sensory percepts in the geometric past which share common features with the recent sensory percept. This requires high-level conceptualization, so that the perceptive field is decomposed to objects with some attributes. If common objects are not found, the percept is regarded as something new. In this case a new symbolic memory representation of the perceptive field is formed.

  4. This strongly suggests that the signal sent from the hippocampus scatters back from the brain of the past and is then compared with the recent sensory percept. If the signals are very similar - this might give rise to some kind of resonance - the experience is "I have seen this before". The information provided by the already existing memory is utilized. If not, the sensory percept is regarded as new and a memory representation is formed.
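The Remark above about entanglement can be made concrete with a toy count: reading out only firing/non-firing of n neurons gives n classical bits, whereas a general entangled state of n two-state systems needs 2^n complex amplitudes to specify. Treating neurons as qubits is of course a drastic idealization, used here only for the counting:

```python
# Toy comparison of classical readout vs entangled-state description
# for n two-state elements ("neurons as qubits" is an idealization).
def classical_bits(n: int) -> int:
    """One bit per neuron: fires or does not."""
    return n

def hilbert_space_dim(n: int) -> int:
    """Dimension of the joint Hilbert space: 2^n amplitudes."""
    return 2 ** n

for n in (10, 100):
    print(n, classical_bits(n), hilbert_space_dim(n))
```

Already at n = 100 the entangled description involves 2¹⁰⁰ amplitudes against 100 bits, which is the sense in which the capacity grows exponentially.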

Where is this new memory representation constructed?
  1. The article suggests that locus coeruleus (LC) and area CA3 of hippocampus are involved. It was found that the modulation of CA3 by LC was involved in the formation of the new memory: if the modulation was prevented, no new memory was formed and the mice behaved the next day as if the place were still new.

  2. In ZEO the new memory would correspond to a collection of activated neurons in LC and CA3 accompanied by a connected flux tube structure representing the new mental image as a quantum entangled structure - a tensor network. This kind of mental image would have formed for some period of time in the brain of the mice and given rise to a 4-D representation of the new place, to be read later by sending signals backwards in geometric time.

See the article Emotions as sensory percepts about the state of magnetic body? or the chapter of "TGD based view about living matter and remote mental interactions" with the same title.


A further lethal blow to the dark matter halo paradigm

The following is a comment to a FB posting by Sabine Hossenfelder giving a link to the most recent finding challenging the dark matter halo paradigm. The article titled "A whirling plane of satellite galaxies around Centaurus A challenges cold dark matter cosmology" published in Science can be found also in the arXiv.

The halo model for dark matter continually encounters lethal problems, as I have repeatedly tried to tell in my blog postings and articles. But still this model continues to add items to the curricula vitae of the specialists - presumably as long as the funding continues. Bad ideas never die.

The halo model predicts that the dwarf galaxies around massive galaxies like the Milky Way should move randomly. The newest fatal blow comes from the observation that dwarf galaxies move along neat circular orbits in the galactic plane of Centaurus A.

Just like the TGD based pearls-in-necklace model of galaxies as knots (the pearls) of long cosmic strings predicts! The long cosmic string creates a gravitational field in the transversal direction, and the dwarf galaxies move along nearly circular orbits. The motion along the long cosmic string would be free motion and would give rise to streams. The prediction is that at large distances the rotational velocities approach a constant, just as in the case of distant stars.

Somehow it seems impossible for colleagues to have the heureka moment that dark matter could be concentrated along string-like structures. Why this is so difficult remains a mystery to me. I have now been waiting for this discovery (and many other discoveries) for more than two decades, but in vain.

For TGD based model of galaxies see for instance this.

See the chapter TGD and astrophysics of "Physics in Many-sheeted Space-time".


Superfluids dissipate!

People at Aalto University - located in Finland, by the way - are doing excellent work: there is full reason to be proud! I learned about the most recent experimental discovery by people working at Aalto University from Karl Stonjek. The title of the popular article is "Friction found where there should be none—in superfluids near absolute zero".

In a rotating superfluid one has vortices, and they should not dissipate. The researchers at Aalto University however observed dissipation: the finding by J. Mäkinen et al is published in Phys Rev B. Dissipation means that the vortices lose energy to the environment. How could one explain this?

What comes to mind for an inhabitant of the TGD Universe is the hierarchy of Planck constants heff = n×h labelling a hierarchy of dark matters as phases of ordinary matter. The reduction of the Planck constant heff liberates energy in a phase-transition-like manner, giving rise to dissipation. This kind of burst-like liberation of energy is mentioned in the popular article ("glitches" in neutron stars). I have already earlier proposed an explanation of the fountain effect of superfluidity, in which superfluid flow seems to defy gravity. The explanation is in terms of a large value of heff implying delocalization of superfluid particles in a long length scale (see this).

Remark: Quite generally, binding energies are reduced as a function of heff/h = n. One has 1/n^2 proportionality for atomic binding energies, so that atomic energies defined as rest energy minus binding energy indeed increase with n. Interestingly, dimension 3 of space is unique in this respect. Harmonic oscillator energies and cyclotron energies are in turn proportional to n. The value of n for molecular valence bonds depends on the atom, and the binding energies of valence bonds decrease as the valence of the atom increases. One can say that the valence bonds involving atoms at the right end of a row of the periodic table carry metabolic energy. This is indeed the case, as one finds by looking at the chemistry of nutrient molecules.

The burst of energy would correspond to a reduction of n at the flux tubes associated with the superfluid. Could the vortices decompose to smaller vortices with a smaller radius, maybe proportional to n? I proposed a similar mechanism of dissipation in ordinary fluids more than two decades ago. Could also ordinary fluids involve a hierarchy of Planck constants, and could they dissipate in the same manner?
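The 1/n² proportionality quoted in the Remark above fixes how much energy such a reduction of n would liberate in a hydrogen-like toy system: the binding deepens from E₁/n₂² to E₁/n₁² when n drops from n₂ to n₁. A minimal sketch, assuming the ordinary 13.6 eV hydrogen binding energy as the n = 1 reference:

```python
# Toy illustration of the 1/n^2 scaling of atomic binding energies
# as a function of heff/h = n (hydrogen-like system assumed).
E1 = 13.6  # eV, ordinary (n = 1) hydrogen binding energy

def binding_energy(n: int) -> float:
    """Binding energy for heff/h = n, scaling like 1/n^2."""
    return E1 / n**2

def liberated_energy(n_from: int, n_to: int) -> float:
    """Energy released in a glitch-like transition n_from -> n_to."""
    assert n_to < n_from, "a glitch reduces n"
    return binding_energy(n_to) - binding_energy(n_from)

print(liberated_energy(2, 1))  # roughly 10.2 eV
```

A drop from n = 2 to n = 1 releases 13.6·(1 − 1/4) ≈ 10.2 eV in this toy case; real superfluid glitches would of course involve collective flux tube degrees of freedom, not single atoms.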

In biology the liberation of metabolic energy - say in motor action - would take place in this kind of "glitch". It would reduce heff resources and thus the ability to generate negentropy: this leads to smaller negentropy resources, and one gets tired and thinking becomes fuzzy.

See the chapter Quantum criticality and dark matter.


Thursday, February 01, 2018

How did language emerge?

I encountered in FB a link to an article titled "Unique mix of brain chemicals separates humans from other primates". The article inspired the following comments as a reaction, which are not so much about the chemistry but about what in my view goes outside chemistry.

Cultural evolution is what distinguishes us so sharply from our cousins. The evolution of social structures made possible by the emergence of language is certainly crucial for it. To me it is far from obvious whether this can be explained in terms of chemistry alone. My views are based on TGD inspired theory of consciousness and quantum biology and involve notions like magnetic body and hierarchy of Planck constants.

The notion of magnetic body and the emergence of language and cultural evolution

  1. The notion of magnetic body (MB) as an intentional agent using the biological body as a motor instrument and sensory receptor is central in the TGD based view about biology and neuroscience. Flux tubes serving as correlates of attention and making possible quantum entanglement and communications by dark photons give quite concretely rise to bonds between systems in various scales. In the TGD Universe the notion of magnetic body is crucial for understanding life in general. The emergence of collective levels of consciousness involving large scale MBs would make possible cultural evolution and allow one to understand the dramatic difference between humans and other animals.

  2. The hierarchy of Planck constants heff/h=n would be crucial. The larger the value of n, the larger the scale of quantum coherence. Cultural evolution would involve an increase of n leading to the formation of large MBs characterizing collective levels of consciousness. The MBs of DNAs, consisting of flux sheets going through DNA, would combine to bigger structures assignable to organs, organisms, and even populations. This could make possible cultural evolution as the emergence of higher level conscious entities with a collective genome and collective gene expression.

  3. There might also be other deep differences at the DNA level not visible at the level of chemistry. The braiding of magnetic flux tubes emanating from the intronic part of DNA could make possible topological quantum computations and a new kind of memory, and this might have led to the quantum leap to real cultural evolution: the portion of introns is largest for humans.

What internal speech could be?

The emergence of language and speech organs is certainly a revolutionary step in evolution. What is language at the quantum level? What are thoughts as internal speech at a deeper level?

  1. My own proposal is that internal speech has as its neuronal correlates linear structures of activated neurons giving names for things, having linear flux tube sequences and corresponding quantum states as correlates at the level of MB. This does not however tell what internal speech is at a deeper quantum level.

  2. Did thinking as internal speech precede ordinary speech or vice versa? If internal speech came first, one avoids the problem of understanding why only certain sounds have meaning as words. Assume that this is the case.

  3. Genes are fundamental in biology. Did internal speech evolve as one particular form of gene expression? The TGD inspired model for music harmony, based on the 12-note scale realized as a Hamiltonian cycle on the icosahedron (see this), leads to a model of the genetic code predicting correctly the numbers of codons coding for a given amino-acid, and to the proposal that gene expression is controlled by signals consisting of sequences of 3-chords allowed by a particular bio-harmony with 64 3-chords (one of 256 bio-harmonies) (see this). A given harmony would define an emotional state, a mood.

    A gene would be represented as a sequence of 3-chords - an accompaniment for a song, a melody. The melody would be a sequence of single notes of the 12-note scale consistent with the bio-harmony. The sequence of 3-chords allowed by the harmony would define the emotional character of the "music piece". Harmony would be something which chemistry cannot explain.

  4. How were the accompaniment and the song represented at the gene level? The most natural guess is that both the notes of the 3-chords of the harmony defining the mood and the melody were represented as dark light. This would be music of light consisting of dark photons rather than phonons: notes would have been analogs of laser beams along flux tubes characterized by frequency and duration.

    How was singing represented at the neuronal level? My proposal is that it was represented as a 2-D structure of activated neurons having a connected magnetic flux tube network as a correlate and representing the mental image. Perhaps the pitch and duration of the note served as 2 discrete coordinates in a neuronal lattice (see this).

  5. It is said that the right brain sings and the left brain talks. These two modes of expression relate like a function and its Fourier transform. Did (internal) singing precede (internal) speech? At the neuronal level this is suggested by the fact that an Alzheimer patient who has lost the understanding of language and the ability to talk can still understand singing and also sing. Indeed, the 1-D linear flux tube structures representing thoughts split when amyloid splits the neuronal connections, so that speech is not possible. 2-D structures survive even if some connections are split (see this). Note that these two modes relate to cognition and emotion. Emotion came first, as indeed the evolution of the nervous system demonstrates.

How did spoken language emerge?

How do the words of spoken language transform to internal speech and vice versa? What distinguishes words from ordinary sounds?

  1. The piezoelectric property of bio-matter makes possible the transformation of light to sound: now the light would consist of dark photons with energies E = heff×f in the bio-photon range (visible and UV) and frequencies f in the range of audible sound frequencies. Did this transformation somehow give rise to a genuine auditory experience of internal song/speech? Did internal singing/speech transform to heard singing/speech by virtual sensory input from brain to ears?

    In the TGD based model for sensory perception, hallucinations/psychedelic experiences, and imagination (see this) this kind of virtual sensory input is essential, since sensory qualia are at the level of sensory organs and the objects of the perceptive field are standardized mental images, a kind of artwork resulting from pattern recognition involving a lot of back-and-forth signalling between brain and sensory organs by dark photons.

    We would experience mere virtual sensory input in dreams (REM), hearing voices from the head, etc. The pineal gland ("third eye") receiving dark photon signals would receive internal speech and in the presence of DMT would channel it to the ears, producing heard internal song/speech. Jaynes argues that what he calls bicameral consciousness preceded modern consciousness and was like that of a schizophrenic: people heard their thoughts as voices in the head and interpreted these voices as the voices of Gods.

  2. Did speech and speech organs evolve from the attempts to mimic this genuinely heard internal singing/speech? This would answer the question why only certain kinds of sounds have meaning as words. Did this attempt provide the evolutionary pressure leading to the emergence of genes coding for speech organs and speech as a motor activity?

    Remark: An amusing analogy pops into mind: internal speech vs. internal song is like rap vs. ordinary singing, dropping out much of the emotional content.

    This cannot be the whole story. Language learning is a social phenomenon involving mimicry. A modern human cannot learn to speak by listening only to the voices in his head! One can however ask whether languages have some universal pattern. For instance, could very primitive languages depend only on the species? What is the role of collective consciousness: does it talk in the same manner to the individuals of the group, who then mimic this talk? Was the God of the bicamerals the collective consciousness of the group?
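The relation E = heff×f in item 1 fixes the order of magnitude of heff/h = n needed to carry a biophoton-scale energy at an audible frequency. A back-of-the-envelope sketch; the ~2 eV photon energy and ~1 kHz frequency are assumed representative values, not quantities fixed by the model:

```python
# Order-of-magnitude estimate of heff/h = n from E = heff * f,
# for a visible-range (biophoton) energy at an audible frequency.
h = 6.626e-34    # ordinary Planck constant, J*s
eV = 1.602e-19   # joules per electron volt

E_biophoton = 2.0 * eV  # assumed visible-range photon energy, J
f_audible = 1.0e3       # assumed audible frequency, Hz

n = E_biophoton / (h * f_audible)
print(f"heff/h ~ {n:.1e}")
```

This gives n of order 5×10¹¹, illustrating how enormous the required value of heff/h is compared to the ordinary Planck constant.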

See the article How did language emerge? or the chapter Quantum model of nerve pulse of "TGD and EEG".
