Friday, October 16, 2015

Further progress in the number theoretic anatomy of zeros of zeta

I wrote some time ago an article about number theoretical universality (NTU) in the TGD framework and ended up with the conjecture that the non-trivial zeros of zeta can be divided into classes C(p) labelled by primes p such that for a zero y in a given class C(p) the phase p^{iy} is a root of unity. The impulse leading to the idea came from an argument of Dyson referring to the evidence that the Fourier transform of the locus of non-trivial zeros of zeta is a distribution concentrated on powers of primes.

There is a very interesting blog post by Mumford, which led to a much more precise formulation of the idea and to an improved view of the Fourier transform hypothesis: the Fourier transform must be defined over all zeros, not only the non-trivial ones, and the trivial zeros give a background term allowing a better understanding of the properties of the Fourier transform.

Mumford essentially begins from Riemann's "explicit formula" in von Mangoldt's form.

∑_{p, n ≥ 1} log(p) δ_{p^n}(x) = 1 - ∑_k x^{s_k-1} - 1/[x(x^2-1)] ,

where p denotes a prime and s_k a non-trivial zero of zeta. The left hand side represents the distribution associated with the powers of primes. The right hand side contains a sum over cosines,

∑_k x^{s_k-1} = 2 ∑_k cos(log(x) y_k)/x^{1/2} ,

where y_k is the imaginary part of the k:th non-trivial zero. Apart from the factor x^{-1/2} this is just the Fourier transform over the distribution of zeros.

There is also a slowly varying term 1 - 1/[x(x^2-1)], which has an interpretation as the analog of the Fourier transform term but with the sum taken over the trivial zeros of zeta at s = -2n, n > 0. The entire expression is analogous to a "Fourier transform" over the distribution of all zeros.

Therefore the distribution for powers of primes is expressible as a "Fourier transform" over the distribution of both trivial and non-trivial zeros rather than only over the non-trivial zeros, as suggested by the numerical data to which Dyson referred. The trivial zeros give a slowly varying background term, which is large for small values of the argument x (poles at x = 0 and x = 1; note that also p = 0 and p = 1 appear effectively as primes), so that the peaks of the distribution are higher for small primes.
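As a sanity check on this picture, one can evaluate the truncated right-hand side numerically. The sketch below is my own illustration, not code from the article: it hardcodes the first ten imaginary parts y_k of the non-trivial zeros (standard tabulated values, truncated to six decimals) and shows that even this short sum is already elevated near prime powers such as x = 2 and x = 3 and suppressed in between.

```python
import math

# First ten imaginary parts y_k of the non-trivial zeros of zeta
# (well-known tabulated values, truncated to six decimals).
ZEROS = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062,
         37.586178, 40.918719, 43.327073, 48.005151, 49.773832]

def explicit_formula_rhs(x, zeros=ZEROS):
    """Truncated right-hand side 1 - 2*sum_k cos(y_k log x)/sqrt(x) - 1/(x(x^2-1)).
    With only finitely many zeros the delta spikes at prime powers are smoothed,
    but bumps at x = p^n are already visible."""
    osc = 2.0 * sum(math.cos(y * math.log(x)) for y in zeros) / math.sqrt(x)
    return 1.0 - osc - 1.0 / (x * (x**2 - 1.0))

for x in (2.0, 2.5, 3.0, 4.0, 5.0):
    print(f"x = {x}: {explicit_formula_rhs(x):+.3f}")
```

With the full (infinite) sum over zeros the bumps would sharpen into the delta functions log(p) δ_{p^n}(x) of the explicit formula.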

The question was how one can obtain this kind of delta function distribution concentrated on powers of primes from a sum over the terms cos(log(x) y_k) appearing in the Fourier transform of the distribution of zeros. Consider x = p^n. One must get constructive interference. Stationary phase approximation is the notion in terms of which a physicist thinks of this. The argument was that for a given x = p^n a destructive interference occurs for those zeros for which the cosine does not correspond to the real part of a root of unity: summing over such y_k gives, in the random phase approximation, more or less zero. To get something non-trivial, y_k must be proportional to 2π × n(y_k)/log(p) in the class C(p) to which y_k belongs. If the number of these y_k:s in C(p) is infinite, one obtains a delta function in a good approximation, with destructive interference for other values of the argument x.
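The membership condition for the conjectured classes C(p) can be made concrete with a small helper. This is my own illustration, not code from the article: `class_membership` is a hypothetical name, and for simplicity it tests only the special case p^{iy} = 1, i.e. y = 2π n/log(p) for integer n; a general root of unity would allow rational multiples as well.

```python
import math

def class_membership(y, p, tol=1e-6):
    """Check whether p^{iy} is (approximately) 1, i.e. whether
    y = 2*pi*n/log(p) for some integer n -- the simplest case of the
    conjectured condition defining the class C(p).
    Returns (nearest integer n, residual, in_class)."""
    t = y * math.log(p) / (2.0 * math.pi)
    n = round(t)
    residual = abs(t - n)
    return n, residual, residual < tol

# A y constructed to lie in C(3) with n = 7:
print(class_membership(2 * math.pi * 7 / math.log(3), 3))
# The first zero y_1 = 14.134725... is not of this special form for p = 2:
print(class_membership(14.134725, 2))
```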

The guess that the number of zeros in C(p) is infinite is encouraged by the behaviors of the density of primes on one hand and of the density of zeros of zeta on the other hand. The number of primes smaller than a real number x goes like

π(x) = #(primes < x) ∼ x/log(x)

in the sense of distribution. The number of zeros along the critical line goes like

#(zeros < t) ∼ (t/2π) × log(t/2π)

in the same sense. If the real axis and the critical line have the same metric measure, then one can say that the number of zeros per number of primes in an interval of length T behaves roughly like

#(zeros < T)/#(primes < T) ∼ log(T/2π) × log(T)/(2π) ,

so that in the limit T → ∞ the number of zeros associated with a given prime is infinite. This assumption of course makes the argument only a poor man's argument.
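The density comparison above is easy to check numerically. The sketch below is my own illustration with hypothetical function names: it counts primes with a sieve to confirm π(x) ∼ x/log(x) and evaluates the quoted ratio log(T/2π) × log(T)/(2π), which indeed grows without bound.

```python
import math

def prime_pi(x):
    """Count primes <= x with a simple sieve of Eratosthenes,
    to compare with the asymptotic x/log(x)."""
    n = int(x)
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return sum(sieve)

def zeros_per_prime_ratio(T):
    """#(zeros < T) / #(primes < T) from the quoted densities:
    (T/2pi) log(T/2pi) divided by T/log(T) = log(T/2pi) * log(T) / (2pi)."""
    return (math.log(T / (2 * math.pi)) * math.log(T)) / (2 * math.pi)

print(prime_pi(10_000), 10_000 / math.log(10_000))  # exact count vs asymptotic
for T in (1e3, 1e6, 1e12):
    print(f"T = {T:.0e}: zeros per prime ~ {zeros_per_prime_ratio(T):.1f}")
```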

See the chapter Unified Number Theoretic Vision of "Physics as Generalized Number Theory" or the article Could one realize number theoretical universality for functional integral?.

For a summary of earlier postings see Links to the latest progress in TGD.


At 3:25 PM, Anonymous Anonymous said...

Matti, what do you think of J-F Geneste's LENR theory:

Sources and background:

At 8:59 PM, Anonymous said...

Thank you for an interesting question. I have proposed a TGD based interpretation for the findings of Urutskoev and Fredericks: html:// . The traces seem to be often created in the presence of biological material.

To me the experimental findings of Urutskoev, Fredericks, and Gariaev are extremely interesting since they could represent one of the first direct detections of dark matter in the TGD sense. I do not like some elements of his theory - say the aether. Also the identification of the traces as a very special kind of surfaces is ad hoc, but it shows that Urutskoev is already near to realizing that the traces are not actually tracks of particles but static geometric objects.

The theory of Urutskoev interprets the traces as tracks created by some exotic particles - an identification in terms of ordinary particles, or indeed of particles of any kind, does not work. I would identify them as dark magnetic flux tubes becoming visible in the emulsion. For instance, it takes time for the tracks to appear: the tubes must be present for a long enough time, whereas particle tracks would be generated immediately.

Dark matter's large h_eff would be associated with the magnetic flux tubes, and the emulsion would generate a kind of "photograph" of the tube. In Gariaev's experiment one also obtains an image consisting of tubular structures identifiable as flux tubes containing dark matter: visible photons transform to dark ones, reflect from the charged dark matter in the flux tube, transform back to ordinary ones, and the image is created. Now the first step is not present: part of the dark photon radiation from the flux tube is transformed to ordinary photons (the interpretation is as biophotons in biology) and creates the image of the flux tube in the photo-emulsion.

In the TGD framework dark matter is generated at quantum criticality, which has ordinary criticality (maybe also thermal criticality) as its correlate. In the case of Urutskoev the "powders" were created in an electrical discharge - criticality against dielectric breakdown is in question. Tesla's experiments also involved a similar criticality, and I have proposed the generation of dark matter as an explanation for the strange findings.

The TGD model for LENR relies on this picture too: the weak interaction length scale would be scaled up from 10^-17 meters to about the atomic length scale, and weak bosons would be effectively massless below that scale, so that the weak interaction would be of the same strength as the electromagnetic interaction. A proton shot at the target would emit a dark charged weak boson (W^+), absorbed by the target, and make its way into the target as a neutron feeling no Coulomb wall. After that the strong reaction would proceed and lead to cold fusion.

Biofusion could also take place in a different manner: by the generation of dark proton sequences at dark flux tubes as Pollack's exclusion zones are generated. These dark proton sequences - dark nuclei - could transform to ordinary nuclei with some probability, liberating quite a large energy if the nuclear binding energy scales like 1/h_eff! This could occur in the splitting of water and explain the strange properties assigned to Brown's gas, such as the ability to melt metals at low temperature.

At 9:08 PM, Anonymous said...

Sorry for a stupid error!! Since the author was missing from the beginning, I thought that the theory was by Urutskoev. It was Geneste's theory about the findings of Urutskoev!

It is mentioned that the imbedding of the Lobatchevski plane to Euclidean 3-space fails globally (I think this is just a negative constant curvature surface): corals consist of pieces of this kind. The imbeddability to a non-Archimedean variant of E^3 is mentioned. I do not however think that the traces, identified as flux tubes, are closed tubular surfaces.

