https://matpitka.blogspot.com/

Sunday, March 23, 2025

p-Adic length scale hypothesis and Mandelbrot fractals

This post (see this) is a continuation of the previous one, which suggested a definition of generalized p-adic numbers in which functional powers of a map g1: C2→ C2, with g1 a prime polynomial Pp of prime degree p with coefficients in an extension E of rationals of degree lower than p, would define the analogs of powers of a p-adic prime. For simplicity, one can restrict E to rationals. The question was whether it might be possible to understand p-adic primes satisfying the p-adic length scale hypothesis as ramified primes appearing as divisors of the discriminant of the iterate of g1=P.

Generalized p-adic numbers as such are a very large structure and the systems satisfying the p-adic length scale hypothesis should be physically and mathematically special. Consider the following assumptions.

  1. Consider generalized p-adic primes restricted to the case when f2 is not affected in the iteration, so that one has g=(g1,Id) and g1= g1(f1). This would conform with the hypothesis that f2 defines the analog of a slowly varying cosmological constant. If the small prime corresponds to q=2, the iteration reduces to the iteration appearing in the construction of Mandelbrot fractals and Julia sets. If one assumes g1= g1(f1,f2), f2 defines the analog of the complex parameter appearing in the definition of Mandelbrot fractals. The values of f2 for which the iteration converges would correspond to the Mandelbrot set, whose boundary is fractal.
  2. For the generalized p-adic numbers one can restrict the consideration to mere powers g1°n as analogs of powers p^n. This would give a sequence of iterates as analogs of abstractions. This suggests g1(0)=0.
  3. The physically interesting polynomials g1 should have special properties. One possibility is that for q=2 the coefficients of the simplest polynomials make sense in the finite field F2, so that the polynomials P2(z)= z^2 +ε z= z(z+ε), z=f1, ε= ± 1, are of special interest. For q>2 the coefficients could be analogous to the elements of the finite field Fq represented as phases exp(i2π k/q).
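As an added numerical aside (not part of the original argument), the q=2 case above reduces to the standard Mandelbrot iteration z → z^2 + c, with c playing the role assigned here to f2: a parameter belongs to the Mandelbrot set exactly when the orbit of z=0 stays bounded, and |z|>2 is a sufficient escape criterion.

```python
# Sketch of the q=2 Mandelbrot iteration z -> z^2 + c; c plays the role of
# the parameter f2 above. Membership in the Mandelbrot set = bounded orbit.

def orbit_escapes(c: complex, max_iter: int = 200) -> bool:
    """Return True if the orbit of 0 under z -> z^2 + c escapes |z| > 2."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return True
    return False

# c = 0, c = -1 and c = 1/4 give bounded orbits (inside the Mandelbrot set),
# while real c > 1/4, e.g. c = 0.3 or c = 1, escapes to infinity.
inside = [c for c in (0, -1, 0.25, 0.3, 1) if not orbit_escapes(c)]
```

The boundary between the bounded and escaping parameter values is the fractal Mandelbrot boundary referred to above.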
Consider now what these premises imply.
  1. Quite generally, the roots of the iterate g1°n are given by R(n)= g1°(-n)(0). g1(0)=0 implies that the set Rn of roots at level n is obtained as Rn= Rn(new) ∪ Rn-1, where Rn(new) consists of the new roots emerging at level n. Each step gives the q^(n-1) roots of the previous level and q^(n-1)(q-1) new roots.
  2. It is possible to solve analytically the roots of iterates of polynomials of degree 2 or 3. Hence for q= 2 and 3 (there is evidence for the 3-adic length scale hypothesis) the inverse of g1 can be written analytically. The roots at level n are obtained by solving the equation P(rn)= rn-1,k for all roots rn-1,k at level n-1. The roots in Rn-1(new) give the q^(n-1)(q-1) new roots in Rn(new).
  3. For q=2, the iteration would proceed as follows:

    0→ {0, r1} → {0,r1} ∪ {r21, r22} → {0,r1} ∪ {r21, r22}∪ {r121, r221, r122, r222} → ... .

  4. The expression for the discriminant D of g1°n can be deduced from the structure of the root set. D satisfies the recursion formula D(n)= D(n,new)× D(n-1) × D(n,new;n-1). Here D(n,new) is the product

    D(n,new)= ∏ri,rj ∈ Rn(new), i<j (ri-rj)^2

    and D(n,new;n-1) is the product

    D(n,new;n-1)= ∏ri ∈ Rn(new), rj ∈ Rn-1 (ri-rj)^2.

  5. At the limit n→ ∞, the set Rn(new) approaches the boundary of the Fatou set defining the Julia set.
As an example one can look at the iteration of g1(z)= z(z-ε).
  1. The roots of z(z-ε)=0 are {0,r1}={0,ε}. At the second level, the new roots satisfy z(z-ε)=r1=ε and are given by {(ε/2)(1 ± (1+4r1)^(1/2))}. At the third level the new roots satisfy z(z-ε)=r2 for the second level roots r2 and are given by {(ε/2)(1 ± (1+4r2)^(1/2))}.
  2. The point z=0 is a fixed point and z=ε is mapped to it. Assume ε=1 for definiteness. The image point w(z)= z(z-1) satisfies the condition |w(z)/z|=|z-1|. For the disk D(1,1): |z-1| ≤ 1 the image points therefore satisfy |w| ≤ |z| ≤ 2 and belong to the disk D(0,2): |z| ≤ 2.

    For the points in D(0,2)\ D(1,1) the image point satisfies |w|=|z-1||z|, so that (|z|-1)|z| ≤ |w| ≤ (|z|+1)|z|. Inside D(0,2)\ D(1,1) this gives 0 ≤ |w| ≤ 6, so that the image point can, but need not, remain inside D(0,2).

    For the points z outside D(0,2) one has |w|=|z-1||z| ≥ (|z|-1)|z| > |z| > 2, so that the iteration leads to infinity there.

  3. The inverse of the iteration, relevant for finding the roots of g1°(-n)(0), leads from the exterior of D(0,2) towards its interior but cannot lead from the interior to the exterior, since in that case the forward iteration would lead from the exterior to the interior. Hence the roots wn at all levels n belong to the disk D(0,2).
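The structure of the root sets can be checked numerically. Below is a small added sketch (not in the original post), assuming the example polynomial g1(z)= z(z-1), i.e. ε=1: the level-n root set has 2^n distinct elements and all roots stay inside the disk D(0,2).

```python
# Inverse iteration for g1(z) = z^2 - z: solve z^2 - z = r for each root r
# of the previous level. Checks |R_n| = 2^n and containment in D(0,2).
import cmath

def inverse_step(roots):
    """Preimages under z -> z^2 - z: z = (1 +/- sqrt(1 + 4r)) / 2."""
    new = []
    for r in roots:
        s = cmath.sqrt(1 + 4 * r)
        new += [(1 + s) / 2, (1 - s) / 2]
    return new

def root_levels(n):
    """Return the lists R_1, ..., R_n of roots of the n:th inverse iterates of 0."""
    levels, roots = [], [0j]
    for _ in range(n):
        roots = inverse_step(roots)
        levels.append(roots)
    return levels

levels = root_levels(8)
# distinct roots per level (rounded to merge numerical duplicates, if any)
sizes = [len(set(round(z.real, 9) + 1j * round(z.imag, 9) for z in R))
         for R in levels]
max_abs = max(abs(z) for z in levels[-1])   # should not exceed 2
```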
A conjecture deserving to be killed is that the discriminant D of the iterate has Mersenne primes as factors for primes n defining Mersenne primes Mn= 2^n-1, and that also for other values of n, D contains as factors ramified primes near 2^n.

See the article A more detailed view about the TGD counterpart of Langlands correspondence or the chapter About Langlands correspondence in the TGD framework.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Dark energy weakens

There is a Quanta Magazine post (see this) telling about the evidence, found by the DESI collaboration, that dark energy is getting weaker.

In TGD, the new view of space-time as 4-D surfaces in H=M4×CP2 predicts the analog of the cosmological constant as well as its weakening. String tension characterizes the energy density of a magnetic monopole flux tube, a 3-D surface in H=M4×CP2. String tension contains a volume part (direct analog of Λ) and a Kähler magnetic part, and it is a matter of taste whether one identifies the entire string tension or only the volume contribution as the counterpart of Λ. In the primordial cosmology (see this), cosmic string sheets have 2-D M4 projection and 2-D CP2 projection.

Cosmic strings are unstable against the thickening of their 2-D M4 projection, which means that the energy density is gradually reduced in a sequence of phase transitions thickening the cosmic strings, so that they become monopole flux tubes and give rise to galaxies and stars. The energy of cosmic strings is transformed to ordinary particles. This process is the TGD analog of inflation. No inflaton fields are required.

The string tension is gradually reduced in these phase transitions, and in this sense one could say that dark energy weakens. For instance, for hadronic strings the string tension is rather small as compared to its original value during the primordial phase dominated by cosmic strings; at the molecular level the string tension is smaller still.

The primordial string dominated phase was followed by a transition to the radiation dominated cosmology and the emergence of Einsteinian space-time with 4-D M4 projection, so that general relativity and quantum field theory became good approximations explaining a lot of physics. The quantum field theory approximation cannot however explain the structures appearing in all scales: here monopole flux tubes are necessary.

See for instance the article About the recent TGD based view concerning cosmology and astrophysics or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, March 20, 2025

Witt vectors and polynomials and the representation of generalized p-adic numbers as space-time surfaces

The discussions with Robert Paster, who advocates the importance of universal Witt vectors (UWVs) and Witt polynomials (see this) in the modelling of the brain, have been very inspiring. As a special case, Witt vectors code for p-adic number fields. Witt polynomials are characterized by their roots, and the TGD view of space-time surfaces both as generalized numbers and as representations of ordinary numbers inspires the idea that the roots of suitably identified Witt polynomials could be represented as space-time surfaces in the TGD framework. This would give a representation of generalized p-adic numbers as space-time surfaces.

Could the prime polynomial pairs (g1,g2): C2→ C2 and (f1,f2): H=M4× CP2→ C2 (perhaps states of pure, non-reflective awareness) characterized by small primes give rise to p-adic numbers represented in terms of space-time surfaces such that these small primes could correspond to ordinary p-adic primes? The same question applies to the pairs (f1,f2), which are functional primes.

  1. Universal Witt vectors and polynomials can be assigned to any commutative ring R, not only p-adic integers. Witt vectors Xn define sequences of elements of a ring R and universal Witt polynomials Wn(X1,X2,...,Xn) define a sequence of polynomials of degree n. In the case of a p-adic number field, Xn corresponds to the pinary digit of the power p^n and can be regarded as an element of the finite field Fp, which can also be mapped to a phase factor exp(i2π k/p). The motivation for Witt polynomials is that multiplication and sum of p-adic numbers can be done in a component-wise manner for Witt vectors, whereas for pinary digits the sum and product affect the higher pinary digits.

    In the general case, the Witt polynomial as a polynomial of several variables can be written as Wn(X1,...,Xn)= ∑d|n d Xd^(n/d), where d runs over the divisors of n, with 1 and n included.

  2. The function pairs (f1,f2): H→ C2 define a ring-like structure. Product and sum are well-defined for these pairs. A function pair related to (f1,f2) by multiplication by a function pair (h1,h2), which vanishes nowhere in the CD, defines the same space-time surface as the original one. Note that also the powers (f1^n,f2^n) define the same 4-surface as (f1,f2).

    The degrees for the product of polynomial pairs (P1,P2) and (Q1,Q2) are additive. The degree of a sum is not larger than the larger of the two degrees, and it can happen that the highest powers sum up to zero so that the degree is smaller. This is reminiscent of the properties of the non-Archimedean norm of p-adic numbers. The zero element defines the entire H as a root and the unit element does not define any space-time surface as a root.

    For the pairs (g1,g2) also functional composition is possible and the degrees are multiplicative in this operation.

  3. Functional primes (f1,f2) define analogs of ordinary primes, and the polynomials whose degrees with respect to the 3 complex coordinates of H are below the primes associated with these coordinates are analogous to pinary digits. Also the pairs (g1,g2) define functional primes, both with respect to powers defined by the element-wise product and with respect to functional composition.
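The component-wise arithmetic quoted above can be made concrete. The sketch below (added; standard Witt vector algebra, not from the post) computes the ghost components w_n(X)= ∑d|n d Xd^(n/d) and solves the Witt sum recursively from the requirement that ghost components add component-wise; the resulting Witt components come out integral, which is the point of the construction.

```python
# Ghost components of universal Witt vectors and the component-wise sum.
from fractions import Fraction

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def ghost(x):
    """Ghost components w_1..w_N of a Witt vector x = (x_1, ..., x_N)."""
    N = len(x)
    return [sum(d * x[d - 1] ** (n // d) for d in divisors(n))
            for n in range(1, N + 1)]

def witt_sum(x, y):
    """Witt vector z with ghost(z) = ghost(x) + ghost(y), solved recursively."""
    N = len(x)
    w = [a + b for a, b in zip(ghost(x), ghost(y))]
    z = []
    for n in range(1, N + 1):
        partial = sum(d * z[d - 1] ** (n // d) for d in divisors(n) if d < n)
        z.append((Fraction(w[n - 1]) - partial) / n)   # division by n cancels
    return z

x, y = (2, 3, 1, 5), (1, 4, 2, 7)
z = witt_sum(x, y)      # components are integers: [3, 5, -3, -4]
```

Restricting the index n to powers p^k recovers the p-adic Witt vectors mentioned in the text.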

Generalization of Witt polynomials

Could a representation of polynomials, in particular the analogs of Witt polynomials, in terms of their roots, in turn represented as space-time surfaces, be a universal feature of mathematical cognition? If so, cognition would really create worlds! In Finland we have Kalevala as a national epic and it roughly says that things were discovered by first discovering the word describing the thing. Something similar appears in the Bible: "In the beginning was the Word, and the Word was with God, and the Word was God." Word is world!

Could p-adic numbers or their generalization for functional primes (f1,f2) have a representation in terms of Witt polynomials coded by their roots defining space-time surfaces?

  1. Wn is a polynomial of n arguments Xk, whereas the arguments of the polynomials defining space-time surfaces correspond to 3 complex H coordinates. In the p-adic case the divisors d are powers of p. The Xd are analogous to elements of a finite field appearing as coefficients of powers of p.
  2. There are two cases to consider: the Witt polynomials assignable to the space-time surfaces (f1,f2)=(0,0): H→ C2 using element-wise sum and product, and the pairs g=(g1,g2): C2→ C2, for which one can consider the sum and the element-wise product giving g^n= (g1^n,g2^n), or the sum and the functional composition giving g(g(...g)...). The latter option looks especially attractive. One reason is that by the previous considerations the prime surface pairs (f1,f2) might be too simple. For instance, the iterations of (g1,g2) with prime degree 2,3,... could give a justification for the p-adic length scale hypothesis and its generalization.
Consider first the pairs (f1,f2): H→ C2.
  1. If the space-time surface (f1,f2)=(0,0) is prime with respect to the functional composition f→ g(f), it naturally generalizes the p-adic prime p so that one would have p^k→ (f1,f2)^k and n1=n2.

    The Xk are the analogs of pinary digits as elements of finite fields. Could they correspond to polynomials whose 3 degrees are smaller than the corresponding prime degrees assignable to the prime polynomial (f1,f2)?

  2. With these identifications it might be possible to generalize the Witt polynomials to their functional variants as such and find their roots represented as space-time surfaces. These surfaces would represent the functional analog of the p-adic number field. One can also assign to the functional p-adic numbers ramified primes defining ordinary p-adic primes. Each functional p-adic number would define ramified primes and these would correspond to the p-adic primes.
  3. The fi are labelled by 3 ordinary primes pr(fi), r=1,2,3, rather than by a single prime p, and by the earlier argument one can restrict the condition to f1.

    Every functional p-adic number corresponds to its own ramified primes determined by the roots of its Witt polynomial. There is a huge number of these generalized p-adic numbers. Could some special functional p-adic primes correspond to elementary particles? The simplest generalized p-adic number corresponds to a functional prime, and in this case the surface in question would correspond to (f1,f2)=(0,0) (could this be interpreted as stating the analog of the condition x mod p=0?). These prime surfaces might be too simple, and it is not easy to see how the large values of p-adic primes could be understood.

One can ask whether analogs of ramified primes exist for the Witt polynomials assignable to the abstraction hierarchies g(g(...(f)...)) and to the powers g^n=(g1^n,g2^n), for which the degree of the polynomials is n×p, p the prime assignable to g.
  1. The ramified primes for the Witt polynomials for g(g(...(f)...)) and g^n would define analogs of powers p^n of p-adic numbers. Note that the roots of g(g(...(f)...)) are a property of g(g(...(g)...)) and do not depend on f, in case that they exist as surfaces inside the CD.
  2. The interesting question is whether and how these ramified primes could relate to the ramified primes assignable to a generalized Witt polynomial Wn. The iterated action of a prime g giving g(g(...(f)...)) is the best candidate. There is hope that even the p-adic length scale hypothesis could be understood in terms of ramified primes assignable to some functional prime. The large values of p-adic primes require very large ramified primes for the functional primes (f1,f2). This suggests that the iterate g(g....g(f)...) acting on a prime f is involved. For p∼ q^k, the kth power of g characterized by the prime q is the first guess.
See the article A more detailed view about the TGD counterpart of Langlands correspondence or the chapter About Langlands correspondence in the TGD framework.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, March 19, 2025

Ytterbium anomaly

Sabine Hossenfelder talked about a very interesting nuclear physics anomaly known as the Ytterbium anomaly (see this). There is a Phys. Rev. Lett. article about this anomaly titled "Probing New Bosons and Nuclear Structure with Ytterbium Isotope Shifts" (see this). What makes this anomaly so interesting is that it is reported to have a significance of 23 sigma! 5 sigma is usually regarded as the threshold which makes it possible to speak of a discovery.

  1. Ytterbium (see this) is a heavy nucleus with charge Z=70 and mass number A=173, so that the neutron number would be N= 103 for this isotope (it seems that the definition of the isotope number varies depending on whether it is defined in terms of mass or the actual number of nucleons). The mass numbers of Yb vary in the range 168-176. Ytterbium is a rare earth metal with electron configuration [Xe]4f^14 6s^2.
  2. The electronic state seems to be very sensitive to the number of neutrons in Yb, and this is why Yb is so interesting from the point of view of atomic physics. The anomaly is related to the isotope shift of the electron energies. The so-called frequency comb method amplifies the mismatch with the standard theory. A mismatch indeed exists and could be understood in terms of a new particle with a mass of a few MeV.
  3. There is an earlier anomaly known as the Atomki anomaly (see this and this) explained by what is called the X boson with mass in the range 16-17 MeV, and the Yb anomaly could be explained in terms of the X boson (see this).
I have discussed the X boson in the TGD framework and proposed that it could be a pion-like state (see this).
  1. The proposed model provides new insights into the relation between weak and strong interactions. One can pose several questions. Could X be a scaled variant of the pion? Or could weak interaction physics have a scaled-down copy at the nuclear scale?
  2. The latter option would mean that some weak boson could become dark and its Compton length would be scaled up by the factor heff/h to the nuclear p-adic length scale. I have proposed a scaled-up copy of hadron physics characterized by the Mersenne prime M89 with a mass scale, which is 512 times higher than for ordinary hadrons characterized by M107. In high energy nuclear collisions, in which quark-gluon plasma is believed to be created at quantum criticality, M89 hadrons would be generated. They would be dark and have heff/h=512, so that the scaled-up hadronic Compton length would be the same as for ordinary hadrons (see this and this). M89 hadron physics would explain a large number of particle physics anomalies and there is considerable evidence for it. The most radical implications of M89 hadron physics would be for solar physics (see this).
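The 512 quoted above is just the p-adic scaling arithmetic made explicit (an added check, using the standard TGD rule that the p-adic mass scale is proportional to 2^(-k/2)):

```python
# M89 vs M107 mass scale: ratio 2^((107-89)/2) = 2^9 = 512.
# An equal heff/h = 512 scales the dark Compton length back to the
# ordinary hadronic one, as stated in the text.

mass_ratio = 2 ** ((107 - 89) // 2)    # M89 mass scale / M107 mass scale
compton_ratio = 512 / mass_ratio       # dark scaling compensates exactly
```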
Could also weak interaction physics have dark variants at some kind of quantum criticality, and could the Yb anomaly and the Atomki anomaly be understood as manifestations of this new physics?
  1. Weak bosons are characterized by p∼ 2^k, k=91, deduced from the mass scale of weak bosons. A little confession: for a long time I believed that the p-adic mass scale of weak bosons corresponds to k=89 rather than k=91. For the Higgs boson the mass scale would correspond to M89. TGD also predicts a pseudoscalar variant πW of the Higgs boson. Could the dark variant of the pseudoscalar Higgs πW, with Compton length assignable to the X boson, be involved?
  2. The scaled-up weak boson should have a p-adic length scale which equals the nuclear length scale or is longer. Dark πW could become effectively massless below its Compton length. The mass of the X boson, about 16-17 MeV, corresponds to a Compton length of 6× 10^-14 m, which is by a factor 3 longer than the nuclear p-adic length scale L(k=113)∼ 2× 10^-14 m. For k(πW)=91, heff/h= 2^((113-91)/2)=2^11 would give the nuclear Compton scale. k=115 would give the length scale 4× 10^-14 m, not far from 6× 10^-14 m, and would require heff/h= 2^((115-91)/2)=2^12. If πW corresponds to k=89, then heff/h= 2^((115-89)/2)=2^13.

    If the πW Compton scale is scaled up from the M91 scale by the factor 2^((113-89)/2)=2^12, it corresponds for the ordinary Planck constant to a mass of about m(πW)/2^12. This would give m(πW)∼ 64 GeV, essentially one half of the mass of the ordinary Higgs, about 125.35 GeV, and would correspond to the p-adic length scale k=91 just like the other weak bosons.

  3. Could dark πW give rise to a dark weak force between electron and nucleus, inducing the anomalous isotope shift of the electron energies? The increase of the Compton length would suggest the scaling up of the electron-nucleus weak interaction geometric cross section by the ratio (mW/mX)^4. The weak cross section for dark πW would have the same order of magnitude as nuclear strong interaction cross sections since, apart from numerical factors depending on the incoming four-momenta, the weak cross section for electron-nucleon scattering goes like GF∼ 1/TeV^2. This would scale up to 2^22/TeV^2 ∼ 4/GeV^2 (see this). The alternative manner to understand the enhancement would be to assume that weak bosons are massless below the nuclear scale.
  4. Quantum criticality would be essential for the generation of dark phases and long-range quantum fluctuations, of which the emergence of scaled-up dark weak bosons would be an example. Yb is indeed a critical system in the sense that the isotope shift effects are large: this is one reason why it is so interesting. Why would the quantum criticality of some Yb isotopes induce the generation of dark weak bosons?
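The length scale estimates in the list above can be restated numerically. This is an added illustration: the p-adic length scales satisfy L(k)= 2^((k-113)/2) L(113) with L(113)∼ 2×10^-14 m as quoted, and the non-reduced Compton length 2π(hbar c)/(mc^2) of a 17 MeV boson comes out as roughly 7×10^-14 m, consistent with the 6×10^-14 m used in the text.

```python
# p-adic length scales and the Compton length of the ~17 MeV X boson.
import math

HBARC = 197.327                              # hbar*c in MeV * fm

def padic_length(k, L113=2.0e-14):
    """L(k) = 2^((k-113)/2) * L(113), in meters."""
    return 2 ** ((k - 113) / 2) * L113

def compton_length(mass_mev):
    """Non-reduced Compton length 2*pi*hbar*c / (m c^2), in meters."""
    return 2 * math.pi * HBARC / mass_mev * 1e-15

L115 = padic_length(115)                     # 4e-14 m, as quoted above
lam_X = compton_length(17.0)                 # ~7e-14 m
heff_91 = 2 ** ((113 - 91) // 2)             # 2^11 = 2048
heff_89 = 2 ** ((115 - 89) // 2)             # 2^13 = 8192
```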
It should be noticed that for heff/h=512 assigned to dark M89 hadrons, the scaled-down Compton length of dark πW would be by a factor 8 shorter and correspond to a mass of 128 MeV, not far from the pion mass. What could this mean? Could the ordinary pion correspond to a dark variant of πW? This looks strange since usually strong and weak interactions are regarded as completely separate, although CVC and PCAC suggest that they are closely related. In TGD, the notion of induced gauge field implies extremely tight connections between strong and weak interactions.

See the article X boson as evidence for nuclear string model or the chapter Nuclear string hypothesis.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Sunday, March 16, 2025

Holography= holomorphy hypothesis, small primes versus large p-adic primes, and the p-adic length scale hypothesis

We have had highly interesting Facebook discussions with Robert Paster relating to p-adic mathematics (see this). Robert Paster finds the so-called universal Witt vectors (UWVs) a more promising approach than p-adic numbers, whereas I am a proponent of adelic physics in which all p-adic number fields, and also a hierarchy of their extensions induced by extensions of rationals, are in a central role.

UWVs generalize the Witt vectors (WVs) (see this), which provide a representation of a p-adic number field in terms of Witt polynomials whose order is a power of the prime p; the pinary expansion can be represented in terms of the hierarchy of Witt vectors Wk, for which sum and product are component-wise operations. The UWVs are defined for any positive integer n, and Wn codes for all factors of the integer n and reduces to a collection of the Wp^k when n is restricted to n=p^k: it is denoted by Wk in this case.

An interesting question is whether UWVs as a number theoretic notion might have some role in TGD.

These discussions led to some progress in the longstanding attempts to understand how the small primes and the p-adic primes emerging from p-adic mass calculations, which are very large compared to them, could relate, and what the interpretation of the small primes could be. In the sequel I will discuss this aspect rather than UWVs or WVs.

The polynomials (P1,P2) and also the rational functions (g1=P1/Q1,g2=P2/Q2) appearing in the holography= holomorphy vision (see this) form a well-defined complexity hierarchy.

  1. In the general case, the space-time surfaces (P1,P2)=(0,0) have several disjoint components. This is the case if (f1,f2) is a composite function of the form f=g(h), in other words (f1,f2)= (g1(h1,h2),g2(h1,h2)). The space-time surfaces correspond to the roots hi=ri, which are disjoint.

    To avoid a disjoint union of space-time surfaces, fi must be a prime polynomial with respect to functional composition. For polynomials of a single variable, this is the case if the degree of the polynomial is prime, but this is not a necessary condition for primeness. As already found, this condition generalizes to the polynomials of 3 complex variables considered in the present case.

    Space-time surfaces of this kind are excellent candidates for fundamental objects, and the polynomial in question would have prime degree with respect to each of the 3 complex coordinates of H: this makes 3, presumably small, primes. The composites formed of maps g and of these fundamental function pairs f would define cognitive representations of the surface defined by f as a kind of statements about statements. An interesting question is whether these surfaces could correspond to elementary particles.

  2. There is also a natural measure of complexity: the number of maps g, corresponding to prime polynomials with gi=Pi/Qi, appearing in the functional composite with a pair of prime polynomials (f1,f2). Here the prime polynomials Pi must have degree higher than 1 in order to increase the complexity. In this case, 2 primes would characterize the prime polynomial Pi.
What could be the physical interpretation of the prime polynomials (f1,f2) and (g1,g2), in particular (g1,Id), and how does it relate to the p-adic length scale hypothesis (see this)?
  1. Probably the primes as degrees of prime polynomials do not correspond to the very large p-adic primes (M127=2^127-1 for the electron) assigned in p-adic mass calculations to elementary particles and tentatively identified as ramified primes (see this) appearing as divisors of the discriminant of a polynomial, defined as the product of root differences, which could correspond to that for g=(g1,Id).
  2. The p-adic length scale hypothesis states that the physically preferred p-adic primes correspond to powers p∼ 2^k. Also powers p∼ q^k of other small primes q can be considered, and there is empirical evidence for time scales coming as powers of q=3 (see this and this). For Mersenne primes Mn= 2^n-1, n is prime, and this inspires the question whether k could be prime quite generally. The proposal has been that p and k would correspond to a very large and a small p-adic length scale. Could the 3 primes characterizing the prime polynomials fi correspond to the small primes q, and could the ramified primes p∼ 2^k be associated with the polynomials obtained as their iterated functional composites?
Could small-p p-adicity make sense and could the p-adic length scale hypothesis relate small-p p-adicity and large-p p-adicity?
  1. Could the p-adic length scale hypothesis in its basic form reflect 2-adicity at the fundamental level, or could it reflect that p=2 is the degree of the lowest prime polynomials, certainly the most primitive cognitive level? Or could it reflect both?
  2. Could p∼ 2^k emerge when the action of a polynomial P of degree 2 with respect to, say, the complex coordinate w of M4 on a polynomial Q is iterated functionally: Q→ P(Q) → P(P(...P...)(Q)), giving n=2^k disjoint space-time surfaces as representations of the roots? For p=2 the iteration is the procedure giving rise to Mandelbrot fractals and Julia sets. Electrons would correspond to objects with 127 iterations and a cognitive hierarchy with 127 levels! Could p= M127 be a ramified prime associated with P(P(...P...)(P))?

    If this is the case, p∼ 2^k and k would tell about the cognitive abilities of the electron and not so much about the system characterized by the function pair (f1,f2) at the bottom. Could the 2^k disjoint space-time surfaces correspond to a representation of 2^k binary numbers as disjoint space-time surfaces, realizing binary mathematics at the level of space-time surfaces? This representation brings in mind the totally disconnected compact-open p-adic topology. Cognition indeed decomposes the perceptive field into objects.

  3. This generalizes to a prediction of hierarchies p∼ q^k, where q is a small prime as compared to p, identifiable as the prime degree of a prime polynomial with respect to, say, the variable w.
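As an added aside, the electron's prime M127 = 2^127-1 mentioned above really is a Mersenne prime, and this is checkable in milliseconds with the Lucas-Lehmer test: M_p is prime iff s(p-2)=0 for s(0)=4, s(k+1)= s(k)^2-2 mod M_p.

```python
# Lucas-Lehmer primality test for Mersenne numbers 2^p - 1.

def lucas_lehmer(p: int) -> bool:
    """Primality of the Mersenne number 2^p - 1 for an odd prime exponent p."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

M127_prime = lucas_lehmer(127)     # the electron's p-adic prime is prime
M11_prime = lucas_lehmer(11)       # 2^11 - 1 = 2047 = 23 * 89 is not
```

Note that primeness of the exponent is necessary but not sufficient, as the k=11 case shows.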
A highly interesting observation is that the numbers allowing an expansion in powers of an integer n, whose prime factors belong to some set, can be regarded as p-adic integers for all these primes. One might say that these numbers belong to the intersection of these number fields. This could allow gluing the p-adic factors of adeles to a single continuous structure and suggests the possibility of multi-p p-adicity. The discriminant D of a polynomial, defined as the product of root differences, can be expressed as a product of powers of so-called ramified primes, and the question is which of them is physically selected and why. Could it be that the expansions of physical quantities are in powers of D? I have also proposed that D, or a suitable power of it, is the number theoretical counterpart of the exponent of the Kähler function as a vacuum functional.
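The multi-p observation above can be sketched concretely (an added example, using n=6=2·3): a series in powers of 6 converges both 2-adically and 3-adically, since the p-adic valuation of 6^k is k for both p=2 and p=3.

```python
# p-adic valuations of powers of 6: the same series converges 2-adically
# and 3-adically, illustrating multi-p p-adicity.

def padic_val(n: int, p: int) -> int:
    """Largest v with p^v dividing n (n != 0)."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

# partial sums S_m = 1 + 6 + ... + 6^m; successive differences are 6^m,
# whose growing p-adic valuation is exactly p-adic convergence
S = [sum(6 ** k for k in range(m + 1)) for m in range(10)]
gaps2 = [padic_val(S[m] - S[m - 1], 2) for m in range(1, 10)]
gaps3 = [padic_val(S[m] - S[m - 1], 3) for m in range(1, 10)]
```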

See the article A more detailed view about the TGD counterpart of Langlands correspondence or the chapter About Langlands correspondence in the TGD framework.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Friday, March 14, 2025

A more detailed view about the TGD counterpart of Langlands correspondence

The Quanta Magazine article (see this) related to the Langlands correspondence and involving concepts like elliptic curves, modular functions, and Galois groups served as an inspiration for these comments. Andrew Wiles used in his proof of Fermat's Last Theorem a relationship between elliptic curves and modular forms. Wiles proved that certain kinds of elliptic curves are modular in the sense that they correspond to a unique modular form. Later it was proved that this is true for all elliptic curves over the rationals. The result was then generalized to real quadratic extensions of rationals by 3 mathematicians including Samir Siksek, and now by Caraiani and Newton to the imaginary quadratic extensions.

Could this correspondence be proved for all algebraic extensions of rationals? And what about higher order polynomials of two variables? Complex elliptic curves, defined as roots of third order polynomials of two complex variables in a space with two complex dimensions, have the special feature that they allow 2-D discrete translations as symmetries: in other words, they are periodic for a suitably chosen complex coordinate. I have discussed this from the TGD point of view in (see this). Is the 1-1 correspondence with modular forms possible only for elliptic curves having these symmetries?
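A standard textbook illustration of the modularity statement above (added here, not from the post): for an elliptic curve such as y^2 = x^3 - x, the point counts over finite fields F_p are the data that the modularity theorem matches with the coefficients a_p of a modular form via a_p = p + 1 - #E(F_p).

```python
# Brute-force point counting on y^2 = x^3 + a*x + b over F_p.

def count_points(p: int, a: int = -1, b: int = 0) -> int:
    """Number of points on y^2 = x^3 + a*x + b over F_p, incl. point at infinity."""
    squares = {}                               # how many y give each square
    for y in range(p):
        squares[y * y % p] = squares.get(y * y % p, 0) + 1
    total = 1                                  # the point at infinity
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        total += squares.get(rhs, 0)
    return total

a5 = 5 + 1 - count_points(5)                   # a_5 = -2
a7 = 7 + 1 - count_points(7)                   # a_7 = 0 (p = 3 mod 4)
```

The vanishing of a_p for p ≡ 3 mod 4 reflects the extra symmetry of this particular curve, an instance of the special structure discussed above.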

How are the Galois groups related to this? The Indian mathematical genius Ramanujan realized that modular forms seem to be associated with so-called Galois representations. The Galois group would be the so-called absolute Galois group of the number field involved with the representation. Very roughly, modular forms could be seen as representations of a Lie group which extends the Galois group. Also elliptic curves are associated with Galois representations. This suggests that the Galois representations connect elliptic curves, objects of algebraic geometry, and modular forms, which correspond to group representations. These observations led to the Langlands program, which roughly states a correspondence between geometry and number theory.

The Galois group is indeed involved with Langlands duality. If the Lie group G is defined over field k (in the recent case extension of rationals), the Langlands dual LG of G is an extension of the absolute Galois group of k by a complex Lie group (see this). The representation of the absolute Galois group is finite-dimensional, which suggests that it reduces to a Galois group for a finite-dimensional extension of rationals. Therefore the effective Galois group used can be larger than the Galois group of extension of rationals. LG has the same Lie algebra as G.

In the following, I will consider the situation from the highly speculative viewpoint provided by TGD. In TGD, the geometric and number theoretic visions of physics are complementary: M8-H duality, in which M8 is analogous to the 8-D momentum space associated with 8-D H=M4× CP2, is a formulation of this duality and makes Galois groups and their generalizations dynamical symmetries in the TGD framework (see this). This complementarity is analogous to the momentum-position duality of quantum theory and is implied by the replacement of a point-like particle with a 3-surface, whose Bohr orbit defines the space-time surface.

At a very abstract level this view is analogous to the Langlands correspondence (see this). The recent view of TGD, involving an exact algebraic solution of field equations based on the holography= holomorphy vision, makes it possible to formulate the analog of the Langlands correspondence rather precisely in the 4-D context. This requires a generalization of the notion of Galois group from the 2-D to the 4-D situation: there are two generalizations and both are required.

  1. The first generalization realizes Galois group elements, not as automorphisms of a number field, but as analytic flows in H=M4× CP2 permuting different regions of the space-time surface identified as roots of a pair f=(f1,f2): H→ C2. The functions fi, i=1,2, are analytic functions of one hypercomplex and 3 complex coordinates of H.
  2. The second generalization is for the spectrum generating algebra defined by the functional compositions g(f), where g: C2→ C2 is an analytic function of 2 complex variables. The interpretation is as a cognitive hierarchy of functions of functions of ....: the pairs (f1,f2) which do not allow a composition of the form f=g(h) correspond to elementary functions and to the lowest level of this hierarchy, a kind of elementary particles of cognition. Also the pairs g can be expressed as composites of elementary functions.

    If g1 and g2 are polynomials with coefficients in a field E identified as an extension of rationals, one can assign to g(f) a set of root pairs (r1,r2) satisfying (f1,f2)=(r1,r2), where the ri are algebraic numbers, defining disjoint space-time surfaces. One can assign to the set of root pairs the analog of the Galois group as automorphisms of the algebraic extension of the field E appearing as the coefficient field of (f1,f2) and (g1,g2). This hierarchy leads to the idea that physics could be seen as an analog of the formal systems appearing in Gödel's theorems and that the hierarchy of functional composites could correspond to a hierarchy of meta levels in mathematical cognition (see this).
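When g reduces to an ordinary polynomial of one variable, the Galois group and the ramified primes of its functional composites can be computed explicitly from the discriminant. A minimal pure-Python sketch under illustrative assumptions (the polynomial x^2-2 is an arbitrary example, not one singled out by TGD; the discriminant formulas used are the standard ones for monic quadratics x^2+bx+c and biquadratics x^4+px^2+q):

```python
def disc_quadratic(b, c):
    """Discriminant of the monic quadratic x^2 + b*x + c."""
    return b * b - 4 * c

def disc_biquadratic(p, q):
    """Discriminant of x^4 + p*x^2 + q, i.e. 16*q*(p^2 - 4*q)^2."""
    return 16 * q * (p * p - 4 * q) ** 2

def prime_divisors(n):
    """Prime divisors of a nonzero integer: the candidate ramified primes."""
    n, ps, d = abs(n), [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

# g1(x) = x^2 - 2; the functional composite g1(g1(x)) = x^4 - 4*x^2 + 2
# is the analog of p^2 for the generalized prime defined by g1.
print(prime_divisors(disc_quadratic(0, -2)))    # discriminant 8    -> [2]
print(prime_divisors(disc_biquadratic(-4, 2)))  # discriminant 2048 -> [2]
```

In this particular example both the polynomial and its composite have 2 as the only ramified prime; in general the discriminant of a composite can pick up new prime divisors absent from the original polynomial.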

See the article A more detailed view about the TGD counterpart of Langlands correspondence or the chapter About Langlands correspondence in the TGD framework.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, March 10, 2025

About information thermodynamics and Maxwell's demon in TGD

Information thermodynamics treats information as an analog of negative entropy. The associated temperature can be assumed to be the same as the usual temperature, but this need not be the case. Information thermodynamics leads to a generalization of the second law taking into account the presence of quantum information. Whether this information corresponds to conscious information is left open.

The ScienceDaily article (see this) tells about the article of Minagawa et al (see this), published in Nature, in which a more rigorous proof than before is presented for the statement that the second law of thermodynamics is also valid in quantum theory with information included. The presence of information makes possible an apparent local violation of the second law in the sense that the system can do more work than the Carnot law allows. However, Maxwell's demon requires metabolic energy to function and loses it when acting as a demon. This energy must be compensated and the conclusion is that the second law of thermodynamics remains valid.
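For orientation, the standard quantitative statement behind results of this kind is Landauer's principle: erasing one bit of the demon's memory at temperature T dissipates at least k_B T ln 2 of heat, which compensates the work gained. A quick numerical check (textbook physics, not specific to the Nature article or to TGD):

```python
from math import log

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact in the 2019 SI)

def landauer_bound(temperature_kelvin, bits=1.0):
    """Minimum heat dissipated when erasing `bits` bits at the given temperature."""
    return bits * K_B * temperature_kelvin * log(2)

# Erasing one bit at room temperature (300 K):
print(landauer_bound(300.0))   # roughly 2.87e-21 J
```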

I have discussed Maxwell's demon earlier in (see this). In the following, information thermodynamics and Maxwell's demon are discussed in light of the recent results related to TGD (see this, this, this, this and this). The discussion relies on the geometric vision of physics involving the holography= holomorphy principle and the number theoretic vision of physics involving p-adic number fields and adeles as mathematical correlates for cognition and the evolutionary hierarchy of extensions of rationals. The Negentropy Maximization Principle (NMP) for cognitive information replaces the second law and implies it. Zero energy ontology (ZEO) provides the new quantum ontology.
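The p-adic negentropies mentioned here arise by replacing the argument of the logarithm in the Shannon entropy by the p-adic norm of the probability, S_p = -Σ_n P_n log(|P_n|_p). For rational probabilities S_p can be negative, i.e. carry genuine information. A toy sketch of this definition in pure Python (the uniform distribution over 4 states is an arbitrary illustration):

```python
from fractions import Fraction
from math import log

def p_valuation(n, p):
    """Largest k such that p^k divides the nonzero integer n."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def p_adic_norm(q, p):
    """|q|_p = p^(v_p(denominator) - v_p(numerator)) for a rational q."""
    v = p_valuation(q.numerator, p) - p_valuation(q.denominator, p)
    return Fraction(p) ** (-v)

def p_adic_entropy(probs, p):
    """S_p = -sum P_n * log(|P_n|_p); a negative value means negentropy."""
    return -sum(float(pr) * log(float(p_adic_norm(pr, p))) for pr in probs)

# Uniform distribution over 4 states: the Shannon entropy is log(4) > 0,
# but the 2-adic entropy is -log(4), i.e. genuinely negentropic.
probs = [Fraction(1, 4)] * 4
print(p_adic_entropy(probs, 2))   # roughly -1.386 = -log(4)
```

For primes p not dividing the denominators of the probabilities the p-adic norm is 1 and S_p vanishes, so only a finite number of primes contribute.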

1. Some basic ideas of TGD relevant to Maxwell's demon

Consider first briefly the geometric vision.

  1. In classical TGD, the holography= holomorphy vision is a new element not present in the earlier discussion of Maxwell's demon (see this). Space-time surfaces as roots of analytic function pairs f=(f1,f2): H=M4× CP2 → C2 with Taylor coefficients in some extension E of rationals provide an exact solution of field equations by reducing them from partial differential equations to algebraic equations (see this). Polynomial solutions are obtained as an important special case.

    Also the roots of g(f)=(0,0), g: C2→ C2, define space-time surfaces. The hierarchy obtained as composites of maps g gives rise to a hierarchy of field bodies. This hierarchy of maps of maps of ... is analogous to an abstraction hierarchy. For g(0,0)=(0,0) the lowest level (f1,f2)=(0,0) belongs to the hierarchy (see this).

    The algebraic complexity of the surfaces increases with the number of composites gi and with their inherent complexity measured by the degrees with respect to the 3 complex coordinates of H. When g reduces to a map C→ C, one has ordinary polynomials and one can assign a Galois group and ramified primes to them. The "world of classical worlds" (WCW) decomposes into a union of sub-WCWs with f2 fixed.

  2. The field body (see this and this) serves as a carrier of phases of ordinary matter with a non-standard value of effective Planck constant heff≥ h, making it a quantum coherent system in arbitrarily long scales. The proposal is that heff corresponds to the dimension of an algebraic extension of rationals given by the order of the Galois group. In the TGD inspired quantum biology, the temperature of the field body is not necessarily the same as the temperature of the biological body, but could be lower and would gradually increase with aging (see this) so that the field body would gradually lose control.
  3. Zero energy ontology (ZEO) (see this and this), forced by the holography, in turn forced by the general coordinate invariance, makes classical physics an exact part of quantum TGD.

    Holography involves space-time surfaces identified as classical Bohr orbits for 3-surfaces generalizing point-like particles and localized inside a CD. These Bohr orbits are slightly non-deterministic, already so for 2-dimensional minimal surfaces. This forces the replacement of quantum states with superpositions of Bohr orbits and brings in new degrees of freedom related to classical non-determinism. These degrees of freedom are essential for understanding cognition. The identification of quantum states as superpositions of Bohr orbits allows us to solve the basic problem of quantum measurement theory. In ZEO both arrows of time are possible in all scales.

    The hierarchy of causal diamonds CD= cd× CP2, where cd is a causal diamond of M4, is the geometric correlate of ZEO. A CD has an interpretation as the 4-D perceptive field of a conscious entity associated with the quantum superposition of 4-surfaces inside the CD. CDs form a scale hierarchy (see this).

    In the TGD framework, information means potentially conscious information and involves the TGD view of memory based on classical non-determinism (see this).

Number theoretic vision is complementary to the geometric view about TGD.
  1. p-Adicization and adelization provide correlates of cognition. Negentropy, with its energy equivalent, could correspond to information in information thermodynamics. The production of negentropy requires metabolic energy. Cognitive negentropy as the sum of p-adic negentropies increases, but so does real entropy. This fits (see this and this) with Jeremy England's observations (see this). Classical non-determinism could correspond to p-adic non-determinism.
  2. In the TGD framework, one can speak of the Galois group in two different senses (see this, this and this). TGD predicts a 4-D variant of the Galois group mapping to each other different regions of the space-time surfaces identified in the holography= holomorphy vision as roots (f1,f2)=(0,0) of function pairs H=M4× CP2→ C2 analytic with respect to Hamilton-Jacobi coordinates generalizing complex coordinates (see this). The 4-D Galois group is realized as analytic flows analogous to braidings mapping the roots as space-time regions to each other.

    The second Galois group is associated with the dynamical complex analytic symmetries g: C2→ C2: (f1,f2)→ g(f1,f2). One can talk of number theoretic/topological n-ary digits for n-sheeted space-time surfaces. Binary digits (n prime) are in a well-defined sense fundamental. In this case, the Galois group relates disjoint space-time surfaces to each other. When g reduces to a map C→ C, one can assign to it an ordinary Galois group relating to each other the disjoint roots of g(f).

    These two Galois groups commute and the latter Galois group relates to the first Galois group in the same way as the Galois group of an extension of rationals relates to the Galois group of complex rationals generated by complex conjugation.

    Prime polynomials are polynomials which cannot be expressed as composites of two polynomials. If the degrees of the polynomials fi with respect to the 3 complex coordinates of H are prime, this is the case. The same applies to the polynomials (g1,g2): C2→ C2. These kinds of surfaces are analogs of elementary particles. A finite group is prime/simple if it does not allow nontrivial normal subgroups. Prime Galois groups, which could be associated with maps g: C→ C, are expected to be in a special role physically.

  3. Does the notion of Galois group generalize to the general case g=(g1,g2)? Also in this case the roots of g(f) are disjoint space-time surfaces representing pairs of algebraic numbers (f1,f2)=(ri,1,ri,2). Is it possible to assign to the roots the analog of the Galois group? This group should act as a group of automorphisms of some algebraic structure. This structure cannot be a field but an algebra structure is enough. The arithmetic operations would be component-wise sum (a,b)+(c,d)=(a+c,b+d) and component-wise multiplication (a,b)*(c,d)= (ac,bd). The basic algebra would correspond to the pairs (x,y)∈ E2 of rationals and the extension would be generated by the pairs (f1,f2)= (ri,1,ri,2). This structure has an automorphism group which would serve as the Galois group. The dimension of the extension of E2 could define the value of the effective Planck constant.

    Also the notion of discriminant can be generalized to a pair (D1,D2) of discriminants using the component-wise product for the differences of root pairs. Could Di be decomposed into a product of powers of algebraic primes of the extension of E2?

    In (see this) the idea that space-time surfaces can be regarded as numbers was discussed. For a given g, one can indeed construct polynomials having as roots any algebraic numbers in the extension F of E defined by g. g itself can be represented in terms of its n roots ri=(ri,1,ri,2), i=1,...,n, represented as space-time surfaces, as a product ∏i(f1-ri,1,f2-ri,2) of pairs of monomials. One can generalize this construction by replacing the pairs (ri,1,ri,2) with any pairs of algebraic numbers in F. Therefore all algebraic numbers in F can be represented as space-time surfaces. Also the sets formed by numbers in F can be represented as unions of the corresponding space-time surfaces.

  4. This picture also leads to a vision about physical laws as analogous to laws of logic and the time evolution of a physical system as analogous to the proof of a theorem (see this). Different meta levels defined by the maps g: C2→ C2 are analogous to the hierarchy of statements about statements in mathematics. This applies also to the more general maps. The interpretation is that the surfaces at the higher levels of the meta hierarchy represent statements about the surfaces (f1,f2)=(0,0) at the lowest level of the hierarchy.
  5. The creation of algebraic complexity, i.e. increasing the value of heff, requires metabolic energy. The value of heff tends to spontaneously decrease, which gives rise to a dissipation as a competing effect and one should understand how these two tendencies relate to each other.
  6. The Negentropy Maximization Principle (see this) states that the algebraic complexity measured by the sum of p-adic negentropies increases in a statistical sense during the number theoretic evolution. Also the ordinary entropy increases as a result of the increase of negentropy and this conforms (see this) with the view of Jeremy England (see this). The maximum value of heff partially characterizes the algebraic complexity. Also the Galois groups, the degree(s) of the polynomials defining the space-time surface, and the ramified primes (when they are defined) characterize the complexity. Galois groups generalize the group Z2 assigned to condensed matter fermions in topological quantum computation (see this).
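The component-wise algebra of item 3 above and the generalized discriminant pair (D1,D2) of the root pairs can be made concrete with a small sketch (the root pairs below are arbitrary rational illustrations, not roots of any actual g):

```python
from fractions import Fraction

class Pair:
    """Element of the component-wise algebra E x E:
    (a,b)+(c,d)=(a+c,b+d) and (a,b)*(c,d)=(a*c,b*d)."""
    def __init__(self, a, b):
        self.a, self.b = Fraction(a), Fraction(b)
    def __add__(self, o):
        return Pair(self.a + o.a, self.b + o.b)
    def __sub__(self, o):
        return Pair(self.a - o.a, self.b - o.b)
    def __mul__(self, o):
        return Pair(self.a * o.a, self.b * o.b)
    def __eq__(self, o):
        return (self.a, self.b) == (o.a, o.b)
    def __repr__(self):
        return f"({self.a}, {self.b})"

def discriminant_pair(roots):
    """Generalized discriminant (D1,D2): component-wise product of
    squared differences over all pairs of root pairs."""
    d = Pair(1, 1)
    for i in range(len(roots)):
        for j in range(i + 1, len(roots)):
            diff = roots[i] - roots[j]
            d = d * (diff * diff)
    return d

roots = [Pair(0, 1), Pair(2, -1), Pair(-1, 3)]
print(discriminant_pair(roots))   # (36, 256)
```

Note that the component-wise product has zero divisors, e.g. (1,0)*(0,1)=(0,0), so the structure is indeed only an algebra rather than a field, as stated above.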

2. The TGD view of Maxwell's demon

The basic vision is as follows.

  1. A field body can act as a Maxwell's demon. As a carrier of potentially conscious information realized as memories about previous SSFRs (see this), the field body can transfer negentropy, basically energy, to lower levels and seemingly help to break the second law. As a result, an additional information term appears in the basic equation expressing the second law. The heff of the field body however decreases in the process.

    Also the classical fields associated with the field body can do work on the biological body; the classical gravitational/electric fields are characterized by gravitational/electric Planck constants (see this and this).

  2. In presence of the field bodies, perhaps assignable to the hierarchy of abstractions defined by the maps g, more work can be obtained from the system than the second law would otherwise allow. This is only apparent because the heff of the field body decreases so that the algebraic complexity measuring the level of consciousness decreases. Therefore heff must be increased to its original value. Metabolic energy is needed for this purpose.
Consider now the TGD view of Maxwell's demon as a conscious entity in light of the recent results not yet taken into account in the earlier discussion (see this).
  1. Consider first the notion of a cognitive measurement. The Galois group of g1(g2(...)), gi: C→ C, is characterized by a hierarchy of coset groups defined by the hierarchy of its normal subgroups. The irreps of the Galois group can be decomposed into tensor products of irreps of the coset groups and the entanglement between the coset groups can be measured in cognitive state function reductions (SFRs).

    In particular, the hierarchy formed by functional composition of polynomials g: C→ C is accompanied by a hierarchy of extensions of extensions ... of E and the hierarchy of normal subgroups for the Galois groups.

  2. Prime groups are simple groups having no nontrivial normal subgroups and prime polynomials do not allow a functional decomposition P(Q). If the degree of the polynomial is prime this is the case, but prime degree is not a necessary condition. Space-time surfaces of this kind are fundamental objects and the polynomial in question would have prime degree with respect to the 3 complex coordinates of H. The composites formed of maps g and of these fundamental function pairs f would define cognitive representations of the surface defined by f as a kind of statements about statements. An interesting question is whether these surfaces could correspond to elementary particles. The prime degrees of the polynomials involved probably do not correspond to the p-adic primes assigned in p-adic mass calculations to elementary particles and identified as ramified primes (see this).
  3. What happens when the value of heff decreases? Does it decrease only apparently? If the highest levels in the tensor product of irreps of the coset groups become singlets, these levels effectively drop away. Or could the space-time surface itself become simpler as some maps g in the composite g1(g2(...)) become identity maps? The field body would lose some of its levels. Indeed, in the TGD based view of biocatalysis, the temporary reduction of heff is essential for biocatalysis (see this).
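The degree argument behind point 2 is simply that degrees multiply under functional composition, deg P(Q) = deg(P)·deg(Q), so a prime-degree polynomial can never be a composite of lower-degree polynomials (while the converse fails: composite degree does not guarantee decomposability). A minimal sketch on coefficient lists (pure Python; the two quadratics are arbitrary illustrations):

```python
def poly_mul(p, q):
    """Product of polynomials given as coefficient lists, constant term first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_compose(p, q):
    """Coefficients of the functional composite p(q(x)), via Horner's rule."""
    out = [p[-1]]
    for c in reversed(p[:-1]):
        out = poly_mul(out, q)
        out[0] += c
    return out

P = [0, 1, 1]    # x^2 + x, degree 2 (a prime degree, hence indecomposable)
Q = [-2, 0, 1]   # x^2 - 2, degree 2

PQ = poly_compose(P, Q)
print(PQ, len(PQ) - 1)   # [2, 0, -3, 0, 1] 4  -> degree 4 = 2*2
```

The composite P(Q) has degree 4, so any polynomial of prime degree simply cannot arise this way from lower-degree factors.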

3. How could the time reversal in BSFR relate to the second law?

Time reversal in BSFRs represents new physics. What are the implications?

  1. Entanglement entropy and p-adic negentropies are associated with a pair of systems. Thermodynamic entropy is associated with an ensemble. A single BSFR cannot reduce ensemble entropy. It reduces entanglement entropy. Quantum entanglement with the environment is the cause of BSFR. Ensemble entropy, however, corresponds to entanglement entropy after a quantum measurement when quantum entanglement has disappeared.
  2. In the ZEO based theory of consciousness, falling asleep (or biological death) corresponds to a BSFR and waking up to another BSFR. Sleeping restores resources and heals. What does this mean? Does the field body gain new metabolic energy resources that appear after the second BSFR? Does the BSFR violate the second law or does the energy needed come from outside the process as metabolic energy?

    During sleep, metabolic energy is not used for normal bodily purposes such as movement and sensation. It could go to the field body to restore the values of heff to their original values. For example, protons are transferred back to dark protons by the Pollack effect. That would require metabolic energy. This would be visible as the presence of an unknown energy sink during sleep. Metabolic energy consumption would not be reduced.

    During sleep, the second law applies in the opposite direction of time. This would only allow for an apparent violation of the second law.

  3. BSFR would correspond to the loss of entanglement between the system and the environment, which for an ensemble means an increase in ordinary entropy. The produced thermodynamic entropy would be equal to the entanglement entropy transformed into ensemble entropy in BSFRs. Thermodynamic entropy for the ensemble would increase for both arrows of time in BSFRs.
  4. The subsystems as smaller space-time sheets topologically condensed on the space-time sheet of the system can form a thermodynamic ensemble. The entropy of this ensemble increases in both directions of time. If it is possible to observe its increase occurring in the opposite direction of time, it would look like a decrease of entropy in the standard time direction and give rise to an apparent violation of the second law. In the Pollack effect this kind of effect is observed as the negatively charged exclusion zones (EZs) throw impurities out. This looks like time-reversed diffusion and can be understood in ZEO (see this).

    In the case of SSFRs it is not meaningful to talk about an ensemble. SSFRs therefore cannot affect the ensemble entropy. The sequences of SSFRs define the analog of an adiabatic time evolution.

  5. In biosystems the metabolic energy input makes it possible to maintain the heff distribution. At lower levels, the entropy of the field body however increases gradually and eventually leads to death. Is it possible to maintain the metabolic energy feeds, or are the metabolic energy currents gradually reduced during the evolution so that there is no metabolic energy feed increasing the complexity of subsystems anymore? This would mean a reduction of gradients leading to thermal equilibrium and loss of information. Could this be the TGD counterpart of heat death?

    Could the pairs of BSFRs solve the problem by allowing a fresh start with the original arrow of geometric time, with the field body in a pure state of low entropy after the second BSFR? Sleep begins and ends with a BSFR. Sleeping overnight indeed allows one to wake up full of energy, which is then dissipated during the day. Could BSFRs generate the gradients needed to avoid heat death? Could BSFRs take care that the system performs a kind of flip-flop, regaining the metabolic energy lost by dissipation?

See the article About information thermodynamics and Maxwell's demon in TGD or the chapter About the nature of time.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Sunday, March 09, 2025

Both arrows of time are possible microscopically

Sabine Hossenfelder comments in her Youtube talk (see this) on the recent theoretical claim that microscopically both arrows of time are possible. In the article "Emergence of opposing arrows of time in open quantum systems" (see this), Thomas Guff, Chintalpati Umashankar Shastry and Andrea Rocco argue that the thermodynamic arrow of time can be understood as a property of the initial state without assuming it at the level of dynamics, as is usually believed. This means that at the microscopic level both arrows of time are allowed.

It seems that the time is becoming ripe for TGD: TGD indeed predicts that both arrows of time are possible not only in microscopic but also in macroscopic scales. Zero energy ontology (ZEO) is the quantum ontology of TGD, implied by the new view of space-time as a 4-surface. In TGD, point-like particles are replaced with 3-surfaces rather than strings.

General coordinate invariance, in the absence of a mathematically well-defined path integral, requires holography, and classical physics becomes an exact part of quantum physics. Space-time surfaces become analogs of Bohr orbits for 3-surfaces as particles.

The holography= holomorphy principle makes it possible to reduce the field equations to minimal surface equations for any action as long as it is general coordinate invariant and constructible in terms of the induced geometry. Hence the classical theory is universal. Remarkably, the holography is slightly non-deterministic.

Quantum states are now quantum superpositions of Bohr orbits rather than of 3-surfaces and this solves the basic problem of quantum measurement theory. Space-time surfaces are located inside causal diamonds CD=cd×CP2, where cd is a causal diamond of M4. The cds form a scale hierarchy and a CD has an interpretation as a counterpart of the perceptive field of a self, defined by the superposition of Bohr orbits changing in "small" state function reductions (SSFRs), which define the TGD counterpart of a sequence of measurements of the same observables in standard QM. In TGD, the Zeno effect means that the 3-D states and 3-surfaces at the passive boundary (PB) of the CD do not change in the sequence of SSFRs. They however change at the active boundary (AB) of the CD, since the superpositions of Bohr orbits (4-D minimal surfaces) change, which is made possible by the slight classical non-determinism present already for 2-D minimal surfaces. This gives rise to a conscious entity, a self, having the CD as its perceptive field.

The CD increases in a statistical sense during the sequence of SSFRs and this gives a correlation between the subjective time of the self, defined by the sequence of SSFRs, and the geometric time defined by the distance between the tips of the CD.

Either boundary of the CD can be passive. In "big" SFRs (BSFRs) as counterparts of ordinary SFRs, the roles of PB and AB change and the arrow of geometric time changes. This can occur on all scales. In the article motivating the Youtube talk this is claimed to happen in microscopic scales.

The number theoretic view of TGD predicts an entire hierarchy of effective Planck constants and therefore quantum coherence becomes possible in all scales. Also the arrow of time can change even in macroscopic scales.

In the TGD inspired theory of consciousness, this allows us to interpret the sleep state as a state with a non-standard arrow of time. We do not remember anything from the period of deep sleep since the classical signals traveling in the wrong direction of time do not reach us the next day. Roughly one half of the Universe is sleeping, and wake-up and sleep periods characterize all systems, even in cosmological scales. Biological death is also falling asleep, but on a longer time scale and with more dramatic consequences.

See  for instance  the articles   TGD as it is towards end of 2024: part I, TGD as it is towards end of 2024: part II, and Some comments related to Zero Energy Ontology (ZEO).