
Friday, April 18, 2025

About quantum arithmetics

The holography= holomorphy vision reduces gravitation as geometry to gravitation as algebraic geometry and leads to an exact general solution of the geometric field equations as local algebraic equations for the roots and poles of rational functions and possibly also of their inverses.

The function pairs f=(f1,f2): H→ C2 define a function field with respect to element-wise sum and multiplication. This is also true for the function pairs g=(g1,g2): C2→ C2. Now functional composition º is an additional operation. This raises the question whether ordinary arithmetics and p-adic arithmetics might have functional counterparts.

Functional (quantum) counterparts of integers, rational and algebraic numbers

Do the notions of integers, rationals and algebraic numbers generalize so that one could speak of their functional or quantum counterparts? Here the category theoretical approach, suggesting that the degree of the polynomial defines a morphism from quantum objects to ordinary objects, leads to a unique identification of the quantum objects.

  1. For maps g: C2→ C2, both the ordinary element-wise product and functional composition º define natural products. The element-wise product does not respect polynomial irreducibility, the analog of primeness, for the product of polynomials. Degree is multiplicative in º. In the sum, call it +e, the degree should be additive. This leads to the identification of +e as the elementwise product. One can identify the neutral element 1º of º as 1º=Id and the neutral element 0e of +e as the ordinary unit: 0e=1. This is a somewhat unexpected conclusion.

    The inverse of g with respect to º is g-1, in general a many-valued algebraic function; the inverse with respect to +e is 1/g. The maps g, which do not allow a decomposition g= hº i, can be identified as functional primes and have prime degree. If one restricts the product and sum to g1 (say), the degree of a functional prime g corresponds to an ordinary prime. These functional integers/rationals can be mapped to ordinary integers/rationals by a morphism mapping the degree to an integer/rational. Likewise, f is a functional prime with respect to º if it does not allow a decomposition f= gº h. One can construct functional integers as products of functional primes.

  2. The non-commutativity of º could be seen as a problem. The fact that the maps g act like operators suggests that the functional primes gp appearing in a product commute. Since g is analogous to an operator, this can be interpreted as a generalization of commutativity as a condition for the simultaneous measurability of observables.
  3. One can also define functional polynomials P(X), quantum polynomials, using these operations. In the terms pnº Xºn, pn and X should commute, and the sum ∑e pnº Xºn is taken with respect to +e. The zeros of functional polynomials satisfy the condition P(X)=0e=1 and give as solutions roots Xk as functional algebraic numbers. The fundamental theorem of algebra generalizes at least formally if Xk and X commute. The roots have representations as space-time surfaces. One can also define the functional discriminant D as the º product of root differences Xk-e Xl, with -e identified as element-wise division.
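For ordinary one-variable polynomials the two degree rules used above can be checked directly. A minimal Python sketch (an illustration added here, not part of the original argument; the example polynomials are hypothetical):

```python
# Check the degree rules quoted in the text:
#   deg(g º h)  = deg(g) * deg(h)   (functional composition)
#   deg(g ·e h) = deg(g) + deg(h)   (element-wise product, the proposed sum +e)

def poly_mul(p, q):
    """Coefficient-list product; p[k] is the coefficient of x^k."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_compose(p, q):
    """p(q(x)) via Horner's rule on coefficient lists."""
    r = [p[-1]]
    for c in reversed(p[:-1]):
        r = poly_mul(r, q)
        r[0] += c
    return r

def deg(p):
    return len(p) - 1

g = [1, 0, 2]       # 2x^2 + 1, degree 2
h = [0, 1, 1, 1]    # x^3 + x^2 + x, degree 3

assert deg(poly_compose(g, h)) == deg(g) * deg(h)   # multiplicative in º
assert deg(poly_mul(g, h)) == deg(g) + deg(h)       # additive in +e
```

Here +e is modelled by the ordinary polynomial product, following the identification made in item 1.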
About the notion of functional primeness

There are two cases to consider corresponding to f and g. Consider first the pairs (f1,f2): H→ C2.

  1. Primeness could mean that f does not have a composition f=gº h. A second notion of primeness is based on irreducibility, which states that f does not reduce to an elementwise product f= g× h. Concerning the definition of powers of functional primes in this case, a possible problem is that the power (f1n,f2n) defines the same surface as (f1,f2) as a root, with n-fold degeneracy. Irreducibility eliminates this problem but does not allow defining the analog of p-adic numbers using (f1n,f2n) as the analog of pn.

  2. Since H has 3 complex coordinates, the fi are labelled by 3 ordinary primes pr(fi), r=1,2,3, rather than a single prime p. By the earlier physical argument related to the cosmological constant one could assume f2 fixed, and restrict the consideration to f1. Every functional p-adic number, in particular a functional prime, corresponds to its own ramified primes. The simplest functional prime would correspond to (f1,f2)=(0,0) (could this be interpreted as stating the analog of the mod p=0 condition?).

  3. The degrees for the product of polynomial pairs (P1,P2) and (Q1,Q2) are additive. In the sum, the degree is not larger than the larger of the two degrees, and it can happen that the highest powers sum up to zero so that the degree is smaller. This is reminiscent of the properties of the non-Archimedean norm for p-adic numbers. The zero element defines the entire H as a root and the unit element does not define any space-time surface as a root.
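The ultrametric-like behaviour of the degree under the ordinary sum can be illustrated concretely (a sketch with hypothetical example polynomials):

```python
# For the ordinary sum of polynomials, deg(P+Q) <= max(deg P, deg Q),
# with strict inequality when the top coefficients cancel -- the property
# the text compares to a non-Archimedean norm.

def poly_add(p, q):
    n = max(len(p), len(q))
    r = [0] * n
    for i, a in enumerate(p):
        r[i] += a
    for i, b in enumerate(q):
        r[i] += b
    while len(r) > 1 and r[-1] == 0:  # drop vanishing top coefficients
        r.pop()
    return r

def deg(p):
    return len(p) - 1

P = [1, 2, 3]    # 3x^2 + 2x + 1
Q = [5, 0, -3]   # -3x^2 + 5

assert deg(poly_add(P, Q)) <= max(deg(P), deg(Q))
assert deg(poly_add(P, Q)) == 1   # top coefficients cancelled: degree drops
```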
Also the pairs (g1,g2) can be functional primes, both with respect to powers defined by element-wise product and functional composition º.
  1. The ordinary sum is the first guess for the sum operation in this case. Category theoretical thinking however suggests that the element-wise product corresponds to the sum, call it +e. In this operation the degree is additive so that products and +e sums can be mapped to ordinary integers. The functional p-adic number in this case would correspond to an elementwise product ∏e Xnº Ppºn, where Xn is a polynomial with degree smaller than p defining a reducible polynomial.
  2. A natural additional assumption is that the coefficient polynomials Xn commute with each other and with Pp. This is natural since the Xn and Pp act like operators and in quantum theory a complete set of commuting observables is a natural notion. This motivates the term quantum p-adics. The space-time surface is a disjoint union of space-time surfaces assignable to the factors Xkº Ppºkº f. In quantum theory, quantum superpositions of these surfaces are realized. If the surface associated with Xkº Ppºkº f is so large that it cannot be realized inside the CD, it is effectively absent from the pinary expansion. Therefore the size of the CD defines a pinary cutoff.
The notion of functional p-adics

What about functional p-adics?

  1. The functional powers gpºk of prime polynomials gp define analogs of powers of p-adic primes and one can define a functional generalization of p-adic numbers as quantum p-adics. The coefficients Xk in Xkº gpºk are polynomials with degree smaller than p. The first idea that comes to mind is that the ordinary sum of these powers is in question. What is however required is the sum +e, so that the roots are disjoint unions of the roots of the +e summands Xkº gpºk. The disjointness corresponds to the fact that cognition can be said to be an analysis decomposing the system into pieces.
  2. Large powers of the prime appearing in p-adic numbers must approach 0e with respect to the p-adic norm, so that gpºn must effectively approach Id with respect to º. Intuitively, a large n in gpºn corresponds to a long p-adic length scale. For large n, gpºn cannot be realized as a space-time surface in a fixed CD. This would prevent their representation and they would correspond to 0e and Id. During the sequence of SSFRs the size of the CD increases and at some critical SSFRs a new power can emerge in the quantum p-adic.
The very inspiring discussions with Robert Paster, who advocates the importance of universal Witt vectors (UWVs) and Witt polynomials (see this) in the modelling of the brain, forced me to consider Witt vectors as something more than a technical tool. As a special case, Witt vectors code for p-adic number fields.
  1. Both the product and sum of ordinary p-adic numbers require memory digits and are therefore technically problematic. This is the case also for the functional p-adics. Witt polynomials solve this problem by reducing the product and sum to purely digit-wise operations.
  2. Universal Witt vectors and polynomials can be assigned to any commutative ring R, not only to p-adic integers. Witt vectors Xn define sequences of elements of a ring R and universal Witt polynomials Wn(X1,X2,...,Xn) define a sequence of polynomials of order n. In the case of a p-adic number field, Xn corresponds to the pinary digit of the power pn and can be regarded as an element of the finite field Fp, which can also be mapped to a phase factor exp(ik2π/p). The motivation for Witt polynomials is that the multiplication and sum of p-adic numbers can be done in a component-wise manner for Witt polynomials, whereas for pinary digits the sum and product affect the higher pinary digits.
  3. In the general case, the Witt polynomial as a polynomial of several variables can be written as Wn(X1,...,Xn)= ∑d|n d Xd^(n/d), where d runs over the divisors of n, with 1 and n included. For p-adic numbers n is a power of p and the divisors d are powers of p. The Xd are analogous to elements of the finite field Fp as coefficients of powers of p.
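A small sketch of the Witt machinery mentioned here, using the universal Witt polynomial Wn(X)= ∑d|n d Xd^(n/d) and the standard p=2, length-2 Witt addition law s=(a0+b0, a1+b1-a0b0); the ghost components then add purely component-wise, which is the digit-wise property referred to above (the numeric inputs are hypothetical examples):

```python
# Universal Witt polynomial W_n(X) = sum over divisors d of n of d * X_d^(n/d),
# and a check that for p-typical Witt vectors (p = 2, length 2) the ghost map
# (w0, w1) = (a0, a0^2 + 2*a1) turns Witt addition into component-wise addition.

def witt_poly(n, X):
    """X maps each divisor d of n to the variable value X_d."""
    return sum(d * X[d] ** (n // d) for d in range(1, n + 1) if n % d == 0)

def ghost2(a0, a1):
    # p = 2 ghost components: W_1 = X_1, W_2 = X_1^2 + 2*X_2
    return (a0, a0 ** 2 + 2 * a1)

a0, a1 = 3, 5
b0, b1 = 2, 7
s0, s1 = a0 + b0, a1 + b1 - a0 * b0   # Witt addition for p = 2, length 2

wa, wb, ws = ghost2(a0, a1), ghost2(b0, b1), ghost2(s0, s1)
assert ws == (wa[0] + wb[0], wa[1] + wb[1])   # ghost components add digit-wise

assert witt_poly(6, {1: 1, 2: 1, 3: 1, 6: 1}) == 1 + 2 + 3 + 6  # W_6 at X_d = 1
```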
Witt polynomials are characterized by their roots, and the TGD view of space-time surfaces both as generalized numbers and as representations of ordinary numbers inspires the idea that the roots of suitably identified Witt polynomials could be represented as space-time surfaces in the TGD framework. This would give a representation of generalized p-adic numbers as space-time surfaces making the arithmetics very simple. Whether this representation is equivalent with the direct representation of p-adic numbers as surfaces is not clear.

Could the prime polynomial pairs (g1,g2): C2→ C2 and (f1,f2): H=M4× CP2→ C2 (perhaps states of pure, non-reflective awareness) characterized by ordinary primes give rise to functional p-adic numbers represented in terms of space-time surfaces such that these primes could correspond to ordinary p-adic primes?

See the article A more detailed view about the TGD counterpart of Langlands correspondence or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Holography= holomorphy vision and functional generalization of arithmetics and p-adic number fields

In TGD, geometric and number theoretic visions of physics are complementary. This complementarity is analogous to the momentum position duality of quantum theory and is implied by the replacement of a point-like particle with a 3-surface, whose Bohr orbit defines the space-time surface.

At a very abstract level this view is analogous to the Langlands correspondence. The recent view of TGD, involving an exact algebraic solution of field equations based on the holography= holomorphy vision, allows one to formulate the analog of the Langlands correspondence in the 4-D context rather precisely. This requires a generalization of the notion of Galois group from the 2-D situation to the 4-D situation: there are two generalizations and both are needed.

  1. The first generalization realizes Galois group elements, not as automorphisms of a number field, but as analytic flows in H=M4× CP2 permuting different regions of the space-time surface identified as roots for a pair f=(f1,f2): H→ C2. The functions fi, i=1,2, are analytic functions of one hypercomplex and 3 complex coordinates of H.

  2. The second realization is for the spectrum generating algebra defined by the functional compositions gº f, where g: C2→ C2 is an analytic function of 2 complex variables. The interpretation is as a cognitive hierarchy of functions of functions of ..., and the pairs (f1,f2) which do not allow a composition of the form f=gº h correspond to elementary functions and to the lowest level of this hierarchy, a kind of elementary particle of cognition. Also the pairs g can be expressed as composites of elementary functions.

    If g1 and g2 are polynomials with coefficients in a field E identified as an extension of rationals, one can assign to gº f a set of root pairs (r1,r2) satisfying (f1,f2)= (r1,r2), where the ri are algebraic numbers; these conditions define disjoint space-time surfaces. One can assign to the set of root pairs the analog of the Galois group as automorphisms of the algebraic extension of the field E appearing as the coefficient field of (f1,f2) and (g1,g2). This hierarchy leads to the idea that physics could be seen as an analog of a formal system appearing in Gödel's theorems and that the hierarchy of functional composites could correspond to a hierarchy of meta levels in mathematical cognition.

  3. The quantum generalization of integers, rationals and algebraic numbers to their functional counterparts is possible for maps g: C2→ C2. The counterpart of the ordinary product is the functional composition º for the maps g. Degree is multiplicative in º. In the sum, call it +e, the degree should be additive, which leads to the identification of the sum +e as the element-wise product. The neutral element 1º of º is 1º=Id and the neutral element 0e of +e is the ordinary unit 0e=1.

    The inverse with respect to º is g-1, which in general is a many-valued algebraic function; the inverse with respect to +e is 1/g. The maps g, which do not allow a decomposition g= hº i, can be identified as functional primes and have prime degree. f: H→ C2 is prime if it does not allow a composition f= gº h. Functional integers are products of functional primes gp.

    The non-commutativity of º could be seen as a problem. The fact that the maps g act like operators suggests that the functional primes gp in a product commute. Functional integers/rationals can be mapped to ordinary integers/rationals by a morphism mapping the degree to an integer/rational.

  4. One can define functional polynomials P(X), quantum polynomials, using these operations. In P(X), the terms pnº Xºn, pn and X should commute. The sum ∑e pnº Xºn corresponds to +e. The zeros of functional polynomials satisfy the condition P(X)=0e=1 and give as solutions roots Xk as functional algebraic numbers. The fundamental theorem of algebra generalizes at least formally if Xk and X commute. The roots have representations as space-time surfaces. One can also define the functional discriminant D as the º product of root differences Xk-e Xl, with -e identified as element-wise division; the functional primes dividing D have space-time surfaces as representations.
What about functional p-adics?
  1. The functional powers gpºk of primes gp define analogs of powers of p-adic primes and one can define a functional generalization of p-adic numbers as quantum p-adics. The coefficients Xk in Xkº gpºk are polynomials with degree smaller than p. The sum is +e, so that the roots are disjoint unions of the roots of the summands Xkº gpºk.

  2. Large powers of the prime appearing in p-adic numbers must approach 0e with respect to the p-adic norm, so that gpºn must effectively approach Id with respect to º. Intuitively, a large n in gpºn corresponds to a long p-adic length scale. For large n, gpºn cannot be realized as a space-time surface in a fixed CD. This would prevent their representation and they would correspond to 0e and Id. During the sequence of SSFRs the size of the CD increases and at some critical SSFRs a new power can emerge in the quantum p-adic.
  3. Universal Witt polynomials Wn define an alternative representation of p-adic numbers reducing the multiplication of p-adic numbers to elementwise product for the coefficients of the Witt polynomial. The roots for the coefficients of Wn define space-time surfaces: they should be the same as those defined by the coefficients of functional p-adics.
There are many open questions.
  1. The question whether the hierarchy of infinite primes has relevance to TGD has remained open. It turns out that the 4 lowest levels of the hierarchy can be assigned to the rational functions fi: H→ C2, i=1,2 and the generalization of the hierarchy can be assigned to the composition hierarchy of prime maps gp.
  2. Could the transitions f→ gº f correspond to the classical non-determinism in which one root of g is selected? If so, the p-adic non-determinism would correspond to classical non-determinism. Quantum superposition of the roots would make it possible to realize the quantum notion of concept.

  3. What is the interpretation of the maps g-1, which in general are many-valued algebraic functions if g is a rational function? g increases the complexity but g-1 preserves or even reduces it, so that its action is entropic. Could the selection between g and g-1 relate to a conscious choice between good and evil?
  4. Could one understand the p-adic length scale hypothesis in terms of functional primes? The counterpart of a functional Mersenne prime would be g2ºn/g1, where the division is with respect to the elementwise product defining +e. For g2 and g3 and also their iterates the roots allow an analytic expression. Could functional primes near powers of g2 and g3 be cognitively very special?
See the article A more detailed view about the TGD counterpart of Langlands correspondence or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Tuesday, April 15, 2025

Impossible device creates free energy in the Earth's magnetic field

Sabine Hossenfelder has a Youtube talk (see this) with title ""Impossible" Device Creates Free Electricity from Earth's Magnetic Field". It tells about a very interesting anomaly, which could be the anomaly found already by Faraday but forgotten since it does not quite fit the framework of Maxwellian electrodynamics. I learned of this phenomenon during my student days. The effect was an exercise in an electrodynamics course but neither I nor others realized that the effect seems to be in conflict with Maxwell's theory!

If I understood correctly, the effect has now been firmly re-established in the case of the Earth's magnetic field (see this). The electric field would be created by a static dipole assignable to the magnetic field of the Earth, with respect to which the Earth rotates.

If my interpretation is correct, an analogous effect occurs also for the Faraday disk, which is a conducting disk rotating around its symmetry axis. Faraday observed that a very small radial electric field is generated with magnitude Eρ= ρω B (c=1). This radial electric field can be obtained from a vector potential At= ρω B. This generates an electric charge density ρ= ω B inside the disk. This looks strange: how can rotation generate electric charge? Does this conform with Maxwell's laws?

  1. What comes to mind is that Maxwell's induction law implied by special relativity explains the effect. However, the rotation is not a rectilinear motion although the magnitude of the velocity is constant so that the effect is more general than predicted by the Faraday law. Furthermore, the magnetic field rotates and at least in quantum theory, nothing should happen if the rotational symmetry is exact.
  2. Could the charge generation be a dynamical phenomenon? Could there be a generation of a surface charge compensating for the charge density in the interior? The sign of this charge density depends on the direction of the rotation so that the surface charge would be positive for one direction of rotation. One would expect that the surface charge is negative since electrons are the charge carriers. Also a large parity violation would take place.
One could understand the effect in terms of the notion of induced gauge field. The explanation of the Faraday effect was one of the first applications of TGD (see this). The phenomenon is familiar to free energy researchers, whom academic researchers do not count as real researchers, and also technological applications have been proposed (see this).
  1. In the TGD framework, space-time is a 4-surface and gauge fields are induced, so that their geometrization is obtained. This means that the electroweak vector potentials are projections of the spinor connection of CP2. Let (cos(Θ),φ) be spherical coordinates for the geodesic sphere S2 of CP2. The Kähler gauge potential is Aφ= cos(Θ) and the Kähler form is JΘφ= sin(Θ). Introduce cylindrical coordinates (t,z,ρ,φ) for M4 and the space-time surface.
  2. The simplest space-time surface describing the situation without rotation corresponds to the embedding (cos(Θ),φ) = (f(ρ),nφ), n an integer. The non-vanishing component of the induced gauge potential is Aφ= nf(ρ) and the induced magnetic field is Bz= n∂ρf. The choice f=Bρ gives a constant magnetic field.
  3. The rotation of the space-time surface implies the replacement φ → φ-ω t, so that the induced vector potential gets the time component At= fω giving rise to the electric field E= ρ ω B. This is what the Faraday law extended to curvilinear motion would give. One could interpret the Faraday effect as direct evidence for the notion of induced gauge field (see this).
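As a cross-check of the rotation argument, here is a numeric sketch in standard Maxwellian conventions (assuming the gauge Aφ= Bρ2/2 for a constant Bz, rather than the induced-geometry conventions used above); the rotation-induced At= ω Aφ then reproduces |Eρ|= ρω B:

```python
# Rotating-disk sketch: represent a constant B_z by A_phi = B*rho**2/2
# (a standard-gauge assumption, not the text's convention). The rotation
# phi -> phi - omega*t induces A_t = omega*A_phi, whose radial derivative
# gives the radial field |E_rho| = omega*B*rho.

omega, B = 2.0, 3.0   # hypothetical values of angular velocity and field

def A_t(rho):
    return omega * (B * rho ** 2 / 2.0)   # A_t = omega * A_phi

def E_rho(rho, h=1e-6):
    # E_rho = -dA_t/drho, by central finite difference
    return -(A_t(rho + h) - A_t(rho - h)) / (2 * h)

rho = 1.5
assert abs(abs(E_rho(rho)) - omega * B * rho) < 1e-6   # |E| = omega*B*rho
```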
How to describe the generation of the em charge? Is the charge a purely geometric vacuum charge without any charge carriers or are charge carriers involved?
  1. Could there be a charge transfer between the disk and a third party? In TGD, the third party would be what I call the field body, which plays a key role in the explanation of numerous anomalies. TGD predicts the possibility of both electric and magnetic bodies, which are space-time surfaces giving rise to the TGD counterparts of Maxwellian fields and gauge fields.
  2. The field bodies are carriers of macroscopic quantum phases with large effective Planck constant heff=nh0, h= (7!)2h0 (a good guess). For the electric field body, ℏem would be proportional to the product of an elementary particle charge q and a large em charge Q associated with a negatively charged system such as DNA, a cell, the Earth, a capacitor, ..., giving rise to a large scale electric field. For the gravitational magnetic body, ℏgr would be proportional to a large mass M, such as the mass of the Earth or the Sun, and a small mass m.
  3. Both signs of the charge of the rotating disk are in principle possible, the sign being determined by the direction of the rotation. In living matter, however, negative charge is typical and could be generated by the Pollack effect, which transforms ordinary protons to dark protons at the gravitational or electric field body associated with the system and induces the generation of an exclusion zone (EZ) with negative charge, giving rise to an electric field body carrying dark electrons. The reversal of the Pollack effect would bring the protons back. Electrons could be transferred to the electric body or return from it. This would mean a large parity breaking effect and could relate closely to the chiral selection in living matter. TGD indeed predicts large parity breaking effects since macroscopic electroweak fields are predicted to be possible.
For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

About the physical interpretation of gravastar in the TGD framework

The gravastar model for a blackhole-like object describes the stellar interior as de-Sitter space and the exterior with the Schwarzschild metric. The surface of the blackhole is predicted to carry an exotic phase. In the previous post (see this) I demonstrated that the de-Sitter metric allows a realization as a space-time surface and that blackhole-like objects and in fact all stars could be modelled in this way.

In the sequel I will consider the physical interpretation of de-Sitter space-time represented as a 4-surface in the TGD framework.

  1. In TGD, the twistor lift predicts a cosmological constant Λ with the correct sign (see this and this). The twistor lift of TGD predicts that Λ= 3/α2, where the length scale α is dynamical and has a spectrum. The mass density ρ is associated with the volume term of the dimensionally reduced action having 3/(8π Gα2) as coefficient. Also the Kähler action is present and contains a CP2 part and possibly also an M4 part.

    Λ is not a universal constant in TGD but depends on the size scale of the space-time sheet. The naive estimate is that it corresponds to the size scale of the space-time sheet associated with the system or with its field body, which can be much larger than the system.

    The p-adic length scale hypothesis suggests that apart from a numerical constant the scale LΛ=(1/Λ)1/2 equals the p-adic length scale Lp characterizing the space-time sheet. If the p-adic length scale hypothesis L(k) ∝ p1/2 holds, where the prime p satisfies p∼ 2k, it implies L(k)= 2(k-151)/2 L(151), L(151)∼ 10 nm.

  2. How does the average density of an astrophysical object, or even of a smaller object, relate to the vacuum energy density determined by Λ? There are two options: the vacuum energy density corresponds to an additional contribution to the average energy density, or it determines the density completely. In the latter case one must assume quantum classical correspondence stating that the quantal fermionic contributions to the energy and other conserved quantum numbers are identical with the classical contributions, so that there would be a kind of duality. This would hold true only for the eigenvalues of charges of the Cartan algebra.
  3. One can assign to the cosmological constant a length scale as the geometric mean

    lΛ= (lP LΛ)1/2 ,

    where the Planck length is defined as lP= (ℏ G)1/2. One therefore obtains 3 length scales: the Planck length, the big length scale LΛ and their geometric mean lΛ.

  4. What is the relationship to the spectrum of Planck constants predicted by the number theoretical vision of TGD? If one replaces ℏ with ℏeff=nh0, one obtains a spectrum of gravitational constants G and of Planck length scales. The CP2 size scale R ∼ 104lP is a fundamental length scale in TGD. One can argue that G is expressible in terms of R as Geff=lP/ℏeff1/2 and that the CP2 length scale satisfies R=lP for the minimal value h0 of heff, so that one obtains Geff= R/heff1/2. For h0 one obtains the estimate h= (7!)2h0 in terms of the Planck constant h. This would predict a hierarchy of weakening values of G.

    Note that G=lP/ℏeff1/2 would predict the scaling lΛ∝ ℏeff1/4. The gravitational Planck constant ℏgr= GMm/β0 for the system formed by a large mass M and a small mass m has very large values.
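The p-adic length scale relation L(k)= 2(k-151)/2 L(151) and the geometric mean lΛ= (lP LΛ)1/2 quoted above are easy to evaluate numerically (a sketch using the standard value of the Planck length):

```python
# Check the p-adic length scale relation L(k) = 2**((k-151)/2) * L(151),
# with L(151) ~ 10 nm, and the geometric-mean scaling of l_Lambda.

import math

L151 = 1e-8           # 10 nm, the text's reference scale
l_P = 1.616e-35       # Planck length in meters (standard value)

def L(k):
    return 2 ** ((k - 151) / 2) * L151

def l_Lambda(L_Lam):
    return math.sqrt(l_P * L_Lam)   # geometric mean of l_P and L_Lambda

assert L(151) == L151
assert abs(L(167) / L151 - 256) < 1e-9               # 2^((167-151)/2) = 2^8
assert abs(l_Lambda(100 * L151) / l_Lambda(L151) - 10) < 1e-9  # sqrt scaling
```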

It is interesting to look at what values of lΛ are associated with LΛ , characterizing the size scale of a physical system or possibly of its field body.
  1. For the "cosmological" cosmological constant one has LΛ∼ 1061lP giving lΛ∼ 1031.5lP ∼ 2× 10-4 m. This corresponds to the size scale of a neuron. LΛ could characterize the largest layer of its field body with a cosmological size scale.
  2. A blackhole with the mass of the Sun has Schwarzschild radius rS= 3 km. LΛ=rS gives lΛ∼ 2.19× 10-16 m. The Compton length of the proton is lp=2.1× 10-16 m. This estimate motivated the proposal that stellar blackholes could correspond to volume filling flux tubes containing a sequence of protons with one proton per Compton length of the proton. This monopole flux tube would correspond to a very long nuclear string defining a gigantic nucleus. This result conforms with quantum classical correspondence stating that the vacuum energy density corresponds to the density of fermions.
  3. One can also look at what one obtains for the Sun with radius RS= 6.9× 108 m, which is in a good approximation 100 times the radius RE= 6.4× 106 m of the Earth. lΛ scales up by the ratio (RS/rS)1/2 ∼ 5.7× 102 to lΛ∼ 1.3× 10-14 m. This corresponds to a nuclear length scale and the corresponding particle would have a mass of about 17 MeV. Is it a mere coincidence that there is recent very strong evidence (23 sigmas!) from the so-called Ytterbium anomaly (see this) for the so-called X boson with mass 16-17 MeV (see this and this)?

    The corresponding vacuum energy density ℏ/lΛ4 would be about 8× 1038 mp/m3. This is 12 orders of magnitude higher than the average density .9× 1027 mp/m3 of the Sun. Since lΛ ∝ LΛ1/2 and ρ ∝ lΛ-4∝ LΛ-2, one obtains LΛ≥ 1012RS∼ 1020 m ∼ 105 ly, which corresponds to the size scale of the Milky Way.

    The only reasonable interpretation seems to be that LΛ characterizes the lengths of monopole flux tubes, which fill the volume only for blackhole-like objects. The TGD based model for the Sun involves monopole flux tubes connecting the Sun with the galactic nucleus or blackhole-like object (see this). In this case the density of matter at the flux tubes would be much higher since protons would be replaced with their M89 counterparts with 512 times higher mass. For this estimate, the vacuum energy density along the flux tubes would be the average density of the Sun. At least two kinds of flux tubes would be required and this is consistent with the notion of many-sheeted space-time.

    The proposed solar model, in which the solar wind and energy would be produced in the transformation of M89 nuclei to ordinary M107 nuclei, allows one to consider the possibility that the Sun and stars are blackhole-like objects in the sense that the interior contains a volume filling flux tube tangle carrying a vacuum energy density which equals the average solar mass density. I have considered this kind of model earlier (see this).

    One can wonder whether scaling up the value of h to heff could help to reduce the vacuum energy density assigned to the Sun. Since lΛ∝ ℏeff1/4, the density, proportional to ℏeff/lΛ4, does not depend on the value of heff.
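The blackhole estimate in item 2 above can be reproduced with standard constants (a numeric sketch; the figures in the text are order-of-magnitude):

```python
# For a solar-mass blackhole, the geometric mean l_Lambda = sqrt(l_P * r_S)
# of the Planck length and the Schwarzschild radius comes out close to the
# proton Compton length, as stated in the text.

import math

G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34   # SI values
M_sun, m_p = 1.989e30, 1.673e-27             # solar and proton masses, kg

l_P = math.sqrt(hbar * G / c ** 3)    # Planck length ~ 1.6e-35 m
r_S = 2 * G * M_sun / c ** 2          # Schwarzschild radius ~ 3 km
l_Lam = math.sqrt(l_P * r_S)          # geometric mean scale
l_compton = hbar / (m_p * c)          # proton Compton length ~ 2.1e-16 m

assert 2900 < r_S < 3000                  # ~3 km, as quoted in the text
assert abs(l_Lam / l_compton - 1) < 0.1   # agreement to ~10 percent
```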

To sum up, TGD could allow the interior of the gravastar solution as a space-time surface and this would correspond to the simplest imaginable model for the star. It is not clear whether Einstein's equations can be satisfied for some action based on the induced geometry but volume action is an excellent candidate even if cosmological constant is not allowed. In the TGD framework, the cosmological constant would correspond to the volume action as a classical action.

The Schwarzschild metric as the exterior metric is representable as a space-time surface (see this) although it need not be consistent with any classical action principle; it could indeed make sense only at the quantum field theory limit when the many-sheeted space-time is replaced with a region of M4 made slightly curved. The spherical coordinates of the Schwarzschild metric correspond to spherical coordinates of the Minkowski metric and the Schwarzschild radius is associated with the radial coordinate of M4. The exotic matter at the surface of the star as a blackhole-like entity could have a counterpart in the TGD based model of the star (see this).

See the article Does the notion of gravastar make sense in the TGD Universe? or the chapter Some Solar Mysteries.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, April 14, 2025

Does the notion of gravastar make sense in the TGD Universe?

Mark McWilliams asked for my TGD based opinion about the gravastar as a competing candidate for the blackhole (see this and this). The metric of the gravastar model would be the de-Sitter metric in the interior of the gravastar. The density would be constant and there would be no singularity at the origin. The condition ρ=-p would be true for de-Sitter space and there would be an analogy with dark energy, which in the TGD framework contributes to galactic dark matter identified as classical volume and magnetic energies of what I call cosmic strings, which are 4-surfaces with a string world sheet as M4 projection. The condition ρ=p would hold true for the ultrarelativistic matter at the surface, which indeed has a light-like metric if the infinite value of the radial component grr of the Schwarzschild metric is deformed to a finite value at the horizon. In the exterior one would have ρ=p=0.

TGD suggests a model of a blackhole-like object as a volume filling monopole flux tube tangle, which carries a constant mass density that can be interpreted as dark energy, as a sum of classical magnetic and volume energies. Quantum classical correspondence forces us to ask whether the descriptions in terms of sequences of nucleons and in terms of classical energy are equivalent or whether the possibly dark nucleons must be added as a separate contribution. I have discussed the TGD based model of blackhole-like objects in the article commenting on Haramein's model.

It came as a surprise to me that the gravastar could serve as a simple model for this structure and describe the space-time sheet at which the monopole flux tube tangle is topologically condensed. TGD also suggests that the surface of the star carries a layer of M89 matter consisting of scaled variants of ordinary hadrons with a mass scale 512 times higher than that of ordinary hadrons. This would be the counterpart of the exotic matter at the surface of the gravastar \cite{btart/Haramein}. This model predicts that the nuclear fusion at the core of the star is replaced with a transformation of M89 hadrons to ordinary hadrons. This would explain the energy production of the star and also the stellar wind, and it raises questions about the structure of the interior. I have proposed that the interior could be a quantum coherent system analogous to a cell.

Consider now the TGD counterpart of the gravastar model at a quantitative level.

  1. The metric of AdSn (anti-de-Sitter) resp. dSn (de-Sitter) can be represented by a space-like resp. time-like hyperboloid of (n+1)-dimensional Minkowski space with one time-like dimension. The metric is the metric induced from

    dx02 - ∑i=1n dxi2 ,

    with the metric tensor deducible from the representation

    x02 - ∑i=1n xi2 = ε α2 ,

    as a surface. Here one has ε=-1 for AdSn and ε=1 for dSn.

    It should be warned that the Wikipedia definition of the dSn (see this) contains the right-hand side with a wrong sign (there is ε=-1 instead of ε=1) whereas the definition of AdSn (see this) is correct. For n=4 this could realize AdS4 resp. dS4 as a space-like resp. time-like hyperboloid of 5-D Minkowski space.

  2. In TGD this representation as a surface is not possible as such. One can however compactify the fifth space-like dimension and represent it as a geodesic circle of CP2: dx52 is replaced with R2dφ2 and x52 with R2φ2. The contribution of S1 to the induced metric is very small since R corresponds to the CP2 radius. The space-time surface would be defined by the condition

    a2 = R2φ2 - ε α2 ,

    where a2=t2-x2-y2-z2 defines the light-cone proper time a. In TGD it would be associated with the second half of the causal diamond (CD). A more convenient form is the following

    R2φ2 = a2 + ε α2 ,

    where a is the light-cone proper time coordinate of M4. This requires a2 ≥ -ε α2. For ε=-1 this implies a2 ≥ α2. For ε=1 one has a2 ≥ -α2 so that also space-like hyperboloids are possible.

  3. If the embedding is possible, one obtains an infinite covering of S1 by the mass shells a2 = R2φn2 - ε α2, where one has φn = φ + n2π. For large n one has a ≈ n2πR. The hyperboloids associated with φn define a lattice of hyperboloids at this limit, a kind of time crystal.
  4. If the classical action is the Kähler action of CP2, this surface is a vacuum extremal since the CP2 projection is 1-dimensional. If also the contribution of the M4 Kähler action to the Kähler action, suggested by the twistor lift of TGD, is allowed, the action is the instanton action and vanishes although the induced M4 Kähler form does not vanish and defines a self-dual abelian field. It is not quite clear whether this is a vacuum extremal anymore.

    If the Kähler action vanishes, volume action is the natural guess for the classical action and minimal surface equations are indeed satisfied if S1 is a geodesic circle. The mass density associated with this action would be constant in accordance with the de-Sitter solution.

  5. Consider next the induced metric. One has

    φn = n2π + [(a/R)2 + ε (α/R)2]1/2 .

    This gives Rdφn/da = ± a/[a2 + ε α2]1/2. Note that a2 ≥ -ε α2 is required to guarantee the reality of dφn/da. The gaa component of the induced metric (Robertson-Walker metric with k=-1, sub-critical mass density) is

    gaa = 1 - R2(dφn/da)2 = 1 - a2/(a2 + ε α2) = ε α2/(a2 + ε α2) .
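
    The closed form gaa = εα2/(a2+εα2) can be checked numerically. The following is my own sketch (not part of the original derivation): it compares the closed form with a finite-difference derivative of φ(a) for arbitrary illustrative parameter values.

```python
import math

# Check g_aa = 1 - R^2 (dphi/da)^2 against the closed form eps*alpha^2/(a^2 + eps*alpha^2)
# for the embedding R^2 phi^2 = a^2 + eps*alpha^2. Illustrative parameter values only.

def phi(a, R, alpha, eps):
    return math.sqrt(a * a + eps * alpha * alpha) / R

def g_aa_numeric(a, R, alpha, eps, h=1e-6):
    # central finite difference for dphi/da
    dphi_da = (phi(a + h, R, alpha, eps) - phi(a - h, R, alpha, eps)) / (2 * h)
    return 1.0 - (R * dphi_da) ** 2

def g_aa_closed(a, alpha, eps):
    return eps * alpha * alpha / (a * a + eps * alpha * alpha)

R, alpha = 1.0, 2.0
for eps in (1, -1):              # eps=1: dS4, eps=-1: AdS4
    for a in (3.0, 5.0, 10.0):   # a^2 > alpha^2 so both cases are real
        assert abs(g_aa_numeric(a, R, alpha, eps) - g_aa_closed(a, alpha, eps)) < 1e-6

# dS4 gives g_aa > 0 (Lorentzian), AdS4 gives g_aa < 0 (Euclidean) for a^2 > alpha^2
assert g_aa_closed(3.0, alpha, 1) > 0 and g_aa_closed(3.0, alpha, -1) < 0
```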

It is useful to consider AdS4 and dS4 separately.
  1. For AdS4 with ε=-1, the reality of dφn/da requires a2 ≥ α2 and one has gaa = -α2/(a2-α2) < 0 so that the induced metric has an Euclidean signature. This is mathematically possible and CP2 type extremals with Euclidean signature play an important role in the TGD based model of elementary particles. What Euclidean cosmology could mean physically is however not clear.
  2. For dS4 with ε=1, dφn/da is real for a2+α2 > 0, implying a2 ≥ -α2. This allows all time-like hyperboloids and also some space-like hyperboloids. One has

    gaa = 1 - R2(dφn/da)2 = 1 - a2/(a2+α2) = α2/(a2+α2) .

    gaa is positive in the range allowed by the reality of dφ/da.

  3. The mass density of the Robertson-Walker cosmology is obtained from the standard expression of the metric (note that one has dt2 = gaada2) and is given by

    ρ = (3/8πG)[((da/dt)/a)2 - 1/a2] = (3/8πG)[1/(gaaa2) - 1/a2] = 3/(8πG α2) .

    The mass density is constant and could be interpreted in terms of a dynamically generated cosmological constant in GRT framework. This is not what happens usually in the Big Bang cosmology but would conform with a model of a star in an expanding Universe.
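
    The constancy of ρ can be verified directly. A minimal numeric check (my own illustration, with an arbitrary value of α):

```python
import math

G = 6.674e-11   # Newton's constant, SI units (sets only the overall scale)
alpha = 2.0     # illustrative value

def g_aa(a):
    # dS4 case: g_aa = alpha^2/(a^2 + alpha^2)
    return alpha ** 2 / (a ** 2 + alpha ** 2)

def rho(a):
    # rho = (3/8piG)[1/(g_aa a^2) - 1/a^2]
    return (3.0 / (8.0 * math.pi * G)) * (1.0 / (g_aa(a) * a ** 2) - 1.0 / a ** 2)

# the a-dependence cancels: rho equals 3/(8 pi G alpha^2) for every a
rho_const = 3.0 / (8.0 * math.pi * G * alpha ** 2)
for a in (1.0, 3.0, 7.5, 100.0):
    assert math.isclose(rho(a), rho_const, rel_tol=1e-9)
```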

Somewhat surprisingly, TGD could allow the interior of the gravastar solution as a space-time surface and this would correspond to the simplest imaginable model for the star. It is not clear whether Einstein's equations can be satisfied for some action based on the induced geometry, but the volume action is an excellent candidate even if a cosmological constant is not allowed. In the TGD framework, the cosmological constant would correspond to the volume action as the classical action.

Schwarzschild metric as the exterior metric is representable as a space-time surface \cite{allb/tgdgrt} although it need not be consistent with any classical action principle and it could indeed make sense only at the quantum field theory limit when the many-sheeted space-time is replaced with a region of M4 made slightly curved. The spherical coordinates for the Schwarzschild metric correspond to spherical coordinates for the Minkowski metric and the Schwarzschild radius is associated with the radial coordinate of M4. The exotic matter at the surface of the star as a blackhole-like entity could have a counterpart in the TGD based model of the star \cite{btart/Haramein}.

See the article Does the notion of gravastar make sense in the TGD Universe? or the chapter Some Solar Mysteries.

For a summary of the earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, April 12, 2025

The rotation of galaxies in the same direction in giga-ly scale as evidence for the TGD view of space-time and cosmic quantum coherence

Sabine Hossenfelder told in her Youtube video (see this) about the recent finding of Lior Shamir (see the article) that galaxies have a clear tendency to rotate in the same direction in Giga light-year length scales. There is also a popular article (see this) about this.

The following is the abstract of the article of Shamir.

JWST provides a view of the Universe never seen before, and specifically fine details of galaxies in deep space. JWST Advanced Deep Extragalactic Survey (JADES) is a deep field survey, providing an unprecedentedly detailed view of galaxies in the early Universe. The field is also in relatively close proximity to the Galactic pole. Analysis of spiral galaxies by their direction of rotation in JADES shows that the number of galaxies in that field that rotate in the opposite direction relative to the Milky Way galaxy is 50 per cent higher than the number of galaxies that rotate in the same direction relative to the Milky Way. The analysis is done using a computer-aided quantitative method, but the difference is so extreme that it can be noticed and inspected even by the unaided human eye. These observations are in excellent agreement with deep fields taken at around the same footprint by the Hubble Space Telescope and JWST. The reason for the difference may be related to the structure of the early Universe, but it can also be related to the physics of galaxy rotation and the internal structure of galaxies. In that case the observation can provide possible explanations to other puzzling anomalies such as the tension and the observation of massive mature galaxies at very high redshifts.

The popular article says that the fractions of the galaxies rotating in opposite directions with respect to the Milky Way are 2/3 and 1/3 and there is no doubt that the observation is real. The Doppler effect allows us to deduce the rotation direction for a given galaxy: the side rotating towards the observer is blueshifted. The effect occurs in the scale of a giga light year.

From the article of Shamir one learns that these kinds of observations have been made already earlier, as early as 1985. Already Zeldovich observed that galaxies are associated with long linear structures and tend to rotate in the same direction. I have proposed a TGD based explanation in terms of long cosmic strings whose tangles give rise to the generation of galaxies along them (see this).

Several explanations for the findings of Shamir have been proposed. The entire universe has been proposed to rotate. Also a fractal Universe has been proposed in which case the rotating structures would appear in all scales. TGD predicts that space-times are 4-surfaces in H=M4×CP2. This leads to the notion of many-sheeted space-time strongly suggesting the possibility of fractal structures in all scales. A fractal structure in a given scale would correspond to a quantum coherence region. The realization that holography= holomorphy principle reduces the extremely non-linear field equations of TGD to algebraic equations led to the surprising conclusion that the fractality in question is a 4-D generalization of the fractality of Mandelbrot fractals and Julia sets (see this). The number theoretic vision predicts a hierarchy of effective Planck constants and provides a precise formulation for what the long range quantum coherence means.

Galaxies as 4-surfaces assignable with monopole flux tubes obtained by a thickening of a cosmic string are predicted to be organized along long string-like objects, cosmic strings, and to be highly correlated, for instance having correlated spin directions. This explains the findings of Zeldovich, and large scale quantum coherence allows us to understand the more general findings. In the very early Universe cosmic strings with a 2-D M4 projection would have dominated.

For the TGD based cosmology and astrophysics see for instance this. For the recent number theoretic vision of TGD see this.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Friday, April 11, 2025

Some questions related to the maps g defining cognitive hierarchies realized as space-time surfaces

Rational maps g, and possibly also their inverses, would be central in the realization of cognition and reflective hierarchies. These ideas are however far from their final form and in the following I try to imagine and exclude various alternatives. Some new results emerge.
  1. Quantum realization of concepts as superpositions in the set of space-time surfaces defining the classical concept is more natural than the classical realization.
  2. The roots of gºf correspond to classical non-determinism and would naturally correspond to generalized p-adicity and could also explain the p-adic length scale hypothesis.
  3. The inverses g-1 of the rational maps g correspond to algebraic functions unless g is an analog of a Möbius transformation. g-1ºf preserves the number of roots of f and decreases it for the iterate of g, in which case it reduces complexity and negentropy. In the framework of the TGD inspired theory of consciousness, this raises the question whether the quantum correlates for good and evil deeds as SFRs could correspond to maps of type g increasing algebraic complexity, information and quantum coherence and maps of type g-1 possibly reducing them.

1. What could happen in the transition f→ gºf?

The proposal is that in SSFR the transition f→ gºf takes place. The number of roots becomes n-fold if g=P/Q is a rational function of degree n. What could this transition mean physically? One can consider two options.
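
For polynomials the statement reduces to the multiplicativity of the degree: deg(gºf) = deg(g)·deg(f), so composing with a g of degree n makes the number of roots (counted with multiplicity, over C) n-fold. A small self-contained illustration (mine, not from the post):

```python
# Polynomials as coefficient lists, lowest degree first.

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_compose(g, f):
    # g(f(x)) by Horner's rule in the "variable" f
    out = [g[-1]]
    for c in reversed(g[:-1]):
        out = poly_mul(out, f)
        out[0] += c
    return out

f = [1, 0, 1]      # f(x) = x^2 + 1, degree 2
g = [0, -2, 0, 1]  # g(x) = x^3 - 2x, degree 3

gf = poly_compose(g, f)                            # g(f(x)) = x^6 + 3x^4 + x^2 - 1
assert len(gf) - 1 == (len(g) - 1) * (len(f) - 1)  # degree 6 = 3 * 2
```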

1.1 The option allowing quantum realization of concept

The nm roots (poles and zeros) of gºf, where f has m roots, would be alternative outcomes of SSFR, of which only a single outcome, or possibly a quantum superposition of the outcomes, would be selected. What is so nice is that the classical non-determinism crucial for the TGD view of consciousness would follow automatically from the holography= holomorphy hypothesis without any additional assumptions.

Conservation laws conform with this view. All the alternative Bohr orbits would have the same classical conserved charges. The quantum superposition of the roots would represent a particular quantum realization of a concept and f→ gºf would mean a refinement of the quantum concept defined by f.

The hypothesis that the classical non-determinism corresponds to the p-adic non-determinism would transform to a statement that the different Bohr orbits associated with gºk define analogs for the sequences of k pinary digits if there are p outcomes for gºf. A possible interpretation would be in terms of a k-digit pinary sequence in powers of p. The largest integer would correspond to n=2k for gºk. The generalization of the notion of p-adic numbers, for which p is replaced by a functional prime g, based on the generalization of Witt polynomials is suggestive. It remains unclear whether this could allow us to understand the generalization of the p-adic length scale hypothesis stating that a large prime p∼ pk can be assigned to this set of Bohr orbits.

1.2 The option allowing a classical realization of concept

The union of nm space-time surfaces, where n is the degree of g and m is the number of roots of f, is generated in the step f→ gºf. The set of the nm space-time surfaces would give a classical realization of a concept as a set. Does this make sense? The first grave objection is that there is no continuous time evolution between f and gºf multiplying the number of space-time surfaces by n. The second objection relates to the conservation laws, which seem to be violated. The third objection is that classical non-determinism is lost. It seems that this objection cannot be circumvented.

One can try to imagine ways to overcome the first two objections.

Option I: ZEO interpreted in the "eastern" sense in principle allows the creation of n space-time surfaces from each of the m space-time surfaces assignable with f. This is because the total classical charges of the zero energy states as sums of those for states at the boundaries of CD vanish. Zero energy state would be analogous to a quantum fluctuation.

Option II: In standard ontology, the classical realization of the concept as union of space-time surfaces defining its instances is possible only in a situation in which space-time surfaces are vacua or nearly vacua. Could this kind of surface serve as a template for the non-vacuum physical systems?

Cell replication, which would correspond to n=2 for g, was motivated by the consideration of both options, at least half-seriously. The instantaneous replication of the space-time surface representing the cell does not look sensible since the generation of biomatter requires a feed of metabolites and metabolic energy. Could a replicated field body serve as a kind of template for the formation of a final state involving two cells generated in f→ gºf? Could the replication occur at the level of the field body, proposed to control the biological body?

For Option II, conservation laws pose a problem for replication. In ZEO the classical charges of the nm space-time surfaces should be those associated with the passive boundary of CD and therefore the same as those for f.

  1. Could the space-time surfaces be special in the sense that the classical charges vanish? The vanishing of the classical conserved charges is not possible unless the classical action reduces to the Kähler action allowing vacuum extremals. The finite size of CD indeed allows by the Uncertainty Principle a slight violation of the classical conservation laws assignable to the Poincare invariance (see this). This cannot be excluded and the original proposal (see this and this) indeed was that the Kähler action defines the classical action by its unique property of having a huge classical non-determinism defining the 4-D analog of spin-glass degeneracy (see this), which could play a key role in biology.

    If one assigns to M4 the analog of the Kähler structure (see this), this argument weakens since the induced M4 and CP2 Kähler forms must vanish for the vacuum extremals. However, for a given Hamilton-Jacobi structure defining the M4 Kähler form, there exist space-time surfaces of this kind. They are Cartesian products of Lagrangian 2-manifolds of M4 and CP2 defining vacuum string world sheets.

    Holography= holomorphy principle, implying that Bohr orbits are minimal surfaces, seems to hold true for any classical action, which is general coordinate invariant and determined by the induced geometry. For the Kähler action, the coefficient Λ of the volume term, defining the analog of the cosmological constant, would vanish. Holography= holomorphy principle does not allow Cartesian products of Lagrangian 2-manifolds of M4 and CP2. One could hope that their vacuum property could change the situation but this does not look like an elegant option.

  2. For the standard ontology, one can also consider another option. The classical action, and therefore the classical conserved charges, are for the twistor lift proportional to 1/αK, where αK is Kähler coupling strength. The conservation of charges would suggest αK→ nαK requiring heff→ heff/n in the n-fold multiplication. For heff=h this would require h→ h/n. This looks strange.

    h need not however be the minimal value of heff and I have considered the possibility that one has h = n0h0 (see this), where n0 corresponds to the ratio R2(CP2)/lP2. The CP2 size scale would be given by the Planck length lP but for h=n0h0 the size scale would be scaled up to R2 ∼ n0lP2, n0 ∈ [107,108]. The estimate for n0 is given by n0=(7!)2, having the primes 2,3,5,7 as factors (see this). R(CP2) would naturally correspond to the M4 size of a wormhole throat. h could be reduced by a factor appearing in n0 and there is some evidence for the reduction of heff by a small power of 2 (see this). This mechanism could work for a functional prime g characterized by prime p∈{2,3,5,7}.

The classical realization of the concept does not look realistic except possibly for Option I.

2. About the interpretation of the inverses of the maps g

What could be the interpretation of the inverse maps g-1 for g=P/Q, assuming that they can occur? g-1 is a multivalued algebraic function analogous to z1/n. In f→ g-1ºf the roots rn of f are mapped to g(rn) so that their number does not increase. For the iterate of g, g-1 means the reduction of the number of roots by a factor 1/n. The complexity does not increase and can even decrease.
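
The multivaluedness is easy to make concrete for g(z) = zn, whose inverse is the n-valued z1/n. A small illustration (my own, not from the post):

```python
import cmath

def nth_roots(z, n):
    # the n branches of the inverse of g(z) = z^n at the point z
    r, theta = abs(z) ** (1.0 / n), cmath.phase(z)
    return [r * cmath.exp(1j * (theta + 2 * cmath.pi * k) / n) for k in range(n)]

roots = nth_roots(8 + 0j, 3)
assert len(roots) == 3                 # three branches of z^(1/3)
for w in roots:
    assert abs(w ** 3 - 8) < 1e-9      # each branch inverts g(z) = z^3
```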

This is just the opposite of what occurs in f→ gºf. The increase of complexity is assigned with number theoretic evolution and NMP. Suppose for a moment that the inverses g-1 are allowed. What could be their interpretation?

  1. The sequence of the inverses g-1 does not correspond to non-determinism and does not give rise to a refinement of either classical or quantum concept. There is no increase of complexity and it can be reduced for iterates.

  2. Could the reduction of the cell to the stem cell level, as a reverse of cell differentiation, which occurs by cell replications, correspond at the level of the field body to a sequence of g-1:s reducing the complexity? Could cancer correspond to this kind of process? This would conform with the interpretation in terms of the reduction of negentropy.

  3. The first option is that the maps of type g-1 are possible for both arrows of the geometric time. For the iterates of g, g-1 destroys complexity and information and reduces the level of cognition in this case. g-1 would obey anti-NMP in this case. Both maps g and g-1 make possible a trial and error process. If an iterate of g is not involved, the roots rn of hºf are mapped by g to roots g(rn) and the number of roots is preserved. It is not clear whether the algebraic complexity is increased or reduced.

    This suggests that NMP (see this) is not lost if both maps of type g and g-1 are allowed. Furthermore, there is a lower bound for algebraic complexity but no upper bound so that it seems that NMP remains true even if maps of type g-1 are allowed.

    Any quantum theory of consciousness should be able to say something about the quantum correlates of ethics (see this). In TGD, one can assign the notion of good to state function reductions (SFRs) inducing the increase of quantum coherence occurring in a statistical sense in SFRs. It would correspond to the increase of algebraic complexity and would be accompanied by the increase of heff and the amount of potentially conscious information. Is evil something analogous to a thermodynamic fluctuation reducing entropy or can one speak of an active evil? Could the notion of evil as something active be assigned with the occurrence of maps of type g-1?

  4. The maps of type g and g-1 are reversals of each other and differ unless they act as symmetries analogous to Möbius transformations. Could they be assigned with SSFRs with opposite arrows of geometric time? If so, negentropy would not increase for both arrows of the geometric time and there would be a universal arrow of time analogous to that assumed in standard thermodynamics and defined by negentropy increase. If a universal arrow of time exists, it should somehow relate to the violation of time reflection symmetry T. To me this option does not look plausible.

    If this is the case, the trial and error process allowed by ZEO and based on pairs of BSFRs would involve a map of type g-1 induced by SSFRs whereas the second BSFR would correspond to a map of type g. The sequence of SSFRs after the first BSFR would preserve or even reduce complexity and would mean starting from a new state at the passive boundary (PB) of CD. If the first BSFR is followed by a sequence of SSFRs of type g, it in general leads to a more negentropic new initial state at PB.

See the article A more detailed view about the TGD counterpart of Langlands correspondence or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, April 09, 2025

Why very distant galaxies have very sharp boundaries?

Ethan Siegel has published an interesting popular article in Bigthink (see this). It states that the deepest of the Hubble deep fields, the Ultra Deep Field and the Extreme Deep Field, show compact, luminous galaxies amidst a sea of total darkness. They are visible against the dark background as bright spots. How is this possible? One would expect that their brightness gradually fades near the boundaries.

The explanation discussed in the article of Siegel is that much of the actual starlight had been oversubtracted as part of the field-flattening method used. When a proper reanalysis is conducted, the light is preserved, showing that the sky is brighter than anyone realized. The proposal for the subtractions is discussed in an Astronomy & Astrophysics article by Borlaff et al (see this).

TGD allows us to consider an alternative explanation. There are observations of galaxies which are so distant that they should not be visible at all, since at the moment of emission the Universe should have contained mostly neutral hydrogen absorbing the light and would have been opaque. I considered the TGD explanation for these findings around 2018 (see this).

The light would arrive along monopole flux tubes connecting distant galaxies to our galaxy, to our solar system and to the Earth. These flux tubes correspond to 4-surfaces in H=M4×CP2, kinds of space-time quanta, and would act like light cables. The intensity of the signal would not be reduced as the inverse of the distance squared as the standard view of space-time and fields predicts. This would make it possible to receive light from objects beyond the distance which corresponds to the time when ionization took place and the universe became transparent (see this). There is evidence for these objects (see this, this and this).

These light cables would have sharp boundaries, which would explain the sharp boundaries of galaxies without the need for subtraction. This would also give an estimate for the transversal size of the monopole flux tubes or flux tube bundles.
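
The quantitative point, that intensity confined to a flux tube does not obey the inverse square law, can be stated as a one-line toy model (my own illustration, with made-up numbers):

```python
import math

def intensity_free(L, r):
    # free space: power L spreads over a sphere of radius r
    return L / (4 * math.pi * r ** 2)

def intensity_cable(L, A, r):
    # "light cable": power L confined to a fixed cross-section A; r drops out
    return L / A

L_star = 3.8e26   # W, solar luminosity for scale
assert intensity_free(L_star, 2.0) == intensity_free(L_star, 1.0) / 4
assert intensity_cable(L_star, 1.0, 1.0) == intensity_cable(L_star, 1.0, 1e9)
```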

See the article Some Solar Mysteries or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, April 07, 2025

Infinite primes, the notion of rational prime, and holography= holomorphy principle

The notion of infinite prime \cite{allb/visionc,infpc,infmotives} emerged from a repeated quantization of a supersymmetric arithmetic quantum field theory in which the many-fermion states and many-boson states formed from the single particle states at a given level give rise to free many-particle states at the next level. Also bound states of these states are included at the new level. There is a correspondence with rational functions as ratios R=P/Q of polynomials and an infinite prime can be interpreted as a prime rational function in the sense that P and Q have no common factors. The construction is possible for any coefficient field of polynomials identified as rationals or an extension of rationals, call it E.

At a given level the simplest polynomials P and Q are products of monomials with roots in E, say rationals. Irreducible polynomials correspond to products of monomials with algebraic roots in the corresponding extension of rationals and define the counterparts of bound states so that the notion of bound state would be purely number theoretic. The level of the hierarchy would be characterized by the number of variables of the rational functions.

Holography= holomorphy principle suggests that the hierarchy of infinite primes could be used to construct the functions f1: H→ C and f2: H→ C defining space-time surfaces as the roots of f=(f1,f2). There is one hypercomplex coordinate and 3 complex coordinates so that the hierarchy for fi would have 4 levels. The functions g: C2→ C2 define a hierarchy of maps with respect to the functional composition º. One can identify the counterparts of primes with respect to º and it turns out that the notion of infinite prime generalizes.

The construction of infinite primes

Consider first the construction of infinite primes.

  1. Two integers m and n with no common prime factors define a rational m/n uniquely. Introduce the analog of a Fermi sea as the product X = ∏p p of all rational primes. An infinite prime is obtained as P = nX/r + mr such that r = ∏pk is a product of a finite number of distinct primes pk, n is not divisible by any pk, and m has as factors only powers of some of the primes pk. The finite and infinite parts of the infinite prime correspond to the numerator and denominator of a rational n/m so that rationals and infinite primes can be identified. One can say that the rational for which n and m have no common factors is prime in this sense.

    One can interpret the primes pk dividing r as labels of fermions and r as representing the fermions kicked out of the Fermi sea defined by X. The integers n and m are analogs of many-boson states. This construction generalizes also to the algebraic extensions E of rationals.
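
    A finite analog of the construction is easy to check on a computer. The following sketch (mine, not from the posting) replaces X by the product of the first six primes and verifies that P = nX/r + mr is coprime to each of them:

```python
from math import gcd, prod

primes = [2, 3, 5, 7, 11, 13]
X = prod(primes)      # finite analog of the "Fermi sea" product over all primes

r = 2 * 5             # square-free: fermions kicked out of the sea
m = 4 * 25            # powers of the primes dividing r
n = 3 * 7             # shares no factor with r

P = n * X // r + m * r
# P is not divisible by any of the primes defining X: the analog of primality
assert all(P % p != 0 for p in primes)
assert gcd(P, X) == 1
```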

  2. One can generalize the construction to the second level of the hierarchy. At the second level one introduces a fermionic vacuum Y as the product of all finite and infinite primes of the first level. One can repeat the construction: the analogs of the integers r, m and n are now products of the infinite primes P(m/n,X) = nX/r + mr represented as infinite integers. The analog of r kicks out of the new fermionic vacuum some fermions represented by infinite primes P(m/n,X) = nX/r + mr. The infinite integers at the second level are analogous to rational functions P/Q with the polynomials P and Q defined as products of the monomials p(m/n,X) = nX/r + mr taking the role of n and m. These polynomials are not irreducible.

    One can however generalize and assume that they factor into monomials associated with the roots of some irreducible polynomial P (no rational roots) in some extension E of rationals. Hence also rational functions R(X) = P(X)/Q(X) with no common monomial factors emerge as analogs of primes for rational functions. The lowest level with rational roots would correspond to free many-fermion states and the irreducible polynomials to a hierarchy of fermionic bound states.

  3. The construction can be continued and one obtains an infinite hierarchy of infinite primes represented as rational functions R(X1,X2,..,Xn) = P(X1,X2,..,Xn)/Q(X1,X2,..,Xn) which have no common prime factors of level n-1. At the second level the polynomials are P(X,Y) = ∑k Pnk(X)Yk. The roots Yk of P(X,Y) are obtained as ordinary roots of a polynomial with coefficients Pnk(X) depending on X and they define the factorization of P into monomials. At the third level the coefficients are irreducible polynomials depending on X and Y and the roots in Z are algebraic functions of X and Y.

    Physically this construction is analogous to a repeated second quantization of a number theoretic quantum field theory with bosons and fermions labelled/represented by primes. At a given level the simplest states are free many-particle states, whereas bound states correspond to irreducible polynomials. The notion of a free state depends on the extension E of rationals used.

Infinite primes and holography= holomorphy principle

How does this relate to holography= holomorphy principle? One can consider two options for what the hierarchy of infinite prime could correspond to.

  1. One considers functions f=(f1,f2): H→ C2, with fi expressed in terms of rational functions of 3 complex coordinates and one hypercomplex coordinate. The general hypothesis is that the function pairs (f1,f2) defining the space-time surfaces as their roots (f1,f2)=(0,0) are analytic functions of the generalized complex coordinates of H with coefficients in some extension E of rationals.
  2. Now one has a pair of functions: (f1,f2) or (g1,g2) but infinite primes involve only a single function. One can solve the problem by using element-wise sum and product so that both factors would correspond to a hierarchy of infinite primes.
  3. One can also assign space-time surfaces to polynomial pairs (P1,P2) and also to pairs of rational functions (R1,R2). One can therefore restrict the consideration to f1 ≡ f. f2 can be treated in the same way but there are some physical motivations to ask whether f2 could define the counterpart of the cosmological constant and therefore could be more or less fixed in a given scale.
The allowance of rational functions forces us to ask whether zeros are enough or whether also poles are needed.
  1. Hitherto it has been assumed that only the roots f=0 matter. If one allows rational functions P/Q then also the poles, identifiable as the roots of Q, are important. The compactification of the complex plane to the Riemann sphere CP1 is carried out in complex analysis so that the poles have a geometric interpretation: zeros correspond to, say, the North Pole and poles to the South Pole for the map C→ C interpreted as a map CP1→ CP1. Compactification would now mean the compactification C2→ CP1×CP1.

    For instance, the Riemann-Roch theorem (see this) is a statement about the properties of zeros and poles of meromorphic functions defined on Riemann surfaces. The so-called divisor is a representation of the poles and zeros as a formal sum over them. For instance, for meromorphic functions on a sphere the numbers of zeros and poles, with multiplicity taken into account, are the same.

    Could the notion of the divisor generalize to the level of space-time surfaces, so that a divisor would be a union of space-time surfaces representing the zeros and poles of P and Q? Note that the inversion fi→ 1/fi maps zeros and poles to each other. It can be performed for f1 and f2 separately and the obvious question concerns the physical interpretation.

  2. Infinite primes would thus correspond to rational functions R= P/Q of several variables. In the present case, one has one hypercomplex coordinate u, one complex coordinate w of M4, and 2 complex coordinates ξ12 of CP2. They would correspond to the coordinates Xi and the hierarchy of infinite primes would have 4 levels. The order of the coordinates does not affect the rational function R(u,w,ξ12), but the hypercomplex coordinate is naturally the first one. It seems that the order of the complex coordinates depends on the space-time region, since not all complex coordinates can be solved in terms of the remaining coordinates. It can even happen that a coordinate does not appear in P or Q at all.

    The hypercomplex coordinate u is in a special position and one can ask whether rational functions of it make sense. Trigonometric functions and Fourier analysis look more natural.
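As a concrete one-variable illustration of the zero/pole discussion above, here is a minimal sketch in plain Python. The particular P and Q are my own toy choices, not from the text: the divisor is just the bookkeeping {root: multiplicity}, inversion R → 1/R permutes the zero and pole data, and with deg P = deg Q the counts with multiplicity agree, as for a meromorphic function on CP1.

```python
# Toy rational function R = P/Q with P(z) = (z-1)^2 (z+2), Q(z) = z (z^2+1).
P = lambda z: (z - 1)**2 * (z + 2)
Q = lambda z: z * (z**2 + 1)

# Divisor data: {root: multiplicity} for numerator and denominator.
zeros = {1: 2, -2: 1}
poles = {0: 1, 1j: 1, -1j: 1}

for r in zeros:
    assert abs(P(r)) < 1e-12        # zeros of R are roots of P
for r in poles:
    assert abs(Q(r)) < 1e-12        # poles of R are roots of Q

# Inversion R -> 1/R = Q/P swaps the roles of zeros and poles:
inv_zeros, inv_poles = poles, zeros

# With deg P = deg Q = 3 there is no extra zero or pole at infinity, and the
# counts with multiplicity agree, as on the Riemann sphere CP1.
assert sum(zeros.values()) == sum(poles.values()) == 3
```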

What could be the physical relationship between the space-time surfaces representing poles and zeros?

  1. Could zeros and poles relate to ZEO and the time reversal occurring in "big" state function reduction (BSFR)? Could the time reversal change zeros to poles and vice versa and correspond to fi→ 1/fi inducing P/Q → Q/P? Are both zeros and poles present for a given arrow of time, or only for one arrow of time? One can also ask whether complex conjugation could be involved in the time reversal occurring in BSFR (it would not be the same as time reflection T).

    For a meromorphic function, the numbers of poles and zeros are the same in a well-defined sense, so that the numbers of the corresponding space-time surfaces are the same. What could this mean physically? Could this relate to the conservation of fermion numbers? There would be two conserved fermion numbers corresponding to f1 and f2. Could they correspond to baryon and lepton number?

  2. P and Q would have no common polynomial (prime) factors. The zeros and poles of R, as zeros of P and Q, are represented as space-time surfaces. Could the zeros and poles correspond to matter and antimatter, so that meromorphy would state that the numbers of particles and antiparticles are the same? Or do they correspond to the two fermionic vacua assigned to the boundaries of the CD, such that the vacuum associated with the passive boundary is what corresponds to quantum states in the 3-D sense?
  3. Could infinite primes have two representations: a representation as space-time surfaces in terms of the holography= holomorphy principle, and a representation as fermion states involving a 4-levelled hierarchy of second quantizations for both quarks and leptons? What could these 4 quantizations mean physically?
  4. Can the space-time surfaces defined by zeros and poles intersect each other? If BSFR permutes the two kinds of space-time surfaces, they should intersect at 3-surfaces defining the holographic data. The failure of exact classical determinism implies that the 4-surfaces are not identical.

Hierarchies of functional composites of g: C2→ C2

One can also consider rational functions g=(g1,g2) with gi=Pi/Qi: C2→ C2 defining abstraction hierarchies. Also in this case the element-wise product is possible, but functional composition º with its interpretation in terms of the formation of abstractions looks more natural. Fractals are obtained as a special case. º is not commutative and it is not clear whether the analogs of primes, prime decomposition, and the definition of rational functions exist.
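The non-commutativity of º, in contrast to the element-wise product, can be seen already in a one-variable toy case (my own minimal sketch, with g and h chosen only for illustration):

```python
g = lambda z: z**2
h = lambda z: z + 1

# Functional composition is not commutative:
# (g º h)(z) = (z+1)^2 differs from (h º g)(z) = z^2 + 1.
assert g(h(2)) == 9
assert h(g(2)) == 5
assert any(g(h(z)) != h(g(z)) for z in range(5))

# The element-wise product, by contrast, is commutative:
assert all(g(z) * h(z) == h(z) * g(z) for z in range(5))
```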

  1. Prime decompositions with respect to º make sense, and one can identify function pairs f=(f1,f2) which are primes in the sense that they do not allow a decomposition involving composition with some g. These prime space-time surfaces define the analogs of ground states.
  2. The notion of a generalized rational makes sense. For ordinary infinite primes represented as P/Q, the polynomials P and Q have no common prime polynomial factors. Now / is replaced with a functional division (f,g)→ fº g-1 instead of (f,g)→ f/g. In general, g-1 is a many-valued algebraic function: in the one-variable case the inverse of a polynomial involves the algebraic functions appearing in the expressions for its roots. This means a considerable generalization of the notion of infinite prime.
  3. One obtains the counterpart of the hierarchy of infinite primes. The analog of the product of infinite primes at a given level is the composite of prime g:s. For ordinary infinite primes, irreducible polynomials as realizations of bound states replace the coefficient field E with its extension; the replacement of the rationals as a coefficient field with its extensions E does the same for the composites of g:s. This gives a hierarchy similar to that of irreducible polynomials: the hierarchy formed by rational functions with an increasing number of variables corresponds to the hierarchy of extensions of rationals.
  4. The conditions for zeros and poles are not affected since they reduce to corresponding conditions for gº f.
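The degree bookkeeping behind this hierarchy (degree multiplicative under º, additive under the element-wise product, so that the degree map is a morphism to ordinary arithmetic) can be checked in a one-variable toy case. This is a sketch under my own conventions: polynomials as coefficient lists, with helper functions invented here for illustration.

```python
# Polynomials as coefficient lists [a0, a1, ...]; enough to track degrees.
def poly_mul(p, q):
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_compose(p, q):
    # p(q(z)) via Horner's scheme
    r = [p[-1]]
    for a in reversed(p[:-1]):
        r = poly_mul(r, q)
        r[0] += a
    return r

def deg(p):
    return len(p) - 1

g = [1.0, 0.0, 1.0]        # z^2 + 1: degree 2, a prime degree
h = [0.0, -1.0, 0.0, 1.0]  # z^3 - z: degree 3

# Degree is multiplicative under functional composition º ...
assert deg(poly_compose(g, h)) == deg(g) * deg(h)   # 6 = 2 * 3
# ... and additive under the element-wise product:
assert deg(poly_mul(g, h)) == deg(g) + deg(h)       # 5 = 2 + 3
```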
See the article A more detailed view about the TGD counterpart of Langlands correspondence or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Sunday, April 06, 2025

More evidence for dark matter-like particles in Milky Way: TGD view of color as an explanation?

Sabine Hossenfelder told about a quite recent finding possibly related to dark matter (see this). The article "Anomalous ionization in the central molecular zone by sub-GeV dark matter" can be found in arXiv (see this). Here is its abstract:

We demonstrate that the anomalous ionization rate observed in the Central Molecular Zone can be attributed to MeV dark matter annihilations into e+e- pairs for galactic dark matter profiles with slopes γ> 1. The low annihilation cross-sections required avoid cosmological constraints and imply no detectable inverse Compton, bremsstrahlung or synchrotron emissions in radio, X and gamma rays. The possible connection to the source of the unexplained 511 keV line emission in the Galactic Center suggests that both observations could be correlated and have a common origin.

I will try to summarize what I understood from Sabine's YouTube talk.

  1. It has been observed that more IR light than expected arrives from the Central Molecular Zone, where stars are formed. Hydrogen normally forms H2 molecules, whose vibrational excitations cannot explain the IR light. H3+ could give rise to the infrared light.
  2. There should be a mechanism leading to the formation of ionized H3 molecules. Electrons could cause the ionization, and the proposal is that dark particles in the MeV mass range serve as the source of the ionizing electrons: two dark particles in this energy range would annihilate to electron-positron pairs and the electrons would ionize the H3 molecules.
  3. There indeed exists earlier evidence for gamma rays with energy 511 keV from the center of the Milky Way (see this and this). They could be generated in the annihilation of dark particles with mass slightly above an MeV to gamma pairs. This would happen in collisions of these particles and would require that the dark particles are very nearly at rest.
TGD leads to a much simpler explanation of the findings in terms of particles whose mass is only slightly above .511 MeV, the mass of the electron (see this). They would directly decay to electron-positron pairs.
  1. The empirical findings motivating this hypothesis emerged already in the seventies from heavy ion collisions with collision energy near the critical value for overcoming the Coulomb wall: anomalous electron-positron pairs were observed with energy slightly more than twice the rest mass .5 MeV of the electron. In the standard model, the decay widths of weak bosons do not allow new particles in this mass range, and this was probably the reason why the findings were forgotten.
  2. An essential role in the explanation is played by the TGD view of color symmetry and the dynamics of strong interactions, which are both in some respects very different from the QCD view. I have described this view in an article (see this) inspired by the quite recent finding of a large isospin breaking in the production of kaon pairs: the production rate for charged kaons is 18.4 per cent higher than for neutral kaons, challenging QCD. The explanation that comes to mind is that the color gauge coupling depends slightly on the electric charge of the quark, besides the weak dependence on the p-adic mass scale of the quark (now u or d quark).
How does the TGD based view of color lead to this proposal?
  1. Color corresponds to color partial waves in CP2, and a spectrum of colored spinor harmonics in H=M4× CP2 is predicted for both quarks and leptons. The color partial waves correlate with electroweak quantum numbers, unlike the observed color quantum numbers. This means a large isospin breaking (see this) at the fundamental level, where all classical gauge fields and the gravitational field are expressible in terms of H coordinates and their gradients, and only four of them are needed by general coordinate invariance. One can imagine a mechanism which guarantees weak screening in scales longer than the weak boson Compton length; this mechanism also explains the color quantum numbers of physical leptons and quarks.

    The weak screening above the weak scale could take place by a pair of left- and right-handed neutrinos assignable to the monopole flux tubes associated with the quark; it would also give the needed additional color charge so that quarks would be color triplets and leptons color singlets.

  2. It is however possible to also have color octet and higher triality t=0 excitations of leptons and analogous excitations of quarks (see this). The particles with mass slightly above 2me would be analogs of pions, electropions as I have called them. Also muopions and taupions are predicted and there are experimental indications for them as well (see this, this, and this), but these findings have been forgotten since such particles cannot exist in the standard model.
How to understand the darkness of electropions?
  1. The darkness of the leptopions and possible other leptomesons could make it possible to avoid the problems with the decay widths of weak bosons. But what could this darkness mean? The experiments of Blackman and others (see this) suggest that the irradiation of the brain at EEG frequencies has behavioral and physiological effects, that these effects are quantal, and that they correspond to cyclotron transitions in a magnetic field of about 2BE/5, where BE is the Earth's magnetic field. This does not make sense in standard quantum theory, since the value of the Planck constant is more than 10 orders of magnitude too small and the cyclotron energy would be far below the thermal energy. I have proposed that the Planck constant, or the effective Planck constant heff, has a spectrum and its value can be arbitrarily large.

    In the recent formulation of TGD involving the number theoretic vision, the heff hierarchy follows as a prediction. A large value of heff would give rise to quantum coherent phases of ordinary matter at the magnetic/field body of the system, and these phases would behave like dark matter in the sense that only particles with the same value of heff can appear in the vertices of the TGD analogs of Feynman diagrams.

  2. The natural guess is that the 511 keV particle is dark in this number theoretic sense. It would not be created in the decays of ordinary weak bosons unless these are themselves dark with the same value of heff. The second option is that leptomesons can appear only in the dark phase at the quantum criticality associated with the situation in which the Coulomb wall can be overcome. Dark phases in this sense appear only at quantum criticality, making possible long range quantum fluctuations and quantum coherence.
  3. For a long time I thought that darkness in the number theoretic sense could correspond to the darkness of galactic dark matter, but now it seems that this is not the case (see this, this and this). Classically, galactic dark matter could correspond to the Kähler magnetic and volume energy of cosmic strings, which are 4-surfaces in M4× CP2 with a 2-D M4 projection. One can of course ask whether quantum classical correspondence implies that the classical energy equals its fermionic counterpart, in which case these views of dark matter could be equivalent.

    The number theoretic darkness would however make itself visible also in cosmology. The transformation of ordinary particles to dark phases at the magnetic bodies, forced by the unavoidable increase of number theoretical complexity implying evolution, would reduce the amount of ordinary matter. This could explain why baryonic (and also leptonic) matter seems to gradually disappear during the cosmic evolution.
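The order-of-magnitude claim in point 1 above (cyclotron energy far below thermal energy with the ordinary Planck constant) can be checked with a few lines of plain Python. The choice of a Ca2+ ion and BE ≈ 50 µT is my illustrative assumption, typical of the Blackman-type experiments, not something fixed by the text:

```python
import math

h, k_B = 6.626e-34, 1.381e-23            # Planck and Boltzmann constants (SI)
B = 0.4 * 5e-5                            # 2*B_E/5 with B_E ~ 50 microtesla
q, m = 2 * 1.602e-19, 40 * 1.661e-27      # charge and mass of a Ca2+ ion (assumed example)

f_c = q * B / (2 * math.pi * m)           # cyclotron frequency, ~15 Hz: EEG range
E_c = h * f_c                             # cyclotron energy with ordinary h
E_thermal = k_B * 310                     # thermal energy at body temperature

# The ratio comes out around 4e11, i.e. more than 10 orders of magnitude,
# consistent with the statement in the text.
print(f"f_c ~ {f_c:.0f} Hz, E_thermal/E_c ~ {E_thermal / E_c:.0e}")
```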

To sum up, the recently observed isospin anomaly of strong interactions, together with additional empirical support for the TGD view of color, is rather encouraging. This hypothesis is testable without expensive accelerators already now. Only the readiness to challenge the belief that QCD is the final theory of strong interactions would be required, and I am afraid that it takes time to reach this readiness. Two very different views of science are competing: the old-fashioned science in which anomalies were gold nuggets, and the Big Science in which everything is understood if 98 percent is understood.

See the article The violation of isospin symmetry in strong interactions and .511 MeV anomaly: evidence for TGD view of quark color? or the chapter New Particle Physics Predicted by TGD: Part I.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, April 03, 2025

Do we need the Future Circular Collider and what should we study with it?

There are considerable pressures in particle physics circles against building the Future Circular Collider. From the Wiki page for the FCC (see this) one learns that 3 colliders are planned: FCC-hh, FCC-ee, and FCC-eh, corresponding to hadron-hadron, electron-positron, and electron-hadron collisions. FCC-ee would be built first. The total cm energy for hadron-hadron collisions, about 100 TeV, would be roughly 7 times higher than at the LHC.

Among the things to be studied would be, for instance, dark matter particles, supersymmetric particles, and electroweak interactions at higher precision and at higher energies.

My opinion is that more money for empirical research cannot help, since the basic problem is that theoretical research has been in deep intellectual stagnation for decades and cannot provide new ideas to be tested. New ideas and theories have been systematically censored during the last decades, as I have learned during the 43 years since my thesis in 1982.

My thesis proposed a new view of gravitation and standard model interactions obtained by replacing string world sheets with 4-D space-time surfaces in embedding space H=M4×CP2 geometrizing standard model symmetries. This led to a hybrid of general and special relativities solving the difficulties of general relativity with the basic conservation laws.

The embedding space H=M4×CP2 for space-time surfaces, and therefore the predicted physics, is consistent with the standard model and unique from its mere mathematical existence. A deep connection between the geometric vision and the number theoretic vision (something totally new), leading to a generalization of Langlands duality, emerges in the 4-D situation. The theory is exactly solvable and there would be an enormous amount of theoretical and certainly also experimental work to be done, but censorship prevents any progress (see this and this).

Interestingly, one of the basic predictions is a strong correlation between electroweak and strong interactions at the fundamental level (since the geometrization of fields implies that they all reduce to CP2 geometry). The recent, totally unexpected finding of a large violation of isospin symmetry in strong interactions (see this and this) is consistent with the TGD prediction (see this). This suggests that the promising research direction is not the particle physicist's view of dark matter or SUSY, but testing whether the basic assumptions of QCD are really correct and whether the theory of strong interactions is really a gauge theory.

See TGD as it is towards end of 2024: part I and TGD as it is towards end of 2024: part II

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, April 02, 2025

Realization of a concept as a set of space-time surfaces

The space-time surfaces defined as roots of gº...º gº f, where f is a prime polynomial and g(0,0)=(0,0) (here f is an analytic map H=M4×CP2→ C2 and g an analytic map C2→ C2), form a kind of ensemble of disjoint space-time surfaces. Abstraction means the formation of concepts, and classically a concept is the set of its different instances. Could this union of disjoint space-time surfaces as roots represent a concept classically?

What comes to mind are biological systems consisting of cells: do they represent the concept of a cell? What about a population of organisms? What about an ensemble of elementary particles: could it represent the concept of, say, the electron?

  1. The holography= holomorphy principle would be essential for the realization of the geometric correlate of collective quantum coherence. Only the initial 3-surfaces defining the holographic data matter in holography; the 4-D tangent spaces defining the counterparts of initial velocities cannot be chosen freely. This would force a coherent synchronous motion. Also classical non-determinism would be present. Could it correspond to a piecewise constant Hamilton-Jacobi structure, with a different structure assigned to different regions of the space-time surface?
  2. The Hamilton-Jacobi structure of all members of the ensemble formed by the roots of gº...º gº f is the same, so that they can be said to behave synchronously like a single quantum coherent system. Could the loss of quantum coherence mean splitting: the pk roots forming a coherent structure would decompose to pk1 sets with different H-J structures, each containing pk-k1 roots. The cognitive ensemble, as a representation of a concept, would decompose to ensembles representing pk1 different concepts. Is continual splitting and fusion taking place? Could this conceptualization make possible conceptualized memory: the image of the house would be represented by an ensemble of images of houses as a kind of artwork.
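The pk counting of roots can be illustrated in one variable (a plain-Python sketch; g(w) = w² - 2 is my own toy choice, and the condition g(0,0)=(0,0) of the text is not imposed in this one-variable model): n iterations of a degree-p map under º give p^n roots, the "ensemble" discussed above.

```python
def g(w):
    # degree-2 map; via z = 2 cos(theta) it acts as theta -> 2*theta,
    # so all roots of its iterates are real and lie in [-2, 2]
    return w * w - 2

def iterate(z, n=3):
    for _ in range(n):
        z = g(z)
    return z                    # (g º g º g)(z)

# Count the real roots of the third iterate by sign changes on [-2, 2].
xs = [-2 + 4 * k / 10000 for k in range(10001)]
ys = [iterate(x) for x in xs]
sign_changes = sum(1 for a, b in zip(ys, ys[1:]) if a * b < 0)

assert sign_changes == 2**3     # p^n = 2^3 = 8 disjoint roots
```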
I have often enjoyed looking at a crop field in a mild summer wind. To me, the behaviour suggests quantum coherence.
  1. A crop field in the wind seems to behave like a single entity. Could the crop field correspond to an abstraction of the notion of crop as a set of its instances, realized as a set of space-time surfaces given by the roots of gº...º f? Also more general composites g1º g2º...º gnº f, gi(0,0)=(0,0), are possible. The roots could also represent the notion of a crop field in wind as a collection of crops, each moving in the wind with a particular motion of the air around it.
  2. Do I create this abstraction as a conceptualization, a kind of thought bubble, or does the real crop field represent this abstraction? Could f correspond to the primary sensory perception, and does cognition generate this set (not "in my head" but at my field body) as a hierarchy of iterations and an even more general set of g-composites? Different observers experience crop fields very differently, which would suggest that this is a realistic view.
  3. If this set represents the real crop field, there should also be a space-time surface representing the environment and the wind. Could wormhole contacts connect the surfaces representing the concept and the environment to a single coherent whole?

    The usual thinking is that the crops form uncorrelated systems and that the wind, as a motion of air, causes the crops to move. The coherent motion would correspond to a collective mode in which the crops move in unison and synchronously. What creates this coherent motion? Could macroscopic quantum coherence at the level of the field body be the underlying reason in the TGD Universe?

  4. How to describe the wind if one accepts that the crop field in wind itself represents the notion of crop in wind? Usually the wind is seen as an external force. The coherent motion correlates locally with the wind. What does this mean? How could one include the wind as a part of the system? The wind should affect the crops as roots of gº...º gº f: each root should correspond to a specific crop affected locally by the wind. Or should one accept that the concept of a crop field in the wind is realized only at the level of cognition rather than at the level of reality?
See the article Classical non-determinism in relation to holography, memory and the realization of intentional action in the TGD Universe or the chapter Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life?

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.