https://matpitka.blogspot.com/2025/04/?m=0

Monday, April 07, 2025

Infinite primes, the notion of rational prime, and holography= holomorphy principle

The notion of infinite prime \cite{allb/visionc,infpc,infmotives} emerged from a repeated quantization of a supersymmetric arithmetic quantum field theory in which the many-fermion and many-boson states formed from the single particle states at a given level give rise to free many-particle states at the next level. Also bound states of these states are included at the new level. There is a correspondence with rational functions as ratios R=P/Q of polynomials, and an infinite prime can be interpreted as a prime rational function in the sense that P and Q have no common factors. The construction is possible for any coefficient field of the polynomials identified as the rationals or an extension of rationals, call it E.

At a given level the simplest polynomials P and Q are products of monomials with roots in E, say the rationals. Irreducible polynomials correspond to products of monomials with algebraic roots in the corresponding extension of rationals and define the counterparts of bound states, so that the notion of bound state would be purely number theoretic. The level of the hierarchy would be characterized by the number of variables of the rational functions.
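The distinction between free states (products of monomials with roots in E) and bound states (irreducible polynomials) can be made concrete for E = Q with the rational root theorem: a polynomial with integer coefficients and no rational roots admits no monomial factorization over the rationals and would thus represent a bound state. A minimal sketch (the helper names are illustrative, not from the source):

```python
# Sketch: detect whether an integer-coefficient polynomial has monomial
# factors over the rationals, via the rational root theorem.
from fractions import Fraction

def rational_roots(coeffs):
    """All rational roots of a polynomial given low-degree-first,
    e.g. [c0, c1, c2] for c0 + c1*x + c2*x^2 (rational root theorem)."""
    a0, an = coeffs[0], coeffs[-1]
    def divisors(k):
        k = abs(k)
        return [d for d in range(1, k + 1) if k % d == 0]
    # candidate roots p/q with p | a0 and q | an, both signs
    cands = {Fraction(s * p, q) for p in divisors(a0)
                                for q in divisors(an) for s in (1, -1)}
    def val(x):
        return sum(c * x**i for i, c in enumerate(coeffs))
    return sorted(x for x in cands if val(x) == 0)

# x^2 - 2: no rational roots, so no monomial factors over Q ("bound state")
assert rational_roots([-2, 0, 1]) == []
# x^2 - 3x + 2 = (x-1)(x-2): factors into monomials over Q ("free state")
assert rational_roots([2, -3, 1]) == [1, 2]
```

Over the extension Q(sqrt 2) the first polynomial would acquire the roots ±sqrt(2) and factor into monomials, matching the statement that the notion of a free state depends on the extension E.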

Holography= holomorphy principle suggests that the hierarchy of infinite primes could be used to construct the functions f1: H→ C and f2: H→ C defining space-time surfaces as roots of f=(f1,f2). There is one hypercomplex coordinate and 3 complex coordinates so that the hierarchy for fi would have 4 levels. The functions g: C2→ C2 define a hierarchy of maps with respect to the functional composition º. One can identify the counterparts of primes with respect to º and it turns out that the notion of infinite prime generalizes.

The construction of infinite primes

Consider first the construction of infinite primes.

  1. Two integers m and n with no common prime factors define a rational r=m/n uniquely. Introduce the analog of a Fermi sea as the product X = ∏p p of all rational primes. An infinite prime is obtained as P= nX/r+ mr such that m=∏pk is a product of a finite number of primes pk and n is not divisible by any pk: m has as factors powers of some of the primes pk while n has none of them. The finite and infinite parts of the infinite prime correspond to the numerator and denominator of the rational n/m so that rationals and infinite primes can be identified. One can say that the rational, for which n and m have no common factors, is prime in this sense.

    One can interpret the primes pk dividing r as labels of fermions and r as representing fermions kicked out from the Fermi sea defined by X. The integers n and m are analogs of many-boson states. This construction generalizes also to the algebraic extensions E of rationals.

  2. One can generalize the construction to the second level of the hierarchy. At the second level one introduces a fermionic vacuum Y as the product of all finite and infinite primes of the first level. One can repeat the construction: now the integers r, m and n are products of the infinite primes P(m/n,X)= nX/r+mr represented as infinite integers. The analog of r is obtained from the new fermionic vacuum by kicking out some fermions represented by infinite primes P(m/n,X)= nX/r+mr. The infinite integers at the second level are analogous to rational functions P/Q, with the polynomials P and Q defined as products of the monomials P(m/n,X)= nX/r+mr taking the roles of n and m. These polynomials are not irreducible.

    One can however generalize and assume that the polynomials factor into monomials associated with the roots of some irreducible polynomial P (no rational roots) in some extension E of rationals. Hence also rational functions R(X)= P(X)/Q(X) with no common monomial factors emerge as the analogs of primes for rational functions. The lowest level with rational roots would correspond to free many-fermion states and the irreducible polynomials to a hierarchy of fermionic bound states.

  3. The construction can be continued and one obtains an infinite hierarchy of infinite primes represented as rational functions R(X1,X2,..,Xn)= P(X1,X2,..,Xn)/Q(X1,X2,..,Xn), where P and Q have no common prime factors of level n-1. At the second level the polynomials are of the form P(X,Y)= ∑k Pk(X)Yk. The roots Yk of P(X,Y) are obtained as ordinary roots of a polynomial with coefficients Pk(X) depending on X and they define the factorization of P into monomials. At the third level the coefficients are irreducible polynomials depending on X and Y and the roots for Z are algebraic functions of X and Y.

    Physically this construction is analogous to a repeated second quantization of a number theoretic quantum field theory with bosons and fermions labelled/represented by primes. At a given level the simplest states correspond to free many-particle states, and bound states correspond to irreducible polynomials. The notion of a free state depends on the extension E of rationals used.
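The single-particle data of a first-level infinite prime P= nX/r+ mr can be sketched programmatically by keeping X symbolic and tracking only the finite integers. The conventions below, with the kicked-out fermions collected into a squarefree integer s (my reading of the role of r), n coprime to s and m built from the primes of s, are one possible interpretation of the conditions stated above, not a fixed convention:

```python
# Sketch of first-level infinite primes: X = product of all finite primes
# is kept symbolic; only the finite data (s, n, m) is checked.
# Assumptions of this sketch: P = n*X/s + m*s with
#   s = squarefree product of the "kicked-out" primes,
#   n coprime to s, and all prime factors of m among those of s.
from math import gcd

def prime_factors(k: int) -> set:
    """Prime factors of k by trial division (demo-sized integers only)."""
    fs, p = set(), 2
    while p * p <= k:
        while k % p == 0:
            fs.add(p)
            k //= p
        p += 1
    if k > 1:
        fs.add(k)
    return fs

def is_first_level_prime(s: int, n: int, m: int) -> bool:
    """Check the coprimality conditions making n*X/s + m*s 'prime'."""
    squarefree = all(s % (p * p) != 0 for p in prime_factors(s))
    return (squarefree
            and gcd(n, s) == 1                       # n avoids kicked-out primes
            and prime_factors(m) <= prime_factors(s))  # m built from them

# fermions 2 and 3 kicked out: s = 6; n = 5 coprime to 6; m = 4 = 2^2
assert is_first_level_prime(6, 5, 4)
# fails: n = 10 shares the factor 2 with s = 6
assert not is_first_level_prime(6, 10, 3)
```

The correspondence with rationals would then simply read off the pair (n, m) as the rational n/m.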

Infinite primes and holography= holomorphy principle

How does this relate to holography= holomorphy principle? One can consider two options for what the hierarchy of infinite primes could correspond to.

  1. One considers functions f=(f1,f2): H→ C2, with fi expressed as rational functions of 3 complex coordinates and one hypercomplex coordinate. The general hypothesis is that the function pairs (f1,f2) defining the space-time surfaces as their roots (f1,f2)=(0,0) are analytic functions of the generalized complex coordinates of H with coefficients in some extension E of rationals.
  2. Now one has a pair of functions: (f1,f2) or (g1,g2) but infinite primes involve only a single function. One can solve the problem by using element-wise sum and product so that both factors would correspond to a hierarchy of infinite primes.
  3. One can also assign space-time surfaces to polynomial pairs (P1,P2) and also to pairs of rational functions (R1,R2). One can therefore restrict the consideration to f1≡ f. f2 can be treated in the same way but there are some physical motivations to ask whether f2 could define the counterpart of the cosmological constant and therefore could be more or less fixed in a given scale.
The allowance of rational functions forces us to ask whether zeros are enough or whether also poles are needed.
  1. Hitherto it has been assumed that only the roots f=0 matter. If one allows rational functions P/Q then also the poles, identifiable as the roots of Q, are important. In complex analysis the compactification of the complex plane to the Riemann sphere CP1 gives the poles a geometric interpretation: zeros correspond to, say, the North Pole and poles to the South Pole for a map C→ C interpreted as a map CP1→ CP1. Compactification would now mean the compactification C2→ (CP1)2.

    For instance, the Riemann-Roch theorem (see this) is a statement about the properties of zeros and poles of meromorphic functions defined on Riemann surfaces. The so-called divisor is a representation of the poles and zeros as a formal sum over them. For instance, for meromorphic functions on a sphere the numbers of zeros and poles, with multiplicity taken into account, are the same.

    The notion of the divisor would generalize to the level of space-time surfaces so that a divisor would be a union of space-time surfaces representing the zeros and poles of P and Q. Note that the inversion fi→ 1/fi maps zeros and poles to each other. It can be performed for f1 and f2 separately and the obvious question concerns the physical interpretation.

  2. Infinite primes would thus correspond to rational functions R= P/Q of several variables. In the recent case, one has one hypercomplex coordinate u, one complex coordinate w of M4, and 2 complex coordinates ξ1, ξ2 of CP2. They would correspond to the coordinates Xi and the hierarchy of infinite primes would have 4 levels. The order of the coordinates does not affect the rational function R(u,w,ξ1,ξ2) but the hypercomplex coordinate is naturally the first one. It seems that the order of the complex coordinates depends on the space-time region since not all complex coordinates can be solved in terms of the remaining coordinates. It can even happen that a coordinate does not appear in P or Q.

    The hypercomplex coordinate u is in a special position and one can ask whether rational functions of it make sense. Trigonometric functions and Fourier analysis look more natural.
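The balance of zeros and poles on the Riemann sphere mentioned above can be checked by pure degree counting: for R = P/Q in lowest terms, the point at infinity contributes a zero or pole of order |deg Q - deg P|, so both totals equal max(deg P, deg Q). A minimal sketch of this counting (the function name is illustrative):

```python
# Sketch: count zeros and poles of R = P/Q on CP1, with multiplicity,
# including the point at infinity. Assumes P and Q share no factors.
def zero_pole_counts(degP: int, degQ: int):
    """Return (total zeros, total poles) of P/Q on the Riemann sphere."""
    at_inf = degQ - degP              # order of R at infinity: >0 zero, <0 pole
    zeros = degP + max(at_inf, 0)     # finite zeros plus possible zero at infinity
    poles = degQ + max(-at_inf, 0)    # finite poles plus possible pole at infinity
    return zeros, poles

# deg P = 3, deg Q = 1: pole of order 2 at infinity balances the count
assert zero_pole_counts(3, 1) == (3, 3)
# deg P = 2, deg Q = 5: zero of order 3 at infinity
assert zero_pole_counts(2, 5) == (5, 5)
```

In divisor language this is the statement that the divisor of a meromorphic function on the sphere has degree zero.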

What could be the physical relationship between the space-time surfaces representing poles and zeros?

  1. Could zeros and poles relate to ZEO and the time reversal occurring in "big" state function reduction (BSFR)? Could the time reversal change zeros to poles and vice versa and correspond to fi→ 1/fi inducing P/Q → Q/P? Are both zeros and poles present for a given arrow of time or only for one arrow of time? One can also ask whether complex conjugation could be involved with the time reversal occurring in BSFR (it would not be the same as time reflection T).

    For a meromorphic function, the numbers of poles and zeros are the same in a well-defined sense, so that the numbers of the corresponding space-time surfaces are the same. What could this mean physically? Could this relate to the conservation of fermion numbers? There would be two conserved fermion numbers corresponding to f1 and f2. Could they correspond to baryon and lepton number?

  2. P and Q would have no common polynomial (prime) factors. The zeros and poles of R, as zeros of P and Q, are represented as space-time surfaces. Could the zeros and poles correspond to matter and antimatter so that meromorphy would state that the numbers of particles and antiparticles are the same? Or do they correspond to the two fermionic vacua assigned to the boundaries of the CD such that the vacuum associated with the passive boundary is what corresponds to quantum states in the 3-D sense?
  3. Could infinite primes have two representations: a representation as space-time surfaces in terms of holography= holomorphy principle and a representation as fermion states involving a 4-levelled hierarchy of second quantizations for both quarks and leptons? What could these 4 quantizations mean physically?
  4. Can the space-time surfaces defined by zeros and poles intersect each other? If BSFR permutes the two kinds of space-time surfaces, they should intersect at 3-surfaces defining the holographic data. The failure of exact classical determinism implies that the 4-surfaces are not identical.

Hierarchies of functional composites of g: C2→ C2

One can also consider rational functions g=(g1,g2) with gi=Pi/Qi: C2→ C2 defining abstraction hierarchies. Also in this case the element-wise product is possible but the functional composition º and the interpretation in terms of the formation of abstractions looks more natural. Fractals are obtained as a special case. º is not commutative and it is not clear whether the analogs of primes, prime decomposition, and the definition of rational functions exist.

  1. Prime decompositions for g with respect to º make sense and one can identify polynomial pairs f=(f1,f2) which are primes in the sense that they do not decompose into a composite with some g. These prime space-time surfaces define the analogs of ground states.
  2. The notion of a generalized rational makes sense. For ordinary infinite primes represented as P/Q, the polynomials P and Q do not have common prime polynomial factors. Now / is replaced with a functional division (f,g)→ fº g-1 instead of (f,g)→ f/g. In general, g-1 is a many-valued algebraic function. In the one-variable case for polynomials the inverse involves the algebraic functions appearing in the expressions for the roots of the polynomial. This means a considerable generalization of the notion of infinite prime.
  3. One obtains the counterpart of the hierarchy of infinite primes. The analog of the product of infinite primes at a given level is the composite of prime g:s. For ordinary infinite primes, the irreducible polynomials realizing bound states replace the coefficient field E with its extension. The replacement of the rationals as a coefficient field with its extensions E does the same for the composites of g:s. This gives a hierarchy similar to that of irreducible polynomials: now the hierarchy formed by rational functions with an increasing number of variables corresponds to the hierarchy of extensions of rationals.
  4. The conditions for zeros and poles are not affected since they reduce to corresponding conditions for gº f.
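Two of the properties used above, the non-commutativity of º and the reduction of the zero conditions of gº f to those of f when g(0,0)=(0,0), can be demonstrated with toy maps C2→ C2 (the particular maps below are of course just illustrative, not from the source):

```python
# Sketch: functional composition of maps C^2 -> C^2.
def compose(g, f):
    """Return the composite map g∘f."""
    return lambda z1, z2: g(*f(z1, z2))

g = lambda u, v: (u * v, u + v)       # g(0,0) = (0,0): origin-preserving
h = lambda u, v: (u + 1, v * v)       # h(0,0) = (1,0): not origin-preserving
f = lambda z1, z2: (z1 - 2, z2 + 3)   # root of f at (2, -3)

# composition ∘ is not commutative
assert compose(g, h)(1, 1) != compose(h, g)(1, 1)
# a root of f remains a root of g∘f because g fixes the origin...
assert compose(g, f)(2, -3) == (0, 0)
# ...but not of h∘f, since h moves the origin
assert compose(h, f)(2, -3) != (0, 0)
```

This is why the zero (and pole) conditions for a composite hierarchy reduce to the corresponding conditions for the innermost f.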
See the article A more detailed view about the TGD counterpart of Langlands correspondence or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Sunday, April 06, 2025

More evidence for dark matter-like particles in Milky Way: TGD view of color as an explanation?

Sabine Hossenfelder told about a quite recent finding possibly related to dark matter (see this). "Anomalous ionization in the central molecular zone by sub-GeV dark matter" can be found on arXiv (see this). Here is the abstract of the article:

We demonstrate that the anomalous ionization rate observed in the Central Molecular Zone can be attributed to MeV dark matter annihilations into e+e- pairs for galactic dark matter profiles with slopes γ> 1. The low annihilation cross-sections required avoid cosmological constraints and imply no detectable inverse Compton, bremsstrahlung or synchrotron emissions in radio, X and gamma rays. The possible connection to the source of the unexplained 511 keV line emission in the Galactic Center suggests that both observations could be correlated and have a common origin.

I will try to summarize what I understood from Sabine's YouTube talk.

  1. It has been observed that more IR light than expected arrives from the Central Molecular Zone, where stars are formed. Hydrogen normally forms H2 molecules, whose vibrational excitations cannot explain the IR light. H3+ could give rise to the infrared light.
  2. There should be a mechanism leading to the formation of ionized H3 molecules. Electrons could cause the ionization and the proposal is that dark particles in MeV mass range could serve as the source of the ionizing electrons. The proposal is that two dark particles in this energy range annihilate to electron-positron pairs and the electrons ionize the H3 molecules.
  3. There indeed exists earlier evidence for gamma rays with energy 511 keV from the Milky Way center (see this and this). They could be generated in the annihilation of dark particles with mass slightly above an MeV to gamma pairs. This would happen in the collisions of these particles and this would require that the dark particles are very nearly at rest.
TGD leads to a much simpler explanation of the findings in terms of particles whose mass is only slightly above twice the electron mass me= .511 MeV (see this). They would directly decay to electron-positron pairs.
  1. The empirical findings motivating this hypothesis emerged already in the seventies from the observation that in heavy ion collisions with collision energy near the critical energy for overcoming the Coulomb wall, anomalous electron-positron pairs were observed with energy slightly more than twice the rest mass .5 MeV of electrons. In the standard model, the decay widths of weak bosons do not allow new particles in this mass range and this was probably the reason why the findings were forgotten.
  2. An essential role in the explanation is played by the TGD view of color symmetry and of the dynamics of strong interactions, both of which are in some respects very different from the QCD view. I have described this view in (see this), inspired by the quite recent finding of a large isospin breaking in the production of kaon pairs: the production rate for charged kaons is 18.4 per cent higher than for neutral kaons, challenging QCD. The explanation that comes to mind is that the color gauge coupling depends slightly on the electric charge of the quark besides the weak dependence on the p-adic mass scale of the quark (now u or d quark).
How does the TGD based view of color lead to this proposal?
  1. Color corresponds to color partial waves in CP2 and a spectrum of colored spinor harmonics in H=M4× CP2 is predicted for both quarks and leptons. The color partial waves correlate with electroweak quantum numbers, unlike the observed color quantum numbers. This means a large isospin breaking (see this) at the fundamental level, where all classical gauge fields and the gravitational field are expressible in terms of H coordinates and their gradients and only four of them are needed by general coordinate invariance. One can imagine a mechanism, which guarantees weak screening in scales longer than the weak boson Compton length, and this mechanism also explains the color quantum numbers of physical leptons and quarks.

    The weak screening above weak scale could take place by a pair of left and right-handed neutrinos assignable to the monopole flux tubes associated with the quark and it would also give the needed additional color charge so that quarks would be color triplets and leptons color singlets.

  2. It is however possible to also have color octet and higher triality t=0 excitations of leptons and analogous excitations of quarks (see this). The particles with mass slightly above 2me would be analogs of pions, electropions as I have called them. Also muopions and taupions are predicted and there are experimental indications also for them (see this, this, and this), but they have been forgotten since they cannot exist in the standard model.
How to understand the darkness of electropions?
  1. The darkness of the leptopions and possibly of other leptomesons could make it possible to avoid the problems with the decay widths of weak bosons. But what could this darkness mean? The experiments of Blackman and others (see this) suggest that the irradiation of the brain with EEG frequencies has behavioral and physiological effects and that these effects are quantal and correspond to cyclotron transitions in a magnetic field of about 2BE/5, where BE is the Earth's magnetic field. This does not make sense in standard quantum theory since the value of the Planck constant is more than 10 orders of magnitude too small and the cyclotron energy would be much below the thermal energy. I have proposed that the Planck constant, or rather an effective Planck constant heff, has a spectrum and its value can be arbitrarily large.

    In the recent formulation of TGD involving the number theoretic vision the heff hierarchy follows as a prediction. A large value of heff would give rise to quantum coherent phases of ordinary matter at the magnetic/field body of the system and these phases would behave like dark matter in the sense that only particles with the same value of heff can appear in the vertices of the TGD analogs of Feynman diagrams.

  2. The natural guess is that the 511 keV particle is dark in this number theoretic sense. It would not be created in the decays of ordinary weak bosons unless they themselves are dark with the same value of heff. The second option is that leptomesons can appear only in the dark phase at quantum criticality associated with the situation in which the Coulomb wall can be overcome. Dark phases in this sense appear only at quantum criticality making possible long range quantum fluctuations and quantum coherence.
  3. For a long time I thought that darkness in the number theoretic sense could correspond to the darkness of galactic dark matter but now it seems that this is not the case (see this, this and this). Classically, galactic dark matter could correspond to the Kähler magnetic and volume energy of cosmic strings, which are 4-surfaces in M4× CP2 with 2-D M4 projection. One can of course ask whether quantum classical correspondence implies that the classical energy equals its fermionic counterpart, in which case these views of dark matter could be equivalent.

    The number theoretic darkness would however make itself visible also in cosmology. The transformation of ordinary particles to dark phases at the magnetic bodies, forced by the unavoidable increase of number theoretical complexity implying evolution, would reduce the amount of ordinary matter and this could explain why baryonic (and also leptonic) matter seems to gradually disappear during the cosmic evolution.
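The order-of-magnitude claim above, that with the ordinary Planck constant a cyclotron quantum in the field 2BE/5 lies far below thermal energy, is easy to check numerically. The Ca2+ ion and BE ≈ 0.5 gauss are assumptions of this sketch (Blackman's experiments involved calcium):

```python
# Back-of-the-envelope check: cyclotron quantum of Ca2+ in B = 2*B_E/5
# with the ordinary Planck constant, compared to thermal energy at 300 K.
import math

e  = 1.602176634e-19     # elementary charge [C]
h  = 6.62607015e-34      # Planck constant [J s]
kB = 1.380649e-23        # Boltzmann constant [J/K]
u  = 1.66053906660e-27   # atomic mass unit [kg]

B_end = 0.4 * 0.5e-4     # 2/5 of B_E ~ 0.5 gauss, in tesla (assumed value)
q, m  = 2 * e, 40 * u    # Ca2+ ion: charge 2e, mass ~40 u

f_c = q * B_end / (2 * math.pi * m)   # cyclotron frequency ~15 Hz (EEG range)
E_c = h * f_c                          # cyclotron energy quantum
ratio = (kB * 300) / E_c               # thermal energy / cyclotron quantum

assert 10 < f_c < 20       # frequency indeed in the EEG range
assert ratio > 1e10        # quantum more than 10 orders below thermal energy
```

So a quantal effect at these frequencies would require scaling h up by a factor of order 10^11 or more, which is the motivation for the heff hypothesis.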

To sum up, the recently observed isospin anomaly of strong interactions together with additional empirical support for the TGD view of color is rather encouraging. This hypothesis is testable without expensive accelerators already now. Only the readiness to challenge the belief that QCD is the final theory of strong interactions would be required and I am afraid that it takes time to reach this readiness. Two very different views of science are competing: the old-fashioned science in which anomalies were gold nuggets and the Big Science in which everything is understood if 98 percent is understood.

See the article The violation of isospin symmetry in strong interactions and .511 MeV anomaly: evidence for TGD view of quark color? or the chapter New Particle Physics Predicted by TGD: Part I.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, April 03, 2025

Do we need the Future Circular Collider and what should we study with it?

There are considerable pressures against building the Future Circular Collider in particle physics circles. From the Wiki page for the FCC (this) one learns that 3 colliders, FCC-hh, FCC-ee, and FCC-eh, corresponding to hadron-hadron, electron-positron and electron-hadron collisions, are planned. FCC-ee would be built first. The total cm energy for hadron-hadron collisions would be about 7 times higher than at the LHC.

What would be studied would be, for instance, dark matter particles, supersymmetric particles, and electroweak interactions at higher precision and at higher energies.

My opinion is that more money for empirical research cannot help since the basic problem is that theoretical research has been in a deep intellectual stagnation for decades and cannot provide new ideas to be tested. New ideas and theories have been systematically censored during the last decades, as I have learned during the 43 years after my thesis in 1982.

My thesis proposed a new view of gravitation and standard model interactions obtained by replacing string world sheets with 4-D space-time surfaces in embedding space H=M4×CP2 geometrizing standard model symmetries. This led to a hybrid of general and special relativities solving the difficulties of general relativity with the basic conservation laws.

The embedding space H=M4×CP2 for space-time surfaces, and therefore the predicted physics, is consistent with the standard model and unique from its mere mathematical existence. A deep connection between the geometric vision and the number theoretic vision (something totally new), leading to a generalization of Langlands duality, emerges in the 4-D situation. The theory is exactly solvable and there would be an enormous amount of theoretical and certainly also experimental work to be done but censorship prevents any progress (see this and this).

Interestingly, one of the basic predictions is a strong correlation between electroweak and strong interactions at the fundamental level (since the geometrization of fields implies that they all reduce to CP2 geometry). The recent totally unexpected finding of a large violation of isospin symmetry in strong interactions (see this and this) is consistent with the TGD prediction (see this). This suggests that the promising research direction is not the particle physicist's view of dark matter or SUSY, but testing whether the basic assumptions of QCD are really correct and whether the theory of strong interactions is really a gauge theory.

See TGD as it is towards end of 2024: part I and TGD as it is towards end of 2024: part II

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, April 02, 2025

Realization of a concept as a set of space-time surfaces

The space-time surfaces defined as roots of gº ...º gº f, where f is a prime polynomial and g(0,0)=(0,0) (here f is an analytic map H=M4×CP2→ C2 and g an analytic map C2→ C2), form a kind of ensemble of disjoint space-time surfaces. Abstraction means the formation of concepts and classically a concept is the set of its different instances. Could this union of disjoint space-time surfaces as roots represent a concept classically?

What comes to mind are biological systems consisting of cells: do they represent a concept of a cell? What about a population of organisms? What about an ensemble of elementary particles: could it represent the concept of, say, electrons?

  1. Holography= holomorphy principle would be essential for the realization of the geometric correlate of collective quantum coherence. Only the initial 3-surfaces defining the holographic data matter in holography. The 4-D tangent spaces defining the counterparts of initial velocities cannot be chosen freely. This would force a coherent synchronous motion. Also classical non-determinism would be present. Could it correspond to a piecewise constant Hamilton-Jacobi structure with a different structure assigned to different regions of the space-time surface?
  2. The Hamilton-Jacobi structure of all members of the ensemble formed by the roots of gº ...º gº f is the same so that they can be said to behave synchronously like a single quantum coherent system. Could the loss of quantum coherence mean splitting: pk roots forming a coherent structure would decompose to pk1 sets with different H-J structures, each containing pk-k1 roots. The cognitive ensemble, as a representation of a concept, would decompose to ensembles representing pk1 different concepts. Is continual splitting and fusion taking place? Could this conceptualization make possible a conceptualized memory: the image of the house would be represented by an ensemble of images of houses as a kind of artwork.
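The growth of the root ensemble under iteration of g can be illustrated in one complex variable (an assumption of this sketch; the text works with maps of C2): composing polynomials multiplies degrees, so k iterations of a degree-p map g on f give deg(f)·pk roots counted with multiplicity:

```python
# Sketch: polynomial composition multiplies degrees, so iterating g on f
# grows the root ensemble geometrically. Coefficient lists are low-degree
# first, e.g. [-1, 0, 1] means z^2 - 1.
def poly_mul(a, b):
    """Product of two polynomials."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_add(a, b):
    """Sum of two polynomials."""
    n = max(len(a), len(b))
    a = a + [0.0] * (n - len(a))
    b = b + [0.0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def poly_compose(g, f):
    """Coefficients of g(f(z)) via Horner-style accumulation."""
    out = [0.0]
    for c in reversed(g):
        out = poly_add(poly_mul(out, f), [c])
    while len(out) > 1 and out[-1] == 0.0:   # trim trailing zeros
        out.pop()
    return out

g = [0.0, 0.0, 1.0]   # g(w) = w^2, origin-preserving: g(0) = 0
f = [-1.0, 0.0, 1.0]  # f(z) = z^2 - 1, roots ±1

gf = poly_compose(g, f)
assert gf == [1.0, 0.0, -2.0, 0.0, 1.0]            # (z^2 - 1)^2
assert len(gf) - 1 == (len(g) - 1) * (len(f) - 1)  # deg(g∘f) = deg g * deg f

# iterating g twice: 2 * 2^2 = 8 roots with multiplicity
ggf = poly_compose(g, gf)
assert len(ggf) - 1 == 8
```

The splitting of a pk-member ensemble into sub-ensembles would then correspond to partitioning this root set, which the degree arithmetic leaves unconstrained.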
I have often enjoyed looking at a crop field in a mild summer wind. To me, the behaviour suggests quantum coherence.
  1. A crop field in the wind seems to behave like a single entity. Could the crop field correspond to an abstraction of the notion of crop as a set of its instances, realized as a set of space-time surfaces given by the roots of gº...º f? Also more general composites g1º g2º ...º gnº f, gi(0,0)=(0,0), are possible. The roots could also represent the notion of a crop field in wind as a collection of crops, each moving in the wind as a particular motion of the air around it.
  2. Do I create this abstraction as a conceptualization, a kind of thought bubble, or does the real crop field represent this abstraction? Could f correspond to the primary sensory perception and does cognition generate this set (not "in my head" but at my field body) as a hierarchy of iterations and an even more general set of g-composites? Different observers experience crop fields very differently, which would suggest that this is a realistic view.
  3. If this set represents the real crop field, there should also be a space-time surface representing the environment and the wind. Could wormhole contacts connect these surfaces, representing the concept and the environment, to a single coherent whole?

    The usual thinking is that the crops form uncorrelated systems and the wind as a motion of air causes the crops to move. The coherent motion would correspond to a collective mode in which the crops move in unison and synchronously. What creates this coherent motion? Could macroscopic quantum coherence at the level of the field body be the underlying reason in the TGD Universe?

  4. How should one describe the wind if one accepts that the crop field in wind itself represents the notion of crop in wind? Usually the wind is seen as an external force. The coherent motion correlates with the wind locally. What does this mean? How could one include the wind as a part of the system? The wind should affect the crops as roots of gº...º gº f. Each root should correspond to a specific crop affected locally by the wind. Or should one accept that the concept of a crop field in the wind is realized only at the level of cognition rather than at the level of reality?
See the article Classical non-determinism in relation to holography, memory and the realization of intentional action in the TGD Universe or the chapter Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life?

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.