https://matpitka.blogspot.com/2025/

Monday, April 07, 2025

Infinite primes, the notion of rational prime, and holography= holomorphy principle

The notion of infinite prime \cite{allb/visionc,infpc,infmotives} emerged from a repeated quantization of a supersymmetric arithmetic quantum field theory in which the many-fermion and many-boson states formed from the single-particle states at a given level give rise to free many-particle states at the next level. Also bound states of these states are included at the new level. There is a correspondence with rational functions as ratios R=P/Q of polynomials: an infinite prime can be interpreted as a prime rational function in the sense that P and Q have no common factors. The construction is possible for any coefficient field of the polynomials identified as the rationals or an extension of the rationals, call it E.

At a given level the simplest polynomials P and Q are products of monomials with roots in E, say rationals. Irreducible polynomials correspond to products of monomials with algebraic roots in the corresponding extension of rationals and define the counterparts of bound states, so that the notion of bound state would be purely number theoretic. The level of the hierarchy would be characterized by the number of variables of the rational functions.

Holography= holomorphy principle suggests that the hierarchy of infinite primes could be used to construct the functions f1: H→ C and f2: H→ C defining space-time surfaces as roots of f=(f1,f2). There is one hypercomplex coordinate and 3 complex coordinates so that the hierarchy for fi would have 4 levels. The functions g: C2→ C2 define a hierarchy of maps with respect to the functional composition º. One can identify the counterparts of primes with respect to º and it turns out that the notion of infinite prime generalizes.

The construction of infinite primes

Consider first the construction of infinite primes.

  1. Two integers m and n with no common prime factors define a rational m/n uniquely. Introduce the analog of a Fermi sea as the product X = ∏p p of all rational primes. Infinite primes are obtained as P= nX/r+ mr such that r=∏pk is a product of a finite number of primes pk, n is not divisible by any pk, and m has as factors only powers of some of the primes pk. The finite and infinite parts of the infinite prime correspond to the numerator and denominator of a rational n/m so that rationals and infinite primes can be identified. One can say that the rational for which n and m have no common factors is prime in this sense.

    One can interpret the primes pk dividing r as labels of fermions and r as fermions kicked out from the Fermi sea defined by X. The integers n and m are analogs of many-boson states. This construction generalizes also to the algebraic extensions E of rationals. A small sketch checking these conditions by machine is given after this list.

  2. One can generalize the construction to the second level of the hierarchy. At the second level one introduces a fermionic vacuum Y as the product of all finite and infinite primes of the first level. One can repeat the construction: now the analogs of the integers r, m and n are products of the infinite primes P(m/n,X)= nX/r+mr represented as infinite integers. The analog of r kicks some fermions, represented by infinite primes P(m/n,X)= nX/r+mr, out of the new vacuum. The infinite integers at the second level are analogous to rational functions P/Q, with the polynomials P and Q defined as products of the monomials P(m/n,X)= nX/r+mr taking the roles of n and m. These polynomials are not irreducible.

    One can however generalize and assume that they factor into monomials associated with the roots of some irreducible polynomial P (no rational roots) in some extension E of rationals. Hence also rational functions R(X)= P(X)/Q(X) with no common monomial factors emerge as analogs of primes for rational functions. The lowest level with rational roots would correspond to free many-fermion states and the irreducible polynomials to a hierarchy of fermionic bound states.

  3. The construction can be continued and one obtains an infinite hierarchy of infinite primes represented as rational functions R(X1,X2,..,Xn)= P(X1,X2,..,Xn)/Q(X1,X2,..,Xn), where P and Q have no common prime factors of level n-1. At the second level the polynomials are P(X,Y)= ∑k Pk(X)Yk. The roots Y of P(X,Y) are obtained as ordinary roots of a polynomial with coefficients Pk(X) depending on X and they define the factorization of P into monomials. At the third level the coefficients are irreducible polynomials depending on X and Y and the roots for Z are algebraic functions of X and Y.

    Physically this construction is analogous to a repeated second quantization of a number theoretic quantum field theory with bosons and fermions labelled/represented by primes. The simplest states at a given level are free many-particle states, while bound states correspond to irreducible polynomials. The notion of free state depends on the extension E of rationals used.
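The following is a minimal sketch, my own illustration rather than anything from the text, of the first-level conditions: an infinite prime P= nX/r+ mr is encoded by a triple (n, m, r) of ordinary integers satisfying the divisibility conditions stated above.

```python
# Sketch: check the conditions for a first-level infinite prime P = n*X/r + m*r,
# with X the formal product of all primes. The sample triples are hypothetical
# illustration data, not from the posting.
from sympy import primefactors

def is_first_level_infinite_prime(n: int, m: int, r: int) -> bool:
    pks = primefactors(r)
    square_free = all(r % (p * p) != 0 for p in pks)   # r = product of distinct p_k
    n_ok = all(n % p != 0 for p in pks)                # n divisible by no p_k
    m_ok = all(p in pks for p in primefactors(m))      # m built only from the p_k
    return square_free and n_ok and m_ok

# Fermions p_k = 2, 3 kicked out of the sea; bosonic integers n = 5, m = 12.
print(is_first_level_infinite_prime(5, 12, 6))   # True
print(is_first_level_infinite_prime(6, 12, 6))   # False: n shares the factor 2 with r
```

The fermionic interpretation is direct: the primes dividing r label the fermions kicked out of the sea, while n and m count the bosonic excitations.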

Infinite primes and holography= holomorphy principle

How does this relate to holography= holomorphy principle? One can consider two options for what the hierarchy of infinite primes could correspond to.

  1. One considers functions f=(f1,f2): H→ C2, with fi expressed in terms of rational functions of 3 complex coordinates and one hypercomplex coordinate. The general hypothesis is that the function pairs (f1,f2) defining the space-time surfaces as their roots (f1,f2)=(0,0) are analytic functions of the generalized complex coordinates of H with coefficients in some extension E of rationals.
  2. Now one has a pair of functions: (f1,f2) or (g1,g2) but infinite primes involve only a single function. One can solve the problem by using element-wise sum and product so that both factors would correspond to a hierarchy of infinite primes.
  3. One can also assign space-time surfaces to polynomial pairs (P1,P2) and to pairs of rational functions (R1,R2). One can therefore restrict the consideration to f1≡ f. f2 can be treated in the same way, but there are some physical motivations to ask whether f2 could define the counterpart of the cosmological constant and therefore could be more or less fixed in a given scale.
The allowance of rational functions forces us to ask whether zeros are enough or whether also poles are needed.
  1. Hitherto it has been assumed that only the roots f=0 matter. If one allows rational functions P/Q then also the poles, identifiable as the roots of Q, are important. In complex analysis the complex plane is compactified to the Riemann sphere CP1 so that the poles have a geometric interpretation: zeros correspond to, say, the North Pole and poles to the South Pole for a map C→ C interpreted as a map CP1→ CP1. Compactification would now mean the compactification C2→ CP1× CP1.

    For instance, the Riemann-Roch theorem (see this) is a statement about the properties of zeros and poles of meromorphic functions defined on Riemann surfaces. The so-called divisor is a representation of the poles and zeros as a formal sum over them. For instance, for meromorphic functions on a sphere the numbers of zeros and poles, with multiplicity taken into account, are the same (see the sketch after this list).

    The notion of the divisor would generalize to the level of space-time surfaces so that a divisor would be a union of space-time surfaces representing the zeros and poles of P and Q. Note that the inversion fi→ 1/fi maps zeros and poles to each other. It can be performed for f1 and f2 separately and the obvious question concerns the physical interpretation.

  2. Infinite primes would thus correspond to rational functions R= P/Q of several variables. In the recent case, one has one hypercomplex coordinate u, one complex coordinate w of M4, and 2 complex coordinates ξ1, ξ2 of CP2. They would correspond to the coordinates Xi and the hierarchy of infinite primes would have 4 levels. The order of the coordinates does not affect the rational function R(u,w,ξ1,ξ2), but the hypercomplex coordinate is naturally the first one. It seems that the order of the complex coordinates depends on the space-time region since not all complex coordinates can be solved in terms of the remaining coordinates. It can even happen that a coordinate does not appear in P or Q.

    The hypercomplex coordinate u is in a special position and one can ask whether rational functions of it make sense. Trigonometric functions and Fourier analysis look more natural.
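As a concrete illustration of the zero-pole balance invoked above (my own sketch, assuming only standard complex analysis, nothing TGD-specific): for a rational function R=P/Q on the Riemann sphere the zeros are the roots of P, the poles the roots of Q, and the point at infinity carries |deg P - deg Q| additional zeros or poles.

```python
# Sketch: zeros and poles of R = P/Q on the Riemann sphere balance when the
# point at infinity is included. The sample polynomials are arbitrary choices.
import numpy as np

P = [1, 0, -1]       # P(w) = w^2 - 1: zeros at w = +1, -1
Q = [1, 0, 0, -8]    # Q(w) = w^3 - 8: poles at the three cube roots of 8

zeros = list(np.roots(P))
poles = list(np.roots(Q))
deg_P, deg_Q = len(P) - 1, len(Q) - 1
if deg_P < deg_Q:
    zeros += ["infinity"] * (deg_Q - deg_P)   # R vanishes at infinity
elif deg_P > deg_Q:
    poles += ["infinity"] * (deg_P - deg_Q)   # R blows up at infinity

print(len(zeros), len(poles))   # 3 3: equal counts, as the divisor bookkeeping requires
```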

What could be the physical relationship between the space-time surfaces representing poles and zeros?

  1. Could zeros and poles relate to ZEO and the time reversal occurring in "big" state function reductions (BSFRs)? Could the time reversal change zeros to poles and vice versa and correspond to fi→ 1/fi inducing P/Q → Q/P? Are both zeros and poles present for a given arrow of time or only for one arrow of time? One can also ask whether complex conjugation could be involved with the time reversal occurring in BSFR (it would not be the same as time reflection T).

    For a meromorphic function, the numbers of poles and zeros are the same in a well-defined sense, so that the numbers of the corresponding space-time surfaces are the same. What could this mean physically? Could this relate to the conservation of fermion numbers? There would be two conserved fermion numbers corresponding to f1 and f2. Could they correspond to baryon and lepton number?

  2. P and Q would have no common polynomial (prime) factors. The zeros and poles of R, as zeros of P and Q, are represented as space-time surfaces. Could the zeros and poles correspond to matter and antimatter so that meromorphy would state that the numbers of particles and antiparticles are the same? Or do they correspond to the two fermionic vacuums assigned to the boundaries of the CD such that the vacuum associated with the passive boundary is what corresponds to quantum states in the 3-D sense?
  3. Could infinite primes have two representations: a representation as space-time surfaces in terms of holography= holomorphy principle and a representation as fermion states involving a 4-levelled hierarchy of second quantizations for both quarks and leptons? What could these 4 quantizations mean physically?
  4. Can the space-time surfaces defined by zeros and poles intersect each other? If BSFR permutes the two kinds of space-time surfaces, they should intersect at 3-surfaces defining holographic data. The failure of the exact classical determinism implies that the 4-surfaces are not identical.

Hierarchies of functional composites of g: C2→ C2

One can also consider rational functions g=(g1,g2) with gi=Pi/Qi: C2→ C2 defining abstraction hierarchies. Also in this case the elementwise product is possible, but functional composition º and the interpretation in terms of the formation of abstractions look more natural. Fractals are obtained as a special case. º is not commutative and it is not a priori clear whether the analogs of primes, prime decomposition, and the definition of rational functions exist.

  1. Prime decompositions for g with respect to º make sense and one can identify polynomials f=(f1,f2) which are primes in the sense that they do not allow a decomposition f= gº h. These primal space-time surfaces define the analogs of ground states. (In the one-variable case the existence of such a decomposition can be tested by machine; see the sketch after this list.)
  2. The notion of generalized rational makes sense. For ordinary infinite primes represented as P/Q, the polynomials P and Q do not have common prime polynomial factors. Now / is replaced with a functional division (f,g)→ fº g-1 instead of (f,g)→ f/g. In general, g-1 is a many-valued algebraic function. In the one-variable case for polynomials the inverse involves algebraic functions appearing in the expressions of the roots of the polynomial. This means a considerable generalization of the notion of infinite prime.
  3. One obtains the counterpart of the hierarchy of infinite primes. The analog of the product of infinite primes at a given level is the composite of prime g's. For ordinary infinite primes, irreducible polynomials as realizations of bound states replace the coefficient field E with its extension. The replacement of the rationals as a coefficient field with their extensions E does the same for the composites of g's. This gives a hierarchy similar to that of irreducible polynomials: now the hierarchy formed by rational functions with an increasing number of variables corresponds to the hierarchy of extensions of rationals.
  4. The conditions for zeros and poles are not affected since they reduce to corresponding conditions for gº f.
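In the one-variable polynomial case the question of whether f allows a decomposition f= gº h is algorithmically decidable, and sympy implements this as decompose. A minimal sketch (my own illustration; the TGD maps are two-component, so this is only the univariate analog):

```python
# Sketch: functional decomposition of univariate polynomials. decompose(f)
# returns a list [g, h, ...] with f = g(h(...)); a single-element list means
# f is "prime" with respect to functional composition.
from sympy import symbols
from sympy.polys.polytools import decompose

x = symbols('x')
print(decompose(x**4 + 2*x**2 + 7))   # [x**2 + 2*x + 7, x**2]: composite
print(decompose(x**3 + x + 1))        # [x**3 + x + 1]: no decomposition, "prime"
```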
See the article A more detailed view about the TGD counterpart of Langlands correspondence or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Sunday, April 06, 2025

More evidence for dark matter-like particles in Milky Way: TGD view of color as an explanation?

Sabine Hossenfelder told about a quite recent finding possibly related to dark matter (see this). "Anomalous ionization in the central molecular zone by sub-GeV dark matter" can be found in arXiv (see this). Here is the abstract of the article:

We demonstrate that the anomalous ionization rate observed in the Central Molecular Zone can be attributed to MeV dark matter annihilations into e+e- pairs for galactic dark matter profiles with slopes γ> 1. The low annihilation cross-sections required avoid cosmological constraints and imply no detectable inverse Compton, bremsstrahlung or synchrotron emissions in radio, X and gamma rays. The possible connection to the source of the unexplained 511 keV line emission in the Galactic Center suggests that both observations could be correlated and have a common origin.

I will try to summarize what I understood from Sabine's YouTube talk.

  1. It has been observed that more IR light than expected arrives from the Central Molecular Zone, where stars are formed. Hydrogen normally forms H2 molecules, whose vibrational excitations cannot explain the IR light. H3+ could give rise to the infrared light.
  2. There should be a mechanism leading to the formation of ionized H3 molecules. Electrons could cause the ionization, and the proposal is that dark particles in the MeV mass range could serve as the source of the ionizing electrons: two dark particles in this energy range would annihilate to electron-positron pairs and the electrons would ionize the H3 molecules.
  3. There indeed exists earlier evidence for gamma rays with energy 511 keV from the Milky Way center (see this and this). They could be generated in the annihilation of dark particles with mass slightly above an MeV to gamma pairs. This would happen in the collisions of these particles and would require that the dark particles are very nearly at rest.
TGD leads to a much simpler explanation of the findings in terms of particles whose mass is only slightly above twice the electron mass me=.511 MeV (see this). They would directly decay to electron-positron pairs.
  1. The empirical findings motivating this hypothesis emerged already in the seventies from the observation that in heavy ion collisions, with collision energy near the critical energy for overcoming the Coulomb wall, anomalous electron-positron pairs were observed with energy slightly more than twice the rest mass .5 MeV of the electron. In the standard model, the decay widths of weak bosons do not allow new particles in this mass range and this was probably the reason why the findings were forgotten.
  2. An essential role in the explanation is played by the TGD view of color symmetry and of the dynamics of strong interactions, which are both in some respects very different from the QCD view. I have described this view (see this), inspired by the quite recent finding of a large isospin breaking in the production of kaon pairs: the production rate for charged kaons is 18.4 per cent higher than for neutral kaons, challenging QCD. The explanation that comes to mind is that the color gauge coupling depends slightly on the electric charge of the quark besides the weak dependence on the p-adic mass scale of the quark (now u or d quark).
How does the TGD based view of color lead to this proposal?
  1. Color corresponds to color partial waves in CP2: a spectrum of colored spinor harmonics in H=M4× CP2 is predicted for both quarks and leptons. The color partial waves correlate with electroweak quantum numbers, unlike the observed color quantum numbers. This means a large isospin breaking (see this) at the fundamental level, where all classical gauge fields and the gravitational field are expressible in terms of H coordinates and their gradients and only four of them are needed by general coordinate invariance. One can imagine a mechanism which guarantees weak screening in scales longer than the weak boson Compton length, and this mechanism also explains the color quantum numbers of physical leptons and quarks.

    The weak screening above the weak scale could take place by a pair of left- and right-handed neutrinos assignable to the monopole flux tubes associated with the quark. It would also give the needed additional color charge so that quarks would be color triplets and leptons color singlets.

  2. It is however possible to also have color octet and higher triality t=0 excitations of leptons and analogous excitations of quarks (see this). The particles with mass slightly above 2me would be analogs of pions, electropions as I have called them. Also muopions and taupions are predicted and there are experimental indications for them as well (see this, this, and this), but these were forgotten since such particles cannot exist in the standard model.
How to understand the darkness of electropions?
  1. The darkness of the leptopions and possible other leptomesons could make it possible to avoid the problems with the decay widths of weak bosons. But what could this darkness mean? The experiments of Blackman and others (see this) suggest that the irradiation of the brain with EEG frequencies has behavioral and physiological effects and that these effects are quantal and correspond to cyclotron transitions in a magnetic field of about 2BE/5, where BE is the Earth's magnetic field. This does not make sense in standard quantum theory since the value of the Planck constant is more than 10 orders of magnitude too small and the cyclotron energy would be much below the thermal energy (a numerical illustration is given after this list). I have proposed that the Planck constant, or the effective Planck constant heff, has a spectrum and that its value can be arbitrarily large.

    In the recent formulation of TGD involving the number theoretic vision, the heff hierarchy follows as a prediction. A large value of heff would give rise to quantum coherent phases of the ordinary matter at the magnetic/field body of the system, and these phases would behave like dark matter in the sense that only particles with the same value of heff can appear in the vertices of the TGD analogs of Feynman diagrams.

  2. The natural guess is that the 511 keV particle is dark in this number theoretic sense. It would not be created in the decays of ordinary weak bosons unless they themselves are dark with the same value of heff. The second option is that leptomesons can appear only in the dark phase at quantum criticality associated with the situation in which the Coulomb wall can be overcome. Dark phases in this sense appear only at quantum criticality making possible long range quantum fluctuations and quantum coherence.
  3. For a long time I thought that darkness in the number theoretic sense could correspond to the darkness of the galactic dark matter, but now it seems that this is not the case (see this, this and this). Classically, galactic dark matter could correspond to the Kähler magnetic and volume energy of cosmic strings, which are 4-surfaces in M4× CP2 with a 2-D M4 projection. One can of course ask whether quantum classical correspondence implies that the classical energy equals its fermionic counterpart, in which case these views of dark matter could be equivalent.

    The number theoretic darkness would however make itself visible also in cosmology. The transformation of ordinary particles to dark phases at the magnetic bodies, forced by the unavoidable increase of number theoretical complexity implying evolution, would reduce the amount of ordinary matter and this could explain why baryonic (and also leptonic) matter seems to gradually disappear during the cosmic evolution.
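The order-of-magnitude claim above is easy to check. A minimal numerical sketch, assuming Blackman-type parameters (a Ca2+ ion and BE ~ 0.5 Gauss); the specific numbers are my illustration, not from the text:

```python
# Sketch: cyclotron frequency and energy for a Ca2+ ion in B = 2*B_E/5,
# compared to the thermal energy at body temperature.
import math

e, hbar, kB, u = 1.602e-19, 1.055e-34, 1.381e-23, 1.661e-27  # SI units

B = 0.4 * 0.5e-4          # 2*B_E/5, with B_E ~ 0.5 Gauss = 0.5e-4 Tesla
q, m = 2 * e, 40 * u      # charge and mass of Ca2+

f_c = q * B / (2 * math.pi * m)   # cyclotron frequency
E_c = hbar * 2 * math.pi * f_c    # cyclotron energy quantum for ordinary hbar
E_T = kB * 310                    # thermal energy at 310 K

print(f"f_c ~ {f_c:.1f} Hz")          # ~15 Hz, in the EEG range
print(f"E_c/E_T ~ {E_c/E_T:.1e}")     # ~2e-12: about 12 orders below thermal
```

With the ordinary Planck constant the cyclotron energy is indeed some 12 orders of magnitude below the thermal energy, which is what motivates the heff hierarchy.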

To sum up, the recently observed isospin anomaly of strong interactions together with additional empirical support for the TGD view of color is rather encouraging. This hypothesis is testable without expensive accelerators already now. Only the readiness to challenge the belief that QCD is the final theory of strong interactions would be required, and I am afraid that it takes time to reach this readiness. Two very different views of science are competing: the old-fashioned science in which anomalies were gold nuggets and the Big Science in which everything is understood if 98 percent is understood.

See the article The violation of isospin symmetry in strong interactions and .511 MeV anomaly: evidence for TGD view of quark color? or the chapter New Particle Physics Predicted by TGD: Part I.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, April 03, 2025

Do we need the Future Circular Collider and what should we study with it?

There are considerable pressures against building the Future Circular Collider in particle physics circles. From the Wiki page for the FCC (this) one learns that 3 colliders, FCC-hh, FCC-ee, and FCC-eh, corresponding to hadron-hadron, electron-positron and electron-hadron collisions, are planned. FCC-ee would be built first. The total cm energy for hadron-hadron collisions, about 100 TeV, would be roughly 7 times higher than at the LHC.

What would be studied are, for instance, dark matter particles, supersymmetric particles, and electroweak interactions at higher precision and at higher energies.

My opinion is that more money on empirical research cannot help since the basic problem is that theoretical research has been for decades in a deep intellectual stagnation and cannot provide new ideas to be tested. New ideas and theories have been systematically censored during the last decades as I have learned during the last 43 years after my thesis in 1982.

My thesis proposed a new view of gravitation and standard model interactions obtained by replacing string world sheets with 4-D space-time surfaces in embedding space H=M4×CP2 geometrizing standard model symmetries. This led to a hybrid of general and special relativities solving the difficulties of general relativity with the basic conservation laws.

The embedding space H=M4×CP2 for space-time surfaces, and therefore the predicted physics, is consistent with the standard model and unique from its mere mathematical existence. A deep connection between the geometric vision and the number theoretic vision (something totally new), leading to a generalization of Langlands duality, emerges in the 4-D situation. The theory is exactly solvable and there would be an enormous amount of theoretical and certainly also experimental work to be done, but censorship prevents any progress (see this and this).

Interestingly, one of the basic predictions is a strong correlation between electroweak and strong interactions at the fundamental level (since the geometrization of fields implies that they all reduce to CP2 geometry). The recent totally unexpected finding of a large violation of isospin symmetry in strong interactions (see this and this) is consistent with the TGD prediction (see this). This suggests that the promising research direction is not the particle physicist's view of dark matter or SUSY, but testing whether the basic assumptions of QCD are really correct and whether the theory of strong interactions is really a gauge theory.

See TGD as it is towards end of 2024: part I and TGD as it is towards end of 2024: part II.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, April 02, 2025

Realization of a concept as a set of space-time surfaces

The space-time surfaces defined as roots of gº ...º gº f, where f is a prime polynomial and g(0,0)=(0,0) (here f is an analytic map H=M4×CP2→ C2 and g an analytic map C2→ C2), form a kind of ensemble of disjoint space-time surfaces. Abstraction means the formation of concepts and classically a concept is the set of its different instances. Could this union of disjoint space-time surfaces as roots represent a concept classically?

What comes to mind are biological systems consisting of cells: do they represent a concept of a cell? What about a population of organisms? What about an ensemble of elementary particles: could it represent the concept of, say, electrons?

  1. Holography= holomorphy principle would be essential for the realization of the geometric correlate of collective quantum coherence. Only the initial 3-surfaces defining holographic data matter in holography. The 4-D tangent spaces defining the counterparts of initial velocities cannot be chosen freely. This would force a coherent synchronous motion. Also classical non-determinism would be present. Could it correspond to a piecewise constant Hamilton-Jacobi structure, with different structures assigned to different regions of the space-time surface?
  2. The Hamilton-Jacobi structure of all members of the ensemble formed by the roots of gº ...º gº f is the same, so that they can be said to behave synchronously like a single quantum coherent system. Could the loss of quantum coherence mean splitting: the pk roots forming a coherent structure would decompose into pk1 sets with different H-J structures, each containing pk-k1 roots? The cognitive ensemble, as a representation of a concept, would decompose into ensembles representing pk1 different concepts. Is continual splitting and fusion taking place? Could this conceptualization make possible a conceptualized memory: the image of the house would be represented by an ensemble of images of houses as a kind of artwork.
I have often enjoyed looking at a crop field in a mild summer wind. To me, the behaviour suggests quantum coherence.
  1. A crop field in the wind seems to behave like a single entity. Could the crop field correspond to an abstraction of the notion of crop as a set of its instances, realized as a set of space-time surfaces given as roots of gº....º f? Also more general composites g1º g2º ...º gnº f, gi(0,0)=(0,0), are possible. The roots could also represent the notion of a crop field in wind as a collection of crops, each moving in the wind as a particular motion of the air around it.
  2. Do I create this abstraction as a conceptualization, a kind of thought bubble, or does the real crop field represent this abstraction? Could f correspond to the primary sensory perception and does cognition generate this set (not "in my head" but at my field body) as a hierarchy of iterations and an even more general set of g-composites? Different observers experience crop fields very differently, which would suggest that this is a realistic view.
  3. If this set represents the real crop field, there should also be a space-time surface representing the environment and the wind. Could wormhole contacts connect these surfaces representing the concept and the environment to a single coherent whole?

    The usual thinking is that crops form uncorrelated systems and the wind as a motion of air causes the crops to move. The coherent motion would correspond to a collective mode in which the crops move in unison and synchronously. What creates this coherent motion? Could macroscopic quantum coherence at the level of the field body be the underlying reason in the TGD Universe?

  4. How to describe the wind if one accepts that the crop field in the wind itself represents the notion of crop in wind? Usually the wind is seen as an external force. The coherent motion correlates with the wind locally. What does this mean? How could one include the wind as a part of the system? The wind should affect the crops as roots of gº...º gº f. Each root should correspond to a specific crop affected locally by the wind. Or should one accept that the concept of the crop field in the wind is realized only at the level of cognition rather than at the level of reality?
See the article Classical non-determinism in relation to holography, memory and the realization of intentional action in the TGD Universe or the chapter Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life?

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, March 31, 2025

Classical non-determinism in relation to holography, memory and the realization of intentional action in the TGD Universe

Gary Ehlenberg sent a link to an interesting article with title "'Next-Level' Chaos Traces the True Limit of Predictability" (see this).

Holography, as it is realized in the TGD framework, offers an interesting point of view on the notion of classical chaos. This view is not considered in the article. In the TGD framework, there is also a close relation to the question of how intentions are realized as actions.

  1. Holography reduces the initial data at the fundamental level (space-times as surfaces) roughly by one half, and space-time surfaces as orbits of 3-surfaces identified as particle-like entities are analogous to Bohr orbits for which only initial positions or momenta can be fixed. This increases predictability dramatically (see this and this).
  2. Holography= holomorphy principle reduces the extremely nonlinear field equations of TGD to algebraic equations and one obtains minimal surfaces irrespective of the action principle as long as it is general coordinate invariant and involves only the induced geometry. The space-time surfaces are roots of pairs of polynomials or even analytic functions f= (f1,f2) of one hypercomplex coordinate and 3 complex coordinates of H=M4× CP2. Field equations are more like rules of logic than an axiom system. This implies an enormous simplification. Solutions are coded by the Taylor coefficients of f1 and f2 in an extension E of rationals, and for polynomials their number is finite (see this and this).

    One obtains new solutions as roots of maps gº f, where g: C2→ C2 is analytic. The iterations of g give rise to the analogs of Mandelbrot fractals and Julia sets so that in this sense classical chaos, or actually not chaos but complexity, emerges. For the iteration hierarchies P = gº g .... º f the complexity increases exponentially since the degree of P and the dimension of the corresponding algebraic extension increase exponentially (see the sketch after this list). The roots of the iterates can however be calculated explicitly. The interpretation could be as a classical geometric correlate for an abstraction hierarchy.

  3. Already 2-D minimal surfaces representable as soap films are non-deterministic: soap films spanned by frames are not unique. Now the frames would be represented by 3-surfaces and possibly lower-D surfaces representing holographic data. The second, passive, light-like boundary of the causal diamond CD is the basic carrier of holographic data. Also the light-like partonic orbits as interfaces between Minkowskian and Euclidean space-time regions carry holographic data: they serve as building bricks of elementary particles. At the 3-D frames the minimal surface property fails, and the field equations depend on the classical action and express the conservation laws for the isometry charges of the action in question.

    This is expected to give rise to a finite classical non-determinism. It would be essential for the quantum realization of conscious memory since small state function reductions (SSFRs) do not destroy the classical information about the previous SSFRs (see this). The information is carried by the loci of classical non-determinism having as a counterpart quantal non-determinism assignable to conscious experience.
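A one-variable sketch of the exponential degree growth under iteration, with the Mandelbrot map g(z)=z2+c as the standard example (my own illustration; the TGD maps are C2→ C2):

```python
# Sketch: the n-th functional iterate of g(z) = z^2 + c has degree 2^n, so the
# degree and the dimension of the associated algebraic extension grow
# exponentially with the number of iterations.
from sympy import symbols, expand, degree

z = symbols('z')
g = z**2 - 1     # sample parameter c = -1; any rational value works

iterate = z
for n in range(1, 6):
    iterate = expand(g.subs(z, iterate))
    print(n, degree(iterate, z))   # 2, 4, 8, 16, 32
```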

How could classical non-determinism relate to p-adic non-determinism and to the realization of intentions as the transformation of intentions, represented as p-adic space-time surfaces, to real space-time surfaces?
  1. In adelic physics real and p-adic space-time surfaces are assumed to satisfy essentially the same algebraic field equations. The p-adic and real Taylor coefficients of f=(f1,f2) might however relate by canonical identification to guarantee continuous correspondence (see this).

    The conjecture is that the ramified primes of polynomials (and their generalization) correspond to the preferred p-adic primes appearing in p-adic mass calculations and satisfying the p-adic length scale hypothesis (see this and this).

  2. What is the relationship between the classical non-determinism and p-adic non-determinism, tentatively identified as a correlate for the non-determinism of imagination and intentionality? Could one think that in the p-adic context intentions classically correspond to the solutions of field equations with the polynomial coefficients having values in the extension E but being pseudo-constants, constant only inside regions of the space-time surface?

    Is it also possible to obtain real solutions with piecewise constant Taylor coefficients of f? Is it possible to glue together solutions defined by different f? Does this pose additional conditions on the pseudo-constants? If so, realizable intentions correspond to p-adic space-time surfaces, which also have real counterparts. Also real space-time surfaces with Taylor coefficients of f which are constant inside a given space-time region but change at the interfaces of two regions would be possible.

    A concrete guess is that the gluing of solutions with different choices of f can take place along light-like surfaces since in classical field theories light-like surfaces are seats of non-determinism. Partonic orbits are such surfaces and a wormhole contact could define one possible mechanism for gluing together two Minkowskian space-time sheets defined by different choices of f.

  3. Could the realizable intentions have as quantum counterparts sequences of small state function reductions (SSFRs)? What could the attempt to realize an intention mean at the quantum level? Could only a finite number of SSFRs be possible for a given intention? After that a big state function reduction (BSFR) would take place and reverse the arrow of time: the sequence of SSFRs as a self would "die" or fall asleep. After the second BSFR (wake-up) one would have a new trial for the realization of the intention. Since the extension of rationals increases in size, the next trial could contain more SSFRs, and the updated holographic data could make the life of the new self, as an attempt to realize a slightly modified intention, longer.

    The hierarchy of gº g...º gº f would give exponentially increasing complexity and dimension of the extension of rationals if g(0,0)= (0,0), so that also f=0 would define one of the roots. Reflective levels would make it easier to realize the intentions by increasing exponentially the number of roots, which are in fact disjoint space-time surfaces. One obtains a disjoint union of space-time surfaces as roots. f is a prime in this sense if it does not allow a decomposition f = gº h: fundamental space-time surfaces and intentions would be primes in this sense.

See the article Classical non-determinism in relation to holography, memory and the realization of intentional action in the TGD Universe or the chapter Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life?

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Friday, March 28, 2025

Revolution in standard model: is the isospin symmetry of strong interactions really violated at quark level?

Phys.org's popular article (see this) tells about a rather surprising finding related to strong interactions. In QCD isospin symmetry is assumed, and the strong interactions in heavy ion collisions should produce equal amounts of charged and neutral kaons. The anomaly was discovered in late 2023 by Wojciech Brylinski, who was analyzing data from the NA61/SHINE collaboration at CERN for his thesis: there was a strikingly large imbalance between charged and neutral kaons in argon-scandium collisions. Brylinski found that, instead of being produced in roughly equal numbers, charged kaons were produced 18.4 percent more often than neutral kaons. Now this anomaly has been confirmed. NA61/SHINE's data on collisions at 11.9 GeV per nucleon pair (see this) contradicts the hypothesis of equal yields with a 4.7σ significance.

Unless electromagnetic interactions violating the isospin symmetry manage to cause the asymmetry, the key assumption of QCD, that electroweak and color interactions are independent, is wrong. Needless to say, this would mean a revolution in the standard model. Here is the abstract of the article.

Strong interactions preserve an approximate isospin symmetry between up (u) and down (d) quarks, part of the more general flavor symmetry. In the case of K meson production, if this isospin symmetry were exact, it would result in equal numbers of charged (K+ and K-) and neutral (K0 and K0bar) mesons in the final state. Here, we report results on the relative abundance of charged over neutral K meson production in argon and scandium nuclei collisions at a center-of-mass energy of 11.9 GeV per nucleon pair.

We find that the production of K+ and K- mesons at mid-rapidity is (18.4 +/- 6.1) per cent higher than that of the neutral K mesons. Although with large uncertainties, earlier data on nucleus-nucleus collisions in the collision center-of-mass energy range 2.6 ≤ √sNN ≤ 200 GeV are consistent with the present result. Using well-established models for hadron production, we demonstrate that known isospin-symmetry breaking effects and the initial nuclei containing more neutrons than protons lead only to a small (few percent) deviation of the charged-to-neutral kaon ratio from unity at high energies. Thus, they cannot explain the measurements.

The significance of the flavor-symmetry violation beyond the known effects is 4.7σ when the compilation of world data with uncertainties quoted by the experiments is used. New systematic, high-precision measurements and theoretical efforts are needed to establish the origin of the observed large isospin-symmetry breaking.

The basic prediction of TGD is that color and electroweak interactions are strongly correlated. Could one understand the anomaly in the TGD framework?

In the following the key ideas of TGD are summarized, the differences between TGD based and standard model descriptions of standard model interactions are discussed, and a simple quantitative model for the isospin anomaly is considered.

Basic ideas of TGD

Consider first the basic ideas of TGD relevant to the model of the isospin anomaly.

  1. At the fundamental level, classical electroweak, color and gravitational fields are all geometrized (see this and this). Once the space-time as a 4-surface in H=M4× CP2 is known, all these classical fields are fixed. This choice is unique also from the existence of the twistor lift of the theory: M4 and CP2 are the only 4-D spaces allowing a twistor space with Kähler structure. Also the number theoretical vision, involving what I call M8-H duality (see this), allows only H.

    By general coordinate invariance at the level of H, 4 coordinates of H fix these classical fields so that very strong correlations between classical fields emerge. In particular, electroweak and color fields are strongly correlated. This means a profound difference from QCD.

  2. The notion of a particle generalizes at the topological and geometric level. Point-like particles are replaced by 3-D surfaces. Fermionic degrees of freedom correspond to second quantized free spinor fields of H restricted to the space-time surface. Only leptons and quarks are predicted and the family replication phenomenon is understood in terms of the genus of a partonic 2-surface (see this). The light-like orbit of the partonic 2-surface carries fermion and antifermion lines identified as boundaries of string world sheets in the interior of the space-time surface. In the simplest model, one can assign gauge boson quantum numbers to fermion-antifermion pairs and the quark model of hadrons generalizes.
The construction of Quantum TGD relies on two complementary visions of physics: physics as geometry and physics as number theory.
  1. The construction of quantum TGD as a geometrization of physics leads to the notion of the World of Classical Worlds (WCW) consisting of space-time surfaces in H obeying holography, necessary for getting rid of the path integral plagued by divergences. Holography means that 3-D data - a 3-surface - fixes the space-time surface as an analog of a Bohr orbit for 3-D particles, so that in geometric degrees of freedom TGD is essentially wave mechanics for 3-D particles.

    WCW spinors correspond to Fock states for the second quantized fermions of H and gamma matrices are super generators for the infinite-D symmetries of WCW. A huge generalization of conformal symmetries of string models and symplectic symmetries for H is involved.

    Conformal symmetries emerge from the holography= holomorphy vision leading to an exact solvability of classical TGD. Space-time surfaces are roots of pairs f=(f1,f2) of analytic functions H→ C2 of one hypercomplex coordinate and 3 complex coordinates of H. The field equations are extremely nonlinear partial differential equations but reduce to purely algebraic equations. As long as the classical action is general coordinate invariant and depends only on the induced fields, the space-time surfaces are minimal surfaces irrespective of the choice of the action. The maps g=(g1,g2): C2→ C2 act as dynamical symmetries. The hierarchies of polynomials in extensions E of rationals define hierarchies of solutions of field equations.

  2. The number theoretical vision emerged first from the p-adic mass calculations leading to excellent predictions for particle masses. The basic assumptions were conformal invariance and p-adic thermodynamics allowing one to calculate mass squared as a p-adic thermal expectation mapped to a real mass squared by canonical identification (see this). The p-adic length scale hypothesis, stating that physically preferred primes satisfy p∼ 2k, was an essential assumption. In particular, Mersenne primes and Gaussian Mersenne primes satisfy this condition. Also powers of small primes q>2, in particular q=3, can be considered in the p-adic length scale hypothesis (see this).

    Both the pairs (f1,f2) and (g1,g2) allow one to identify candidates for p-adic primes as analogs of ramified primes associated with algebraic extensions of E, in particular those of rationals.

How does the TGD view of standard model interactions differ from the standard model view?

The TGD view of standard model interactions differs in several respects from the standard model view.

  1. In TGD, elementary particles correspond to closed monopole flux tubes as analogs of hadronic strings connecting two Minkowskian space-time sheets by Euclidean wormhole contacts. The light-like orbits of the wormhole throats (partonic orbits) carry fermions and antifermions at light-like curves located at these light-like 3-surfaces, which define interfaces between Minkowskian space-time regions and Euclidean regions identified as deformed CP2 type extremals.
  2. The basic difference at the level of H spinor fields is that color quantum numbers are not spin-like but are replaced with color partial waves in CP2. Color degrees of freedom are analogous to the rotational degrees of freedom of a rigid body. An infinite number of color partial waves emerges for both quarks and leptons. In TGD, color and electroweak degrees of freedom are strongly correlated as is also clear from the fact that color symmetries correspond to the non-broken symmetries as isometries of CP2 and electroweak symmetries correspond to the holonomies of CP2, which are automatically broken gauge symmetries.

    The spectrum of color partial waves is different for U and D type quarks and for charged leptons and neutrinos. The triality of the partial wave is zero for leptons and 1 resp. -1 for quarks resp. antiquarks. At the level of fundamental fermions, which do not correspond as such to fermions as elementary particles, there is a strong violation of isospin symmetry.

The physical states are constructed using p-adic thermodynamics (see this and this) for the scaling generator L0 of the conformal symmetries extended to the space-time level and involve the action of Kac-Moody type algebras. The basic challenge of the state construction is to obtain physical states with correct color quantum numbers.
  1. A general irrep of SU(3) is labelled by a pair (p,q) of non-negative integers, where p resp. q corresponds intuitively to the number of quarks resp. antiquarks. The dimension of the representation is d(p,q)= (1/2)(p+1)(q+1)(p+q+2). (The dimensions quoted below are checked in the sketch following this list.)

    The spinor modes assignable to the left- and right-handed neutrinos correspond to representations of the color group of type (p,p), and only the right-handed neutrino allows the singlet (0,0) as a covariantly constant CP2 spinor mode. (1,1) corresponds to the octet 8. Charged leptons allow representations of type (3+p,p): p=0 corresponds to the decuplet 10. Note that (0,3) corresponds to 10bar.

    Quarks correspond to irreps obtained from those of leptons by adding one quark, that is by replacing (p+3,p) with (p+4,p) (p=0 gives d=15) or (p,p) with (p+1,p) (p=1 gives d=15). Antiquarks are obtained by replacing (p,p+3) with (p,p+4) and (p,p) with (p,p+1).

  2. Physical leptons (quarks) are color singlets (triplets). One can imagine two ways to achieve this.

    Option I: The conformal generators act on the ground state defined by the spinor harmonic of H. Could the tensor product of the conformal generators with spinor modes give a color singlet state for leptons and triplet state for quarks? The constraint that Kac-Moody type generators annihilate the physical states, realizing conformal invariance, might pose severe difficulties.

    In fact, TGD leads to the proposal that there is a hierarchical symmetry breaking for conformal half-algebras containing a hierarchy of isomorphic sub-algebras with conformal weights coming as multiples of the weights of the entire algebra. This would transform the gauge symmetry of the subalgebra with weights below a given maximal weight into a physical symmetry.

    Option II: The proposal is that the wormhole throats also contain pairs of left- and right-handed neutrinos guaranteeing that the total electroweak quantum numbers of the string-like closed monopole flux tube representing the hadron vanish. This would make the weak interactions short-ranged, with the range determined by the length of the string-like object.

    One must study the tensor products of the νLνR pair states and their conjugates with the leptonic (quark) spinor harmonic to see whether it is possible to obtain singlet (triplet) states. The tensor product of a neutrino octet with a neutrino type spinor contains a color singlet. The tensor product 8⊗ 8 = 1+8A+8S+10+10bar+27 contains 10bar, and its tensor product with the 10 for the quark contains a color triplet.
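A few lines suffice to check the dimensions and trialities quoted above from the stated formula (my own check, nothing beyond standard SU(3) representation theory):

```python
# Sketch: dimensions d(p,q) = (1/2)(p+1)(q+1)(p+q+2) and trialities (p-q) mod 3
# of the SU(3) irreps mentioned in the text.
def d(p: int, q: int) -> int:
    return (p + 1) * (q + 1) * (p + q + 2) // 2

def triality(p: int, q: int) -> int:
    return (p - q) % 3

for p, q in [(0, 0), (1, 1), (3, 0), (0, 3), (4, 0), (2, 1)]:
    print(f"({p},{q}): d = {d(p, q)}, triality = {triality(p, q)}")
# (0,0): d=1, (1,1): d=8, (3,0): d=10, (0,3): d=10 (the 10bar),
# (4,0): d=15 and (2,1): d=15, both with triality 1 as required for quarks.
```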

Number theoretic vision is highly relevant for the model of isospin anomaly.

  1. The p-adic length scale hypothesis can be applied to quarks. The empirical estimates for the masses of u and d type current quarks vary in a wide range and have become smaller over the years. One estimate (see this) is that the u quark has a mass in the range 1.7-3.3 MeV and the d quark has a mass in the range 4.1-5.8 MeV. The estimate represented in Wikipedia (see this) is consistent with this.

    The p-adic length scale hypothesis suggests that the p-adic mass scales satisfy m(d)/m(u)=2 so that p(d)/p(u)=1/4 and k(d)= k(u)-2. For the electron the p-adic mass scale corresponds to the Mersenne prime M127= 2127-1 with k(e)=127. m(u)∼ 4me suggests k(u)=127-4=123 and k(d)=k(u)-2= 121. (These numbers are checked in the sketch after this list.)

  2. The number theoretic vision (see this) implies that coupling constant evolution is discretized and the values of coupling parameters correspond to extensions of rationals characterizing the classical solutions of field equations as roots (f1,f2)=(0,0). This conforms with the general vision that the TGD Universe is quantum critical. The quantum criticality conforms with the generalized conformal symmetries consistent with the holography= holomorphy vision. p-Adic primes are proposed to correspond to the ramified primes associated with the polynomial pairs, and for rational primes one obtains ordinary p-adic primes assignable to ordinary integers.
  3. Since u and d quarks correspond to different p-adic length scales, they must correspond to different p-adic primes and presumably also to different extensions of rationals. Therefore the color couplings of a gluon to a u quark pair and a d quark pair are different. This would imply a violation of the isospin symmetry of strong interactions. One expects that the gluon coupling strength αs depends on the p-adic length scale of the quark, and the guess, motivated by QFTs, is that the dependence is logarithmic.
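A back-of-envelope check of the p-adic mass scale arithmetic above, assuming the standard p-adic relation m ∝ 2^(-k/2), so that reducing k by 2 doubles the mass scale:

```python
# Sketch: p-adic mass scales from m ~ m_e * 2^((k_e - k)/2) with k_e = 127.
me, k_e = 0.511, 127      # electron mass in MeV and its p-adic scale k(e)

def mass_scale(k: int) -> float:
    return me * 2 ** ((k_e - k) / 2)

k_u, k_d = 123, 121       # k(u) = 127 - 4 and k(d) = k(u) - 2 as in the text
print(f"m(u) ~ {mass_scale(k_u):.2f} MeV")   # ~2.0 MeV, inside the 1.7-3.3 MeV range
print(f"m(d) ~ {mass_scale(k_d):.2f} MeV")   # ~4.1 MeV, inside the 4.1-5.8 MeV range
```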

How could one understand the isospin violation of strong interactions?

  1. The formation of K+= u sbar and K0= d sbar involves strong interaction and therefore an exchange of gluons between u and s for K+ and between d and s for K0. The emission vertex is proportional to αs. In TGD, the gluon corresponds to a superposition of u ubar and d dbar pairs and also of pairs of higher quark generations. Only the u ubar component couples to K+ and only the d dbar component to K0 in the vertex. The gluon exchange vertex is analogous to a vertex for the annihilation of a gluon to a fermion pair, and the different p-adic length scales for u and d imply that the analog of the QCD coupling strength αs is different for u ubar and d dbar.
  2. For general coupling constant evolution the dependence on the length scale is logarithmic. The amplitude for the exchange of a gluon, characterized by αs, should be larger for the uug vertex than for the ddg vertex. The p-adic length scale for u should be by a factor of 2 longer than for d. If the coupling constant is proportional to the logarithm, one has αs ∝ log2(p(q))= k(q). This would give for the ratio Δ gs/gs(u)= (gs(u)-gs(d))/gs(u) the estimate Δ gs/gs(u) ≃ 2(k(u)-k(d))/k(u)= 4/123 ∼ 3.3 percent. For αs this would give Δαs/αs ∼ 6.6 per cent. This is roughly a third of the empirical result 18.4 percent. k(u)-k(d)= 2 → 6, giving k(d)= 123-6=117, would produce a better result but give m(d)∼ 16 MeV. This looks non-realistic.
  3. gs can also depend on the em charge of the quark. The dependence must be very weak and a logarithmic dependence is suggestive. The dependence can be only on the square of the em charge. What is wanted is the replacement k(u)-k(d) → Δ keff= keff(u)-keff(d)= k(u)-k(d)+4. The values of x(q)= (3Qem(q))2 for U resp. D type quarks are x(q)= 4=22 resp. x(q)= 1 and therefore powers of 2. The simplest guess is that gs is of the form gs(q) ∝ log2(x(q) × 2k(q)). This gives keff(u)=k(u)+2 and keff(d)= k(d), giving keff(u)-keff(d)= k(u)-k(d)+2=4. This would predict (gs(u)-gs(d))/gs(d)∼ 6.6 percent and (αs(u)-αs(d))/αs(d)∼ 13.2 percent. This is about 71 per cent of the empirical value 18.4 percent. The rather artificial dependence x(q)= (3Qem(q))4 would give 19.2 percent. (The arithmetic is collected in the sketch after this list.)
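The arithmetic of these estimates can be collected into a few lines. This is my own reconstruction of the scaling rule implicit in the quoted numbers (Δgs/gs ≃ 2Δk/k(u) and Δαs/αs ≃ 2Δgs/gs since αs ∝ gs2), not an independent derivation:

```python
# Sketch: the quoted percentages as a function of the effective k difference.
k_u = 123

def ratios(delta_k: int) -> tuple:
    dg = 2 * delta_k / k_u           # relative difference of g_s
    return 100 * dg, 200 * dg        # in percent, for g_s and alpha_s

for dk in (2, 4, 6):
    dg, da = ratios(dk)
    print(f"delta_k = {dk}: d(g_s) ~ {dg:.1f}%, d(alpha_s) ~ {da:.1f}%")
# delta_k = 2: ~3.3% and ~6.5%; delta_k = 4: ~6.5% and ~13.0%;
# delta_k = 6: ~9.8% and ~19.5%, to be compared with the measured 18.4%.
```

The small differences from the figures in the text come from rounding.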

See the article Is the isospin symmetry of strong interactions violated at quark level? or the chapter New Particle Physics Predicted by TGD: Part I.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, March 27, 2025

Oxygen discovered in the most distant known galaxy

Oxygen has been discovered in the most distant known galaxy, at a distance corresponding to the age of 300 million years for the Universe (see the popular article). There are two articles related to the finding (see this and this). Researchers had thought that at that time the Universe was still too young to have galaxies ripe with heavy elements. However, the two ALMA studies indicate JADES-GS-z14-0 has about 10 times more heavy elements, in particular oxygen, than expected. This does not conform with the standard story about the evolution of stars and the view of the formation of galaxies.

The conservative view (see this) is that everything conforms with the expectations: the heavy elements have been produced in very early massive galaxies, showing up as red dots in JWST data and containing very massive and short-lived stars. The problem is that the origin of these very massive, very early galaxies is far from being understood. The proponents of the standard cosmology are desperately defending it against the empirical findings challenging it.

My own view is that the various anomalies leave only one conclusion: the views of astrophysics, galaxies, and cosmology based on the general relativistic notion of space-time are badly in need of updating. TGD suggests a dramatic modification of the notion of space-time leading to a new view of cosmic evolution and of galactic physics and astrophysics. Cosmic string like objects, unstable against thickening to monopole flux tubes, would be relevant for physics in all scales and also for the formation of galaxies, stars and even planets (see for instance this, this and this).

The zero energy ontology (ZEO), forced by TGD and solving the basic problem of quantum measurement theory, predicts quantum coherence in arbitrarily long scales and that the arrow of time changes in "big" state function reductions (BSFRs) as counterparts of the ordinary SFRs. ZEO could explain stars and galaxies older than the Universe and could be highly relevant for understanding the findings of JWST. It could also allow us to understand why highly evolved galaxies appear in the very early Universe: the evolutionary age of a galaxy could be much longer than the usual age due to the living back and forth in geometric time.

In the following I will discuss only the possible role of the TGD based view of stars in attempts to understand the findings. Of course, the recent view of stellar evolution is regarded as more or less final. There are however numerous anomalies challenging it (see this). Could the recent findings mean an additional challenge for the model? The TGD based view of space-time suggests a rather radical view of the stellar evolution motivated by numerous anomalies of the standard model.

  1. Nuclei would be formed, not in the stellar cores, but at the surfaces of stars, covered by monopole flux tubes which give rise to what I call M89 nuclear strings (see this). Also ordinary nuclei would be monopole flux tubes containing nucleons (see this). The monopole flux tubes carrying M89 nuclei and connecting the Sun to the galactic nucleus or blackhole could have time independent dynamics in a good approximation.

    M89 monopole flux tubes would decay by reconnection to flux loops and M89 nucleons would decay to ordinary nucleons by a process that I call p-adic cooling (see this and this). In this cascade-like process the p-adic prime characterizing the nucleon, near to a power of 2, would gradually increase and the mass scale of the M89 hadrons would be reduced octave by octave, eventually reaching the M107 mass scale which corresponds to ordinary nucleons. This process would liberate energy and also give rise to anomalous gamma rays with an energy range extending to TeV energies: these gamma rays indeed show up as anomalies. It would also generate the solar wind.

    M89 nuclei could decay to M107 nuclei or to M107 nucleons, which could fuse to dark M107 nuclei by dark fusion and transform to ordinary nuclei liberating almost all of the ordinary nuclear binding energy. At the surface of the star a slowly evolving equilibrium would emerge and give rise to the aging of the star. The abundances of various atoms would depend on the age. The difference with respect to the standard model would be that the nuclei at the surface of the Sun would not originate from the solar core and that hot fusion would be replaced with dark fusion, explaining "cold fusion" (see this).

  2. The absorption line spectrum of the star is determined by the surface temperature of the star (see this). The nearby environment absorbs part of the radiation. The surface temperature and the metallicity of the star, now the metallicity of its surface, can be deduced from its spectrum.
  3. The TGD view differs from the standard picture since the nuclei are not endlessly recirculated via the stellar cores but are produced at the surfaces of the stars from M89 nuclei. Nuclei from the remnants of earlier stars can end up on the surface of new stars, but how important this contribution is remains unclear.
  4. The finding that the very early Universe contains high metallicity stars is consistent with the TGD view. They could be massive stars believed to have existed in very massive galaxies in the very early Universe. The TGD based model should be able to explain stellar generations and also the empirical absence of population III stars, the hypothetical first generation stars containing mostly hydrogen and helium. A possible explanation is that dark fusion produces also heavier elements so that they are present from the very beginning. This would also explain the recent evidence for high abundances of heavier elements. Note that population II stars are old and metal-poor, whereas population I stars are relatively young and metal-rich.

    In the standard model, young stars are identified with later stellar generations and have high metallicity due to the metals produced by fusion in the cores of earlier generations. The initial composition of the core would be determined by the abundances of the metals produced in and dispersed by the supernova explosions of the earlier star generations.

    In the TGD framework, one is forced to challenge the notion of stellar generations. Could the metallicity at the surface of a young star always be high and decrease during aging, so that old stars with a low metallicity would have evolved from stars with a higher metallicity? Why would metallicity be reduced with aging? The gravitational binding energy is larger for heavier nuclei. Could the lighter nuclei remain near the surface and the heavier nuclei sink towards the core, as in the case of the Earth?
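The octave counting behind the p-adic cooling cascade mentioned in item 1 is simple arithmetic. The following minimal Python sketch assumes only the standard p-adic mass scale behavior m(k) ∝ 2^(-k/2); the normalization to 1 at k=89 is arbitrary.

```python
# Octave-by-octave p-adic cooling from M89 (k=89) to M107 (k=107): a sketch.
# Assumes the p-adic mass scale m(k) ~ 2^(-k/2), so each step k -> k+2
# reduces the mass scale by one octave.
scale = 1.0                    # mass scale normalized to 1 at k = 89
for k in range(89, 108, 2):
    print(k, scale)
    scale /= 2
# Total reduction 2^((107-89)/2) = 2^9 = 512: the ratio between the M89
# and M107 hadronic mass scales quoted in the text.
```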

See the article Some Solar Mysteries or the chapter with the same title.

Sunday, March 23, 2025

p-Adic length scale hypothesis and Mandelbrot fractals

This post (see this) is a continuation of the previous one, which suggested a definition of generalized p-adic numbers in which the functional powers of a map g1: C2→ C2, with g1 a prime polynomial Pp of prime degree p with coefficients in an extension E of rationals, define the analogs of the powers of a p-adic prime. For simplicity, one can restrict E to rationals. The question was whether it might be possible to understand the p-adic primes satisfying the p-adic length scale hypothesis as ramified primes appearing as divisors of the discriminant of the iterate of g1=P.

Generalized p-adic numbers as such form a very large structure, and the systems satisfying the p-adic length scale hypothesis should be physically and mathematically special. Consider the following assumptions.

  1. Consider generalized p-adic primes restricted to the case in which f2 is not affected in the iteration, so that one has g=(g1,Id) with g1=g1(f1). This would conform with the hypothesis that f2 defines the analog of a slowly varying cosmological constant. If one assumes that the small prime corresponds to q=2, the iteration reduces to the iteration appearing in the construction of Mandelbrot fractals and Julia sets. If one assumes g1=g1(f1,f2), f2 defines the analog of the complex parameter appearing in the definition of Mandelbrot fractals. The values of f2 for which the iteration remains bounded would correspond to the Mandelbrot set, which has a fractal boundary.
  2. For the generalized p-adic numbers one can restrict the consideration to the mere functional powers g1°n as analogs of the powers p^n. This would be a sequence of iterates as analogs of abstractions and suggests the condition g1(0)=0.
  3. The physically interesting polynomials g1 should have special properties. One possibility is that for q=2 the coefficients of the simplest polynomials make sense in the finite field F2, so that the polynomials P2(z ≡ f1, ε) = z^2 + εz = z(z+ε), ε = ±1, are of special interest. For q>2 the coefficients could be analogous to the elements of the finite field Fq represented as phases exp(i2πk/q).
Consider now what these premises imply.
  1. Quite generally, the roots of the iterate g1°n are given by R(n) = g1°(-n)(0). g1(0)=0 implies that the set Rn of roots at level n is obtained as Rn = Rn(new) ∪ Rn-1, where Rn(new) consists of the new roots emerging at level n. Each step reproduces the q^(n-1) roots of the previous level and gives (q-1)×q^(n-1) new roots, so that for q=2 the numbers of old and new roots are equal.
  2. It is possible to solve analytically the roots for the iterates of polynomials of degree 2 or 3. Hence for q=2 and q=3 (there is evidence for the 3-adic length scale hypothesis) the inverse of g1 can be found analytically. The roots at level n are obtained by solving the equation g1(rn) = rn-1,k for all roots rn-1,k at level n-1. Each root in Rn-1(new) gives q new roots in Rn(new) (see the numerical sketch after this list).
  3. For q=2, the iteration would proceed as follows:

    0→ {0, r1} → {0,r1} ∪ {r21, r22} → {0,r1} ∪ {r21, r22}∪ {r121, r221, r122, r222} → ... .

  4. The expression for the discriminant D of g1°n can be deduced from the structure of the root set. D satisfies the recursion formula D(n) = D(n,new) × D(n-1) × D(n,new;n-1). Here D(n,new) is the product

    ∏ri,rj ∈ Rn(new), i<j (ri-rj)^2

    and D(n,new;n-1) is the product

    ∏ri ∈ Rn(new), rj ∈ Rn-1 (ri-rj)^2.

  5. At the limit n→ ∞, the set Rn(new) approaches the boundary of the Fatou set defining the Julia set.
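The root recursion and counting above are easy to check numerically. Here is a minimal Python sketch (standard library only) performing the inverse iteration for the example g1(z) = z(z-ε) with ε=1 treated below; the expected counts 2^(n-1) of new roots per level and the bound |z| ≤ 2 derived below appear as comments, not guarantees.

```python
# Inverse iteration g1^(-n)(0) for g1(z) = z*(z - eps), eps = 1: a sketch.
import cmath

eps = 1.0

def preimages(r):
    # Solve z*(z - eps) = r, i.e. z = (eps +- sqrt(eps^2 + 4r))/2.
    s = cmath.sqrt(eps**2 + 4*r)
    return [(eps + s) / 2, (eps - s) / 2]

roots = {0j}        # R_0 = {0}; g1(0) = 0 keeps 0 a root at every level
frontier = {0j}     # R_n(new): the roots that appeared at the latest level
for n in range(1, 8):
    frontier = {z for r in frontier for z in preimages(r)} - roots
    roots |= frontier
    # Expected: |R_n(new)| = 2^(n-1), and max |root| stays below 2.
    print(n, len(frontier), round(max(abs(z) for z in roots), 3))
```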
As an example one can look at the iteration of g1(z)= z(z-ε).
  1. The roots of z(z-ε)=0 are {0,r1}={0,ε}. At the second level, the new roots satisfy z(z-ε)=r1=ε and are given by {(ε/2)(1 ± (1+4r1)^(1/2))}. At the third level the new roots satisfy z(z-ε)=r2 for the second level roots r2 and are given by {(ε/2)(1 ± (1+4r2)^(1/2))}.
  2. The point z=0 is a fixed point and the root z=ε is mapped to it. Assume ε=1 for definiteness. The image points w(z) = z(z-1) satisfy the condition |w(z)/z| = |z-1|. For the disk D(1,1): |z-1| ≤ 1 the image points therefore satisfy |w| ≤ |z| ≤ 2 and belong to the disk D(0,2): |z| ≤ 2.

    For the points in D(0,2)∖D(1,1) the image point satisfies |w| = |z-1||z| with 1 ≤ |z-1| ≤ 3, giving |z| ≤ |w| ≤ 3|z|. Therefore w can lie inside D(0,2), including D(1,1), but in general only inside the larger disk D(0,6).

    For the points z outside D(0,2) one has |w| = |z-1||z| ≥ (|z|-1)|z| > |z|, so that the iteration leads to infinity.

  3. The inverse iteration relevant for finding the roots g1°(-n)(0) can lead from the exterior of D(0,2) to its interior but cannot lead from the interior to the exterior, since then the forward iteration by g1 would lead from the exterior to the interior. Hence the roots in ∪n g1°(-n)(0) belong to the disk D(0,2).
The conjecture deserving to be killed is that the discriminant D of the iterate g1°n has the Mersenne prime Mn = 2^n - 1 as a factor for primes n defining Mersenne primes, and that also for other values of n, D contains as factors ramified primes near to 2^n.
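The conjecture can be probed numerically. The following sympy sketch continues the example g1(z) = z(z-1): it builds the iterates by functional composition, computes their discriminants exactly and prints the prime factorizations, so that one can look for Mersenne primes Mn = 2^n - 1 or other primes near powers of 2 among the factors. The sketch only makes the conjecture testable; it proves nothing.

```python
# Exact discriminants of the iterates of g1(z) = z*(z - 1) and their
# prime factorizations: a sympy sketch for probing the conjecture above.
from sympy import symbols, compose, discriminant, factorint

z = symbols('z')
g1 = z*(z - 1)

f = g1
for n in range(1, 5):
    if n > 1:
        f = compose(f, g1)     # f becomes the n-fold iterate of g1
    D = discriminant(f, z)     # exact integer discriminant
    print(n, D, factorint(D))  # inspect the factors near powers of 2
```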

See the article A more detailed view about the TGD counterpart of Langlands correspondence or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Dark energy weakens

A Quanta Magazine post (see this) tells about the evidence, found by the DESI collaboration, that dark energy is getting weaker. These findings challenge the very notion of dark energy, which is also theoretically problematic. There is the problem of whether dark energy corresponds to a modification of the gravitational part of the action, obtained by adding a volume term to the curvature scalar, or to a modification of the matter part of the action corresponding to exotic particles, quintessence, with a negative pressure.

In Big Think (see this) there is an article discussing in detail the claimed weakening of dark energy (see this). The article describes in detail the constraints on the ΛCDM model. It becomes clear that standard cosmology is a good parametrization of huge amounts of data but has a lot of problems, and that the notion of dark energy is far from elegant. The basic empirical inputs are as follows.

  1. Cosmic microwave background (CMB) provides information about the basic parameters of standard cosmology such as its age, the value of the Hubble constant characterizing the expansion rate, and the density of matter. This information allows us to identify several anomalies. For instance, the Hubble constant seems to have two slightly different values. It would seem that the Hubble constant depends on length scale, but the notion of scale is lacking from standard cosmology.
  2. The finding that the radiation from supernovae of type Ia is weaker than expected led to the conclusion that cosmic expansion is accelerating. The cosmological constant characterizing the dark energy density would at least parametrize the accelerating expansion in the general relativistic framework.
  3. Baryonic acoustic oscillations (BAO) provide information about the large scale structure of the Universe, and in the recent study BAO led to the conclusion that dark energy is weakening. BAO has also led to the conclusion that the density of baryonic matter is decreasing: as if baryons were disappearing. Are these two phenomena different aspects of the same phenomenon?
In TGD, the new view of space-times as 4-D surfaces in H=M4×CP2 predicts an analog of the cosmological constant as well as its weakening.
  1. String tension characterizes the energy density of a magnetic monopole flux tube, a 3-D surface in H=M4×CP2. The string tension contains a volume part (a direct analog of Λ) and a Kähler magnetic part, and it is a matter of taste whether one identifies the entire string tension or only the volume contribution as the counterpart of Λ. In the primordial cosmology (see this), cosmic strings have a 2-D M4 projection and a 2-D CP2 projection.
  2. Cosmic strings are unstable against the thickening of their 2-D M4 projection, which means that the energy density is gradually reduced in a sequence of phase transitions thickening the cosmic strings to monopole flux tubes, which give rise to galaxies and stars. The energy of the cosmic strings is transformed to ordinary particles. This process is the TGD analog of inflation. No inflaton fields are required.
  3. The string tension is gradually reduced in these phase transitions, and in this sense one could say that dark energy weakens. For instance, for hadronic strings the tension is rather small as compared to the original string tension during the primordial phase dominated by cosmic strings; at the molecular level the string tension is smaller still.
  4. The primordial string dominated phase was followed by a transition to the radiation dominated cosmology and the emergence of Einsteinian space-time with a 4-D M4 projection, so that general relativity and quantum field theory became good approximations explaining a lot of physics. The quantum field theory approximation cannot however explain the structures appearing in all scales, and here monopole flux tubes are necessary.
  5. TGD also predicts a hierarchy of effective Planck constants labelling phases of ordinary matter behaving like dark matter in many respects. These phases would be quantum coherent in arbitrarily long scales. They would reside at the magnetic bodies consisting of monopole flux tubes and define a number theoretic complexity hierarchy highly relevant in quantum biology. The transformation of ordinary matter to this kind of dark matter would explain the observed apparent disappearance of baryons during cosmic evolution. In the primordial cosmology, cosmic strings as 4-D objects with a 2-D M4 projection would dominate: during this period one cannot speak of Einsteinian space-time, that is of space-time surfaces with a 4-D M4 projection.
One can try to translate these analogies to a more detailed quantitative view of dark energy and dark matter.
  1. The energy of a cosmic string contains two contributions: the volume contribution and the Kähler magnetic contribution. Their sum defines the galactic dark matter (see this), whose portion is about 26 percent of the cosmic energy density, and it is concentrated at cosmic strings. Ordinary matter, about 5 percent of the energy density, emerges in the thickening of these cosmic strings as they form local tangles. The liberated energy transforms to ordinary matter. This is the TGD counterpart of inflation, and the cosmic strings carry the counterpart of the energy assigned to the vacuum expectations of inflaton fields.
  2. What about the dark energy forming about 69 percent of the cosmic energy density? Does dark energy correspond to the energy associated with Minkowskian space-time sheets, that is the energy associated with the volume action, with the Kähler magnetic part being negligible? Could the volume contribution dominate, or are the contributions of the same size scale? The cosmological constant would have a spectrum, being inversely proportional to the p-adic length scale characterizing these space-time sheets. The value of the cosmological constant would depend not on time but on the scale of the space-time sheet and would decrease as a function of the scale. This might explain the latest findings if they are true.
  3. One can ask whether only magnetic flux tubes and sheets are present at the fundamental level and whether the cosmological constant corresponds to the energy assignable to large enough monopole flux tubes. Already flux tubes with a thickness of neuron size correspond to the extremely small value of the cosmological constant deduced from cosmology. Also the hadronic string tension corresponds to a particular value of the cosmological constant (a toy version of this scaling is sketched below the list).
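Taking at face value the statement above that the cosmological constant is inversely proportional to the p-adic length scale of the space-time sheet, one can write down a toy scaling. In the sketch below the p-adic length scale formula L(k) = 2^((k-151)/2) L(151) with L(151) ≈ 10 nm and the normalization point k=169 (cell scale) are illustrative assumptions, not derivations.

```python
# Toy scaling of the cosmological constant with the p-adic length scale,
# taking literally the statement Lambda ~ 1/L(k) above. The reference
# scale L(151) ~ 10 nm and the normalization point are assumptions.
L151 = 1.0e-8

def L(k):
    # Standard p-adic length scale formula L(k) = 2^((k-151)/2) * L(151).
    return 2**((k - 151) / 2) * L151

k0 = 169                           # normalize at the cell scale L(169) ~ 5 um
for k in [107, 151, 169, 211]:
    print(k, L(k), L(k0) / L(k))   # relative Lambda decreases with scale
```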
See for instance the article About the recent TGD based view concerning cosmology and astrophysics or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, March 20, 2025

Witt vectors and polynomials and the representation of generalized p-adic numbers as space-time surfaces

The discussions with Robert Paster, who advocates the importance of universal Witt vectors (UWVs) and Witt polynomials (see this) in the modelling of the brain, have been very inspiring. As a special case, Witt vectors code for p-adic number fields. Witt polynomials are characterized by their roots, and the TGD view of space-time surfaces both as generalized numbers and as representations of ordinary numbers inspires the idea that the roots of suitably identified Witt polynomials could be represented as space-time surfaces in the TGD framework. This would give a representation of generalized p-adic numbers as space-time surfaces.

Could the prime polynomial pairs (g1,g2): C2→ C2 and (f1,f2): H=M4×CP2→ C2 (perhaps states of pure, non-reflective awareness) characterized by small primes give rise to p-adic numbers represented in terms of space-time surfaces such that these primes could correspond to ordinary p-adic primes? The same question applies to the pairs (f1,f2), which are functional primes.

  1. Universal Witt vectors and polynomials can be assigned to any commutative ring R, not only to p-adic integers. Witt vectors Xn define sequences of elements of a ring R, and universal Witt polynomials Wn(X1,X2,...,Xn) define a sequence of polynomials of order n. In the case of a p-adic number field, Xn corresponds to the pinary digit of the power p^n and can be regarded as an element of the finite field Fp, which can also be mapped to a phase factor exp(i2πk/p). The motivation for Witt polynomials is that the multiplication and sum of p-adic numbers can be done in a component-wise manner for Witt polynomials, whereas for pinary digits the sum and product affect also the higher pinary digits.

    In the general case, the Witt polynomial as a polynomial of several variables can be written as Wn(X1,X2,...) = ∑d|n d Xd^(n/d), where d runs over the divisors of n, with 1 and n included (a computational illustration follows this list).

  2. The function pairs (f1,f2): H→ C2 define a ring-like structure. Product and sum are well-defined for these pairs. A function pair related to (f1,f2) by multiplication with a pair (h1,h2), which vanishes nowhere inside the CD, defines the same space-time surface as the original one. Note that also the powers (f1^n,f2^n) define the same 4-surfaces as (f1,f2).

    The degrees for the product of polynomial pairs (P1,P2) and (Q1,Q2) are additive. In the sum, the degree is not larger than the larger of the two degrees, and it can happen that the highest powers sum up to zero so that the degree is smaller. This is reminiscent of the properties of the non-Archimedean norm for p-adic numbers. The zero element defines the entire H as a root, and the unit element does not define any space-time surface as a root.

    For the pairs (g1,g2) also functional composition is possible, and the degrees are multiplicative in this operation.

  3. Functional primes (f1,f2) define analogs of ordinary primes, and the polynomials whose degrees with respect to the 3 complex coordinates of H are below the primes associated with these coordinates are analogous to pinary digits. Also the pairs (g1,g2) define functional primes, both with respect to the powers defined by the element-wise product and with respect to functional composition.
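To make the Witt polynomial formula above concrete before generalizing it, here is a minimal sympy sketch generating Wn = ∑d|n d Xd^(n/d); for n a power of a prime p only the powers of p contribute, which is the p-adic case referred to above.

```python
# Universal Witt polynomials W_n = sum_{d | n} d * X_d^(n/d): a sketch.
from sympy import divisors, Symbol

def witt(n):
    return sum(d * Symbol('X%d' % d)**(n // d) for d in divisors(n))

for n in [1, 2, 3, 4, 8]:
    print(n, witt(n))
# W_4 = X1**4 + 2*X2**2 + 4*X4: for n = 2^k only powers of 2 contribute,
# mirroring the pinary digits of a 2-adic number.
```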

Generalization of Witt polynomials

Could a representation of polynomials, in particular of the analogs of Witt polynomials, in terms of their roots, in turn represented as space-time surfaces, be a universal feature of mathematical cognition? If so, cognition would really create worlds! In Finland we have Kalevala as a national epic, and it roughly says that things were discovered by first discovering the word describing the thing. Something similar appears in the Bible: "In the beginning was the Word, and the Word was with God, and the Word was God." Word is world!

Could p-adic numbers or their generalization for functional primes (f1,f2) have a representation in terms of Witt polynomials coded by their roots defining space-time surfaces?

  1. Wn is a polynomial of n arguments Xk, whereas the arguments of the polynomials defining space-time surfaces correspond to the 3 complex coordinates of H. In the p-adic case, with n a power of p, the divisors d are powers of p. The Xd are analogous to elements of a finite field appearing as coefficients of the powers of p.
  2. There are two cases to consider: the Witt polynomials assignable to the space-time surfaces (f1,f2)=(0,0): H→ C2 using the element-wise sum and product, and the pairs g=(g1,g2): C2→ C2, for which one can consider the element-wise sum and product giving g^n = (g1^n,g2^n) or the functional composition giving g(g(...g)...). The latter option looks especially attractive. One reason is that by the previous considerations the prime surface pairs (f1,f2) might be too simple. For instance, the iterations of (g1,g2) with prime degree 2,3,... could give a justification for the p-adic length scale hypothesis and its generalization.
Consider first the pairs (f1,f2): H→ C2.
  1. If the space-time surface (f1,f2)=(0,0) is prime with respect to the functional composition f→ g(f), it naturally generalizes the p-adic prime p, so that one would have p^k→ (f1,f2)^k with n1=n2.

    The Xk are the analogs of pinary digits as elements of finite fields. Could they correspond to polynomials whose 3 degrees are smaller than the corresponding prime degrees assignable to the prime polynomial pair (f1,f2)?

  2. With these identifications it might be possible to generalize the Witt polynomials to their functional variants as such and find their roots represented as space-time surfaces. These surfaces would represent the functional analog of the p-adic number field. One can also assign to the functional p-adic numbers ramified primes defining ordinary p-adic primes. Each functional p-adic number would define ramified primes, and these would correspond to the p-adic primes.
  3. The fi are labelled by 3 ordinary primes pr(fi), r=1,2,3, rather than by a single prime p, and by the earlier argument one can restrict the condition to f1.

    Every functional p-adic number corresponds to its own ramified primes determined by the roots of its Witt polynomial. There is a huge number of these generalized p-adic numbers. Could some special functional p-adic primes correspond to elementary particles? The simplest generalized p-adic number corresponds to a functional prime, and in this case the surface in question would correspond to (f1,f2)=(0,0) (could this be interpreted as the analog of the condition x mod p = 0?). These prime surfaces might be too simple, and it is not easy to see how the large values of p-adic primes could be understood.

One can ask whether there are analogs of ramified primes for the Witt polynomials assignable to the abstraction hierarchies g(g(...(f)...)) and to the powers g^n=(g1^n,g2^n), for which the degree of the polynomials is n×p, p being the prime assignable to g.
  1. The ramified primes for the Witt polynomials for g(g(...(f)...)) and g^n would define the analogs of the powers p^n of p-adic numbers. Note that the roots of g(g(...(f)...)) are a property of g(g(...(g)...)) and do not depend on f, in the case that they exist as surfaces inside the CD.
  2. The interesting question is whether and how these ramified primes could relate to the ramified primes assignable to a generalized Witt polynomial Wn. The iterated action of a prime g giving g(g(...(f)...)) is the best candidate. There is hope that even the p-adic length scale hypothesis could be understood in terms of ramified primes assignable to some functional prime. The large values of p-adic primes require very large ramified primes for the functional primes (f1,f2). This suggests that the iterate g(g(...g(f)...)) acting on a prime f is involved. For p ∼ q^k, the k:th power of g characterized by the prime q is the first guess.
See the article A more detailed view about the TGD counterpart of Langlands correspondence or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, March 19, 2025

Ytterbium anomaly

Sabine Hossenfelder talked about a very interesting nuclear physics anomaly known as the Ytterbium anomaly (see this). There is a Phys. Rev. Lett. article about this anomaly titled "Probing New Bosons and Nuclear Structure with Ytterbium Isotope Shifts" (see this). What makes this anomaly so interesting is that it is reported to have a significance level of 23 sigma! 5 sigma is usually regarded as the level required to speak of a discovery.

  1. Ytterbium (see this) is a heavy nucleus with charge Z=70 and mass number A=173, so that the neutron number would be N=103 for this isotope (it seems that the quoted mass number varies depending on whether it is defined in terms of the average atomic mass or the actual number of nucleons). The mass numbers of Yb isotopes vary in the range 168-176. Ytterbium is a rare earth metal with the electron configuration [Xe]4f^14 6s^2.
  2. The electronic state seems to be very sensitive to the number of neutrons in Yb, and this is why Yb is so interesting from the point of view of atomic physics. The anomaly is related to the isotope shift of the electron energies. The so-called frequency comb method amplifies the mismatch with the standard theory. A mismatch indeed exists and could be understood in terms of a new particle with a mass of a few MeV.
  3. There is an earlier anomaly known as the Atomki anomaly (see this and this), explained by what is called the X boson with a mass in the range 16-17 MeV, and the Yb anomaly could be explained in terms of the X boson (see this).
I have discussed the X boson in the TGD framework and proposed that it could be a pion-like state (see this).
  1. The proposed model provides new insights into the relation between weak and strong interactions. One can pose several questions. Could X be a scaled variant of the pion? Or could weak interaction physics have a scaled down copy in the nuclear scale?
  2. The latter option would mean that some weak boson could become dark and its Compton length would be scaled up by the factor heff/h to the nuclear p-adic length scale. I have proposed a scaled up copy of hadron physics characterized by the Mersenne prime M89 with a mass scale, which is 512 times higher than for ordinary hadrons characterized by M107. In the high energy nuclear collisions in which quark-gluon plasma is believed to be created at quantum criticality, M89 hadrons would be generated. They would be dark and have heff/h=512 so that the scaled up hadronic Compton length would be the same as for ordinary hadrons (see this and this). M89 hadron physics would explain a large number of particle physics anomalies and there is considerable evidence for it. The most radical implications of M89 hadron physics would be for solar physics (see this).
Could also weak interaction physics have dark variants at some kind of quantum criticality, and could the Yb anomaly and the Atomki anomaly be understood as manifestations of this new physics?
  1. Weak bosons are characterized by p ∼ 2^k, k=91, as deduced from the mass scale of the weak bosons. A little confession: for a long time I believed that the p-adic mass scale of weak bosons corresponds to k=89 rather than k=91. For the Higgs boson the mass scale would correspond to M89. TGD also predicts a pseudoscalar variant πW of the Higgs boson. Could the dark variant of the pseudoscalar Higgs πW, with a Compton length assignable to the X boson, be involved?
  2. The scaled up weak boson should have a p-adic length scale which equals the nuclear length scale or is longer. Dark πW could become effectively massless below its Compton length. The mass of the X boson, about 16-17 MeV, corresponds to a Compton length of 6×10^-14 m, which is by a factor 3 longer than the nuclear p-adic length scale L(k=113) ∼ 2×10^-14 m. k=91 for πW would give heff/h = 2^((113-91)/2) = 2^11 to reach the nuclear Compton scale. k=115 would give the length scale 4×10^-14 m, not far from 6×10^-14 m, and would require heff/h = 2^((115-91)/2) = 2^12. If πW corresponds to k=89, then heff/h = 2^((115-89)/2) = 2^13 (the arithmetic is collected in the sketch after this list).

    If the πW Compton length is scaled up from the M91 value by the factor 2^((113-89)/2) = 2^12, it corresponds, for the ordinary Planck constant, to the Compton length of a mass of about mπW/2^12. This would give mπW ∼ 64 GeV, essentially one half of the mass of the ordinary Higgs, about 125.35 GeV, and would correspond to the p-adic length scale k=91 just like the other weak bosons.

  3. Could dark πW give rise to a dark weak force between the electrons and the nucleus inducing the anomalous isotope shift of the electron energies? The increase of the Compton length would suggest a scaling up of the electron-nucleus weak interaction geometric cross section by the ratio (mW/mX)^4 ∼ 10^20. The weak cross section for dark πW would have the same order of magnitude as nuclear strong interaction cross sections since, apart from numerical factors depending on the incoming four-momenta, the weak cross section for electron-nucleon scattering goes like GF ∼ 1/TeV^2. This would scale up by the factor 2^(113-91) = 2^22 to about 4/GeV^2 (see this). An alternative manner to understand the enhancement would be to assume that weak bosons are effectively massless below the nuclear scale.
  4. Quantum criticality would be essential for the generation of dark phases and of long range quantum fluctuations, of which the emergence of scaled up dark weak bosons would be an example. Yb is indeed a critical system in the sense that the isotope shift effects are large: this is one reason why it is so interesting. Why would the quantum criticality of some Yb isotopes induce the generation of dark weak bosons?
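The length scale arithmetic used in this list can be collected into a short consistency check. The sketch below assumes the standard TGD p-adic length scale formula L(k) = 2^((k-151)/2) L(151) with the reference value L(151) ≈ 10 nm (an input of the p-adic length scale hypothesis, not derived here) and uses the non-reduced Compton wavelength h/mc for the X boson.

```python
# Consistency check of the p-adic length scales and heff ratios: a sketch.
# Assumption: L(k) = 2^((k-151)/2) * L(151), L(151) ~ 10 nm (standard
# reference scale of the p-adic length scale hypothesis).
L151 = 1.0e-8                  # meters

def L(k):
    return 2**((k - 151) / 2) * L151

hc = 1.23984e-6                # h*c in eV*m
m_X = 16.5e6                   # X boson mass ~16-17 MeV, in eV

print(L(113))                  # ~2e-14 m, the nuclear p-adic length scale
print(L(115))                  # ~4e-14 m
print(hc / m_X)                # h/mc ~ 7.5e-14 m, same order as 6e-14 m above
print(2**((113 - 91) / 2))     # heff/h = 2^11 to reach L(113) from k = 91
print(2**((115 - 91) / 2))     # heff/h = 2^12 to reach L(115)
```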
It should be noticed that for heff/h=512 assigned to dark M89 hadrons, the scaled down Compton length of dark πW would be shorter by a factor 8 and would correspond to a mass of 128 MeV, not far from the pion mass. What could this mean? Could the ordinary pion correspond to a dark variant of πW? This looks strange since usually strong and weak interactions are regarded as completely separate, although CVC and PCAC suggest that they are closely related. In TGD, the notion of induced gauge field implies extremely tight connections between strong and weak interactions.

See the article X boson as evidence for nuclear string model or the chapter Nuclear string hypothesis.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.