https://matpitka.blogspot.com/2025/04/

Monday, April 21, 2025

Critical summary of the problems associated with the physical interpretation of the number theoretical vision

The physical interpretation of the number theoretical vision involves several ideas and assumptions which can be criticized.

p-Adic primes and p-adic length scale hypothesis

The basic notions are the p-adic length scale hypothesis and the hierarchy of Planck constants, both motivated by empirical input. The p-adic length scale hypothesis was originally motivated by p-adic mass calculations and has now developed into a rather nice picture (see this) solving the original interpretational problem due to the needed tachyonic ground states.

  1. The proposed interpretation of p-adic primes has been as ramified primes. The identification of ramified primes is however far from obvious since they are assignable to polynomials of a single complex variable: how is this polynomial determined? There is also a huge number of polynomials to consider, and it seems that the notion of p-adic prime should characterize a large class of polynomials and therefore be rather universal.
  2. A more promising interpretation is in terms of functional primes, which under some assumptions are mappable to ordinary primes by a morphism. The maps f are primes if they correspond to irreducible polynomials or ratios of such polynomials and if it is not possible to express f as f= gº h. f is characterized by at most 3 primes corresponding to the 3 complex coordinates of H. Also for g primeness can be defined, and if only f1 is involved, an ordinary prime characterizes it.

    There would be a morphism mapping these functional primes to ordinary primes, perhaps identifiable as p-adic primes. This could also fit with the p-adic length scale hypothesis suggesting the pairing of a large prime pl and a small prime ps: pl ∼ ps^k would hold true. One expects that g=(g1,g2) with g2 fixed is the physically motivated option, and one can assign primes pl near powers of the small prime ps to functional primes gps^(º k)/gr.
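The pairing pl ∼ ps^k can be made concrete with a short script. This is an illustration only, not TGD code; the choice ps = 2 and the particular exponents k are assumptions of this sketch. For several prime exponents k, the prime nearest to 2^k turns out to be the Mersenne prime 2^k - 1.

```python
# Illustration only (the pairing is a TGD hypothesis; ps = 2 and the
# exponents below are assumptions of this sketch): for several prime
# exponents k, the prime nearest to 2^k is the Mersenne prime 2^k - 1,
# matching pl ~ ps^k.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def nearest_prime(n: int) -> int:
    d = 0
    while True:
        if is_prime(n - d):
            return n - d
        if is_prime(n + d):
            return n + d
        d += 1

for k in (7, 13, 17, 19):
    print(k, 2 ** k, nearest_prime(2 ** k))   # e.g. 7 128 127
```

For these exponents the nearest prime sits at distance 1 below the power of 2, which is exactly the form the hypothesis prefers.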

The hierarchy of Planck constants heff=nh0

Consider first the evidence.

  1. The quantal effects of ELF em fields on the brain provide support for very large values of heff, of order 10^14, scaling the Compton length and giving rise to quantum coherence in long scales. There is also evidence for small values of heff.
  2. There is also evidence for the gravitational resp. electric Planck constants ℏgr resp. ℏem, which are proportional to the product of a large and a small mass (resp. charge) and therefore depend on the quantum numbers of the interacting particles. This distinguishes these parameters from the ordinary Planck constant and its possible analog heff. The support for ℏgr and ℏem emerges from numerical coincidences and success in explaining features of certain astrophysical systems and biosystems.

    The proposal is that ℏgr and ℏem emerge in Yangian symmetries which replace single particle symmetries with multi-local symmetries acting at the local level on several particles simultaneously. One should be able to formulate this idea in a more precise manner.

The basic mathematical ideas are the following.
  1. The proposal is that the scaling Lp(heff,1)→ (heff,2/heff,1)Lp(heff,1) takes place in the transition heff,1→ heff,2 and increases the scale of quantum coherence. One cannot exclude Lp→ (heff,2/heff,1)^(1/2)Lp as an alternative.
  2. The number theoretical vision motivates the proposal that heff corresponds to the order of the Galois group of a polynomial. It is however far from clear how one can assign this kind of polynomial to the space-time surface: I have made several proposals and the situation remains unclear.
There are two sectors to be considered corresponding to the dynamical symmetries defined by g and the prime maps fP. Consider first the g sector.
  1. For g=(g1,Id), the situation reduces to that for a single polynomial: heff could correspond to the order of the Galois group of g1, which would define the dimension of the corresponding algebraic extension. The motivation is that the condition f2=0 would define the TGD counterpart of a dynamical cosmological constant.
  2. The first proposal was that heff/h0 corresponds to the number of space-time sheets for the space-time surface, which can be connected and indeed is so for fP. This number is the degree of the polynomial in the single variable case and is in general much smaller than the order n of the Galois group, which for polynomials of degree d has the maximal value nmax=d!.

    If the Galois group is cyclic, one has n=d. Could the proposal that for functional primes the coefficients gk appearing in gk º gp^(º k) commute with gp and with each other imply this? This condition might be seen as a theoretical counterpart for the assumption that the Abelian Cartan algebra of the symmetry group defines the set of mutually commuting observables.

Consider next the f sector.
  1. For a prime map fP=(f1,f2), P could correspond to 3 ordinary primes assignable to the 3 complex coordinates of H: f1 and f2 could be prime polynomials with respect to all these coordinates. Does this mean that 3 p-adic length scales are involved or is there some criterion selecting one p-adic length scale, say assignable to the M4 complex coordinate or to the hypercomplex coordinate u?
  2. For a prime map fP, the space-time surface as a root is connected. The original hypothesis would state that heff/h0 corresponds to the number of space-time regions representing roots of fP rather than to the order of the generalized Galois group associated with the surface fP=0 and permuting the roots as space-time regions. Again the cyclicity of the generalized Galois group would guarantee the consistency of the two views. Now however the polynomials are ordinary polynomials obeying ordinary commutative arithmetics. But is there any need to assign heff to fP? As far as applications are considered, g seems to be enough.
  3. gp^(º k)º f has p^k disjoint roots. (gp^(º k)/gr)º f has p^k roots and r poles as roots of gr. Also these are disjoint, so that functional primeness for g does not imply connectedness. Functional primeness for f would be required.
Does Mother Nature love her theoreticians?

The hypothesis that Mother Nature is theoretician friendly (see this and this) involves quantum field theoretic thinking, which can be motivated in TGD by the assumption that the long length scale limit of TGD is approximately described by quantum field theory. What this principle states is the following.

  1. When the quantum states are such that perturbative quantum field theory ceases to converge, a phase transition heff→ n×heff occurs and reduces the value of the coupling strength αK ∝ 1/ℏeff by a factor 1/n so that the perturbation theory converges. This can take place when the coupling constant defined by the product of the charges involved is so large that convergence is lost or at least unitarity fails. The phase transition gives rise to quantum states which are Galois singlets for the larger Galois group.
  2. The classical interpretation would be that the number of space-time surfaces as roots of g1 º f1 increases by a factor n, where n is the degree of the polynomial g1. The total classical action should be unchanged. This is the case if at the criticality for the transition the n space-time surfaces are identical.
Can the transition take place in a BSFR or even in an SSFR? Can one associate a smooth classical time evolution with f→ gp^(º k)º f producing p copies of the original surface at each step such that the replacement αK → αK/p occurs at each step?
  1. The transition should correspond to quantum criticality, which should have classical criticality in algebraic sense as a correlate. Polynomials xn have x=0 as an n-fold degenerate root. In mathematics degenerate roots are regarded as separate. Now they would correspond to identical space-time surfaces on top of each other such that even an infinitesimal deformation can separate them. If the copies are identical at quantum criticality, a smooth evolution leading to an approximate n-multiple of a single space-time surface is possible. The action would be preserved approximately and the proposed scaling down of αK would guarantee this.
  2. The catastrophe theoretic analogy is the vertex of a cusp catastrophe. At the vertex of the cusp 3 roots coincide and at the V-shaped boundary of the plane projection of the cusp 2 roots coincide. More generally, the initial state should be quantum critical with p^k degenerate roots. In the simplest case one would have p degenerate roots: p=2 and p=3 and their powers are favored empirically and by the very special cognitive properties of these options (the roots can be solved analytically). Also this suggests that Mother Nature loves theoreticians.
  3. g1(f1)=f1^p would satisfy the condition. An arbitrarily small deformation of f1^p, replacing it with ak º f1^(kp), would remove the degeneracy. The functional counterpart of the p-adic number would be the +e sum of the g1,k= ak º f1^(kp), realized as the product ∏k g1,k. Each power would correspond to its own almost critical space-time surface and ak=1 would correspond to maximal criticality. This would correspond to the number ∑ p^k, and one would obtain Mersenne primes and their general versions for p>2 naturally from maximal criticality giving rise to functional p-adicity. The classical non-determinism due to criticality would correspond naturally to p-adic non-determinism.
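The cusp picture of point 2 can be made concrete with the standard cubic normal form x^3 + a·x + b. This is an assumed textbook example, not TGD-specific: roots coincide exactly where the discriminant -4a^3 - 27b^2 vanishes.

```python
# Standard cusp catastrophe in the cubic normal form x^3 + a*x + b
# (assumed textbook example, not TGD-specific): root degeneracy occurs
# exactly where the discriminant -4a^3 - 27b^2 vanishes.

def cubic_disc(a: int, b: int) -> int:
    return -4 * a**3 - 27 * b**2

print(cubic_disc(0, 0))    # 0: vertex of the cusp, triple root x = 0
print(cubic_disc(-3, 2))   # 0: fold boundary, x^3 - 3x + 2 = (x-1)^2 (x+2)
print(cubic_disc(-3, 0))   # 108 > 0: inside the cusp, three distinct roots
```

The vertex (a=b=0) is the maximally degenerate case corresponding to quantum criticality; an arbitrarily small change of (a,b) off the discriminant locus separates the coinciding roots, as the text describes for the space-time sheets.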

To sum up, the situation concerning the relationship between the number theoretic and geometric views of TGD looks rather satisfactory but many questions remain to be asked and answered. The understanding of M8-H duality as one aspect of the duality between number theory and geometry, an analog of momentum-position duality generalized from point-like particles to 3-surfaces, is far from complete: one can even ask whether the M8 view produces more problems than it solves.

See the article A more detailed view about the TGD counterpart of Langlands correspondence or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, April 19, 2025

Is the brain quantum critical?

Sabine Hossenfelder had an interesting posting (see this) telling about real progress in understanding consciousness (see the article Complex harmonics reveal lower-dimensional manifolds of critical brain behavior).

The first basic mystery is that the brain can react extremely rapidly in situations which require rapid action, such as being in a jungle and suddenly encountering a tiger. This rapid reaction is extremely difficult to understand in the standard neuroscience framework. A lot of signalling inside the brain and between brain and body is required, and the rate of neural processing of information seems to be hopelessly slow. You could not generate a bodily expression of horror before you would be dead.

The second mystery is the extremely low metabolic power of the brain: about .1 kW, extremely small compared to the power needed by ordinary computers. A supercomputer uses a power measured in megawatts. This is one of the key problems of AI based on classical computation: nuclear power plants are proposed to be built to satisfy the needs of large language models. The extremely low dissipation rate of the brain suggests that long range quantum coherence is involved. But this is impossible in standard quantum theory.

For decades (since 1995, when I started to develop the TGD inspired theory of consciousness) I have tried to tell that quantum coherence in the scale of the brain could be part of the solution to the problem. The entire brain or even the entire body could act as a single coherent whole and act instantaneously.

Unfortunately, standard quantum theory (or should one say colleagues) does not allow quantum coherence in the bodily or even brain scale. Quantum coherence at the level of the ordinary biomatter is not needed, however. Non-quantal coherence could be induced by quantum coherence at the level of the field body, the TGD counterpart of classical fields, having a much larger size than the biological body. This would explain EEG, which in neuroscience is still often seen as some kind of side effect although it carries information. It is difficult to see why the brain as a master energy saver would send information to outer space just for fun. But so do many neuroscientists believe.

Quantum criticality induces ordinary criticality in the TGD Universe

Quantum criticality implies long range quantum fluctuations and quantum coherence: the entire system behaves like a single quantum unit. The TGD Universe as a whole is a quantum critical fractal structure with quantum criticality realized in various scales. The degree of quantum criticality and its scale depend on the parameters of the system and can be assigned with the field body of the system carrying heff> h phases of ordinary matter at it. These phases, behaving like dark matter (but not identifiable as galactic dark matter in TGD), have higher IQ than ordinary biomatter and control it. The field body itself is a hierarchical structure.

Quantum criticality is perhaps the basic aspect of the TGD inspired theory of consciousness and of living matter. The number theoretical vision of TGD predicts a hierarchy of Planck constants heff=nh0, where h0, satisfying h=(7!)^2×h0, is the minimal value of heff. This gives rise to a hierarchy of phases of ordinary matter behaving in many respects like dark matter but very probably not identifiable as galactic dark matter. The role of metabolism is to provide the energy needed to increase the value of heff, which spontaneously decreases. The physiological temperature is one critical parameter.

The complexity associated with quantum criticality corresponds to the algebraic complexity assignable to the polynomials determining the space-time surface. The degree of a polynomial equals the number of its roots, and the dimension of the extension of rationals corresponds to the order of the Galois group given by n=heff/h0.

Algebraic complexity can only increase since the number of systems more complex than a given system is infinitely larger than the number of less complex ones. This reduces evolution to number theory and quantum mechanics. For the mathematical background of the TGD inspired theory of consciousness and quantum biology, see for instance this, this, this, this, and this.

Classical signaling takes place with light velocity

Also the classical processing of information might take place much faster than in the neuroscience picture. The existence of biophotons has been known for about a century but neuroscience still refuses to take them seriously. TGD indeed suggests that the neuroscience view of classical information processing is wrong (see for instance this, this, this, this). Nerve pulses need not be the primary carriers of sensory information. The function of nerve pulses could be only to serve as relays at the synaptic contacts. This would make possible the real information transfer as dark photons (photons with a large value of heff) along monopole flux tubes associated with axons and also leading to the field body of the brain. Biophotons, whose origin is not understood, would result from the transformation of dark photons to ordinary photons (see this).

The brain could use dark photon signals propagating along monopole flux tubes carrying information. The information would be coded by the frequency modulation of the Josephson frequencies associated with the cell membranes, with a large value of heff making the Josephson frequency small. The signal would be received at the field body of the appropriate subsystem of the brain by cyclotron resonance, and the signal would give rise to a sequence of pulses, which would define the feedback to the brain and possibly generate nerve pulses. Light velocity is by a factor 10^6-10^7 higher than the velocity of nerve pulses, about 10-100 m/s. This allows a lot of data processing involving back-and-forth signalling in order to build standardized mental images from the incoming sensory information (see this).

In TGD, this would allow classical signalling with a field body of the size scale of the Earth in a time scale shorter than .1 seconds (alpha frequency), which is roughly the duration of what might be called the chronon of human conscious experience. After this signal is received by the field body, a phase transition generating quantum coherence at the level of the field body takes place and induces coherence in the scale of the brain or even the body.

Zero energy ontology predicts time reversals in the counterparts of ordinary state function reductions

Also zero energy ontology, in which "big" state function reductions (BSFRs) as the TGD counterparts of ordinary state function reductions change the arrow of time, can make information processing faster: the proposal is that motor responses involve signals propagating with the reversed arrow of geometric time so that the response would start already in the geometric past. Libet's findings that volitional action is preceded by brain activity support this view.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Friday, April 18, 2025

Could one understand p-adic length scale hypothesis in terms of functional arithmetics?

The holomorphy= holography vision reduces gravitation as geometry to gravitation as algebraic geometry and leads to an exact general solution of the geometric field equations as local algebraic equations for the roots and poles of rational functions and possibly also their inverses.

The function pairs f=(f1,f2): H→ C2 define a function field with respect to element-wise sum and multiplication. This is also true for the function pairs g=(g1,g2): C2→ C2. Now functional composition º is an additional operation. This raises the question whether ordinary arithmetics and p-adic arithmetics might have functional counterparts.

One implication is quantum arithmetics as a generalization of ordinary arithmetics (see this). One can define the notion of primeness for polynomials and define the analogs of ordinary number fields.

What could be the physical interpretation of the prime polynomial pairs (f1,f2) and (g1,g2), in particular (g1,Id), and how does this relate to the p-adic length scale hypothesis (see this)?

  1. The p-adic length scale hypothesis states that the physically preferred p-adic primes correspond to powers p∼ 2^k. Also powers p∼ q^k of other small primes q can be considered (see this) and there is empirical evidence for time scales coming as powers of q=3 (see this and this). For Mersenne primes Mn= 2^n-1, n is prime, and this inspires the question whether k could be prime quite generally.
  2. Probably the primes as degrees of prime polynomials do not correspond to the very large p-adic primes (M127=2^127-1 for the electron) assigned in p-adic mass calculations to elementary particles.

    The proposal has been that p and k would correspond to a very large and a small p-adic length scale respectively. The short scale would be near the CP2 length scale and the large scale of the order of the elementary particle Compton length.
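The Mersenne observation in point 1 above is easy to verify directly. A quick check, an illustration only: if Mn = 2^n - 1 is prime then the exponent n is prime, while the converse fails (n = 11 gives 2047 = 23 × 89).

```python
# Quick check of the fact quoted above (illustration only): if
# M_n = 2^n - 1 is prime then the exponent n is prime; the converse
# fails, e.g. n = 11 gives 2047 = 23 * 89.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

mersenne_exponents = [n for n in range(2, 20) if is_prime(2 ** n - 1)]
print(mersenne_exponents)                            # [2, 3, 5, 7, 13, 17, 19]
print(all(is_prime(n) for n in mersenne_exponents))  # True
print(is_prime(11), is_prime(2 ** 11 - 1))           # True False
```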

Could small-p p-adicity make sense and could the p-adic length scale hypothesis relate small-p p-adicity and large-p p-adicity?
  1. Could the p-adic length scale hypothesis in its basic form reflect 2-adicity at the fundamental level, or could it reflect the fact that p=2 is the degree of the lowest prime polynomials, certainly the most primitive cognitive level? Or could it reflect both?
  2. Could p∼ 2^k emerge when the action of a polynomial g1 of degree 2, with respect to say the complex coordinate w of M4, on a polynomial Q is iterated functionally: Q→ P º Q → ...→ P º...º Pº Q, giving n=2^k disjoint space-time surfaces as representations of the roots? For p=2 the iteration is the procedure giving rise to Mandelbrot fractals and Julia sets. Electrons would correspond to objects with 127 iterations and a cognitive hierarchy with 127 levels! Could p= M127 be a ramified prime associated with Pº ...º P?

    If this is the case, p∼ 2^k and k would tell about the cognitive abilities of an electron and not so much about the system characterized by the function pair (f1,f2) at the bottom. Could the 2^k disjoint space-time surfaces correspond to a representation of the 2^k binary numbers, realizing binary mathematics at the level of space-time surfaces? This representation brings to mind the totally disconnected compact-open p-adic topology. Cognition indeed decomposes the perceptive field into objects.

  3. This generalizes to a prediction of hierarchies p∼ q^k, where q is a small prime as compared to p and identifiable as the prime degree of a prime polynomial with respect to, say, the variable w.
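The degree-doubling behind point 2 can be sketched explicitly. This is an assumed toy example, not TGD code: iterating a degree-2 polynomial k times under functional composition gives degree 2^k, hence up to 2^k roots, the 2^k disjoint space-time sheets of the picture above. Polynomials are coefficient lists, lowest degree first.

```python
# Iterating a degree-2 polynomial under functional composition (assumed
# toy example): the degree, and hence the maximal number of roots,
# doubles at each step, giving 2^k after k iterations.

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def compose(p, q):
    """Coefficients of p(q(x)) = sum_i c_i * q(x)^i."""
    result, power = [0], [1]
    for c in p:
        result = poly_add(result, [c * b for b in power])
        power = poly_mul(power, q)
    return result

P = [0, -1, 1]                       # x^2 - x = x(x - 1), degree 2
degrees, iterate = [], P
for k in range(1, 5):
    degrees.append(len(iterate) - 1)
    iterate = compose(iterate, P)
print(degrees)                       # degrees double: [2, 4, 8, 16]
```

For instance compose(P, P) gives x^4 - 2x^3 + x, i.e. (x^2-x)^2 - (x^2-x), whose degree is 2^2 = 4.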
I have considered several identifications of the p-adic primes and arguments for why the p-adic length scale hypothesis should be true.
  1. I have tentatively identified p-adic primes as ramified primes (see this) appearing as divisors of the discriminant D of a polynomial, defined as the product of root differences, which could correspond to that for g=(g1,Id).

    Could the 3 primes characterizing the prime polynomials fi:H→ C2 correspond to the small primes q? Could the ramified primes p∼ 2^k, as divisors of a discriminant D defined by the product of non-vanishing root differences, be assigned to the polynomials obtained as functional composites with iterates of a suitable g?

    Similar hypotheses can be studied for the iterates of g:C2→ C2 alone. The study of this hypothesis in the special case g=P2= x(x-1), described in an earlier section, did not give encouraging results. Perhaps the identification of p-adic primes as ramified primes is ad hoc. There is also the problem that there are several ramified primes, which suggests multi-p p-adicity. The conjecture also fails to specify how the ramified prime emerges from the iterate of g.

  2. A new identification of p-adic primes suggested by quantum p-adics is that p-adic primes correspond to the primes defining the degrees of prime polynomials g and that the Mersenne primes Mn= 2^n-1 correspond to rational functions P2^(º n)/P1, where / corresponds to element-wise division and P2 can be any polynomial of degree 2. This would mean a category theoretic morphism from quantum p-adics to ordinary p-adics. A more general form of the conjecture is that the rational functions Pp^(º n)/Pk correspond to preferred p-adic primes.

    The reason could be that for these quantum primes it is possible to solve the roots as zeros and poles analytically for p<5. This might make them cognitively very special. The primes p=2 and p=3 would be in a unique role information theoretically. For these primes there is indeed evidence for the p-adic length scale hypothesis, and these primes are also highly relevant for the notion of music harmony (see this, this and this).
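The first identification above, ramified primes as prime divisors of the discriminant built from root differences, can be illustrated with standard number-theoretic examples (assumed textbook cases, not taken from TGD):

```python
# Ramified primes as prime divisors of the discriminant D (textbook
# examples, assumed here for illustration, not from TGD).

def prime_factors(n: int) -> set:
    n, out, d = abs(n), set(), 2
    while d * d <= n:
        while n % d == 0:
            out.add(d)
            n //= d
        d += 1
    if n > 1:
        out.add(n)
    return out

# x^2 - 2 has D = b^2 - 4ac = 8: the prime 2 ramifies in Q(sqrt(2)).
print(prime_factors(8))                            # {2}

# A depressed cubic x^3 + p*x + q has D = -4p^3 - 27q^2.
# For x^3 - 2 (p = 0, q = -2): D = -108 = -(2^2 * 3^3).
print(prime_factors(-4 * 0**3 - 27 * (-2)**2))     # {2, 3}
```

The second example shows the problem noted in the text: already a simple cubic has several ramified primes, suggesting multi-p p-adicity rather than a single preferred prime.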

See the article A more detailed view about the TGD counterpart of Langlands correspondence or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

About quantum arithmetics

The holomorphy= holography vision reduces gravitation as geometry to gravitation as algebraic geometry and leads to an exact general solution of the geometric field equations as local algebraic equations for the roots and poles of rational functions and possibly also their inverses.

The function pairs f=(f1,f2): H→ C2 define a function field with respect to element-wise sum and multiplication. This is also true for the function pairs g=(g1,g2): C2→ C2. Now functional composition º is an additional operation. This raises the question whether ordinary arithmetics and p-adic arithmetics might have functional counterparts.

Functional (quantum) counterparts of integers, rational and algebraic numbers

Do the notions of integers, rationals and algebraic numbers generalize so that one could speak of their functional or quantum counterparts? Here the category theoretical approach, suggesting that the degree of the polynomial defines a morphism from quantum objects to ordinary objects, leads to a unique identification of the quantum objects.

  1. For maps g: C2→ C2, both the ordinary element-wise product and the functional composition º define natural products. The element-wise product does not respect polynomial irreducibility as an analog of primeness. Degree is multiplicative under º. In the sum, call it +e, the degree should be additive. This leads to the identification of +e as the element-wise product. One can identify the neutral element 1º of º as 1º=Id and the neutral element 0e of +e as the ordinary unit: 0e=1. This is a somewhat unexpected conclusion.

    The inverse of g with respect to º is g^(-1), which is a many-valued algebraic function, and the inverse with respect to +e is 1/g. The maps g which do not allow a decomposition g= hº i can be identified as functional primes and have prime degree. If one restricts the product and sum to g1 (say), the degree of a functional prime g corresponds to an ordinary prime. These functional integers/rationals can be mapped to ordinary integers/rationals by the morphism defined by the degree. One can construct functional integers as products of functional primes.

  2. The non-commutativity of º could be seen as a problem. The fact that the maps g act like operators suggests that for functional primes gp the factors in the product commute. Since g is analogous to an operator, this can be interpreted as a generalization of commutativity as a condition for the simultaneous measurability of observables.
  3. One can also define functional polynomials P(X), quantum polynomials, using these operations. In the terms pnº X^(º n), pn and X should commute, and the sum ∑e pnX^n corresponds to +e. The zeros of functional polynomials satisfy the condition P(X)=0e=1 and give as solutions roots Xk as functional algebraic numbers. The fundamental theorem of algebra generalizes at least formally if Xk and X commute. The roots have representations as space-time surfaces. One can also define the functional discriminant D as the º product of the root differences Xk-e Xl, with -e identified as element-wise division.
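The degree morphism discussed above can be sketched with single polynomials (an illustration with assumed example polynomials): degree is multiplicative under the composition product º and additive under the element-wise product identified as the sum +e, so taking degrees maps functional arithmetic onto ordinary integer arithmetic.

```python
# Degree morphism sketch (assumed example polynomials): deg is
# multiplicative under composition (the product) and additive under the
# pointwise product (the sum +e). Coefficient lists, lowest degree first.

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def poly_mul(p, q):                  # element-wise (pointwise) product
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def compose(p, q):                   # functional composition p(q(x))
    result, power = [0], [1]
    for c in p:
        result = poly_add(result, [c * b for b in power])
        power = poly_mul(power, q)
    return result

deg = lambda p: len(p) - 1

g = [1, 0, 1]        # x^2 + 1, prime degree 2
h = [0, 1, 0, 1]     # x + x^3, prime degree 3

print(deg(compose(g, h)), deg(g) * deg(h))    # composition: 6 6
print(deg(poly_mul(g, h)), deg(g) + deg(h))   # pointwise product: 5 5
```

This is exactly the morphism the text appeals to: a functional prime of prime degree p is sent to the ordinary prime p.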
About the notion of functional primeness

There are two cases to consider corresponding to f and g. Consider first the pairs (f1,f2): H→ C2.

  1. Primeness could mean that f does not have a composition f=gº h. A second notion of primeness is based on irreducibility, which states that f does not reduce to an element-wise product f= g× h. Concerning the definition of powers of functional primes in this case, a possible problem is that the power (f1^n,f2^n) defines the same surface as (f1,f2) as a root, with n-fold degeneracy. Irreducibility eliminates this problem but does not allow defining the analog of p-adic numbers using (f1^n,f2^n) as the analog of p^n.

  2. Since there are 3 complex coordinates of H, the fi are labelled by 3 ordinary primes pr(fi), r=1,2,3, rather than a single prime p. By the earlier physical argument related to the cosmological constant one could assume f2 fixed and restrict the consideration to f1. Every functional p-adic number, in particular a functional prime, corresponds to its own ramified primes. The simplest functional would correspond to (f1,f2)=(0,0) (could this be interpreted as stating the analog of the mod p=0 condition?).

  3. The degrees for the product of polynomial pairs (P1,P2) and (Q1,Q2) are additive. In the sum, the degree is not larger than the larger of the two degrees, and it can happen that the highest powers sum up to zero so that the degree is smaller. This is reminiscent of the properties of the non-Archimedean norm for p-adic numbers. The zero element defines the entire H as a root and the unit element does not define any space-time surface as a root.
Also the pairs (g1,g2) can be functional primes, both with respect to powers defined by element-wise product and functional composition º.
  1. The ordinary sum is the first guess for the sum operation in this case. Category theoretical thinking however suggests that the element-wise product corresponds to the sum, call it +e. In this operation degree is additive so that products and +e sums can be mapped to ordinary integers. The functional p-adic number in this case would correspond to an element-wise product ∏ Xn º Pp^(º n), where Xn is a polynomial of degree smaller than p, defining a reducible polynomial.
  2. A natural additional assumption is that the coefficient polynomials Xn commute with each other and with Pp. This is natural since the Xn and Pp act like operators and in quantum theory a complete set of commuting observables is a natural notion. This motivates the term quantum p-adics. The space-time surface is a disjoint union of space-time surfaces assignable to the factors Xk º Pp^(º k) º f. In quantum theory, quantum superpositions of these surfaces are realized. If the surface associated with Xk º Pp^(º k) º f is so large that it cannot be realized inside the CD, it is effectively absent from the pinary expansion. Therefore the size of the CD defines a pinary cutoff.
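The degree map then sends such a functional p-adic directly to an ordinary base-p expansion: deg(Xk º Pp^(º k)) = deg(Xk)·p^k, and element-wise products add degrees, so the total degree is ∑k dk p^k with digits dk = deg(Xk) < p. A sketch with assumed digit values:

```python
# Sketch: the degree of an element-wise product of factors X_k o P_p^(ok)
# is a base-p number with digits d_k = deg(X_k) < p. Digit values below
# are assumptions of this illustration.

p = 3
digits = [2, 0, 1, 2]                     # d_k = deg(X_k), each < p

total_degree = sum(d * p**k for k, d in enumerate(digits))
print(total_degree)                       # 2 + 0*3 + 1*9 + 2*27 = 65

# The digits are recovered from the degree as for an ordinary p-adic.
n, digits_back = total_degree, []
while n:
    n, d = divmod(n, p)
    digits_back.append(d)
print(digits_back)                        # [2, 0, 1, 2]
```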
The notion of functional p-adics

What about functional p-adics?

  1. The functional powers gp^(º k) of prime polynomials gp define analogs of powers of p-adic primes and one can define a functional generalization of p-adic numbers as quantum p-adics. The coefficients Xk in Xkº gp^(º k) are polynomials with degree smaller than p. The first idea which pops up in mind is that the ordinary sum of these powers is in question. What is however required is the sum +e so that the roots are disjoint unions of the roots of the +e summands Xkº gp^(º k). The disjointness corresponds to the fact that cognition can be said to be an analysis decomposing the system into pieces.
  2. Large powers of a prime appearing in p-adic numbers must approach 0e with respect to the p-adic norm, so that gp^(º n) must effectively approach Id with respect to º. Intuitively, a large n in gp^(º n) corresponds to a long p-adic length scale. For large n, gp^(º n) cannot be realized as a space-time surface in a fixed CD. This would prevent their representation and they would correspond to 0e and Id. During the sequence of SSFRs the size of the CD increases and at some critical SSFRs a new power can emerge in the quantum p-adic.
The very inspiring discussions with Robert Paster, who advocates the importance of universal Witt vectors (UWVs) and Witt polynomials (see this) in the modelling of the brain, forced me to consider Witt vectors as something more than a technical tool. As a special case, Witt vectors code for p-adic number fields.
  1. Both the product and the sum of ordinary p-adic numbers require carrying of memory digits and are therefore technically problematic. This is the case also for the functional p-adics. Witt polynomials solve this problem by reducing the product and sum to purely digit-wise operations.
  2. Universal Witt vectors and polynomials can be assigned to any commutative ring R, not only to p-adic integers. Witt vectors Xn define sequences of elements of a ring R and universal Witt polynomials Wn(X1,X2,...,Xn) define a sequence of polynomials of order n. In the case of a p-adic number field, Xn corresponds to the pinary digit of the power p^n and can be regarded as an element of the finite field Fp, which can also be mapped to a phase factor exp(ik2π/p). The motivation for Witt polynomials is that the multiplication and sum of p-adic numbers can be done in a component-wise manner for Witt polynomials, whereas for pinary digits the sum and product affect the higher pinary digits.
  3. In the general case, the Witt polynomial is a polynomial of several variables and can be written as Wn(X1,...,Xn) = ∑_(d|n) d Xd^(n/d), where d runs over the divisors of n, with 1 and n included. For p-adic numbers n is a power of p and the divisors d are powers of p. The Xd are analogous to elements of the finite field Fp appearing as coefficients of powers of p.
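As a concrete illustration of this digit-wise property, consider the standard p-typical Witt vectors of length 2 (a textbook computation, not specific to TGD; the helper names are mine). The Witt sum is defined so that the ghost components, given by the Witt polynomials, add componentwise:

```python
# Length-2 p-typical Witt vectors (x0, x1) with ghost components
# (w0, w1) = (x0, x0**p + p*x1); these are the Witt polynomials for
# the divisors d = 1 and d = p of n = p. The Witt sum is defined so
# that ghost components add componentwise ("digit-wise" arithmetic).

def ghost(x, p):
    x0, x1 = x
    return (x0, x0**p + p * x1)

def witt_add(x, y, p):
    x0, x1 = x
    y0, y1 = y
    # The carry term (x0**p + y0**p - (x0 + y0)**p)/p is always an integer.
    return (x0 + y0, x1 + y1 + (x0**p + y0**p - (x0 + y0)**p) // p)

p = 3
x, y = (2, 5), (4, 1)
gx, gy = ghost(x, p), ghost(y, p)
gs = ghost(witt_add(x, y, p), p)
assert gs == (gx[0] + gy[0], gx[1] + gy[1])  # componentwise addition
```

The integrality of the carry term is what makes the componentwise ghost arithmetic equivalent to genuine p-adic addition with carries.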
Witt polynomials are characterized by their roots, and the TGD view of space-time surfaces, both as generalized numbers and as representations of ordinary numbers, inspires the idea that the roots of suitably identified Witt polynomials could be represented as space-time surfaces in the TGD framework. This would give a representation of generalized p-adic numbers as space-time surfaces making the arithmetics very simple. Whether this representation is equivalent to the direct representation of p-adic numbers as surfaces is not clear.

Could the prime polynomial pairs (g1,g2): C2→ C2 and (f1,f2): H=M4× CP2→ C2 (perhaps states of pure, non-reflective awareness) characterized by ordinary primes give rise to functional p-adic numbers represented in terms of space-time surfaces such that these primes could correspond to ordinary p-adic primes?

See the article A more detailed view about the TGD counterpart of Langlands correspondence or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Holography= holomorphy vision and functional generalization of arithmetics and p-adic number fields

In TGD, the geometric and number theoretic visions of physics are complementary. This complementarity is analogous to the momentum-position duality of quantum theory and is implied by the replacement of a point-like particle with a 3-surface, whose Bohr orbit defines the space-time surface.

At a very abstract level this view is analogous to the Langlands correspondence. The recent view of TGD, involving an exact algebraic solution of field equations based on the holography= holomorphy vision, allows one to formulate the analog of the Langlands correspondence in the 4-D context rather precisely. This requires a generalization of the notion of Galois group from the 2-D situation to the 4-D situation: there are 2 generalizations and both are required.

  1. The first generalization realizes Galois group elements, not as automorphisms of a number field, but as analytic flows in H=M4× CP2 permuting different regions of the space-time surface identified as roots of a pair f=(f1,f2): H→ C2. The functions fi, i=1,2, are analytic functions of one hypercomplex and 3 complex coordinates of H.

  2. The second realization is for the spectrum generating algebra defined by the functional compositions gº f, where g: C2→ C2 is an analytic function of 2 complex variables. The interpretation is as a cognitive hierarchy of functions of functions of ..., and the pairs (f1,f2) which do not allow a composition of the form f=gº h correspond to elementary functions and to the lowest level of this hierarchy, a kind of elementary particles of cognition. Also the pairs g can be expressed as composites of elementary functions.

    If g1 and g2 are polynomials with coefficients in a field E identified as an extension of rationals, one can assign to gº f a set of root pairs (r1,r2) satisfying (f1,f2)= (r1,r2), where the ri are algebraic numbers and each pair defines a disjoint space-time surface. One can assign to the set of root pairs the analog of the Galois group as automorphisms of the algebraic extension of the field E appearing as the coefficient field of (f1,f2) and (g1,g2). This hierarchy leads to the idea that physics could be seen as an analog of a formal system appearing in Gödel's theorems and that the hierarchy of functional composites could correspond to a hierarchy of meta levels in mathematical cognition.

  3. The quantum generalization of integers, rationals and algebraic numbers to their functional counterparts is possible for maps g: C2→ C2. The counterpart of the ordinary product is the functional composition º for maps g. Degree is multiplicative under º. In the sum, call it +e, the degree should be additive, which leads to the identification of the sum +e as the element-wise product. The neutral element of º is 1º=Id and the neutral element of +e is the ordinary unit: 0e=1.

    The inverse with respect to º corresponds to g^(-1), which in general is a many-valued algebraic function, and the inverse with respect to the element-wise product to 1/g. The maps g, which do not allow a decomposition g= hº i, can be identified as functional primes and have prime degree. f: H→ C2 is prime if it does not allow a composition f= gº h. Functional integers are products of functional primes gp.

    The non-commutativity of º could be seen as a problem. The fact that the maps g act like operators suggests that the functional primes gp in the product commute. Functional integers/rationals can be mapped to ordinary integers/rationals by a morphism mapping their degree to an integer/rational.

  4. One can define functional polynomials P(X), quantum polynomials, using these operations. In P(X), the terms pnº X^(ºn), pn and X should commute. The sum ∑e pnº X^(ºn) is taken with respect to +e. The zeros of functional polynomials satisfy the condition P(X)=0e=1 and give as solutions roots Xk as functional algebraic numbers. The fundamental theorem of algebra generalizes at least formally if the Xk and X commute. The roots have a representation as space-time surfaces. One can also define the functional discriminant D as the º product of the root differences Xk -e Xl, with -e identified as element-wise division; the functional primes dividing D have a space-time surface as a representation.
What about functional p-adics?
  1. The functional powers gp^(ºk) of primes gp define analogs of powers of p-adic primes and one can define a functional generalization of p-adic numbers as quantum p-adics. The coefficients Xk in the terms Xkº gp^(ºk) are polynomials with degree smaller than p. The sum is +e, so that the roots are disjoint unions of the roots of the summands Xkº gp^(ºk).

  2. Large powers of a prime appearing in p-adic numbers must approach 0e with respect to the p-adic norm, so that gp^(ºn) must effectively approach Id with respect to º. Intuitively, a large n in gp^(ºn) corresponds to a long p-adic length scale. For large n, gp^(ºn) cannot be realized as a space-time surface in a fixed CD. This would prevent their representation and they would correspond to 0e and Id. During the sequence of SSFRs the size of the CD increases, and for some critical SSFRs a new power can emerge in the quantum p-adic.
  3. Universal Witt polynomials Wn define an alternative representation of p-adic numbers, reducing the multiplication of p-adic numbers to an element-wise product for the coefficients of the Witt polynomial. The roots for the coefficients of Wn define space-time surfaces: they should be the same as those defined by the coefficients of functional p-adics.
There are many open questions.
  1. The question of whether the hierarchy of infinite primes has relevance to TGD has remained open. It turns out that the 4 lowest levels of the hierarchy can be assigned to the rational function pairs (f1,f2): H→ C2 and the generalization of the hierarchy can be assigned to the composition hierarchy of prime maps gp.
  2. Could the transitions f→ gº f correspond to the classical non-determinism in which one root of g is selected? If so, the p-adic non-determinism would correspond to classical non-determinism. Quantum superposition of the roots would make it possible to realize the quantum notion of concept.

  3. What is the interpretation of the maps g^(-1), which in general are many-valued algebraic functions if g is a rational function? g increases the complexity but g^(-1) preserves or even reduces it, so that its action is entropic. Could the selection between g and g^(-1) relate to a conscious choice between good and evil?
  4. Could one understand the p-adic length scale hypothesis in terms of functional primes? The counterpart of a functional Mersenne prime would be g2^(ºn)/g1, where the division is with respect to the element-wise product defining +e. For g2 and g3 and also for their iterates the roots allow an analytic expression. Could functional primes near powers of g2 and g3 be cognitively very special?
See the article A more detailed view about the TGD counterpart of Langlands correspondence or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Tuesday, April 15, 2025

Impossible device creates free energy in the Earth's magnetic field

Sabine Hossenfelder has a Youtube talk (see this) with title ""Impossible" Device Creates Free Electricity from Earth's Magnetic Field". It tells about a very interesting anomaly, which could be the anomaly found already by Faraday but forgotten since it does not quite fit the framework of Maxwellian electrodynamics. I learned of this phenomenon during my student days. The effect was an exercise in an electrodynamics course but neither I nor others realized that the effect seems to be in conflict with Maxwell's theory!

If I understood correctly, the effect has now been firmly re-established in the case of the Earth's magnetic field (see this). The electric field would be created by a static dipole assignable to the magnetic field of the Earth, with respect to which the Earth rotates.

If my interpretation is correct, an analogous effect occurs also for the Faraday disk, which is a conducting disk rotating around its symmetry axis. Faraday observed that a very small radial electric field Eρ= ωρB (c=1) is generated. This radial electric field can be obtained from the vector potential At= ωBρ^2/2. This generates an electric charge density proportional to ωB inside the disk. This looks strange: how can rotation generate electric charge? Does this conform with Maxwell's laws?
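For concreteness, here is a back-of-the-envelope evaluation of the motional field Eρ = ωρB for an illustrative disk (the numbers are mine, not from the post):

```python
# Motional-EMF estimate for a Faraday disk (illustrative numbers):
# disk radius R = 0.1 m, angular velocity omega = 100 rad/s, field B = 0.1 T.
# Radial field E_rho = omega*rho*B; potential between axis and rim is
# the integral of E_rho from 0 to R, i.e. V = omega*B*R**2/2.

omega, B, R = 100.0, 0.1, 0.1
E_rim = omega * R * B        # field at the rim, ~1 V/m
V = omega * B * R**2 / 2     # ~0.05 V between axis and rim
```

Even for these rather aggressive laboratory values the potential is only tens of millivolts, which is why the effect is easy to miss.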

  1. What comes to mind is that the induction law implied by special relativity explains the effect. However, the rotation is not a rectilinear motion although the magnitude of the velocity is constant, so that the effect is more general than predicted by the Faraday law. Furthermore, the magnetic field rotates and, at least in quantum theory, nothing should happen if the rotational symmetry is exact.
  2. Could the charge generation be a dynamical phenomenon? Could there be a generation of a surface charge compensating for the charge density in the interior? The sign of this charge density depends on the direction of the rotation so that surface charge would be positive for the second direction of rotation. One would expect that the surface charge is negative since electrons are the charge carriers. Also a large parity violation would take place.
One could understand the effect in terms of the notion of induced gauge field. The explanation of the Faraday effect was one of the first applications of TGD (see this). The phenomenon is familiar to free energy researchers, whom academic researchers do not count as real researchers, and also technological applications have been proposed (see this).
  1. In the TGD framework, space-time is a 4-surface and gauge fields are induced, so that their geometrization is obtained. This means that the electroweak vector potentials are projections of the spinor connection of CP2. Let (cos(Θ),Φ) be spherical coordinates for the geodesic sphere S2 of CP2. The Kähler gauge potential is AΦ= cos(Θ) and the Kähler form is JΘΦ= sin(Θ). Introduce cylindrical coordinates (t,z,ρ,φ) for M4 and the space-time surface.
  2. The simplest space-time surface describing the situation without rotation corresponds to the embedding (cos(Θ),Φ) = (f(ρ),nφ), n integer. The non-vanishing component of the induced gauge potential is Aφ= nf(ρ) and the induced magnetic field is Bz= n∂ρf/ρ. The choice f=Bρ^2/(2n) gives a constant magnetic field.
  3. The rotation of the space-time surface means the replacement nφ → nφ-ωt, so that the induced vector potential gets a time component At= fω giving rise to the radial electric field Eρ= ωρB. This is what the Faraday law extended to curvilinear motion would give. One could interpret the Faraday effect as direct evidence for the notion of induced gauge field (see this).
How should one describe the generation of the em charge? Is the charge a purely geometric vacuum charge without any charge carriers, or are charge carriers involved?
  1. Could there be a charge transfer between the disk and a third party? In TGD, the third party would be what I call the field body, which plays a key role in the explanation of numerous anomalies. TGD predicts the possibility of both electric and magnetic field bodies, which are space-time surfaces giving rise to the TGD counterparts of Maxwellian fields and gauge fields.
  2. The field bodies are carriers of macroscopic quantum phases with a large effective Planck constant heff=nh0, h= (7!)^2 h0 (a good guess). For the electric field body, ℏem would be proportional to the product of an elementary particle charge q and a large em charge Q associated with a negatively charged system such as DNA, cell, Earth, capacitor,..., giving rise to a large scale electric field. For the gravitational magnetic body, ℏgr would be proportional to the product of a large mass M, such as the mass of the Earth or Sun, and a small mass m.
  3. Both signs of the charge of the rotating disk are in principle possible and are determined by the direction of the rotation, but in living matter negative charge is typical. It could be generated by the Pollack effect transforming ordinary protons to dark protons at the gravitational or electric field body associated with the system and inducing the generation of an exclusion zone (EZ) with negative charge, giving rise to an electric field body carrying dark electrons. Reversal of the Pollack effect would bring the protons back. Electrons could be transferred to the electric body or return from it. This would mean a large parity breaking effect and could relate closely to chiral selection in living matter. TGD indeed predicts large parity breaking effects since macroscopic electroweak fields are predicted to be possible.
For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

About the physical interpretation of gravastar in the TGD framework

The gravastar model for a blackhole-like object describes the stellar interior as de-Sitter space and the exterior as Schwarzschild metric. The surface of the blackhole is predicted to carry an exotic phase. In the previous post (see this) I demonstrated that the de-Sitter metric allows a realization as a space-time surface and that blackhole-like objects and in fact all stars could be modelled in this way.

In the sequel I will consider the physical interpretation of de-Sitter space-time represented as a 4-surface in the TGD framework.

  1. In TGD, the twistor lift predicts a cosmological constant Λ with the correct sign (see this and this). The twistor lift of TGD predicts Λ= 3/α^2, where α is a length scale, which is dynamical and has a spectrum. The mass density ρ is associated with the volume term of the dimensionally reduced action having 3/(8πGα^2) as coefficient. Also Kähler action is present and contains a CP2 part and possibly also an M4 part.

    Λ is not a universal constant in TGD but depends on the size scale of the space-time sheet. The naive estimate is that it corresponds to the size scale of the space-time sheet associated with the system or with its field body, which can be much larger than the system.

    The p-adic length scale hypothesis suggests that, apart from a numerical constant, the scale LΛ=(1/Λ)^(1/2) equals the p-adic length scale Lp characterizing the space-time sheet. The p-adic length scale hypothesis L(k) ∝ p^(1/2), where the prime p satisfies p∼ 2^k, implies L(k)= 2^((k-151)/2) L(151), L(151)∼ 10 nm.

  2. How does the average density of an astrophysical object, or even of a smaller object, relate to the vacuum energy density determined by Λ? There are two options: the vacuum energy density corresponds to an additional contribution to the average energy density, or it determines the average density completely, in which case one must assume quantum classical correspondence stating that the quantal fermionic contributions to the energy and other conserved quantum numbers are identical with the classical contributions, so that there would be a kind of duality. This would hold true only for the eigenvalues of charges of the Cartan algebra.
  3. One can assign to the cosmological constant a length scale as the geometric mean

    lΛ= (lP LΛ)^(1/2) ,

    where the Planck length is defined as lP= (ℏ G)^(1/2). One obtains therefore 3 length scales: the Planck length lP, the large length scale LΛ and their geometric mean lΛ.

  4. What is the relationship to the spectrum of Planck constants predicted by the number theoretical vision of TGD? If one replaces ℏ with ℏeff=nh0, one obtains a spectrum of gravitational constants G and of Planck length scales. The CP2 size scale R ∼ 10^4 lP is a fundamental length scale in TGD. One can argue that G is expressible in terms of R as Geff= R/ℏeff^(1/2) and that the CP2 length scale satisfies R=lP for the minimal value h0 of heff. For h0 one has the estimate h= (7!)^2 h0 in terms of the Planck constant h. This would predict a hierarchy of weakening values of G.

    Note that Geff= lP/ℏeff^(1/2) would predict the scaling lΛ∝ ℏeff^(1/4). The gravitational Planck constant ℏgr= GMm/β0 for a system formed by a large mass M and a small mass m has very large values.
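As a numerical illustration of the p-adic length scale hypothesis quoted above (L(151) ∼ 10 nm is taken from the text; the sample k values are my choice):

```python
# p-Adic length scale hypothesis as quoted in the text:
# L(k) = 2**((k-151)/2) * L(151) with L(151) ~ 10 nm.
# The k values below are picked for illustration only.

L151 = 10e-9  # meters
L = {k: 2 ** ((k - 151) / 2) * L151 for k in (151, 157, 163, 167)}
# e.g. L[167] = 2**8 * 10 nm = 2.56 micrometers
```

Each step of 2 in k doubles the length scale, so scales separated by a few units of k already differ by an order of magnitude.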

It is interesting to look at what values of lΛ are associated with LΛ , characterizing the size scale of a physical system or possibly of its field body.
  1. For the "cosmological" cosmological constant one has LΛ∼ 10^61 lP giving lΛ∼ 10^31.5 lP ∼ 2× 10^-4 m. This corresponds to the size scale of a neuron. LΛ could characterize the largest layer of its field body with a cosmological size scale.
  2. A blackhole with the mass of the Sun has Schwarzschild radius rS= 3 km. LΛ=rS gives lΛ∼ 2.19× 10^-16 m. The Compton length of the proton is lp=2.1× 10^-16 m. This estimate motivated the proposal that stellar blackholes could correspond to volume filling flux tubes containing a sequence of protons with one proton per proton Compton length. This monopole flux tube would correspond to a very long nuclear string defining a gigantic nucleus. This result conforms with quantum classical correspondence stating that the vacuum energy density corresponds to the density of fermions.
  3. One can also look at what one obtains for the Sun with radius RS= 6.9× 10^8 m, which is in a good approximation 100 times the radius RE= 6.4× 10^6 m of the Earth. lΛ scales up by the ratio (RS/rS)^(1/2) to lΛ ∼ 1.3× 10^-14 m. This corresponds to a nuclear length scale and the corresponding particle would have a mass of about 17 MeV. Is it a mere coincidence that there is recent very strong evidence (23 sigmas!) from the so-called Ytterbium anomaly (see this) for the so-called X boson with mass 16-17 MeV (see this and this)?

    The corresponding vacuum energy density ℏ/lΛ^4 would be about 8× 10^38 mp/m^3. This is 12 orders of magnitude higher than the average density 0.9× 10^27 mp/m^3 of the Sun. Since lΛ ∝ LΛ^(1/2) and ρ ∝ 1/lΛ^4 ∝ 1/LΛ^2, one obtains LΛ≥ 10^12 RS ∼ 10^20 m ∼ 10^5 ly, which corresponds to the size scale of the Milky Way.

    The only reasonable interpretation seems to be that LΛ characterizes the lengths of monopole flux tubes, which fill the volume only for blackhole-like objects. The TGD based model for the Sun involves monopole flux tubes connecting the Sun with the galactic nucleus or a blackhole-like object (see this). In this case the density of matter at the flux tubes would be much higher since protons would be replaced with their M89 counterparts with 512 times higher mass. For this estimate, the vacuum energy density along the flux tubes would be the average density of the Sun. At least two kinds of flux tubes would be required and this is consistent with the notion of many-sheeted space-time.

    The proposed solar model, in which the solar wind and energy would be produced in the transformation of M89 nuclei to ordinary M107 nuclei, allows one to consider the possibility that the Sun and stars are blackhole-like objects in the sense that the interior contains a volume filling flux tube tangle carrying a vacuum energy density equal to the average value of the solar mass density. I have considered this kind of model earlier (see this).

    One can wonder whether scaling up the value of h to heff could help to reduce the vacuum energy density assigned to the Sun. From lΛ∝ ℏeff^(1/4) it follows that the density, proportional to ℏeff/lΛ^4, does not depend on the value of heff.
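The geometric-mean estimate lΛ = (lP LΛ)^(1/2) for the solar Schwarzschild radius quoted above can be checked directly (a quick numerical sanity check, not from the post):

```python
import math

# Geometric mean l_Lambda = (l_P * L_Lambda)**0.5 for L_Lambda = r_S = 3 km
# (solar Schwarzschild radius); l_P and r_S as quoted in the text.

l_P = 1.616e-35    # Planck length in meters
L_Lambda = 3.0e3   # Schwarzschild radius of the Sun in meters

l_Lambda = math.sqrt(l_P * L_Lambda)   # ~2.2e-16 m, of the order of the
                                       # proton Compton length 2.1e-16 m
```

The result indeed lands on the proton Compton scale, which is the coincidence the flux-tube proposal rests on.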

To sum up, TGD could allow the interior of the gravastar solution as a space-time surface and this would correspond to the simplest imaginable model for the star. It is not clear whether Einstein's equations can be satisfied for some action based on the induced geometry but volume action is an excellent candidate even if cosmological constant is not allowed. In the TGD framework, the cosmological constant would correspond to the volume action as a classical action.

The Schwarzschild metric as exterior metric is representable as a space-time surface (see this) although it need not be consistent with any classical action principle and it could indeed make sense only at the quantum field theory limit when the many-sheeted space-time is replaced with a region of M4 made slightly curved. The spherical coordinates of the Schwarzschild metric correspond to spherical coordinates of the Minkowski metric and the Schwarzschild radius is associated with the radial coordinate of M4. The exotic matter at the surface of the star as a blackhole-like entity could have a counterpart in the TGD based model of the star (see this).

See the article Does the notion of gravastar make sense in the TGD Universe? or the chapter Some Solar Mysteries.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, April 14, 2025

Does the notion of gravastar make sense in the TGD Universe?

Mark McWilliams asked for my TGD based opinion about the gravastar as a competing candidate for the blackhole (see this and this). The metric of the gravastar model would be the de-Sitter metric in the interior of the gravastar. The density would be constant and there would be no singularity at the origin. The condition ρ=-p would be true for de-Sitter space and there would be an analogy with dark energy, which in the TGD framework contributes to galactic dark matter identified as classical volume and magnetic energies of what I call cosmic strings, which are 4-surfaces with a string world sheet as M4 projection. The condition ρ=p would hold true for the ultrarelativistic matter at the surface, which indeed becomes light-like if the infinite value of the radial component grr of the Schwarzschild metric is deformed to a finite value at the horizon. In the exterior one would have ρ=p=0.

TGD suggests a model of a blackhole-like object as a volume filling monopole flux tube tangle, which carries a constant mass density that can be interpreted as dark energy as a sum of classical magnetic and volume energies. Quantum classical correspondence forces us to ask whether the descriptions in terms of sequences of nucleons and in terms of classical energy are equivalent or whether the possibly dark nucleons must be added as a separate contribution. I have discussed the TGD based model of blackhole-like objects earlier (see this).

It came as a surprise to me that the gravastar could serve as a simple model for this structure and describe the space-time sheet at which the monopole flux tube tangle is topologically condensed. TGD also suggests that the surface of the star carries a layer of M89 matter consisting of scaled variants of ordinary hadrons with a mass scale which is 512 times higher than that of ordinary hadrons. This would be the counterpart of the exotic matter at the surface of the gravastar (see this). This model predicts that the nuclear fusion at the core of the star is replaced with a transformation of M89 hadrons to ordinary hadrons. This would explain the energy production of the star and also the stellar wind, and it raises the question about the structure of the interior. I have proposed that it could be a quantum coherent system analogous to a cell.

Consider now the TGD counterpart of the gravastar model at quantitative level.

  1. The metric of AdSn (anti de-Sitter) resp. dSn (de-Sitter) can be represented as the metric induced on a space-like resp. time-like hyperboloid of n+1-dimensional Minkowski space with one time-like dimension. The metric is induced from the flat metric

    dx0^2 - ∑_(i=1..n) dxi^2 ,

    with metric tensor deducible from the representation

    x0^2 - ∑_(i=1..n) xi^2 = ε α^2 ,

    as a surface. Here one has ε=-1 for AdSn and ε=1 for dSn.

    It should be warned that the Wikipedia definition of the dSn (see this) contains the right-hand side with a wrong sign (there is ε=-1 instead of ε=1) whereas the definition of AdSn (see this) is correct. For n=4 this could realize AdS4 resp. dS4 as a space-like resp. time-like hyperboloid of 5-D Minkowski space.

  2. In TGD this representation as a surface is not possible as such. One can however compactify the fifth, space-like dimension and represent it as a geodesic circle of CP2: dx5^2 is replaced with R^2 dφ^2 and x5 with Rφ. The contribution of S1 to the induced metric is very small since R corresponds to the CP2 radius. The space-time surface would be defined by the condition

    a^2 = R^2 φ^2 + ε α^2 ,

    where a^2 = t^2-x^2-y^2-z^2 defines the light-cone proper time a. In TGD it would be associated with the second half of the causal diamond (CD). A more convenient form is the following

    R^2 φ^2 = a^2 - ε α^2 ,

    where a is the light-cone proper time coordinate of M4. This requires a^2 ≥ ε α^2. For ε=1 this implies a^2 ≥ α^2. For ε=-1 one has a^2 ≥ -α^2 so that also space-like hyperboloids are possible.

  3. If the embedding is possible, one obtains an infinite covering of S1 by the mass shells a^2 = R^2 φn^2 + ε α^2, where one has φn= φ + n2π. For φ → 0 one has a → 2πnR for α << R. The hyperboloids associated with the φn define a lattice of hyperboloids at this limit, a kind of time crystal.
  4. If the classical action is the Kähler action of CP2, this surface is a vacuum extremal since the CP2 projection is 1-dimensional. If also the contribution of the M4 Kähler action to the Kähler action, suggested by the twistor lift of TGD, is allowed, the action is instanton action and vanishes although the induced M4 Kähler form does not vanish and defines a self-dual abelian field. It is not quite clear whether this is a vacuum extremal anymore.

    If the Kähler action vanishes, volume action is the natural guess for the classical action and minimal surface equations are indeed satisfied if S1 is a geodesic circle. The mass density associated with this action would be constant in accordance with the de-Sitter solution.

  5. Consider next the induced metric. One has

    φn = n2π + [(a/R)^2 - ε (α/R)^2]^(1/2) .

    This gives R dφn/da = ± a/[a^2 - ε α^2]^(1/2). Note that a^2 ≥ ε α^2 is required to guarantee the reality of dφ/da. The gaa component of the induced metric (Robertson-Walker metric with k=-1 sub-critical mass density) is

    gaa = 1 - R^2 (dφn/da)^2 = 1 - a^2/(a^2+ε α^2) = ε α^2/(a^2+ε α^2) .

It is useful to consider AdS4 and dS4 separately.
  1. For AdS4 with ε=-1, the reality of dφ/da implies a^2 > α^2 and gaa<0, so that the induced metric has an Euclidean signature. This is mathematically possible and CP2 type extremals with Euclidean signature play an important role in the TGD based model of elementary particles. What an Euclidian cosmology could mean physically is however not clear.
  2. For dS4 with ε=1, dφ/da is real for a^2+α^2 > 0, implying a^2 ≥ -α^2. This allows all time-like hyperboloids and also some space-like hyperboloids. One has

    gaa = 1 - R^2 (dφn/da)^2 = 1 - a^2/(a^2+α^2) = α^2/(a^2+α^2) .

    gaa is positive in the range allowed by the reality of dφ/da.

  3. The mass density of the Robertson-Walker cosmology is obtained from the standard expression of the metric (note that one has dt^2 = gaa da^2) and is given by

    ρ = (3/8πG)[((da/dt)/a)^2 - 1/a^2] = (3/8πG)[1/(gaa a^2) - 1/a^2] = 3/(8πG α^2) .

    The mass density is constant and could be interpreted in terms of a dynamically generated cosmological constant in the GRT framework. This is not what happens in the usual Big Bang cosmology but would conform with a model of a star in an expanding Universe.
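The constancy of the mass density can be verified symbolically. The sketch below (my check, using SymPy) substitutes gaa = α^2/(a^2+α^2) into the Robertson-Walker expression for ρ and confirms that the a-dependence cancels:

```python
from sympy import symbols, simplify, pi

a, alpha, G = symbols('a alpha G', positive=True)

# Induced metric component for the dS case, as derived in the text.
g_aa = alpha**2 / (a**2 + alpha**2)

# Robertson-Walker mass density (k = -1) with dt**2 = g_aa * da**2.
rho = (3 / (8 * pi * G)) * (1 / (g_aa * a**2) - 1 / a**2)

# The a-dependence cancels: rho = 3/(8*pi*G*alpha**2), a constant.
assert simplify(rho - 3 / (8 * pi * G * alpha**2)) == 0
```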

Somewhat surprisingly, TGD could allow the interior of the gravastar solution as a space-time surface and this would correspond to the simplest imaginable model for the star. It is not clear whether Einstein's equations can be satisfied for some action based on the induced geometry but volume action is an excellent candidate even if cosmological constant is not allowed. In the TGD framework, the cosmological constant would correspond to the volume action as a classical action.

The Schwarzschild metric as exterior metric is representable as a space-time surface (see this) although it need not be consistent with any classical action principle and it could indeed make sense only at the quantum field theory limit when the many-sheeted space-time is replaced with a region of M4 made slightly curved. The spherical coordinates of the Schwarzschild metric correspond to spherical coordinates of the Minkowski metric and the Schwarzschild radius is associated with the radial coordinate of M4. The exotic matter at the surface of the star as a blackhole-like entity could have a counterpart in the TGD based model of the star (see this).

See the article Does the notion of gravastar make sense in the TGD Universe? or the chapter Some Solar Mysteries.

For a summary of the earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, April 12, 2025

The rotation of galaxies in the same direction in giga-ly scale as evidence for the TGD view of space-time and cosmic quantum coherence


Sabine Hossenfelder told in her Youtube video (see this) about the recent finding of Lior Shamir (see the article) that galaxies have a clear tendency to rotate in the same direction in Giga light-year length scales. There is also a popular article (see this) about this.

The following is the abstract of the article of Shamir.

JWST provides a view of the Universe never seen before, and specifically fine details of galaxies in deep space. JWST Advanced Deep Extragalactic Survey (JADES) is a deep field survey, providing an unprecedentedly detailed view of galaxies in the early Universe. The field is also in relatively close proximity to the Galactic pole. Analysis of spiral galaxies by their direction of rotation in JADES shows that the number of galaxies in that field that rotate in the opposite direction relative to the Milky Way galaxy is 50 per cent higher than the number of galaxies that rotate in the same direction relative to the Milky Way. The analysis is done using a computer-aided quantitative method, but the difference is so extreme that it can be noticed and inspected even by the unaided human eye. These observations are in excellent agreement with deep fields taken at around the same footprint by the Hubble Space Telescope and JWST. The reason for the difference may be related to the structure of the early Universe, but it can also be related to the physics of galaxy rotation and the internal structure of galaxies. In that case the observation can provide possible explanations to other puzzling anomalies such as the H0 tension and the observation of massive mature galaxies at very high redshifts.

The popular article says that the fractions of galaxies rotating in opposite directions with respect to the Milky Way are 2/3 and 1/3 and there is no doubt that the observation is real. The Doppler effect allows us to deduce the rotation direction of a given galaxy: blueshift occurs on the side rotating towards us. The effect occurs in the scale of a Giga light year.

From the article of Shamir one learns that these kinds of observations have been made earlier, as early as 1985. Already Zeldovich observed that galaxies are associated with long linear structures and tend to rotate in the same direction. I have proposed a TGD based explanation in terms of long cosmic strings whose tangles give rise to the generation of galaxies along them (see this).

Several explanations for the findings of Shamir have been proposed. The entire universe has been proposed to rotate. Also a fractal Universe has been proposed in which case the rotating structures would appear in all scales. TGD predicts that space-times are 4-surfaces in H=M4×CP2. This leads to the notion of many-sheeted space-time strongly suggesting the possibility of fractal structures in all scales. A fractal structure in a given scale would correspond to a quantum coherence region. The realization that holography= holomorphy principle reduces the extremely non-linear field equations of TGD to algebraic equations led to the surprising conclusion that the fractality in question is a 4-D generalization of the fractality of Mandelbrot fractals and Julia sets (see this). The number theoretic vision predicts a hierarchy of effective Planck constants and provides a precise formulation for what the long range quantum coherence means.

Galaxies as 4-surfaces assignable with monopole flux tubes obtained by a thickening of a cosmic string are predicted to be organized along long string-like objects, cosmic strings, and to be highly correlated, for instance having correlated spin directions. This explains the findings of Zeldovich, and large scale quantum coherence allows us to understand the more general findings. In the very early Universe, cosmic strings with 2-D M4 projection would have dominated.

For the TGD based cosmology and astrophysics see for instance this. For the recent number theoretic vision of TGD see this.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Friday, April 11, 2025

Some questions related to the maps g defining cognitive hierarchies realized as space-time surfaces

Rational maps g, and possibly also their inverses, would be central in the realization of cognition and reflective hierarchies. These ideas are however far from their final form and in the following I try to imagine and exclude various alternatives. Some new results emerge.
  1. Quantum realization of concepts as superpositions in the set of space-time surfaces defining the classical concept is more natural than the classical realization.
  2. The roots of gºf correspond to classical non-determinism and would naturally correspond to generalized p-adicity and could also explain the p-adic length scale hypothesis.
  3. The inverses g-1 of the rational maps g correspond to algebraic functions unless g is an analog of a Möbius transformation. g-1ºf preserves the number of roots of f and decreases it for the iterate of g, in which case complexity and negentropy are reduced. In the framework of the TGD inspired theory of consciousness, this raises the question whether the quantum correlates for good and evil deeds as SFRs could correspond to maps of type g, increasing algebraic complexity, information and quantum coherence, and to maps of type g-1, possibly reducing them.

1. What could happen in the transition f→ gºf?

The proposal is that in SSFR the transition f→ gºf takes place. The number of roots becomes n-fold if g is a rational function of the form P/Q of degree n. What could this transition mean physically? One can consider two options.

1.1 The option allowing quantum realization of concept

The nm roots (poles and zeros) of gºf, where f has m roots, would be alternative outcomes of SSFR, of which only a single outcome, or possibly a quantum superposition of the outcomes, would be selected. What is so nice is that the classical non-determinism crucial for the TGD view of consciousness would follow automatically from the holography= holomorphy hypothesis without any additional assumptions.
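The multiplication of the root count under composition can be checked with a small sympy computation. The polynomials below are toy examples chosen for illustration, not taken from the text: if f has m roots and g = P/Q has a numerator of degree n, the zeros of gºf solve P(f(x)) = 0, a polynomial of degree nm.

```python
import sympy as sp

x = sp.symbols('x')
f = x**3 - 2*x + 1          # f with m = 3 roots (toy example)
P = x**2 - 3                # numerator of g = P/Q, degree n = 2
Q = x + 5                   # denominator contributes poles, not zeros

# Zeros of (g o f)(x) = P(f(x))/Q(f(x)) are the roots of P(f(x)) = 0.
composed = sp.expand(P.subs(x, f))          # P(f(x)), degree n*m = 6
print(sp.degree(composed, x))               # 6
print(len(sp.Poly(composed, x).nroots()))   # 6 roots, with multiplicity
```

The degree, and hence the number of roots counted with multiplicity, is multiplied by n exactly as the text states.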

Conservation laws conform with this view. All the alternative Bohr orbits would have the same classical conserved charges. The quantum superposition of the roots would represent a particular quantum realization of a concept and f→ gºf would mean a refinement of the quantum concept defined by f.

The hypothesis that the classical non-determinism corresponds to the p-adic non-determinism would transform to the statement that the different Bohr orbits associated with gºk define analogs of sequences of k pinary digits if there are p outcomes for gºf. A possible interpretation would be in terms of a k-digit pinary expansion in powers of p. The largest integer would correspond to n=2k for gºk. The generalization of the notion of p-adic numbers, for which p is replaced by a functional prime g, based on the generalization of Witt polynomials, is suggestive. It remains unclear whether this could allow us to understand the generalization of the p-adic length scale hypothesis stating that a large prime p∼ psk can be assigned to this set of Bohr orbits.
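The doubling of the number of roots under iteration of a degree-2 map, giving the analog of k binary (p=2 pinary) digits, can be illustrated with a toy sympy computation; the maps g and f below are hypothetical examples.

```python
import sympy as sp

# Sketch: for a degree-2 polynomial g, the k-fold iterate g o ... o g
# composed with f multiplies the number of roots by 2^k, so the set of
# Bohr orbits is labelled like a k-digit binary string.
x = sp.symbols('x')
g = x**2 - 1            # toy degree-2 map
f = x - 3               # single root, m = 1

h = f
for k in range(1, 4):
    h = g.subs(x, h)                        # one more composition with g
    print(k, sp.degree(sp.expand(h), x))    # degree 2^k: 2, 4, 8
```

Each composition step doubles the degree, so after k steps there are 2k roots, matching the counting in the text.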

1.2 The option allowing a classical realization of concept

The union of nm space-time surfaces, where n is the degree of g and m is the number of roots of f, is generated in the step f→ gºf. The set of nm space-time surfaces would give a classical realization of a concept as a set. Does this make sense? The first grave objection is that there is no continuous time evolution between f and gºf multiplying the number of space-time surfaces by n. The second objection relates to the conservation laws, which seem to be violated. The third objection is that classical non-determinism is lost. It seems that this last objection cannot be circumvented.

One can try to imagine ways to overcome the first two objections.

Option I: ZEO interpreted in the "eastern" sense in principle allows the creation of n space-time surfaces from each of the m space-time surfaces assignable with f. This is because the total classical charges of the zero energy states as sums of those for states at the boundaries of CD vanish. Zero energy state would be analogous to a quantum fluctuation.

Option II: In standard ontology, the classical realization of the concept as union of space-time surfaces defining its instances is possible only in a situation in which space-time surfaces are vacua or nearly vacua. Could this kind of surface serve as a template for the non-vacuum physical systems?

Cell replication, which would correspond to n=2 for g, was motivated by the consideration of both options, at least half-seriously. The instantaneous replication of the space-time surface representing the cell does not look sensible since the generation of biomatter requires a feed of metabolites and metabolic energy. Could a replicated field body serve as a kind of template for the formation of a final state involving two cells generated in f→ gºf? Could the replication occur at the level of the field body, proposed to control the biological body?

For Option II, conservation laws pose a problem for replication. In ZEO the classical charges of the nm space-time surfaces should be those associated with the passive boundary of CD and therefore the same as those for f.

  1. Could the space-time surfaces be special in the sense that the classical charges vanish? The vanishing of classical conserved charges is not possible unless the classical action reduces to Kähler action allowing vacuum extremals. The finite size of CD indeed allows by the Uncertainty Principle a slight violation of the classical conservation laws assignable to the Poincare invariance (see this). This cannot be excluded, and the original proposal (see this and this) indeed was that Kähler action defines the classical action by its unique property of having a huge classical non-determinism defining the 4-D analog of spin-glass degeneracy (see this), which could play a key role in biology.

    If one assigns to M4 the analog of the Kähler structure (see this), this argument weakens since the induced M4 and CP2 Kähler forms must vanish for the vacuum extremals. However, for a given Hamilton-Jacobi structure defining the M4 Kähler form, there exist space-time surfaces of this kind. They are Cartesian products of Lagrangian 2-manifolds of M4 and CP2 defining vacuum string world sheets.

    Holography= holomorphy principle, implying that Bohr orbits are minimal surfaces, seems to hold true for any classical action which is general coordinate invariant and determined by the induced geometry. For the Kähler action, the coefficient Λ of the volume term, defining the analog of the cosmological constant, would vanish. Holography= holomorphy principle does not allow Cartesian products of Lagrangian 2-manifolds of M4 and CP2. One could hope that their vacuum property could change the situation but this does not look like an elegant option.

  2. For the standard ontology, one can also consider another option. The classical action, and therefore the classical conserved charges, are for the twistor lift proportional to 1/αK, where αK is Kähler coupling strength. The conservation of charges would suggest αK→ nαK requiring heff→ heff/n in the n-fold multiplication. For heff=h this would require h→ h/n. This looks strange.

    h need not however be the minimal value of heff and I have considered the possibility that one has h=n0h0 (see this), where n0 corresponds to the ratio R2(CP2)/lP2. The CP2 size scale would be given by the Planck length lP, but for h=n0h0 it would be scaled up to R2 ∼ n0lP2, n0 ∈ [107,108]. The estimate for n0 is given by n0=(7!)2, having the primes 2,3,5,7 as factors (see this). R(CP2) would naturally correspond to the M4 size of a wormhole throat. h could be reduced by a factor appearing in n0 and there is some evidence for the reduction of heff by a small power of 2 (see this). This mechanism could work for a functional prime g characterized by a prime p∈{2,3,5,7}.
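The arithmetic behind the estimate n0=(7!)2 is easy to verify directly; the following only checks the numbers quoted in the text.

```python
import math

# Check: n0 = (7!)^2 lies in [10^7, 10^8] and its prime factors are 2,3,5,7.
n0 = math.factorial(7) ** 2
print(n0)                        # 25401600
print(10**7 <= n0 <= 10**8)      # True


def prime_factors(n):
    """Return the set of distinct prime factors of n by trial division."""
    fs, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            fs.add(p)
            n //= p
        p += 1
    if n > 1:
        fs.add(n)
    return fs


print(sorted(prime_factors(n0)))  # [2, 3, 5, 7]
```

So n0 = 25401600 ∼ 2.5×107 indeed falls in the quoted range and contains exactly the primes 2, 3, 5 and 7.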

The classical realization of a concept does not look realistic except possibly for Option I.

2. About the interpretation of the inverses of the maps g

What could be the interpretation of the inverse maps g-1 for g=P/Q, assuming that they can occur? g-1 is a multivalued algebraic function analogous to z1/n. In f→ g-1ºf the roots rn of f are mapped to g(rn) so that their number does not increase. For the iterate of g, g-1 means the reduction of the number of roots by a factor 1/n. The complexity does not increase and can even decrease.
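The reduction of the root count when one composition step of g is undone, the analog of applying g-1 to an iterate, can be illustrated with a toy sympy computation; the maps below are hypothetical examples, not from the text.

```python
import sympy as sp

# Sketch: for an iterate g o g o f with a degree-2 g, removing one g
# (the analog of applying g^-1) halves the number of roots, i.e. reduces
# the algebraic complexity.
x = sp.symbols('x')
g = x**2 - 1            # toy degree-2 map
f = x - 3               # single root

h2 = sp.expand(g.subs(x, g.subs(x, f)))   # g o g o f, degree 4
h1 = sp.expand(g.subs(x, f))              # g o f: one g removed, degree 2
print(sp.degree(h2, x), sp.degree(h1, x))  # 4 2
```

The degree drops from 4 to 2, matching the claim that g-1 applied to an iterate reduces the number of roots by the factor 1/n.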

This is just the opposite of what occurs in f→ gºf. The increase of complexity is assigned with number theoretic evolution and NMP. Suppose for a moment that the inverses g-1 are allowed. What could be their interpretation?

  1. The sequence of the inverses g-1 does not correspond to non-determinism and does not give rise to a refinement of either the classical or the quantum concept. There is no increase of complexity, and for iterates the complexity can even be reduced.

  2. Could the reduction of a cell to the stem cell level, as a reverse of cell differentiation occurring via cell replications, correspond at the level of the field body to a sequence of g-1:s reducing the complexity? Could cancer correspond to this kind of process? This would conform with the interpretation in terms of the reduction of negentropy.

  3. The first option is that the maps of type g-1 are possible for both arrows of the geometric time. For the iterates of g, g-1 destroys complexity and information and reduces the level of cognition in this case. g-1 would obey anti-NMP in this case. Both maps g and g-1 make possible a trial and error process. If an iterate of g is not involved, the roots rn of hºf are mapped by g to roots g(rn) and the number of roots is preserved. It is not clear whether the algebraic complexity is increased or reduced.

    This suggests that NMP (see this) is not lost if both maps of type g and g-1 are allowed. Furthermore, there is a lower bound for algebraic complexity but no upper bound, so that it seems that NMP remains true even if maps of type g-1 are allowed.

    Any quantum theory of consciousness should be able to say something about the quantum correlates of ethics (see this). In TGD, one can assign the notion of good to state function reductions (SFRs) inducing the increase of quantum coherence, occurring in a statistical sense in SFRs. It would correspond to the increase of algebraic complexity and would be accompanied by the increase of heff and of the amount of potentially conscious information. Is evil something analogous to a thermodynamic fluctuation reducing entropy, or can one speak of an active evil? Could the notion of evil as something active be assigned with the occurrence of maps of type g-1?

  4. The maps of type g and g-1 are reversals of each other and differ unless they act as symmetries analogous to Möbius transformations. Could they be assigned with SSFRs with opposite arrows of geometric time? If so, negentropy would not increase for both arrows of the geometric time and there would be a universal arrow of time, analogous to that assumed in standard thermodynamics, defined by negentropy increase. If a universal arrow of time exists, it should somehow relate to the violation of time reflection symmetry T. To me this option does not look plausible.

    If this is the case, the trial and error process allowed by ZEO and based on pairs of BSFRs would involve a map of type g-1 induced by SSFRs whereas the second BSFR would correspond to a map of type g. The sequence of SSFRs after the first BSFR would preserve or even reduce complexity and would mean starting from a new state at the passive boundary (PB) of CD. If the first BSFR is followed by a sequence of SSFRs of type g, it in general leads to a more negentropic new initial state at PB.

See the article A more detailed view about the TGD counterpart of Langlands correspondence or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, April 09, 2025

Why do very distant galaxies have very sharp boundaries?

Ethan Siegel has published an interesting popular article in Bigthink (see this). It states that the deepest of the Hubble deep fields, the Ultra Deep Field and the Extreme Deep Field, show compact, luminous galaxies amidst a sea of total darkness. They are visible against the dark background as bright spots. How is this possible? One would expect that their brightness gradually fades near the boundaries.

The explanation discussed in the article of Siegel is that much of the actual starlight had been oversubtracted as part of the field-flattening method used. When a proper reanalysis is conducted, the light is preserved, showing that the sky is brighter than anyone realized. The proposed correction to the subtraction is discussed in an Astronomy & Astrophysics article by Borlaff et al (see this).

TGD allows one to consider an alternative explanation. There are observations of galaxies which are so distant that they should not be visible at all, since at the moment of emission the Universe should have contained mostly neutral hydrogen absorbing the light, and would therefore have been opaque. I considered the TGD explanation for these findings around 2018 (see this).

The light would arrive along monopole flux tubes connecting distant galaxies to our galaxy, to our solar system and to the Earth. These flux tubes correspond to 4-surfaces in H=M4×CP2, a kind of space-time quanta, and would act like light cables. The signal intensity would not be reduced as the inverse of the distance squared, as the standard view of space-time and fields predicts. This would make it possible to receive light from objects beyond the distance corresponding to the time when reionization took place and the universe became transparent (see this). There is evidence for these objects (see this, this and this).
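The contrast between free propagation and a light cable can be sketched with a toy estimate. The luminosity and cross section below are hypothetical placeholders, not values from the text.

```python
import math

# Toy comparison: flux from a source of power L spreading over a sphere
# falls as 1/r^2, while flux confined to a cable of fixed cross section A
# stays constant with distance.
L = 1.0e26            # hypothetical luminosity, W
A = 1.0               # hypothetical cable cross section, m^2


def flux_free(r):
    """Flux at distance r for isotropic emission, W/m^2."""
    return L / (4 * math.pi * r**2)


def flux_cable(r):
    """Flux at distance r inside a cable of cross section A, W/m^2."""
    return L / A      # independent of r


for r in (1e20, 1e22, 1e24):
    print(f"r={r:.0e}  free={flux_free(r):.3e}  cable={flux_cable(r):.3e}")
```

The free-propagation flux drops by four orders of magnitude per factor-100 increase in distance, while the cable flux is unchanged, which is the mechanism invoked above for seeing very distant objects.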

These light cables would have sharp boundaries, which would explain the sharp boundaries of galaxies without the need for subtraction corrections. This would also give an estimate for the transversal size of the monopole flux tubes or flux tube bundles.

See the article Some Solar Mysteries or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.