https://matpitka.blogspot.com/?m=0

Sunday, April 27, 2025

Andromeda paradox

Sabine Hossenfelder has a nice Youtube video (see this) about the Andromeda paradox (see this) introduced by Penrose. This paradox had escaped my attention.

Special relativity predicts that the time = constant hyper-surfaces defining the moments "Now" are different for observers in relative motion. It is of course far from clear whether moments "Now" correspond geometrically to this kind of hyper-surfaces. In special relativity one might argue that this is the case, but in general relativity general coordinate invariance makes this kind of identification questionable since the Lorentz invariance of special relativity is only approximate.

If one assumes that the special relativistic notion is a good approximation, one ends up with the Andromeda paradox. Assume two observers at Earth, one moving towards and the other away from Andromeda. Their time = constant hypersurfaces are tilted with respect to each other. Suppose that the Andromedans discuss sending a rocket to Earth. The first observer sees them still discussing whether to do this whereas the second observer sees that the rocket has already been sent. This looks paradoxical.

What can one say about the Andromeda paradox in the TGD framework?

  1. TGD can be seen as a hybrid of special and general relativities so that Poincare invariance becomes exact at the level of the 8-D embedding space H=M4× CP2 but is lost at the level of space-time surfaces. Therefore the naive definition of simultaneity at the level of H could make sense.
  2. In TGD, the standard quantum ontology in which the physical states correspond to 3-D hypersurfaces is replaced with zero energy ontology (ZEO) in which the quantum states are superpositions of Bohr orbit-like 4-surfaces as time evolutions of 3-surfaces. The basic classical object is a Bohr orbit satisfying the holography = holomorphy principle rather than a 3-surface: this conforms nicely with general coordinate invariance (see this).
  3. The "subjective now" in this framework is defined by the sensory input. As far as light signals are concerned, it would correspond to a past-directed light-cone beginning from the position of the observer. The tips of these light-cones would be at slightly different positions for the two observers in the situation considered.
  4. This argument relates to consciousness and volitional action since there is a subjective moment at which someone sends the rocket. In TGD inspired theory of consciousness, one must distinguish between subjective and geometric time, and this moment corresponds to a moment of subjective time as a quantum jump in which the superposition of space-time surfaces involving the sender and the environment, in particular the rocket, is replaced with a new one. In zero energy ontology (ZEO) this moment could correspond to a pair of "big" state function reductions (BSFRs) changing temporarily the arrow of geometric time in Andromeda.

    The two observers would receive signals from zero energy states before and after the pair of BSFRs, that is from geometric times before and after the decision was made, and these signals would correspond to topological light rays starting from zero energy states before and after the BSFR pair. These superpositions of space-time surfaces can be approximated with a single "average" space-time surface. There would be no paradox in ZEO.

To sum up, in general relativity the paradox results from the assumption that space-time is a fixed arena of dynamics. In TGD, the description of the act of free will as a pair of BSFRs however forces us to give up this assumption. The act of free will replaces the zero energy state with a new one and the superposition of space-time surfaces changes.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Separation of dynamics for cognitive and matter degrees of freedom and classical action as generalized discriminant

Holography= holomorphy hypothesis allows one to reduce the field equations to purely algebraic equations. Space-time surfaces correspond to roots of pairs f=(f1,f2) of two analytic functions fi: H=M^4× CP2→ C2. Analyticity means that the functions are analytic functions of one hypercomplex coordinate and one complex coordinate of M4 and of the two complex coordinates of CP2. The maps g:C2→ C2 are dynamical symmetries of the equations.

Interestingly, the dynamics associated with g and f separate in a well-defined sense: the roots of gº f=0 correspond to the roots of g, independently of f. This has an analogy in computer science: f is analogous to the substrate and g to the program. The assignment of correlates of cognition to the hierarchies of functional compositions of g is analogous to this principle but does not mean that conscious experience is substrate independent.

This suggests that the classical action exponential is expressible as a product of action exponentials associated with the f and g degrees of freedom. The proposal that the classical action exponential corresponds to a power of the discriminant, defined as the product of ramified primes for a suitably identified polynomial, carries the essential information about the polynomial and is therefore very attractive and could be kept. The action exponentials would in turn be expressible as suitable powers of discriminants defined by the roots of f=(f1,f2) resp. g=(g1,g2): D(f,g)= D(f)D(g).

How to define the discriminants D(f) and D(g)?

  1. The starting point formula is the definition of D for the case of an ordinary polynomial of a single variable as a product of root differences: D= ∏i≠j (ri-rj).
  2. How to generalize this? Restrict the consideration to the case f=(f1,f2). Now the roots are replaced with pairs (r1,i|r2,j, r2,j), where r1,i|r2,j are the roots of f1, when the root r2,j of f2 is fixed. For a fixed r2,j, one can define discriminant D1|r2,j using the usual product formula. The formula should be consistent with the strong correlation between the roots: the product of discriminants for f1 and f2 does not manifestly satisfy this condition. The discriminant should also vanish when two roots for f1 or f2 coincide.

  3. The first guess for the discriminant for f1,f2 is as the product D1|2= ∏j≠k (r2,jD1|r2,j- r2,kD1|r2,k). This formula is bilinear in the roots and has the required antisymmetries under the exchange of f1 and f2. The differences r2,j-r2,k do not appear explicitly in the expression. However, this expression vanishes when two roots of f2 coincide, which is consistent with the symmetry under the exchange of f1 and f2. If this is not the case, the symmetry could be achieved by defining the discriminant as the product D1,2 = D1|2D2|1.
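As a sanity check of the single-variable starting point, the product formula of item 1 can be evaluated numerically. This is a purely illustrative sketch (assuming numpy), not part of the TGD formalism; note that the bare product over ordered pairs differs by sign and normalization conventions from the textbook discriminant:

```python
import numpy as np

def discriminant_from_roots(coeffs):
    """Product over ordered pairs i != j of (r_i - r_j), the product
    formula of item 1 (no leading-coefficient normalization)."""
    r = np.roots(coeffs)
    D = 1.0 + 0.0j
    for i in range(len(r)):
        for j in range(len(r)):
            if i != j:
                D *= r[i] - r[j]
    return D

# x^2 - 1 has roots +1 and -1: D = (1-(-1)) * ((-1)-1) = -4
print(discriminant_from_roots([1, 0, -1]))
# x^2 - 2x + 1 has a double root at 1: D vanishes, as required above
print(discriminant_from_roots([1, -2, 1]))
```

The vanishing at coincident roots is exactly the property demanded of the generalized discriminant in item 2.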
The action exponential should also carry information about the internal properties of the roots f=(r1,j,r2,j).
  1. The assignment of an action exponential, perhaps as a discriminant-like quantity, to each root f=(r1,j,r2,j) is non-trivial since the roots are now algebraic functions representing space-time regions analogous to those associated with the cusp catastrophe. The probably too naive guess is that the contribution to the action exponential is just 1: it would mean that this contribution to the action vanishes.
  2. An alternative approach would require the identification of some special points in these regions and of a natural coordinate as the dependent variable, say the hypercomplex coordinate, as an analog of the behavior variable of the cusp. The problem is that this option is not general coordinate invariant.
  3. It would be nice if the proposed picture generalized. The physical picture suggests that there is a dimensional hierarchy of surfaces with dimensions 4, 2, 0. The introduction of f3 would allow us to identify 2-D string world sheets as roots of (f1,f2,f3). The introduction of f4 would make it possible to identify points of string world sheets as roots of (f1,f2,f3,f4) having an interpretation as fermionic vertices. One could assign discriminants to these sets of 2-surfaces and points in the way already described. The action exponential would involve the product of all these 3 discriminants. This would correspond to the assignment of action exponentials to these surfaces, and also this would conform with the physical picture.
  4. Locally, the analogs for the maps g for f3 would be analytic general coordinate transformations mapping space-time surfaces to themselves locally.
    1. If they are 1-1, they give rise to a generalization of conformal invariance. If they are many-to-one or vice versa, they have a physical effect. The roots of g would be 2-surfaces. 2-D analogs of functional p-adics, of quantum criticality, etc... that I have assigned to elementary particles would be well defined notions and this would mean a justification of the physical picture behind the p-adic mass calculations involving string world sheets and partonic 2-surfaces.
    2. The conformal algebras in TGD have non-negative conformal weights and have an infinite fractal hierarchy of half-Lie algebras isomorphic to the entire algebra (see this). These algebras contain a finite-dimensional subalgebra transformed from a gauge algebra to a dynamical symmetry algebra. The interpretation in terms of the many-to-1 property of polynomial transformations g is natural. The action of symmetries on the pre-image of 2-surfaces as roots of g would affect all images simultaneously and would therefore be poly-local. Could the origin of the speculated Yangian symmetry (see this) be here? Could this relate to the gravitational resp. electric Planck constants, which depend on the masses resp. charges of the interacting pair of systems?
See the article A more detailed view about the TGD counterpart of Langlands correspondence or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, April 23, 2025

Considerable progress in the understanding of holography= holomorphy vision

The surprisingly successful p-adic mass calculations led to the hypothesis that elementary particles and also more general systems are characterized by p-adic primes, which assign to these systems a p-adic length scale. The origin of the p-adic primes remained a problem.

The original hypothesis was that p-adic primes correspond to ramified primes appearing as divisors of the discriminant of a polynomial, defined as the product of root differences. Assuming the holography= holomorphy vision, the identification of the polynomial of a single variable in question is not trivial but is possible. The p-adic length scale hypothesis was that iterates of a suitable second-degree polynomial P2 could produce ramified primes close to powers of two. A large language model assisted calculation, with which Tuomas Sorakivi helped, studied this hypothesis for the iterates of the chosen polynomial P2= x(x-1). It did not support the hypothesis and I became skeptical.
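The kind of computation involved in this test can be sketched in a few lines; this assumes sympy and is only an illustration of the procedure (discriminant of the iterate, then its prime factorization giving the ramified primes), not the original calculation:

```python
from sympy import symbols, discriminant, factorint, expand

x = symbols('x')
P2 = x*(x - 1)

def iterate(P, n):
    """n-fold functional composition P º ... º P."""
    Q = x
    for _ in range(n):
        Q = expand(P.subs(x, Q))
    return Q

# Ramified primes = prime divisors of the discriminant of the iterate.
for n in (1, 2, 3):
    D = int(discriminant(iterate(P2, n), x))
    print(n, D, factorint(abs(D)))
```

For n=1 the discriminant is 1 and for n=2 it is 5 (the iterate factors as (x^2-x)(x^2-x-1)); whether the primes obtained for higher iterates land near powers of two is exactly what the calculation checked.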

This inspired the question whether the p-adic prime p corresponds to a functional prime, that is a polynomial Pp of degree p, which is therefore a prime in the sense that it cannot be written as a functional composite of lower-degree polynomials. The concept of a prime would become much more general, but these polynomials could be mapped to ordinary primes, and this is in the spirit of the notion of morphism in category theory.

This led to a burst of several ideas unifying loosely related ideas of the holography=holomorphy vision.

1. Functional primes and connection to quantum measurement theory

Could functional p-adic numbers correspond to "sums" of powers of the initial polynomial Pp multiplied by polynomials Q of lower degree than p? This is possible, but it must be assumed that the usual product is replaced by the function composition º and the usual sum by the product of polynomials. In the sum operation for g=(g1,g2) and h=(h1,h2) the analytic functions gi: C2→ C2 and hi are multiplied, and in the physically interesting special case the product reduces to the product of g1 and h1.

The non-commutativity of º is a problem. In the functional composition f→ gº f the effect of g is analogous to the effect of an operator on a quantum state in quantum mechanics, and functions are like quantum mechanical observables represented as operators. In quantum mechanics, only mutually commuting observables can be measured simultaneously. The equivalent of this would be that when Pp is fixed, the only allowed coefficients Q (lower degree polynomials) of powers of Pp are such that Qº Pp= Ppº Q and also the Qs commute with respect to º. One can talk about quantum p-adic numbers or functional p-adic numbers.
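The commutativity condition QºPp = PpºQ is very restrictive: for ordinary polynomials, composition rarely commutes (by Ritt's classical theorem the commuting families are essentially power maps and Chebyshev polynomials, up to conjugation). A minimal check of the condition, assuming sympy:

```python
from sympy import symbols, expand

x = symbols('x')

def commute(P, Q):
    """True if P º Q equals Q º P as polynomials."""
    return expand(P.subs(x, Q)) == expand(Q.subs(x, P))

print(commute(x**2, x**3))      # power maps give x^6 both ways: True
print(commute(x**2, x**2 + 1))  # a generic pair fails to commute: False
```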

p-Adic primes correspond to functional primes that can be described by ordinary primes: this is easy to understand if one thinks in category theoretical terms. All prime polynomials of degree p correspond to the same ordinary prime p. One can talk about universality. Number-theoretic physics, just like topological field theory, is the same for all surfaces that a polynomial of degree p corresponds to. Electrons, characterized by the Mersenne prime p= M127= 2^127-1, would correspond to an extremely large number of space-time surfaces as far as p-adic mass calculations are considered.

3. The arithmetic of functional polynomials is not conventional

Functional polynomials are polynomials of polynomials. This notion emerges also in the construction of infinite primes. Their roots are not algebraic numbers but algebraic functions, inverses of polynomials. They can be represented in terms of their roots, which are space-time surfaces. In TGD, all numbers can be represented as spacetime surfaces. Mathematical thought bubbles are, at the basic level, spacetime surfaces (actually 4-D soap bubbles as minimal surfaces! ;-).

For functional polynomials, product and division are replaced with º, and the + and - operations are replaced with the product and division of polynomials. Also rational functions R= P/Q must be allowed, and this leads to the generalization of complex analysis from dimension D=2 to dimension D=4. This is an old dream that has now been realized in a precise sense.

  1. This leads to an explicit formula for the functional analogs of Mersenne primes and more generally for primes close to powers of two, and even more generally for primes near powers of small primes. The functional Mersenne prime is P2(º n)/P1 and any P2 will do!
  2. The non-conventional arithmetic of functional polynomials makes it possible to understand the p-adic length-scale hypothesis. The same p-adic prime p corresponds to all polynomials Pp of degree p. p-Adic primes are universal and depend very little on the space-time surfaces associated with them: this is very important for the p-adic mass calculations. The problem with the ramified prime option was that the ramified primes depend strongly on the space-time surface determined as a root of (f1,f2), whereas the action of (g1,Id) giving (g1º f1,f2) should not affect the particle mass at all.
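The bookkeeping behind the functional Mersenne primes can be illustrated concretely. Under the dictionary above, the integer represented by a polynomial is its degree: composition multiplies degrees (functional product), while division of polynomials subtracts them (functional difference). This toy sketch (assuming sympy; the "degree = represented integer" reading is my illustrative interpretation of the text) shows that P2(º n)/P1 represents 2^n - 1 for any degree-2 polynomial P2:

```python
from sympy import symbols, expand, degree, isprime

x = symbols('x')
P2 = x*(x - 1)   # any degree-2 polynomial works, as the text notes
P1 = x           # a degree-1 polynomial representing 1

def compose_n(P, n):
    """n-fold composition P º ... º P, of degree deg(P)^n."""
    Q = x
    for _ in range(n):
        Q = expand(P.subs(x, Q))
    return Q

# deg(P2(º n)) = 2^n, so P2(º n)/P1 represents 2^n - 1, a Mersenne number.
for n in (2, 3, 5, 7):
    m = degree(compose_n(P2, n), x) - degree(P1, x)
    print(n, m, isprime(m))   # 3, 7, 31, 127: all Mersenne primes
```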

4. Also inverse functions of polynomials are needed

The inverse element with respect to º corresponds to the inverse function of the polynomial, which is an n-valued algebraic function for a degree-n polynomial. These must also be allowed. Operating with the polynomial g1 on f increases the degree and complexity. Operating with the inverse function preserves the number of roots, or even reduces it if the inverse of g1 operates on an iterate of g1. The complexity can decrease. Complexity can be considered as a kind of universal IQ, and evolution would correspond to the increase of complexity in a statistical sense. Inverse polynomials can reduce it by dismantling algebraic structures.
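The multivaluedness is elementary: inverting even a quadratic at a generic point gives two branches. A numerical illustration (assuming numpy), using P(x)= x(x-1) as an example:

```python
import numpy as np

# Inverting P(x) = x*(x-1) at a value y amounts to solving
# x^2 - x - y = 0: a 2-valued algebraic function of y.
def inverse_values(y):
    return np.roots([1.0, -1.0, -y])

print(sorted(inverse_values(6.0).real))   # x(x-1) = 6  ->  x = -2 or x = 3
```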

In TGD inspired theory of consciousness I have associated ethics with the number theoretic evolution as increase of algebraic complexity. A good deed increases potential conscious information, i.e. algebraic complexity, and this is indeed what happens in a statistical sense. Could conscious and intentional evil deeds correspond to these inverse operations? Evil deeds would make good deeds undone. If so, it is easy to see that negentropy still increases in a statistical sense. This however would mean that an evil deed can be regarded as a genuine choice.

5. How quantum criticality, classical non-determinism and p-adic nondeterminism are related to each other

  1. The simplest representation of criticality is by means of a monomial x^n. It has n identical roots at x=0, and an extremely small perturbation can transform them into separate roots. Mathematicians consider them as separate, as if there were n copies of the root x=0 on top of each other. g1= f1^n as the equivalent of this gives n identical space-time surfaces as roots on top of each other. Are they the same surface or separate surfaces? A mathematician would say that they are separate. If the polynomial is slightly perturbed, there are n separate roots. This would be the classical equivalent of quantum criticality.

    At quantum criticality, the functional polynomials would have g1= f1^n. The corresponding spacetime surface would be susceptible to breaking up into separate spacetime surfaces when the monomial f1^n becomes a more general polynomial and the n roots are obtained as separate spacetime surfaces.

    There is a fascinating connection with cell replication. In TGD it would be controlled by the field body, and one can ask whether f1^2= P2^2 as a critical polynomial representing the field body is perturbed and leads to two field bodies which become controllers of separate cells. One can ask whether in a cell replication sequence P2^(2^n) becomes less critical step by step so that eventually there are 2^n separate field bodies and cells.

    In zero energy ontology (ZEO) one can also ask whether the creation of a critical space-time surface characterized by f1^n could give rise to n space-time surfaces when the criticality is lost. Zero energy ontology understood in the Eastern sense would allow this without conflict with conservation laws.

  2. Mother Nature likes her theorists. If the critical surface is considered as a single surface, the classical action associated with it is n-fold compared to the surface corresponding to one root. This means that the Kähler coupling strength αK is smaller by a factor of 1/n after the splitting. This was the basic idea in the hypothesis that I formulated by saying that Mother Nature likes her theorists.

    When the perturbation theory ceases to converge (a catastrophe for the theorist), criticality arises and the polynomial takes the form Pp= f1^p. Deformation and splitting of the surface into p separate surfaces follows, the coupling strength decreases by a factor of 1/p and the perturbation theory converges again. The theorist is happy again.

  3. Classical non-determinism corresponds to p-adic non-determinism. Criticality is associated with non-determinism. In classical time evolution, mild non-determinism corresponds to such criticality. In these phase transitions, a choice is made between p alternatives in the "small" state function reduction (SSFR). The essential thing is that this series of phase transitions can be realized as a classical time evolution. Without criticality, this would not be possible.

    The fact that a choice is made between p alternatives means that the dynamics is effectively p-adic, so that classical non-determinism corresponds to p-adic non-determinism.

  4. Connection to the p-adic length-scale hypothesis. What is particularly interesting is that if p= 2 or 3 then the roots of the polynomial Pp can be solved analytically. The same applies to the iterates of Pp. Therefore these cases are cognitively special, as every mathematician knows from her own experience! The p-adic length-scale hypothesis says that p-adic primes p are close to powers of a small prime q=2,3,.... Intriguingly, there is empirical evidence for the hypothesis in the cases q=2 and 3!
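    The numerical content of the hypothesis is easy to inspect: for small powers q^k one lists the nearest primes (a sketch assuming sympy). Mersenne primes such as 2^13-1= 8191 and 2^17-1= 131071 appear as immediate predecessors of powers of two:

    ```python
    from sympy import prevprime, nextprime

    # Primes nearest to powers of small primes q = 2, 3.
    for q in (2, 3):
        for k in (7, 13, 17):
            n = q**k
            print(f"{q}^{k} = {n}: nearest primes {prevprime(n)} and {nextprime(n)}")
    ```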

    In the cusp catastrophe, the Mother of all catastrophes, which is a 2-dimensional surface in the space (x,a,b) defined by the real roots xi, i=1,2,3 of a third-degree polynomial P3(x,a,b), both q=2 and q=3 occur. The projection of the cusp to the (a,b) plane is V-shaped. At the tip of V, the polynomial P3(x,a,b) determining the cusp is proportional to x^3, i.e. the 3 roots coincide. At the two folds, whose projections to the (a,b) plane define the sides of V, two roots coincide.
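    The tip and fold structure can be checked explicitly for the standard form P3(x,a,b)= x^3+ax+b, whose discriminant is -4a^3-27b^2 (a sketch assuming numpy):

    ```python
    import numpy as np

    def disc(a, b):
        """Discriminant of x^3 + a*x + b."""
        return -4*a**3 - 27*b**2

    # Tip of the cusp, a = b = 0: the triple root x = 0.
    print(np.roots([1, 0, 0, 0]))

    # A point on a fold line: a = -3, b = 2 gives disc = 108 - 108 = 0,
    # and x^3 - 3x + 2 = (x-1)^2 (x+2) has two coinciding roots.
    print(disc(-3, 2), np.sort(np.roots([1, 0, -3, 2]).real))
    ```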

    It seems that finally the basic ideas of TGD have found each other and form a coherent whole. I also managed to clarify the relationship of M8-H duality to the holography=holomorphy hypothesis.

See the article A more detailed view about the TGD counterpart of Langlands correspondence or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, April 21, 2025

Critical summary of the problems associated with the physical interpretation of the number theoretical vision

The physical interpretation of the number theoretical vision involves several ideas and assumptions which can be criticized.

p-Adic primes and p-adic length scale hypothesis

The basic notions are the p-adic length scale hypothesis and the hierarchy of Planck constants, both motivated by empirical input. The p-adic length scale hypothesis was originally motivated by the p-adic mass calculations and has now developed into a rather nice picture (see this) solving the original interpretational problem due to the needed tachyonic ground states.

  1. The proposed interpretation of p-adic primes has been as ramified primes. The identification of ramified primes is however far from obvious since they are assignable to polynomials of a single complex variable: how is this polynomial determined? There is also a huge number of polynomials that one must consider, and it seems that the notion of p-adic prime should characterize a large class of polynomials and be therefore rather universal.
  2. A more promising interpretation is in terms of functional primes, which under some assumptions are mappable to ordinary primes by a morphism. The maps f are primes if they correspond to irreducible polynomials or ratios of such polynomials and if it is not possible to express f as f= gº h. f is characterized by at most 3 primes corresponding to the 3 complex coordinates of H. Also for g primeness can be defined, and if only f1 is involved, an ordinary prime characterizes it.

    There would be a morphism mapping these functional primes to ordinary primes, perhaps identifiable as p-adic primes. This could also fit with the p-adic length scale hypothesis suggesting the pairing of a large prime pl and a small prime ps: pl∼ ps^k would be true. One expects that g=(g1,g2) with g2 fixed is the physically motivated option, and one assigns primes pl near powers of the small prime ps to functional primes gps(º k)/gr.

The hierarchy of Planck constants heff=nh0

Consider first the evidence.

  1. The quantal effects of ELF em fields on the brain provide support for very large values of heff, of order 10^14, scaling the Compton length and giving rise to long scale quantum coherence. There is also evidence for small values of heff.
  2. There is also evidence for the gravitational resp. electric Planck constants ℏgr resp. ℏem, which are proportional to the product of a large and a small mass resp. charge and therefore depend on the quantum numbers of the interacting particles. This distinguishes these parameters from the ordinary Planck constant and its possible analog heff. The support for ℏgr and ℏem emerges from numerical coincidences and success in explaining features of certain astrophysical systems and bio-systems.

    The proposal is that ℏgr and ℏem emerge in Yangian symmetries which replace single particle symmetries with multi-local symmetries acting at the local level on several particles simultaneously. One should be able to formulate this idea in a more precise manner.

The basic mathematical ideas are the following.
  1. The proposal is that the scaling Lp→ (heff,2/heff,1)Lp(heff,1) takes place in the transition heff,1→ heff,2 and increases the scale of quantum coherence. One cannot exclude Lp→ (heff,2/heff,1)^(1/2)Lp as an alternative.
  2. The number theoretical vision motivates the proposal that heff corresponds to the order of the Galois group of a polynomial. It is however far from clear how one can assign this kind of polynomial to the space-time surface: I have made several proposals and the situation is unclear.
There are two sectors to be considered corresponding to the dynamical symmetries defined by g and the prime maps fP. Consider first the g sector.
  1. For g=(g1,Id), the situation reduces to that for a single polynomial, and heff could correspond to the order of the Galois group of g1, which would define the dimension of the corresponding algebraic extension. The motivation is that the condition f2=0 would define the TGD counterpart of a dynamical cosmological constant.
  2. The first proposal was that heff/h0 corresponds to the number of space-time sheets of the space-time surface, which can be connected and indeed is so for fP. This number is the degree of the polynomial involved in the single variable case and is in general much smaller than the order n of the Galois group, which for polynomials of degree d has the maximal value nmax=d!.

    If the Galois group is cyclic, one has n=d. Could the proposal that for functional primes the coefficient polynomials gk appearing in gkº gp(º k) commute with gp and with each other imply this? This condition might be seen as a theoretical counterpart for the assumption that the Abelian Cartan algebra of the symmetry group defines the set of mutually commuting observables.

Consider next the f sector.
  1. For a prime map fP=(f1,f2), P could correspond to 3 ordinary primes assignable to the 3 complex coordinates of H: f1 and f2 could be prime polynomials with respect to all these coordinates. Does this mean that 3 p-adic length scales are involved, or is there some criterion selecting one p-adic length scale, say the one assignable to the M4 complex coordinate or to the hypercomplex coordinate u?
  2. For a prime map fP, the space-time surface as a root is connected. The original hypothesis would state that heff/h0 corresponds to the number of space-time regions representing roots of fP rather than to the order of the generalized Galois group associated with the surface fP=0 and permuting the roots as space-time regions to each other. Again the cyclicity of the generalized Galois group would guarantee the consistency of the two views. Now however the polynomials are ordinary polynomials obeying ordinary commutative arithmetics. But is there any need to assign heff to fP? As far as applications are considered, g seems to be enough.
  3. gp(º k)º f has p^k disjoint roots, the roots of gp(º k). f=(gp(º k)/gr)º h has p^k roots and r poles as roots of gr. Also these are disjoint so that functional primeness for g does not imply connectedness. Functional primeness for f would be required.
Does Mother Nature love her theoreticians?

The hypothesis that Mother Nature is theoretician friendly (see this and this) involves quantum field theoretic thinking, which can be motivated in TGD by the assumption that the long length scale limit of TGD is approximately described by quantum field theory. What this principle states is the following.

  1. When the quantum states are such that perturbative quantum field theory ceases to converge, a phase transition heff→ nheff occurs and reduces the value of the coupling strength αK ∝ 1/ℏeff by a factor 1/n so that the perturbation theory converges. This can take place when the coupling constant defined by the product of the charges involved is so large that convergence is lost, or at least unitarity fails. The phase transition gives rise to quantum states, which are Galois singlets with respect to the larger Galois group.
  2. The classical interpretation would be that the number of space-time surfaces as roots of g1 º f1 increases by a factor n, where n is the degree of the polynomial g1. The total classical action should be unchanged. This is the case if at the criticality for the transition the n space-time surfaces are identical.
Can the transition take place in BSFR or even SSFR? Can one associate a smooth classical time evolution with f→ gp(º k)º f producing p copies of the original surface at each step such that the replacement αK → αK/p occurs at each step?
  1. The transition should correspond to quantum criticality, which should have classical criticality in the algebraic sense as a correlate. Polynomials x^n have x=0 as an n-fold degenerate root. In mathematics degenerate roots are regarded as separate. Now they would correspond to identical space-time surfaces on top of each other such that even an infinitesimal deformation can separate them. If the copies are identical at quantum criticality, a smooth evolution leading to an approximate n-multiple of a single space-time surface is possible. The action would be preserved approximately and the proposed scaling down of αK would guarantee this.
  2. The catastrophe theoretic analogy is the vertex of a cusp catastrophe. At the vertex of the cusp 3 roots coincide and at the V-shaped boundary of the plane projection of the cusp 2 roots coincide. More generally, the initial state should be quantum critical with p^k degenerate roots. In the simplest case one would have p degenerate roots; p=2 and p=3 and their powers are favored empirically and by the very special cognitive properties of these options (the roots can be solved analytically). Also this suggests that Mother Nature loves theoreticians.
  3. g1(f1)=f1^p would satisfy the condition. An arbitrarily small deformation of f1^p, replacing it with akº f1^(kp), would remove the degeneracy. The functional counterpart of the p-adic number would be the sum of the g1,k= akº f1^(kp), realized as the product ∏k g1,k. Each power would correspond to its own almost critical space-time surface, and ak=1 would correspond to maximal criticality. This would correspond to the number ∑k p^k, and one would obtain Mersenne primes and their general versions for p>2 naturally from maximal criticality giving rise to functional p-adicity. The classical non-determinism due to criticality would correspond naturally to p-adic non-determinism.
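The splitting of a degenerate critical root under an arbitrarily small deformation, invoked above, can be seen numerically (a sketch assuming numpy):

```python
import numpy as np

eps = 1e-6
# x^3: a triply degenerate root at x = 0 (the critical configuration).
print(np.roots([1, 0, 0, 0]))
# x^3 - eps: an infinitesimal perturbation separates the three roots,
# which move to the three cube roots of eps (modulus eps^(1/3) = 0.01).
print(np.roots([1, 0, 0, -eps]))
```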

To sum up, the situation concerning the relationship between the number theoretic and geometric views of TGD looks rather satisfactory, but there are many questions to be asked and answered. The understanding of M8-H duality as one aspect of the duality between number theory and geometry, as an analog of momentum-position duality generalized from point-like particles to 3-surfaces, is far from complete: one can even ask whether the M8 view produces more problems than it solves.

See the article A more detailed view about the TGD counterpart of Langlands correspondence or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, April 19, 2025

Is the brain quantum critical?

Sabine Hossenfelder had an interesting posting (see this) telling about real progress in understanding consciousness (see the article Complex harmonics reveal lower-dimensional manifolds of critical brain behavior).

The first basic mystery is that the brain can react extremely rapidly in situations which require rapid action, such as being in a jungle and suddenly encountering a tiger. This rapid reaction is extremely difficult to understand in the standard neuroscience framework. A lot of signalling inside the brain and between the brain and body is required, and the rate of neural processing of information seems hopelessly slow. You could not generate a bodily expression of horror before you would be dead.

The second mystery is the extremely low metabolic power of the brain: about .1 kW, extremely small as compared to the power needed by ordinary computers. A supercomputer uses a power measured in megawatts. This is one of the key problems of AI based on classical computation: nuclear power plants are being proposed to satisfy the needs of large language models. The extremely low dissipation rate of the brain suggests that long range quantum coherence is involved. But this is impossible in standard quantum theory.

For decades (since 1995, when I started to develop the TGD inspired theory of consciousness) I have tried to tell that quantum coherence in the scale of the brain could be part of the solution to the problem. The entire brain or even the entire body could act as a single coherent whole and respond instantaneously.

Unfortunately, standard quantum theory (or should one say colleagues?) does not allow quantum coherence at the scale of the body or even the brain. Quantum coherence at the level of ordinary biomatter is however not needed. Non-quantal coherence could be induced by quantum coherence at the level of the field body, the TGD counterpart of classical fields, having a much larger size than the biological body. This would explain EEG, which in neuroscience is still often seen as some kind of side effect although it carries information. It is difficult to see why the brain as a master energy saver would send information to outer space just for fun. Yet this is what many neuroscientists believe.

Quantum criticality induces ordinary criticality in the TGD Universe

Quantum criticality implies long range quantum fluctuations and quantum coherence: the entire system behaves like a single quantum unit. The TGD Universe as a whole is a quantum critical fractal structure with quantum criticality realized in various scales. The degree of quantum criticality and its scale depend on the parameters of the system and can be assigned to the field body of the system carrying heff> h phases of ordinary matter. These phases, behaving like dark matter (but not identifiable as galactic dark matter in TGD), have higher IQ than ordinary biomatter and control it. The field body itself is a hierarchical structure.

Quantum criticality is perhaps the basic aspect of the TGD inspired theory of consciousness and of living matter. The number theoretical vision of TGD predicts a hierarchy of Planck constants heff=nh0, where h0, satisfying h=(7!)^2×h0, is the minimal value of heff. This gives rise to a hierarchy of phases of ordinary matter behaving in many respects like dark matter but very probably not identifiable as galactic dark matter. The role of metabolism is to provide the energy needed to increase the value of heff, which spontaneously decreases. The physiological temperature is one critical parameter.

The complexity associated with quantum criticality corresponds to the algebraic complexity assignable to the polynomials determining the space-time surface. The degree of a polynomial equals the number of its roots, and the dimension of the extension of rationals corresponds to the order of the Galois group given by n=heff/h0.

Algebraic complexity can only increase since the number of systems more complex than a given system is infinitely larger than the number of less complex systems. This reduces evolution to number theory and quantum mechanics. For the mathematical background of the TGD inspired quantum theory of consciousness and quantum biology, see for instance this, this, this, this, and this.

Classical signaling takes place with light velocity

Also the classical processing of information might take place much faster than in the neuroscience picture. The existence of biophotons has been known for about a century but neuroscience still refuses to take them seriously. TGD indeed suggests that also the neuroscience view of classical information processing is wrong (see for instance this, this, this, this). Nerve pulses need not be the primary carriers of sensory information. The function of nerve pulses could be only to serve as relays at the synaptic contacts. This would make possible the real information transfer as dark photons (photons with a large value of heff) along monopole flux tubes associated with axons and also leading to the field body of the brain. Biophotons, whose origin is not understood, would result from the transformation of dark photons to ordinary photons (see this).

The brain could use dark photon signals propagating along monopole flux tubes carrying information. The information would be coded by the frequency modulation of the Josephson frequencies associated with the cell membranes, a large value of heff making the Josephson frequency small. The signal would be received at the field body of the appropriate subsystem of the brain by cyclotron resonance and would give rise to a sequence of pulses, which would define the feedback to the brain and possibly generate nerve pulses. Light velocity is by a factor 10^6-10^7 higher than the velocity of nerve pulses, about 10-100 m/s. This allows a lot of data processing involving back-and-forth signalling in order to build standardized mental images from the incoming sensory information (see this).
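The quoted speed ratio follows from back-of-the-envelope arithmetic using only the conduction velocities mentioned above; a minimal check:

```python
# Back-of-the-envelope check of the quoted speed ratio (values from the text).
c = 3.0e8                     # speed of light in vacuum, m/s
v_slow, v_fast = 10.0, 100.0  # nerve pulse conduction velocities, m/s

ratio_max = c / v_slow        # slowest pulses give the largest ratio
ratio_min = c / v_fast
print(f"light/nerve-pulse speed ratio: {ratio_min:.0e} .. {ratio_max:.0e}")
# -> 3e+06 .. 3e+07, i.e. the factor 10^6-10^7 quoted in the text
```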

In TGD, this would allow classical signalling with a field body of the size scale of the Earth in a time scale shorter than .1 seconds (alpha frequency), which is roughly the duration of what might be called the chronon of human conscious experience. After this signal is received by the field body, a phase transition generating quantum coherence at the level of the field body takes place and induces coherence in the scale of the brain or even the body.

Zero energy ontology predicts time reversals in the counterparts of ordinary state function reductions

Also zero energy ontology, in which "big" state function reductions (BSFRs) as the TGD counterparts of ordinary state function reductions change the arrow of time, can make information processing faster. The proposal is that motor responses involve signals propagating with a reversed arrow of geometric time so that the response would start already in the geometric past. Libet's findings that volitional action is preceded by brain activity support this view.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Friday, April 18, 2025

Could one understand p-adic length scale hypothesis in terms of functional arithmetics?

Holomorphy= holography vision reduces gravitation as geometry to gravitation as algebraic geometry and leads to an exact general solution of the geometric field equations as local algebraic equations for the roots and poles of rational functions and possibly also their inverses.

The function pairs f=(f1,f2): H→ C2 define a function field with respect to element-wise sum and multiplication. This is also true for the function pairs g=(g1,g2): C2→ C2. Now functional composition º is an additional operation. This raises the question whether ordinary arithmetics and p-adic arithmetics might have functional counterparts.
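The two candidate products behave very differently with respect to degree: it is multiplicative under functional composition and additive under the element-wise product. A minimal sympy sketch with arbitrarily chosen sample polynomials (not from the text) illustrates this:

```python
import sympy as sp

w = sp.symbols('w')
# Two arbitrary sample polynomials standing in for components of g
P = w**2 + 1     # degree 2
Q = w**3 - w     # degree 3

comp = sp.expand(P.subs(w, Q))   # functional composition P o Q
prod = sp.expand(P * Q)          # element-wise product

# Degree is multiplicative under composition, additive under the product.
print(sp.degree(comp, w), sp.degree(prod, w))   # 6 = 2*3 and 5 = 2+3
```

This is the reason why, in the functional arithmetics discussed below, composition plays the role of multiplication while the element-wise product plays the role of the sum +e.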

One implication is quantum arithmetics as a generalization of ordinary arithmetics (see this). One can define the notion of primeness for polynomials and define the analogs of ordinary number fields.

What could be the physical interpretation of the prime polynomial pairs (f1,f2) and (g1,g2), in particular (g1,Id), and how does this relate to the p-adic length scale hypothesis (see this)?

  1. The p-adic length scale hypothesis states that the physically preferred p-adic primes correspond to powers p∼ 2^k. Also powers p∼ q^k of other small primes q can be considered (see this) and there is empirical evidence for time scales coming as powers of q=3 (see this and this). For Mersenne primes Mn= 2^n-1, n is prime, and this inspires the question whether k could be prime quite generally.
  2. Probably the primes as orders of prime polynomials do not correspond to the very large p-adic primes (M127=2^127-1 for the electron) assigned in p-adic mass calculations to elementary particles.

    The proposal has been that p and k would correspond to a very large and a small p-adic length scale respectively. The short scale would be near the CP2 length scale and the large scale of the order of the elementary particle Compton length.
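The number theoretic fact used above, that primality of Mn= 2^n-1 requires a prime exponent n, is easy to verify numerically up to the electron's M127:

```python
from sympy import isprime

# Mersenne numbers M_n = 2**n - 1 that are prime, for n up to 127.
mersenne_exponents = [n for n in range(2, 128) if isprime(2**n - 1)]
print(mersenne_exponents)
# [2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127]

# As stated in the text, every exponent giving a Mersenne prime is itself prime.
assert all(isprime(n) for n in mersenne_exponents)
```

Note that the converse fails (for instance 2^11-1 = 23 x 89), so primality of k is necessary but not sufficient.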

Could small-p p-adicity make sense and could the p-adic length scale hypothesis relate small-p p-adicity and large-p p-adicity?
  1. Could the p-adic length scale hypothesis in its basic form reflect 2-adicity at the fundamental level, or could it reflect that p=2 is the degree of the lowest prime polynomials, certainly the most primitive cognitive level? Or could it reflect both?
  2. Could p∼ 2^k emerge when the action of a polynomial g1 of degree 2, with respect to say the complex coordinate w of M4, on a polynomial Q is iterated functionally: Q→ Pº Q → ...→ Pº ...º Pº Q, giving n=2^k disjoint space-time surfaces as representations of the roots? For p=2 the iteration is the procedure giving rise to Mandelbrot fractals and Julia sets. Electrons would correspond to objects with 127 iterations and a cognitive hierarchy with 127 levels! Could p= M127 be a ramified prime associated with Pº ...º P?

    If this is the case, p∼ 2^k and k would tell about the cognitive abilities of the electron and not so much about the system characterized by the function pair (f1,f2) at the bottom. Could the 2^k disjoint space-time surfaces correspond to a representation of the p∼ 2^k binary numbers, realizing binary mathematics at the level of space-time surfaces? This representation brings in mind the totally disconnected compact-open p-adic topology. Cognition indeed decomposes the perceptive field into objects.

  3. This generalizes to a prediction of hierarchies p∼ q^k, where q is a small prime as compared to p, identifiable as the prime order of a prime polynomial with respect to, say, the variable w.
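The doubling of the number of roots under functional iteration of a degree-2 polynomial, which underlies the p∼ 2^k proposal above, can be checked directly (a sketch; the polynomial x(x-1) is the one used later in the text):

```python
import sympy as sp

w = sp.symbols('w')
g = w*(w - 1)       # a degree-2 polynomial, as in Mandelbrot/Julia-type iterations

iterate = g
degrees = []
for k in range(1, 5):
    degrees.append(sp.degree(iterate, w))      # number of roots with multiplicity
    iterate = sp.expand(g.subs(w, iterate))    # next functional iterate g o ... o g

print(degrees)   # [2, 4, 8, 16]: after k iterations there are 2**k roots
```

Extrapolating, 127 iterations would give 2^127 roots, the root count invoked above for the electron.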
I have considered several identifications of the p-adic primes and arguments for why the p-adic length scale hypothesis should be true.
  1. I have tentatively identified p-adic primes as ramified primes (see this), appearing as divisors of the discriminant D of a polynomial, defined as the product of root differences, which could correspond to that for g=(g1,Id).

    Could the 3 primes characterizing the prime polynomials fi: H→ C2 correspond to the small primes q? Could the ramified primes p∼ 2^k, as divisors of a discriminant D defined by the product of non-vanishing root differences, be assigned with the polynomials obtained as functional composites with iterates of a suitable g?

    Similar hypotheses can be studied for the iterates of g: C2→ C2 alone. The study of this hypothesis in the special case g=P2= x(x-1), described in an earlier section, did not give encouraging results. Perhaps the identification of p-adic primes as ramified primes is ad hoc. There is also the problem that there are several ramified primes, which suggests multi-p p-adicity. The conjecture also fails to specify how the ramified prime emerges from the iterate of g.

  2. A new identification of p-adic primes, suggested by quantum p-adics, is that p-adic primes correspond to the primes defining the degrees of prime polynomials g and that the Mersenne primes Mn= 2^n-1 correspond to rational functions P2ºn/P1, where / corresponds to element-wise division and P2 can be any polynomial of degree 2. This would mean a category theoretic morphism from quantum p-adics to ordinary p-adics. A more general form of the conjecture is that the rational functions Ppºn/Pk correspond to preferred p-adic primes.

    The reason could be that for these quantum primes it is possible to solve the roots as zeros and poles analytically for p<5. This might make them cognitively very special. The primes p=2 and p=3 would be in a unique role information theoretically. For these primes there is indeed evidence for the p-adic length scale hypothesis, and these primes are also highly relevant for the notion of music harmony (see this, this and this).
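The discouraging outcome reported above for the ramified-prime identification with g=P2= x(x-1) can be reproduced for the first iterate with a few lines of sympy; the only ramified prime turns out to be 5, not a prime near a power of 2:

```python
from sympy import symbols, expand, discriminant, factorint

x = symbols('x')
g = x**2 - x                  # g = x(x-1)
g2 = expand(g.subs(x, g))     # first functional iterate g o g, degree 4

d1 = discriminant(g, x)       # discriminant of g itself
d2 = discriminant(g2, x)      # discriminant of the iterate

print(d1, d2, factorint(d2))  # 1 5 {5: 1}
```

The roots of g o g are 0, 1 and (1±sqrt(5))/2, so the appearance of 5 just reflects the golden-ratio extension and carries no trace of 2-adicity.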

See the article A more detailed view about the TGD counterpart of Langlands correspondence or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

About quantum arithmetics

Holomorphy= holography vision reduces gravitation as geometry to gravitation as algebraic geometry and leads to an exact general solution of the geometric field equations as local algebraic equations for the roots and poles of rational functions and possibly also their inverses.

The function pairs f=(f1,f2): H→ C2 define a function field with respect to element-wise sum and multiplication. This is also true for the function pairs g=(g1,g2): C2→ C2. Now functional composition º is an additional operation. This raises the question whether ordinary arithmetics and p-adic arithmetics might have functional counterparts.

Functional (quantum) counterparts of integers, rational and algebraic numbers

Do the notions of integers, rationals and algebraic numbers generalize so that one could speak of their functional or quantum counterparts? Here the category theoretical approach, suggesting that the degree of the polynomial defines a morphism from quantum objects to ordinary objects, leads to a unique identification of the quantum objects.

  1. For maps g: C2→ C2, both the ordinary element-wise product and functional composition º define natural products. The element-wise product does not respect polynomial irreducibility as an analog of primeness. The degree is multiplicative under º. In the sum, call it +e, the degree should be additive. This leads to the identification of +e as the element-wise product. One can identify the neutral element 1º of º as 1º=Id and the neutral element 0e of +e as the ordinary unit 0e=1. This is a somewhat unexpected conclusion.

    The inverse of g corresponds to g-1 for º, which is a many-valued algebraic function, and to 1/g for +e. The maps g, which do not allow a decomposition g= hº i, can be identified as functional primes and have prime degree. If one restricts the product and sum to g1 (say), the degree of a functional prime g corresponds to an ordinary prime. These functional integers/rationals can be mapped to ordinary integers/rationals by a morphism mapping degree to integer/rational. Similarly, f is a functional prime with respect to º if it does not allow a decomposition f= gº h. One can construct functional integers as products of functional primes.

  2. The non-commutativity of º could be seen as a problem. The fact that the maps g act like operators suggests that for functional primes gp the primes in the product commute. Since g is analogous to an operator, this can be interpreted as a generalization of commutativity as a condition for the simultaneous measurability of observables.
  3. One can also define functional polynomials P(X), quantum polynomials, using these operations. In the terms pnº Xºn, pn and X should commute, and the sum ∑e pnº Xºn corresponds to +e. The zeros of functional polynomials satisfy the condition P(X)=0e=1 and give as solutions the roots Xk as functional algebraic numbers. The fundamental theorem of algebra generalizes at least formally if Xk and X commute. The roots have representations as space-time surfaces. One can also define the functional discriminant D as the º product of root differences Xk-e Xl, with -e identified as element-wise division.
About the notion of functional primeness

There are two cases to consider corresponding to f and g. Consider first the pairs (f1,f2): H→ C2.

  1. Primeness could mean that f does not have a composition f=gº h. A second notion of primeness is based on irreducibility, which states that f does not reduce to an element-wise product f= g× h. Concerning the definition of powers of functional primes, a possible problem is that the power (f1n,f2n) defines the same surface as (f1,f2) as a root, with n-fold degeneracy. Irreducibility eliminates this problem but does not allow defining the analog of p-adic numbers using (f1n,f2n) as the analog of pn.

  2. Since there are 3 complex coordinates of H, the fi are labelled by 3 ordinary primes pr(fi), r=1,2,3, rather than a single prime p. By the earlier physical argument related to the cosmological constant one could assume f2 fixed and restrict the consideration to f1. Every functional p-adic number, in particular a functional prime, corresponds to its own ramified primes. The simplest functional number would correspond to (f1,f2)=(0,0) (could this be interpreted as stating the analog of the mod p=0 condition?).

  3. The degrees for the product of polynomial pairs (P1,P2) and (Q1,Q2) are additive. In the sum, the degree is not larger than the larger of the two degrees, and it can happen that the highest powers sum up to zero so that the degree is smaller. This is reminiscent of the properties of the non-Archimedean norm for p-adic numbers. The zero element defines the entire H as a root and the unit element does not define any space-time surface as a root.
Also the pairs (g1,g2) can be functional primes, both with respect to powers defined by the element-wise product and by functional composition º.
  1. The ordinary sum is the first guess for the sum operation also in this case. Category theoretical thinking however suggests that the element-wise product corresponds to the sum, call it +e. In this operation the degree is additive, so that products and +e sums can be mapped to ordinary integers. The functional p-adic number in this case would correspond to an element-wise product ∏ Xnº Ppºn, where Xn is a polynomial with degree smaller than p, the product defining a reducible polynomial.
  2. A natural additional assumption is that the coefficient polynomials Xn commute with each other and with Pp. This is natural since the Xn and Pp act like operators, and in quantum theory a complete set of commuting observables is a natural notion. This motivates the term quantum p-adics. The space-time surface is a disjoint union of space-time surfaces assignable to the factors Xkº Ppºkº f. In quantum theory, quantum superpositions of these surfaces are realized. If the surface associated with Xkº Ppºkº f is so large that it cannot be realized inside the CD, it is effectively absent from the pinary expansion. Therefore the size of the CD defines a pinary cutoff.
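The claim that the degree morphism maps such a functional p-adic number to an ordinary pinary expansion can be sketched concretely. In the sketch below (all polynomials chosen arbitrarily for illustration), a digit ak < p is realized as a coefficient polynomial Xk of degree ak, so that the degree of the element-wise product ∏ Xkº gpºk is ∑ ak p^k:

```python
import sympy as sp

w = sp.symbols('w')
p = 2
g = w**2 + w          # an illustrative "functional prime" of prime degree p=2

def g_power(k):
    """k-fold functional composition g o ... o g (k=0 gives the identity)."""
    f = w
    for _ in range(k):
        f = sp.expand(g.subs(w, f))
    return f

# Pinary digits a_k < p realized as coefficient polynomials X_k of degree a_k.
digits = [1, 0, 1]                    # intended to represent 1 + 0*2 + 1*4 = 5
X = [w + 1, sp.Integer(2), w - 3]     # degrees 1, 0, 1 matching the digits

# The sum +e is the element-wise product, so the degrees of the factors add.
factors = [X[k].subs(w, g_power(k)) for k in range(3)]
number = sp.expand(factors[0] * factors[1] * factors[2])
print(sp.degree(number, w))           # 1*2**0 + 0*2**1 + 1*2**2 = 5
```

The degree morphism thus sends this functional 2-adic number to the ordinary integer 5 with binary digits (1,0,1), in line with the mapping to ordinary integers described above.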
The notion of functional p-adics

What about functional p-adics?

  1. The functional powers gpºk of prime polynomials gp define analogs of powers of p-adic primes and one can define a functional generalization of p-adic numbers as quantum p-adics. The coefficients Xk in Xkº gpºk are polynomials with degree smaller than p. The first idea which pops up in mind is that the ordinary sum of these powers is in question. What is however required is the sum +e, so that the roots are disjoint unions of the roots of the +e summands Xkº gpºk. The disjointness corresponds to the fact that cognition can be said to be an analysis decomposing the system into pieces.
  2. Large powers of the prime appearing in a p-adic number must approach 0e with respect to the p-adic norm, so that gpºn must effectively approach Id with respect to º. Intuitively, a large n in gpºn corresponds to a long p-adic length scale. For large n, gpºn cannot be realized as a space-time surface inside a fixed CD. This would prevent their representation and they would correspond to 0e and Id. During the sequence of SSFRs the size of the CD increases and for some critical SSFRs a new power can emerge in the quantum p-adic.
The very inspiring discussions with Robert Paster, who advocates the importance of universal Witt vectors (UWVs) and Witt polynomials (see this) in the modelling of the brain, forced me to consider Witt vectors as something more than a technical tool. As a special case, Witt vectors code for p-adic number fields.
  1. Both the product and sum of ordinary p-adic numbers require carry digits and are therefore technically problematic. This is the case also for the functional p-adics. Witt polynomials solve this problem by reducing the product and sum to purely digit-wise operations.
  2. Universal Witt vectors and polynomials can be assigned to any commutative ring R, not only to p-adic integers. Witt vectors Xn define sequences of elements of the ring R and universal Witt polynomials Wn(X1,X2,...,Xn) define a sequence of polynomials of order n. In the case of a p-adic number field, Xn corresponds to the pinary digit of the power pn and can be regarded as an element of the finite field Fp, which can also be mapped to a phase factor exp(ik2π/p). The motivation for Witt polynomials is that the multiplication and sum of p-adic numbers can be done in a component-wise manner for Witt polynomials, whereas for pinary digits the sum and product affect the higher pinary digits.
  3. In the general case, the Witt polynomial as a polynomial of several variables can be written as Wn(X1,X2,...)=∑d|n d Xd^(n/d), where d runs over the divisors of n, with 1 and n included. For p-adic numbers n is a power of p and the divisors d are powers of p. The Xd are analogous to elements of a finite field Fp as coefficients of powers of p.
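The component-wise property can be verified symbolically in the simplest case p=2 with two-component Witt vectors. The sketch below uses the standard Witt sum for p=2, whose second component carries exactly the "carry digit" of 2-adic addition:

```python
from sympy import symbols, divisors, expand

def witt(n, X):
    """Ghost polynomial W_n(X) = sum over divisors d of n of d * X_d**(n//d)."""
    return sum(d * X[d]**(n // d) for d in divisors(n))

x1, x2, y1, y2 = symbols('x1 x2 y1 y2')
X = {1: x1, 2: x2}
Y = {1: y1, 2: y2}

# For p=2, the Witt sum S is defined so that ghost components add
# component-wise: W_n(S) = W_n(X) + W_n(Y) for n = 1, 2.
S = {1: x1 + y1, 2: x2 + y2 - x1*y1}   # second component encodes the carry

assert expand(witt(2, S)) == expand(witt(2, X) + witt(2, Y))
print("ghost components add component-wise for n = 1, 2")
```

This makes concrete the statement above: the awkward carry propagation of pinary arithmetic is traded for polynomial (but component-wise) sum formulas.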
Witt polynomials are characterized by their roots, and the TGD view of space-time surfaces both as generalized numbers and as representations of ordinary numbers inspires the idea that the roots of suitably identified Witt polynomials could be represented as space-time surfaces in the TGD framework. This would give a representation of generalized p-adic numbers as space-time surfaces, making the arithmetics very simple. Whether this representation is equivalent with the direct representation of p-adic numbers as surfaces is not clear.

Could the prime polynomial pairs (g1,g2): C2→ C2 and (f1,f2): H=M4× CP2→ C2 (perhaps states of pure, non-reflective awareness) characterized by ordinary primes give rise to functional p-adic numbers represented in terms of space-time surfaces such that these primes could correspond to ordinary p-adic primes?

See the article A more detailed view about the TGD counterpart of Langlands correspondence or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Holography= holomorphy vision and functional generalization of arithmetics and p-adic number fields

In TGD, the geometric and number theoretic visions of physics are complementary. This complementarity is analogous to the momentum-position duality of quantum theory and is implied by the replacement of a point-like particle with a 3-surface, whose Bohr orbit defines the space-time surface.

At a very abstract level this view is analogous to the Langlands correspondence. The recent view of TGD, involving an exact algebraic solution of field equations based on the holography= holomorphy vision, allows one to formulate the analog of the Langlands correspondence in the 4-D context rather precisely. This requires a generalization of the notion of Galois group from the 2-D situation to the 4-D situation: there are 2 generalizations and both are required.

  1. The first generalization realizes Galois group elements not as automorphisms of a number field but as analytic flows in H=M4× CP2 permuting different regions of the space-time surface identified as roots for a pair f=(f1,f2): H→ C2. The functions fi are analytic functions of one hypercomplex and 3 complex coordinates of H.

  2. The second realization is for the spectrum generating algebra defined by the functional compositions gº f, where g: C2→ C2 is an analytic function of 2 complex variables. The interpretation is as a cognitive hierarchy of functions of functions of... The pairs (f1,f2) which do not allow a composition of the form f=gº h correspond to elementary functions and to the lowest level of this hierarchy, a kind of elementary particles of cognition. Also the pairs g can be expressed as composites of elementary functions.

    If g1 and g2 are polynomials with coefficients in a field E identified as an extension of rationals, one can assign to gº f a set of root pairs (r1,r2) satisfying (f1,f2)= (r1,r2), where the ri are algebraic numbers defining disjoint space-time surfaces. One can assign to the set of root pairs the analog of the Galois group as automorphisms of the algebraic extension of the field E appearing as the coefficient field of (f1,f2) and (g1,g2). This hierarchy leads to the idea that physics could be seen as an analog of a formal system appearing in Gödel's theorems and that the hierarchy of functional composites could correspond to a hierarchy of meta levels in mathematical cognition.

  3. The quantum generalization of integers, rationals and algebraic numbers to their functional counterparts is possible for maps g: C2→ C2. The counterpart of the ordinary product is the functional composition º for the maps g. The degree is multiplicative under º. In the sum, call it +e, the degree should be additive, which leads to the identification of the sum +e as the element-wise product. The neutral element 1º of º is 1º=Id and the neutral element 0e of +e is the ordinary unit 0e=1.

    The inverse corresponds to g-1 for º, which in general is a many-valued algebraic function, and to 1/g for +e. The maps g, which do not allow a decomposition g= hº i, can be identified as functional primes and have prime degree. f: H→ C2 is prime if it does not allow a composition f= gº h. Functional integers are products of functional primes gp.

    The non-commutativity of º could be seen as a problem. The fact that the maps g act like operators suggests that the functional primes gp in a product commute. Functional integers/rationals can be mapped to ordinary integers/rationals by a morphism mapping their degree to an integer/rational.

  4. One can define functional polynomials P(X), quantum polynomials, using these operations. In P(X), in the terms pnº Xºn, pn and X should commute. The sum ∑e pnº Xºn corresponds to +e. The zeros of functional polynomials satisfy the condition P(X)=0e=1 and give as solutions the roots Xk as functional algebraic numbers. The fundamental theorem of algebra generalizes at least formally if Xk and X commute. The roots have representations as space-time surfaces. One can also define the functional discriminant D as the º product of root differences Xk-e Xl, with -e identified as element-wise division; the functional primes dividing it have space-time surfaces as representations.
What about functional p-adics?
  1. The functional powers gpºk of the primes gp define analogs of powers of p-adic primes and one can define a functional generalization of p-adic numbers as quantum p-adics. The coefficients Xk in Xkº gpºk are polynomials with degree smaller than p. The sum is +e, so that the roots are disjoint unions of the roots of the summands Xkº gpºk.

  2. Large powers of the prime appearing in a p-adic number must approach 0e with respect to the p-adic norm, so that gpºn must effectively approach Id with respect to º. Intuitively, a large n in gpºn corresponds to a long p-adic length scale. For large n, gpºn cannot be realized as a space-time surface inside a fixed CD. This would prevent their representation and they would correspond to 0e and Id. During the sequence of SSFRs the size of the CD increases and for some critical SSFRs a new power can emerge in the quantum p-adic.
  3. Universal Witt polynomials Wn define an alternative representation of p-adic numbers reducing the multiplication of p-adic numbers to elementwise product for the coefficients of the Witt polynomial. The roots for the coefficients of Wn define space-time surfaces: they should be the same as those defined by the coefficients of functional p-adics.
There are many open questions.
  1. The question whether the hierarchy of infinite primes has relevance to TGD has remained open. It turns out that the 4 lowest levels of the hierarchy can be assigned to the rational functions fi: H→ C2 and that the generalization of the hierarchy can be assigned to the composition hierarchy of the prime maps gp.
  2. Could the transitions f→ gº f correspond to the classical non-determinism in which one root of g is selected? If so, the p-adic non-determinism would correspond to classical non-determinism. Quantum superposition of the roots would make it possible to realize the quantum notion of concept.

  3. What is the interpretation of the maps g-1, which in general are many-valued algebraic functions if g is a rational function? g increases the complexity but g-1 preserves or even reduces it, so that its action is entropic. Could the selection between g and g-1 relate to a conscious choice between good and evil?
  4. Could one understand the p-adic length scale hypothesis in terms of functional primes? The counterpart of a functional Mersenne prime would be g2ºn/g1, where the division is with respect to the element-wise product defining +e. For g2 and g3 and also their iterates the roots allow an analytic expression. Could primes near powers of g2 and g3 be cognitively very special?
See the article A more detailed view about the TGD counterpart of Langlands correspondence or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Tuesday, April 15, 2025

Impossible device creates free energy in the Earth's magnetic field

Sabine Hossenfelder has a Youtube talk (see this) with the title "'Impossible' Device Creates Free Electricity from Earth's Magnetic Field". It tells about a very interesting anomaly, which could be the anomaly found already by Faraday but forgotten since it does not quite fit the framework of Maxwellian electrodynamics. I learned of this phenomenon during my student days. The effect was an exercise in an electrodynamics course but neither I nor others realized that the effect seems to be in conflict with Maxwell's theory!

If I understood correctly, the effect has now been firmly re-established in the case of the Earth's magnetic field (see this). The electric field would be created by a static dipole assignable to the magnetic field of the Earth, with respect to which the Earth rotates.

If my interpretation is correct, an analogous effect occurs also for the Faraday disk, which is a conducting disk rotating around its symmetry axis. Faraday observed that a very small radial electric field Eρ= ωρB (c=1) is generated. This radial electric field can be obtained from a vector potential At= ρ^2ωB/2. This corresponds to an electric charge density proportional to ωB inside the disk. This looks strange: how can rotation generate electric charge? Does this conform with Maxwell's laws?

  1. What comes to mind is that Maxwell's induction law, implied by special relativity, explains the effect. However, the rotation is not a rectilinear motion, although the magnitude of the velocity is constant, so that the effect is more general than that predicted by the Faraday law. Furthermore, the magnetic field rotates, and at least in quantum theory nothing should happen if the rotational symmetry is exact.
  2. Could the charge generation be a dynamical phenomenon? Could there be a generation of a surface charge compensating for the charge density in the interior? The sign of this charge density depends on the direction of rotation, so that the surface charge would be positive for one of the two directions of rotation. One would expect the surface charge to be negative since electrons are the charge carriers. Also a large parity violation would take place.
One could understand the effect in terms of the notion of induced gauge field. The explanation of the Faraday effect was one of the first applications of TGD (see this). The phenomenon is familiar to free energy researchers, whom academic researchers do not count as real researchers, and also technological applications have been proposed (see this).
  1. In the TGD framework, space-time is a 4-surface and gauge fields are induced, so that their geometrization is obtained. This means that the electroweak vector potentials are projections of the spinor connection of CP2. Let (cos(Θ),φ) be spherical coordinates for the geodesic sphere S2 of CP2. The Kähler gauge potential is Aφ= cos(Θ) and the Kähler form is JΘφ= sin(Θ). Introduce cylindrical coordinates (t,z,ρ,φ) for M4 and the space-time surface.
  2. The simplest space-time surface describing the situation without rotation corresponds to the embedding (cos(Θ),φ) = (f(ρ),nφ), n integer. The non-vanishing component of the induced gauge potential is Aφ= nf(ρ) and the induced magnetic field is Bz= n∂ρf. The choice f=Bρ gives a constant magnetic field.
  3. The rotation of the space-time surface implies φ → φ−ω t, so that the induced vector potential gets a time component At= fω giving rise to the electric field E= ρω B. This is what the Faraday law extended to curvilinear motion would give. One could interpret the Faraday effect as direct evidence for the notion of induced gauge field (see this).
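As a sanity check on the orders of magnitude: the radial field Eρ = ρωB integrates over the disk radius to the standard Faraday-disk EMF V = ωBR²/2. A minimal numerical sketch (the parameter values are illustrative, not from the text):

```python
# Integrate E_rho = rho*omega*B over the disk radius to get the Faraday-disk EMF.
# Illustrative parameters; B is of the order of the geomagnetic field.
import numpy as np

def faraday_disk_emf(omega, B, R, n=10001):
    rho = np.linspace(0.0, R, n)
    E = rho * omega * B                      # radial field E_rho = rho*omega*B
    # trapezoidal integration; exact here since E is linear in rho
    return float(np.sum(0.5 * (E[1:] + E[:-1]) * np.diff(rho)))

omega = 2 * np.pi * 50.0    # 50 rotations per second
B = 5e-5                    # tesla, order of the Earth's magnetic field
R = 0.1                     # disk radius in meters
print(faraday_disk_emf(omega, B, R))   # ~ omega*B*R**2/2 = 7.9e-5 V
```

The resulting EMF is at the level of tens of microvolts, which makes clear why the effect is so easy to miss experimentally.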
How should one describe the generation of the em charge? Is the charge a purely geometric vacuum charge without any charge carriers, or are charge carriers involved?
  1. Could there be a charge transfer between the disk and a third party? In TGD, the third party would be what I call the field body, which plays a key role in the explanation of numerous anomalies. TGD predicts the possibility of both electric and magnetic field bodies, which are space-time surfaces giving rise to the TGD counterparts of Maxwellian fields and gauge fields.
  2. The field bodies are carriers of macroscopic quantum phases with a large effective Planck constant heff=nh0, h= (7!)² h0 (a good guess). For the electric field body, ℏem would be proportional to the product of an elementary particle charge q and a large em charge Q associated with a negatively charged system, such as DNA, a cell, the Earth, or a capacitor, giving rise to a large scale electric field. For the gravitational magnetic body, ℏgr would be proportional to the product of a large mass M, such as the mass of the Earth or the Sun, and a small mass m.
  3. Both signs of the charge of the rotating disk are in principle possible and are determined by the direction of the rotation. In living matter, however, negative charge is typical and could be generated by the Pollack effect, which transforms ordinary protons to dark protons at the gravitational or electric field body associated with the system and induces the generation of an exclusion zone (EZ) with negative charge, giving rise to an electric field body carrying dark electrons. The reversal of the Pollack effect would bring the protons back. Electrons could be transferred to the electric body or return from it. This would mean a large parity breaking effect and could relate closely to chiral selection in living matter. TGD indeed predicts large parity breaking effects since macroscopic electroweak fields are predicted to be possible.
For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

About the physical interpretation of gravastar in the TGD framework

The gravastar model for a blackhole-like object describes the stellar interior as de-Sitter space and the exterior in terms of the Schwarzschild metric. The surface of the blackhole is predicted to carry an exotic phase. In the previous post (see this) I demonstrated that the de-Sitter metric allows a realization as a space-time surface and that blackhole-like objects, and in fact all stars, could be modelled in this way.

In the sequel I will consider the physical interpretation of de-Sitter space-time represented as a 4-surface in the TGD framework.

  1. In TGD, the twistor lift predicts a cosmological constant Λ with a correct sign (see this and this). The twistor lift of TGD predicts that Λ= 3/α², where α is a length scale, which is dynamical and has a spectrum. The mass density ρ is associated with the volume term of the dimensionally reduced action having 3/(8π Gα²) as a coefficient. Also the Kähler action is present and contains a CP2 part and possibly also an M4 part.

    Λ is not a universal constant in TGD but depends on the size scale of the space-time sheet. The naive estimate is that it corresponds to the size scale of the space-time sheet associated with the system or with its field body, which can be much larger than the system.

    The p-adic length scale hypothesis suggests that, apart from a numerical constant, the scale LΛ=(1/Λ)^1/2 equals the p-adic length scale Lp characterizing the space-time sheet. The p-adic length scale hypothesis Lp ∝ p^1/2, where the prime p satisfies p∼ 2^k, implies L(k)= 2^(k-151)/2 L(151), L(151)∼ 10 nm.

  2. How does the average density of an astrophysical object, or of an even smaller object, relate to the vacuum energy density determined by Λ? There are two options: the vacuum energy density either gives an additional contribution to the average energy density or determines it completely. In the latter case one must assume quantum classical correspondence, stating that the quantal fermionic contributions to the energy and other conserved quantum numbers are identical with the classical contributions, so that there would be a kind of duality. This would hold true only for the eigenvalues of the charges of the Cartan algebra.
  3. One can assign to the cosmological constant a length scale as the geometric mean

    lΛ= (lP LΛ)^1/2 ,

    where the Planck length is defined as lP= (ℏ G)^1/2. One therefore obtains 3 length scales: the Planck length lP, the large length scale LΛ and their geometric mean lΛ.

  4. What is the relationship to the spectrum of Planck constants predicted by the number theoretical vision of TGD? If one replaces ℏ with ℏeff=nh0, one obtains a spectrum of gravitational constants G and of Planck length scales. The CP2 size scale R ∼ 10^4 lP is a fundamental length scale in TGD. One can argue that G is expressible as Geff= lP²/ℏeff and that the CP2 length scale satisfies R=lP for the minimal value h0 of heff, so that one obtains Geff= R²/ℏeff. For h0 one has the estimate h= (7!)² h0 in terms of the Planck constant h. This would predict a hierarchy of weakening values of G.

    Note that lP= (ℏeff G)^1/2 would predict the scaling lΛ ∝ ℏeff^1/4. The gravitational Planck constant ℏgr= GMm/β0 assigned to the system formed by a large mass M and a small mass m has very large values.
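The p-adic prediction L(k) = 2^(k-151)/2 L(151) quoted above is easy to tabulate. A small sketch (the normalization L(151) = 10 nm is taken from the text; the chosen k values are the Gaussian Mersennes often highlighted in TGD):

```python
# p-adic length scales L(k) = 2**((k-151)/2) * L(151) with L(151) ~ 10 nm.
L151 = 1e-8  # meters

def padic_length_scale(k):
    return L151 * 2 ** ((k - 151) / 2)

for k in (151, 157, 163, 167):
    print(k, padic_length_scale(k))
# e.g. L(167) = 2**8 * 10 nm = 2.56 micrometers
```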

It is interesting to look at the values of lΛ associated with LΛ, characterizing the size scale of a physical system or possibly of its field body.
  1. For the "cosmological" cosmological constant one has LΛ∼ 10^61 lP giving lΛ∼ 10^30.5 lP ∼ 5× 10^-5 m. This corresponds to the size scale of a neuron. LΛ could characterize the largest layer of its field body with a cosmological size scale.
  2. A blackhole with the mass of the Sun has Schwarzschild radius rS= 3 km. LΛ=rS gives lΛ∼ 2.19× 10^-16 m. The Compton length of the proton is lp=2.1× 10^-16 m. This estimate motivated the proposal that stellar blackholes could correspond to volume filling flux tubes containing a sequence of protons with one proton per proton Compton length. This monopole flux tube would correspond to a very long nuclear string defining a gigantic nucleus. This result conforms with quantum classical correspondence stating that the vacuum energy density corresponds to the density of fermions.
  3. One can also look at what one obtains for the Sun with radius RS= 6.9× 10^8 m, which is in a good approximation 100 times the radius RE= 6.4× 10^6 m of the Earth. lΛ scales up by the ratio (RS/rS)^1/2 to lΛ ∼ 5.7× 10^2× lP∼ 1.3× 10^-14 m. This corresponds to a nuclear length scale and the corresponding particle would have a mass of about 17 MeV. Is it a mere coincidence that there is recent very strong evidence (23 sigmas!) from the so-called Ytterbium anomaly (see this) for the so-called X boson with mass 16-17 MeV (see this and this)?

    The corresponding vacuum energy density ℏ/lΛ^4 would be about 8× 10^38 mp/m³. This is 12 orders of magnitude higher than the average density 0.9× 10^27 mp/m³ of the Sun. Since lΛ ∝ LΛ^1/2 and ρ ∝ lΛ^-4 ∝ LΛ^-2, one obtains LΛ≥ 10^12 RS∼ 10^20 m ∼ 10^5 ly, which corresponds to the size scale of the Milky Way.

    The only reasonable interpretation seems to be that LΛ characterizes the lengths of monopole flux tubes, which fill the volume only for blackhole-like objects. The TGD based model for the Sun involves monopole flux tubes connecting the Sun with the galactic nucleus or a blackhole-like object (see this). In this case the density of matter at the flux tubes would be much higher since protons would be replaced with their M89 counterparts with 512 times higher mass. For this estimate, the vacuum energy density along the flux tubes would be the average density of the Sun. At least two kinds of flux tubes would be required and this is consistent with the notion of many-sheeted space-time.

    The proposed solar model, in which the solar wind and energy would be produced in the transformation of M89 nuclei to ordinary M107 nuclei, allows to consider the possibility that the Sun and stars are blackhole-like objects in the sense that the interior contains a volume filling flux tube tangle carrying a vacuum energy density equal to the average solar mass density. I have considered this kind of model earlier (see this).

    One can wonder whether scaling up the value of h to heff could help to reduce the vacuum energy density assigned to the Sun. Since lΛ∝ ℏeff^1/4, the density, proportional to ℏeff/lΛ^4, does not depend on the value of heff.
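The geometric-mean estimates in the list above can be reproduced numerically. A sketch, assuming the standard values lP ≈ 1.6× 10^-35 m and rS = 3 km:

```python
# l_Lambda = sqrt(l_P * L_Lambda): check the neuron-scale and proton Compton
# scale estimates quoted in the text (orders of magnitude only).
from math import sqrt

l_P = 1.616e-35  # Planck length in meters

def l_lambda(L_Lambda):
    return sqrt(l_P * L_Lambda)

# "Cosmological" case: L_Lambda ~ 10^61 l_P gives l_Lambda ~ 5e-5 m,
# a cell/neuron size scale.
print(l_lambda(1e61 * l_P))
# Solar-mass blackhole: L_Lambda = r_S = 3 km gives l_Lambda ~ 2.2e-16 m,
# close to the proton Compton length 2.1e-16 m.
print(l_lambda(3e3))
```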

To sum up, TGD could allow the interior of the gravastar solution as a space-time surface, and this would correspond to the simplest imaginable model for the star. It is not clear whether Einstein's equations can be satisfied for some action based on the induced geometry, but the volume action is an excellent candidate even if a cosmological constant is not allowed. In the TGD framework, the cosmological constant would correspond to the volume action as a classical action.

The Schwarzschild metric as the exterior metric is representable as a space-time surface (see this), although it need not be consistent with any classical action principle and it could indeed make sense only at the quantum field theory limit, when the many-sheeted space-time is replaced with a region of M4 made slightly curved. The spherical coordinates for the Schwarzschild metric correspond to spherical coordinates for the Minkowski metric and the Schwarzschild radius is associated with the radial coordinate of M4. The exotic matter at the surface of the star as a blackhole-like entity could have a counterpart in the TGD based model of the star (see this).

See the article Does the notion of gravastar make sense in the TGD Universe? or the chapter Some Solar Mysteries.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, April 14, 2025

Does the notion of gravastar make sense in the TGD Universe?

Mark McWilliams asked for my TGD based opinion about the gravastar as a competing candidate for the blackhole (see this and this). The metric of the gravastar model would be the de-Sitter metric in the interior of the gravastar. The density would be constant and there would be no singularity at the origin. The condition ρ=-p would be true for de-Sitter space and there would be an analogy with dark energy, which in the TGD framework contributes to galactic dark matter, identified as classical volume and magnetic energies of what I call cosmic strings, which are 4-surfaces with a string world sheet as M4 projection. The condition ρ=p would hold true for the ultrarelativistic matter at the surface, which indeed has a light-like metric if the infinite value of the radial component grr of the Schwarzschild metric is deformed to a finite value at the horizon. In the exterior one would have ρ=p=0.

TGD suggests a model of a blackhole-like object as a volume filling monopole flux tube tangle, which carries a constant mass density, which can be interpreted as dark energy as a sum of classical magnetic and volume energies. Quantum classical correspondence forces us to ask whether the descriptions in terms of sequences of nucleons and in terms of classical energy are equivalent or whether the possibly dark nucleons must be added as a separate contribution. I have discussed the TGD based model of blackhole-like objects earlier (see this).

It came as a surprise to me that the gravastar could serve as a simple model for this structure and describe the space-time sheet at which the monopole flux tube tangle is topologically condensed. TGD also suggests that the surface of the star carries a layer of M89 matter consisting of scaled variants of ordinary hadrons with a mass scale which is 512 times higher than that for ordinary hadrons. This would be the counterpart of the exotic matter at the surface of the gravastar (see this). This model predicts that the nuclear fusion at the core of the star is replaced with a transformation of M89 hadrons to ordinary hadrons. This would explain the energy production of the star and also the stellar wind, and raises the question about the structure of the interior. I have proposed that it could be a quantum coherent system analogous to a cell.

Consider now the TGD counterpart of the gravastar model at quantitative level.

  1. The metric of AdSn (anti-de-Sitter space) resp. dSn (de-Sitter space) can be represented as the metric induced on a space-like resp. time-like hyperboloid of (n+1)-dimensional Minkowski space with one time-like dimension. The metric is induced from the flat metric

    dx0² - ∑i=1,...,n dxi² ,

    with metric tensor deducible from the representation

    x0² - ∑i=1,...,n xi² = ε α² ,

    as a surface. Here one has ε=-1 for AdSn and ε=1 for dSn.

    It should be warned that the Wikipedia definition of the dSn (see this) contains the right-hand side with a wrong sign (there is ε=-1 instead of ε=1) whereas the definition of AdSn (see this) is correct. For n=4 this could realize AdS4 resp. dS4 as a space-like resp. time-like hyperboloid of 5-D Minkowski space.

  2. In TGD this representation as a surface is not possible as such. One can however compactify the 5th space-like dimension and represent it as a geodesic circle of CP2: dx5² is replaced with R²dφ² and x5² with R²φ². The contribution of S1 to the induced metric is very small since R corresponds to the CP2 radius. The space-time surface would be defined by the condition

    a² = R²φ² + ε α² ,

    where a² = t²-x²-y²-z² defines the light-cone proper time a. In TGD it would be associated with the second half of the causal diamond (CD). A more convenient form is the following:

    R²φ² = a² - ε α² ,

    where a is the light-cone proper time coordinate of M4. This requires a² ≥ ε α². For ε=1 this implies a² ≥ α². For ε=-1 one has a² ≥ -α², so that also space-like hyperboloids are possible.

  3. If the embedding is possible, one obtains an infinite covering of S1 by the mass shells a² = R²φn² + ε α², where one has φn= φ + n2π. For φ → ∞ one has a → nR. The hyperboloids associated with the φn define a lattice of hyperboloids at this limit, a kind of time crystal.
  4. If the classical action is the Kähler action of CP2, this surface is a vacuum extremal since the CP2 projection is 1-dimensional. If also the contribution of the M4 Kähler action to the Kähler action, suggested by the twistor lift of TGD, is allowed, the action is instanton action and vanishes, although the induced M4 Kähler form does not vanish and defines a self-dual abelian field. It is not quite clear whether this is a vacuum extremal anymore.

    If the Kähler action vanishes, volume action is the natural guess for the classical action and minimal surface equations are indeed satisfied if S1 is a geodesic circle. The mass density associated with this action would be constant in accordance with the de-Sitter solution.

  5. Consider next the induced metric. One has

    φn = n2π + [(a/R)² - ε (α/R)²]^1/2 .

    This gives Rdφn/da = ± a/[a² - ε α²]^1/2. Note that a² ≥ ε α² is required to guarantee the reality of dφ/da. The gaa component of the induced metric (Robertson-Walker metric with k=-1 sub-critical mass density) is

    gaa = 1 - R²(dφn/da)² = 1 - a²/(a²+ε α²) = εα²/(a²+εα²) .

It is useful to consider AdS4 and dS4 separately.
  1. For AdS4 with ε=-1, the reality of dφ/da implies a² > -α², implying gaa < 0, so that the induced metric has an Euclidean signature. This is mathematically possible, and CP2 type extremals with Euclidean signature play an important role in the TGD based model of elementary particles. What Euclidean cosmology could mean physically is however not clear.
  2. For dS4 with ε=1, dφ/da is real for a²+α² > 0, implying a² ≥ -α². This allows all time-like hyperboloids and also some space-like hyperboloids. One has

    gaa = 1 - R²(dφn/da)² = 1 - a²/(a²+α²) = α²/(a²+α²) .

    gaa is positive in the range allowed by the reality of dφ/da.

  3. The mass density of the Robertson-Walker cosmology is obtained from the standard expression of the metric (note that one has dt² = gaa da²):

    ρ = (3/8πG)[((da/dt)/a)² - 1/a²] = (3/8πG)[1/(gaa a²) - 1/a²] = 3/(8πG α²) .

    The mass density is constant and could be interpreted in terms of a dynamically generated cosmological constant in the GRT framework. This is not what happens in the usual Big Bang cosmology but would conform with a model of a star in an expanding Universe.
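The last two steps can be verified symbolically. A sketch using sympy for the dS4 case (ε=1), with the embedding written in the form R²φ² = a² + α², the sign choice that reproduces the gaa and ρ quoted above:

```python
# Symbolic check: the induced g_aa = alpha**2/(a**2 + alpha**2) gives a constant
# mass density rho = 3/(8*pi*G*alpha**2) in the k=-1 Robertson-Walker form.
import sympy as sp

a, alpha, R, G = sp.symbols('a alpha R G', positive=True)

phi = sp.sqrt(a**2 + alpha**2) / R          # embedding: R**2*phi**2 = a**2 + alpha**2
g_aa = sp.simplify(1 - R**2 * sp.diff(phi, a)**2)
print(g_aa)                                 # alpha**2/(a**2 + alpha**2)

rho = sp.simplify((sp.Rational(3, 8) / (sp.pi * G)) * (1 / (g_aa * a**2) - 1 / a**2))
print(rho)                                  # 3/(8*pi*G*alpha**2), independent of a
```

The a-dependence cancels exactly, confirming that the density is constant, as required by the de-Sitter interior of the gravastar.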

Somewhat surprisingly, TGD could allow the interior of the gravastar solution as a space-time surface, and this would correspond to the simplest imaginable model for the star. It is not clear whether Einstein's equations can be satisfied for some action based on the induced geometry, but the volume action is an excellent candidate even if a cosmological constant is not allowed. In the TGD framework, the cosmological constant would correspond to the volume action as a classical action.

The Schwarzschild metric as the exterior metric is representable as a space-time surface (see this), although it need not be consistent with any classical action principle and it could indeed make sense only at the quantum field theory limit, when the many-sheeted space-time is replaced with a region of M4 made slightly curved. The spherical coordinates for the Schwarzschild metric correspond to spherical coordinates for the Minkowski metric and the Schwarzschild radius is associated with the radial coordinate of M4. The exotic matter at the surface of the star as a blackhole-like entity could have a counterpart in the TGD based model of the star (see this).

See the article Does the notion of gravastar make sense in the TGD Universe? or the chapter Some Solar Mysteries.

For a summary of the earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.