Thursday, April 30, 2020

Could brain be represented by a hyperbolic geometry?

There are proposals (see this) that the lattice-like structures formed by neurons in some brain regions could be mapped to discrete subsets of the 2-D hyperbolic space H^2, possibly tessellations analogous to the lattices of the 2-D plane. The standard representations of 2-D hyperbolic geometry are the Poincare half-plane and the Poincare disk. The map is rather abstract: the points of the tessellation would correlate with the statistical properties of the neurons rather than representing their geometric positions as such.

Remark: There is a painting by Escher visualizing the Poincare disk. From this painting one learns that the density of the points of the tessellation increases without limit as one approaches the boundary of the Poincare disk.

In the TGD framework, zero energy ontology (ZEO) suggests a generalization replacing H^2 with the 3-D hyperbolic space H^3. The magnetic body (MB) of any system, carrying dark matter as h_eff = n·h_0 phases, provides a representation of the system (or perhaps vice versa). Could the MB provide this kind of representation as a tessellation of the 3-D hyperboloid of a causal diamond (cd), defined as the intersection of the future and past directed light-cones of M^4? The points of the tessellation, labelled by a subgroup of SL(2,Z) or its generalization replacing Z with the algebraic integers of an extension of rationals, would be determined by the statistical properties of the system.

The positions of the magnetic images of neurons in H^3 would define a tessellation of H^3. The tessellation could be mapped to the analog of the Poincare disk - the Poincare ball - represented as the t=T snapshot (t is the linear Minkowski time) of the future light-cone. After t=T the neuronal system would no longer change in size. The tessellation could define a cognitive representation as a discrete set of space-time points with coordinates in some extension of rationals assignable to the space-time surface representing the MB. One can argue that the MB has cylindrical rather than spherical symmetry more naturally, so that one can also consider a cylindrical representation in E^1×H^2, in which case the symmetry would be broken from SO(1,3) to SO(1,2).
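
As a side illustration (my own, not part of the proposal itself), the standard map from the unit hyperboloid t^2-x^2-y^2-z^2 = 1 to the Poincare ball is u = (x,y,z)/(1+t). A minimal numerical sketch shows how points far out on the hyperboloid pile up near the boundary of the unit ball - the 3-D analog of the crowding seen in Escher's disk:

    import numpy as np

    def hyperboloid_point(rapidity, direction):
        """Point on the upper sheet of t^2 - |x|^2 = 1."""
        n = np.asarray(direction, dtype=float)
        n = n / np.linalg.norm(n)
        return np.cosh(rapidity), np.sinh(rapidity) * n

    def to_poincare_ball(t, x):
        """Standard projection of the hyperboloid to the open unit ball."""
        return x / (1.0 + t)

    for eta in [0.5, 1.0, 2.0, 4.0, 8.0]:
        t, x = hyperboloid_point(eta, [1.0, 1.0, 0.0])
        u = to_poincare_ball(t, x)
        print(f"rapidity {eta:3.1f} -> |u| = {np.linalg.norm(u):.6f}")
    # |u| approaches 1 as the rapidity grows: points accumulate near the
    # boundary of the Poincare ball, just as in the 2-D Poincare disk.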

M^8-H duality would make it possible to interpret the special value t=T in terms of a special 6-D brane-like solution of the algebraic equations in M^8, having an interpretation as a "very special moment of consciousness" for a self having a CD as its geometric correlate. Physically it could correspond to a (biological) quantum phase transition decreasing the value of the length scale dependent cosmological constant Λ, in which the size of the system increases by a factor that is a power of 2. This proposal is extremely general and would apply to cognitive representations at the MB of any system.

See the article Could brain be represented by a hyperbolic geometry?.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.


Thursday, April 23, 2020

Decapitated wasp grasping its head and flying away


Runcel Arcaya gave a link to a very interesting popular article telling about the rather surreal behavior of decapitated wasps. The wasp just grabs its head and flies away! Also a decapitated hen can fly, and I remember a story that some decapitated animals start to move towards nearby water.

The standard explanation for the ability of the insect to move would be that the insect brain is far less important than the brains of higher animals: the ganglia of the ventral nerve cord take care of motor control. This looks reasonable.

One can of course wonder how the insect can fly if it does not see - the eyes are in the head, which it has lost. The flying could of course be completely random.

These findings force one to challenge the belief that the brain is the seat of consciousness. Actually one must also challenge the belief that the biological body is the seat of consciousness.


  1. The notion of magnetic body (MB) is more or less forced by the fact that the brain codes information into EEG and sends it to space: this waste of metabolic energy makes no sense if there is no receiver. Also, the sensory data is a fraction of a second old: this finds an explanation since it takes some time to communicate it from the brain to the MB. This allows one to estimate the size of the MB: it has layers with size scales of the order of the Earth's size and even larger.

  2. The macroscopic coherence of the biological body is not possible without macroscopic coherence at the control level, and standard quantum mechanics does not provide it: the Planck constant is simply too small. Hence dark matter at the MB.

  3. Even further, the idea that consciousness is a property of a physical system must be challenged at the fundamental level. Conscious experience occurs in the quantum jump replacing the quantum state with a new one - between the old and the new world, in a moment of creation. This picture solves the logical paradoxes of the physicalistic and idealistic paradigms.

The TGD based view about the quantum jump provides another perspective on the situation.
  1. "Big" (ordinary) state function reductions in zero energy ontology change the arrow of time. This is essential for the new view about self-organization apparently breaking the second law. A time evolution obeying the second law in the non-standard time direction looks, in the standard time direction, like self-organization generating order and coherence, and the dissipation of energy looks, in the standard time direction, like extraction of energy from the environment - a feed of metabolic energy.

  2. This explains Libet's experiments apparently showing that the experience of free will is caused by neural activity. The macroscopic quantum jump would correspond to this experience, and the time evolutions starting from its final state would lead to the geometric past and cause the brain activity.

  3. Motor actions would be realizations of free will induced by "big" (ordinary) quantum jumps at the MB carrying dark matter as h_eff = n×h_0 phases and inducing coherent actions at the level of ordinary matter. Also an effective change of the arrow of time would be induced at the level of ordinary bio-matter.

  4. In the case of the decapitated insect, motor actions would involve similar macroscopic quantum jumps. The effects of the motor activity propagating backwards in time would start from the level of the body and would not reach the brain - but this would not be a problem!

  5. In the TGD framework one can ask whether the eyes still see and whether the information about the visual percepts still goes to the magnetic body (MB) of the insect, which controls the biological body. It would be enough to keep the head - and just this the wasp does!

See the article Getting philosophical: some comments about the problems of physics, neuroscience, and biology.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, April 20, 2020

Could ZEO provide a new approach to the quantization of fermions?


The exact details of the quantization of fermions have remained open in the TGD framework. The basic problem is the possibility of divergences coming from the anti-commutators of fermions, which are expected to involve delta functions in the continuum case. In the standard framework normal ordering saves one from these divergences for the "free" part of the action, but the higher order terms give the usual divergences of quantum field theories. In supersymmetric theories the normal ordering divergences however cancel.

What happens in TGD?

  1. The replacement of point-like particles with 3-surfaces replaces the dynamics of fields with that of surfaces. The resulting non-locality in the scale of 3-surfaces gives excellent hopes for the cancellation of divergences in the bosonic sector. The situation is very similar to that in superstring models.

  2. What about fermions? The TGD counterpart of the Dirac action - the modified Dirac action - is dictated uniquely by the bosonic action, which is induced from the twistor lift of TGD as the sum of the Kähler action, analogous to the Maxwell action, and of a volume term (see this). Supersymmetry in the TGD sense is proposed here.

    In the second quantization based on cognitive representations (see this) - the unique discretization of the space-time surface for the adele defined by an extension of rationals - superpartners would correspond to local composites of quarks and anti-quarks. One obtains the TGD variant of the super-space of the SUSY approach: space-time as a 4-surface is replaced with its super-variant, identified as a union of the surfaces associated with the components of the super coordinates. Fermions are correlates of the quantum variant of Boolean logic, which can be seen as a square root of Riemann geometry. There is no need for Majorana fermions.

    This approach replaced the earlier view in which right-handed neutrinos served as generators of N=2 SUSY. In the approach to be discussed, their counterparts as local 3-quark composites make a comeback in a more precise formulation of the earlier picture.

    The simplest option involves only quarks as fundamental fermions; leptons would be local composites of 3 quarks, which is made possible by the TGD based view about color. Quark oscillator operators are enough for the construction of the gamma matrices of the "world of classical worlds" (WCW, see this), and these inherit their anti-commutators from those of the fermionic oscillator operators. Even a super-variant of WCW can be considered. The challenge is to fix the anti-commutation relations for the oscillator operator basis at a 3-D surface: the modified Dirac equation would then dictate them at later times. This is not a trivial problem. One can also wonder whether one can avoid the normal ordering divergences.

  3. In a discretization defined by cognitive representations (see this, this, this, and this) the anti-commutators of fermions and antifermions do not produce problems, but in the continuum variant of this approach one obtains normal ordering divergences. The adelic approach (see this) suggests that the continuum variant of the theory, and also that of WCW, must exist, so that one should find a manner to define the quantization of fermions so that one gets rid of the divergences.

One can start by collecting a list of reasonable looking conditions possibly leading to an understanding of the fermionic quantization, in particular of the anticommutation relations.
  1. The quantization should be consistent with the number theoretic vision implying discretization in terms of cognitive representations. Could one assume that the anti-commutator of the quark field with its conjugate is, in the discretization, just a Kronecker delta, so that the troublesome squares of delta functions could be avoided already in the Dirac action and in the expressions of conserved quantities without performing normal ordering, which is a somewhat ad hoc procedure? (A toy construction of discrete oscillator operators satisfying exactly such Kronecker delta anti-commutators is sketched after this list.)

    The anti-commutators of the induced spinor fields located at the opposite boundaries of a CD - and quite generally, at points of H = M^4×CP_2 (or of M^8 by M^8-H duality) with non-space-like separation - should be determined by the time evolution of the induced spinor fields given by the modified Dirac equation.

    In the case of a cognitive representation one could fix the anti-commutators for a given time slice of M^4×CP_2 as the usual Kronecker delta for the set of points with algebraic coordinates, so that if the anti-commutators of fermionic operators between the opposite boundaries of the CD were not needed, everything would be well-defined. By solving the modified Dirac equation for the induced spinors one can indeed express the induced spinor field at the opposite boundary of the CD in terms of its values at the given boundary. Doing this in practice is however difficult.

  2. The situation gets more complex if one requires that also the continuum variant of the theory exists. One encounters problems with the fermionic quantization since one expects delta function singularities giving rise to at least normal ordering singularities. The most natural manner to quantize the quark fields is as free fields in H = M^4×CP_2 expanded in harmonics of H. This however implies 7-D delta functions and bad divergences coming from them. Can one get rid of these divergences by changing the standard quantization recipe, based on the ordinary ontology in which one has an initial value problem on a time = constant snapshot of space-time, to a quantization more appropriate in zero energy ontology (ZEO)?
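
The following toy construction (my own illustration, not part of the TGD formalism) shows what Kronecker delta anti-commutators look like in a finite discretization: a Jordan-Wigner realization of N fermionic modes for which {a_i, a_j^dag} = delta_ij holds exactly, so that no normal ordering infinities can arise.

    import numpy as np

    def jordan_wigner_ops(n_modes):
        """Annihilation operators for n_modes discrete fermionic modes,
        built from Pauli matrices via the Jordan-Wigner transformation."""
        I2 = np.eye(2)
        Z = np.diag([1.0, -1.0])
        s_minus = np.array([[0.0, 1.0], [0.0, 0.0]])   # lowers the occupation number
        ops = []
        for i in range(n_modes):
            factors = [Z] * i + [s_minus] + [I2] * (n_modes - i - 1)
            a = factors[0]
            for f in factors[1:]:
                a = np.kron(a, f)
            ops.append(a)
        return ops

    def anticommutator(A, B):
        return A @ B + B @ A

    n = 3
    ops = jordan_wigner_ops(n)
    dim = 2 ** n
    for i, a_i in enumerate(ops):
        for j, a_j in enumerate(ops):
            expected = (1.0 if i == j else 0.0) * np.eye(dim)
            assert np.allclose(anticommutator(a_i, a_j.conj().T), expected)
            assert np.allclose(anticommutator(a_i, a_j), np.zeros((dim, dim)))
    print("Discrete CAR algebra verified: {a_i, a_j^dag} = delta_ij, {a_i, a_j} = 0")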

The induction procedure plays a key role in the construction of classical TGD. A longstanding question has been whether the induction of the spinor structure could be generalized to the induction of the second quantization of free fermions from the level of the 8-D imbedding space to the level of space-time, so that the induced spinor field Ψ(x) would be identified as Ψ(h(x)), where h(x) denotes the imbedding space coordinates of the space-time point. One would have restrictions of the free fermion theory from the imbedding space H to the space-time surface.

The problem is that the anticommutators are 8-D delta functions in the continuum case and could induce rather horrible divergences. It will be found that zero energy ontology (ZEO) (see this) and the new view about space-time and particles allow one to modify the standard quantization procedure by making the modified Dirac action bi-local, so that one gets rid of the divergences. The rule is simple: a given partonic 2-surface - or even more generally, a given point of a partonic 2-surface - contains either creation operators or annihilation operators but not both. Also the multi-local Yangian algebras, proposed on the basis of physical intuition to be central in TGD, emerge naturally.

See the article Could ZEO provide a new approach to the quantization of fermions? or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, April 15, 2020

A solution of the Hubble constant discrepancy?


This comment was inspired by an interesting popular article about a possible explanation of the Hubble constant discrepancy (see this). The article told about a proposal by Lucas Lombriser (see the article Consistency of the local Hubble constant with the cosmic microwave background) explaining the discrepancy in terms of a local region around our galaxy having a size of the order of a few hundred Mly - this is the scale of the large voids forming a lattice-like structure with galaxies at their boundaries - and having an average density of matter 1/2 of that elsewhere.

Consider first the discrepancy. The Hubble constant characterizing the expansion rate of the Universe can be deduced from the cosmic microwave background (CMB). This corresponds to long length scales and gives the value H_cosmo = 67.4 km/s/Mpc. The Hubble constant can also be deduced from local measurements using so-called standard candles in the scale of the large voids. This gives the Hubble constant H_loc = 75.7 km/s/Mpc, which is about 10 percent higher.

The argument of the article is rather simple.

  1. It is a well-known fact that the Universe decomposes into giant voids with a size scale of 10^8 light years. The postulated local region would have this size, and its mass density would be reduced by a factor 1/2.

    Suppose that the standard candles used to determine the Hubble constant belong to this void, so that the density is lower than the average density. This would mean that the Hubble constant H_loc from local measurements using standard candles would be higher than H_cosmo from measurements of the CMB.

  2. Consider the geometry side of Einstein's equations. The Hubble constant squared is given by

    H^2 = (dlog(a)/dt)^2 = 1/(g_aa × a^2) .

    Here one has dt^2 = g_aa da^2; t is the proper time of a comoving observer and a is the scale factor of the Robertson-Walker metric. A reduction of H^2 is caused by an increase of g_aa as the density decreases. In the limit of an empty cosmology (the future light-cone) one has g_aa = 1, its maximal value, so that for a given a the Hubble constant is smallest at this limit. In the TGD framework a corresponds to the light-cone proper time coordinate.

  3. The matter side of Einstein's equations gives

    H^2 = (8πG/3)ρ_m + Λ/3 .

    The first contribution corresponds to matter and the second to dark energy, which dominates.

  4. It turns out that by reducing ρ_m by a factor 1/2, the value of H is reduced by about 10 percent, so that H_loc becomes consistent with H_cosmo. (A back-of-the-envelope check of the size of this effect is sketched after this list.)
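
The rough check promised above - my own sketch, assuming the standard density parameters Ω_m ≈ 0.3 and Ω_Λ ≈ 0.7, which are not quoted in the post: halving only the matter density changes H by the square root of the corresponding change in H^2 = (8πG/3)ρ_m + Λ/3, which comes out in the right ballpark of roughly ten percent.

    import math

    # Assumed background density parameters (not from the post itself).
    Omega_m, Omega_L = 0.3, 0.7

    H_local = 75.7   # km/s/Mpc, local (standard candle) value quoted above
    H_cmb = 67.4     # km/s/Mpc, CMB value quoted above

    # H^2 is proportional to (Omega_m * rho_m/rho_0 + Omega_L); halve only rho_m.
    ratio = math.sqrt(0.5 * Omega_m + Omega_L) / math.sqrt(Omega_m + Omega_L)
    print(f"H changes by a factor {ratio:.3f}, i.e. by about {100 * (1 - ratio):.0f} percent")
    print(f"Local value corrected by this factor: {H_local * ratio:.1f} km/s/Mpc "
          f"(to be compared with {H_cmb} km/s/Mpc)")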

Could one understand the finding in the TGD framework? It seems that the Hubble constant depends on the scale. This would be natural in the TGD Universe since TGD predicts a p-adic hierarchy of scales coming as half octaves. One can say that many-sheeted space-time gives rise to a fractal cosmology, a Russian doll cosmology.

Cosmological parameters would depend on the scale. For instance, the cosmological constant would naturally come as octaves of a basic value and approach zero in long length scales. Usually it is taken to be a genuine constant, and this leads to the well-known problem that its value would be huge by estimates in very short length scales. Also its sign comes out wrong in superstring theories, whereas the twistor lift of TGD predicts its sign correctly.

I have already earlier tried to understand the discrepancy in the TGD framework in terms of many-sheeted space-time, suggesting that the Hubble constant depends on the space-time sheet - the first attempts were among the first applications of TGD inspired cosmology decades ago - but I have not found a really satisfactory model. The new finding involving the factor 1/2 characteristic of the p-adic length scale hierarchy however raises hopes about progress at the level of details.

  1. TGD predicts a fractal cosmology, a kind of Russian doll cosmology, in which the value of the Hubble constant depends on the size scale of the space-time sheet. The p-adic length scale hypothesis states that the scales come as octaves. One could therefore argue that the reduction of the mass density by a factor 1/2 in the local void is natural. One can however find objections.

  2. The mass density scales as 1/a^3, and one could argue that the scaling should therefore be like 2^(-3/2). Here one can counter-argue that in the TGD framework matter resides at magnetic flux tubes, and the density therefore scales down by a factor 1/2.

  3. One can argue that also the cosmological term in the mass density would naturally scale down by 1/2 as the p-adic length scale is scaled up by 2. If this happened, the Hubble constant would be reduced roughly by a factor 1/2^(1/2), since dark energy dominates. This does not happen.

    Should one assign Ω, the dark energy contribution, to a space-time sheet with a scale considerably larger than that of the sheets carrying the galactic matter? Should one regard the large void as a local sub-cosmology topologically condensed on a much larger cosmology characterized by Ω? But why not use the Ω associated with the sub-cosmology? Could it be that the Ω of the sub-cosmology is included in ρ_m?

Could the following explanation work? TGD predicts two kinds of magnetic flux tubes: monopole flux tubes, which ordinary cosmologies and Maxwellian electrodynamics do not allow, and ordinary flux tubes representing counterparts of Maxwellian magnetic fields. Monopole flux tubes need no currents to generate their magnetic fields, and this solves several mysteries related to magnetism: for instance, one can understand why the Earth's magnetic field has not decayed long ago through the dissipation of the currents creating it. Also the existence of magnetic fields in cosmic scales, impossible in standard cosmology, finds an explanation.
  1. The first kind of flux tubes carry only volume energy since the induced Kähler form vanishes for them and the Kähler action vanishes. There are however induced electroweak gauge fields present at them. I have tentatively identified the flux tubes mediating the gravitational interaction with these flux tubes.

    Could Ω correspond to the cosmological constant assignable to the gravitational flux tubes involving only volume energy, and be the same also in the local void? This would be because these flux tubes mediate the very long range and non-screened gravitational interaction and correspond to very long length scales.

  2. The second kind of flux tubes carry a non-vanishing monopole flux associated with the Kähler form, and their energy density is a sum of the volume term and the Kähler term. These flux tubes would be carriers of dark energy generating a gravitational field orthogonal to the flux tubes, explaining the flat velocity spectrum of distant stars around galaxies. These flux tubes would be present in all scales and would play a central role in the TGD based models of galaxies, stars, planets, quantum biology, molecular and atomic physics, nuclear physics, and hadron physics.

    These flux tubes suffer phase transitions increasing their thickness by a factor 2 and reducing their energy density by a factor 1/2. This gradually decreases the value of the energy density associated with them.

    Could the density ρ_m of matter correspond to the density of matter containing contributions from the monopole flux tubes and their decay products? ρ_m would then also contain the contribution from both the magnetic and the volume energy of the flux tubes. Could it have been scaled down in a phase transition reducing locally the value of the string tension of these flux tubes? Our local void would be one step further in the cosmic evolution proceeding by these reductions and would have experienced one more expansion of the flux tube thickness by a half octave than matter elsewhere.

To sum up, this model would rely on the prediction that there are two kinds of flux tubes and that cosmic evolution proceeds by phase transitions increasing the p-adic length scale by a half octave and reducing the energy density at the flux tubes by a factor 1/2. The local void would be one step further in this cosmic evolution than a typical void.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wolfram's proposal for discrete space-time dynamics

Wolfram introduced cellular automata based on a graph-like structure and rules for what can happen at the nodes of the graph. The graph-like structure was fixed. The proposal was that physics reduces to the basic rules of the cellular automaton.

Cellular automaton dynamics is a good model for high level self-organized systems: the laws of physics at this level are "traffic rules" selected by convention. Morality in a society is an abstract example; whether the rules are obeyed is not always obvious. The dynamics at the fundamental level is in my opinion more naturally given by a variational principle. In TGD the variational principle would determine space-time as a 4-surface in M^4×CP_2, or equivalently by algebraic equations for 4-surfaces in M^8 with associative tangent space.

In the recent proposal Wolfram starts from graphs encoding relationships between points and makes the graphs dynamical and able to change. He tries to get the continuous, curved space-time of general relativity as a limit of a graph with a very large number of nodes. There would be some rules for producing new graphs - a kind of space-time dynamics. I think that here one ends up with problems, or at least challenges: there is no limit on the possible rules. There are objections.

  1. How does one obtain 3-space and 4-D space-time? Restrictions on the homology could give the analog of an n-D manifold approximated by a collection of simplices. But why just these dimensions? This is a very tough problem.

  2. Difficulties begin as one tries to get the notions of distance and metric needed for Einstein's theory. One must assign a unit distance between nearest neighbours; the distance d between two points would be the minimal number of steps between them. This looks fine at first. (A small sketch of the resulting ambiguity is given after this list.)

    But what if one adds one point x between the neighbours a and b? Is the distance between a and x now d/2, so that the distance is halved? Or is the distance between a and b now 2d? Both interpretations would mean a dramatic change of the discretized metric under the addition of a single point.

    One might intuitively argue that the addition of new points improves the resolution, and that one adds new points by some rule everywhere, so that the distances between nearby points would naturally be scaled down by the same factor. But talking about resolution means that one already talks about conscious entities. And then there is the danger that one thinks of the structure as imbedded in a continuous space in order to make the rule determining the distances realistic. In other words, one is discretizing a continuous surface!

    In any case, the idea is that by adding more and more points to the graph one obtains, in the limit of infinitely many points, standard physics, and the graph looks like a 3-manifold. I am skeptical: the arguments involving the infrared limit, as it is called, are hand-waving.

  3. A further basic argument against discretization at the fundamental level is that one loses the nice symmetries of continuum space-time. Even more, the geometrization of these symmetries leads to a unique choice of the imbedding space in the TGD framework, forced both by standard model physics and by the existence of the twistor lift of TGD.

    Not surprisingly, Wolfram has a really hard time trying to convince the reader that energy and momenta emerge from his theory. Some kind of flows in the graphs must be introduced. The reason is obvious: Noether's theorem, giving an extremely deep connection between symmetries and conservation laws, is lost, and one can only try to invent purely ad hoc arguments, which are doomed to be wrong.

    One could of course consider a discretization of the fundamental symmetries for the graphs, fixing also the distances between the points of the graphs, but this would lead to the identification of the graphs as surfaces in some space - say M^4×CP_2 or equivalently M^8 - in order to get the distances between the points of the discretization correctly.
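
The sketch promised in the second objection (my own toy example; networkx is used only for convenience): with the unit-edge convention, inserting a single new node between two neighbours changes some graph distances but not others, so a local refinement changes the discretized metric unless every edge is refined by the same rule - which implicitly assumes an ambient continuum.

    import networkx as nx

    # A path graph a - b - c with unit-length edges.
    G = nx.Graph([("a", "b"), ("b", "c")])
    print(nx.shortest_path_length(G, "a", "c"))   # 2 steps

    # "Improve resolution" by inserting a new node x between a and b only.
    G.remove_edge("a", "b")
    G.add_edges_from([("a", "x"), ("x", "b")])
    print(nx.shortest_path_length(G, "a", "b"))   # was 1, is now 2
    print(nx.shortest_path_length(G, "a", "c"))   # was 2, is now 3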

I have been preaching about a completely different approach, based on the properties of cognition. Cognition must have physical correlates. Every person doing numerics knows that cognition is discrete and finite. Cognitive representations should make sense and should be discrete and perhaps even finite. Could continuum physics have unique cognitive representations? Certainly not in standard physics.
  1. As a consciousness theorist I have been talking a lot about adelic physics as a number theoretical generalization of TGD in which space-times are 4-D continuous surfaces in a certain 8-D imbedding space - H = M^4×CP_2, or complexified M^8 with an interpretation in terms of octonions, the choices being equivalent by M^8-H duality - having the symmetries of special relativity and the standard model. One introduces besides the reals also p-adic number fields as correlates of the physics of cognition. The p-adic variants of space-time surfaces obey the same field equations as their real counterparts, and one can say that they mimic the real physics.

    At the M^8 level one can say that space-time surfaces are roots of octonionic continuations of polynomials with rational coefficients, which thus define extensions of rationals via their roots. Physics at the M^8 level would be purely algebraic. At the level of H the space-time surfaces would be minimal surfaces with string world sheets as singularities. A minimal surface is a geometric analog of a massless field.

  2. Extensions of rationals assignable to polynomials with rational coefficients play a key role in the theory, and the space-time surfaces allow a unique discretization in highly unique preferred coordinates made possible by the symmetries of the imbedding space. At the level of M^8 the coordinates are determined apart from a time translation.

    The points of the discretization have coordinates in the extension of rationals considered. I call these discretizations cognitive representations. What is remarkable is that the cognitive representation makes sense both as a set of points of the real space-time surface and of its p-adic counterparts. It is in the intersection of reality and the various p-adicities (or rather of their extensions induced by the extension of rationals defining the adele).

  3. Space-time would not be discrete at the fundamental level. The cognitive representations of it would however be discrete. In the limit of algebraic numbers the cognitive representation would be a dense set of algebraic points of the space-time surface. The hierarchy of extensions of rationals would define an evolutionary hierarchy since in quantum jumps the dimension of the extension must increase in the statistical sense. The dimension n of the extension would actually correspond to the effective Planck constant h_eff = n·h_0 assignable to dark matter as phases of ordinary matter. This would give a direct connection with quantum physics.
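
A toy example (mine, not from the post) of how a continuous surface can carry an arbitrarily dense set of points with coordinates in a number field: the rational points of the unit circle obtained from the parametrization x = (1-t^2)/(1+t^2), y = 2t/(1+t^2) with rational t. A cognitive representation in the above sense would be the analogous - but much subtler - construction for a space-time surface and an extension of rationals.

    from fractions import Fraction

    def rational_circle_points(max_den):
        """Rational points (x, y) on x^2 + y^2 = 1 from rational slopes t = p/q."""
        points = set()
        for q in range(1, max_den + 1):
            for p in range(-q, q + 1):
                t = Fraction(p, q)
                x = (1 - t * t) / (1 + t * t)
                y = 2 * t / (1 + t * t)
                points.add((x, y))
        return points

    pts = rational_circle_points(6)
    assert all(x * x + y * y == 1 for x, y in pts)   # exact, no rounding errors
    print(f"{len(pts)} distinct rational points on the unit circle")
    # Increasing max_den makes the set denser: the continuum is not discretized,
    # but it carries arbitrarily fine exact representations of itself.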

This approach is the diametric opposite of Wolfram's. It produces general relativity and the standard model at the quantum field theory limit. From the TGD point of view the mistake of Wolfram is to forget cognition and consciousness altogether. Wolfram has become one item in the long list of gullible victims of physicalism.

See for instance the article The philosophy of adelic physics.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, April 14, 2020

About the description of rotating magnetic systems in zero energy ontology (ZEO)

I have worked for decades in an attempt to understand the findings of Godin and Roschin about strange effects in rotating magnetic systems. I have also discussed possible connections with TGD inspired quantum biology from the point of view of the h_eff = n·h_0 hierarchy. The developments in zero energy ontology (ZEO) and the increased understanding of magnetic fields in the TGD framework allow one to look at the situation again. It seems that the strange findings can be understood as being related to a macroscopic variant of a "big" (ordinary) state function reduction in which the arrow of time is changed. I am not an engineer, but a more precise model might allow the development of simpler systems catching just the essentials, and also a scaling down of the system of Godin and Roschin, perhaps allowing easier testing of the model.

See the article About the description of rotating magnetic systems in zero energy ontology (ZEO) or the chapter The anomalies in rotating magnetic systems as a key to the understanding of morphogenesis.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, April 07, 2020

Mathematical bridge connecting Diophantine equations and spectra of automorphic functions


I received a link to a popular article published in Quanta Magazine with the title ‘Amazing’ Math Bridge Extended Beyond Fermat’s Last Theorem, suggesting that Fermat's last theorem could generalize and provide a bridge between two very different pieces of mathematics - a bridge suggested also by the Langlands correspondence.

I would be happy to have the technical skills of a real number theorist, but I must proceed using physical analogies. What the theorem states is that there are two quite different mathematical systems which have a deep relationship to each other.

  1. Diophantine equations for natural numbers, which are determined by polynomials. Their solutions can be regarded as roots of a polynomial P(x) containing a second variable y as a parameter. The roots which are pairs of integers are of interest now. One could also consider all roots as functions of y.

  2. The second system consists of automorphic functions in lattice-like systems, tessellations. They are encountered in the Langlands conjecture, whose possible physical meaning I still fail to really understand.

    The hyperboloid L (L for Lobatchevski space), defined as the surface t^2-x^2-y^2-z^2 = constant of Minkowski space (a particle physicist talks about a mass shell), is a good example of this kind of system. On the tessellations of this kind of space one can define automorphic functions, which are quasiperiodic in the sense that the values of the function are fixed once one knows them in a single cell of the lattice. Bloch waves serve as a condensed matter analog.

    One can assign to an automorphic function what the article calls its "energy spectrum". In the case of the hyperboloid it could correspond to the spectrum of the d'Alembertian - this is a physicist's natural guess. An automorphic function could be analogous to a partition function built from basic building bricks invariant under the subgroup of the Lorentz group leaving the fundamental cell invariant. The zeta function assignable to an extension of rationals as a generalization of the Riemann zeta is one example.

What could the discovery be? I can make only humble guesses. The popular article tells that the "clock solutions" of a given Diophantine equation in the various finite fields F_p are in correspondence with the "energy" spectra of some automorphic form defined in some space.

The problem of finding the automorphic forms is difficult, and the message is that here great progress has occurred. The so-called torsion coefficients of the modular form would correspond to the integer valued roots of the Diophantine equations in the various finite fields F_p. What could this statement mean?
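
As a concrete illustration of the F_p ("clock arithmetic") side - my own toy example, not the case treated in the Quanta article - one can count the solutions of a fixed Diophantine equation, here the elliptic curve y^2 = x^3 + x + 1, in F_p for a few primes. In the classical modularity setting it is exactly such counts (more precisely a_p = p - N_p) that reappear as coefficients of a modular form - the prototype of the bridge.

    def count_affine_solutions(p):
        """Number of pairs (x, y) in F_p x F_p satisfying y^2 = x^3 + x + 1 (mod p)."""
        return sum(1 for x in range(p) for y in range(p)
                   if (y * y - (x ** 3 + x + 1)) % p == 0)

    for p in [2, 3, 5, 7, 11, 13, 17, 19]:
        n_p = count_affine_solutions(p)
        print(f"p = {p:2d}: N_p = {n_p:3d} solutions, a_p = p - N_p = {p - n_p}")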

Trying to understand basic concepts

Consider first basic concepts.

  1. What does automorphic form mean? One has a non-compact group G and functions from G to some vector space V. For instance, spinor modes could be considered. Automorphic forms are eigenfunctions of the Casimir operators of G; a d'Alembert type operator is one such operator, and in the TGD framework G = SO(1,3) is the interesting group to consider. There is also a discrete infinite subgroup Γ ⊂ G under which the eigenfunctions are not left invariant but transform by a factor of automorphy j(γ) acting as a matrix in V - one speaks of a twisted representation.

    The basic space of this kind is the upper half-plane of the complex plane, on which G = SL(2,R) acts, as do Γ = SL(2,Z) and various other discrete subgroups, defining the analog of a lattice consisting of the fundamental domains Γ∖G as analogs of lattice cells. The 3-D hyperboloid of M^4 allows similar structures and is especially relevant from the TGD point of view. When j(γ) is non-trivial one has an analogy with Bloch waves (a one-line numerical illustration of this quasi-periodicity is given after this list).

    Modular invariant functions are a second example. They are defined in the finite-D moduli space of conformal structures of 2-D surfaces with a given genus. Automorphic forms transform by a factor j(γ) under modular transformations, which do not affect the conformal equivalence class. Modular invariants can be constructed from the modular forms, and the TGD based proposal for the family replication phenomenon involves this kind of invariants as elementary particle vacuum functions in the space of conformal equivalence classes of partonic 2-surfaces (see this).

    One can also pose invariance under a compact group K acting on G from the right, so that one has automorphic forms in G/K. In the case of SO(1,3) this would give automorphic forms on the hyperboloid H^3 ("mass shell"), which is of special interest in TGD. One could also require invariance under a discrete finite subgroup acting from the left, so that j(γ) = 1 would hold for these transformations. Especially interesting here is the possibility that the Galois group of an extension of rationals is represented as this group. The correct prediction of Newton's constant from TGD indeed assumes this (see this).

  2. What does the spectrum (see this) mean? The spectrum would be defined by the eigenvalues of the Casimir operators of G: the simplest of them is the analog of the d'Alembertian for, say, SO(1,3). The number of these operators equals the dimension of the Cartan subalgebra of G. An additional condition is posed by the transformation properties under Γ characterized by j(γ).
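
The Bloch-wave analogy mentioned above, as a minimal numerical sketch (mine): a Bloch function ψ_k(x) = exp(ikx)·u(x) with u periodic is not invariant under the lattice translation x → x + a but picks up the fixed factor exp(ika), just as an automorphic form picks up j(γ) under γ ∈ Γ.

    import numpy as np

    a = 1.0   # lattice period, playing the role of the discrete group
    k = 0.7   # crystal momentum labelling the "twist"

    def u(x):
        """Any smooth a-periodic function (kept positive to allow division)."""
        return 2.0 + np.cos(2 * np.pi * x / a)

    def bloch(x):
        return np.exp(1j * k * x) * u(x)

    x = np.linspace(0.0, 3.0, 7)
    print(np.allclose(bloch(x + a) / bloch(x), np.exp(1j * k * a)))   # True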

One can assign so-called torsion coefficients in the various finite fields F_p to automorphic forms and to the eigenfunctions of the d'Alembertian and the other Casimir operators in the coset space G/K. Consider a discrete but infinite subgroup Γ such that the solutions are, apart from the factor of automorphy j(γ), left invariant under Γ. For trivial j(γ) they would be defined in the double coset space Γ∖G/K. Besides this, the Galois group represented as a finite discrete subgroup of SU(2) would leave the eigenfunctions invariant.
  1. The torsion group T of the first homotopy group Π_1 (the fundamental group), assumed here to be Abelian, is its finite subgroup consisting of the elements of finite order, decomposing into a direct sum of cyclic groups. The fundamental group in the recent case would naturally be that of the double coset space Γ∖G/K.

  2. What could the torsion coefficients be (see this)? Π_1 is Abelian and representable as a product T × Z^s, where s is the rank of Π_1 as a module over Z and T = Z_m1 × Z_m2 × ... × Z_mn is the torsion subgroup. The torsion coefficients m_i can be chosen to satisfy the successive divisibility conditions m_1 | m_2 | ... | m_n. The torsion coefficients in F_p would naturally be m_i mod p.

    The torsion coefficients characterize also the automorphic functions since they characterize the first homotopy group of Γ∖G/K. If I have understood correctly, the torsion coefficients m_i in the various finite fields F_p for a given automorphic form correspond to a sequence of solutions of the Diophantine equation in F_p. This is the bridge.

  3. How are the Galois groups related to this (see this)? Representations of the Galois group Gal(F) of a finite-D extension F of rationals could act as a discrete finite subgroup of SO(3) ⊂ SO(1,3) and would leave the eigenfunctions invariant: these ADE groups appear in the McKay correspondence and in the inclusion hierarchy of hyperfinite factors of type II_1 (see this).

    The invariance under Gal(F) would correspond to a special case of what I call Galois confinement, a notion that I have considered earlier (here) with physical motivations coming partially from the TGD based model of the genetic code based on dark photon triplets.

    The problem is to understand how dark photon triplets can occur as asymptotic states - one would expect many-photon states with a single photon as the basic unit. The explanation would be completely analogous to that for the appearance of 3-quark states as asymptotic states in hadron physics - the analog of color confinement. Dark photons would form Z_3 triplets under the Z_3 subgroup of the Galois group associated with the corresponding space-time surface, and only Z_3 singlets, realized as 3-photon states, would be possible.

    Mathematicians talk also about the Galois group of the algebraic numbers regarded as an extension of a finite extension F of rationals, such that the Galois group Gal(F) would leave the eigenfunctions invariant - this would correspond to what I have called Galois confinement.

  4. In the TGD framework the Galois group Gal(F) has a natural action on the cognitive representation identified as the set of points of the space-time surface for which the preferred imbedding space coordinates belong to the given extension of rationals (see this). In the general case the action of the Galois group gives a cognitive representation related to a new space-time surface, and one can construct representations of the Galois group as superpositions of space-time surfaces; they are effectively wave functions in the group algebra of Gal(F). Also the action of a discrete subgroup of SO(3) ⊂ SO(1,3) gives a new space-time surface.

    There would thus be two actions of Gal(F): one at the level of the imbedding space, at H^3, and a second at the level of cognitive representations. Possible applications of the Langlands correspondence and of the generalization of Fermat's last theorem in the TGD framework should relate to these two representations. Could the action of the Galois group on the cognitive representation be equivalent with its action as a discrete subgroup of SO(3) ⊂ SO(1,3)? This would mean a concrete geometric constraint on the preferred extremals.

The analog for Diophantine equations in TGD

What could this discovery have to do with TGD?

  1. In the adelic physics of TGD, M^8-H duality plays a key role. Space-time surfaces can be regarded either as algebraic 4-surfaces in complexified M^8, determined as roots of polynomial equations, or as minimal surfaces with 2-D singularities identified as preferred extremals of an action principle: analogs of Bohr orbits are in question.

  2. The Diophantine equations generalize. One considers the roots of polynomials with rational coefficients and extends them to 4-D space-time surfaces defined as roots of their continuations to octonionic polynomials in the space of complexified octonions. Associativity is the basic dynamical principle: the tangent space of these surfaces is quaternionic. Each irreducible polynomial defines an extension of rationals via its roots, and one obtains a hierarchy of extensions having a physical interpretation as an evolutionary hierarchy. These surfaces can be mapped to surfaces in H = M^4×CP_2 by M^8-H duality.

  3. The so-called cognitive representations for a given space-time surface are identified as the set of points whose coordinates belong to the extension of rationals. They realize the notion of finite measurement resolution, and scattering amplitudes can be expressed using the data provided by cognitive representations: this is an extremely strong form of holography.

  4. The cognitive representation generalizes the solutions of a Diophantine equation: instead of integers one allows points in a given extension of rationals. These cognitive representations determine the information that a conscious entity can have about the space-time surface. As the extension approaches the algebraic numbers, the information becomes maximal since the cognitive representation defines a dense subset of the space-time surface.

The analog for automorphic forms in TGD
  1. The above mentioned hyperboloids of M^4 are central in the zero energy ontology (ZEO) of TGD: in TGD based cosmology they correspond to surfaces of constant cosmological time. Also the tessellations of the hyperboloids are expected to have a deep physical meaning - quantum coherence even in cosmological scales is possible, and there are pieces of evidence for lattice-like structures in cosmological scales.

  2. Also the finite lattices defined by finite discrete subgroups of SU(3) in CP_2, analogous to the Platonic solids and regular polygons for the rotation group, are expected to be important.

  3. One can imagine analogs of automorphic forms for these tessellations. The spectrum would correspond to that of the massless d'Alembertian of L×CP_2, where L denotes the hyperboloid, with the boundary conditions given by the tessellation. In condensed matter physics the analogs would be the solutions of the Schroedinger equation consistent with the lattice symmetries: Bloch waves. The spectrum would correspond to mass squared eigenvalues and to the spectra of the observables assignable to the discrete subgroup of the Lorentz group defining the tessellation.

  4. The theorem described in the article suggests a generalization in the TGD framework based on physical motivations. The "energy" spectrum of these automorphic forms, identified as mass squared eigenvalues and the other quantum numbers characterized by the subgroup of the Lorentz group, would be at one side of the bridge.

    At the other side of the bridge could be the spectrum of the roots of the polynomials defining the space-time surfaces. A more general conjecture would be that the discrete cognitive representations for the space-time surfaces as "roots" of octonionic polynomials are at the other side of the bridge. These two would correspond to each other.

    Cognitive representations at the space-time level would code for the spectrum of the d'Alembertian-like operator at the level of the imbedding space. This could be seen as an example of quantum classical correspondence (QCC), which is a basic principle of TGD.

What is the relation to the Langlands conjecture (LC)?

I understand very little about LC at the technical level, but I can try to relate it to TGD via physical analogies.

  1. LC relates two kinds of groups.

    1. Algebraic groups satisfying certain very general additional conditions (complex n×n matrices are one example). Matrix groups such as the Lorentz group are a good example.

      The Cartesian product of the future light-cone and CP_2 would be the basic space, with the d'Alembertian acting inside the future light-cone in the variables defined by Robertson-Walker coordinates. One would have a separation of variables, with a the light-cone proper time and the coordinates of H^3 used for a given value of a, assuming eigenfunctions of the H^3 d'Alembertian satisfying additional symmetry conditions. The dependence on a is fixed by the separability and by the eigenvalue of the CP_2 spinor Laplacian.

    2. So-called L-groups assigned to extensions of rationals and to function fields defined by algebraic surfaces, such as those defined by roots of polynomials. This brings in the adelic physics of TGD.

  2. The physical meaning in TGD could be that the usual continuum physics corresponds to the discrete representations provided by the extensions of rationals and by the function fields on the algebraic surfaces (space-time surfaces in TGD) determined by them. The function fields might be assigned to the modes of the induced spinor fields.

    The physics at the level of the imbedding space (M^8 or H), described in terms of real and complex numbers - the physics as we usually understand it - would by LC correspond to the physics provided by the discretizations of the space-time surfaces as algebraic surfaces. This correspondence would not be 1-1 but many-to-one. The discretizations provided by cognitive representations would provide a hierarchy of approximations. The Langlands conjecture would justify this vision.

  3. Galois groups of extensions are excellent examples of L-groups and indeed play a central role in TGD. The proposal is that Galois groups provide a representation for the isometries of the imbedding space and also for the hierarchy of dynamically generated symmetries. This is just what the Langlands conjecture motivates one to say.

    Amusingly, just last week I wrote an article deducing the value of Newton's constant using the conjecture that a discrete subgroup of the isometries common to M^8 and M^4×CP_2, consisting of the product of the icosahedral group with 3 copies of its covering, corresponds to the Galois group of an extension of rationals. The prediction is correct (see this). The possible connection with the Langlands conjecture came to my mind while writing this.

To sum up, the Langlands correspondence would relate two descriptions: a discrete description in terms of cognitive representations at the space-time level and a continuum description at the imbedding space level in terms of eigenfunctions of the spinor d'Alembertian.

See the article Generalization of Fermat's last theorem and TGD or the chapter Langlands Program and TGD: Years Later.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Sunday, April 05, 2020

Can TGD predict the value of Newton's constant?

Newton's constant G cannot be a fundamental constant in the TGD framework, where the CP_2 radius R and the Kähler coupling strength, the analog of the fine structure constant, are the fundamental constants. Dimensionally G corresponds to R^2/ℏ. This gives guidelines for predicting G. TGD predicts a hierarchy of effective Planck constants h_eff/h_0 = n, where n is the order of the Galois group of the Galois extension defining an extension of rationals. The dimension n factorizes into a product n = n_1 n_2 ... for an extension E_1 of an extension E_2 of ... of rationals. M^8-H correspondence allows one to associate the Galois group with an irreducible polynomial characterizing the space-time surface as an algebraic surface in M^8. The gradual increase of the extension by forming a functional composite of a new polynomial with the already existing one (P → P_new∘P) would be analogous to the evolution of the genome: the earlier extensions would be analogous to conserved genes.
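
A minimal sketch (mine) of the number theoretic point behind this: the polynomial degree multiplies exactly under the functional composition P → P_new∘P, which is the structure behind the factorization n = n_1 n_2 ... of the extension (and hence of the candidate h_eff = n·h_0) stated above.

    from sympy import symbols, degree, expand

    x = symbols('x')
    P = x**2 - 2          # defines Q(sqrt(2)); Galois group of order 2
    P_new = x**3 - x - 1  # an irreducible cubic

    Q = expand(P_new.subs(x, P))   # the functional composite P_new(P(x))
    print(degree(P, x), degree(P_new, x), degree(Q, x))   # 2 3 6: degrees multiply
    # The post's claim n = n_1 * n_2 * ... corresponds to this multiplicative
    # growth of the extension on top of the earlier, "conserved" extensions.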

The proposal, modifying the earlier proposal, is G = R^2/(n_gr·ℏ_0), where n_gr is the order of the Galois group G_gr "at the bottom" of the hierarchy of extensions, and one has ℏ = 6h_0. One would have n = n_1 n_2 ... n_gr. G_gr "at the bottom" is proposed to represent number theoretically geometric information about the imbedding space by providing a discretization of the product of the maximal finite discrete subgroup of isometries and tangent space rotations of the imbedding space. By M^8-H duality these subgroups should be identical for H and M^8. The prediction is that the maximal G_gr is the product of the icosahedral group I with 3 copies of coverings of I.
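
To make the order of magnitude concrete - a back-of-the-envelope sketch based only on the statements above, with my own reading that the "coverings of I" are copies of the binary icosahedral group of order 120 and that l_P^2 = ℏG (c = 1):

    import math

    # Group orders (my reading of "icosahedral group and 3 copies of its covering"):
    order_I = 60     # icosahedral group I
    order_2I = 120   # its double covering, the binary icosahedral group

    n_gr = order_I * order_2I ** 3
    print(f"n_gr = {n_gr:.3e}")   # about 1.04e8

    # G = R^2/(n_gr * hbar_0) with hbar = 6*hbar_0 and l_P^2 = hbar*G gives
    # R/l_P = sqrt(n_gr/6).
    print(f"R / l_P ~ {math.sqrt(n_gr / 6):.0f}")   # a few thousand Planck lengths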

Rather remarkably, the prediction for G is correct if one assumes that the value of R is the one given by the p-adic mass calculation for the electron mass.

Since the hierarchy of Planck constants relates to the number theoretical physics proposed to describe the correlates of cognition, a connection with cognition strongly suggests itself. Icosahedral and tetrahedral geometries occur also in the TGD based model of the genetic code in terms of bio-harmony, which suggests that the genetic code represents geometric information about imbedding space symmetries. These connections are discussed in detail.

See the article Can TGD predict the value of Newton's constant? or the chapter About the Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are related.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.