https://matpitka.blogspot.com/2023/08/

Thursday, August 24, 2023

How would TGD have managed in the competition between theories of consciousness?

The Templeton World Charity Foundation held a kind of competition between two neuroscience-based theories of consciousness: IIT (Integrated Information Theory) and GNWT (Global Neuronal Workspace Theory). A Quanta Magazine article discusses the outcome of the competition.

IIT and GNWT met several hurdles in the competition.

  1. The first hurdle checked how well each theory decoded the categories of the objects that the subjects saw in the presented images. Both theories performed well here, but IIT was better at identifying the orientation of objects.
  2. The second hurdle tested the timing of the signals. IIT predicted sustained, synchronous firing in the hot zone for the duration of the conscious state. While the signal was sustained, it did not remain synchronous. GNWT predicted an “ignition” of the workspace followed by a second spike when the stimulus disappeared. Only the initial spike was detected. In the on-screen scoring for the NYU audience, IIT pulled ahead.
  3. The third hurdle concerned overall connectivity across the brain. GNWT scored better than IIT here, largely because some analyses of the results supported GNWT predictions while the signals across the hot zone were not synchronous.
Could the TGD inspired theory of consciousness overcome these hurdles?
  1. TGD inspired theory of consciousness is essentially quantum measurement theory involving what I call zero energy ontology, which solves the basic paradox of quantum measurement. Quantum coherence in arbitrarily long scales is also possible thanks to the hierarchy of Planck constants labelling phases of ordinary matter behaving like dark matter. Not only the brain but also the magnetic body of the brain is involved in the process.
  2. Generation of mental images, perception, is basically a quantum measurement, i.e. state function reduction (SFR) in the TGD sense. There are "small" SFRs (SSFRs) and "big" SFRs (BSFRs). A sequence of SSFRs corresponds to a sequence of repeated measurements of sets of mutually commuting observables (this set can gradually increase as perception becomes sharper): what is in question is a generalization of the Zeno effect, or rather of the weak measurements of quantum optics (a generic quantum illustration follows the list below). Each SSFR in the sequence gives rise to qualia. This process gives rise to a mental image as a conscious entity, a subself of the self. Attention, as a sequence of repeated measurements of the same observables, would correspond to a sequence of SSFRs.

    BSFR, which generalizes the ordinary quantum measurement, occurs when new observables not commuting with those measured in previous SSFRs are measured. In BSFR the arrow of time changes. A pair of BSFRs, in which the arrow of time changes only temporarily, gives rise to a new percept. This certainly involves firing. The original mental image as a conscious entity dies and reincarnates with the opposite arrow of time (I too am a mental image of some higher level self). A pair of BSFRs restores the original arrow of time and corresponds to quantum tunnelling.

  3. Holography of consciousness is essential. The mental image generated in an SSFR lasts until the next SSFR occurs and possibly modifies it. The outcome of a quantum measurement, as an SSFR and also as a BSFR, thus defines an analog of holographic data. If new observables commuting with the original ones are measured in the next SSFR, they make the percept more precise and the conscious experience changes. The percept can also become less sharp when fewer observables are measured.
  4. If synchronous firing could be related to SSFRs, it would happen in each modification of a mental image as it sharpens or becomes dimmer. The gradual loss of synchrony could relate to the dimming. Synchronous firing occurs only when the mental image is created, say in an eureka experience, but does not last for the entire duration of the percept, as found by Revonsuo a long time ago. No spikes need to occur when the stimulus disappears unless the observer is ready to detect this.
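The SSFR/BSFR distinction can be illustrated with a completely standard projective-measurement toy model (my illustration, not TGD-specific): repeated measurement of the same observable keeps reproducing its first outcome, Zeno style, while measurement of a non-commuting observable resets the state.

```python
import numpy as np

rng = np.random.default_rng(1)
Z = np.diag([1.0, -1.0])                # observable measured repeatedly ("SSFR" analog)
X = np.array([[0.0, 1.0], [1.0, 0.0]])  # non-commuting observable ("BSFR" analog)

def measure(state, obs):
    """Projective measurement: return an outcome and the collapsed state."""
    vals, vecs = np.linalg.eigh(obs)
    probs = np.abs(vecs.conj().T @ state) ** 2
    k = rng.choice(len(vals), p=probs / probs.sum())
    return vals[k], vecs[:, k]

state = np.array([1.0, 1.0]) / np.sqrt(2)
outcomes = []
for _ in range(5):
    v, state = measure(state, Z)        # outcome is frozen after the first measurement
    outcomes.append(v)
v, state = measure(state, X)            # non-commuting measurement changes the state
print(outcomes, v)
```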
TGD therefore overcomes the first and second hurdles. As for the third hurdle, synchrony across brain scales reduces to synchrony at the level of the magnetic body: synchronous firing at disjoint brain regions corresponds to a single active region at the part of the magnetic body of the brain to which signals from the brain are sent by EEG. This explains why a salamander survives as a conscious entity when its brain is sliced into pieces and shuffled like a pack of cards.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, August 23, 2023

An explicit formula for M8-H duality

M8-H duality is a generalization of momentum-position duality relating the number theoretic and geometric views of physics in TGD and, although it still involves poorly understood aspects, it has become a fundamental building block of TGD. One has 4-D surfaces Y4 ⊂ M8c, where M8c is complexified M8 with an interpretation as an analog of a complex momentum space, and 4-D space-time surfaces X4 ⊂ H = M4 × CP2. M8c, equivalently E8c, can be regarded as complexified octonions. M8c has a subspace M4c containing M4.

Comment: One should be very cautious with the meaning of "complex". Complexified octonions involve a complex imaginary unit i commuting with the octonionic imaginary units Ik. i is assumed to appear as an imaginary unit also in the complex algebraic numbers defined by the roots of the polynomials P defining the holographic data in M8c.

In the following, M8-H duality and its twistor lift are discussed and an explicit formula for the duality is deduced. Possible variants of the duality are also discussed.

1. Holography in H

X4 ⊂ H satisfies holography and is analogous to the Bohr orbit of a particle identified as a 3-surface. The proposal is that holography reduces to a 4-D generalization of holomorphy, so that X4 is a simultaneous zero of two functions of the complex CP2 coordinates and of what I have called Hamilton-Jacobi coordinates of M4 with a generalized Kähler structure.

The simplest choice of the Hamilton-Jacobi coordinates is defined by the decomposition M4 = M2 × E2, where M2 is endowed with hypercomplex structure defined by light-like coordinates (u,v), which are analogous to z and \overline{z}. Any analytic map u → f(u) defines a new set of light-like coordinates and corresponds to a solution of the massless d'Alembert equation in M2. E2 has some complex coordinates with an imaginary unit defined by i.
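The statement about analytic maps is easy to verify symbolically; below is a small sympy check (my own illustration) that any φ = f(u), u = t - x, solves the massless d'Alembert equation in M2, since the wave operator factorizes into ∂u∂v.

```python
import sympy as sp

t, x = sp.symbols('t x')
f = sp.Function('f')
phi = f(t - x)                                  # phi depends only on u = t - x
box_phi = sp.diff(phi, t, 2) - sp.diff(phi, x, 2)
print(sp.simplify(box_phi))                     # 0: any analytic f(u) is a solution
```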

The conjecture is that also more general Hamilton-Jacobi structures, for which the tangent space decomposition is local, are possible. One would then have M4 = M2(x) × E2(x). These would correspond to non-equivalent complex and Kähler structures of M4, analogous to those possessed by 2-D Riemann surfaces and parametrized by the moduli space.

2. Number theoretic holography in M8c

Y4 satisfies number theoretic holography defining dynamics, which should reduce to associativity in some sense. The Euclidean complexified normal space N4(y) at a given point y of Y4 is required to be associative, i.e. quaternionic. Besides this, N4(y) contains a preferred complex Euclidean 2-D subspace Y2(y). Also, the spaces Y2(y) define an integrable distribution. I have assumed that Y2(y) can depend on the point y of Y4.

These assumptions imply that the normal space N(y) of Y4 can be parameterized by a point of CP2 = SU(3)/U(2). This distribution is always integrable, unlike quaternionic tangent space distributions. M8-H duality assigns to the normal space N(y) a point of CP2. The M4c point y is mapped to a point x of M4 ⊂ M4 × CP2 defined by the real part of its inversion (a conformal transformation): this formula involves the effective Planck constant for dimensional reasons.

The 3-D holographic data, which partially fixes 4-surfaces Y4, is partially determined by a polynomial P with real integer coefficients smaller than the degree of P. The roots define mass squared values which are in general complex algebraic numbers and define complex analogs of mass shells in M4c ⊂ M8c, which are analogs of hyperbolic spaces H3. The 3-surfaces at these mass shells define 3-D holographic data continued to a surface Y4 by requiring that the normal space of Y4 is associative, i.e. quaternionic. These 3-surfaces are not completely fixed, but an interesting conjecture is that they correspond to fundamental domains of tessellations of H3.

What does the complexity of the mass shells mean? The simplest interpretation is that the space-like M4 coordinates (3-momentum components) are real, whereas the time-like coordinate (energy) is complex and determined by the mass shell condition. One would have Re(E)^2 - Im(E)^2 - p^2 = Re(m^2) and 2Re(E)Im(E) = Im(m^2). The condition for the real parts gives H^3 when E_eff = [Re(E)^2 - Im(E)^2]^{1/2} is taken as the energy coordinate. The second condition allows one to solve Im(E) in terms of Re(E), so that the first condition reduces to a modified mass shell equation when E_eff, expressed in terms of Re(E), is used. Is this deformation of H^3 in the imaginary time direction equivalent to a region of the hyperbolic 3-space H^3?

One can look at the formula in more detail. The second condition allows one to solve Im(E) in terms of Re(E), so that the mass shell condition reduces to a dispersion relation for Re(E)^2:

Re(E)^2 = (1/2) (Re(m^2) - Im(m^2) + p^2) (1 ± [1 + 2 Im(m^2)^2/(Re(m^2) - Im(m^2) + p^2)^2]^{1/2}) .

Only the positive root gives a non-tachyonic result for Re(m^2) - Im(m^2) > 0. For real roots with Im(m^2) = 0, and also at the high momentum limit, the formula coincides with the standard formula. For Re(m^2) = Im(m^2) one obtains Re(E)^2 → Im(m^2)/2^{1/2} at the low momentum limit p^2 → 0. The energy does not depend on momentum at all: the situation resembles that of plasma waves.
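The limiting behavior is easy to check numerically. Below is a minimal Python sketch (my own check; the function name and sample values are arbitrary) evaluating the positive-root branch of the dispersion relation as printed above.

```python
import numpy as np

def re_E2(re_m2, im_m2, p2):
    """Positive-root branch of the dispersion relation for Re(E)^2."""
    A = re_m2 - im_m2 + p2
    return 0.5 * A * (1.0 + np.sqrt(1.0 + 2.0 * im_m2**2 / A**2))

# Im(m^2) = 0: reduces to the standard relation Re(E)^2 = m^2 + p^2.
print(re_E2(1.0, 0.0, 3.0))        # 4.0

# Re(m^2) = Im(m^2), p^2 -> 0: Re(E)^2 -> Im(m^2)/2^(1/2), independent of p^2.
for p2 in (1e-6, 1e-9, 1e-12):
    print(re_E2(1.0, 1.0, p2))     # -> 0.7071... = 1/sqrt(2)
```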

3. Can one find an explicit formula for M8-H duality?

The dream is an explicit formula for the M8-H duality mapping Y4 ⊂ M8c to X4 ⊂ H. This formula should be consistent with the assumption that the generalized holomorphy holds true for X4.

The following proposal is a more detailed variant of the earlier proposal, for which Y4 is determined by a map g: M4c → SU(3)c ⊂ G2,c, where G2,c is the complexified automorphism group of octonions and SU(3)c is interpreted as the complexified color group.

  1. This map defines a trivial SU(3)c gauge field. The real part of g however defines a non-trivial real color gauge field due to the non-linearity of the non-abelian gauge field with respect to the gauge potential. In the complexified situation, the quadratic terms involving the imaginary part of the gauge potential give an additional contribution which cancels that of the real part. If only the real part of g contributes, this cancelling contribution is absent and the gauge field is non-vanishing.

  2. A physically motivated proposal is that the real parts of the SU(3)c gauge potential and color gauge field can be lifted to H and that the lifts are equal to the classical gauge potentials and color gauge field proposed in H. Color gauge potentials in H are proportional to the isometry generators of the color group, and the components of the color gauge field are proportional to the products of color Hamiltonians with the induced Kähler form.
  3. The color gauge field Re(G) obeys the formula Re(G) = dRe(A) + [Re(A),Re(A)] = [Im(A),Im(A)] and does not vanish: in the vanishing complexified gauge field the term [Im(A),Im(A)] cancels the real contribution, but this cancelling term is absent from the real gauge field itself. The lift of A_R = Re(g^{-1}dg) to H is determined by g using M4 coordinates for Y4. The M4 coordinates p^k(M8), having an interpretation as momenta, are mapped to the coordinates m^k of H by the inversion

    I: m^k = ℏ_eff Re(p^k/p^2) ,  p^2 = p_k p^k ,

    where p^k is the complex momentum. Re(A)_H is obtained by the action of the Jacobian

    dI^k_l = ∂p^k/∂m^l

    as

    A_{H,k} = dI^k_l Re(A_{M8,l}) .

    dI^k_l can be calculated as the inverse of the Jacobian ∂m^k/∂Re(p)^l. Note that Im(p^k) is expressible in terms of Re(p^k). (A numerical sketch of this map and its Jacobian is given at the end of this subsection.)

    For Im(p^k) = 0, the Jacobian of I reduces to that of the real inversion m^k = ℏ_eff p^k/p^2, and one has

    ∂m^k/∂p^l = (ℏ_eff/m^2)(δ^k_l - m^k m_l/m^2) .

    This becomes singular for m^2 = 0. The non-vanishing of Im(p^k) however saves us from the singularity.

  4. The M8-H duality obeys a different formula at the light-cone boundaries associated with the causal diamond: there one has p^0 = ℏ_eff/m^0. This formula should be applied for m^2 = 0 if this case is encountered. Note that the number theoretic evolution of masses and classical color gauge fields is directly coded by the mass squared values and holography.

    How could the automorphism g(y) ∈ SU(3) ⊂ G2 give rise to M8-H duality?

    1. The interpretation is that g(y) at a given point y of Y4 relates the normal space at y to a fixed quaternionic/associative normal space at a point y0, which is fixed by some subgroup U(2)0 ⊂ SU(3). The automorphism property of g guarantees that the normal space is quaternionic/associative at y. This simplifies the construction dramatically.
    2. The quaternionic normal sub-space (which has Euclidean signature) contains a complex sub-space corresponding to a point of the sphere S2 = SO(3)/SO(2), where SO(3) is the quaternionic automorphism group. The interpretation could be in terms of a selection of the spin quantization axis. The local choice of the preferred complex plane would not be unique and is analogous to the possibility of having non-trivial Hamilton-Jacobi structures in M4, characterized by the choice of M2(x) and, equivalently, of its normal subspace E2(x).
    3. The real part Re(g(y)) defines a point of SU(3), and the bundle projection SU(3) → CP2 in turn defines a point of CP2 = SU(3)/U(2). Hence one can assign to g a point of CP2 as M8-H duality requires and deduce an explicit formula for the point. This means a realization of the dream.
    4. The construction requires the fixing of a quaternionic normal space N0, containing a preferred complex subspace, at a single point y0 of Y4, plus a selection of the function g. If M4 coordinates are possible for Y4, the first guess is that g as a function of the complexified M4 coordinates obeys generalized holomorphy in the same sense as in the case of X4. This might guarantee that the M8-H image of Y4 satisfies the generalized holomorphy.
    5. Also, space-time surfaces X4 with an M4 projection of dimension smaller than 4 are allowed. I have proposed that they might correspond to singular cases of the above formula: a kind of blow-up would be involved. One can also consider a more general definition of Y4, allowing it to have an M4 projection with dimension smaller than 4 (say cosmic strings). Could one have implicit equations for the surface Y4 in terms of the complex coordinates of SU(3)c and M4? Could this give, for instance, cosmic strings with a 2-D M4 projection and CP2 type extremals with a 4-D CP2 projection and a 1-D light-like M4 projection?
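As a numerical sanity check of the inversion formula above, here is a minimal Python sketch (my own illustration, assuming a flat Minkowski inner product, ℏ_eff = 1, and a hand-picked Im(p^0) in place of the value determined by the mass shell condition). It evaluates m^k = ℏ_eff Re(p^k/p^2) and a finite-difference Jacobian with respect to Re(p^k); a non-vanishing Im(p^0) keeps the map finite near p^2 = 0.

```python
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0, -1.0])   # flat Minkowski metric, signature (+,-,-,-)
HEFF = 1.0                               # effective Planck constant, set to 1 here

def inversion(re_p, im_p0):
    """m^k = HEFF * Re(p^k / p^2) for a momentum with complex energy component."""
    p = re_p.astype(complex)
    p[0] += 1j * im_p0                   # assumed imaginary part of the energy
    p2 = p @ ETA @ p                     # p_k p^k, complex
    return HEFF * np.real(p / p2)

def jacobian(re_p, im_p0, eps=1e-6):
    """Finite-difference Jacobian d m^k / d Re(p)^l of the inversion map."""
    J = np.zeros((4, 4))
    for l in range(4):
        d = np.zeros(4); d[l] = eps
        J[:, l] = (inversion(re_p + d, im_p0) - inversion(re_p - d, im_p0)) / (2 * eps)
    return J

p = np.array([1.0, 0.6, 0.5, 0.3])       # nearly light-like: p^2 = 0.3
print(inversion(p, 0.0))                 # real inversion, blows up as p^2 -> 0
print(jacobian(p, 0.2))                  # Im(p^0) != 0 regularizes the Jacobian
```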

4. What could the number theoretic holography mean physically?

    What could be the physical meaning of the number theoretic holography? The condition assumed is that the CP2 coordinates at the mass shells of M4c ⊂ M8c, mapped to the mass shells H3 of M4 ⊂ M4 × CP2, are constant at each H3. This is true if g(y) defines the same CP2 point for a given component X3_i of the 3-surface at a given mass shell. g is therefore fixed apart from a local U(2) transformation leaving the CP2 point invariant. A stronger condition would be that the CP2 point is the same for each component X3_i and even at each mass shell, but this condition seems unnecessarily strong.

    Comment: One can criticize this condition as too strong, and one can consider giving it up. The motivation for this condition is that the number of algebraic points at the 3-surfaces associated with H3 explodes, since the coordinates associated with the normal directions vanish. A kind of cognitive explosion would be in question.

    SU(3) corresponds to a subgroup of G2 and one can wonder what the fixing of this subgroup could mean physically. G2 is 14-D, and the coset space G2/SU(3) is 6-D, and a good guess is that it is just the 6-D twistor space SU(3)/U(1) × U(1) of CP2: at least the isometries are the same. The fixing of the SU(3) subgroup means fixing of a CP2 twistor. Physically, this means the fixing of the quantization axis of color isospin and hypercharge.

5. Twistor lift of the holography

    What is interesting is that by replacing SU(3) with G2, one obtains an explicit formula for the generalization of M8-H duality to the twistorial lift of TGD!

    One can also consider a twistorial generalization of the above proposal for the number theoretic holography by allowing local G2 automorphisms interpreted as local choices of the color quantization axis. G2 elements would be fixed apart from a local SU(3) transformation at the components of 3-surfaces at mass shells. The choice of the color quantization axes for a connected 3-surface at a given mass shell would be the same everywhere. This choice is indeed very natural physically since a 3-surface corresponds to a particle.

    Is this proposal consistent with the boundary condition of the number theoretical holography in the case of 4-surfaces in M8c and M4 × CP2?

    1. The selection of SU(3) ⊂ G2 for ordinary M8-H duality means that the G2,c gauge field vanishes everywhere, and the choice of color quantization axis is the same at all points of the 4-surface. The fixing of the CP2 point to be constant at H3 implies that the color gauge field at H3 ⊂ M8c and its image H3 ⊂ H vanish. One would have color confinement at the mass shells H3_i, where the observations are made. Is this condition too strong?
    2. The constancy of the G2 element at mass shells makes sense physically and means a fixed color quantization axis. The selection of a fixed SU(3) ⊂ G2 for the entire space-time surface is in conflict with the non-constancy of the G2 element unless the G2 element differs at different points of the 4-surface only by a multiplication of a local SU(3)0 element, which is a local SU(3) transformation. This kind of variation of the G2 element would mean a fixed color group but a varying choice of the color quantization axis.
    3. Could one consider the possibility that the local G2,c element is free and defines the twistor lift of M8-H duality as something more fundamental than the ordinary M8-H duality based on SU(3)c? This duality would make sense only at the mass shells, so that only the spaces H3 × CP2 assignable to mass shells would make sense physically? In the interior, CP2 would be replaced with the twistor space SU(3)/U(1) × U(1). Color gauge fields would be non-vanishing at the mass shells, but outside the mass shells, one would have G2 gauge fields. This does not look like an attractive option physically.
    4. There is also a physical objection against the G2 option. The 14-D adjoint representation of G2 acts on the imaginary octonions, which decompose with respect to the color group as 1 ⊕ 3 ⊕ 3*. The automorphism property requires that 1 can be transformed to 3 or 3*: this requires that the decomposition contains 3 ⊕ 3*. Furthermore, it must be possible to transform 3 and 3* to themselves, which requires the presence of 8. This leaves only the decomposition 8 ⊕ 3 ⊕ 3*. G2 gluons would include both a color octet and triplets. In the TGD framework the only conceivable interpretation would be in terms of ordinary gluons and leptoquark-like gluons. This does not fit with the basic vision of TGD.

    The choice of a twistor as a selection of quantization axes should make sense also in the M4 degrees of freedom. M4 twistor corresponds to a choice of a light-like direction at a given point of M4. The spatial component of the light-like vector fixes the spin quantization axis. Its choice together with the light-likeness fixes the time direction and therefore the rest system and energy quantization axis. The light-like vector also fixes the choice of M2 and of E2 as its orthogonal complement. Therefore, the fixing of M4 twistor as a point of SU(4)/SU(3) × U(1) corresponds to a choice of the spin quantization axis and the time-like axis defining the rest system in which the energy is measured. This choice would naturally correspond to the Hamilton-Jacobi structure fixing the decompositions M2(x) × E2(x). At a given mass shell, the choice of the quantization axis would be constant for a given X3_i.

    See the article New findings related to the number theoretical view of TGD or the chapter with the same title.

    For a summary of earlier postings see Latest progress in TGD.

    For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, August 21, 2023

Empirical support for the Expanding Earth hypothesis

During the last few weeks I have learned about several pieces of empirical support for the Expanding Earth hypothesis, which is a rather dramatic prediction distinguishing TGD from general relativity and has profound implications for biology. I attach below the abstract of the article "Empirical support for the Expanding Earth hypothesis" summarizing these findings. These findings are also discussed in the chapter "Quantum gravitation and quantum biology in TGD Universe".

In this article I discuss the empirical support for the Expanding Earth hypothesis that I have become aware of quite recently.

  1. There is empirical support for the view that the oxygenation of the oceans did not occur before the Cambrian Explosion (CE). This conforms with the prediction that oxygenation was due to photosynthesis in underground oceans. TGD provides the new physics needed: dark photons from either the Earth's core or the Sun could have provided the metabolic energy making photosynthesis, and therefore oxygenation, possible.
  2. Anomalously high recession velocities of the tectonic plates during the CE have been observed and could be due to a radial expansion of the Earth lasting about 30 million years, the duration of the Cambrian explosion. A quantitative estimate for the expansion velocity is consistent with the findings. The Cambrian explosion would correspond to quantum tunnelling on an astrophysical scale and involve "big" state function reductions and a temporary change of the arrow of time. The change of the arrow of time in a scale of 30 million years could even make it possible to understand the plant fossils with an age of about 600 million years, which conflict with the fact that the Cambrian explosion occurred about 540 million years ago.
  3. The finding that the mantle-core boundary looks like a seafloor, having even mountains, has a rather convincing explanation in terms of the subduction of tectonic plates, which sink into the mantle. This however inspired the question whether underground oceans, as porous structures containing water in some exotic form, most naturally the fifth phase of water studied by Pollack, which plays a key role in the TGD inspired view of biology, could provide the thermal and chemical isolation needed by life. The Pollack effect could provide this isolation, which is certainly needed even if the temperature of the underground ocean is not far from the physiological temperature.

    Assuming that the Sun was faint so that the temperature at the surface of the Earth was below the freezing point, one ends up with a conflict with the isotopic determination of the temperature, which gives an ocean temperature slightly higher than the 38 C above which marine invertebrates cannot survive. A temperature of about 30 degrees allows life, but this requires a slightly lower amount of the O18 isotope than prevails in the recent oceans. The paradox can be solved if the warm water originated from underground oceans and mixed with the non-oxygenated water (or actually ice) at the surface of the Earth so that the isotopic fraction was reduced. The optimal situation for life would have been at depths of the order of a kilometer, and one can say that life had no other option than to develop underground.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Sunday, August 20, 2023

Does the existence of an underground ocean floor at the mantle-core boundary relate to underground life?

The popular article published in Futurism discusses an unexpected observation by the group led by Samantha Hansen, published in Science (see this). The mantle-core boundary in the Earth's interior contains a layer that looks like the crust of the Earth in the sense that seismic perturbations propagate through it with an ultralow velocity. There are mountains many times higher than the Himalayas! How is this possible? Is it possible in standard physics?

The article's answer to this question is based on the idea that subduction of continental plates implies that parts of them sink down, because they are denser than the surrounding material, and gradually gather to form a second sea floor at the mantle-core boundary. To me this idea looks rather plausible, but it need not be correct.

My first reaction was the question whether this second sea floor could be a genuine seafloor, the seafloor of an underground ocean! Could new physics predicted by TGD make this possible?

  1. The basic prediction of the Expanding Earth hypothesis (see this) explaining the Cambrian Explosion is that life evolved in underground oceans and burst to the surface as the radius of the Earth increased by a factor of 2 in a rapid expansion lasting about 30 million years (cosmic expansion would have occurred as rapid jerks for astrophysical objects). During the last weeks several strange findings removing the most obvious objections against this vision have emerged.
  2. Could these mountains at the core-mantle boundary correspond to mountains of underground ocean floor?
Could the underground oceans have existed and carried life? Could they reside even in the extremely hostile environment at the mantle-core boundary?
  1. Underground oceans near the mantle-core boundary could be imagined as porous structures having water inside the pores. Such structures are very common, and if the Earth's crust formed from meteorites, the water would have been present from the beginning. Even biological matter is analogous to a porous structure, and when stone is heated it becomes porous. Maybe the enormous heat flux from the core could cause the porosity. In accordance with the standard vision of self-organization, this could be understood as the development of complexity induced by a constant heat flux: self-organization takes place at boundaries.
  2. It is known that huge reservoirs of water exist underground. The boundary between upper and lower mantle at a depth of about 500 km contains a porous structure carrying water (see this). If the size of the pores is large enough, considerably above cell size, advanced multicellulars could evolve in the underground oceans.
It is easy to invent lethal objections in the standard physics framework.
  1. The temperature and pressure increase as one goes towards the core. The temperature of pores should be around 40 C for life to survive. Also the pressure should be normal.
  2. Consider the crust first. The temperature reaches values in the range 100-600 C at the crust-mantle boundary. The temperature increase is about 30 C per kilometer in the upper part of the crust, so the temperature would be about 30 C at a depth of 1 km if it is 0 C at the surface (see the estimate after this list). The underground water reservoirs should not be at depths much larger than 1 km if standard physics applies, and the largest depths would be possible near the poles.
  3. In an underground ocean at the boundary of the upper and lower mantle, at a depth of about 500 km, the temperature and pressure are far too high. The temperature of the surrounding solid material varies from 500 K at the lower boundary of the crust to 1200 K at the boundary of the upper and lower mantle. Densities would be several times higher than the normal density of water.
  4. The temperature at the mantle-core boundary is about 3000-4500 K and the pressure is about 1.3 million times the atmospheric pressure. The density of the mantle is by a factor of about 5-6 higher than the density of the crust, so the pressure is really huge, since water and solid matter are almost incompressible. Water in its ordinary form cannot exist in this kind of environment if standard physics applies.
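A back-of-the-envelope check of the depth estimate in the second item (my own arithmetic, assuming a linear 30 C/km gradient and 0 C at the surface):

```python
# Depth window for physiological temperatures in the crust under the assumed
# linear geothermal gradient; life-friendly depths come out at order 1 km.
gradient_C_per_km = 30.0
for T_life in (30.0, 40.0):
    depth_km = T_life / gradient_C_per_km
    print(T_life, "C at about", depth_km, "km")   # 30 C at 1.0 km, 40 C at ~1.33 km
```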
Could underground oceans allow some exotic phase of water at physiological temperature around 40 C and normal pressure? This is not possible for the water of standard physics. But the water in living matter is not normal!
  1. The phase of water discovered by Pollack, called the fifth phase of water by Pollack himself (also the term "ordered water" is used), was proposed by him to be fundamental for life. Gel phases would represent a basic example of this phase. This phase of water plays a key role in the TGD based model of living matter, which identifies dark matter as phases of ordinary matter with non-standard values of the Planck constant. The gravitational Planck constant indeed has huge values.

    The underground life faces the same problem as the biological cell at the surface of Earth: how to isolate itself from the environment. The high temperatures and pressures make the problem orders of magnitudes more challenging. The fifth phase of water surrounding the system could provide the solution in the case of cell membrane and DNA double strand: develop a layer consisting of the fifth phase of water which shields the volume of the ordinary water from the environment at a different temperature.

    As a matter of fact, it has been discovered that ordinary water in air develops a thin molecular layer at its surface. This layer is neither water nor ice, and its identification as the fifth phase of water is suggestive (see this). This layer could also work at nanoscales and reduce the freezing temperature of the lattice water in materials like concrete to about -70 C. The mechanism could be essentially thermal isolation. Could thermal isolation work also in the high temperature environments where underground life had to survive?

  2. Could the darkness of the ordered water make possible a situation in which the interactions of the water inside the pores with the hot, high-pressure environment are very weak, so that heat and matter are not transferred between the solid environment and the water? Thermal equilibrium would be established very slowly, and the temperature and pressure could be much lower than otherwise for very long periods.

    Magnetic bodies would carry the dark matter relevant for biocontrol and would be shielded from the hot environment. They would be gradually heated, and this would lead to biological death, as it does in ordinary biology according to TGD. Zero energy ontology would however come to the rescue: the change of the arrow of time would reverse heating to cooling!

    The unpaired, chemically non-inert valence electrons of biologically important ions should be dark and reside at the flux tubes associated with very long dark valence bonds. This would generate long range quantum coherence and would explain why living matter contains these ions although thermal ionization is not possible at physiological temperatures. Also the protons of hydrogen bonds would be dark. Only the chemically inert full electron shells would remain, and the system would therefore be effectively thermally isolated from the hot environment. As a matter of fact, electrolytes involve ions and the mechanism of ionization is not actually understood: TGD suggests a mechanism of ionization based on the generation of dark valence electrons and dark protons (see this and this).

While preparing this article I learned that the standard view of Cambrian explosion has a problem with the Cambrian ocean temperature.
  1. If the oceans existed (which is not clear in the TGD framework before the Cambrian explosion!), their temperatures should have been around 60 C. Marine invertebrates do not however survive above 38 C.
  2. Isotopic estimates for Cambrian phosphatic brachiopods (see this), assuming no post-Cambrian O18 isotopic depletion relative to the recent concentration, suggest that the temperatures of Cambrian oceans were in the range 35-41 C. This range is above the recent range 27-35 C. Assuming an O18 depletion of -3 per mille for the early Cambrian sea water relative to today, one gets Cambrian temperatures around 30 degrees.
  3. What could have caused the O18 depletion of the Cambrian phosphatic brachiopods? Suppose that they evolved in underground oceans which burst to the surface. Could the depletion be that of the underground ocean water relative to the water in the recent oceans? The depletion would reflect the different environments of the underground oceans and the recent oceans. An alternative explanation is that there were non-oxygenated water reservoirs at the surface of the Earth and the oxygenated underground water mixed with them. Also the surface of Mars, to which the surface of the Earth before the Cambrian explosion is analogous, contains some water.
To conclude, I am not suggesting that life developed at the mantle-core boundary: this sounds far too science-fictive. The pole regions of the crust are the most conservative candidate for the seat of the underground oceans. For the purposes of the Expanding Earth model it is quite enough that life developed in underground water reservoirs at depths of a few kilometers. Also in this case thermal isolation from the environment could have played a key role. An interesting question is whether the critical temperature range 30-40 C of life could fix the depth of the underground oceans in which life most probably evolved.

See the article Expanding Earth hypothesis and Pre-Cambrian Earth or the chapter Quantum gravitation and quantum biology in TGD Universe.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, August 19, 2023

Trillions of stars mysteriously disappearing from the sight of James Webb Telescope!

The James Webb Telescope is revolutionizing our world view. Now it tells about disappearing stars (see this): trillions of stars suddenly disappear from the infrared sight of the JWT!

A possible explanation for the mysterious disappearance is that violent collisions of galaxies lead to a re-organization of stars and change their observable characteristics so that they effectively disappear. But there are also stars which look completely stable and then disappear; these have recently been studied systematically.

What says TGD?

  1. The TGD based explanation for the vanishing stars relies on the prediction that astrophysical objects of various scales (stars, galaxies, etc.) appear as nodes of networks formed from 3-surfaces, which can be thought of as regions of hyperbolic 3-space (cosmic time = constant) connected by monopole flux tube pairs.
  2. These 3-surfaces form cosmic lattice-like structures, tessellations as they are called by mathematicians (see this). The connection by a flux tube pair is not stable: reconnection (or rather de-reconnection) can occur and lead to a splitting of the pair into two disjoint U-shaped flux tubes assignable to the two originally connected objects. This general mechanism also works outside astrophysics: in the TGD inspired view of quantum biology, biocatalysis and biochemical reactions rely on it.
  3. The radiation from stars arrives along the flux tubes connecting the astrophysical objects to a network. Diffraction takes place, and the signals propagating along the flux tubes are amplified and travel in specific directions only, and only between the objects of the network. This applies also to gravitational radiation and could explain the recently observed gravitational hum as being associated with the network of stars.
  4. The explanation for the vanishing stars could be very simple: the splitting of the U-shaped flux tube contact between regions containing trillions of stars and the Earth, the solar system, or even the Milky Way. Violent events in the source region could induce these splittings.
For the role of the magnetic flux tubes in the TGD based view of cosmology and astrophysics see this, this, and this.

See the article The TGD view of the recently discovered gravitational hum as gravitational diffraction or the chapter Quantum Astrophysics.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, August 17, 2023

Anomalously high rates of tectonic plate motion and the TGD view of Cambrian explosion

A popular Ars Technica article (see this) tells that the motion between tectonic plates was at times surprisingly fast: the rate of increase of the distance between plates was even about 4 times the recent one.

The TGD view of the expanding Earth relies on the prediction that cosmic expansion occurs for astrophysical objects as a sequence of fast periods of expansion.

  1. The model predicts that the tectonic plates were created in a rather fast radial expansion of the Earth in which the radius increased by a factor of 2. Cracks giving rise to the plates were formed because rock is not a flexible material.
  2. The model explains the Cambrian explosion: advanced photosynthesising multicellulars emerged from underground oceans as the oxygenated water burst to the surface. The TGD view of dark matter makes it possible to circumvent the obvious objections against the model and conforms with recent surprising findings (see this and this).
  3. The fast radial expansion caused a fast increase of the distances between plates. The velocity v of this recession would have been v=dR/dt×ΔΦ, where ΔΦ is the angular distance between the plates and dR/dt is the radial expansion rate.
  4. The duration of the Cambrian Explosion was roughly ΔT = 30 million years. Using the Earth radius 6,371 km one obtains the estimate dR/dt = R/ΔT ≈ 20 cm/year. v is obtained from this by multiplying by ΔΦ ≤ 2π. The largest rate mentioned in the popular article is v = 64 cm/year. The order of magnitude is correct, and the rate would have been higher than the average during the fastest periods (see the numerical estimate after this list).

    The estimate for v must involve a large enough angle ΔΦ and a long enough time period, so that ΔΦ is expected to be a considerable fraction of π. For ΔΦ slightly below π the estimate is practically exact, but this is probably an accident.

  5. Note that the predicted contribution to v is always positive and could provide a test for the TGD view.
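The arithmetic of the fourth item can be checked directly (my own back-of-the-envelope script, using the numbers quoted above):

```python
import math

R_cm = 6.371e8                  # Earth radius in cm
dT_yr = 30e6                    # assumed duration of the Cambrian expansion, years
dR_dt = R_cm / dT_yr            # radial expansion rate, ~21 cm/year
print(dR_dt)

for dphi in (math.pi / 4, math.pi / 2, 3.0):
    print(dphi, dR_dt * dphi)   # v = dR/dt * dPhi; dPhi slightly below pi gives ~64 cm/year
```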
A fascinating, and admittedly frightening, question, which just now occurred to me, is whether the Cambrian explosion was a gravitational expansion analogous to cosmic expansion, in which the metric distances between points doubled! This would have required the scaling of the spatial part of the metric by a factor of 4. Could this make sense, or does it kill the basic idea?
  1. In zero energy ontology (ZEO), the light-cone proper time a serves in the role of the cosmic scale factor for either half-cone of the causal diamond (CD), which has an interpretation as an empty cosmology. "Big" state function reductions (BSFRs), serving as TGD counterparts of ordinary SFRs, change the arrow of time, and a pair of BSFRs would be behind quantum tunnelling in the TGD Universe.

    In the TGD framework quantum coherence and BSFRs are possible even in astrophysical scales. Could the increase of the radius of the Earth be quantum tunnelling realized as a pair of BSFRs?

  2. Can one imagine a local "mini" Big Bang for the CD to which the Earth's space-time surface belongs, with a scaling of the light-cone proper time a by a factor of 2 in an astrophysical quantum tunnelling? The value of the light-cone proper time a, characterizing the cosmo-temporal position of the Earth in a double BSFR, would have increased by a factor of 2. The spatial scaling by a factor of 2 conforms with the p-adic length scale hypothesis stating that p-adic length scales coming as powers of 2 are of special importance.
  3. One can try to form a more quantitative view of the situation. Note that the size scale of the initial CD before the explosion would be T/2, where T, the distance between the tips of the CD, would be about 30 million years. The Cambrian explosion occurred about Ti = 540 million years ago. If Ti corresponds to the cm of the CD, the future tip of the initial CD would be at 570 million years. The CD size would be scaled by a factor of 2, and the end of the cm of the CD would correspond to Tf = 570 million years.

    The quantum average space-time surface would be replaced by a new one in the double BSFR and would be modified already 60 million years before Tf. This time would correspond to 630 million years. As explained, some multicellular plant fossils have been found with an age of about 600 million years. Could this replacement of the geometric past explain them?

See the article Expanding Earth hypothesis and Pre-Cambrian Earth or the chapter Quantum gravitation and quantum biology in TGD Universe.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Sunday, August 13, 2023

A fresh look at Shnoll effect

Thanks to Ed Oberg for sending an email in which he mentioned the Shnoll effect. I have discussed the Shnoll effect from the TGD point of view here. Now I must say that the number theoretical ideas look a little too formal an approach when one wants to understand the splitting of the distribution for the number of counts per time interval in, say, alpha decays. A more direct connection with physics would be badly needed. Therefore I decided to take a fresh look at the Shnoll effect, with inspiration coming from the increased understanding of quantum gravitation in the TGD framework (see for instance this and this).

Consider first the Shnoll effect.

  1. For instance, alpha decay rates are studied: either overall decay rates or rates in a fixed direction for the alpha rays. The number of counts per time interval τ varies, and the Poisson distribution becomes many-peaked.
  2. Is there a dependence on the period τ? How many peaks are there? Are the numbers of peaks the same for various values of τ? The natural assumption is that there are several rates. If so, the count N for peak I is N = rate(I)×τ (see the simulation after this list).
  3. There are periodicities of the peak structure related to sidereal time and solar time. There are correlations with the dynamics of the Sun, Earth, and even galaxy. There is also a dependence on the direction of the alpha ray.
  4. The splitting of the decay rates, as the emergence of almost degenerate states of nuclei, would be the simplest explanation. The astrophysical correlations suggest that this should be due to gravitational effects.
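The proposed mechanism is easy to simulate (a toy sketch of my own, with made-up rates): a mixture of several decay rates turns the single Poisson peak of the count histogram into a multi-peaked distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
rates = np.array([50.0, 100.0, 150.0])   # hypothetical rates rate(I), tau = 1
states = rng.integers(0, len(rates), size=200_000)
counts = rng.poisson(rates[states])      # counts per interval: N = rate(I)*tau

hist = np.bincount(counts, minlength=220)
for k in (50, 75, 100, 125, 150):
    print(k, hist[k])                    # maxima near 50, 100, 150; dips between
```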
The recent TGD view of quantum gravitation could provide a simple explanation.
  1. A splitting of the state of the emitting nucleus into N states occurs such that the N states have different decay rates. Where does this degeneracy come from? Could the degenerate states be dark variants of the ordinary nucleus in the TGD sense and therefore have different values of heff? The observed astrophysical correlations suggest the gravitational Planck constants ℏgr of astrophysical objects.
  2. Why would these almost degenerate states of nuclei have different alpha decay rates? The rates are determined by nuclear physics. In the TGD framework, the only variable parameter is the effective Planck constant heff, which affects the rates in higher orders of the perturbation expansion; the lowest order is not affected. In higher orders the effect is non-trivial and could be large for strong interactions.
  3. The quantum gravitational effects characterized by ℏgr are expected to be the largest ones. Could the almost degenerate nuclei be attached to the gravitational flux tubes of different astrophysical objects and have different effective/gravitational Planck constants? The Sun, the Earth, the Moon, the galaxy, and the planets come first to mind.
  4. This model applies also to electromagnetic interactions and could explain the Shnoll effect in chemistry. The basic prediction is that the splitting of the Poisson distribution is qualitatively similar independent of the system studied.
See the article A Possible Explanation of Shnoll Effect or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Friday, August 11, 2023

Do Yangians and Galois Confinement provide M8-H dual approaches to the construction of the many-particle states?

The construction of many-particle states as zero energy states defining scattering amplitudes and the S-matrix is one of the basic challenges of TGD. TGD suggests two approaches, implied by the physics-as-geometry and physics-as-number-theory visions of TGD. The geometric vision suggests Yangians of the symmetry algebras of the "world of classical worlds" (WCW) at the level of H = M4 × CP2. The number theoretic vision suggests Galois confinement at the level of complexified M8. Could these approaches be M8-H duals of each other?

Yangian Approach

The states would be constructed from fermions and antifermions as modes of WCW spinor field. An idea taking the notion of symmetry to extreme is that this could be done purely algebraically using generators of symmetries.

  1. For a given vacuum state assignable to a partonic 2-surface and identifiable as a ground state of Kac-Moody type representation, the states would be generated by Kac-Moody algebra. Also super-Kac-Moody algebra could be used to construct states with nonvanishing fermion and antifermion numbers. In the case of super symplectic algebra, the generators would correspond to super Noether charges from the isometries of WCW and would have both fermionic and might also have bosonic parts.
  2. The spaces of states assignable to partonic 2-surfaces or to a connected 3-surface are however still rather restricted, since this approach assumes, in the spirit of reductionism, that the symmetries are local single-particle symmetries. The first guess for many-particle states in this approach is as free states; one must then introduce interactions in an ad hoc manner, and the resulting problems of quantum field theories are well known.
  3. In the TGD framework, there is a classical description of interactions in terms of Bohr-orbit like preferred extremals and one should generalize this to the quantum context using zero energy ontology (ZEO). Classical interactions have as space-time correlates flux tubes and "massless extremals" connecting 3-surfaces as particles and topological vertices for the partonic 2-surfaces.
  4. The construction recipe for many-particle states should automatically code for the interactions, and these should follow from the symmetries as a poly-local extension of single-particle symmetries. They should be coded by a modification of the usual tensor product, which gives only free many-particle states. One would like to have interacting many-particle states assignable to disjoint connected 3-surfaces, or many-parton states assignable to a single connected space-time surface inside the causal diamond (CD).

Yangian algebras are especially interesting in this respect.

  1. Yangian algebras have a co-algebra structure allowing one to construct multi-fermion representations of the generators using the comultiplication operation, which is analogous to the time reversal of a Lie algebra commutator (super algebra anticommutator) regarded as an interaction vertex with two incoming and one outgoing particle. The co-product is analogous to a tensor product and assignable to the decay of a particle into two outgoing particles (see the sketch after this list).
  2. What is new is that the generators of the Yangian are poly-local: the infinitesimal symmetry acts on several points simultaneously. For instance, the generators could allow a more advanced mathematical formulation of the n-local interaction energy, in particular potential energy, lacking from quantum field theories. The interacting state could be created by a bi-local generator of the Yangian. The generators of the Yangian can be generated by applying coproducts starting from the basic algebra, and there is a general formula expressing the relations of the Yangian.
  3. Yangian algebras have a grading by a non-negative integer, which could count the number of 3-surfaces (say all connected 3-surfaces appearing at the ends of the space-time surface at the boundaries of causal diamond (CD)), or the number of partonic 2-surfaces for a given 3-surface. There would also be gradings with respect to fermion and antifermion numbers.
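The comultiplication can be illustrated at the Lie-algebra level with a standard sketch (my illustration; the Yangian proper deforms this structure): the coproduct Δ(x) = x⊗1 + 1⊗x defines a bi-local generator acting on a two-particle state, and it preserves the commutation relations.

```python
import numpy as np

def comm(a, b):
    return a @ b - b @ a

# su(2) generators: Pauli matrices divided by 2
s = [0.5 * np.array(m, dtype=complex) for m in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
I2 = np.eye(2)

def coproduct(x):
    """Delta(x) = x (x) 1 + 1 (x) x: a bi-local (two-point) generator."""
    return np.kron(x, I2) + np.kron(I2, x)

# [Delta(Sx), Delta(Sy)] = i Delta(Sz): the bi-local generators satisfy the
# same algebra as the single-particle ones.
lhs = comm(coproduct(s[0]), coproduct(s[1]))
print(np.allclose(lhs, 1j * coproduct(s[2])))  # True
```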

There are indications that Yangians could be important in TGD.

  1. In TGD, the notion of Yangian generalizes, since point-like particles correspond to disjoint 3-surfaces, for a given 3-surface to partonic 2-surfaces, and for a partonic 2-surface to point-like fermions and antifermions. In the TGD inspired biology, the notion of dark genes involves communications by n-resonance: two dark genes with N identical codons can exchange a cyclotron 3N-photon in 3N-resonance. Could genes as dark N-codons allow a description in terms of a Yangian algebra with an N-local vertex? Could one speak of 3N-propagators for the 3N cyclotron photons emitted by dark codons?
  2. In quantum theory, Planck constant plays a central role in the representations of the Lie algebras of symmetries. Its generalization assignable to n-local Lie algebra generators could make sense for Yangians. The key physical idea is that Nature is theoretician friendly. When the coupling strength proportional to a product of total charges or masses becomes so large that perturbation series fails to converge, a phase transition increasing the value of heff takes place. Could this transition mean a formation of bound states describable in terms of poly-local generators of Yangian and corresponding poly-Planck constant?
  3. For instance, the gravitational Planck constant ℏgr, which is bilocal and proportional to the two masses to which a monopole flux tube is associated, could allow an interpretation in terms of Yangian symmetries and be assignable to a bi-local gravitational contribution to the energy momentum. Also other interaction momenta could have similar Yangian contributions and be characterized by corresponding Planck constants.
  4. It is not clear whether ℏgr and its generalization can be seen as special cases of the proposal heff = nh0 generalizing the ordinary single particle Planck constant, or whether they are something different. If so, the hierarchy of Planck constants would correspond to a hierarchy of poly-local generators of the Yangian.

Galois confinement

The above discussion was at the level of H = M4 × CP2 and the "world of classical worlds" (WCW). M8-H duality predicts that this description has a counterpart at the level of M8. The number theoretic vision predicting the hierarchy of Planck constants strongly suggests Galois confinement as a universal mechanism for the formation of bound states of particles as Galois singlets.

  1. The simplest formulation of Galois confinement states that the four-momenta of particles have components which are algebraic integers in the extension of rationals characterizing the polynomial defining a 4-surface in complexified M8, which in turn is mapped to a space-time surface in H = M4 × CP2; the momentum unit is determined by the size of the causal diamond (CD).

    The total momentum of the bound state would be a Galois singlet, so that its components would be ordinary integers: this is analogous to particle-in-a-box quantization. Each momentum component "lives" in an n-dimensional discrete extension of rationals with integer coefficients.

    In principle one has a wave function in this discrete space for all momentum components as a superposition of Galois singlet states. The condition that the total momentum is a Galois singlet forces an entanglement between these states, so that one does not have a mere product state.

  2. Galois confinement poses strong conditions on many-particle states and forces entanglement (a toy example follows below). Could Galois confinement be the M8-H dual of the Yangian approach?
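A toy illustration of the singlet condition (my own example in the extension Q(√2), not the actual TGD construction): single-particle momentum components are algebraic integers a + b√2, and a Galois-singlet total momentum is invariant under the Galois conjugation √2 → -√2, i.e. its components are ordinary integers, which forces the √2-parts of the constituents to cancel.

```python
import sympy as sp

sqrt2 = sp.sqrt(2)
# Two single-particle four-momenta with algebraic-integer components a + b*sqrt(2):
p1 = sp.Matrix([3 + 2*sqrt2, 1 - sqrt2, 0, 2*sqrt2])
p2 = sp.Matrix([1 - 2*sqrt2, 2 + sqrt2, 5, -2*sqrt2])

total = (p1 + p2).applyfunc(sp.expand)
print(list(total))                       # [4, 3, 5, 0]: ordinary integers
print(all(c.is_integer for c in total))  # True: the pair is "Galois confined"
```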
See the article Questions about coupling constant evolution or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Muon anomaly, fifth force, and TGD

We are living in interesting times from the point of view of TGD, also in elementary particle physics. A popular article tells that the "anomalous" anomalous magnetic moment of the muon, which Fermilab reported in 2021, seems to be real. Fermilab has gathered more data and reduced the uncertainties, but it will take a couple of years to narrow down the theoretical uncertainties. This would mean a crack in the standard model and could start a revolution.

What can one say about the situation in TGD? In TGD, the mysterious family replication of fermions has a topological explanation: genus-generation correspondence (see this, this, and this).

This predicts 3 fermion generations, to which one can assign SU(3)g as a combinatorial symmetry group. In TGD, bosons are identified as fermion-antifermion pairs and correspond to the 8g ⊕ 1g representation of SU(3)g. The singlet 1g corresponds to ordinary gauge bosons obeying fermion universality in their couplings. p-Adic thermodynamics makes it possible to estimate the mass scale and even the masses of the 8g bosons.

8g corresponds to new gauge bosons and violates universality in its couplings. The "fifth force" would be assignable to both electroweak and color interactions and even to gravitation (one would have (8⊕1) ⊗ (8⊕1) = 8⊗8 ⊕ 8 ⊕ 8 ⊕ 1). Using the terminology of quantum field theories, loops containing 8g counterparts of gluons and electroweak gauge bosons could cause the doubly anomalous magnetic moment of the muon.

Also other anomalies are predicted: for instance, decays violating the separate conservation of lepton numbers. There is already evidence for an 8g Higgs from anomalous decays of a Higgs-like particle producing muon-electron pairs.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Tuesday, August 08, 2023

Electroweak Symmetry Breaking and supersymplectic symmetries

One of the hardest challenges in the development of the TGD based view of weak symmetry breaking was the fact that classical field equations allow space-time surfaces with finite but arbitrarily large size. For a fixed space-time surface, the induced gauge fields, including classical weak fields, are long ranged. On the other hand, the large mass for weak bosons would require a short correlation length. How can one understand this together with the fact that a photon has a long correlation length?

In zero energy ontology, quantum states are superpositions of space-time surfaces, analogs of almost unique Bohr orbits for particles identified as 3-D surfaces. For some reason the superposition should be such that the quantum averages of the weak gauge boson fields vanish below the weak scale, whereas the quantum average of the electromagnetic field is non-vanishing.

This is indeed the case.

  1. The supersymplectic symmetries form isometries of the world of classical worlds (WCW). They act in CP2 degrees of freedom as symplectic transformations leaving the CP2 symplectic form J invariant, and therefore also its contribution to the electromagnetic field, since this part is the same for all space-time surfaces in the superposition defining a representation of the supersymplectic isometry group (as a special case, a representation of the color group).
  2. In TGD, color and electroweak symmetries acting as holonomies are not independent, and for the SU(2)L part of the induced spinor connection the symplectic transformations induce an SU(2)L × U(1)R gauge transformation. This suggests that the quantum expectations of the induced weak fields over the space-time surfaces vanish above the quantum coherence scale. The averages of W and of the left-handed part of Z0 should therefore vanish.
  3. Also ⟨Z0⟩ as a whole should vanish, which requires that the average of the right-handed part of Z0 vanishes too. For the U(1)R part of Z0, the action of the gauge transformation is trivial in gauge theory. Now, however, the space-time surface changes under symplectic transformations, and this could make the average of the right-handed part of Z0 vanish. The vanishing of the average of the axial part of Z0 is suggested by the partially conserved axial current hypothesis.

One can formulate this picture quantitatively.

  1. The electromagnetic field [1][2] contains, besides the induced Kähler form, also the induced curvature form R12, which couples vectorially. The conserved vector current hypothesis suggests that the average of R12 is non-vanishing. One can express the neutral part of the induced gauge field in terms of the induced spinor curvature and the Kähler form J as:
    R03 = 2(2e0 ∧ e3 + e1 ∧ e2) = J + 2e0 ∧ e3,
    J = 2(e0 ∧ e3 + e1 ∧ e2),
    R12 = 2(e0 ∧ e3 + 2e1 ∧ e2) = 3J - 2e0 ∧ e3.
  2. The induced fields γ and Z0 (photon and Z-boson) can be expressed as:
    γ = 3J - sin²θW R12,
    Z0 = 2R03 = 2(J + 2e0 ∧ e3).
    The condition ⟨Z0⟩ = 0 gives 2⟨e0 ∧ e3⟩ = -J, and this in turn gives ⟨R12⟩ = 3J - 2⟨e0 ∧ e3⟩ = 4J. The average of γ would then be:
    ⟨γ⟩ = (3 - 4sin²θW)J.
    For sin²θW = 3/4, ⟨γ⟩ would vanish. (This chain of substitutions is verified symbolically below.)
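The substitutions can be checked symbolically. A minimal sympy sketch (my illustration), using the identities exactly as stated above (Z0 = 2(J + 2e0∧e3), R12 = 3J - 2e0∧e3, γ = 3J - sin²θW R12) and treating the averages as commuting symbols:

  import sympy as sp

  J, e03, sW2 = sp.symbols('J e03 sW2')    # e03 = <e0 ∧ e3>, sW2 = sin²θW
  Z0 = 2*(J + 2*e03)                       # Z0 = 2 R03
  R12 = 3*J - 2*e03
  gamma = 3*J - sW2*R12
  sol = sp.solve(sp.Eq(Z0, 0), e03)[0]     # <Z0> = 0  =>  e03 = -J/2
  print(sol)                               # -J/2
  print(sp.expand(R12.subs(e03, sol)))     # 4*J
  print(sp.expand(gamma.subs(e03, sol)))   # 3*J - 4*J*sW2, i.e. (3 - 4 sin²θW) J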
The quantum averages of classical weak fields quite generally vanish. What about correlation functions?
  1. One expects that the correlators of classical weak fields as color invariants, and perhaps even symplectic invariants, are non-vanishing below the Compton length, since in this kind of situation the points in the correlation function belong to the same 3-surface representing a particle, such as a hadron.
  2. The intuitive picture is that in longer length scales one has disjoint 3-surfaces with a size scale of Compton length. If the states associated with two disjoint 3-surfaces are separately color invariant there are no correlations in color degrees of freedom and correlators reduce to the products of expectations of classical weak fields and vanish. This could also hold when the 3-surfaces are connected by flux tube bonds.

    Below the Compton length, weak bosons would thus behave as correlated massless fields. The Compton lengths of weak bosons are proportional to the value of the effective Planck constant heff, and in living systems they are proposed to be even of the order of cell size (a rough numerical estimate is sketched after this list). This would explain the mysterious chiral selection in living systems, which requires large parity violation.

  3. What about the averages and correlators of color gauge fields? Classical color gauge fields are proportional to products of the Hamiltonians of color isometries and the induced Kähler form, and the expectations of the color Hamiltonians vanish above the Compton length, so that the average of the classical color field vanishes too. Correlators are non-vanishing below the hadron scale. Gluons do not propagate over long scales, for the same reason as weak bosons. This is implied by color confinement, which also has a classical description in the sense that 3-surfaces necessarily have a finite size.

    A large value of heff allows colored states even in biological scales below the Compton length, since in this kind of situation the points in the correlation function belong to the same 3-surface representing a particle, such as a dark hadron.
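To get a feel for the numbers in item 2, a back-of-the-envelope sketch (my illustration; the cellular scale of ~10 micrometers is an assumption chosen for definiteness): the ordinary Compton length of the W boson is ħ/(mW c) ≈ 2.5×10⁻¹⁸ m, so stretching it to cellular size requires heff/h of order 10¹².

  # Back-of-the-envelope: heff/h needed to stretch the W boson Compton
  # length to an assumed cellular scale of ~10 micrometers.
  hbar_c = 197.3e6 * 1e-15     # eV*m (hbar*c ≈ 197.3 MeV*fm)
  m_W = 80.4e9                 # eV, W boson mass
  L_W = hbar_c / m_W           # ordinary Compton length ≈ 2.5e-18 m
  L_cell = 1e-5                # m, assumed cell size
  print(L_W, L_cell / L_W)     # ~2.5e-18 m, heff/h ~ 4e12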

See the article Reduction of standard model structure to CP2 geometry and other key ideas of TGD or the chapter Appendix.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, August 07, 2023

Gravitational memory and the possible almost universal failure of perturbation theory to converge

Gary Ehlenberg sent an interesting post about the gravitational memory effect (see this and this).

Classical gravitational waves would leave a memory of their propagation in the metric of space-time, affecting the distances between mass points. The computations are done by treating Einstein's theory as a field theory in the background defined by the energy momentum tensor of matter, and the calculations are carried out only in the lowest non-trivial order.

There are two kinds of effects. The linear memory effect occurs for instance when a planet moves along a non-closed hyperbolic orbit around a star, and involves only the energy momentum tensor of the system. The non-linear memory effect also involves the energy momentum tensor of the gravitational radiation as a source added to the energy momentum tensor of matter.

The effect is cumulative and involves integration over the history of the matter source over the entire past. The reason why the memory effect is non-vanishing is basically that the source of the gravitational radiation is quadratic in the metric. In Maxwellian electrodynamics the source does not have this property.

I had never thought about the memory effect before. The formula used to estimate the effect is however highly interesting.

  1. In the formula for the non-linear memory effect, that is, for the d'Alembert operator acting on the radiation contribution to the metric, the source term is obtained by adding the energy momentum tensor of the gravitational radiation to the energy momentum tensor of matter.
  2. This formula can be iterated, and if the limit exists as a fixed point, the additional gravitational radiation produced by the total energy momentum tensor, which already includes the radiative contribution, should vanish. This brings to mind fractals and criticality. One of the basic facts about the iteration of polynomials is that it need not converge: limit cycles typically emerge, and in more complex situations objects known as strange attractors can appear. Does the same problem occur now, when the situation is far more complex? (A toy iteration illustrating this is sketched after this list.)
  3. What is interesting is that gravitational wave solutions indeed have vanishing energy momentum tensors. This is problematic if one considers them as radiation in empty space. In the presence of matter, this might be true only for very special background metrics expressible as a sum of a matter part and a radiation part: just these gravitationally critical fixed-point metrics. Could the fixed point property of these metrics (matter plus gravitational radiation) be used to gain information about the total metric as a sum of matter and gravitational parts?
  4. As a matter of fact, all solutions of non-linear field theories are constructed by a similar iteration, with the radiative contribution in a given order determined by the contributions in lower orders. Under what conditions can one assume convergence of the perturbation series, that is, the fixed point property? Are limit cycles, chaotic attractors, and who knows what else unavoidable? Could the fixed point property have some physical relevance? Could the fixed points correspond, in the quantum field theory context, to fixed points of the renormalization group and lead to quantization of coupling constants?
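The convergence worry in item 2 can be made concrete with a deliberately crude toy model (my illustration, not a gravitational computation): replace the d'Alembertian equation with the scalar fixed-point equation h = a + b h², where a stands for the matter source and the quadratic term for the radiation acting as its own source. A real fixed point exists only for 4ab ≤ 1, so the iteration converges for a weak self-coupling and fails otherwise.

  # Toy model of h_{n+1} = a + b*h_n**2: 'a' mimics the matter source,
  # 'b*h**2' the quadratic self-sourcing of gravitational radiation.
  def iterate(a, b, steps=100):
      h = 0.0
      for _ in range(steps):
          h = a + b * h * h
          if abs(h) > 1e12:
              return None         # diverged: no fixed point reached
      return h

  print(iterate(1.0, 0.1))        # ≈ 1.127, solves h = 1 + 0.1*h**2
  print(iterate(1.0, 0.3))        # None: 4ab > 1, the iteration blows up

Maps of this kind (the logistic map is the standard example) also exhibit the limit cycles and strange attractors mentioned above.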
Does the fixed point property have a TGD counterpart?
  1. In the TGD framework, Einstein's equations are expected only at the QFT limit, at which space-time sheets are replaced with a single region of M4 carrying gauge and gravitational fields which are sums of the induced fields associated with the space-time sheets. What happens at the level of basic TGD?

    What is intriguing is that quantum criticality is the basic principle of TGD and fixes the discrete coupling constant evolution: could quantum criticality realize itself also as gravitational criticality in the above sense? One can even entertain the idea that the perturbation series can converge only at critical points, where it actually becomes trivial.

  2. What does classical TGD say? In TGD, space-time surfaces obey almost deterministic holography, required by general coordinate invariance. Holography implies that the path integral trivializes to a sum over the analogs of Bohr orbits of particles represented as 3-D surfaces. This states quantum criticality and the fixed point property: radiative contributions vanish. This also implies a number theoretic view of coupling constant evolution based on the number theoretic vision of physics.

    There is also universality: the Bohr orbits are minimal surfaces, which satisfy a 4-D generalization of 2-D holomorphy and are independent of the action principle as long as it is general coordinate invariant and constructible in terms of the induced geometry. The only dependence on coupling constants comes from the singularities at which the minimal surface property fails. Also the classical conserved quantities depend on the coupling constants.

  3. The so-called "massless extremals" (MEs) represent radiation with very special properties, such as precisely targeted propagation with light velocity, absence of dispersion of wave packets, and restricted linear superposition for massless modes propagating in the direction of the ME. They are analogous to laser beams: Bohr orbits of radiation fields. The gauge currents associated with MEs are light-like, and the Lorentz 4-force vanishes.
  4. Could the Einstein tensor of an ME vanish? The energy momentum tensor expressed in terms of the Einstein tensor involves a dimensional parameter and measures the breaking of scale invariance. MEs are conformally invariant objects: does this imply the vanishing of the Einstein tensor? Note that the energy momentum tensor assignable to the induced gauge fields is non-vanishing: scale covariance is an inherent property of gauge fields, so it need not vanish.
For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Sunday, August 06, 2023

Photosynthesis occurs in underground waters!!

Quite literally, every day has been bringing one or two pieces of support for the TGD view of quantum physics and biology. Now a Quanta Magazine article (see this) told that in new research (see this), published last month in Nature Communications, researchers reported that in groundwater reservoirs 200 meters below the fossil fuel fields of Alberta, Canada, they had discovered abundant microbes that produce unexpectedly large amounts of oxygen even in the absence of light. Photosynthesis is the standard way to produce oxygen. But how could photosynthesis work underground? This looks like a complete mystery in the standard physics framework.

The TGD based vision of the Cambrian explosion (see this) predicts a new physics mechanism making this possible (see this and this).

The TGD view proposes that complex multicellular life evolved in underground oceans and burst to the surface in the Cambrian explosion, which involved a relatively rapid increase of the Earth's radius by a factor of 2 (a discrete step in the TGD counterpart of cosmic expansion). The underground life must have been able to perform photosynthesis and therefore to oxygenate the water. This would solve the oxygenation problem.

Intriguingly, the light spectrum from the Earth's core is in the same range as that from the Sun. Could dark photons (darkness in the TGD sense, meaning a value heff=nh0>h of the Planck constant) have served as the energy source for underground photosynthesis? One can also imagine solar photons transforming to dark photons at monopole flux tubes, making them able to penetrate the Earth's surface.
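A quick Wien's-law estimate makes the "same range" statement concrete (a sketch; the inner core temperature of about 5500 K is a rough literature value chosen here for illustration):

  # Wien's displacement law: lambda_peak = b / T.
  b_wien = 2.898e-3    # m*K
  for name, T in [("solar photosphere", 5772.0),
                  ("Earth's inner core", 5500.0)]:
      print(name, round(b_wien / T * 1e9), "nm")
  # Both peaks land near 500-530 nm, i.e. in the visible range.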

See the article Expanding Earth Hypothesis and Pre-Cambrian Earth or the chapter Quantum gravitation and quantum biology in TGD Universe.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, August 05, 2023

Phenotype is much more stable against point mutations of genotype than one might expect: Why?

Paul Kirsch sent an interesting link (see this) to a genetics-related article discussing the question of how stably the genotype determines the phenotype. The article proposes a number theoretic formula for the probability that a point mutation does not affect the phenotype. This probability is called the robustness of the phenotype. The number theory involved is very different from that in the TGD framework and I do not understand the technical details.

One considers the correspondence between genotype and phenotype and point mutations in which a code letter changes. The point mutations that do not affect the phenotype are called neutral.

  1. It is empirically found that robustness, defined as the probability that a point mutation does not change the phenotype, is orders of magnitude higher than expected by assuming that it is given by the probability that a random letter sequence gives rise to the phenotype. This is very natural since it makes steady evolution possible: quite few point mutations change the phenotype. This requires strong correlations between the genes which can give rise to a given phenotype. The pool of allowed letter sequences is much smaller than the pool of all possible letter sequences.
  2. It is argued that a certain number theoretical function gives a good estimate for this probability. I have no idea how they end up with this proposal. What this also suggests to me is that quite generally, the allowed genes are not random sequences of letters. There are correlations between them.
Could one understand these correlations by using the number theoretic view of biology proposed in the TGD framework? Consider first how general quantum states are constructed in number theoretical vision.
  1. In the TGD framework, all quantum states are regarded as Galois singlets formed from dark particles. This universal mechanism for the formation of bound states is a number theoretic generalization of the notion of color confinement (see this).
  2. One obtains a hierarchy of Galois confined states. If one has Galois singlets at a given level, one can deform them to non-singlets. One can also consider a larger extension, in which the Galois group is larger and the original singlets cease to be singlets. One can, however, form Galois singlets of them at the next level. This is the general picture, and it applies to any physical state in the number theoretic vision. In biology, dark codons, dark genes, parts of the genome, perhaps even the entire genome, can belong to this hierarchy.
  3. What does Galois singletness mean? The momentum components assignable to the Galois singlet as a bound state are Galois singlets and therefore ordinary integers when the momentum unit defined by the causal diamond is used. The momenta of the particles forming the Galois singlet state are not Galois singlets: their momentum components are algebraic integers, which can be complex, and they are analogous to virtual particles. Galois singletness gives a large number of constraints: their number is 4(d-1), where d is the dimension of the extension (a toy example is sketched after this list).
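A toy example of the constraint counting (my illustration, using the extension Q(√2), so d = 2, with each momentum component written as a pair (rational part, √2 part)):

  from fractions import Fraction

  # Galois singletness in a dimension-d extension: the d-1 non-rational
  # coefficients of each of the 4 total momentum components must vanish,
  # giving 4*(d-1) conditions. Here d = 2 with basis (1, sqrt(2)).
  d = 2
  p1 = [(Fraction(3), Fraction(1))] * 4    # each component: 3 + 1*sqrt(2)
  p2 = [(Fraction(2), Fraction(-1))] * 4   # each component: 2 - 1*sqrt(2)
  total = [tuple(a + b for a, b in zip(c1, c2)) for c1, c2 in zip(p1, p2)]
  print(total)                             # [(5, 0), ...]: ordinary integers
  print(all(c[1] == 0 for c in total))     # True: the sqrt(2) parts cancel
  print(4 * (d - 1))                       # number of conditions: 4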
This mechanism for the formation of bound states is universal and should apply also to codons and genes.
  1. Free dark codons would be Galois singlets formed from 3 dark protons, which are not themselves Galois singlets. In a gene, the dark codons need not be Galois singlets anymore, but the gene itself must be a Galois singlet and therefore defines a quantum coherent state, analogous to a hadron, behaving like a single unit in its interactions.
  2. Galois singletness poses a constraint on the gene as a quantum state: not every combination of dark codons is possible as a dark gene. In the momentum representation, the total momentum of the gene as a many-codon state must have components which are ordinary integers in the unit defined by the causal diamond. The momentum components assignable to the codons are algebraic integers: they are analogous to virtual particles.
For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Friday, August 04, 2023

Too many black holes in the early Universe

The James Webb telescope has observed more black holes in the early universe than expected (see this). Does this mean that black holes are not the end but the beginning, or have we misunderstood the notion of time, as the existence of astrophysical objects seemingly older than the Universe suggests?
  1. In the TGD framework, zero energy ontology (ZEO) predicts that the arrow of time changes in the TGD counterparts of ordinary state function reductions and is unaffected in state function reductions which correspond to repeated measurements of the same observables. In wave mechanics the latter would not affect the state at all, but in the TGD framework they give rise to the sensory experience of a conscious entity, the self.
  2. TGD also predicts a hierarchy heff=nh0 of Planck constants, so that quantum coherence and therefore quantum jumps are possible in arbitrarily long, even astrophysical, scales.
This picture implies that even astrophysical objects can live back and forth in geometric time, so that their evolutionary age can be longer than the cosmic age. This could also explain why there are more black holes than predicted.

See the article TGD view of the paradoxical findings of the James Webb telescope.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Bubbletrons and magnetic bubbles

A popular article in Livescience (see this) told about giant "bubbletrons", which the article "Bubbletrons" (see this) proposes to have played a key role in the early universe. Bubbletrons would be walls generated in first-order phase transitions. A first-order phase transition requires free energy or liberates it.

Note: First order means that the first derivative of the free energy with respect to some variable is discontinuous; the usual phase transitions in condensed matter are first order. Magnetization is a second-order phase transition: the magnetization, as the first derivative of the free energy with respect to the external magnetic field, is continuous, but the magnetic susceptibility, as its second derivative, is discontinuous. (A symbolic illustration is sketched below.)
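A minimal symbolic illustration of this classification (my toy free energies, not realistic models): in the first-order case M = -dF/dH jumps at H = 0; in the second-order case M is continuous but χ = dM/dH jumps.

  import sympy as sp

  H = sp.symbols('H', real=True)

  # First order: F has a kink, so M = -dF/dH is discontinuous at H = 0.
  F1 = -sp.Abs(H)
  M1 = -sp.diff(F1, H)
  print(sp.limit(M1, H, 0, '-'), sp.limit(M1, H, 0, '+'))    # -1, 1

  # Second order: M is continuous but chi = dM/dH jumps at H = 0.
  F2 = sp.Piecewise((-H**2, H < 0), (-2*H**2, True))
  M2 = -sp.diff(F2, H)
  chi = sp.diff(M2, H)
  print(sp.limit(M2, H, 0, '-'), sp.limit(M2, H, 0, '+'))    # 0, 0
  print(sp.limit(chi, H, 0, '-'), sp.limit(chi, H, 0, '+'))  # 2, 4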

The inner and outer surfaces of bubbletrons could contain high energy particles, and the collisions of bubbletrons would liberate energy, accelerating particles to huge energies. These explosions could also generate dark matter, assumed to consist of some exotic particles.

In the fractal TGD Universe, magnetic bubbles generated in local analogs of the Big Bang would have been basic structures in the emergence of astrophysical objects. They would serve as analogs of bubbletrons and would play a key role in the formation of all astrophysical structures, including even planets. At the beginning of this year I wrote two articles describing this vision in various scales (see this and this).

The production of ordinary and dark matter from the TGD counterpart of dark energy, associated with monopole flux tubes and in particular cosmic strings, would be an essential part of the mini big bang and would give rise to the TGD analog of inflation. In TGD, dark matter would correspond to heff=nh0>h phases of ordinary matter, and no exotic dark matter particles are needed.

The proposal is that the collisions of bubbletrons could have created gravitational waves causing the gravitational hum. This might be the case also for the magnetic bubbles of TGD, but I think that this is not enough. TGD predicts tessellations of the cosmic time = constant hyperboloids H3, which are hyperbolic spaces. They appear in all scales. The tessellations are hyperbolic analogs of crystal lattices in E3. In H3 there are 4 regular tessellations, consisting of cubes, icosahedrons, and dodecahedrons; in E3 only the cubic regular tessellation is possible.

There is also the completely unique icosa-tetrahedral tessellation, having tetrahedra, octahedra and icosahedra in its fundamental region: this tessellation is essential in the TGD based model of the genetic code as a universal piece of quantum information processing, not related only to chemical life.

The large voids could correspond to the fundamental regions of icosahedral tessellations: the icosahedron is indeed the Platonic solid nearest to a sphere. Also tessellations having stars at their nodes, with a typical distance of about 5 light years, can be considered. Hyperbolic diffraction guides the gravitational fields to preferred directions and amplifies them, just as in X-ray diffraction. Quantum coherence in astrophysical scales, predicted by the TGD view of dark matter, also amplifies the radiation in these directions (see this).

See the article Magnetic Bubbles in TGD Universe: Part I or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, August 03, 2023

The mystery of magnetic fields appearing in cosmic scales from the TGD viewpoint

There was a popular article, "New Clues on the Source of the Universe's Magnetic Fields", discussing the mystery of long range magnetic fields, which are possible even in cosmic scales. In the Maxwellian world, currents are needed to generate them, but it is very difficult to imagine how currents in long scales could be possible in the plasma of the early universe. The proposal was that somehow these plasma currents could emerge not only in short scales, as claimed in the study, but also in cosmic scales.

In the TGD framework the solution of the problem is simple. At the fundamental level, space-time is replaced with a collection of space-time surfaces of finite size in H= M4×CP2. The corresponding 3-surfaces, or at least their M4 projections, have finite size.

The homology of CP2 makes possible 3-surfaces which are flux tubes whose cross section is a closed 2-surface carrying a quantized magnetic flux. This is not possible in Minkowski space. The associated magnetic fields require no current. The flux tubes are stable against splitting and can be arbitrarily long. U-shaped flux tubes give rise to tentacles which can reconnect and play a key role in biocatalysis.

The monopole part of the magnetic field, which is accompanied by a Maxwellian part, explains the magnetic fields in cosmic scales. Of course, also the stability of the Earth's magnetic field is a mystery and finds a similar explanation. The strength of the monopole part is about 2/5 of that of the entire magnetic field of Earth (roughly 0.2 Gauss out of 0.5 Gauss).

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

The problem of large voids from the TGD point of view

A Quanta Magazine post, "How (Nearly) Nothing Might Solve Cosmology's Biggest Questions" (see this), tells about the mysterious large voids.

1. TGD view of large voids

I have considered the problem of cosmic voids in the TGD framework for decades. I assumed that voids involve cosmic strings going through their center. At that time I did not realize that TGD allows us to consider a considerably simpler solution, which is not possible in general relativity.

In the TGD Universe, space-time consists of 4-D surfaces in H= M4×CP2. Einsteinian space-time corresponds to space-time surfaces with 4-D M4 projections. I call them space-time sheets; they can be connected by extremely tiny wormhole contacts, which in the simplest situation are isometric with a region of CP2 having a 1-D light-like geodesic as its M4 projection. Wormhole contacts serve as basic building bricks of elementary particles. Space-time surfaces, or at least their M4 projections, have outer boundaries. The boundaries of physical objects correspond to the boundaries of 3-surfaces or of their M4 projections, so that we can see the TGD space-time directly with our bare eyes!

Also other kinds of space-time surfaces, such as cosmic strings with 2-D M4 and CP2 projections, are predicted and play a fundamental role in the TGD inspired view of the formation of astrophysical objects.

Concerning the problem of large voids, the key point is that it is possible to have voids in M4 as regions of M4 (or E3) which contain very few or no 3-surfaces. Gravitational attraction could have drawn the 3-surfaces inside the voids to the boundaries of the voids. Could it be that we have been seeing TGD space-time directly for decades?

Also tessellations of the cosmic time = constant hyperboloids would play a key role, and one can imagine that they give rise to tessellations of voids with matter near the walls of the voids. There are 4 regular tessellations, involving either cubes, icosahedrons, or dodecahedrons (in E3 only the cubic regular tessellation is possible), plus the icosa-tetrahedral tessellation consisting of tetrahedrons, octahedrons, and icosahedrons. This tessellation is completely unique and plays a key role in the TGD inspired model of the genetic code, which raises the question of whether the genetic code could be universal and realized in all scales at the level of the magnetic body (see this).

2. Could the CMB cold spot be a supervoid?

There was also another interesting link to a popular article (see this) with the title "Our Universe is normal! Its biggest anomaly, the CMB cold spot, is now explained!". The CMB cold spot is a huge region inside which the temperature of the CMB background is about 70 μK below the average temperature. What adds to the mystery is that it is surrounded by a hotter region. The idea is that the CMB cold spot corresponds to an expanding supervoid. I am however not at all sure whether our Universe is normal in the sense of general relativity.

Consider first the Sachs-Wolfe effect. Assume that a photon arrives at a gravitational well due to a mass distribution. The presence of matter induces first a blueshift, as the photon falls into the gravitational potential of the region, and then a redshift, as it climbs out of it. The expansion, however, flattens the potential during the crossing, so that there is a net reduction of the overall redshift due to the average density of matter.

Since the local temperature depends on the local matter density, a low density region corresponds to a cold spot. If the cold spot corresponds to a region which has a small density and expands during the time the photon takes to cross it, the redshift inside the region is smaller than the redshift caused by an average region. The region appears to have a lower density and a lower temperature. There are a lot of hot and cold spots of this kind, and they induce fluctuations of the CMB temperature. But there is also a really big cold spot surrounded by hotter regions, and this cold spot has been problematic.
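A toy version of the integrated Sachs-Wolfe bookkeeping may help (my illustration): the net fractional temperature shift along the photon path is ΔT/T = 2∫ (dΦ/dt) dt, so a void potential (a hill, Φ > 0) that decays while the photon crosses it leaves a net negative shift, i.e. a cold spot.

  import numpy as np

  # Toy integrated Sachs-Wolfe estimate: dT/T = 2 * integral of dPhi/dt.
  # Model the void potential as Phi(t) = Phi0 * exp(-t/tau), decaying
  # (due to expansion) while the photon crosses during t in [0, 1].
  Phi0, tau = 1e-5, 2.0
  t = np.linspace(0.0, 1.0, 1001)
  Phi_dot = -Phi0 / tau * np.exp(-t / tau)
  print(2.0 * np.trapz(Phi_dot, t))    # ≈ -7.9e-6: a net cold spot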

The idea is that the CMB cold spot could correspond to an expanding supervoid. It is not however obvious to me how this explains the higher temperature at the boundaries of the supervoid. In the TGD framework, one can ask whether the supervoid could correspond to a magnetic bubble caused by a local big bang, which has fed energy to the boundaries of the resulting void, so that the temperature at the boundaries would be higher than inside the void. One can even consider the possibility that the supervoid is, in a reasonable approximation, a void in the M4 sense, so that very few 4-D space-time surfaces would exist in that region.

3. Could M4 voids make it possible to test the TGD view of space-time?

The existence of M4 voids might make it possible to test the TGD view of space-time. The physics predicted by TGD is extremely simple in the case of a single space-time sheet; the observed space-time is, however, many-sheeted. One can think of an analogy with an extremely thin glass plate, with M4 corresponding to the 2-D plane and CP2 to its thickness. Einsteinian space-time sheets correspond to 2-D surfaces inside the plate, which are slightly curved and are connected by wormhole contacts. At the QFT limit one must replace the many-sheeted structure with a region of M4 and define the gauge and gravitational fields as sums of the induced fields associated with the various sheets (determined by the surface geometry alone). The extreme simplicity is lost.

However, if M4 voids exist, one could test TGD at the single-sheeted limit and see the predicted fundamental physics in its extreme simplicity. Things would indeed be simple. Not only are the induced fields determined by the minimal surface property of the space-time region, but holography also holds and is realized in terms of a generalization of 2-D holomorphy to the 4-D case.

See the article Magnetic Bubbles in TGD Universe: Part I or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, August 02, 2023

Double slit experiment in time domain from the TGD  perspective

The temporal analog of the double slit experiment, carried out by a research team led by Riccardo Sapienza, has gained a lot of attention. The experiment is a generalization of the ordinary double slit experiment to the time domain. The results of the experiment challenge the existing views of quantum physics, and it is interesting to see whether the zero energy ontology (ZEO), on which the TGD inspired quantum measurement theory is based, could provide new insights into the experiment.

The basic outcome of the considerations is that, at least in principle, it is possible to determine the classical em fields in the geometric past after a pair of "big" state function reductions (BSFRs), which are the TGD counterparts of ordinary state function reductions and change the arrow of geometric time. Violations of classical causality based on finite signal velocity would serve as support for ZEO.

See the article Double slit experiment in time domain from the TGD  perspective or the chapter Zero Energy Ontology.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.