Monday, September 17, 2018

Is it possible to determine experimentally whether gravitation is quantal interaction?

Marletto and Vedral have proposed (thanks to Ulla for the link) an interesting method for testing whether gravitation is a quantal interaction (see this). I tried to understand what the proposal suggests and how it translates to TGD language.

  1. If the gravitational field is quantal, it makes possible entanglement between two states. This is the intuitive idea, but what does it mean in the TGD picture? Feynman interpreted this as entanglement of the gravitational field of an object with the state of the object. If the object is in a state which is a superposition of states localized at two different points xi, the classical gravitational fields φgr are different and one has a superposition of states with different locations

    | I> = ∑i=1,2 | mi at xi> | φgr,xi> == | L> + | R> .

  2. Put two such de-localized states with masses m1 and m2 at some distance d to get the state |I1>|I2>,
    | Ii> = | L>i + | R>i. The 4 component pairs of the states interact gravitationally, and since the gravitational fields between different component pairs differ, the components develop different phases: one can obtain an entangled state.

    The gravitational field would entangle the masses. If one integrates over the degrees of freedom associated with the gravitational field, one obtains a density matrix, and this density matrix does not correspond to a pure state if the gravitational field is quantal in the sense that it entangles with the particle position.

    That gravitation is able to entangle the masses would be a proof of the quantum nature of the gravitational field. It is not however easy to detect this. If gravitation only serves as a parameter in the interaction Hamiltonian of the two masses, entanglement can be generated, but this does not prove that the gravitational interaction is quantal. It is required that the only interaction between the systems is gravitational, so that other interactions do not generate entanglement. Certainly, one should use masses having no em charges.

  3. In the TGD framework the view of Feynman is natural. One has a superposition of space-time surfaces representing this situation. The gravitational field of a particle is associated with the magnetic body of the particle represented as a 4-surface, and the superposition corresponds to a de-localized quantum state in the "world of classical worlds" (WCW) with xi representing particular WCW coordinates.

I am not a specialist in quantum information theory nor a quantum gravity experimentalist, so hereafter I must proceed keeping my fingers crossed and can only hope that I have understood correctly. To my best understanding, the general idea of the experiment is to use an interferometer to detect the phase differences generated by the gravitational interaction and inducing the entanglement: not for photons but for gravitationally interacting masses m1 and m2 assumed to be in a quantum coherent state describable by a wave function analogous to an em field. It is assumed that the gravitational interaction can be described classically, and this is also the case in TGD by quantum-classical correspondence.
  1. The authors think quantum information theoretically and reduce everything to qubits. The de-localization of a mass to a superposition of two positions corresponds to a qubit analogous to spin or the polarization of a photon.

  2. One must use an analog of an interferometer to measure the phase difference between the different values of this "polarization".

    The normal interferometer is a flattened square-like arrangement. Photons in superpositions of different polarization states enter a beam splitter at the lower left corner of the interferometer, which divides the beam into two beams with different polarizations: horizontal (H) and vertical (V). The vertical (horizontal) beam enters a mirror which reflects it into a horizontal (vertical) beam. One obtains paths V-H and H-V, which meet at a semi-transparent mirror located at the upper right corner of the interferometer and interfere.

    There are detectors D0 resp. D1 detecting the component of light that passed through the fourth mirror in the vertical resp. horizontal direction. Firing of D1 would select the H-V path and firing of D0 the V-H path. This would thus tell along which path (V-H or H-V) the photon arrived. The interference and thus also the detection probabilities depend on the phases of the beams generated during the travel: this is important.

  3. If I have understood correctly, this picture of the interferometer must be generalized. The photon is replaced by a mass m in a quantum state which is a superposition of two states with polarizations corresponding to the two different positions. Beam splitting would mean that the components of the state of mass m localized at positions x1 and x2 travel along different routes. The wave functions must be reflected at the first mirrors on both paths and transmitted through the mirror at the upper right corner. The detectors Di measure along which path the mass state arrived and localize the mass state at either position. The probabilities for the positions depend on the phase difference generated along the path. I can only hope that I have understood correctly: in any case the notions of mirror and semi-transparent mirror in principle make sense also for solutions of the Schrödinger equation.

  4. One must however have two interferometers, one for each mass. The masses m1 and m2 interact quantum gravitationally, and the phases generated for different polarization states differ. The phase is generated by the gravitational interaction. The authors estimate that the phases generated along the paths are of the form

    Φi = [m1m2G/ℏ di] Δ t .

    Δ t = L/v is the time taken to pass through a path of length L with velocity v. d1 is the smaller distance, between the upper path for the lower mass m2 and the lower path for the upper mass m1; d2 is the distance between the upper path for the upper mass m1 and the lower path for m2. See Figure 1 of the article.
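As a rough sanity check, one can evaluate Φ = Gm1m2Δt/(ℏd) numerically. The parameter values used below (m ∼ 10^-12 kg, d ∼ 1 micron, Δt ∼ 10^-6 s) are the ones quoted later in the text; the rest are standard constants.

```python
# Rough numerical estimate of the gravitationally generated phase
# Phi = G*m1*m2*dt / (hbar*d), with the parameter values quoted in the text.
G = 6.674e-11      # m^3 kg^-1 s^-2
hbar = 1.0546e-34  # J s
m1 = m2 = 1e-12    # kg, condensed-matter mass scale suggested by the authors
d = 1e-6           # m, distance between the paths
dt = 1e-6          # s, traversal time

Phi = G * m1 * m2 * dt / (hbar * d)
print(f"Phi ~ {Phi:.2f} rad")  # -> Phi ~ 0.63 rad, an observable phase
```

An order-unity phase for these parameters is what makes the proposal look feasible at all.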

What does one need for the experiment?
  1. One should have de-localization of massive objects. On atomic scales this is possible. If one has heff/h0 = n larger than its standard value, one could also have a zoomed-up scale of de-localization, and this might be very relevant. The fountain effect of superfluidity comes to mind.

  2. The gravitational fields created by atomic objects are extremely weak, and this is an obvious problem. Gm1m2 for atomic mass scales is extremely small: Planck mass mP is something like 10^19 proton masses, whereas atomic masses are of order 10-100 proton masses.

    One should have objects with masses not far from Planck mass to make Gm1m2 large enough. The authors suggest using condensed matter objects having masses of order m ∼ 10^-12 kg, which is about 10^15 proton masses and 10^-4 Planck masses. The authors claim that recent technology allows de-localization of masses of this scale at two points. The distance d between the objects would be of order micron.

  3. For masses larger than Planck mass one could have difficulties, since the quantum gravitational perturbation series need not converge for Gm1m2 > 1 in units with ℏ = c = 1 (say). For the proposed mass scales this would not be a problem.
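In units ℏ = c = 1 the dimensionless gravitational coupling is αgr = Gm1m2/ℏc = (m/mP)^2, and the convergence criterion can be checked directly (a trivial check of mine, not part of the article):

```python
# Dimensionless gravitational coupling alpha_gr = G*m1*m2/(hbar*c) = (m/m_P)^2
# for the proposed masses; convergence requires alpha_gr < 1 (hbar = c = 1).
G, hbar, c = 6.674e-11, 1.0546e-34, 2.998e8
m_P = (hbar * c / G) ** 0.5          # Planck mass, ~2.18e-8 kg
m = 1e-12                            # kg, proposed mass scale
alpha_gr = (m / m_P) ** 2
print(f"m_P ~ {m_P:.2e} kg, alpha_gr ~ {alpha_gr:.1e}")  # alpha_gr ~ 2e-9 << 1
```

So the proposed masses sit comfortably inside the perturbative regime.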

What can one say about the situation in the TGD framework?
  1. In the TGD framework the gravitational Planck constant hgr = Gm1m2/v0, assignable to the flux tubes mediating the interaction between m1 and m2 as macroscopic quantum systems, could enter the game and could in the extreme case reduce the value of the gravitational fine structure constant from Gm1m2/4π ℏ to Gm1m2/4π ℏeff = β0/4π, β0 = v0/c < 1. This would make the perturbation series convergent even for macroscopic masses behaving like quantal objects. The physically motivated proposal is β0 ∼ 2^-11. This would zoom up the quantum coherence length scales by hgr/h.

  2. What can one say in the TGD framework about the values of the phases Φ?

    1. For ℏ → ℏeff one would have

      Φi = [Gm1m2/ℏeff di] Δ t .

      For ℏ → ℏeff the phase differences would be reduced for a given Δ t. On the other hand, the quantum gravitational coherence time is expected to increase like heff, so that the values of the phase differences would not change if Δ t is increased correspondingly. The time of 10^-6 seconds could be scaled up, but this would require an increase of the total length L of the interferometer arms and/or a slowing down of the velocity v.

    2. For ℏeff = ℏgr this would give a universal prediction having no dependence on G or the masses mi:

      Φi = [v0Δ t/di] = [v0/v] [L/di] .

      If Planck length is actually equal to the CP2 length R ∼ 10^3.5 (GNℏ)^1/2, one would have GN = R^2/ℏeff with ℏeff ∼ 10^7 ℏ. One can consider both smaller and larger values of G, and for larger values the phase difference would be larger. For this option one would obtain a 1/ℏeff^2 scaling for Φ. Also for this option the prediction for the phase difference is universal for heff = hgr.

    3. What is important is that the universality could be tested by varying the masses mi. This would however require that the masses mi behave gravitationally as coherent quantum systems. It is however possible that the largest systems behaving quantum coherently correspond to much smaller masses.
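A minimal numerical sketch under the assumption β0 = v0/c = 2^-11 proposed above; the interferometer parameters v, L, d below are illustrative guesses of mine, not values from the article.

```python
# Sketch assuming beta0 = v0/c = 2**-11, the value proposed in the text.
G, hbar, c = 6.674e-11, 1.0546e-34, 2.998e8
v0 = c * 2**-11                       # ~1.46e5 m/s

# 1) Threshold mass for hbar_gr = G*m1*m2/v0 to exceed hbar (taking m1 = m2 = m):
m_thr = (v0 * hbar / G) ** 0.5
print(f"m_thr ~ {m_thr:.1e} kg")      # ~5e-10 kg, well above the authors' 1e-12 kg

# 2) Universal phase Phi = (v0/v)*(L/d): no dependence on G or the masses.
v, L, d = 1.0, 1e-2, 1e-6             # illustrative traversal velocity, arm length, separation
Phi = (v0 / v) * (L / d)
print(f"Phi ~ {Phi:.2e} rad")
```

If this estimate is right, the authors' masses m ∼ 10^-12 kg would lie well below the threshold, so the ℏeff = ℏgr option would become relevant only for masses above roughly half a microgram.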

See the chapter About the Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are identical giving G= R2/ℏeff of "Physics in many-sheeted space-time" or the article Is the hierarchy of Planck constants behind the reported variation of Newton's constant?.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, September 10, 2018

Two comments about coupling constant evolution

In the following I make two comments about coupling constant evolution, refining slightly the existing view. The first comment proposes that coupling constant evolution is forced by the convergence of the perturbation series at the level of the "world of classical worlds" (WCW). At the level of cognitive representations provided by adelic physics based on quantum criticality, coupling constant evolution would reduce to a sequence of quantum phase transitions between extensions of rationals. The second comment is about the evolution of the cosmological constant and its relationship to the vision of cosmic expansion as a thickening of the M4 projections of magnetic flux tubes.

Discrete coupling constant evolution: from quantum criticality or from the convergence of perturbation series?

  1. heff/h0 = n, identifiable as the dimension of an extension of rationals, has an integer spectrum. This allows the generalization of the formula for Newton's constant to Geff = R^2/ℏeff, with Planck length lP identified with the much longer CP2 size R, so that TGD involves only a single fundamental length in accordance with the assumption held for about 35 years before the emergence of the twistor lift of TGD. Therefore Newton's constant varies and is different at different levels of the dark matter hierarchy, identifiable in terms of the hierarchy of extensions of rationals.

  2. As a special case one has ℏeff = ℏgr = GMm/v0. In this case the gravitational coupling strength becomes GMm/ℏgr = v0 and does not depend on the masses or G at all. In quantum scattering amplitudes a dimensionless parameter (1/4π)v0/c would appear in the role of gravitational fine structure constant and would be obtained from ℏeff = ℏgr = GMm/v0, consistent with Equivalence Principle. The miracle would be that Geff disappears totally from the perturbative expansion in powers of GMm, as one finds by looking at what αgr = GMm/ℏgr is! This picture would work when GMm is so large that the ordinary perturbative expansion fails to converge. For Mm above Planck mass squared this is expected to be the case. What happens below this limit is yet unclear (n is integer).

    Could v0 be a fundamental coupling constant running only mildly? This does not seem to be the case: Nottale's original work proposing ℏgr finds that v0 for the outer planets is by a factor 1/5 smaller than for the inner planets (see this and this).

  3. This picture works also for other interactions (see this). Quite generally, Nature would be theoretician friendly and induce a phase transition increasing ℏ when the coupling strength exceeds the value below which the perturbation series converges, so that convergence is restored. In adelic physics this would mean an increase of algebraic complexity, since heff/h = n is the dimension of the extension of rationals inducing the extensions of various p-adic number fields and defining the particular level in the adelic hierarchy (see this). The parameters characterizing space-time surfaces as preferred extremals of the action principle would be numbers in this extension of rationals, so that the phase transition would have a well-defined mathematical meaning. In TGD the extensions of rationals would label different quantum critical phases in which coupling constants do not run, so that coupling constant evolution would be discrete as a function of the extension.
This vision also allows one to understand the discrete coupling constant evolution replacing the continuous coupling constant evolution of quantum field theories as being forced by the convergence of the perturbation expansion and induced by the evolution defined by the hierarchy of extensions of rationals.
  1. When convergence is lost, a phase transition increasing algebraic complexity takes place and increases n. Extensions of rationals have also other characteristics than the dimension n.

    For instance, each extension is characterized by its ramified primes, and the proposal is that the favoured p-adic primes assignable to cognition and also to elementary particles and physics in general correspond to so-called ramified primes analogous to multiple zeros of polynomials. Therefore number theoretic evolution would also give rise to p-adic evolution as an analog of ordinary coupling constant evolution with length scale.

  2. At quantum criticality coupling constant evolution is trivial. In the QFT context this would mean that loops vanish separately or at least sum up to zero for the critical values of the coupling constants. This however seems to make the argument about the convergence of the coupling constant expansion obsolete, unless one allows only the quantum critical values of coupling constants guaranteeing that quantum TGD is quantum critical. There are strong reasons to believe that the TGD analog of twistor diagrammatics involves only tree diagrams, and there is a strong number theoretic argument for this: an infinite sum of diagrams does not in general give a number in a given extension of rationals. Quantum criticality would be forced by number theory.

  3. Consistency would be achieved if the ordinary continuous coupling constant evolution is obtained as a completion of the discrete coupling constant evolution to a real number based continuous evolution. Similar completions should make sense in the p-adic sectors. These perturbation series should converge, and this condition would force the phase transitions: for the critical values of the coupling constant strength the sum over the loop corrections would vanish, and the outcome would be in the extension of rationals and make sense in the extension of any number field induced by the extension of rationals. Quantum criticality would boil down to number theoretical universality. The completions to continuous evolution are not unique, and this would correspond to a finite measurement resolution achieved only at the limit of algebraic numbers.

    One can ask whether one should regard this hierarchy as a hierarchy of approximations for space-time surfaces in M8 represented as zero loci for real or imaginary part (in quaternionic sense) of octonion analytic functions obtained by replacing them with polynomials of finite degree. The picture based on the notion of WCW would correspond to this limit and the hierarchy of rational extensions to what cognitive representations can provide.
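The "theoretician friendly" rule sketched above can be caricatured as a one-line algorithm (a toy sketch of mine, not TGD machinery): increase n = heff/h0 until the effective coupling α0/n drops below the convergence bound.

```python
# Toy sketch: whenever the coupling strength alpha = alpha0/n exceeds a
# convergence bound, increase n = h_eff/h0 to the next integer restoring
# alpha below the bound (caricature of the "theoretician friendly" rule).
def next_n(alpha0: float, n: int, bound: float = 1.0) -> int:
    """Smallest integer n' >= n with alpha0/n' < bound."""
    while alpha0 / n >= bound:
        n += 1
    return n

print(next_n(7.3, 1))  # -> 8: a "coupling" alpha0 = 7.3 with bound 1 forces n = 8
```

In the real proposal n is of course not arbitrary: it must be the dimension of an extension of rationals, so only a discrete subset of values is available.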

Evolution of cosmological constant

The goal is to understand the evolution of the cosmological constant number theoretically and correlate it with the intuitive idea that cosmic expansion corresponds to a thickening of the M4 projections of cosmic strings in discrete phase transitions changing the value of the cosmological constant and other coupling parameters.

First some background is needed.

  1. The action for the twistor lift is the 6-D analog of Kähler action for 6-D surfaces in the 12-D product of the twistor spaces of M4 and CP2. The twistor space of M4 is the geometric variant of twistor space and simply the product M4× S2. For the allowed extremals of this action the 6-D surface dimensionally reduces to a twistor bundle over X4 having S2 as fiber. The action for the space-time surface is a sum of Kähler action and a 4-volume term. The coefficient of the four-volume term has an interpretation in terms of cosmological constant, and I have considered explicitly its p-adic evolution as a function of the p-adic length scale.

  2. The vision is that the cosmological constant Λ behaves in an average sense as 1/a^2 as a function of the light-cone proper time a assignable to the causal diamond (CD) and serving as an analog of cosmological time coordinate. One can say that Λ is a function of the scale of the space-time sheet and a as cosmological time defines this scale. This solves the problem due to the large values of Λ at very early times. The size of Λ is reduced in p-adic length scale evolution occurring via phase transitions reducing Λ.

  3. p-Adic length scales are given by Lp = kR p^1/2, where k is a numerical constant and R is the CP2 size - say the radius of a geodesic sphere. An attractive interpretation (see this) is that the real Planck length actually corresponds to R, although lP is by a factor of order 10^-3.5 shorter. The point is that one identifies Geff = R^2/ℏeff with ℏeff ∼ 10^7 ℏ for GN. Geff would thus depend on heff/h = n, which is essentially the dimension of the extension of rationals whose hierarchy gives rise to coupling constant evolution - also to the evolution of Λ, which is indeed a coupling constant like quantity.

  4. p-Adic length scale evolution predicts a discrete spectrum for Λ ∝ Lp^-2 ∝ 1/p, and the p-adic length scale hypothesis - stating that the p-adic primes p ≈ 2^k, k some (not arbitrary) integer, are preferred - would reduce the evolution to phase transitions in which Λ changes by a power of 2. This would replace continuous cosmic expansion with a sequence of such phase transitions. This would solve the paradox due to the fact that stellar objects participate in cosmic expansion but do not seem to expand themselves. The objects would expand in discrete jerks, and the so-called Expanding Earth hypothesis would have a TGD variant: in the Cambrian explosion the radius of Earth would have increased by a factor of 2 (see this and this).
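Since Lp ∝ p^1/2 and p ≈ 2^k, the jerk-wise expansion is simple bookkeeping (a trivial sketch of the scaling rule; the particular values of k below are arbitrary illustrations):

```python
# p-adic length scale hypothesis: L_p ∝ sqrt(p) with p ≈ 2**k, so the ratio
# of two scales is 2**((k2-k1)/2) and each step k -> k+2 doubles the scale.
def scale_ratio(k1: int, k2: int) -> float:
    return 2.0 ** ((k2 - k1) / 2)

print(scale_ratio(107, 109))  # -> 2.0: one transition by two units in k doubles L_p
print(scale_ratio(107, 111))  # -> 4.0
```

A radius-doubling "jerk" such as the conjectured Cambrian one would thus correspond to a step of two units in k.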

One can gain additional insight into the evolution of Λ from the vision that cosmic evolution to a high extent corresponds to the evolution of magnetic flux tubes, which started out as cosmic strings: objects of the form X2× Y2 ⊂ M4× CP2, where X2 is a minimal surface - string world sheet - and Y2 is a complex sub-manifold of CP2 - a homologically non-trivial or trivial geodesic sphere in the simplest situation. In the homologically non-trivial case there is a monopole flux of Kähler magnetic field along the string. The M4 projection of the cosmic string is unstable against perturbations and gradually thickens during cosmic evolution.
  1. The Kähler magnetic energy of the flux tube is proportional to B^2 SL, where B is the Kähler magnetic field, whose flux is quantized and does not change. By flux quantization B itself is roughly proportional to 1/S, S being the area of the M4 projection of the flux tube, which gradually thickens. Kähler energy is proportional to L/S and thus decreases as S increases unless L increases. In any case B weakens, and this suggests that Kähler magnetic energy transforms to ordinary particles or their dark counterparts, part of the particles remaining inside the flux tube as dark particles with heff/h0 = n characterizing the dimension of the extension of rationals.

  2. What happens to the volume energy Evol? One has Evol ∝ Λ LS, which increases as S increases. This cannot make sense, but the p-adic evolution of Λ as Λ ∝ 1/p saves the situation. The primes p possible for a given extension of rationals would correspond to ramified primes of the extension. Cosmic expansion would take place as phase transitions changing the extension of rationals, and the larger extension should possess larger ramified primes.

  3. The total energy of the flux tube would be of the form E = (a/S + bS)L, corresponding to the Kähler contribution and the volume contribution. Physical intuition tells that also the volume energy decreases during the sequence of phase transitions thickening the string but also increasing its length. The problem is that if bS is essentially constant, the volume energy of string like objects increases like Lp if string like objects of cosmic length are allowed. The situation changes if the string like objects are much shorter.

    To understand whether this is possible, one must consider an alternative but equivalent view of the cosmological constant (see this). The density ρvol of the volume energy has dimension 1/L^4 and can also be parametrized by a p-adic length scale Lp1: one would have ρvol ∝ 1/Lp1^4. The p-adic prime p of the previous parametrization corresponds to a cosmic length scale, and one would have p ∝ p1^2, which for the size scale assignable to the age of the Universe observable to us corresponds roughly to the neutrino Compton length. Galactic strings would however correspond to much longer strings, and TGD indeed predicts a Russian doll fractal hierarchy of cosmologies.

  4. The condition that the value of the volume part of the action for the 4-volume Lp1^4 remains constant under p-adic evolution gives Λ ∝ 1/Lp1^4. Parameterize the volume energy as Evol = bSL. Assume that the string length L scales as Lp and require that the volume energy of the flux tube scales as 1/Lp1 (Uncertainty Principle). The parameter b (that is Λ) would then scale as 1/(Lp1^2 S). Consistency requires that S scales as Lp1^2. As a consequence, both volume and Kähler energy would decrease like 1/Lp1. Both Kähler and volume energy would transform to ordinary particles and their dark variants, part of which would remain inside the flux tube. The transformations would occur as phase transitions changing p1 and would generate bursts of radiation. The result looks strange at first but is due to the fact that Lp1 is much shorter than Lp: for Lp the result would not be possible.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Sunday, September 09, 2018

Did LIGO observe non-standard value of G and are galactic blackholes really supermassive?

I have talked (see this) about the possibility that Planck length lP is actually the CP2 length R, which is scaled up by a factor of order 10^3.5 from the standard Planck length. The basic formula for Newton's constant G would be a generalization of the standard formula to G = R^2/ℏeff. There would be only one fundamental scale in TGD, as the original idea indeed was. ℏeff at "standard" flux tubes mediating gravitational interaction (gravitons) would be by a factor of about n ∼ 10^6-10^7 larger than h.

Also other values of heff are possible. The mysterious small variations of G known for a long time could be understood as variations of some factors of n. The fountain effect in superfluidity could correspond to a value of heff/h0 = n at gravitational flux tubes larger than the standard value, increased by some integer factor. The value of G would be reduced and would allow particles to get to greater heights already classically. In the Podkletnov effect some factor of n would increase and g would be reduced by a few per cent. A larger value of heff would also induce a larger de-localization height.

Also smaller values are possible, and in fact in condensed matter scales it is quite possible that n is rather small. Gravitation would be stronger but very difficult to detect in these scales. A neutron in the gravitational field of Earth might provide a possible test. The general rule would be that the smaller the scale of dark matter dynamics, the larger the value of G; the maximum value would be Gmax = R^2/h0, h = 6h0.

Are the blackholes detected by LIGO really so massive?

LIGO (see this) has hitherto observed 3 fusions of black holes giving rise to gravitational waves. For the TGD view of the findings of LIGO see this and this. The colliding blackholes were deduced to have unexpectedly large masses: something like 10-40 solar masses, which is regarded as rather strange.

Could it be that the masses were actually of the order of a solar mass, with G larger by this factor and heff smaller by this factor?! The masses of the colliding blackholes could be of order solar mass and G would be larger than its normal value - say by a factor in the range [10,50]. If so, the LIGO observations would represent the first evidence for the TGD view of quantum gravitation, which is very different from the superstring based view. The fourth observed fusion was for neutron stars rather than black holes, and the stars had masses of order solar mass.

This idea works if the physics of a gravitating system depends only on G(M+m). That the classical dynamics depends on G(M+m) only follows from Equivalence Principle. But is this true also for gravitational radiation?

  1. If the power of gravitational radiation distinguishes between different values of M+m when G(M+m) is kept constant, the idea is dead. This seems to be the case. The dependence on G(M+m) only leads to a contradiction at the limit when M+m approaches zero and G(M+m) is fixed. The reason is that the energy emitted per single period of rotation would be larger than M+m. The natural expectation is that the radiated power per cycle and per mass M+m depends on G(M+m) only as a dimensionless quantity.

  2. From arXiv one can find an article (see this) in which the energy per unit solid angle and frequency radiated in a collision of blackholes is estimated; the outcome is proportional to E^2 G(M+m)^2, where E is the energy of the colliding blackhole.

    The result is proportional to mass squared measured in units of Planck mass squared, as one might indeed naively expect, since GM^2 is analogous to the total gravitational charge squared measured using Planck mass.

    The proportionality to E^2 comes from the condition that the dimensions come out correctly. Therefore scaling G upwards would reduce the mass, and the power of gravitational radiation would be reduced like M+m. The power per unit mass depends on G(M+m) only. Gravitational radiation thus allows one to distinguish between two systems with the same Schwarzschild radius, although the classical dynamics does not allow this.

  3. One can express the classical gravitational energy E as gravitational potential energy proportional to GM/R. This gives only a dependence on GM, as Equivalence Principle for classical dynamics also requires, and for the collisions of blackholes R is measured using GM as a natural unit.
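The argument can be condensed into a small sketch (my own illustration with all dimensional constants dropped): the opposite scalings of G and M leave the classical invariant GM unchanged, but not the schematic radiated power E^2 (GM)^2 with E ∼ M.

```python
# Sketch of the scaling argument: under G -> lam*G, M -> M/lam the product
# G*M is invariant, so classical dynamics (e.g. Schwarzschild radius
# 2*G*M/c^2) is unchanged, but the radiated power scales like 1/lam^2.
def classical_invariant(G, M):
    return G * M

def radiated_power(G, M):
    return M**2 * (G * M)**2   # schematic: E^2 * (G*(M+m))^2 with E ~ M+m

G, M, lam = 1.0, 1.0, 10.0
assert classical_invariant(lam * G, M / lam) == classical_invariant(G, M)
ratio = radiated_power(lam * G, M / lam) / radiated_power(G, M)
print(f"{ratio:.4f}")  # -> 0.0100: same G*M, hundredfold smaller radiated power
```

This is what makes gravitational radiation, unlike the orbital dynamics, sensitive to the proposed rescaling of G.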

Remark: The calculation uses the notion of energy, which in general relativity is precisely defined only for stationary solutions. Radiation spoils the stationarity. The calculation of the radiation power in GRT is to some degree artwork, feeding in the classical conservation laws in the post-Newtonian approximation although they are lost in GRT. In the TGD framework the conservation laws are not lost and hold true at the level of M4×CP2.

What about supermassive galactic blackholes?

What about the supermassive galactic black holes in the centers of galaxies: are they really super-massive, or is G super-large? The mass of the Milky Way super-massive blackhole is in the range 10^5-10^9 solar masses. The geometric mean is 10^7 solar masses, of the order of the standard value n = R^2/GNℏ ∼ 10^7. Could one think that this blackhole actually has a mass in the range 1-100 solar masses, assignable to an intersection of the galactic cosmic string with itself? How galactic blackholes are formed is not well understood. Now this problem would disappear: galactic blackholes would be there from the beginning!

The general conclusion is that only gravitational radiation allows one to distinguish between different masses M+m for given G(M+m) in a system consisting of two masses, so that classically the opposite scalings of G and M+m are a symmetry.

See the article Is the hierarchy of Planck constants behind the reported variation of Newton's constant? or the chapter TGD and astrophysics of "Physics in many-sheeted space-time".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, September 05, 2018

The prediction of quantum theory for heat transfer rate fails: evidence for the hierarchy of Planck constants

I encountered in FB a highly interesting finding discussed in two popular articles (see this and this). The original article (see this) is behind a paywall, but one can find the crucial figure 5 online (see this). It seems that experimental physics is in the middle of the revolution of the century while theoretical physicists straying in the superstring landscape do not have the slightest idea of what is happening.

The size scale of the objects studied - for instance membranes at around room temperature T = 300 K - is about 1/2 micrometer: the length scale range of cells is in question. The objects radiate, and a similar second object is heated if there is a temperature difference between them. The heat flow is proportional to the temperature difference, and a radiative conductance called Grad characterizes the situation. Planck's black body radiation law, which initiated the development of quantum theory more than a century ago, predicts Grad at large enough distances.

  1. The radiative transfer is larger than predicted by Planck's radiation law at small distances (the nearby region), of the order of the average wavelength of the thermal radiation deducible from its temperature. This is not news.

  2. The surprise was that the radiative conductance is 100 times larger than expected from Planck's law at large distances (the faraway region) for small objects with size of order 0.5 micron. This is really big news.

The obvious explanation in the TGD framework is provided by the hierarchy of Planck constants. Part of the radiation has Planck constant heff = n×h0 larger than the standard value h = 6h0 (a good guess for atoms). This scales up the wavelengths, and the size of the nearby region is scaled up by n. The faraway region can become effectively a nearby region, and the conductance increases.
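For orientation, the size of the nearby region follows from Wien's displacement law; the scaling by n is the TGD assumption, and the value n = 12 below is purely illustrative:

```python
# The near-field region extends to roughly the thermal wavelength of Wien's
# displacement law, lambda_T = b/T; with h_eff = n*h the wavelength scale
# (and hence the "nearby" region) is multiplied by n (TGD assumption).
b = 2.898e-3                     # m*K, Wien displacement constant
T = 300.0                        # K, room temperature
lam = b / T
print(f"lambda_T ~ {lam*1e6:.1f} um")   # -> ~9.7 um
n = 12                           # illustrative value, not from the article
print(f"scaled: ~ {n*lam*1e6:.0f} um")  # faraway distances become 'nearby'
```

Already a modest n would push the near-field regime to distances an order of magnitude beyond the 0.5 micron object size.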

My guess is that this unavoidably means the beginning of the second quantum revolution, brought by the hierarchy of Planck constants. These experimental findings cannot be swept under the rug anymore.

See the chapter Quantum criticality and dark matter of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, September 04, 2018

Galois groups and genes

The question about possible variations of Geff (see this) led again to the old observation that sub-groups of the Galois group could be analogous to conserved genes in that they could be conserved in number theoretic evolution. Small variations, in which the Galois sub-groups serving as analogs of genes are preserved, would change G only a little bit: for instance, the dimension of a Galois subgroup could change slightly. There would also be big variations of G in which a new sub-group emerges.

The analogy between subgroups of Galois groups and genes goes also in the other direction. I proposed a long time ago that genes (or maybe even DNA codons) could be labelled by heff/h = n. This would mean that genes (or even codons) are labelled by the Galois group of a Galois extension (see this) of rationals with dimension n defining the number of sheets of the space-time surface as a covering space. This could give a concrete dynamical and geometric meaning to the notion of gene, and it might be possible some day to understand why a given gene correlates with a particular function. This is of course one of the big problems of biology.

One should have some kind of procedure giving rise to hierarchies of Galois groups assignable to genes. One would also like to assign to letters, codons and genes an extension of rationals and its Galois group. The natural starting point would be a sequence of so-called intermediate Galois extensions EH leading from rationals or some extension K of rationals to the final extension E. A Galois extension has the property that if a polynomial with coefficients in K has a single root in E, also the other roots are in E, meaning that the polynomial with coefficients in K factorizes into a product of linear polynomials. For Galois extensions the defining polynomials are irreducible so that they do not reduce to a product of polynomials.

Any subgroup H⊂ Gal(E/K) leaves the intermediate extension EH invariant element-wise as a sub-field of E (see this). Any subgroup H⊂ Gal(E/K) thus defines an intermediate extension EH, and subgroups H1⊂ H2⊂... define a hierarchy of extensions EH1>EH2>EH3... with decreasing dimension. The subgroups H are normal - in other words Gal(E) leaves them invariant and Gal(E)/H is a group. The order |H| is the dimension of E as an extension of EH. This is a highly non-trivial piece of information. The dimension of E factorizes into a product ∏i |Hi| of dimensions for a sequence of groups Hi.

Could a sequence of DNA letters/codons somehow define a sequence of extensions? Could one assign to a given letter/codon a definite group Hi so that a sequence of letters/codons would correspond to a product of some kind of these groups, or should one be satisfied only with the assignment of a standard kind of extension to a letter/codon?

Irreducible polynomials define Galois extensions, and one should understand what happens to an irreducible polynomial of an extension EH in a further extension to E. The degree increases by a factor, which is the dimension of E/EH and also the order of H. Is there a standard manner to construct irreducible extensions of this kind?

  1. What comes into the mathematically uneducated mind of a physicist is the functional composition Pm× n(x)= Pm(Pn(x)) of polynomials assignable to sub-units (letters/codons/genes) with coefficients in K as an algebraic counterpart for the product of sub-units. Pm(Pn(x)) would be a polynomial of degree m× n in K and a polynomial of degree m in EH, and one could assign to a given gene a fixed polynomial obtained as an iterated function composition. Intuitively it seems clear that in the generic case Pm(Pn(x)) does not decompose into a product of lower order polynomials. One could use also polynomials assignable to codons or letters as basic units. Also polynomials of genes could be fused in the same manner.

  2. If this indeed gives a Galois extension, the dimension m of the intermediate extension should be the same as the order of its Galois group. Composition would be non-commutative but associative, as the physical picture demands. The longer the gene, the higher the algebraic complexity would be. Could functional composition define the rule for how extensions and Galois groups correspond to genes? Very naively, functional composition in the mathematical sense would correspond to composition of functions in the biological sense.

  3. This picture would conform with M8-M4× CP2 correspondence (see this) in which the construction of space-time surface at level of M8 reduces to the construction of zero loci of polynomials of octonions, with rational coefficients. DNA letters, codons, and genes would correspond to polynomials of this kind.
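A minimal sympy sketch of the proposed fusion by functional composition; the specific degree-2 and degree-3 "letter" polynomials are illustrative placeholders, not taken from TGD. Note that composition multiplies degrees, deg(Pm∘Pn)= m× n:

```python
from sympy import symbols, compose, degree, Poly

x = symbols('x')

# Hypothetical "letter" polynomials with rational coefficients
# (placeholders chosen only for illustration).
f = x**2 + x + 1   # degree 2
g = x**3 + 2       # degree 3

# Functional composition as the proposed fusion of sub-units.
h = compose(f, g, x)   # (x**3 + 2)**2 + (x**3 + 2) + 1 = x**6 + 5*x**3 + 7

# Composition multiplies degrees: deg(f o g) = deg(f)*deg(g) = 6.
assert degree(h, x) == degree(f, x) * degree(g, x) == 6

# In this instance the composite stays irreducible over Q, matching the
# generic-case intuition stated above.
print(Poly(h, x).is_irreducible)   # → True
```

Associativity of composition, demanded by the physical picture, holds automatically: compose(f, compose(g, g2)) equals compose(compose(f, g), g2).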

Could one say anything about the Galois groups of DNA letters?
  1. Since n=heff/h serves as a kind of quantum IQ, and since molecular structures consisting of a large number of particles are very complex, one could argue that n for DNA or its dark variant realized as dark proton sequences can be rather large and depend on the evolutionary level of the organism and even the type of cell (neuron vs. soma cell). On the other hand, one could argue that in some sense DNA, which is often thought of as an information processor, could be analogous to an integrable quantum field theory and be solvable in some sense. Notice also that one can start from a background defined by a given extension K of rationals and consider polynomials with coefficients in K. Under some conditions the situation could be like that for rationals.

  2. The simplest guess would be that the 4 DNA letters correspond to the 4 non-trivial finite groups with the smallest possible orders: the cyclic groups Z2, Z3 with orders 2 and 3 plus the 2 finite groups of order 4 (see the table of finite groups in this). The groups of order 4 are the cyclic group Z4 and the Klein four-group Z2× Z2, which acts as the symmetry group of a rectangle that is not a square and whose elements all have square equal to the unit element. All these 4 groups are Abelian.

  3. On the other hand, polynomial equations of degree not larger than 4 can be solved exactly in the sense that one can write their roots in terms of radicals. Could there exist some kind of connection between the number 4 of DNA letters and the 4 polynomial degrees less than 5 for whose roots one can write closed expressions in terms of radicals, as Galois found? Could the polynomials obtained by a repeated functional composition of the polynomials of DNA letters also have this solvability property?

    This could be the case! Galois theory states that the roots of a polynomial are solvable in terms of radicals if and only if the Galois group is solvable, meaning that it can be constructed from Abelian groups using Abelian extensions (see this).

    Solvability translates to the statement that the group allows a so-called sub-normal series 1=G0<G1 ...<Gk=G such that Gj-1 is a normal subgroup of Gj and Gj/Gj-1 is an Abelian group: it is essential that the series extends to G. An equivalent condition is that the derived series G→ G(1) → G(2) → ...→ 1, in which the (j+1):th group is the commutator group of the j:th, ends in the trivial group.

    If one constructs the iterated polynomials by using only the 4 polynomials with Abelian Galois groups, the intuition of physicist suggests that the solvability condition is guaranteed!

  4. The Wikipedia article also informs us that a finite solvable group is a group whose composition series has only factors which are cyclic groups of prime order. Abelian groups are trivially solvable, nilpotent groups are solvable, and p-groups (having order a power of a prime) are solvable; indeed, all finite p-groups are nilpotent. This might relate to the importance of primes and their powers in TGD.

    Every group with fewer than 60 elements is solvable. Fourth order polynomials can have at most S4 with 24 elements as Galois group and are thus solvable. A fifth order polynomial can have the smallest non-solvable group, the alternating group A5 with 60 elements, as its Galois group, and in this case the polynomial is not solvable by radicals. Sn is not solvable for n>4, and Sn as Galois group is favored by its special properties (see this). It would seem that solvable polynomials are exceptions.

    A5 acts as the group of orientation preserving isometries (rotations) of the icosahedron. The icosahedron and a tetrahedron glued to it along one triangular face play a key role in the TGD inspired model of bio-harmony and of the genetic code (see this and this). The gluing of the tetrahedron increases the number of codons from 60 to 64. The gluing of the tetrahedron to the icosahedron also reduces the isometry group to the rotations leaving the common face fixed and makes it solvable: could this explain why the ugly looking gluing of a tetrahedron to the icosahedron is needed? Could the smallest solvable groups and the smallest non-solvable group be crucial for understanding the number theory of the genetic code?
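The group-theoretic claims in the list above are easy to verify directly with sympy's permutation groups; a minimal check (the concrete permutation realization of the Klein four-group is chosen here only for illustration):

```python
from sympy.combinatorics import Permutation, PermutationGroup
from sympy.combinatorics.named_groups import (
    CyclicGroup, SymmetricGroup, AlternatingGroup)

# The two groups of order 4: cyclic Z4 and the Klein four-group Z2 x Z2,
# the latter realized as double transpositions.
Z4 = CyclicGroup(4)
klein = PermutationGroup(Permutation(0, 1)(2, 3), Permutation(0, 2)(1, 3))

assert Z4.order() == klein.order() == 4
assert Z4.is_abelian and klein.is_abelian
# Not isomorphic: Z4 has an element of order 4, all Klein elements square to 1.
assert any(p.order() == 4 for p in Z4.elements)
assert all(p.order() <= 2 for p in klein.elements)

# Solvability: S4 (largest Galois group of a quartic) is solvable,
# while A5 with 60 elements is the smallest non-solvable group.
assert SymmetricGroup(4).is_solvable
assert not AlternatingGroup(5).is_solvable
assert not SymmetricGroup(5).is_solvable
```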

An interesting question inspired by M8-H-duality (see this) is whether solvability could be posed on octonionic polynomials as a condition guaranteeing that TGD is an integrable theory in the number theoretical sense, or whether it perhaps follows from the conditions posed on the octonionic polynomials. Space-time surfaces in M8 would correspond to zero loci of real/imaginary parts (in the quaternionic sense) of octonionic polynomials obtained from rational polynomials by analytic continuation. Could solvability relate to the condition guaranteeing M8-H duality, which boils down to the condition that the tangent spaces of the space-time surface are labelled by points of CP2? This requires that the tangent or normal space is associative (quaternionic) and that it contains a fixed complex sub-space of octonions or, perhaps more generally, that there exists an integrable distribution of complex subspaces of octonions defining an analog of string world sheet.

See the article Is the hierarchy of Planck constants behind the reported variation of Newton's constant? or the new chapter About the Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are identical giving G= R2/ℏeff of "Physics in many-sheeted space-time". See also the chapter Does M8-H duality reduce classical TGD to octonionic algebraic geometry?.


Sunday, September 02, 2018

Is the hierarchy of Planck constants behind the reported variation of Newton's constant?

It has been known for a long time that measurements of G give differing results, with differences between measurements larger than the measurement accuracy (see this and this). This suggests that there might be some new physics involved. In the TGD framework the hierarchy of Planck constants heff=nh0, h=6h0, together with the condition that the theory contains the CP2 size scale R as its only fundamental length scale, suggests the possibility that Newton's constant is given by G= R2/ℏeff, where R replaces the Planck length (lP= (ℏ G)1/2 → lP=R) and ℏeff/h is in the range 106-107.
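As a numeric sketch of this relation: in units with c=1 one has G= lP2/ℏ= R2/ℏeff, so ℏeff/ℏ= (R/lP)2. The value of R/lP used below is an assumed illustrative figure, chosen only so that the ratio lands in the quoted range:

```python
# Sketch of G = lP^2/hbar = R^2/hbar_eff  =>  hbar_eff/hbar = (R/lP)^2.
# R_over_lP is an assumed illustrative value, not fixed by the text.
R_over_lP = 3.0e3            # assumed CP2 length / Planck length ratio
heff_over_h = R_over_lP**2   # follows from the relation above

print(f"heff/h ≈ {heff_over_h:.1e}")   # → heff/h ≈ 9.0e+06
assert 1e6 <= heff_over_h <= 1e7       # the range quoted in the text
```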

The spectrum of Newton's constant is consistent with Newton's equations if the scaling of ℏeff inducing a scaling of G is accompanied by an opposite scaling of the M4 coordinates in M4× CP2: the dark matter hierarchy would correspond to a discrete hierarchy of scales given by the breaking of scale invariance. In the special case heff=hgr=GMm/v0 the quantum critical dynamics has the gravitational fine structure constant (v0/c)/4π as coupling constant, and this has no dependence on the value of G or on the masses M and m.

In this article I consider a possible interpretation for the finding of a Chinese research group measuring two different values of G differing by 47 ppm in terms of varying heff. I also discuss a model for the fountain effect of superfluidity as a de-localization of the wave function, with the maximal height of the vertical orbit increasing because superfluidity changes heff and thereby the gravitational acceleration g at the surface of Earth. The Podkletnov effect is also considered. TGD inspired theory of consciousness allows one to speculate about levitation experiences as possibly induced by the modification of Geff at the flux tubes of some part of the magnetic body accompanying the biological body in TGD based quantum biology.

See the article Is the hierarchy of Planck constants behind the reported variation of Newton's constant? or the new chapter About the Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are identical giving G= R2/ℏeff of "Physics in many-sheeted space-time".


Wednesday, August 29, 2018

Dark valence electrons, dark photons, bio-photons, and carcinogens

The possible role of bio-photons in living matter is becoming gradually accepted by biologists and neuroscientists. It seems that the intensity of bio-photon emission increases in sick organisms, and bio-photons are used as a diagnostic tool. Fritz Popp (see this) started his work with bio-photons with some observations about the interaction of UV light with carcinogens (see this). Veljkovic has also published results suggesting correlations between carcinogenicity and the absorption spectrum of photons in the UV (ultraviolet).

I have proposed that bio-photons emerge as ordinary photons from what I call dark photons, which differ from ordinary photons in that they have non-standard value heff= nh0 of Planck constant. Also other particles - electrons, protons, ions,..., can be dark in this sense.

One of the mysteries of biology, which mere biochemistry cannot explain, is that living systems behave coherently in macroscopic scales. The TGD explanation for this is that dark particles forming Bose-Einstein condensates (BECs) and super-conducting phases at the magnetic flux tubes of what I call the magnetic body possess macroscopic quantum coherence due to the large value of heff. This quantum coherence would force the coherent behavior of living matter. I have already earlier developed rather concrete models for bio-photons on the basis of this assumption.

In the sequel I will discuss bio-photons from a new perspective by starting from bio-photon emission as a signature of a morbid condition of the organism. The hypothesis is that in a sick organism dark photons tend to transform to bio-photons in the absence of the metabolic feed increasing the value of heff. Hence BECs of dark photons and also of other dark particles decay, and this leads to a loss of quantum coherence.

A further hypothesis is that at least a considerable part of bio-photons emerge in the transformations of dark photons emitted in the transitions of the lonely dark valence electron of any atom able to have one. Since the dark electron has a scaled up orbital radius, it sees the rest of the atom as a unit charge, and its spectrum is in good approximation the hydrogen spectrum. Therefore the corresponding part of the spectrum of bio-photons would be universal, in accordance with quantum criticality.
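A small numeric sketch of the resulting hydrogen-like spectrum, under the 1/heff2 scaling of binding energies assumed in the article; the function name is of course hypothetical:

```python
RYDBERG_EV = 13.606  # hydrogen ground-state ionization energy in eV

def dark_level_energy(n_eff, n=1):
    """Hydrogen-like binding energy of a lonely dark valence electron.

    Sketch under the TGD assumption that binding energies scale as
    1/(heff/h)^2; n_eff = heff/h, n is the principal quantum number.
    """
    return RYDBERG_EV / (n_eff**2 * n**2)

# Ordinary hydrogen ground state is recovered for n_eff = 1:
print(dark_level_energy(1))   # → 13.606
# For n_eff = 2 the ionization energy drops to ~3.40 eV (visible/UV range):
print(round(dark_level_energy(2), 3))   # → 3.402
```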

This picture allows one to develop some ideas about the quantum mechanisms behind cancer in the TGD framework.

Some basic notions related to carcinogens

Before continuing it is good to clarify some basic notions. Toxins are poisonous substances created in metabolism. Carcinogens (see this) are substances causing cancer; they often cause damage to DNA and induce mutations (mutagenicity).

Free radicals (see this) provide a basic example of carcinogens. They have one un-paired valence electron and are therefore very reactive. The un-paired electron has a strong tendency to pair with another electron and steals it from some molecule. The molecule providing the electron is said to oxidize, and the free radical acts as an oxidant. The outcome is a reaction cascade in which the carcinogen receives an electron but the electron donor becomes highly reactive. Anti-oxidants stop the reaction cascade by getting oxidized to rather stable molecules (this and this).

Benzo[a]pyrene (BAP) C20H12 (see this) is one example of a carcinogen. It contains several aromatic rings, is formed as a product of incomplete combustion, and reacts with powerful oxidizers. As such BAP is not a free radical, but its derivatives BAP+/- obtained by one-electron reduction or oxidation are (see this).

There are also carcinogens such as benzene, which as such is not dangerous. What happens is that a single oxygen atom binds to the carbon at the end of benzene's double bond, forming a so-called epoxy bond. This molecule penetrates the DNA chain and causes damage. Perhaps the fact that DNA nucleotides also contain aromatic 6-rings relates to this.

The emission of bio-photons (see this) increases if carcinogens such as oxidants are present. The idea is that bio-photons could be relevant for understanding the problem. It has been proposed that bio-photons could be created when anti-oxidants interact with molecules generating triplet states (spin 1), which decay by photon emission. The photons generated in this manner would have a discrete spectrum, whereas bio-photons seem to have a continuous and rather featureless spectrum. Therefore this model must be taken with caution.

It could be that the origin of bio-photons is not chemical. If so, carcinogens would not produce bio-photons in ordinary atomic or molecular transitions. They could however induce the generation of bio-photons indirectly. The understanding of bio-photons might help to understand the mechanisms behind carcinogenic activity. I have discussed bio-photons from the TGD point of view earlier.

Some basic notions of TGD inspired quantum biology

In the sequel I try to develop a necessarily speculative picture about carcinogen action on the basis of TGD based quantum biology. The goal is to develop the general theory by developing a concrete model for a problem.

Magnetic flux tubes and the field body/magnetic body are basic notions of TGD implied by its modification of Maxwellian electrodynamics. Actually a profound generalization of the space-time concept is in question. Magnetic flux tubes are in a well-defined sense the building bricks of space-time - topological field quanta - and lead to the notion of the field body/magnetic body as a magnetic field identity assignable to any physical system: in Maxwell's theory and ordinary field theory the fields of different systems superpose, and one cannot say of the magnetic field in a given region of space-time that it belongs to some particular system. In TGD only the effects on a test particle of the induced fields associated with different space-time sheets with overlapping M4 projections sum.

The hierarchy of Planck constants heff=n× h0, where h0 is the minimum value of the Planck constant, is the second key notion. h0 need not correspond to the ordinary Planck constant h, and both the observations of Randell Mills and the model for color vision suggest that one has h=6h0. The hierarchy of Planck constants labels a hierarchy of phases of ordinary matter behaving as dark matter.

Magnetic flux tubes would connect molecules, cells, and even larger units, which would serve as nodes in (tensor) networks. Flux tubes would also serve as correlates for quantum entanglement and replace the wormholes of the ER-EPR correspondence proposed by Juan Maldacena and Leonard Susskind in 2013 (see this and this). In biology and neuroscience these networks would be in a central role. For instance, in the brain neuron nets would be associated with them and would serve as correlates for mental images. The dynamics of mental images would correspond to that of the flux tube networks.

The proposed model briefly

In the sequel the basic hypothesis will be that dark photons emerging from the transitions of dark valence electrons of any atom possessing a lonely unpaired valence electron could give rise to part of the bio-photons when they decay to ordinary photons. The hypothesis is developed by considering a TGD based model for a finding, which served as a starting point of the work of Popp (see this): the irradiation of carcinogens with light at a wavelength of 380 nm generates radiation with a wavelength of 218 nm, so that the energy of the photon increases in the interaction. Also the findings of Veljkovic about the absorption spectrum of carcinogens have considerably helped in the development of the model.
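The energy gain in Popp's observation can be made explicit with the standard photon energy formula E = hc/λ:

```python
HC_EV_NM = 1239.84  # hc in eV*nm

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for a wavelength given in nm."""
    return HC_EV_NM / wavelength_nm

e_in = photon_energy_ev(380.0)   # irradiating photon
e_out = photon_energy_ev(218.0)  # emitted photon

# The emitted photon carries ~2.4 eV more energy than the absorbed one,
# which is what the proposed dark-transition mechanism must supply.
print(f"{e_in:.2f} eV -> {e_out:.2f} eV, gain {e_out - e_in:.2f} eV")
# → 3.26 eV -> 5.69 eV, gain 2.42 eV
```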

The outcome is a proposal for dark transitions explaining the findings of Popp and Veljkovic. The spectrum of dark photons also suggests a possible identification of the metabolic energy quantum of 0.5 eV and of the Coulomb energy assignable to the cell membrane potential. The possible contribution to the spectrum of bio-photons is considered, and it is found that the spectrum differs from a smooth spectrum, since the ionization energies for dark valence electrons, depending on the value of heff as 1/heff2, serve as accumulation points for the spectral lines. Also the possible connections with the TGD based models of color vision and of music harmony are briefly discussed.

See the article Dark valence electrons, dark photons, bio-photons, and carcinogens or the chapter of "TGD based view about consciousness, living matter, and remote mental interactions" with the same title.
