https://matpitka.blogspot.com/2018/09/

Saturday, September 29, 2018

Are toric Hamiltonian cycles consistent with genetic code?

The icosahedral model of bio-harmony has an ugly feature: one must add the tetrahedral harmony to obtain 64 rather than only 60 codons and a correspondence with the genetic code. This led me to ask whether one could modify the icosahedral harmony by replacing one of the three icosahedral harmonies of bio-harmony plus the tetrahedral harmony with a toric harmony having 12 vertices and 24 (rather than 20) triangular faces, and therefore 64 chords corresponding to the 64 genetic codons.

It seems that this is possible. By the following argument the toric harmonies have 12 DNA doublets, each coding for a single amino acid. In the icosahedral model one has 10 doublets. This corresponds to the almost exact U ↔ C and A ↔ G symmetries of the genetic code. In the following I give the argument in detail.

1. Some basic notions

First some basic notions are in order. A graph is said to be equivelar if it is a triangulation of a surface such that 6 edges emanate from each vertex and each face has 3 vertices and 3 edges (see this). Equivelarity is equivalent to the following conditions:

  1. Every vertex is 6-valent.

  2. The edge graph is 6-connected.

  3. The graph has vertex transitive automorphism group.

  4. The graph can be obtained as a quotient of the universal covering tesselation (3,6) by a sublattice (a subgroup of the translation group). 6-connectedness means that at least 6 vertices must be removed to decompose the tesselation into two disconnected pieces.

  5. The edge graph is n-connected if the elimination of k<n vertices leaves it connected. It is known that every 5-connected triangulation of the torus is Hamiltonian (see this). Therefore also the 6-connected (6,3) tesselation with (p,q)=(2,2) has Hamiltonian cycles.

  6. The Hamiltonian cycles for the dual tesselation are not in any sense duals of those for the tesselation. For instance, the dodecahedron has a unique Hamiltonian cycle whereas the icosahedron has a large number of them. Also in the case of (6,3) tesselations the duals have different Hamiltonian cycles. In fact, the problem of finding Hamiltonian cycles is NP-complete.

2. What can one say about the number of Hamiltonian cycles?

Can one say anything about the number of Hamiltonian cycles?

  1. For the dodecahedron only 3 edges emanate from a given vertex and there is only one Hamiltonian cycle. For the icosahedron 5 edges emanate from a given vertex and the number of cycles is rather large. Hence the valence, and the closely related notion of n-connectedness, are essential for the existence of Hamiltonian cycles. For instance, for a graph consisting of two graphs joined by a single edge no Hamiltonian cycle exists. For toric triangulations as many as 6 edges emanate from a given vertex, and this favors the formation of a large number of Hamiltonian cycles.

  2. Curves on the torus are labelled by winding numbers (M,N) telling the homology equivalence class of the curve: M and N can be any integers. The curve winds M (N) times around the circle defining the first (second) homology generator. Also Hamiltonian cycles are characterized by their homology equivalence class, that is by a pair (M,N) of integers. Since there are only V=12 points, the possible values of (M,N) are finite in number. Periodic boundary conditions mean that translations by multiples of 2e1+2e2 do not affect the tesselation (one can see what this means geometrically from the illustration at this). Does this mean that (M,N) belongs to Z2× Z2, so that one would have 4 homologically non-equivalent classes of paths?

    Are all four homology classes realized as Hamiltonian cycles? Does a given homology class contain several representatives or only a single one, in which case one would have only 4 non-equivalent Hamiltonian cycles?

3. It is possible to find the Hamiltonian cycles

It turned out that there exist programs implementing algorithms for finding out whether a given graph (much more general than a tesselation) has Hamiltonian cycles. After I told Jebin Larosh about the problem, he sent within five minutes a link to a Java algorithm allowing one to show whether a given graph is Hamiltonian (see this): sincere thanks to Jebin! With a suitable modification this algorithm finds all Hamiltonian cycles.
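A minimal sketch of such a search in Python: a backtracking count of Hamiltonian cycles on a 12-vertex triangular torus lattice. The 3×4 periodic lattice below has (V,E,F)=(12,36,24) as quoted in the text, but whether it is isomorphic to the (p,q)=(2,2) tesselation of the cited article is an assumption, so the printed count need not equal the number quoted below.

    # Backtracking count of Hamiltonian cycles on a triangular torus lattice.
    # Assumption: the 3x4 periodic lattice has (V,E,F)=(12,36,24) like the
    # tesselation discussed in the text; isomorphism to it is not verified.
    from itertools import product

    M, N = 3, 4
    verts = list(product(range(M), range(N)))
    idx = {v: i for i, v in enumerate(verts)}
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]  # 6-valent
    adj = {i: set() for i in range(M * N)}
    for (x, y) in verts:
        for dx, dy in steps:
            adj[idx[(x, y)]].add(idx[((x + dx) % M, (y + dy) % N)])

    def count_hamiltonian_cycles():
        start, V, count = 0, M * N, 0
        path, used = [start], {start}
        def dfs(v):
            nonlocal count
            if len(path) == V:
                if start in adj[v]:   # path closes into a cycle
                    count += 1
                return
            for w in adj[v]:
                if w not in used:
                    used.add(w); path.append(w)
                    dfs(w)
                    path.pop(); used.remove(w)
        dfs(start)
        return count // 2  # each cycle is traversed in both directions

    print(count_hamiltonian_cycles())  # exhaustive search, 12 vertices only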

  1. The number NH of Hamiltonian cycles is expected to be rather large for a torus triangulation with 12 vertices and 24 triangles, and it is indeed so: NH=27816! Cycles related by the isometries of the torus tesselation are however equivalent. The guess is that the group of isometries is G= Z2,refl⋌ (Z4,tr⋌ Zn,rot). Zn,rot is a subgroup of the local Z6,rot. A priori n∈{2,3,6} is allowed.

    On the basis of the article about toric tesselations (see this) I have understood that one has n=3, but that one can express the local action of Z6,rot as the action of the semidirect product Z2,refl× Z3,rot at a point of the tesselation. The identity of the global actions Z2,refl× Z3,rot and Z6,rot does not look feasible to me. Therefore G= Z2,refl⋌ (Z4,tr⋌ Z3,rot) with order ord(G)=24 will be assumed in the following (note that for the icosahedral tesselation one has ord(G)=120, so that there is symmetry breaking).

    Z4,tr would have as generators the translations e1 and e2 defining the conformal equivalence class of the torus. The multiples of 2(e1+e2) would leave the tesselation invariant. If these arguments are correct, the number of isometry equivalence classes of cycles would satisfy NH,I ≥ NH/24 = 1159.

  2. The actual number is obtained as a sum over cycles characterized by the subgroups H⊂ G leaving the cycle invariant, and one can write NH,I = ∑H (ord(H)/ord(G)) N0(H), where N0(H) is the number of cycles invariant under H.

4. What can one say about the symmetry group H for the cycle?

The simple arguments below suggest that the symmetry group of a Hamiltonian cycle is either trivial or the reflection group Z2,refl.

  1. Suppose that the isometry group G leaving the tesselation invariant decomposes into the semi-direct product G= Z2,refl⋌ (Z4,tr⋌ Z3,rot), where Z3,rot leaves invariant the starting point of the cycle. The group H decomposes into a semi-direct product H=Z2,refl ⋌ (Zm,tr× Zn,rot) as a subgroup of G=Z2,refl ⋌ (Z4,tr× Z3,rot).

  2. The Zn,rot associated with the starting point of the cycle must leave the cycle invariant. Applied to the starting point, the action of H, if non-trivial - that is Z3,rot - must transform the outgoing edge to the incoming edge. This is not possible since Z3 has no elements of order two, so that one can have only n=1. This gives H=Z2,refl ⋌ Zm,tr, where m=1, 2 and 4 are possible.

  3. Should one require that the action of H leaves invariant the starting point defining the scale associated with the harmony? If this is the case, then only the group H=Z2,refl would remain, and invariance under Z2,refl would mean invariance under reflection with respect to the axis defined by e1 or e2. The orbit of a triangle under Z2,refl would always consist of 2 triangles, and one would obtain 12 codon doublets instead of the 10 obtained in the case of the icosahedral code.

    If this argument is correct, the possible symmetry groups H would be the trivial group Z1 and Z2,refl. For the icosahedral code both Zrot and Z2,refl occur, but Z2,refl does not occur as a non-trivial factor of H in that case.

    The almost exact U ↔ C and A ↔ G symmetry of the genetic code would naturally correspond to the Z2,refl symmetry. Therefore the predictions need not change from those of the icosahedral model, except that the 4 additional codons emerge more naturally. The predictions would also be essentially unique.

  4. If H is the trivial group Z1, the cycle has no symmetries, the orbits of triangles contain only one triangle, and the correspondence between DNA codons and amino acids would be one-to-one. One would speak of disharmony. Icosahedral Hamiltonian cycles can also be of this kind. If they were realized in the genetic code, the almost exact U ↔ C and A ↔ G symmetry would be lost and the degeneracies of codons assignable to the 20+20 icosahedral codons would increase by one unit, so that one would obtain for instance degeneracy 7 instead of 6, which is not realized in Nature.

5. What can one say about the character of toric harmonies?

What can one say about the character of toric harmonies on the basis of this picture?

  1. It has already been found that the proposal involving three disjoint quartets of subsequent notes can reproduce the basic chords of the basic major and minor harmonies. The challenge is to prove that it can be assigned to some Hamiltonian cycle(s).
    The proposal is that the quartets are obtained from each other by Z3,rot symmetry and that the notes of each quartet are related by Z4,tr symmetry.

  2. A key observation is that classical harmonies involve chords containing exactly 1 quint - not 2 quints, nor no quint at all. The number of chords in the toric harmony is 24 = 2× 12, twice the number of notes. The number of intervals (edges) in turn is 36, 3 times the number of notes. Since the Hamiltonian cycle has 12 edges and each edge is shared by two of the 24 triangles, a situation is possible in which each triangle contains exactly one edge of the Hamiltonian cycle, so that all 3-chords indeed have exactly one quint.

  3. By the above argument the harmony possesses Z2 symmetry or no symmetry at all, and one has 12 codon doublets. For these harmonies each edge of the cycle is shared by two neighboring triangles containing the same quint. A possible identification is as major and minor chords with the same quint. Changing the direction of the scale and reflecting with respect to the edges of the Hamiltonian cycle would transform the major and minor chords along it to each other and change the mood from glad to sad and vice versa.

    The proposed harmony indeed contains classical chords with one quint per chord, and for F, A, and C# both minor and major chords are possible. There are 4 transposes of this harmony.

  4. Also Hamiltonian cycles for which n triangles contain two edges of the Hamiltonian path (CGD type chords) and n triangles contain no edges at all are possible. This situation is less symmetric and could correspond to a situation without any symmetry at all.

  5. One can ask whether the classical harmonies correspond to the 24 codons assignable to the toric harmony and to 24 amino acids, being thus realizable using amino acids only. If so, the two icosahedral harmonies would represent a kind of non-classical exotics.

See the article Geometric theory of harmony or the article New results in the model of bio-harmony.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, September 25, 2018

Can one imagine a modification of bio-harmony?

I have developed a rather detailed model of bio-harmony as a fusion of 3 icosahedral harmonies and the tetrahedral harmony (see this and this). The icosahedral harmonies are defined by Hamiltonian cycles at the icosahedron going through every vertex and therefore assigning to each triangular face an allowed 3-chord of the harmony. The surprising outcome is that the model can reproduce the genetic code.

The model for how the 12-note scale can represent the 64 genetic codons has the basic property that each note belongs to 16 chords. The reason is that there are 3 disjoint sets of 4 notes, and a given 3-chord is obtained by taking 1 note from each set. For the bio-harmony obtained as a union of 3 icosahedral harmonies and the tetrahedral harmony a note typically belongs to 15 chords. The representation in terms of frequencies requires 16 chords per note.
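This counting is easy to check explicitly; a minimal sketch in Python (the labelling of notes by integers 0-11 is just for bookkeeping):

    # With 12 notes split into 3 disjoint quartets and a chord formed by
    # picking one note from each quartet, there are 4*4*4 = 64 chords and
    # each note belongs to 4*4 = 16 of them.
    from itertools import product

    groups = [range(0, 4), range(4, 8), range(8, 12)]
    chords = list(product(*groups))
    assert len(chords) == 64
    assert all(sum(n in c for c in chords) == 16 for n in range(12))
    print("64 chords, 16 per note")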

If one wants consistency, one must somehow modify the model of icosahedral harmony. The necessity to introduce the tetrahedron for one of the 3 fused harmonies is indeed an ugly-looking feature of the model. The question is whether one of the harmonies could be replaced with some other harmony with 12 notes and 24 chords. If this worked, one would have 64 chords - equal to the number of genetic codons - and 5+5+6 = 16 chords per note. The addition of the tetrahedron would not be needed.

One can imagine toric variants of the icosahedral harmonies realized in terms of Hamiltonian cycles, and one indeed obtains a toric harmony with 12 notes and 24 3-chords. Bio-harmony could correspond to the fusion of 2 icosahedral harmonies with 20 chords each and a toric harmony with 24 chords, having therefore 64 chords. Whether the predictions for the numbers of codons coding for a given amino acid come out correctly for some choices of Hamiltonian cycles is still unclear. This would require an explicit construction of toric Hamiltonian cycles.

1. Previous results

Before discussing the possible role of toric harmonies some previous results will be summarized.

1.1 Icosahedral bio-harmonies

The model of bio-harmony starts from a model of music harmony as a Hamiltonian cycle at the icosahedron, which has 12 vertices identified as the 12 notes and 20 triangular faces defining the allowed chords of the harmony. The identification is determined by a Hamiltonian cycle going once through each vertex of the icosahedron and consisting of edges of the icosahedral tesselation of the sphere (an analog of a lattice): each edge corresponds to a quint, that is a scaling of the frequency of the note by a factor 3/2 (or by a factor 2^(7/12) in the well-tempered scale). This identification assigns to each triangle of the icosahedron a 3-chord. The 20 faces of the icosahedron therefore define the allowed 3-chords of the harmony. There exists quite a large number of icosahedral Hamiltonian cycles and thus harmonies.
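As a side remark, the fact that 12 quints modulo octave visit all 12 notes is what makes a quint cycle through 12 vertices possible at all; a small sketch:

    # Twelve quints (frequency scalings by 3/2) modulo octave equivalence
    # generate 12 distinct notes - the Pythagorean version of the 12-note scale.
    f, notes = 1.0, []
    for _ in range(12):
        notes.append(f)
        f *= 3 / 2
        while f >= 2:      # octave equivalence: fold back into [1, 2)
            f /= 2
    assert len(set(round(x, 9) for x in notes)) == 12
    print(sorted(round(x, 4) for x in notes))
    # The well-tempered quint 2**(7/12) ~ 1.4983 approximates 3/2.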

The fact that the number of chords is 20 - the number of amino acids - leads to the question whether one might somehow understand the genetic code and the 64 DNA codons in this framework. By combining 3 icosahedral harmonies with different symmetry groups identified as subgroups of the icosahedral group, one obtains harmonies with 60 3-chords.

The DNA codons coding for a given amino acid are identified as the triangles (3-chords) at the orbit of the triangle representing the amino acid under the symmetry group of the Hamiltonian cycle. The predictions for the numbers of DNA codons coding for a given amino acid are highly suggestive of the vertebrate genetic code.

By gluing a tetrahedron to the icosahedron along a common face one obtains 4 more codons, and two slightly different codes result. Also the 2 amino acids Pyl and Sec can be understood. One can also regard the tetrahedral 4-chord harmony as an additional harmony, so that one would have a fusion of four harmonies. One can of course criticize the addition of the tetrahedron as a dirty trick to get the genetic code.

The explicit study of the chords of bio-harmony however shows that they do not contain the 3-chords of the standard harmonies familiar from classical music (say major and minor scales and the corresponding chords). Garage Band experimentation with random sequences of chords, requiring only that two subsequent chords have at least one common note, however shows that these harmonies are - at least in my opinion - aesthetically feasible although somewhat boring.

1.2 Explanation for the number 12 of notes of the 12-note scale

One also ends up with an argument explaining the number 12 of notes of the 12-note scale (see this). There is also a second representation of the genetic code provided by dark proton triplets. The dark proton triplets representing dark genetic codons are in one-to-one correspondence with ordinary DNA codons. Also amino acids, RNA and tRNA have analogs as states of 3 dark protons. The number of tRNAs is predicted to be 40.

The dark codons represent entangled states of protons, and one cannot decompose them into a product state. The only manner to assign an ordinary DNA codon to the 3-chord representing the triplet, such that each letter in {A,T,C,G} corresponds to a frequency, is to assume that the frequency depends on the position of the letter in the codon. One has altogether 3× 4=12 frequencies corresponding to the 3 positions for a given letter selected from four letters.

Without additional conditions any decomposition of the 12 notes of the scale into 3 disjoint groups of 4 notes is possible, and the possible chords are obtained by choosing one note from each group. The most symmetric choice assigns to the 4 letters the notes {C, C#, D, D#} in the first position, {E, F, F#, G} in the second position, and {G#, A, Bb, B} in the third position. The codons of type XXX would correspond to CEG# or its transposes. One can transpose this proposal, and there are 4 non-equivalent transposes, which could be seen as analogs of music keys.

Remark: CEG#, hanging between C major and A minor, very often finishes a Finnish tango: something neither sad nor glad!
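A minimal sketch of such a position-dependent letter-to-note assignment; the ordering of the letters A, T, C, G within each quartet is an illustrative assumption, not fixed by the text:

    # One possible letter -> note assignment per codon position (the "most
    # symmetric choice" above); the letter order within each quartet is assumed.
    POS_NOTES = [["C", "C#", "D", "D#"],
                 ["E", "F", "F#", "G"],
                 ["G#", "A", "Bb", "B"]]
    LETTERS = "ATCG"

    def codon_chord(codon):
        return tuple(POS_NOTES[i][LETTERS.index(c)] for i, c in enumerate(codon))

    print(codon_chord("AAA"))  # -> ('C', 'E', 'G#'), the chord of the remark above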

One can look at what kind of chords one obtains.

  1. Chords containing notes associated with the same position in the codon are not possible.

  2. A given note belongs to 6 chords. In the icosahedral harmony with 20 chords a given note belongs to 5 chords (there are 5 triangles containing a given vertex). Therefore the harmony in question cannot be equivalent to the 20-chord icosahedral harmony. Neither can the bio-harmony with 64 chords satisfy the condition that a given note is contained in 6 3-chords.

  3. The first and second notes of a chord are separated by at least a major third, as are the second and third notes. The chords satisfy however octave equivalence, so that the distance between the first and third notes can be smaller - even a half step - and one finds that one can get the basic chords of the A-minor scale: Am, Dm, E7, and also G and F. Also the basic chords of the F-major scale can be represented. Also the transposes of these scales by 2 whole steps can be represented, so that one obtains Am, C#m, Fm and the corresponding major scales. These harmonies could allow the harmonies of classical and popular music.

These observations encourage one to ask whether a representation of the new harmonies as Hamiltonian cycles of some tesselation could exist. The tesselation should be such that 6 triangles meet at a given vertex. A triangular tesselation of the torus, having an interpretation in terms of a planar parallelogram (or perhaps a more general planar region) with the edges at the boundary suitably identified to obtain the torus topology, seems to be the natural option. Clearly this region would correspond to a planar lattice with periodic boundary conditions.

2. Is it possible to have toric harmonies?

The basic question is whether one can have a representation of the new candidate harmonies in terms of a tesselation of the torus having V= 12 vertices and F= 20 triangular faces. Reading the article "Equivelar maps on the torus" (see this) discussing toric tesselations makes clear that this is impossible. One can however have (V,F)= (12,24) (see this). A rather promising realization of the genetic code in terms of bio-harmony would be as a fusion of two icosahedral harmonies and a toric harmony with (V,F)= (12,24). This in principle also allows 24 3-chords which can realize classical harmony (major/minor scales).

  1. The local properties of the tesselations for any topology are characterized by a pair (m,n) of positive integers: m is the number of edges meeting at a given vertex (the valence) and n is the number of edges and vertices of a face. Now one has (m,n)= (6,3). The dual of this tesselation is the hexagonal tesselation (m,n)= (3,6) obtained by taking the centers of the triangles as vertices, so that faces become vertices and vice versa.

  2. The rule V-E+F=2(1-g)-h, where V, E and F are the numbers of vertices, edges, and faces, relates V-E+F to the topology of the graph, which in the recent case is a triangular tesselation. g is the genus of the surface in which the triangulation is imbedded and h is the number of holes in it. In the case of the torus one has E=V+F, giving in the recent case E=36 for (V,F)= (12,24) (see this), whereas in the icosahedral case one has E=30.

  3. This kind of tesselations are obtained by applying periodic boundary conditions to triangular lattices in the plane defining a parallelogram. The intuitive expectation is that these lattices can be labelled by two integers (m,n) characterizing the lengths of the sides of the parallelogram plus the angle θ between the two sides: this angle defines the conformal equivalence class of the torus. One can also introduce two unit vectors e1 and e2 characterizing the conformal equivalence class of the torus.

    A second naive expectation is that m× n× sin(θ) represents the area of the parallelogram. sin(θ) equals the length of the exterior product |e1× e2|=sin(θ), representing twice the area of a triangle, so that there would be 2m× n triangular faces. The division of the planar lattice by the group generated by p e1+q e2 defines the boundary conditions. Besides this, the rotation group Z6 acts as an analog of the symmetries of a unit cell in a lattice. This naive expectation need not of course be strictly correct.

  4. As noticed, it is not possible to have a triangular toric tesselation with (V,E,F)= (12,30,20). The torus however has a triangular tesselation with (V,E,F)=(12,36,24). An illustration of the tesselation can be found here. It allows one to count visually the numbers V, E, F and the identifications of the boundary edges and vertices. With good visual imagination one might even try to guess what the Hamiltonian cycles look like.

    The triangular tesselations and their hexagonal duals are partially characterized by pairs of integers (a,b) and (b,a). a and b must be both even or both odd. The number of faces is F= (a^2+3b^2)/2. For (a,b)= (6,2) one indeed has V=12 and F=24. From the article one learns that the number of triangles satisfies F= 2V at least for a=b. If F= 2V holds true more generally, one would have V= (a^2+3b^2)/4, giving tight constraints on a and b.

    Remark: The conventions for labelling torus tesselations vary. The above convention based on the integers (a,b) is different from the convention based on the integer pair (p,q) used in the article (see this). In the latter notation the torus tesselation with (V,F)=(12,24) corresponds to (p,q)=(2,2) instead of (a,b)= (6,2). This requires (a,b)=(3p,q). In this notation one has V=p^2+q^2+pq.
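A minimal consistency check of these counts (the relations F = 2V and E = V+F for a torus triangulation are taken from the text):

    # Counts for the (p,q) torus triangulation: V = p^2 + p*q + q^2, F = 2V,
    # E = V + F; the Euler characteristic V - E + F of the torus must vanish.
    def torus_counts(p, q):
        V = p * p + p * q + q * q
        F = 2 * V
        E = V + F
        assert V - E + F == 0
        return V, E, F

    print(torus_counts(2, 2))  # -> (12, 36, 24)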

3. The number of triangles in the 12-vertex tesselation is 24: curse or blessing?

One could see it as a problem that one has F=24>20. Or is this a problem at all?

  1. By fusing two icosahedral harmonies and one toric harmony one would obtain a harmony with 20+20+24 =64 chords, the number of DNA codons! One would replace the fusion of 3 icosahedral harmonies and the tetrahedral harmony with a fusion of 2 icosahedral harmonies and a toric harmony. The icosahedral symmetry associated with the third harmony would be replaced with a smaller toric symmetry. Note however that the attachment of the tetrahedron to a fixed icosahedral face also breaks the icosahedral symmetry.

    This raises questions. Could the presence of the toric harmony somehow relate to the almost exact U ↔ C and A ↔ G symmetries of the third letter of the codons? This does not of course mean that one could associate the toric harmony with the third letter. Note that in the icosa-tetrahedral model the three harmonies are assumed to have no common chords. The same non-trivial assumption is needed also now in order to obtain 64 codons.

  2. What about the number of amino acids: could it be 24, corresponding to the 20 ordinary amino acids, the stopping sign, plus 3 additional exotic amino acids? The 20 icosahedral triangles can correspond to amino acids but not to the stopping sign. Could it be that among the additional codons one corresponds to the stopping sign and two to the exotic amino acids Pyl and Sec appearing in biosystems, which the icosahedral model explains in terms of a variant of the genetic code? There indeed exists even a third exotic amino acid, N-formylmethionine (see this), but it is usually regarded as a form of methionine rather than as a separate proteinogenic amino acid.

  3. Recall that the problem with the icosa-tetrahedral harmony is that it does not contain the chords of what might be called classical harmonies (the chords assignable to major and minor scales). If the 24 chords of bio-harmony correspond to the toric harmony, one could obtain these chords if they are obtainable by the proposed construction.

    But is this construction consistent with the representation of the 64 chords obtained by taking for each chord one note from each of the 3 disjoint groups of 4 notes, in which each note belongs to 16 chords? The maximum number of chords that a note can belong to would be 5+5+6=16, as desired. If there are no common chords between the 3 harmonies, the condition is satisfied. Using for instance 3 toric representations the number would be 6+6+6=18 and would require dropping some chords.

  4. The earlier model describes tRNA as a fusion of two icosahedral codes predicting 20+20=40 tRNA codons. Now tRNA as a fusion of two harmonies allows two basic options depending on whether both harmonies are icosahedral or whether the second harmony is toric. These options would give 20+20=40 or 20+24=44 tRNAs. Wikipedia tells that the maximum number is 41. Some sources however tell that there are 20-40 different tRNAs in bacterial cells and as many as 50-100 in plant and animal cells.

See the article Geometric theory of harmony or the article New results in the model of bio-harmony.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, September 17, 2018

Is it possible to determine experimentally whether gravitation is quantal interaction?

Marletto and Vedral have proposed (thanks to Ulla for the link) an interesting method for measuring whether gravitation is a quantal interaction (see this). I tried to understand what the proposal suggests and how it translates to TGD language.

  1. If the gravitational field is quantum, it makes possible entanglement between two states. This is the intuitive idea, but what does it mean in the TGD picture? Feynman interpreted this as entanglement of the gravitational field of an object with the state of the object. If the object is in a state which is a superposition of states localized at two different points xi, the classical gravitational fields φgr are different and one has a superposition of states with different locations:

    | I> = ∑i=1,2 | mi at xi> | φgr,xi> ≡ | L> + | R> .

  2. Put two such de-localized states with masses m1 and m2 at some distance d to get the state | I1> | I2>,
    | Ii> = | L>i + | R>i. The 4 component pairs of the state interact gravitationally, and since the gravitational fields between different components are different, the components develop different phases, and one can obtain an entangled state.

    The gravitational field would entangle the masses. If one integrates over the degrees of freedom associated with the gravitational field, one obtains a density matrix, and the density matrix is not pure if the gravitational field is quantum in the sense that it entangles with the particle position.

    That gravitation is able to entangle the masses would be a proof of the quantum nature of the gravitational field. It is not however easy to detect this. If gravitation only serves as a parameter in the interaction Hamiltonian of the two masses, entanglement can be generated but does not prove that the gravitational interaction is quantal. It is required that the only interaction between the systems is gravitational, so that other interactions do not generate entanglement. Certainly, one should use masses having no em charges.

  3. In the TGD framework the view of Feynman is natural. One has a superposition of space-time surfaces representing this situation. The gravitational field of a particle is associated with the magnetic body of the particle represented as a 4-surface, and the superposition corresponds to a de-localized quantum state in the "world of classical worlds" (WCW), with xi representing particular WCW coordinates.

I am not a specialist in quantum information theory nor a quantum gravity experimentalist, so hereafter I must proceed with fingers crossed and can only hope that I have understood correctly. To my best understanding, the general idea of the experiment is to use an interferometer to detect the phase differences generated by the gravitational interaction and inducing the entanglement - not for photons, but for gravitationally interacting masses m1 and m2 assumed to be in quantum coherent states, describable by wave functions analogous to em fields. It is assumed that the gravitational interaction can be described classically, and this is also the case in TGD by quantum-classical correspondence.
  1. The authors think quantum information theoretically and reduce everything to qubits. The de-localization of a mass to a superposition of two positions corresponds to a qubit analogous to the spin or polarization of a photon.

  2. One must use an analog of an interferometer to measure the phase difference between different values of this "polarization".

    A normal interferometer is a flattened square-like arrangement. Photons in superpositions of different polarization states enter a beam splitter at the lower-left corner of the interferometer, dividing the beam into two beams with different polarizations: horizontal (H) and vertical (V). The vertical (horizontal) beam enters a mirror which reflects it into a horizontal (vertical) beam. One obtains paths V-H and H-V, which meet at a transparent mirror located at the upper-right corner of the interferometer and interfere.

    There is a detector D0 resp. D1 detecting the component of light that passed through the fourth mirror in the vertical resp. horizontal direction. The firing of D1 would select the H-V path and the firing of D0 the V-H path. This would thus tell along which path (V-H or H-V) the photon arrived. The interference, and thus also the detection probabilities, depend on the phases of the beams generated during the travel: this is important.

  3. If I have understood correctly, this picture of the interferometer must be generalized. The photon is replaced by a mass m in a quantum state which is a superposition of two states with "polarizations" corresponding to the two different positions. Beam splitting would mean that the components of the state of mass m localized at positions x1 and x2 travel along different routes. The wave functions must be reflected at the first mirrors on both paths and transmitted through the mirror at the upper-right corner. The detectors Di measure along which path the mass state arrived and localize the mass at one of the two positions. The probabilities for the positions depend on the phase difference generated along the paths. I can only hope that I have understood correctly: in any case the notions of mirror and transparent mirror in principle make sense also for solutions of the Schrödinger equation.

  4. One must however have two interferometers, one for each mass. The masses m1 and m2 interact quantum gravitationally, and the phases generated for different polarization states differ. The phase is generated by the gravitational interaction. The authors estimate that the phases generated along the paths are of the form

    Φi = [Gm1m2/(ℏ di)] Δt .

    Δt = L/v is the time taken to pass through a path of length L with velocity v. d1 is the smaller distance, between the upper path for the lower mass m2 and the lower path for the upper mass m1. d2 is the distance between the upper path for the upper mass m1 and the lower path for m2. See Figure 1 of the article.
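As a rough sanity check, a back-of-the-envelope evaluation of this phase for the mass and distance scales quoted below (m ∼ 10^-12 kg, d ∼ 1 micron); the passage time Δt ∼ 10^-6 s is an illustrative assumption:

    # Order-of-magnitude estimate of Phi_i = G*m1*m2*Delta_t/(hbar*d_i).
    G, hbar = 6.674e-11, 1.055e-34
    m1 = m2 = 1e-12     # kg, the condensed-matter mass scale quoted in the text
    d = 1e-6            # m, separation of order one micron
    dt = 1e-6           # s, assumed passage time
    Phi = G * m1 * m2 * dt / (hbar * d)
    print(f"Phi ~ {Phi:.2f} rad")   # ~0.6 rad: in principle an observable phase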

What does one need for the experiment?
  1. One should have de-localization of massive objects. In atomic scales this is possible. If one has heff/h0 = n larger than its standard value, one could also have a zoomed-up scale of de-localization, and this might be very relevant. The fountain effect of superfluidity pops up in mind.

  2. The gravitational fields created by atomic objects are extremely weak, and this is an obvious problem. Gm1m2 for atomic mass scales is extremely small, since the Planck mass mP is something like 10^19 proton masses and atomic masses are of the order of 10-100 proton masses.

    One should have objects with masses not too far from the Planck mass to make Gm1m2 large enough. The authors suggest using condensed matter objects having masses of order m ∼ 10^-12 kg, which is about 10^15 proton masses and 10^-4 Planck masses. The authors claim that recent technology allows the de-localization of masses of this scale at two points. The distance d between the objects would be of the order of a micron.

  3. For masses larger than the Planck mass one could have difficulties, since the quantum gravitational perturbation series need not converge for Gm1m2 > 1 (say). For the proposed mass scales this would not be a problem.

What can one say about the situation in TGD framework?
  1. In the TGD framework the gravitational Planck constant ℏgr = Gm1m2/v0, assignable to the flux tubes mediating the gravitational interaction between m1 and m2 as macroscopic quantum systems, could enter into the game and could in the extreme case reduce the value of the gravitational fine structure constant from Gm1m2/4π ℏ to Gm1m2/4π ℏeff = β0/4π, β0 = v0/c < 1. This would make the perturbation series convergent even for macroscopic masses behaving like quantal objects. The physically motivated proposal is β0 ∼ 2^-11. This would zoom up the quantum coherence length scales by ℏgr/ℏ.

  2. What can one say in TGD framework about the values of phases Φ?

    1. For ℏ → ℏeff one would have

      Φi = [Gm1m2/(ℏeff di)] Δt .

      For ℏ → ℏeff the phase differences would be reduced for a given Δt. On the other hand, the quantum gravitational coherence time is expected to increase like heff, so that the values of the phase differences would not change if Δt is increased correspondingly. The time of 10^-6 seconds could be scaled up, but this would require increasing the total length L of the interferometer arms and/or slowing down the velocity v.

    2. For ℏeff=ℏgr this would give a universal prediction having no dependence on G or the masses mi:

      Φi = [v0 Δt/di] = [v0/v] [L/di] .

      If the Planck length is actually equal to the CP2 length R ∼ 10^3.5 (GNℏ)^(1/2), one would have GN = R^2/ℏeff, ℏeff ∼ 10^7 ℏ. One can consider both smaller and larger values of G, and for larger values the phase difference would be larger. For this option one would obtain a 1/ℏeff^2 scaling for Φ. Also for this option the prediction for the phase difference is universal for heff=hgr.

    3. What is important is that the universality could be tested by varying the masses mi. This would however require that the masses mi behave gravitationally as coherent quantum systems. It is however possible that the largest quantum systems behaving quantum coherently correspond to much smaller masses.
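A hedged numerical remark on the scales involved (the numbers, taken from above, are illustrative): for the ~10^-12 kg masses of the proposal, ℏgr = Gm1m2/v0 with β0 = 2^-11 actually comes out smaller than ℏ, so the ℏeff = ℏgr option would seem to become relevant only for considerably larger masses.

    # For which masses is hbar_gr = G*m1*m2/v0 larger than hbar?
    c, G, hbar = 3.0e8, 6.674e-11, 1.055e-34
    v0 = c * 2**-11                      # beta_0 = 2^-11, as proposed in the text
    m1 = m2 = 1e-12                      # kg, the mass scale of the proposal
    hbar_gr = G * m1 * m2 / v0
    print(hbar_gr / hbar)                # ~4e-6: below hbar for these masses
    m_crit = (hbar * v0 / G) ** 0.5      # equal mass for which hbar_gr = hbar
    print(f"m_crit ~ {m_crit:.1e} kg")   # ~5e-10 kg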

See the chapter About the Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are identical giving G = R^2/ℏeff of "Physics in many-sheeted space-time" or the article Is the hierarchy of Planck constants behind the reported variation of Newton's constant?.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, September 10, 2018

Two comments about coupling constant evolution

In the following are two comments about coupling constant evolution refining slightly the existing view. The first comment proposes that coupling constant evolution is forced by the convergence of the perturbation series at the level of the "world of classical worlds" (WCW). At the level of the cognitive representations provided by adelic physics based on quantum criticality, coupling constant evolution would reduce to a sequence of quantum phase transitions between extensions of rationals. The second comment is about the evolution of the cosmological constant and its relationship to the vision of cosmic expansion as a thickening of the M4 projections of magnetic flux tubes.

Discrete coupling constant evolution: from quantum criticality or from the convergence of the perturbation series?

  1. heff/h0 = n, identifiable as the dimension of an extension of rationals, has an integer spectrum. This allows the generalization of the formula for Newton's constant to Geff = R^2/ℏeff, with the Planck length lP identified as the much longer CP2 size R, so that TGD involves only a single fundamental length, in accordance with the assumption held for about 35 years before the emergence of the twistor lift of TGD. Therefore Newton's constant varies and is different at different levels of the dark matter hierarchy, identifiable in terms of the hierarchy of extensions of rationals.

  2. As a special case one has ℏeff = ℏgr = GMm/v0. In this case the gravitational coupling becomes GMm/ℏgr = v0 and does not depend on the masses or G at all. In quantum scattering amplitudes a dimensionless parameter (1/4π)v0/c would appear in the role of the gravitational fine structure constant and would be obtained from ℏeff = ℏgr = GMm/v0, consistent with the Equivalence Principle. The miracle would be that Geff would disappear totally from the perturbative expansion in terms of GMm, as one finds by looking at what αgr = GMm/ℏgr is (this cancellation is checked symbolically in the sketch after this list)! This picture would work when GMm is so large that the ordinary perturbative expansion fails to converge. For Mm above the Planck mass squared this is expected to be the case. What happens below this limit is yet unclear (n is an integer).

    Could v0 be a fundamental coupling constant running only mildly? This does not seem to be the case: Nottale's original work proposing ℏgr finds that v0 for the outer planets is by a factor 1/5 smaller than for the inner planets (see this and this).

  3. This picture works also for other interactions (see this). Quite generally, Nature would be theoretician-friendly and induce a phase transition increasing hbar when the coupling strength exceeds the value below which the perturbation series converges, so that convergence is restored. In adelic physics this would mean an increase of algebraic complexity, since heff/h = n is the dimension of the extension of rationals inducing the extensions of the various p-adic number fields and defining the particular level in the adelic hierarchy (see this). The parameters characterizing space-time surfaces as preferred extremals of the action principle would be numbers in this extension of rationals, so that the phase transition would have a well-defined mathematical meaning. In TGD the extensions of rationals would label different quantum critical phases in which coupling constants do not run, so that coupling constant evolution would be discrete as a function of the extension.
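The cancellation invoked in item 2 can be checked symbolically; a minimal sketch using sympy:

    # With hbar_gr = G*M*m/v0, the coupling G*M*m/(4*pi*hbar_gr) reduces to
    # v0/(4*pi): G and the masses cancel completely.
    from sympy import symbols, pi, simplify

    G, M, m, v0 = symbols('G M m v0', positive=True)
    hbar_gr = G * M * m / v0
    alpha_gr = G * M * m / (4 * pi * hbar_gr)
    print(simplify(alpha_gr))   # -> v0/(4*pi), independent of G, M and m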
This vision also allows one to understand the discrete coupling constant evolution replacing the continuous coupling constant evolution of quantum field theories as being forced by the convergence of the perturbation expansion and induced by the evolution defined by the hierarchy of extensions of rationals.
  1. When convergence is lost, a phase transition increasing the algebraic complexity takes place and increases n. Extensions of rationals have also other characteristics than the dimension n.

    For instance, each extension is characterized by its ramified primes, and the proposal is that the favoured p-adic primes assignable to cognition - and also to elementary particles and physics in general - correspond to these so-called ramified primes, analogous to multiple zeros of polynomials. Therefore number theoretic evolution would also give rise to p-adic evolution as an analog of the ordinary coupling constant evolution with length scale.

  2. At quantum criticality the coupling constant evolution is trivial. In the QFT context this would mean that the loops vanish separately, or at least sum up to zero, for the critical values of the coupling constants. This argument however seems to make the argument about the convergence of the coupling constant expansion obsolete, unless one allows only the quantum critical values of coupling constants guaranteeing that quantum TGD is quantum critical. There are strong reasons to believe that the TGD analog of twistor diagrammatics involves only tree diagrams, and there is a strong number theoretic argument for this: an infinite sum of diagrams does not in general give a number in a given extension of rationals. Quantum criticality would be forced by number theory.

  3. Consistency would be achieved if the ordinary continuous coupling constant evolution is obtained as a completion of the discrete coupling constant evolution to a real-number-based continuous evolution. Similar completions should make sense in the p-adic sectors. These perturbation series should converge, and this condition would force the phase transitions: for the critical values of the coupling constant strength the sum over the loop corrections would vanish, and the outcome would be in the extension of rationals and make sense in the extension of any number field induced by the extension of rationals. Quantum criticality would boil down to number theoretical universality. The completions to a continuous evolution are not unique, and this would correspond to a finite measurement resolution, overcome only at the limit of algebraic numbers.

    One can ask whether one should regard this hierarchy as a hierarchy of approximations for space-time surfaces in M8, represented as zero loci for the real or imaginary part (in the quaternionic sense) of octonion analytic functions, obtained by replacing these functions with polynomials of finite degree. The picture based on the notion of WCW would correspond to this limit, and the hierarchy of rational extensions to what cognitive representations can provide.


Evolution of cosmological constant

The goal is to understand the evolution of the cosmological constant number theoretically and to correlate it with the intuitive idea that cosmic expansion corresponds to the thickening of the M4 projections of cosmic strings in discrete phase transitions changing the value of the cosmological constant and other coupling parameters.

First some background is needed.

  1. The action for the twistor lift is the 6-D analog of the Kähler action for 6-D surfaces in the 12-D product of the twistor spaces of M4 and CP2. The twistor space of M4 is the geometric variant of twistor space and simply the product M4× S2. For the allowed extremals of this action the 6-D surfaces dimensionally reduce to twistor bundles over X4 having S2 as fiber. The action for the space-time surface is a sum of the Kähler action and a 4-volume term. The coefficient of the four-volume term has an interpretation in terms of the cosmological constant, and I have considered explicitly its p-adic evolution as a function of the p-adic length scale.

  2. The vision is that the cosmological constant Λ behaves in the average sense as 1/a^2 as a function of the light-cone proper time a assignable to the causal diamond (CD) and serving as an analog of the cosmological time coordinate. One can say that Λ is a function of the scale of the space-time sheet, and a as cosmological time defines this scale. This solves the problem due to the large values of Λ at very early times. The size of Λ is reduced in the p-adic length scale evolution occurring via phase transitions reducing Λ.


  3. p-Adic length scales are given by Lp = kR p^(1/2), where k is a numerical constant and R is the CP2 size - say the radius of a geodesic sphere. An attractive interpretation (see this) is that the real Planck length actually corresponds to R, although the standard Planck length is shorter by a factor of order 10^-3.5. The point is that one identifies Geff = R^2/ℏeff with ℏeff ∼ 10^7 ℏ for GN. Geff would thus depend on heff/h = n, which is essentially the dimension of the extension of rationals, whose hierarchy gives rise to coupling constant evolution. The same applies to the evolution of Λ, which is indeed a coupling-constant-like quantity.

  4. p-Adic length scale evolution predicts a discrete spectrum for Λ ∝ Lp^-2 ∝ 1/p, and the p-adic length scale hypothesis, stating that p-adic primes p ≈ 2^k, k some (not arbitrary) integer, are preferred, would reduce the evolution to phase transitions in which Λ changes by a power of 2. This would replace the continuous cosmic expansion with a sequence of such phase transitions. This would solve the paradox due to the fact that stellar objects participate in the cosmic expansion but do not seem to expand themselves. The objects would expand in discrete jerks, and the so-called Expanding Earth hypothesis would have a TGD variant: in the Cambrian explosion the radius of the Earth would have increased by a factor of 2 (see this and this).

One can gain additional insight about the evolution of Λ from the vision that cosmic evolution to a high extent corresponds to the evolution of magnetic flux tubes, which started from cosmic strings, objects of the form X2× Y2 ⊂ M4× CP2, where X2 is a minimal surface - a string world sheet - and Y2 is a complex sub-manifold of CP2 - a homologically non-trivial or trivial geodesic sphere in the simplest situation. In the homologically non-trivial case there is a monopole flux of the Kähler magnetic field along the string. The M4 projection of a cosmic string is unstable against perturbations and gradually thickens during the cosmic evolution.
  1. The Kähler magnetic energy of the flux tube is proportional to B^2 S L, where B is the Kähler magnetic field, whose flux is quantized and does not change. By flux quantization B itself is roughly proportional to 1/S, S being the area of the M4 projection of the flux tube, which gradually thickens. The Kähler energy is proportional to L/S and thus decreases as S increases, unless L increases. In any case B weakens, and this suggests that the Kähler magnetic energy transforms to ordinary particles or their dark counterparts, part of the particles remaining inside the flux tube as dark particles with heff/h0 = n characterizing the dimension of the extension of rationals.

  2. What happens to the volume energy Evol? One has Evol ∝ Λ L S, which increases as S increases. This cannot make sense, but the p-adic evolution of Λ as Λ ∝ 1/p saves the situation. The primes p possible for a given extension of rationals would correspond to the ramified primes of the extension. Cosmic expansion would take place as phase transitions changing the extension of rationals, and the larger extension should possess larger ramified primes.

  3. The total energy of the flux tube would be of the form E = (a/S + bS)L, corresponding to the Kähler contribution and the volume contribution. Physical intuition tells that also the volume energy decreases during the sequence of phase transitions thickening the string but also increasing its length. The problem is that if bS is essentially constant, the volume energy of string-like objects increases like Lp if string-like objects of cosmic length are allowed. The situation changes if the string-like objects are much shorter.

    To understand whether this is possible, one must consider an alternative but equivalent view about the cosmological constant (see this). The density ρvol of the volume energy has dimension 1/L^4 and can also be parametrized by a p-adic length scale Lp1: one would have ρvol ∝ 1/Lp1^4. The p-adic prime p of the previous parametrization corresponds to a cosmic length scale, and one would have p ∝ p1^2, which for the size scale assignable to the age of the Universe observable to us corresponds roughly to the neutrino Compton length. Galactic strings would however correspond to much longer strings, and TGD indeed predicts a Russian doll fractal hierarchy of cosmologies.

  4. The condition that the value of the volume part of the action for the 4-volume Lp1^4 remains constant under the p-adic evolution gives Λ ∝ 1/Lp1^4. Parameterize the volume energy as Evol = bSL. Assume that the string length L scales as Lp and require that the volume energy of the flux tube scales as 1/Lp (Uncertainty Principle). The parameter b (that is Λ) would scale as 1/(Lp1^2 S). Consistency requires that S scales as Lp1^2. As a consequence, both the volume and the Kähler energy would decrease like 1/Lp1. Both the Kähler and volume energy would transform to ordinary particles and their dark variants, part of which would remain inside the flux tube. The transformations would occur as phase transitions changing p1 and would generate bursts of radiation. The result looks strange at first, but it is due to the fact that Lp1 is much shorter than Lp: for Lp the result would not be possible.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.


Sunday, September 09, 2018

Did LIGO observe non-standard value of G and are galactic blackholes really supermassive?

I have talked (see this) about the possibility that the Planck length lP is actually the CP2 length R, which is scaled up by a factor of order 10^3.5 from the standard Planck length. The basic formula for Newton's constant G would be a generalization of the standard formula to G = R^2/ℏeff. There would be only one fundamental scale in TGD, as the original idea indeed was. ℏeff at the "standard" flux tubes mediating the gravitational interaction (gravitons) would be by a factor of about n ∼ 10^6-10^7 larger than ℏ.
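The arithmetic behind these two numbers is simple; a minimal check:

    # If the CP2 size R is ~10^3.5 times the standard Planck length, then
    # reproducing G_N = l_P^2/hbar from G = R^2/hbar_eff requires
    # hbar_eff/hbar = (R/l_P)^2 ~ 10^7.
    ratio = 10 ** 3.5          # R / l_P, as quoted in the text
    n = ratio ** 2             # required hbar_eff / hbar
    print(f"n ~ {n:.0e}")      # -> 1e+07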

Also other values of heff are possible. The mysterious small variations of G known for a long time could be understood as variations of some factors of n. The fountain effect in superfluidity could correspond to a value of heff/h0 = n at the gravitational flux tubes larger than the standard value, increased by some integer factor. The value of G would be reduced, allowing particles to get to greater heights already classically. In the Podkletnov effect some factor of n would increase and g would be reduced by a few per cent. A larger value of heff would also induce a larger de-localization height.

Also smaller values are possible, and in fact, in condensed matter scales it is quite possible that n is rather small. Gravitation would be stronger but very difficult to detect in these scales. A neutron in the gravitational field of the Earth might provide a possible test. The general rule would be that the smaller the scale of the dark matter dynamics, the larger the value of G; the maximum value would be Gmax = R^2/ℏ0, ℏ = 6ℏ0.

Are the blackholes detected by LIGO really so massive?

LIGO (see this) has hitherto observed 3 fusions of blackholes giving rise to gravitational waves. For the TGD view about the findings of LIGO see this and this. The colliding blackholes were deduced to have unexpectedly large masses: something like 10-40 solar masses, which is regarded as rather strange.

Could it be that the masses were actually of the order of a solar mass and that G was larger by this factor and heff smaller by this factor?! The masses of the colliding blackholes could be of the order of a solar mass, and G would be larger than its normal value - say by a factor in the range [10,50]. If so, the LIGO observations would represent the first evidence for the TGD view about quantum gravitation, which is very different from the superstring based view. The fourth fusion was for neutron stars rather than blackholes, and the stars had masses of the order of a solar mass.

This idea works if the physics of the gravitating system depends only on G(M+m). That the classical dynamics depends on G(M+m) only follows from the Equivalence Principle. But is this true also for gravitational radiation?

  1. If the power of gravitational radiation distinguishes between different values of M+m when G(M+m) is kept constant, the idea is dead. This seems to be the case. A dependence on G(M+m) only would lead to a contradiction at the limit when M+m approaches zero with G(M+m) fixed: the energy emitted per single period of rotation would be larger than M+m. The natural expectation is that the radiated power per cycle and per mass M+m depends on G(M+m) only, as a dimensionless quantity.

  2. From arXiv one can find an article (see this), in which the energy per unit solid angle and frequency radiated in a collision of blackholes is estimated; the outcome is proportional to E^2 G(M+m)^2, where E is the energy of the colliding blackhole.

    The result is proportional to the mass squared measured in units of the Planck mass squared, as one might indeed naively expect, since GM^2 is analogous to the total gravitational charge squared measured using the Planck mass.

    The proportionality to E^2 comes from the condition that the dimensions come out correctly. Therefore scaling G upwards would reduce the masses, and the power of gravitational radiation would be reduced like M+m. The power per unit mass depends on G(M+m) only. Gravitational radiation thus allows one to distinguish between two systems with the same Schwarzschild radius, although classical dynamics does not allow this.

  3. One can express the classical gravitational energy E as a gravitational potential energy proportional to GM/R. This gives a dependence on GM only, as the Equivalence Principle for classical dynamics requires, and for collisions of blackholes R is measured using GM as a natural unit.

Remark: The calculation uses the notion of energy, which in general relativity is precisely defined only for stationary solutions. Radiation spoils the stationarity. The calculation of the radiation power in GRT is to some degree artwork, feeding in the classical conservation laws in a post-Newtonian approximation although they are lost in GRT. In the TGD framework the conservation laws are not lost and hold true at the level of M4×CP2.
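The scaling symmetry invoked above can be made concrete with a toy check (units and numbers are illustrative):

    # Scaling G -> k*G and the masses -> masses/k leaves G*(M+m), and hence
    # the classical orbits, unchanged; if the radiated power per unit mass
    # depends only on G*(M+m), the total radiated power drops like 1/k.
    def rescale(G, M, m, k):
        return k * G, M / k, m / k

    G, M, m, k = 1.0, 30.0, 30.0, 10.0       # illustrative units
    Gk, Mk, mk = rescale(G, M, m, k)
    assert abs(Gk * (Mk + mk) - G * (M + m)) < 1e-12
    print((Mk + mk) / (M + m))               # ratio of total radiated powers: 1/k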

What about supermassive galactic blackholes?

What about the supermassive galactic blackholes in the centers of galaxies: are they really super-massive, or is G super-large? The masses of galactic super-massive blackholes are in the range 10^5-10^9 solar masses. The geometric mean is 10^7 solar masses, of the order of the standard value n ∼ 10^7 of ℏeff/ℏ. Could one think that such a blackhole actually has a mass in the range 1-100 solar masses, assignable to an intersection of a galactic cosmic string with itself? How galactic blackholes are formed is not well understood. Now this problem would disappear: galactic blackholes would be there from the beginning!

The general conclusion is that only gravitational radiation allows one to distinguish between different masses M+m for a given G(M+m) in a system consisting of two masses, so that classically the opposite scalings of G and M are a symmetry.

See the article Is the hierarchy of Planck constants behind the reported variation of Newton's constant? or the chapter TGD and astrophysics of "Physics in many-sheeted space-time".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, September 05, 2018

The prediction of quantum theory for heat transfer rate fails: evidence for the hierarchy of Planck constants

I encountered in FB a highly interesting finding discussed in two popular articles (see this and this). The original article (see this) is behind a paywall, but one can find the crucial Figure 5 online (see this). It seems that experimental physics is in the middle of the revolution of the century while theoretical physicists straying in the superstring landscape do not have the slightest idea about what is happening.

The size scale of the objects studied - membranes at around room temperature T = 300 K, for instance - is about 1/2 micrometer: the cell length scale range is in question. They produce radiation, and another similar object is heated if there is a temperature difference between the objects. The heat flow is proportional to the temperature difference, and a radiative conductance called Grad characterizes the situation. Planck's black body radiation law, which initiated the development of quantum theory more than a century ago, predicts Grad at large enough distances.

  1. The radiative transfer is larger than predicted by Planck's radiation law at small distances (the nearby region), of the order of the average wavelength of the thermal radiation deducible from its temperature. This is not news.

  2. The surprise was that the radiative conductance is 100 times larger than expected from Planck's law at large distances (the faraway region) for small objects with a size of order .5 micron. This is really big news.

The obvious explanation in the TGD framework is provided by the hierarchy of Planck constants. Part of the radiation has Planck constant heff = n×h0 larger than the standard value h = 6h0 (a good guess for atoms). This scales up the wavelengths, and the size of the nearby region is scaled up by n. The faraway region can become effectively a nearby region and the conductance increases.
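For orientation, the thermal wavelength scale at room temperature and its scaling by n; the value n = 100 is an illustrative assumption suggested by the factor-100 excess:

    # Wien's displacement law gives the dominant thermal wavelength at T = 300 K;
    # with h_eff = n*h the wavelength scale, and hence the size of the "nearby
    # region", would be scaled up by n (n = 100 is assumed for illustration).
    b = 2.898e-3                 # Wien's constant, m*K
    T = 300.0
    lam = b / T                  # ~9.7 micrometers
    n = 100
    print(f"lambda ~ {lam*1e6:.1f} um, scaled by n: {lam*n*1e6:.0f} um")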

My guess is that this unavoidably means the beginning of a second quantum revolution brought by the hierarchy of Planck constants. These experimental findings cannot be swept under the rug anymore.

See the chapter Quantum criticality and dark matter of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, September 04, 2018

Galois groups and genes

The question about possible variations of Geff (see this) led again to the old observation that sub-groups of the Galois group could be analogous to conserved genes, in that they could be conserved in number theoretic evolution. In small variations the Galois subgroups, as analogs of genes, would change Geff only a little bit. For instance, the dimension of the Galois subgroup would change slightly. There are also big variations of Geff in which a new sub-group can emerge.

The analogy between subgroups of Galois groups and genes goes also in the other direction. I have proposed long ago that genes (or maybe even DNA codons) could be labelled by heff/h = n. This would mean that genes (or even codons) are labelled by the Galois group of a Galois extension (see this) of rationals with dimension n defining the number of sheets of the space-time surface as a covering space. This could give a concrete dynamical and geometric meaning for the notion of gene, and it might some day become possible to understand why a given gene correlates with a particular function. This is of course one of the big problems of biology.

One should have some kind of procedure giving rise to hierarchies of Galois groups assignable to genes. One would also like to assign to letter, codon, and gene an extension of rationals and its Galois group. The natural starting point would be a sequence of so called intermediate Galois extensions EH leading from rationals or from some extension K of rationals to the final extension E. A Galois extension has the property that if a polynomial with coefficients in K has a single root in E, also the other roots are in E, meaning that the polynomial with coefficients in K factorizes into a product of linear polynomials. For Galois extensions the defining polynomials are irreducible so that they do not reduce to a product of lower degree polynomials.

Any sub-group H⊂ Gal(E/K) leaves the intermediate extension EH invariant element-wise as a sub-field of E (see this). Any subgroup H⊂ Gal(E/K) thus defines an intermediate extension EH, and subgroups H1⊂ H2⊂ ... define a hierarchy of extensions EH1>EH2>EH3 ... with decreasing dimension. When the subgroup H is normal - in other words, Gal(E/K) maps H to itself under conjugation so that Gal(E/K)/H is a group - the intermediate extension EH is itself Galois over K. The order |H| is the dimension of E as an extension of EH. This is a highly non-trivial piece of information. For a chain of subgroups Hi, the dimension of E over K factorizes into a product of the dimensions of the successive intermediate extensions (the tower law).
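
As a standard textbook illustration of these notions (my example, not specific to TGD), consider K=Q and E=Q(√2,i) with Gal(E/Q)= Z2⊕ Z2. The subgroup H={1,σ}, with σ denoting complex conjugation, leaves EH=Q(√2) element-wise invariant, and the tower law reads [E:Q]= [E:EH]× [EH:Q]= |H|× |Gal(E/Q)/H|= 2× 2= 4.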

Could a sequence of DNA letters/codons somehow define a sequence of extensions? Could one assign to a given letter/codon a definite group Hi so that a sequence of letters/codons would correspond to some kind of product of these groups, or should one be satisfied only with the assignment of a standard kind of extension to each letter/codon?

Irreducible polynomials define Galois extensions, and one should understand what happens to an irreducible polynomial of an extension EH in a further extension to E. The degree of the extension increases by a factor, which is the dimension of E over EH and also the order of H. Is there a standard manner to construct irreducible extensions of this kind?

  1. What comes into the mathematically uneducated mind of a physicist is the functional composition Pm× n(x)= Pm(Pn(x)) of polynomials assignable to sub-units (letters/codons/genes) with coefficients in K as an algebraic counterpart for the product of sub-units. Pm(Pn(x)) would be a polynomial of degree m× n in K and a polynomial of degree m in the variable y=Pn(x), and one could assign to a given gene a fixed polynomial obtained as an iterated function composition. Intuitively it seems clear that in the generic case Pm(Pn(x)) does not decompose into a product of lower order polynomials (see the sketch after this list). One could also use polynomials assignable to codons or letters as basic units. Also polynomials of genes could be fused in the same manner.

  2. If this indeed gives a Galois extension, the dimension m of the intermediate extension should be the same as the order of its Galois group. Composition would be non-commutative but associative, as the physical picture demands. The longer the gene, the higher the algebraic complexity would be. Could functional composition define the rule for how extensions and Galois groups correspond to genes? Very naively, functional composition in the mathematical sense would correspond to composition of functions in the biological sense.

  3. This picture would conform with M8-M4× CP2 correspondence (see this), in which the construction of space-time surfaces at the level of M8 reduces to the construction of zero loci of polynomials of octonions with rational coefficients. DNA letters, codons, and genes would correspond to polynomials of this kind.
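
The degree multiplication and the genericity of irreducibility under composition are easy to experiment with. Below is a small sketch (my illustration, with polynomials chosen purely as an example) using sympy; the irreducibility of the composite in this particular case also follows from the Eisenstein criterion at p=2.

# A small sketch (my illustration): functional composition multiplies
# polynomial degrees, and the composite can remain irreducible over Q.
from sympy import symbols, Poly, compose

x = symbols('x')
P_outer = x**2 - 2   # degree 2 "letter" with Galois group Z2
P_inner = x**3 - 2   # degree 3 "letter"

comp = compose(P_outer, P_inner)     # P_outer(P_inner(x)) = (x**3 - 2)**2 - 2
print(Poly(comp, x).degree())        # 6 = 2*3: degrees multiply
print(Poly(comp, x).is_irreducible)  # True (Eisenstein criterion at p = 2)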

Could one say anything about the Galois groups of DNA letters?
  1. Since n=heff/h serves as a kind of quantum IQ, and since molecular structures consisting of a large number of particles are very complex, one could argue that n for DNA or for its dark variant realized as dark proton sequences can be rather large and depend on the evolutionary level of the organism and even on the type of cell (neuron vs. soma cell). On the other hand, one could argue that in some sense DNA, often thought of as an information processor, could be analogous to an integrable quantum field theory and be solvable in some sense. Notice also that one can start from a background defined by a given extension K of rationals and consider polynomials with coefficients in K. Under some conditions the situation could be like that for rationals.

  2. The simplest guess would be that the 4 DNA letters correspond to the 4 non-trivial finite groups with smallest possible orders: the cyclic groups Z2 and Z3 with orders 2 and 3 plus the 2 finite groups of order 4 (see the table of finite groups in this). The groups of order 4 are the cyclic group Z4 and the Klein group Z2⊕ Z2 acting as the symmetry group of a rectangle that is not a square: all its elements have square equal to the unit element. All these 4 groups are Abelian.

  3. On the other hand, polynomial equations of degree not larger than 4 can be solved exactly in the sense that one can write their roots in terms of radicals. Could there exist some kind of connection between the number 4 of DNA letters and the polynomials of degree less than 5, for whose roots one can write closed expressions in terms of radicals, as Galois found? Could the polynomials obtained by a repeated functional composition of the polynomials of DNA letters also have this solvability property?

    This could be the case! Galois theory states that the roots of a polynomial are solvable in terms of radicals if and only if the Galois group is solvable, meaning that it can be constructed from abelian groups using abelian extensions (see this).

    Solvability translates to the statement that the group allows a so called sub-normal series 1=G0<G1< ...<Gk=G such that Gj-1 is a normal subgroup of Gj and Gj/Gj-1 is an abelian group: it is essential that the series extends to G. An equivalent condition is that the derived series G→ G(1) → G(2) → ...→ 1, in which the (j+1):th group is the commutator subgroup of the j:th group, ends in the trivial group.

    If one constructs the iterated polynomials by using only the 4 polynomials with Abelian Galois groups, the intuition of a physicist suggests that the solvability condition is guaranteed! Indeed, the Galois group of a composite polynomial embeds into a wreath product of the Galois groups of the composed polynomials, and a wreath product of solvable groups is solvable.

  4. The Wikipedia article also informs that a finite solvable group is a group whose composition series has only factors which are cyclic groups of prime order. Abelian groups are trivially solvable, nilpotent groups are solvable, and p-groups (having order which is a power of a prime) are solvable; moreover, all finite p-groups are nilpotent. This might relate to the importance of primes and their powers in TGD.

    Every group with order less than 60 is solvable. A fourth order polynomial can have at most S4 with 24 elements as its Galois group and is thus solvable. A fifth order polynomial can have as its Galois group the alternating group A5 with 60 elements, the smallest non-solvable group, and is in this case not solvable. Sn is not solvable for n>4, and since Sn as a Galois group is favored by its special properties (see this), it would seem that solvable polynomials are exceptions.

    A5 acts as the group of icosahedral orientation-preserving isometries (rotations). Icosahedron and the tetrahedron glued to it along one triangular face play a key role in the TGD inspired model of bio-harmony and of the genetic code (see this and this). The gluing of the tetrahedron increases the number of codons from 60 to 64. The gluing of the tetrahedron to the icosahedron also reduces the isometry group to the rotations leaving the common face fixed and makes it solvable: could this explain why the ugly looking gluing of the tetrahedron to the icosahedron is needed? Could the smallest solvable groups and the smallest non-solvable group be crucial for understanding the number theory of the genetic code? The solvability claims above are easy to check by computer, as the small sketch after this list illustrates.
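
A quick check of the group-theoretic claims (my own verification sketch, using sympy's permutation groups):

# A quick check of the solvability claims using sympy's permutation groups.
from sympy.combinatorics.named_groups import (AlternatingGroup, CyclicGroup,
                                              SymmetricGroup)

print(CyclicGroup(4).is_solvable)       # True: abelian groups are solvable
print(SymmetricGroup(4).is_solvable)    # True: |S4| = 24 < 60
print(AlternatingGroup(5).is_solvable)  # False: A5 with 60 elements is the
                                        # smallest non-solvable group

# The derived series of S4 ends in the trivial group, as solvability demands:
print([G.order() for G in SymmetricGroup(4).derived_series()])  # [24, 12, 4, 1]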

An interesting question inspired by M8-H duality (see this) is whether solvability could be posed on the octonionic polynomials as a condition guaranteeing that TGD is an integrable theory in the number theoretical sense, or whether it perhaps follows from the conditions posed on the octonionic polynomials. Space-time surfaces in M8 would correspond to zero loci of the real/imaginary parts (in the quaternionic sense) of octonionic polynomials obtained from rational polynomials by analytic continuation. Could solvability relate to the condition guaranteeing M8-H duality, boiling down to the condition that the tangent spaces of the space-time surface are labelled by points of CP2? This requires that the tangent or normal space is associative (quaternionic) and that it contains a fixed complex sub-space of octonions or, perhaps more generally, that there exists an integrable distribution of complex subspaces of octonions defining an analog of a string world sheet.

See the article Is the hierarchy of Planck constants behind the reported variation of Newton's constant? or the new chapter About the Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are identical giving G= R2/ℏeff of "Physics in many-sheeted space-time". See also the chapter Does M8-H duality reduce classical TGD to octonionic algebraic geometry?.

For a summary of earlier postings see Latest progress in TGD.


Sunday, September 02, 2018

Is the hierarchy of Planck constants behind the reported variation of Newton's constant?

It has been known for a long time that measurements of G give differing results, with differences between measurements larger than the measurement accuracy (see this and this). This suggests that there might be some new physics involved. In the TGD framework the hierarchy of Planck constants heff=nh0, h=6h0, together with the condition that the theory contains the CP2 size scale R as the only fundamental length scale, suggests the possibility that Newton's constant is given by G= R2/ℏeff, where R replaces the Planck length (lP= (ℏ G)1/2 → lP= R) and ℏeff/h is in the range 106-107.
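
The quoted range follows from simple arithmetic: equating the standard expression G= lP2/ℏ (units with c=1) with G= R2/ℏeff gives ℏeff/ℏ= (R/lP)2. A back-of-the-envelope sketch (my own, with the ratio R/lP as an assumed input):

# Back-of-the-envelope check (units with c = 1): from G = lP**2/hbar and
# G = R**2/hbar_eff one gets hbar_eff/hbar = (R/lP)**2. The values of R/lP
# below are assumed inputs, not measured numbers.
import math

for R_over_lP in (1.0e3, 3.2e3):
    ratio = R_over_lP**2  # hbar_eff/hbar
    print(f"R/lP = {R_over_lP:.1e} -> hbar_eff/hbar ~ 10^{math.log10(ratio):.1f}")
# R/lP between 10^3 and 10^3.5 reproduces the quoted range 10^6 - 10^7.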

The spectrum of Newton's constant is consistent with Newton's equations if the scaling of ℏeff inducing the scaling of G is accompanied by an opposite scaling of the M4 coordinates in M4× CP2: the dark matter hierarchy would correspond to a discrete hierarchy of scales given by the breaking of scale invariance. In the special case heff=hgr=GMm/v0 the quantum critical dynamics has the gravitational fine structure constant (v0/c)/4π as coupling constant, and this has no dependence on the value of G or on the masses M and m.
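
That the coupling is independent of G and of the masses can be seen by direct substitution, assuming the standard definition of the gravitational fine structure constant: αgr= GMm/(4π ℏgr c)= GMm v0/(4π GMm c)= (v0/c)/4π, so that G, M and m indeed cancel.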

In this article I consider a possible interpretation for the finding of a Chinese research group measuring two values of G differing by 47 ppm in terms of varying heff. Also a model for the fountain effect of superfluidity is discussed: the wave function de-localizes, and the maximal height of the vertical orbit increases due to the change of the gravitational acceleration g at the surface of Earth induced by a change of heff caused by super-fluidity. Also the Podkletnov effect is considered. TGD inspired theory of consciousness allows one to speculate about levitation experiences possibly induced by the modification of Geff at the flux tubes of some part of the magnetic body accompanying the biological body in TGD based quantum biology.

See the article Is the hierarchy of Planck constants behind the reported variation of Newton's constant? or the new chapter About the Nottale's formula for hgr and the possibility that Planck length lP and CP2 length R are identical giving G= R2/ℏeff of "Physics in many-sheeted space-time".

For a summary of earlier postings see Latest progress in TGD.
