https://matpitka.blogspot.com/2016/08/

Tuesday, August 30, 2016

How the hierarchy of Planck constants might relate to the almost vacuum degeneracy for twistor lift of TGD?

The twistor lift of classical TGD is physically attractive, but it is still unclear whether it satisfies all constraints. The basic implication of the twistor lift would be an understanding of the gravitational and cosmological constants. The cosmological constant removes the infinite vacuum degeneracy of Kähler action, but because of the extreme smallness of the cosmological constant Λ, which plays the role of an inverse gauge coupling strength, the situation for nearly vacuum extremals of Kähler action in the recent cosmology is non-perturbative.

What is remarkable is that the twistor lift is possible only in zero energy ontology (ZEO), since in the ordinary ontology the volume term would be infinite due to the infinite volume of the space-time surface: in ZEO the space-time volume is finite thanks to the finite size of the causal diamond (CD). Furthermore, the condition that destructive interference does not cancel the vacuum functional implies Bohr quantization for the action in ZEO. The scale of the CD corresponds naturally to the length scale L_Λ = (8π/Λ)^(1/2) defined by the cosmological constant.

One motivation for introducing the hierarchy of Planck constants was that the phase transition increasing the Planck constant makes perturbation theory possible in a strongly interacting system. Nature itself would take care of the convergence of the perturbation theory by scaling the Kähler coupling strength αK to αK/n, n = heff/h. This hierarchy might allow one to construct gravitational perturbation theory, as has been proposed already earlier. This would allow gravitation to be quantum coherent in astrophysical and even cosmological scales.

The following picture emerges.

  1. Either L_Λ = (8π/Λ)^(1/2) or the length L characterizing the vacuum energy density as ρ_vac = hbar/L^4, or both, can obey the p-adic length scale hypothesis as analogs of coupling constant parameters. The third option makes sense if the ratio R/l_P of the CP2 radius and the Planck length is a power of two: it can indeed be chosen to be R/l_P = 2^12 within measurement uncertainties. L(now) corresponds to the p-adic length scale L(k) ∝ 2^(k/2) for k = 175, the size scale of a neuron and axon (see the numerical check after this list).

  2. A microscopic explanation for the vacuum energy realizing strong form of holography is in terms of the vacuum energy of radial flux tubes emanating from the source of the gravitational field. The independence of the energy from the value of n = heff/h implies an analog of the Uncertainty Principle: the product Nn of the number N of flux tubes and the value of n defining the number of sheets of the covering associated with heff = n×h is constant. This picture suggests that holography is realized in biology in terms of pixels whose size scale is characterized by L rather than the Planck length.

  3. Vacuum energy is explained both in terms of the Kähler magnetic energy of flux tubes carrying dark matter and in terms of the vacuum energy associated with the cosmological constant. The two explanations could be understood as two limits of the theory in which either homologically non-trivial or trivial flux tubes dominate. Assuming quantum criticality in the sense that these two phases can transform to each other, one obtains a prediction for the cosmological constant, string tension, magnetic field, and thickness of the critical flux tubes.

  4. An especially interesting result is that in the recent cosmology the size scale of a large neuron would be a fundamental physical length scale determined by the cosmological constant. This gives an additional boost to the idea that biology and fundamental physics could relate closely to each other: the size scale of the neuron would not be an accident but "determined in stars" and even beyond them!
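
A rough numerical check of these scales is easy to do. The sketch below is my own illustration and assumes standard measured values not quoted in the posting: Λ ≈ 1.1×10^-52 m^-2, dark energy density ρ_vac ≈ 5.3×10^-10 J/m^3, and the p-adic length scale convention L(151) ≈ 10 nm.

    import math

    # Assumed standard values (not from the posting).
    Lambda_cc = 1.1e-52    # cosmological constant, 1/m^2
    rho_vac = 5.3e-10      # dark energy density, J/m^3
    hbar_c = 3.16e-26      # hbar*c, J*m

    # Length scale defined by the cosmological constant: L_Lambda = (8*pi/Lambda)^(1/2).
    L_Lambda = math.sqrt(8 * math.pi / Lambda_cc)
    print(f"L_Lambda ~ {L_Lambda:.1e} m")            # ~ 5e26 m, of the order of the horizon size

    # Length L defined by rho_vac = hbar/L^4, i.e. L = (hbar*c/rho_vac)^(1/4) in SI units.
    L = (hbar_c / rho_vac) ** 0.25
    print(f"L ~ {L * 1e6:.0f} micrometers")          # roughly 90 micrometers

    # p-adic length scale L(k) proportional to 2^(k/2), normalized so that L(151) ~ 10 nm.
    L175 = 1e-8 * 2 ** ((175 - 151) / 2)
    print(f"L(175) ~ {L175 * 1e6:.0f} micrometers")  # ~ 41 micrometers

Both L and L(175) land within a factor of two of each other, in the size range of a large neuron, which is the coincidence the posting refers to.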

See the article How the hierarchy of Planck constants might relate to the almost vacuum degeneracy for twistor lift of TGD?. For background see the chapter From Principles to Diagrams or the article with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, August 25, 2016

Did Dragonfly 44 fail as a galaxy?

In Phys.Org there was an article telling about the discovery of a dark galaxy - Dragonfly 44 - whose mass, estimated using the standard model of galactic dark matter, is of the same order of magnitude as that of the Milky Way, and for which the region within the half-light radius is deduced to be 98 per cent dark. The dark galaxies found earlier have been much lighter. Dragonfly 44 possesses 94 globular clusters and in this respect resembles ordinary galaxies in this mass range.

The abstract of the article telling about the discovery gives a more quantitative summary about the finding.

Recently a population of large, very low surface brightness, spheroidal galaxies was identified in the Coma cluster. The apparent survival of these Ultra Diffuse Galaxies (UDGs) in a rich cluster suggests that they have very high masses. Here we present the stellar kinematics of Dragonfly 44, one of the largest Coma UDGs, using a 33.5 hr integration with DEIMOS on the Keck II telescope. We find a velocity dispersion of 47 km/s, which implies a dynamical mass of M_dyn = 0.7×10^10 M_sun within its deprojected half-light radius of r_1/2 = 4.6 kpc. The mass-to-light ratio is M/L = 48 M_sun/L_sun, and the dark matter fraction is 98 percent within the half-light radius. The high mass of Dragonfly 44 is accompanied by a large globular cluster population. From deep Gemini imaging taken in 0.4" seeing we infer that Dragonfly 44 has 94 globular clusters, similar to the counts for other galaxies in this mass range. Our results add to other recent evidence that many UDGs are "failed" galaxies, with the sizes, dark matter content, and globular cluster systems of much more luminous objects. We estimate the total dark halo mass of Dragonfly 44 by comparing the amount of dark matter within r = 4.6 kpc to enclosed mass profiles of NFW halos. The enclosed mass suggests a total mass of ~ 10^12 M_sun, similar to the mass of the Milky Way. The existence of nearly-dark objects with this mass is unexpected, as galaxy formation is thought to be maximally-efficient in this regime.


To get some order of magnitude perspective it is good to start by noticing that r_1/2 = 4.6 kpc is about 15,000 ly - the distance of the Sun from the galactic center is about 8 kpc. The diameter of the Milky Way is 31-55 kpc, and the radius of the black hole at the center of the Milky Way is smaller than 17 light hours.


The proposed interpretation is as a failed galaxy. What could this failure mean? Did Dragonfly 44 try to become an ordinary galaxy while dark matter remained almost dark inside the region defined by the half-light radius? It is very difficult to imagine what the failure of dark matter to become ordinary matter could mean. In the TGD framework this would correspond to a phase transition transforming dark matter, identified as heff = n×h phases, to ordinary matter; this could be imagined but is not done in the following. Could the unexpected finding challenge the standard assumption that dark matter forms a halo around the galactic center?

The mass of Dragonfly 44 is deduced from the velocities of stars: the faster they move, the larger the mass. The model for dark matter assumes a dark matter halo, and this in turn gives an estimate for the total mass of the galaxy. Here a profound difference from the TGD picture emerges.

  1. In TGD most of the dark matter and energy are concentrated at long cosmic strings transformed to magnetic flux tubes, with galaxies along them like pearls along a string. Galaxies are indeed known to be organized into filaments. Galactic dark energy could correspond to the magnetic energy. The twistorial lift of TGD predicts also a cosmological constant (see this). Both forms of dark energy could be involved. The linear distribution of dark matter along cosmic strings implies an effectively 2-D gravitational logarithmic potential giving - in the Newtonian approximation and neglecting the effect of the ordinary matter - a constant velocity spectrum, which serves as a good approximation to the observed velocity spectrum (see the sketch after this list). A prediction distinguishing TGD from the halo model is that the motion along the cosmic string is free. The self-gravitation of the pearls however prevents them from decaying.

  2. Dark matter and energy at the galactic cosmic string (or flux tube) could explain most of the mass of Dragonfly 44 and the velocity spectrum for its stars. No halo of dark stars would be needed and there would be no dark stars within r_1/2. Things would be exactly what they look like, apart from the flux tube!

    The "failure" of Dragonfly 44 to become ordinary galaxy would be that stars have not been gathered to the region within r1/2. Could the density of the interstellar gas been low in this region? This would not have prevented the formation of stars in the outer regions and feeling the gravitational pull of cosmic string.

This extremely simple explanation of the finding, for which the standard halo model provides no explanation, would distinguish the TGD inspired model from the standard intuitive picture of the formation of galaxies as a process beginning from the galactic nucleus and proceeding outwards. Dragonfly 44 would be analogous to a hydrogen atom with electrons at very large orbits only. This analogy goes much further in the TGD framework since galaxies are predicted to be quantal objects (see this).

See the article Some astrophysical and cosmological findings from TGD point of view. For background see the chapter TGD and Astrophysics.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Cloning of maximally negentropic states is possible: DNA replication as cloning of this kind of states?

In a Facebook discussion with Bruno Marchal and Stephen King the notion of quantum cloning as copying of a quantum state popped up, and I ended up asking about approximate cloning and got a nice link, about which more below. From Wikipedia one learns some interesting facts about cloning. The no-cloning theorem states that the cloning of all states by a unitary time evolution of the tensor product system is not possible. It is however possible to clone an orthogonal basis of states. Does this have some deep meaning?

As a response to my question I got a link to an article of Lamourex et al showing that the cloning of entanglement - to be distinguished from the cloning of a quantum state - is not possible in the general case. Separability - the absence of entanglement - is not preserved. Approximate cloning necessarily generates some entanglement in this case, and the authors give a lower bound for the remaining entanglement in the case of an unentangled state pair.

The cloning of a maximally entangled state is however possible. What makes this so interesting is that maximally negentropic entanglement for rational entanglement probabilities in the TGD framework corresponds to maximal entanglement - the entanglement probabilities form a matrix proportional to the unit matrix - and just this entanglement is favored by Negentropy Maximization Principle (NMP). Could maximal entanglement be involved with, say, DNA replication? Could maximally negentropic entanglement for algebraic extensions of rationals allow cloning so that DNA entanglement negentropy could be larger than entanglement entropy?

What about entanglement probabilities in an algebraic extension of rationals? In this case the real number based entanglement entropy is not maximal since the entanglement probabilities are different. What can one say about p-adic entanglement negentropies: are they still maximal under some reasonable conditions? The logarithms involved depend on the p-adic norms of the probabilities, and the norm is in the generic case just an inverse power of p. Number theoretical universality suggests that the entanglement probabilities are of the form

P_i = a_i/N

with ∑ a_i = N, where the a_i are algebraic numbers - not natural numbers - and thus have unit p-adic norm.

With this assumption the p-adic norms of the P_i reduce to those of 1/N, as for maximal rational entanglement. If this is the case the p-adic negentropy equals log(p^k) if p^k divides N. The total negentropy equals log(N) and is maximal, having the same value as for rational probabilities equal to 1/N.
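
A small numerical illustration of this statement (my own sketch, using the definition of the p-adic entropy in which log(P_i) is replaced by the logarithm of the p-adic norm of P_i): for P_i = 1/N the negentropy associated with a prime p dividing N equals log(p^k) for p^k dividing N exactly, and the sum over the prime factors of N gives log(N).

    import math
    from sympy import factorint

    def padic_negentropies(N):
        # For P_i = 1/N the p-adic norm is |1/N|_p = p^k with p^k || N,
        # so the p-adic negentropy is k*log(p) = log(p^k).
        return {p: k * math.log(p) for p, k in factorint(N).items()}

    N = 60  # = 2^2 * 3 * 5, an arbitrary example
    negs = padic_negentropies(N)
    print(negs)                              # log 4, log 3 and log 5
    print(sum(negs.values()), math.log(N))   # both equal log 60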

The real entanglement entropy is now however smaller than log(N), which would mean that the p-adic negentropy is larger than the real entropy, as conjectured earlier (see this). For rational entanglement probabilities the generation of entanglement negentropy - conscious information - during evolution would be accompanied by the generation of an equal entanglement entropy measuring the ignorance about what the negentropically entangled states representing selves are.

This conforms with the observation of Jeremy England that living matter is an entropy producer (for a TGD inspired commentary see this). For algebraic extensions of rationals this entropy could however be smaller than the total negentropy. Second law follows as a shadow of NMP if the real entanglement entropy corresponds to the thermodynamical entropy. Algebraic evolution would allow the generation of conscious information faster than the environment is polluted, one might concretize! The higher the dimension of the algebraic extension of rationals, the larger the difference could be, and the future of the Universe might be brighter than one might expect by just looking around! Very consoling! One should however show that the above described situation can be realized, as NMP strongly suggests, before opening a bottle of champagne;-).

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, August 24, 2016

Laundry pile phenomenon and various dimensions

Sabine Hossenfelder talks about various notions of dimension in her posting What if the universe was like a pile of laundry?. She introduces the ordinary topological dimension, the Hausdorff dimension, and the spectral dimension.

  1. Sabine claims that the Hausdorff dimension of a 3-D pipe is lower than its ordinary dimension. This is definitely not the case (see the Wikipedia article). One needs very irregular sets, which are not smooth or not even continuous - typically fractals. Sabine seems to confuse the spectral dimension of the pipe with the Hausdorff dimension. For the pipe one could think that the spectral dimension could be lower than 3, since at the limit of a very thin pipe one obtains a line with dimension 1, and if the sound waves do not depend on the transversal coordinates of the pipe they effectively live in a 1-D line.

  2. Sabine states that one speaks about spectral dimension in dimensional reduction. This makes sense. The intuitive understanding in the case of field theories dimensionally reduced to 4-D space-time is that the spectral dimension is just 4 if the field modes involved depend on the coordinates of the base space only. Otherwise the spectral dimension is higher than 4. One could imagine defining spectral dimension as a quantum/thermal average for the number of coordinates that the modes appearing in the quantum/thermal state depend on (a small numerical sketch of the standard diffusion-based definition appears after this list).

  3. Sabine also describes what might be called the pile of laundry metaphor: a lower-dimensional surface in a high-dimensional space effectively fills it. This is a very good metaphor but I am not sure whether it describes what would happen in loop quantum gravity, where one has just 4-D discretized space-time degenerating locally to a 2-D one - possibly by the failure of the numerics or of the model itself.

    The pile of laundry phenomenon would require 2-D clothes in 4-D space filling it: can one really have a situation in which the 2-D orbits of loops effectively fill the space-time? To my best intuitive understanding the "laundry" dimension is something different from the Hausdorff and spectral dimensions.

    In any case, the "laundry" dimension would be defined by what happens when one starts from a point of the cloth and is allowed to move in the higher-D space rather than only along the cloth, where everything is just smooth and continuous. This typically leads to another fold and one is lost in the folds of the laundry pile. Perhaps one should speak just about "laundry" dimension. I am not sure.
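
For concreteness, here is a minimal sketch of the diffusion-based definition of spectral dimension used in the quantum gravity literature (my own illustration, not specific to TGD or to the laundry metaphor): the return probability of a random walk behaves as P(t) ∝ t^(-d_s/2), so the slope of log P(t) against log t estimates d_s.

    import math, random

    def return_probability(dim, steps, walkers=100_000):
        # Fraction of simple lattice random walks back at the origin after
        # `steps` steps; P(t) ~ t^(-d_s/2) defines the spectral dimension d_s.
        count = 0
        for _ in range(walkers):
            pos = [0] * dim
            for _ in range(steps):
                axis = random.randrange(dim)
                pos[axis] += random.choice((-1, 1))
            if all(x == 0 for x in pos):
                count += 1
        return count / walkers

    dim, t1, t2 = 2, 10, 40          # even step numbers, so that return is possible
    p1, p2 = return_probability(dim, t1), return_probability(dim, t2)
    d_s = -2 * math.log(p2 / p1) / math.log(t2 / t1)
    print(f"estimated spectral dimension ~ {d_s:.1f}")   # ~ 2 for a 2-D lattice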

What about the situation in TGD?
  1. In TGD the old-fashioned dimensional reduction from the 6-D twistor space to the 4-D space-time defining its base could happen, and the spectral dimension would be 4 since no excitations are involved: the twistorial description would be just an alternative description. It seems however that it brings in both the Planck length, via the radius of the sphere defining the fiber of the twistor space, and the cosmological constant, defining the coefficient of a volume term added to Kähler action in the dimensional reduction.

    Together with the CP2 radius, analogous to the GUT unification scale, one would have 3 fundamental scales. The cosmological constant would be dynamical and in the recent cosmology it would correspond to the neutrino Compton length, which is of the order of the cell size scale so that biology and cosmology would meet!

  2. Strong form of holography (SH) states that the information about space-time dynamics (and quantum dynamics in general) is coded by string world sheets and partonic 2-surfaces (or possibly by their metrically 2-D light-like orbits, I must be careful here). Although 4 goes to 2, this is very different from what is claimed to occur in the loop gravity numerics. The reduction is at the level of information theory: 2-surfaces define "space-time genes".

  3. The reduction of the metric dimensionality of the light-like orbits of partonic 2-surfaces from 3 to 2 is however analogous to what is claimed to occur in the loop gravity numerics. The point is that the 4-D tangent space at points of the light-like 3-surface reduces locally to 3-D since the determinant of the 4-D metric vanishes due to the fact that the signature changes from Euclidian (-1,-1,-1,-1) to Minkowskian (1,-1,-1,-1) and is (0,-1,-1,-1) at the 3-surface.

  4. The laundry pile phenomenon applies to higher-dimensional clothes drying in a higher-D space-time. This could happen for the 3-space (4-D space-time) represented as a surface in M4× CP2 in TGD - clothes would be space-time sheets drying in the 8-D imbedding space;-). In this case the "laundry" dimension would effectively become higher than 3 (4).

    One could imagine that the 3-D light-like orbits of 2-D partonic surfaces effectively fill the space-time surface. In a similar manner, their 2-D partonic ends at the light-like boundaries of the causal diamond (CD) could fill the 3-D ends of the space-time surface at the ends of the CD. The 4-D regions with Euclidian signature of the induced metric, analogous to the lines of Feynman diagrams, would almost totally fill the space-time surface, leaving only a very tiny volume of Minkowskian signature, where signals would propagate. A very dense and also extremely complex and information rich phase would be in question.

    This kind of net-like Feynman diagrams were assigned to the non-perturbative phase by 't Hooft in gauge theories, and this led to the notion of holography.

  5. Could the laundry pile phenomenon apply to the many-sheeted space-time? The QFT limit of TGD is defined by replacing the sheets of the many-sheeted space-time with a single slightly curved region of M4, with the induced gauge potentials of the different space-time sheets summed to define the standard model gauge potentials. The gravitational field, defined as the deviation of the metric from the flat M4 metric, is obtained in the same manner. Could it make sense to assign a "laundry dimension" larger than 4 to the resulting counterpart of the GRT space-time, and could one make the QFT-GRT limit free of divergences by using dimensional regularization with the "laundry dimension"?

  6. One can add to the list of dimensions also the algebraic dimension, which is important in TGD: the dimension of the algebraic extension of rationals becomes a measure for the level of number theoretic evolution and complexity.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, August 22, 2016

Does GRT really allow gravitational radiation?

In a Facebook discussion Niklas Grebäck mentioned the Weyl tensor and I learned something that I should have noticed a long time ago. The Wikipedia article lists the basic properties of the Weyl tensor as the traceless part of the curvature tensor, call it R. The Weyl tensor C vanishes for conformally flat space-times. In dimensions D=2,3 the Weyl tensor vanishes identically, so that these space-times are always conformally flat: this obviously makes the dimension D=3 of space very special. Interestingly, one can have non-flat space-times with non-vanishing Weyl tensor but vanishing Schouten/Ricci/Einstein tensor and thus also with vanishing energy momentum tensor.

The rest of the curvature tensor R can be expressed in terms of the so-called Kulkarni-Nomizu product P•g of the Schouten tensor P and the metric tensor g: R = C + P•g, which can also be turned into a definition of the Weyl tensor, taking the expression of the curvature tensor in terms of Christoffel symbols as the fundamental definition. The Kulkarni-Nomizu product • is defined as the tensor product of two 2-tensors with symmetrization with respect to the first and second index pairs plus antisymmetrization with respect to the second and fourth indices (written out below).
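
In index notation the standard formulas just described read, for symmetric 2-tensors A and B,

(A•B)_abcd = A_ac B_bd + A_bd B_ac - A_ad B_bc - A_bc B_ad ,

R_abcd = C_abcd + (P•g)_abcd .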

The Schouten tensor P is expressible as a combination of the Ricci tensor Ric, defined by the trace of R over the first and third indices, and the metric tensor g multiplied by the curvature scalar s (denoted s rather than R in order to use index free notation without confusion with the curvature tensor). The expression reads as

P = (1/(D-2)) × [Ric - (s/(2(D-1))) × g] .

Note that the coefficients of Ric and g differ from those of the Einstein tensor. The Ricci tensor and the Einstein tensor are related to the energy momentum tensor by Einstein's equations.

The Weyl tensor is assigned with gravitational radiation in GRT. What I see as a serious interpretational problem is that by Einstein's equations gravitational radiation would carry no energy and momentum in the absence of matter (checked symbolically below). One could argue that there are no free gravitons in GRT if this interpretation is adopted! This could be seen as a further argument against GRT besides the problems with the notions of energy and momentum: I had not realized this earlier.
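
This is easy to check symbolically for the textbook pp-wave metric, the standard GRT example of a curved vacuum carrying pure Weyl curvature (my own illustration; the pp-wave is not a TGD extremal). The sketch below computes the Ricci tensor of ds^2 = H(u,x,y) du^2 + 2 du dv - dx^2 - dy^2 with sympy; choosing H harmonic in (x,y), e.g. H = f(u)(x^2 - y^2), gives vanishing Ricci (hence vanishing energy momentum tensor) while Riemann and thus Weyl stay non-zero:

    import sympy as sp

    u, v, x, y = sp.symbols('u v x y')
    f = sp.Function('f')
    H = f(u) * (x**2 - y**2)      # harmonic in (x, y): a vacuum pp-wave
    X = [u, v, x, y]
    n = 4

    # Brinkmann-form pp-wave metric.
    g = sp.Matrix([[H, 1, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, -1, 0],
                   [0, 0, 0, -1]])
    gi = g.inv()

    # Christoffel symbols Gamma^a_bc.
    Gam = [[[sum(gi[a, d] * (sp.diff(g[d, b], X[c]) + sp.diff(g[d, c], X[b])
                             - sp.diff(g[b, c], X[d])) for d in range(n)) / 2
             for c in range(n)] for b in range(n)] for a in range(n)]

    # Riemann tensor R^a_bcd.
    def riem(a, b, c, d):
        return sp.simplify(sp.diff(Gam[a][b][d], X[c]) - sp.diff(Gam[a][b][c], X[d])
                           + sum(Gam[a][e][c] * Gam[e][b][d]
                                 - Gam[a][e][d] * Gam[e][b][c] for e in range(n)))

    Ric = sp.Matrix(n, n, lambda b, d: sum(riem(a, b, a, d) for a in range(n)))
    print(sp.simplify(Ric))    # the zero matrix: a vacuum solution
    print(riem(2, 0, 2, 0))    # yet this Riemann component is proportional to f(u)

For a generic H the only surviving Ricci component is proportional to the transverse Laplacian H_xx + H_yy, which is exactly the Einstein equation content of such a wave.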

Interestingly, in the TGD framework the so-called massless extremals (MEs) (see this and this) are four-surfaces, which are extremals of Kähler action, have Weyl tensor equal to the curvature tensor and therefore would have an interpretation in terms of gravitons. Now these extremals are however non-vacuum extremals.

  1. Massless extremals correspond to graphs of possibly multi-valued maps from M4 to CP2. CP2 coordinates are arbitrary functions of the variables u = k•m and w = ε•m (here "•" denotes the M4 inner product). k is a light-like wave vector and ε a space-like polarization vector orthogonal to k, so that the interpretation in terms of a massless particle with polarization is possible. An ME describes in the most general case a wave packet preserving its shape and propagating with maximal signal velocity along a kind of tube analogous to a wave guide, so that MEs are ideal for precisely targeted communications and central in TGD inspired quantum biology. MEs do not have Maxwellian counterparts. For instance, MEs can carry light-like gauge currents parallel to them: this is not possible in Maxwell's theory.

  2. I have discussed a generalization of this solution ansatz in which the directions defined by the light-like vector k and the polarization vector ε orthogonal to it are not constant anymore but define a slicing of M4 by orthogonal curved surfaces (analogs of string world sheets and space-like surfaces orthogonal to them). MEs, in their simplest form at least, are minimal surfaces and actually extremals of practically any general coordinate invariant action principle. For instance, this is the case if the volume term suggested by the twistorial lift of Kähler action (see this) and identifiable in terms of the cosmological constant is added to Kähler action.

  3. MEs carry non-trivial induced gauge fields and gravitational fields identified in terms of the induced metric. I have identified them as correlates for particles, which correspond to pairs of wormhole contacts between two space-time sheets such that at least one of them is an ME. MEs would accompany both gravitational radiation and other forms of radiation classically and serve as their correlates. For massless extremals the metric tensor is of the form

    g = m + a ε⊗ε + b k⊗k + c(ε⊗k + k⊗ε) ,

    where m is the metric of empty Minkowski space. The curvature tensor is necessarily quadrilinear in the polarization vector ε and the light-like wave vector k (light-like for both the M4 and the ME metric), and from the general expression of the Weyl tensor C in terms of R and g it is equal to the curvature tensor: C = R.

    Hence the interpretation as a graviton solution conforms with the GRT interpretation. Now however the energy momentum tensor for the induced Kähler form is non-vanishing and bilinear in the velocity vector k, and the interpretational problem is avoided.

What is interesting is that also at the GRT limit the cosmological constant saves gravitons from reducing to vacuum solutions. The deviation of the energy density given by the cosmological term from that for the Minkowski metric is identifiable as the gravitonic energy density. The mysterious cosmological constant would be necessary for making gravitons non-vacuum solutions. The value of the graviton amplitude would be determined by the continuity conditions for Einstein's equations with the cosmological term. The p-adic evolution of the cosmological term predicted by TGD is however difficult to understand in the GRT framework.

See the article Does GRT really allow gravitational radiation?. For background see the chapter Basic extremals of the Kähler action.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Friday, August 19, 2016

New findings about high-temperature super-conductors

Bozovic et al have reported rather interesting new findings about high Tc super-conductivity: for over-critical doping the critical temperature is proportional to the density of what is identified as Cooper pairs of the electronic super-fluid. Combined with the earlier findings that super-conductivity is lost - not by the splitting of Cooper pairs - but by a reduction of the scale of quantum coherence, and that below a minimal doping fraction the critical temperature goes abruptly to zero, this allows one to add details to the earlier TGD inspired model of high Tc super-conductivity. Super-conductivity would indeed be lost by the reconnection of flattened square shaped long flux loops to the shorter loops of the pseudogap phase. Quantum coherence would be reduced to a smaller scale as heff is reduced. Transversal flux tube "sound waves" would induce the reconnections. Electrons at the flux loops would stabilize them by contributing to the energy density and thus to the inertia, increasing the string tension so that the average amplitude squared of the oscillations is reduced and the critical temperature increases with electron density.

For details see the chapter Quantum Model for Bio-Superconductivity: I of "TGD and EEG" or the article New findings about high-temperature super-conductors.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, August 16, 2016

Combinatorial Hierarchy: two decades later

Combinatorial Hierarchy (CH), proposed by Noyes and Bastin, is a hierarchy consisting of the Mersenne integers M(n) = M_M(n-1) = 2^M(n-1) - 1 starting from M_1 = 2. The first members of the hierarchy are given by 2, 3, 7, 127, M_127 = 2^127 - 1 and are primes. The conjecture of Catalan is that the hierarchy continues up to some finite prime. It was proposed by Pierre Noyes and Ted Bastin that the first levels of the hierarchy up to M_127 are important physically and correspond to various interactions (see this). I have proposed that the levels of CH define a hierarchy of codes containing the genetic code corresponding to M_7 and also the memetic code assignable to M_127 (see this).
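
The arithmetic is trivial to check; sympy's primality test handles 2^127 - 1 directly, while the primality of the next member 2^(2^127-1) - 1 is far beyond any known method:

    from sympy import isprime

    m, chain = 2, [2]
    for _ in range(4):
        m = 2 ** m - 1                         # M(n) = 2^(M(n-1)) - 1
        chain.append(m)

    print(chain[:4])                           # [2, 3, 7, 127]
    print(len(str(chain[4])))                  # M_127 has 39 digits
    print(all(isprime(c) for c in chain))      # True: all five members are prime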

Pierre Noyes and Ted Bastin proposed also an argument for why CH contains only the levels mentioned above. This has not been part of the TGD view about CH: instead of this argument I have considered the possibility that CH does not extend beyond M_127. With the inspiration coming from an email discussion I tried to understand the argument stating that CH contains M_127 as the highest level, and ended up with a possible interpretation of the condition. Zero energy ontology (ZEO) and the representation of quantum Boolean statements A→B as fermionic parts of positive and negative energy parts of zero energy states are essential. This led to several interesting new results.

  1. To my best understanding the original argument of Noyes and Bastin does not allow the M_127 level, whereas the prime property does. States at the M_127 level cannot be mapped to zero energy states at the M_7 level. Allowing a wild association with Gödel's theorem, one could say that there is a huge number of truths at the M_127 level not realizable as theorems at the M_7 level.

    A possible interpretation is that the M_127 level corresponds to the next level in the abstraction hierarchy defined by CH and to the transition from the imbedding space level to the level of the "world of classical worlds" (WCW) in TGD. The possible non-existence of higher levels (perhaps implied if M_M127 is not prime) could perhaps be interpreted by saying that there is no "world of WCWs"!

  2. Rather remarkably, for M_7, which corresponds to the genetic code (see this), the inequality serving as a consistency condition is saturated. One can say that any set of 64 mutually consistent statements at the M_7 level can be represented in terms of 64 Boolean maps at the M_3 level representable in terms of zero energy states. One obtains an explicit identification of the Boolean algebras involved in terms of spin and isospin states of fermions in the TGD framework at the level M_7, so that the genetic code seems to be realized at the fundamental elementary particle level thanks to the dimension D=8 of the imbedding space. Even more, the level M_127 corresponding to the memetic code emerges in the second quantization of fermions at the M_7 level. Here the color triplet property of quarks, the color singletness of leptons, and the identification of elementary particles as pairs of wormhole contacts are in an essential role.

The conclusion would be that in the TGD Universe the genetic code and its memetic counterpart are realized at the level of fundamental particles. Already earlier I had ended up with alternative realizations at the level of dark nucleons and sequences of 3 dark nucleons (see this).

For details see the chapter Genes and Memes of "Genes and Memes" or the article Combinatorial Hierarchy: two decades later.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, August 15, 2016

Two styles of theorizing: conservative and radical

The Facebook discussions with Andrei Patrascu and others have created a need to emphasize that there are two styles of working in theoretical physics. These approaches might be called conservative and radical.

  1. The conservative approach takes some axiomatics as a starting point and deduces theorems. In the highly competitive academic environment this approach is good for survival. The best juggler wins, but theorems about the physics of, say, K3 surfaces are not very useful physically.

  2. The second approach might be called radical. The rebel tries to detect the weak points of existing paradigms and to propose new visions generalizing the old ones. This approach is intellectually extremely rewarding but does not lead to academic success. There is a continual battle between the two approaches, and the first approach has been the winner for about four decades since the establishment of the standard model (and has perhaps led to the recent dead end).

Personally I belong to the camp of rebels. The work that I have done during the last decades is an attempt to clean what I see as the stables of Augeas by going beyond quantization, which I see just as a set of rules producing amplitudes but which is mathematically and conceptually highly unsatisfactory.

The rough physical idea was that in the classical theory space-times are 4-surfaces in H = M4× CP2 satisfying some variational principle: this allows one to lift Poincare invariance to the level of H and to have Poincare charges as Noether charges, so that the energy problem of general relativity is circumvented. My personal conviction is that the loss of Noether charges is the deep reason for the failure to quantize general relativity: the success of string models would reflect the fact that in string models Poincare invariance is retained.

The challenge is to quantize this theory.

  1. I of course started by trying to apply canonical quantisation taking GRT as a role model. After its failure I tried the path integral, in fashion at that time: every young theoretician wanted to perform a really difficult path integral explicitly and become the hero! It turned out that these methods failed completely due to the non-linearity of any general coordinate invariant variational principle dictating the dynamics of the space-time surface.

    I had good luck since this failure forced me quite soon to deeper waters: at this point my path deviated radically from that still followed by colleagues. Note that canonical quantization relies also on Newtonian time, as does the notion of unitary time evolution, and this is conceptually highly unsatisfactory in the new framework.

  2. Around 1985 I indeed realized that a much more radical approach is required, and around 1990 I had finally solved the problem at the general level. TGD must be formulated as a geometrization not only of classical physics but also of quantum theory by geometrizing the infinite-D "world of classical worlds" (WCW) consisting of 3-surfaces.

    WCW must be endowed with Kähler geometry - already in the case of the much simpler loop spaces Freed showed that the Kähler geometry is unique. Physics would be unique from the mere mathematical existence of the WCW geometry!

    Also superstringers had this idea for some time, but during the wandering in the landscape they began to see the total loss of predictivity as a blessing. After LHC they speak only about formal string theory, so that the conservative approach has taken the lead.

    Kähler function would correspond to the action for a preferred extremal of Kähler action and have an interpretation as the analog of a Bohr orbit. Classical physics in the sense of Bohr orbitology would be an exact part of quantum theory rather than a limit. This follows from general coordinate invariance (GCI) alone.

  3. Physical states would correspond to spinor fields in WCW: for a given 3-surface they correspond to fermionic Fock states. An important point is that WCW spinor fields are formally purely classical: "quantization without quantization", as Wheeler would say.

    Induced spinor fields are quantized at the space-time surface, and the gamma matrices of WCW are linear combinations of fermionic oscillator operators, so that at the space-time level only the quantization of free fermions is needed, and anticommutativity has a geometric interpretation at the WCW level: fermionic statistics is geometrized.

  4. The generalisation of the S-matrix in zero energy ontology (ZEO) - also needed - is associated with the modes of the WCW Dirac operator satisfying the analog of the Dirac equation as conditions stating supersymplectic invariance (formally analogous to Super Virasoro conditions in string models). One just solves the free Dirac equation in WCW!

    Childishly simple at the general conceptual level, but the practical construction of the S-matrix as the coefficients of the zero energy state, identified as a bilinear formed by positive and negative energy states, is a really hard task!

The only way to proceed is to identify general principles.
  1. General Coordinate Invariance is what led to the idea that the WCW geometry assigns to a given 3-surface (more precisely, to a pair of 3-surfaces at the boundaries of the causal diamond) a unique space-time surface as a preferred extremal of Kähler action (the most plausible guess). Kähler function is the value of Kähler action for the regions of the space-time surface with Euclidian signature of the induced metric, and the Minkowskian regions give an imaginary contribution as the analog of a QFT action. The mathematically ill-defined path integral transforms to a well-defined functional integral over 3-surfaces.

  2. Infinite-dimensional WCW geometry requires maximal symmetries. Four-dimensionality of the M4 factor and of the space-time surface is a necessary condition, since 3-D light-like surfaces by their metric 2-dimensionality allow an extension of the ordinary 2-D conformal invariance. M4 and CP2 are unique in that they allow a twistor space with Kähler structure. The twistorial lift of TGD predicts a cosmological term as an additional term besides Kähler action in the dimensional reduction of the 6-D Kähler action, and also predicts the value of the Planck length as the radius of the sphere of the twistor bundle having the sphere as fiber and the space-time surface as base. I am still not sure whether the twistorial lift is really necessary or whether the cosmological constant and gravitational constant emerge in some other manner.

  3. An infinite number of conditions stating the vanishing of classical super-symplectic Noether charges (not all of them) is satisfied and guarantees the strong form of holography (SH) implied by the strong form of general coordinate invariance (SGCI): space-time dynamics is coded by 2-D string world sheets and partonic 2-surfaces serving as "space-time genes": a close connection with string models is obtained. These conditions are satisfied also by the quantal counterparts of the super-symplectic charges, and there is a strong formal resemblance with Super Virasoro conditions. These conditions include also the analog of the Dirac equation in WCW.

  4. Feynman/twistor/scattering diagrammatics (the last one is the best choice) is something real and must be generalized, and the diagrams have a concrete geometric and topological interpretation at the space-time level: it must be emphasized and thickly underlined that also this is something completely new. By the strong form of holography this diagrammatics reduces to a generalization of string diagrammatics, and an 8-D generalization of twistor diagrammatics based on the octonionic representation of sigma matrices is highly suggestive. Recall that massive particles are a nuisance for the twistor approach, and massive momenta in the 4-D sense would correspond to massless momenta in the 8-D sense.

    The twistor approach suggests a Yangian generalization of the super-symplectic symmetries with polylocal generators (polylocal with respect to partonic 2-surfaces). Yangian symmetries should dictate the S-matrix as in the twistor Grassmann approach. The symmetries are therefore monstrous, and the formidable challenge is to understand them mathematically.

  5. One has half a dozen new deep principles: physics as WCW spinor geometry, with quantum physics formulated in terms of the modes of classical WCW spinor fields so that the only genuinely quantal aspect of quantum theory would be state function reduction; SGCI implying SH; maximal isometry group of WCW to guarantee the existence of its Kähler geometry and its extension to Yangian symmetry; ZEO; number theoretic vision and extension of physics to adelic physics by number theoretic universality; hierarchy of Planck constants assignable to the fractal hierarchy of super-symplectic algebras and inclusions of hyperfinite factors of type II_1.

    I also learned that one cannot proceed without a proper quantum measurement theory, and this led to a theory of consciousness and applications in quantum biology: the most important implication is the new view about the relationship between experienced and geometric time.

    What is remarkable is that all this has followed during almost four decades from a childishly simple observation: the notion of energy in GRT is ill-defined since Noether's theorem does not apply, and one can cure the situation by assuming that space-times are 4-surfaces in an imbedding space having M4 as a Cartesian factor. This should show how important it is for a theoretician to be keenly aware of the weak points of the theory.

I believe that I have now identified the general principles (axiomatics) but a huge collective effort would be needed to deduce the detailed rules. There are of course many open problems and questions about details. In any case, this involves a lot of new mathematics and mathematicians are needed to build it.

To get some perspective, a comparison with what most mathematical physicists are doing is in order. They start from a given axiomatics since they want to deduce theorems, to be the most skillful juggler, and get the reward. My goals have been different. I have used these years to identify the axioms of a theory allowing one to lift quantum theory to a real unified theory of fundamental interactions. I have also been forced to use every bit of experimental information, whereas a mathematical physicist could not care less about anomalies.

For a summary of earlier postings see Latest progress in TGD.

Have magnetic monopoles been detected?

LNC scientists report that they have discovered magnetic monopoles (see this and this). The claim that free monopoles have been discovered is in my opinion too strong, at least in the TGD framework.

TGD allows monopole fluxes but no free monopoles. Wormhole throats however behave effectively like monopoles when looked at from either space-time sheet, A or B. The first TGD explanation that comes to mind is in terms of 2-sheeted structures with wormhole contacts at the ends and monopole flux tubes connecting the wormhole throats at A and B, so that a closed monopole flux is the outcome. All elementary particles are predicted to be structures of this kind in the scale of the Compton length. The first wormhole throat carries the elementary particle quantum numbers and the second throat a neutrino pair neutralizing the weak isospin, so that the weak interaction is finite ranged. Compton length scales like heff and can be nano-scopic or even large for large values of heff. Also for an abnormally large p-adic length scale, implying a different mass scale for the particle, the size scale increases.

How to explain the observations? Throats with opposite apparent quantized magnetic charges at a given space-time sheet should move effectively like independent particles (although connected by a flux tube) in opposite directions to give rise to an effective monopole current accompanied by an opposite current at the other space-time sheet. This is like having balls at the ends of very soft strings at the two sheets. One must assume that only the current at a single sheet is detected. It is mentioned that the ohmic component corresponds to effectively free monopoles (already having long flux tubes connecting throats with small magnetic string tension). In strong magnetic fields shorter pairs of monopoles are reported to become "ionised" and give rise to a current increasing exponentially as a function of the square root of the external magnetic field strength. This could correspond to a phase transition increasing heff with no change in particle mass. This would increase the length of the monopole flux tube, and the throats would be effectively free magnetic charges in a much longer Compton scale. In the case of elementary fermions the space-time sheet at which the throat carrying the quantum numbers of the fermion resides is preferred.

The analog of color de-confinement comes to mind, and one cannot exclude the color force since a non-vanishing Kähler field is necessarily accompanied by non-vanishing classical color gauge fields. Effectively free motion below the length scale of the wormhole contact would correspond to asymptotic freedom. Amusingly, one would have a zoomed-up representation of the dynamics of colored objects! One can also consider an interpretation in terms of Kähler monopoles: the induced Kähler form corresponds to the classical electroweak U(1) field coupling to weak hypercharge, but asymptotic freedom need not fit with this interpretation. Induced gauge fields are however strongly constrained: the components of the classical color gauge fields are proportional to the Hamiltonians of color rotations and to the induced Kähler form. Hence it is difficult to draw any conclusions.

For a summary of earlier postings see Latest progress in TGD.

Wigner's friend and Schrödinger's cat

I encountered in a Facebook discussion the Wigner's friend paradox (see this and this). Wigner leaves his friend in the laboratory together with Schrödinger's cat, and the friend measures the state of the cat: the outcome is "dead" or "alive". Wigner returns and learns from his friend what the state of the cat is. The question is: was the state of the cat fixed already earlier, or only when Wigner learned it from his friend? In the latter case the state of the friend and the cat would have been a superposition of pairs in which the cat was alive and the friend knew this, and in which the cat was dead and the friend knew this. Entanglement between the cat and the bottle would have been transferred to entanglement between cat+bottle and Wigner's friend. Recall that this kind of information transfer occurs in quantum computation, and quantum teleportation allows the transfer of an arbitrary quantum state but destroys the original.

The original purpose of Wigner was to demonstrate that consciousness is involved with the state function collapse.
The TGD view is that the state function collapse can be seen as a moment of consciousness. More precisely, self as a conscious entity corresponds to a repeated state function reduction sequence to the same boundary of the causal diamond (CD). One might say that self is a generalized Zeno effect in Zero Energy Ontology (ZEO). The first reduction to the opposite boundary of the CD means the death of self and re-incarnation at the opposite boundary as a time-reversed self. The experienced flow of time corresponds to the shift of the non-fixed boundary of the CD, reduction by reduction, farther from the fixed boundary - also the state at it changes. Thus subjective time as a sequence of reductions is mapped to clock time, identifiable as the temporal distance between the tips of the CD. The arrow of time is generated but changes in death-reincarnation.

In TGD inspired theory of consciousness the intuitive answer to the question of Wigner looks obvious. If the friend measured the state of the cat, it was indeed dead or alive already before Wigner arrived. What remains is the question what it means for Wigner, the "ultimate observer", to learn about the state of the cat from his friend. The question is about what conscious communications are.

Consider first the situation in the framework of standard quantum information theory.

  1. Quantum teleportation could make it possible to transfer an arbitrary quantum state from the brain of Wigner's friend to Wigner's brain. Quantum teleportation involves the generation of a Bell state of qubits assignable to Wigner's friend (A) and Wigner (B).

  2. This quantum state can be constructed by a joint measurement of spin components at both A and B. One of the four eigenstates of (by convention) the operator Qz = Jx(1)⊗Jy(2) - Jy(1)⊗Jx(2) is the outcome. For spinors the actions of Jx and Jy change the sign of the Jz eigenvalue, so that it becomes possible to construct the Bell states as eigenstates of Qz.

  3. After that Wigner's friend measures both the qubit representing the cat's state, which is to be communicated, and the qubit at A. The latter measurement does not allow one to predict the state at B. Wigner's friend communicates the two bits resulting from this measurement to Wigner classically. On the basis of these two classical bits Wigner performs some unitary operation on the qubit at his end and transforms it into the qubit that was to be communicated (a minimal numerical sketch of this standard protocol follows below).
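
A minimal numpy sketch of this standard textbook protocol (nothing TGD specific; the qubit order is cat, A, B, and the state to be communicated is an arbitrary example):

    import numpy as np

    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    # Qubit to be communicated (the friend's record of the cat's state).
    psi = np.array([0.6, 0.8], dtype=complex)

    # Bell pair shared by A (friend) and B (Wigner): (|00> + |11>)/sqrt(2).
    pair = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    state = np.kron(psi, pair)

    # Bell-basis measurement on (cat, A); the two classical bits select the
    # unitary correction that Wigner applies to his qubit B.
    outcomes = {
        (0, 0): (np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2), I2),
        (0, 1): (np.array([1, 0, 0, -1], dtype=complex) / np.sqrt(2), Z),
        (1, 0): (np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2), X),
        (1, 1): (np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2), Z @ X),
    }

    for bits, (bell, correction) in outcomes.items():
        phi = bell.conj() @ state.reshape(4, 2)   # B's unnormalized state for this outcome
        phi = correction @ phi
        phi = phi / np.linalg.norm(phi)
        print(bits, np.allclose(phi, psi))        # True for every outcome

Whatever the two classical bits turn out to be, Wigner ends up holding the original qubit, while the friend's copy is destroyed, in accordance with the no-cloning theorem.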


This allows the communication of the qubit representing the measurement outcome (alive/dead). But what about meaning? What guarantees that the meaning of the bit representing the state of the cat is the same for Wigner and his friend? One can also ask how the joint measurement can be realized: it seems to require the presence of a system containing A⊗B. To answer these questions one must introduce some notions of TGD inspired theory of consciousness: the self hierarchy and the subself = mental image identification.

TGD inspired theory of consciousness predicts that during communication Wigner and his friend form a larger entangled system: this makes possible the sharing of meaning. Directed attention means that subject and object are entangled. The magnetic flux tubes connecting the two systems would serve as a correlate for the attention. This mechanism would be at work already at the level of molecular biology. Its analog would be wormholes in the ER-EPR correspondence proposed by Maldacena and Susskind. Note that directed attention brings to mind the generation of the Bell entangled pair A-B. It would also make quantum teleportation possible.

Wigner's friend could also symbolize the "pointer of the measurement apparatus" constructed to detect whether cats are dead or alive. Consider this option first. If the pointer is a subsystem defining a subself of Wigner, it would represent a mental image of Wigner and there would be no paradox. If a qubit in the brain of Wigner's friend replaces the pointer of the measurement apparatus, then during communication Wigner and his friend form a larger entangled system experiencing this qubit. Perhaps this temporary fusion of selves allows one to answer the question about how common meaning is generated. Note that this would not require the quantum teleportation protocol but would allow it.

Negentropically entangled objects are key entities in TGD inspired theory of consciousness, and the challenge is to understand how these could be constructed and what their properties could be. These states are diametrically opposite to unentangled eigenstates of single particle operators, usually elements of the Cartan algebra of a symmetry group. The entangled states should result as eigenstates of poly-local operators. Yangian algebras involve a hierarchy of poly-local operators, and twistorial considerations inspire the conjecture that the Yangian counterparts of the super-symplectic and other algebras, made poly-local with respect to partonic 2-surfaces or the end-points of the boundaries of string world sheets at them, are symmetries of quantum TGD. Could Yangians allow one to understand maximal entanglement in terms of symmetries?

  1. In this respect the construction of maximally entangled states using the bi-local operator Qz = Jx⊗Jy - Jy⊗Jx is highly interesting, since entangled states would result by state function reduction. A single particle operator like Jz would generate un-entangled states. The states obtained as eigenstates of this operator have permutation symmetries. The operator can be expressed as Qz = f_z^ij J_i⊗J_j, where f^ABC are the structure constants of SU(2), and could be interpreted as the co-product associated with the Lie algebra generator Jz. Thus it would seem that unentangled states correspond to eigenstates of Jz and the maximally entangled states to eigenstates of the co-generator Qz. A kind of duality would be in question (a small numerical check appears after this list).

  2. Could one generalize this construction to n-fold tensor products? What about other representations of SU(2)? Could one generalize from SU(2) to an arbitrary Lie algebra by replacing Cartan generators with suitably defined co-generators and the spin 1/2 representation with the fundamental representation? The optimistic guess would be that the resulting states are maximally entangled and excellent candidates for states for which negentropic entanglement is maximized by NMP.

  3. A co-product is needed, and there exists a rich spectrum of algebras with co-product (quantum groups, bialgebras, Hopf algebras, Yangian algebras). In particular, Yangians of Lie algebras are generated by ordinary Lie algebra generators and their co-generators subject to constraints. The outcome is an infinite-dimensional algebra analogous to one half of a Kac-Moody algebra, with the analog of the conformal weight N counting the number of tensor factors. Witten gives a nice concrete explanation of the Yangian, for which the co-generators of T^A are given as Q^A = ∑_(i<j) f^ABC T^B_i ⊗ T^C_j, where the summation is over discrete ordered points, which could now label partonic 2-surfaces or points of them or points of a string like object. For a practically totally incomprehensible description of the Yangian one can look at the Wikipedia article.

  4. This would suggest that the eigenstates of the Cartan algebra co-generators of the Yangian could define an eigen basis of the Yangian algebra dual to the basis defined by the totally unentangled eigenstates of the generators, and that the quantum measurement of poly-local observables defined by the co-generators creates entangled and perhaps even maximally entangled states. A duality between totally unentangled and completely entangled situations is suggestive and analogous to that encountered in the twistor Grassmann approach, where conformal symmetry and its dual are involved. A beautiful connection between a generalization of Lie algebras, quantum measurement theory and quantum information theory would emerge.
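
A small numerical check of the claim made in item 1 above (my own sketch): diagonalizing Qz = Jx⊗Jy - Jy⊗Jx for two spin-1/2 particles shows that the two non-degenerate eigenstates are maximally entangled, i.e. the reduced density matrix of either spin is proportional to the unit matrix. The zero eigenvalue is doubly degenerate, so those eigenvectors are fixed only up to a choice of basis.

    import numpy as np

    # Spin-1/2 generators J_i = sigma_i / 2.
    Jx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
    Jy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2

    # Bi-local co-generator Qz = Jx (x) Jy - Jy (x) Jx.
    Qz = np.kron(Jx, Jy) - np.kron(Jy, Jx)

    vals, vecs = np.linalg.eigh(Qz)
    for val, vec in zip(vals, vecs.T):
        m = vec.reshape(2, 2)
        rho1 = m @ m.conj().T          # reduced density matrix of the first spin
        print(round(val, 2), np.round(np.diag(rho1).real, 2))
    # The eigenvalues -1/2 and +1/2 come with rho1 = I/2: maximal entanglement.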

For a summary of earlier postings see Latest progress in TGD.

Saturday, August 13, 2016

What does one mean with quantum fluctuations in TGD framework?

The notion of quantum fluctuation is in my opinion far from well-defined. Often the Uncertainty Principle is assigned with quantum fluctuations. An eigenstate of, say, momentum is necessarily maximally de-localized, and one could say that there are quantum fluctuations in position. The same applies to eigenstates of energy. Personally I would prefer a stronger notion, for which quantum fluctuation would be analogous to a thermodynamical fluctuation at criticality.

The path integral formalism provides a stronger definition. In the path integral formalism one interprets the transition amplitude as a sum over all paths - not only the classical one - leading from a given initial state to a given final state. In the stationary phase approximation, which makes the perturbative approach possible, one performs the path integral around the classical path. The paths which differ from the classical path correspond naturally to quantum fluctuations. For interacting quantum field theories (QFT) the expansion in powers of the coupling constant - say α = e^2/(4π×hbar) - would give radiative corrections identifiable as being due to quantum fluctuations. The problem of the path integral formalism is that the path integral does not exist in a strict mathematical sense.

Some relevant ideas about TGD

To discuss the situation in the TGD framework one must first introduce some key ideas of TGD related to path and functional integrals. The first thing to notice is of course that the path integral and the functional integral might be an un-necessary historical load in the definition of scattering amplitudes in the TGD framework. Scattering diagrams are however central in QFT and also in the dramatically simpler twistorial approach. In ZEO one indeed expects that diagrams in some form are present. The functional integral over 3-surfaces replaces the path integral over all 4-surfaces - the first naive guess for quantization, which completely fails in the TGD framework. The recent view indeed is that the scattering amplitudes reduce to discrete sums of generalized scattering diagrams.

  1. In the TGD framework the path integral is replaced with a functional integral over pairs of 3-surfaces at the boundaries of causal diamonds (CDs; CDs form a scale hierarchy). The functional integral is weighted by a vacuum functional, which is a product of two exponents. The real exponent guarantees the convergence of the functional integral and hopefully makes it mathematically well-defined. The exponent of imaginary phase is analogous to the action exponential appearing in QFTs and gives rise to the crucial interference effects. In Zero Energy Ontology (ZEO) one can say that TGD is a complex square root of thermodynamics with the partition function replaced with an exponential of a complex quantity, so that a fusion of QFT and thermodynamics is obtained.

  2. There is no integration over paths, and the interpretation is in terms of holography implying that a pair of 3-surfaces is accompanied by a highly unique space-time surface analogous to a Bohr orbit.

    Holography follows from general coordinate invariance (GCI): the definition of the WCW Kähler geometry must assign to a given 3-surface(!) a hopefully unique space-time surface for 4-D(!) general coordinate transformations to act on. This space-time surface corresponds to a preferred extremal of Kähler action.

  3. One can strengthen the notion of holography by demanding that it does not matter whether one identifies the basic objects as pairs of space-like 3-surfaces at the boundaries of the causal diamond CD or as light-like 3-surfaces between Euclidian and Minkowskian space-time regions defining "orbits of partonic 2-surfaces". 2-D string world sheets carrying the induced fermion fields, together with the partonic 2-surfaces, would carry all the information needed to construct physical states: one would have strong form of holography (SH). This would mean an almost string model like description of TGD. Preferred extremals satisfy at their ends an infinite number of conditions analogous to Super Virasoro conditions and defining the analog of the Dirac equation at the level of WCW. This is what makes the situation almost - or effectively - 2-dimensional.

    The localization of the modes of induced spinor fields to 2-surfaces follows from the physically well-motivated requirement that the modes have a well-defined eigenvalue of em charge. This demands that the induced W gauge potentials vanish: the condition requires a 2-D CP2 projection and is in the generic situation satisfied at string world sheets. Also number theoretic arguments favor the condition: in particular, the idea that string world sheets are either complex or co-complex 2-surfaces of a quaternionic space-time surface is highly attractive. The boundaries of string world sheets would in turn be real/co-real (imaginary) "surfaces" of string world sheets. It is not clear how unique the choice of string world sheets is.

  4. It would be very nice if preferred extremals were unique. In fact, the extended number theoretic vision suggests that this might not be the case. There could be a kind of gauge symmetry analogous to that encountered in M-theory, where two different Calabi-Yau geometries describe the same physics.

    The number theoretic vision states that space-time surfaces are correlates for sequences of algebraic operations transforming an incoming collection of algebraic objects to an outgoing collection. There would be an infinite number of equivalent computations, and in the absence of some natural cutoff this in turn suggests that an infinite number of space-time surfaces - generalized scattering diagrams - corresponds to the same scattering amplitude.

    This would extend the old-fashioned string model duality to an infinite number of dualities allowing one to transform all loopy diagrams to braided tree diagrams, as in a QFT without interactions. The functional integration over WCW would not involve a summation over different topologies of generalized scattering diagrams; a choice of gauge would select one of them. In a similar manner, in the hadronic string model one does not sum separately over s-channel and t-channel exchanges.

    It must be however emphasized that these loops are topological, and include besides stringy loops (having a different physical interpretation in TGD: the particle just travels along different paths as in the double slit experiment) also a new kind of loops due to the new kind of vertices analogous to those of ordinary Feynman diagrams. The new element is that the lines of Feynman diagrams become 4-D - orbits of 3-surfaces - and at generalized vertices these generalized lines meet at their ends. At this kind of vertex, not encountered in string models, the 4-surface is locally singular although the 3-surface at the vertex is non-singular. In string models string world sheets are non-singular but strings are singular at vertices (an eyeglass-type closed string is the basic example).

  5. There is also a functional integral over small deformations of a diagram with a given topology (which could be chosen to be the tree topology). Quantum criticality suggests that coupling constants do not evolve continuously: they are analogous to critical temperature and change in a phase-transition-like manner as the character of quantum criticality changes. Also p-adic considerations suggest that the coupling constant evolution reduces to a discrete p-adic coupling constant evolution. Coupling constants would be piecewise constant and depend only on the p-adic length scale and the value of Planck constant heff=n× h (see the sketch after this list). The theory would be as near as possible to a physically trivial theory in which coupling constants do not evolve at all. The local vanishing of radiative corrections would guarantee the absence of divergences and would have an interpretation in terms of the integrability of TGD.

  6. By SH the functional integral should reduce to that over string world sheets and partonic 2-surfaces assignable to special preferred extremals. For them the vacuum functional would receive contributions from Euclidian and Minkowskian regions and have maximum modulus and stationary phase. The functional integral over deformations of these 2-surfaces would reduce to an exactly calculable Gaussian. As a matter of fact, the Gaussian determinant and the metric determinant from WCW geometry should cancel so that one would have only a discrete sum of action exponentials - perhaps only a single one for a given diagram topology, the topologies being equivalent as representations of the same series of algebraic computations by the generalization of string model duality. This would be a highly desired result from the point of view of number theoretic universality.
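
The discrete coupling constant evolution of point 5 can be caricatured in a few lines of Python: the coupling is a piecewise-constant lookup in the p-adic scale index k and in n = heff/h rather than a solution of a renormalization group equation. The per-scale values below are invented placeholders, and the scaling αK → αK/n is taken from the proposal that scaling h → heff = n×h makes a strongly coupled phase perturbative; the sketch is illustrative, not a derivation.

  # Minimal sketch: a piecewise-constant coupling depending only on the
  # p-adic length scale index k and on n = heff/h. All numerical values
  # are illustrative placeholders, not predictions.
  ALPHA0 = {89: 1/120, 107: 1/137}      # hypothetical per-scale values

  def alpha_K(k, n):
      # No continuous running: the coupling jumps only when k or n jumps,
      # in a phase-transition-like manner.
      return ALPHA0.get(k, 1/137) / n

  print(alpha_K(107, 1), alpha_K(107, 512))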

One could of course challenge the entire idea of a functional integral. Why not just replace the functional integral with a sum over amplitudes assignable to preferred extremals corresponding to maxima/stationary phase 3-surfaces, weighted by the exponential of Kähler action? Classically the theory would be effectively an on mass shell theory. This would automatically give number theoretic universality. If the generalization of the duality symmetry inspired by the idea of scattering as computation holds, then one could include only braided tree diagrams.
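
A caricature of this option in Python: the amplitude is a finite sum of complex exponentials, one per preferred extremal, with the real Kähler exponent providing the weight and the imaginary exponent the interference. The two extremals and their action values are invented for illustration.

  import cmath

  # Amplitude as a discrete sum over preferred extremals, each contributing
  # exp(-S_K + i*S): real Kaehler exponent for weighting, imaginary exponent
  # for interference. No functional integration anywhere.
  extremals = [
      {"S_K": 2.0, "S": 0.7},   # hypothetical preferred extremal 1
      {"S_K": 2.3, "S": 1.9},   # hypothetical preferred extremal 2
  ]

  amplitude = sum(cmath.exp(-e["S_K"] + 1j*e["S"]) for e in extremals)
  print(amplitude)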

Quantum fluctuations in ZEO

What could quantum fluctuations mean in the TGD Universe? Here the quantum criticality of TGD suggests that the interesting quantum fluctuations are associated with quantum criticality.

  1. One can start from a straightforward generalization of the definition suggested by the path integral approach. Holography would suggest that quantum fluctuations correspond to a delocalization in the space of highly correlated pairs of 3-surfaces. This is a nice definition, consistent also with the weakest definition relying on the Uncertainty Principle, but it looks somewhat trivial.

  2. Quantum criticality is a key feature of TGD and suggests that quantum fluctuations are analogous to thermodynamical fluctuations at criticality and thus involve long range correlations and non-determinism. Thermodynamical fluctuations induce phase transitions; the same should apply to quantum critical quantum fluctuations.

    In the adelic approach to TGD, p-adic primes and values of heff correspond to various quantum phases of matter. In ZEO a phase transition should correspond, for the space-time surface of a particle, to a situation in which the two ends of the CD correspond to different phases, that is to different values of the p-adic prime p and/or heff and other collective parameters: note that algebraic extensions of rationals define an evolutionary hierarchy and should also appear among such parameters.

    A zero energy state would have well-defined values of the prime p and/or heff at the passive boundary of the CD left unchanged by a sequence of repeated state function reductions (generalized Zeno effect). At the active end of the CD, which changes during the sequence and at which the members of state pairs change, one would have a quantum superposition of phases with different values of p and/or heff. This conforms with the idea of what quantum critical fluctuations should mean: the passive end would not fluctuate but the active end would. Quantum fluctuations would become part of the definition of the quantum state in ZEO. The state function reduction to the opposite boundary of the CD would change the roles of the active and passive boundaries. This would have an interpretation as a quantum phase transition leading to a well-defined phase at the formerly active boundary. Hence also the notion of quantum phase transition would become precisely defined.

For a summary of earlier postings see Latest progress in TGD.

Tuesday, August 09, 2016

Misbehaving b-quarks and proton's magnetic body

Science News tells about misbehaving bottom quarks (see also the ICHEP talk). Or perhaps one should talk about misbehaving b-hadrons - hadrons containing b-quarks. The misbehavior appears in proton-proton collisions at LHC. This is not the only anomaly associated with the proton: the spin of the proton is still poorly understood and the proton charge radius is not quite what it should be. Now we learn that there are more b-containing hadrons (b-hadrons) in directions deviating considerably from the direction of the proton beam: the discrepancy factor is of order two.

How could this reflect the structure of the proton? Color magnetic flux tubes are the new TGD based element in the model of the proton: could they help? I assign to the proton color magnetic flux tubes with a size scale much larger than the proton size - something like the electron Compton length: most of the mass of the proton is color magnetic energy associated with these tubes, and they define the non-perturbative aspect of hadron physics in the TGD framework. For instance, constituent quarks would be valence quarks plus their color flux tubes, and current quarks just the quarks, whose masses give a rather small contribution to the proton mass.

What happens when two protons collide? In the cm system the dipolar flux tubes get contracted in the direction of motion by Lorentz contraction. Suppose that b-hadrons tend to leave the proton along the color magnetic flux tubes (also ordinary em flux tubes could be in question). The Lorentz contraction of the flux tubes means that they tend to point in directions orthogonal to the collision axis. Could this explain the misbehavior of b-hadrons?

But why should only b-hadrons, or some fraction of them, behave in this manner? Why not also lighter hadrons containing c and s quarks? Could this relate to the much smaller size of the b-quark defined by its Compton length λ = hbar/m(b), m(b) = 4.2 GeV, which is much shorter than the Compton length of the u-quark (the mass of the constituent u quark is something like 300 MeV and the mass of the current u quark a few MeV)? Could it be that the lighter hadrons do not leave the proton along flux tubes? Why? Are these hadrons or the corresponding quarks too large to fit (topologically condense) inside a protonic flux tube? The b-quark is much more massive and has a considerably smaller size than say the c-quark with mass m(c) = 1.5 GeV, and could be able to topologically condense inside the protonic flux tube. The c quark would be too large, which suggests that the radius of the flux tubes lies between the b and c Compton lengths and is thus smaller than the proton Compton length. This picture conforms with the view of perturbative QCD in which the primary processes take place at the parton level. The hadronization would occur over a longer time scale and generate the magnetic bodies of the outgoing hadrons. The alternative idea that also the color magnetic body of the hadron should fit inside the protonic color flux tube is not consistent with this view.
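
For concreteness, a few lines of Python comparing the Compton lengths in question (masses as quoted above; the current u-quark mass is taken to be 3 MeV for illustration):

  # Compton lengths lambda = hbar/(m*c) for the quark masses quoted above,
  # using hbar*c = 197.327 MeV*fm.
  HBARC = 197.327  # MeV*fm

  masses_MeV = {"b": 4200.0, "c": 1500.0, "u (constituent)": 300.0,
                "u (current)": 3.0, "proton": 938.3}

  for name, m in masses_MeV.items():
      print(f"{name:16s} lambda = {HBARC/m:10.4f} fm")

The b-quark comes out at about 0.05 fm, the c-quark at about 0.13 fm, and the proton at about 0.21 fm, so a tube radius of roughly 0.1 fm would indeed admit b but not c.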

For a summary of earlier postings see Latest progress in TGD.

Friday, August 05, 2016

Is the new physics really so elementary as believed?

Last night I was thinking about the situation in particle physics. The inspiration of course comes from the 750 GeV particle, which does not officially exist anymore. I am personally puzzled. The various bumps of which Lubos has kept count fit nicely to the spectrum of mesons of M89 hadron physics (almost-)predicted by TGD (see this, this, this, and this). They have precisely the predicted masses, differing by a factor 512 from those of M107 hadron physics, the good old hadron physics. Is it really possible that the Universe has made a conspiracy to create so many statistical fluctuations at just the correct places? Could it be that something is wrong in the basic philosophy of experimental particle physics, which leads to a loss of information?

First of all, it is clear that new physics is badly needed to solve various theoretical problems such as the fine tuning problem for the Higgs mass, to say nothing about the problem of understanding particle mass scales. New physics is necessary but it is not found. What goes wrong? Could it be that we are trying to discover the wrong type of new physics?

Particle physics is thought to be about elementary objects. There would be no complications like those appearing in condensed matter physics: criticality or even quantum criticality, exotic quasiparticles, and so on. This simplifies the situation enormously, but still one is dealing with a gigantic complexity. The calculation of scattering rates is technically extremely demanding but basically an application of well-defined algorithms; Monte Carlo modelling of the actual scattering experiments, such as high energy proton-proton collisions, is also needed. One must also extract the signal from a gigantic background. These are extremely difficult challenges, and LHC is a marvellous achievement of collaboration and coherence: like a string quartet but with 10,000 players.

What one does is however not just to look at what is there. There is no label on the particle telling "I am the exotic particle X that you are searching for". What one can do is to check whether the small effects - signatures - caused by a given particle candidate can be distinguished from the background noise. Finding a needle in a haystack is child's play when compared with what one must achieve. If some totally new physics not fitting into the basic paradigms behind the search algorithms is there, it is probably lost.

Returning to the puzzle under consideration: the alarming fact is that the colliding protons at LHC form a many-particle system! Could it happen that the situation is even more complex than believed, and that phenomena like emergence and criticality encountered in condensed matter physics are present and make life even more difficult?

As a matter of fact, already the phase transition from the confined phase to perturbative QCD, involving thermodynamical criticality, would be an example of this complexity. The surprise from RHIC and later LHC was that something indeed happened but it was different from what was expected. The transition did not seem to lead to perturbative QCD, which predicts thermal "forgetfulness" and isotropic particle distributions from QCD plasma as black body radiation. For peripheral collisions - colliding particles just touching - indications for string like objects emerged. The notion of color glass was introduced and even AdS/CFT was tried (strings in 10-D space-time!) but without considerable success. It is as if a new kind of hadron physics with long range correlations in the proton scale but with an energy scale of hundreds of proton masses had been present. This is mysterious since the Compton lengths for this kind of objects should be of the order of the weak boson Compton length.

In the TGD Universe this new phase would be M89 hadron physics with a large value heff = n×h, with n = 512 scaling the M89 hadron Compton length up to the proton size scale to give long range correlations and fluctuations in the proton scale characterizing quantum criticality. The instanton density I ∝ E•B of the colliding protons would appear as a state variable analogous to, say, pressure in condensed matter and would be large just for the peripheral collisions. The production amplitude for the pseudoscalar mesons of the new hadron physics would by anomaly arguments be obtained as the Fourier transform of I. The value of I would be essentially zero for head-on collisions and large only for peripheral collisions - particles just touching - in regions where E and B tend to be parallel. This would mean criticality. There could be a similar criticality with respect to energy. If the experimenter poses kinematical cutoffs - say pays attention only to collisions that are not too peripheral - the signal is lost.
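
This geometric point can be checked with a toy computation. The Python sketch below models the two protons as point charges moving in opposite directions along the z-axis, with non-relativistic Coulomb fields and B = v×E (c=1); all scales are arbitrary and the model is only meant to show the impact-parameter dependence, not realistic proton fields. For zero impact parameter E1 and B2 are orthogonal everywhere, so the overlap measure vanishes; at finite impact parameter it is non-vanishing.

  import numpy as np

  # Toy estimate of the "instanton density" overlap |E1.B2| for two passing
  # charges as a function of impact parameter b. Crude stand-in for boosted
  # proton fields; eps is a softening parameter, units are arbitrary.
  n = 81
  ax = np.linspace(-4, 4, n)
  X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
  dV = (ax[1] - ax[0])**3
  eps = 0.1

  def E_field(x0):
      # Softened Coulomb field of a charge sitting at (x0, 0, 0).
      rx, ry, rz = X - x0, Y, Z
      r3 = (rx**2 + ry**2 + rz**2 + eps**2)**1.5
      return rx/r3, ry/r3, rz/r3

  for b in [0.0, 0.5, 1.0, 2.0, 3.0]:
      E1x, E1y, E1z = E_field(+b/2)      # charge 1, moving along +z
      E2x, E2y, E2z = E_field(-b/2)      # charge 2, moving along -z
      B2x, B2y = +E2y, -E2x              # B2 = (-z_hat) x E2
      overlap = np.sum(np.abs(E1x*B2x + E1y*B2y)) * dV
      print(f"b = {b:3.1f}  integral |E1.B2| dV = {overlap:.4f}")

The printout gives exactly zero for b=0 and non-zero values for b>0, in line with the claim that the instanton interaction switches on only away from head-on collisions.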

This would not be new. Already in the seventies anomalous production of electron-positron pairs, perhaps resulting from a pseudoscalar state created when the collision energy allows one to overcome the Coulomb wall, was reported: criticality again. The TGD model was in terms of leptopions (electro-pions) (see this), and later evidence for their muonic and tau counterparts has been reported. The model of course had a bad problem: the mass of the leptopion is essentially twice that of the lepton, and one expects that the colored lepton is also light. Weak boson decay widths do not allow this. If the leptopions are dark in the TGD sense, the problem disappears. These exotic bumps were later forgotten: a good reason for this is that they are not allowed by the basic paradigms of particle physics, and if they appear only at criticality they are bound to suffer the fate of being labelled as statistical fluctuations.

This has served as an introduction to a heretic question: could it be that LHC did not detect 750 GeV bosons because the kinematical cuts of the analysis eliminate the peripheral collisions in which the protons just touch each other? Could these candidates for pseudoscalars of M89 hadron physics be created by the instanton anomaly mechanism and only in the periphery? And more generally, should particle physicists consider the possibility that they are no longer studying collisions of simple elementary systems?

To find M89 pseudoscalars one should study peripheral collisions in which the protons do not collide quite head-on and in which M89 pseudoscalars could be generated by the em instanton mechanism. In the peripheral situation it is easy to measure the energy emitted as particles since strong interactions are effectively absent - only the E•B interaction plus the standard em interaction remains if the TGD view is right (note that for neutral vector mesons the generalization of vector meson dominance, based on an effective action coupling the neutral vector boson linearly to the em gauge potential, is highly suggestive). Unfortunately, peripheral collisions are undesired since the beams are deflected from their head-on course! These events are however detected, but the data usually end up in the trash bin, as do the deflected protons!! Luckily, Risto Orava's team (see this and this) is studying just those p-p collisions which are peripheral! It would be wonderful if they found Cernettes - and maybe also other M89 pseudoscalars - in the trash bin!

A large statistical fluctuation certainly occurred. The interpretation of the large statistical fluctuation giving rise to the Cernette boom could be the occurrence of an unusually large portion of peripheral events allowing the production of M89 mesons, in particular Cernettes.

To sum up, the deep irony is that particle physicists are trying desperately to find new physics although it has been found long ago but swept under the rug since it did not conform with QCD and the standard model. The reductionistic dogma dictates that acceptable new physics must be consistent with the standard model: no wonder that everything indeed continues to be miraculously consistent with the standard model and no new physics is found! The same is true in the gravitational sector: reductionism demands that string models lead to GRT, and the various anomalies challenging GRT are simply forgotten.

For details see the earlier blog post, the chapter New Physics predicted by TGD: part I of "p-Adic Physics" or the article M89 hadron physics and quantum criticality.

For a summary of the earlier postings see Latest progress in TGD.

Cosmic redshift but no expansion of receding objects: one further piece of evidence for TGD cosmology

"Universe is Not Expanding After All, Controversial Study Suggests" was the title of very interesting Science News article telling about study which forces to challenge Big Bang cosmology. The title of course involve the typical popular exaggeration.

The idea behind the study was simple. If the Universe expands, one expects that also astrophysical objects - such as stars and galaxies - should participate in the expansion and should increase in size. The observation was that this does not happen! One does however observe the cosmic redshift, so that it is quite too early to start burying Big Bang cosmology. The finding is nevertheless a strong objection against the strongest version of the expanding Universe. That objects like stars do not participate in the expansion was actually already known when I started to develop TGD inspired cosmology a quarter of a century ago, and the question is whether GRT based cosmology can model this fact naturally or not.

The finding supports TGD cosmology based on many-sheeted space-time. Individual space-time sheets do not expand continuously. They can however expand in a jerk-wise manner via quantum phase transitions increasing the p-adic prime characterizing the space-time sheet of the object by say a factor of two, or increasing the value of heff=n× h for it. This phase transition could change the properties of the object dramatically. If the object and its suddenly expanded variant are not regarded as states of the same object, one would conclude that astrophysical objects do not expand but only comove. The sudden expansions should however be observable and happen also for Earth. I have proposed a TGD variant of the Expanding Earth hypothesis along these lines (see this).
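
A minimal sketch of the difference between the two pictures (the toy scale factor, the jump epochs, and the doubling factor are all invented for illustration):

  # Continuous cosmological scale factor versus an object whose size changes
  # only in discrete phase transitions. Between jumps the object merely
  # comoves and shows no size growth.
  def a(t):
      # toy matter-dominated scale factor
      return (1 + t)**(2/3)

  size = 1.0
  jump_epochs = {4, 9}           # hypothetical quantum phase transitions
  for t in range(0, 11):
      if t in jump_epochs:
          size *= 2.0            # sudden doubling (p -> 2p, or heff scaling)
      print(f"t={t:2d}  a(t)={a(t):6.3f}  object size={size:4.1f}")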

When one approximates the many-sheeted space-time of TGD with GRT space-time, one compresses the sheets into a single region of a slightly curved piece of M4, and the gauge potentials and the deviations of the induced metrics from the M4 metric are replaced with their sums over the sheets to obtain the standard model fields. This operation leads to a loss of information about many-sheetedness. Many-sheetedness demonstrates its presence only through anomalies, such as the different values of the Hubble constant in the scale of large voids and in cosmological scales (see this), the arrival of neutrinos and gamma rays from supernova SN1987A as separate bursts (see this), and the above observation.

One can of course argue that the cosmic redshift is a strong counter argument against TGD: the conservation of energy and momentum implied by Poincare invariance at the level of the imbedding space M4× CP2 does not seem to allow cosmic redshift. This is not the case. Photons arrive from the source without losing their energy. The point is that the properties of the imagined observer change as the distance from the source increases! The local gravitational field defined by the induced metric induces a Lorentz boost of the M4 projection of the tangent space of the space-time surface, so that the tangent spaces at the source and at the receiver are boosted with respect to each other: this causes the redshift as an analog of the Doppler effect in special relativity. This is also a strong piece of evidence for the identification of space-time as a 4-surface in M4× CP2.
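
If the relative boost between the two tangent spaces is characterized by a velocity v, the redshift is given by the standard special-relativistic Doppler formula; a two-line check (the velocity values are arbitrary):

  import math

  # Redshift from a relative boost v between tangent spaces (c=1):
  # 1 + z = sqrt((1+v)/(1-v)). The photon itself loses no energy.
  def redshift(v):
      return math.sqrt((1 + v)/(1 - v)) - 1

  for v in [0.01, 0.1, 0.5]:
      print(f"v = {v:4.2f}  z = {redshift(v):.4f}")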

For details see the chapter More about TGD inspired cosmology of "Physics in Many-sheeted Space-time" or the article Some astrophysical and cosmological findings from TGD point of view.

For a summary of earlier postings see Latest progress in TGD.

Thursday, August 04, 2016

Nothing new at LHC?

Lubos Motl has a commentary on the articles released after the ICHEP 2016 conference held in Chicago. The experimentalists tell: "Nothing going beyond the standard model". Depressing! Especially so because theorists have "known" that the New Physics must be there!

What looks strange from the TGD point of view is that a large number of mesons of the M89 = 2^89-1 hadron physics predicted by TGD - scaled-up variants of the mesons of ordinary hadron physics (to which I assign the Mersenne prime M107 = 2^107-1) - appeared in the older data at lower energies as bumps with the predicted masses (see this).

Is there a nasty cosmic conspiracy to ridicule me? ;-) Or are the produced mesons - at least the light ones - indeed M89 mesons with large heff=n× h, as assumed in the model for the string like objects observed already at RHIC and later at LHC, and produced only at quantum criticality, which would be lost at higher energies? Of course, not a single experimentalist or theorist would take this seriously! Could this explanation apply to the 750 GeV bump, as I first thought? No! This bump was announced in December 2015 on the basis of the first analysis of data gathered since May 15 2015 (see this). Thus the diphoton bump that I identified as the M89 eta meson is lost if one takes the results of the analysis as the final word.

One should of course not give up so easily. If the production mechanism is the same as for the electro-pion (see this), the production amplitude is by anomaly considerations proportional to the Fourier transform of the classical "instanton density" I = E•B. In head-on collisions one tends to have I=0 because E (nearly radial in cylindrical coordinates) and B (field lines rotating around the z-axis) for a given proton are orthogonal, and the fields of the two protons differ only by sign factors when the protons are at the same position. For peripheral collisions, in which also the strange looking production of string like configurations parallel to the beams was observed both in heavy ion and in proton-proton collisions, E1•B2 can be non-vanishing, as one can understand by figuring out what the electric and magnetic fields look like in the cm coordinates. There is clearly a kind of quantum criticality involved also in this sense. Could these events be lost by posing reasonable looking constraints on the production mechanism or a kinematical cutoff? But why would the first analysis have shown the presence of these events? Have some criteria changed?

Addition: To find M89 pseudoscalars one should study peripheral collisions in which the protons do not collide quite head-on and in which M89 pseudoscalars could be generated by the em instanton mechanism. In the peripheral situation it is easy to measure the energy emitted as particles since strong interactions are effectively absent - only the E•B interaction plus the standard em interaction remains if the TGD view is right. Unfortunately, peripheral collisions are undesired since the beams are deflected from their head-on course! These events are however detected, but the data usually end up in the trash bin, as do the deflected protons!! Luckily, Risto Orava's team (see this and this) is studying just those p-p collisions which are peripheral! It would be wonderful if they found Cernettes - and maybe also other M89 pseudoscalars - in the trash bin!

For details see the earlier blog post, the chapter New Physics predicted by TGD: part I of "p-Adic Physics" or the article M89 hadron physics and quantum criticality.

For a summary of the earlier postings see Latest progress in TGD.

Wednesday, August 03, 2016

The new findings about the structure of Milky Way from TGD viewpoint

I learned about two very interesting findings forcing an update of the ideas about the structure of the Milky Way and allowing a test of the TGD inspired Bohr model of the galaxy based on the notion of gravitational Planck constant (see this, this, this, and this).

The first popular article tells about a colossal void extending from the radius r0=150 ly to a radius of r1= 8,000 ly (ly = light year) around the galactic nucleus, discovered by a team led by professor Noriyuki Matsunaga. What has been found is that there are no young stars known as Cepheids in this region. For Cepheids the luminosity and the period of the pulsation in brightness correlate, so that from the period of the pulsation one can deduce the luminosity and from the luminosity the distance. There are however Cepheids in the central region with radius about 150 ly.

The second popular article tells about research conducted by an international team led by Rensselaer Polytechnic Institute professor Heidi Jo Newberg. The researchers conclude that the Milky Way is at least 50 per cent larger than estimated, extending therefore to Rgal= 150,000 ly, and has ring like structures in the galactic plane. The rings are actually ripples in the disk having a higher density of matter: the Milky Way is said to be corrugated, with at least 4 ripples in the disk. The first apparent ring of stars is at a distance of about R0=60,000 ly from the center. Note that R0 is considerably larger than r1=8,000 ly - the ratio is R0/r1= 15/2 - so that this finding need not have anything to do with the first one.

Consider now the TGD based quantum model of the galaxy. Nottale proposed that the orbits of planets in the solar system are actually Bohr orbits with a gravitational Planck constant (different for inner and outer planets and proportional to the product of the masses of the Sun and the planet). In TGD this idea is developed further (see this): ordinary matter would condense around dark matter at spherical cells or tubes with Bohr radii. The Bohr model is certainly an over-simplification but can be taken as a starting point in the TGD approach.

Could Bohr orbitology apply also to the galactic rings, and could it predict the ring radii as the radii with which dark matter concentrations - perhaps at flux tubes - are associated? One can indeed apply Bohr orbitology by assuming the TGD based model for galaxy formation.

  1. Galaxies are associated with long cosmic string like objects carrying dark matter and energy (as magnetic energy) (see this and this). Galaxies are like pearls along a necklace and experience the gravitational potential of the string, which is logarithmic. The gravitational force is of the form F = m v1^2/ρ, where ρ is the orthogonal distance from the cosmic string. Here v1^2 has dimensions of velocity squared (c=1) and is proportional to GT, where T = dM/dl is the string tension of the cosmic string.

  2. Newton's law v^2/ρ = v1^2/ρ gives the observed constant velocity spectrum

    v=v1 .

    The approximate constancy originally led to the hypothesis that there is a dark matter halo (as a matter of fact, the velocity tends to increase slowly). Now there is no halo but a cosmic string orthogonal to the galactic plane: the well-known galactic jets would travel along the string. The prediction is that galaxies are free to move along the cosmic string, and there is indeed evidence for large scale motions. A halo-free toy computation is sketched below.
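
The string gives a flat rotation curve v = v1 by construction, so a more instructive sketch is to compute the "dark halo mass" that an observer assuming a spherical halo would infer from the same flat curve, M_eff(ρ) = v1^2 ρ/G, growing linearly with ρ. The value of v1 is taken from the text; the unit conversion is standard.

  # Enclosed halo mass a halo-believer would infer from v = v1 = constant:
  # M_eff(rho) = v1^2 * rho / G, in solar masses (c = 1 units throughout).
  v1 = 1e-3/3                       # flat rotation velocity, from the text
  GM_sun_ly = 1.48 / 9.46e12        # G*M_sun expressed in light years

  for rho_ly in [1e3, 1e4, 1e5]:
      M_eff = v1**2 * rho_ly / GM_sun_ly
      print(f"rho = {rho_ly:8.0f} ly  inferred halo mass = {M_eff:.2e} M_sun")

At 10^5 ly this gives about 7× 10^10 solar masses - a galactic order of magnitude - showing how a string along the jet axis can mimic a linearly growing halo.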

This was still just classical Newtonian physics. What comes to mind is that one could also apply Bohr quantization of angular momentum to deduce the radii of the orbits.
  1. This requires an estimate for the gravitational Planck constant

    hgr=GMm/v0

    assignable to the flux tubes connecting mass m to the central mass M.

  2. The first guess for v0 would be

    v0=v1 .

    The value of v1 is approximately v1 = 10^-3/3 (units with c=1 are used) (see this).

  3. What about the mass M? The problem is that one does not now have a central mass M describable as a point mass, but an effective mass characterizing the contribution of the cosmic string distributed along the string, and also the mass of the galaxy itself inside the orbit of the star. It is not clear what value of the central mass M should be assigned to the galactic end of the flux tubes.

    One can make guesses for M.

    1. The first guess for M would be the mass of the galaxy, x× 10^12× M(Sun), x∈ [.8,1.5]. The corresponding Schwarzschild radius can be estimated from that of the Sun (3 km) and equals .48 ly for x=1.5. This would give for the mass independent gravitational Compton length the value

      Λgr= hgr/m= GM/v0=rS/2v0 (c=1) .

      For v0=v1 this would give Λgr= 4.5× 10^3 ly for x=1.5. Note that the colossal void extends from 150 ly to 8× 10^3 ly. This guess is very probably too large since M should correspond to the mass within R0 or perhaps even within r0.

    2. A more reasonable guess is that the mass corresponds to the mass within R0=60,000 ly or perhaps even within the radius r0=150 ly. r0 turns out to make sense and gives a connection between the two observations.

  4. The quantization condition for angular momentum reads as

    m v1 ρ = n× hgr/2π .

    This would give

    ρn = n× ρ0 , ρ0 = GM/[2π v1× v0] = Λgr/[2π v1] .

    The radii ρn are integer multiples of a radius ρ0.

    1. Taking M=Mgal, the value of ρ0 for the simplest guess v0=v1 would be about ρ0=2.15× 10^6 ly. This is roughly 36 times larger than the radius R0=6× 10^4 ly of the lowest ring. The use of the mass of the entire galaxy as an estimate for M of course explains the too large value.

    2. By scaling M down by a factor 1/36 one would obtain R0=6× 10^4 ly and M= Mgal/36 ≈ .028× Mgal: this mass should reside within R0, actually within the radius Λgr. Remarkably, the estimate Λgr= 2π GM/v1 with this M gives Λgr= 127 ly, which is somewhat smaller than the radius r0= 150 ly associated with the void. The model therefore relates the widely different scales r0 and R0 assignable to the two findings to each other in terms of the small parameter v0 appearing in the role of a dimensionless gravitational "fine structure constant" αgr= GMm/2hgr= v0/2.

The TGD inspired prediction would be that the radii of the observed rings are integer multiples of the basic radius. 4 rings are reported, implying that the outermost ring should be at a distance of 240,000 ly, which is considerably larger than the claimed updated size of 150,000 ly. The simple quantization as integer multiples would thus not be quite correct, but the orders of magnitude are correct.
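
The arithmetic above can be packed into a few lines of Python. One bookkeeping caveat: to reproduce the quoted numbers, hgr/m has to be read as an h-like quantity, that is Λgr = 2πGM/v0; with the hbar-like reading Λgr = GM/v0 all lengths come out a factor 2π smaller. The sketch follows the first convention.

  import math

  LY_KM = 9.46e12                  # kilometres per light year
  rS_sun_km = 3.0                  # Schwarzschild radius of the Sun, as above
  v0 = 1e-3/3                      # v0 = v1, c = 1

  x = 1.5
  GM = 0.5 * rS_sun_km * x*1e12 / LY_KM     # G*M_gal in light years (rS = 2GM)
  Lambda_gr = 2*math.pi*GM/v0               # gravitational Compton length
  rho0 = Lambda_gr/(2*math.pi*v0)           # lowest Bohr radius (v1 = v0)
  print(f"Lambda_gr = {Lambda_gr:.2e} ly, rho0 = {rho0:.2e} ly")
  # -> Lambda_gr = 4.5e3 ly, rho0 = 2.1e6 ly, as quoted above

  # Scaling M down by 1/36 brings rho0 to the first ring radius and
  # Lambda_gr close to the 150 ly inner region:
  print(f"scaled: Lambda_gr = {Lambda_gr/36:.0f} ly, rho0 = {rho0/36:.1e} ly")

  # Predicted ring radii as integer multiples of R0 = 6e4 ly:
  print([n*6e4 for n in range(1, 5)])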

This suggests that visible matter has condensed around dark matter at Bohr quantized orbits or circular flux tubes. This dark matter would contribute to the gravitational potential and imply that the velocity spectrum for distant stars is not quite constant but increases slowly, as observed. The really revolutionary aspect of this picture is that gravitation would involve quantum coherence in galactic length scales. The constancy of the CMB temperature supports gravitational quantum coherence even in cosmic scales.

For details see the chapter TGD and Astrophysics of "Physics in Many-sheeted Space-time" or the article Three astrophysical and cosmological findings from TGD point of view.

For a summary of earlier postings see Latest progress in TGD.