
Thursday, May 10, 2007

Sierpinski topology and quantum measurement theory with finite measurement resolution

I have been trying to understand whether category theory might provide some deeper understanding of quantum TGD, not just as a powerful organizer of fuzzy thoughts but also as a tool providing genuine physical insights. Kea is also interested in categories, but in a much more technical sense. Her dream is to find a category-theoretical formulation of M-theory as something which is not the 11-D something that makes me rather unhappy as a physicist with a second foot still deep in the mud of low-energy phenomenology.

Kea talks about topos, n-logos,... and their possibly existing quantum variants. I have made it a habit to visit Kea's blog in the hope of stealing some category-theoretic intuition. It is also nice to post comments knowing that they are not censored out immediately if they have the smell of original thought: censoring is quite too often the case in alpha male dominated blogs. It might be that I had luck this morning!

1. Locales, frames, Sierpinski topologies and Sierpinski space

Kea mentioned the notions of locale and frame. From Wikipedia I learned that complete Heyting algebras, which are fundamental to category theory, are the objects of three categories with differing arrows: CHey, Loc, and its opposite category Frm (arrows reversed). Complete Heyting algebras are partially ordered sets which are complete lattices. Besides the basic logical operations there is also an algebra multiplication. From Wikipedia I also learned that locales and the dual notion of frames form the foundation of pointless topology. These topologies are important in topos theory, which does not assume the axiom of choice.

So-called particular point topology assumes the selection of a single point, but I have the physicist's feeling that it is otherwise rather close to pointless topology. Sierpinski topology is this kind of topology. Sierpinski topology is defined in a simple manner: a set is open only if it contains a given point p (plus the empty set). The dual of this topology, defined in the obvious sense, also exists. The Sierpinski space, consisting of just two points 0 and 1, is the universal building block of these topologies in the sense that a map of an arbitrary space to the Sierpinski space provides it with the Sierpinski topology as the induced topology. In category-theoretical terms the Sierpinski space is the initial object in the category of frames and the terminal object in the dual category of locales. This category-theoretic reductionism looks highly attractive to me.
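As a small aside, the particular point topology and its relation to the Sierpinski space are easy to check by brute force on a finite set. The following Python sketch is my own illustration (the toy set X and the point p are arbitrary choices, not anything from the text): it verifies the topology axioms and the continuity of characteristic maps into the Sierpinski space.

```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

X = frozenset({'p', 'a', 'b'})
# Particular point topology: a set is open iff it contains p (plus the empty set).
opens = {frozenset()} | {U for U in powerset(X) if 'p' in U}

# Topology axioms: the open sets are closed under unions and intersections.
for U in opens:
    for V in opens:
        assert U | V in opens and U & V in opens

# Sierpinski space S = {0, 1} with open sets {}, {1}, {0, 1}.
S_opens = [frozenset(), frozenset({1}), frozenset({0, 1})]

# The characteristic map of any open set U is a continuous map X -> S:
# the preimage of every Sierpinski open set is again open in X.
for U in opens:
    chi = lambda x, U=U: 1 if x in U else 0
    for V in S_opens:
        assert frozenset(x for x in X if chi(x) in V) in opens
```

Interchanging the roles of "contains p" and "does not contain p" gives the dual (excluded point) topology mentioned above.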

2. Particular point topologies, their generalization, and finite measurement resolution

Pointless, or rather particular point, topologies might be very interesting from a physicist's point of view. After all, every classical physical measurement has a finite space-time resolution. In the TGD framework discretization by number theoretic braids replaces the partonic 2-surface with a discrete set consisting of algebraic points in some extension of rationals: this brings to mind something which might be called a topology with a set of particular algebraic points.

Perhaps the physical variant of the axiom of choice could be restricted so that only sets of algebraic points in some extension of rationals can be chosen freely. The extension would depend on the position of the physical system in the algebraic evolutionary hierarchy, which also defines a cognitive hierarchy. Certainly this would fit very nicely with the formulation of quantum TGD unifying real and p-adic physics by gluing real and p-adic number fields into a single super-structure via common algebraic points.

There is also a finite measurement resolution in the Hilbert space sense, not taken into account in the standard quantum measurement theory based on factors of type I. In the TGD framework one indeed introduces a quantum measurement theory with a finite measurement resolution, so that complex rays are replaced by included hyper-finite factors of type II1 (HFF, see this).

  • Could topology with particular algebraic points have a generalization allowing a category theoretic formulation of the quantum measurement theory without states identified as complex rays?

  • How to achieve this? In the transition from ordinary Boolean logic to quantum logic in the old-fashioned sense (von Neumann again!) the set of subsets is replaced with the set of subspaces of a Hilbert space. Perhaps this transition has a counterpart as a transition from Sierpinski topology to a structure in which sub-spaces of Hilbert space are quantum sub-spaces, with complex rays replaced by the orbits of the subalgebra defining the measurement resolution. The Sierpinski space {0,1} would in this generalization be replaced with the quantum counterpart of the space of 2-spinors. Perhaps one should also introduce q-category theory, with the Heyting algebra being replaced by q-quantum logic.

3. Fuzzy quantum logic as counterpart for Sierpinski space

This program, which I formulated only after this section had been written, might indeed make sense (ideas never learn to emerge in the logical order of things;-)). The lucky association was to the ideas about fuzzy quantum logic realized in terms of quantum 2-spinors that I had developed a couple of years ago. Fuzzy quantum logic would reflect the finite measurement resolution. I just list the pieces of the argument.

Spinors and qbits: Spinors define a quantal variant of Boolean statements, qbits. One can however go further and define the notion of quantum qbit, qqbit. I indeed did this a couple of years ago (the last section in Was von Neumann Right After All?).

Q-spinors and qqbits: For q-spinors the two components a and b are not commuting numbers but non-Hermitian operators: ab = qba, with q a root of unity. This means that one cannot measure both a and b simultaneously, only either of them. aa+ and bb+ however commute, so that the probabilities for bits 1 and 0 can be measured simultaneously. State function reduction to a state in which a or b gives zero is not possible! The interpretation is that q-logic is inherently fuzzy: there are no absolute truths or falsehoods. One can actually predict the spectrum of eigenvalues of the probability for, say, bit 1. q-Spinors bring strongly to mind the Hilbert space counterpart of Sierpinski space. One would however expect that fuzzy quantum logic replaces the logic defined by the Heyting algebra.
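A concrete finite-dimensional toy realization of the relation ab = qba is given by the standard clock and shift matrices. This is only my illustration of the algebra (in the text a and b are components of a quantum spinor, not matrices chosen by hand):

```python
import numpy as np

n = 3
q = np.exp(2j * np.pi / n)                 # q a primitive n:th root of unity

a = np.diag([q**k for k in range(n)])      # "clock" matrix
b = np.roll(np.eye(n), 1, axis=0)          # "shift" matrix, b e_k = e_{k+1 mod n}

# The q-deformed commutation relation ab = q ba:
assert np.allclose(a @ b, q * (b @ a))

# aa+ and bb+ commute (here both are trivially the identity, since the toy
# matrices are unitary), so the probabilities for bits 1 and 0 are
# simultaneously measurable even though a and b themselves are not.
aa, bb = a @ a.conj().T, b @ b.conj().T
assert np.allclose(aa @ bb, bb @ aa)
```

In the genuinely fuzzy case aa+ and bb+ are non-trivial commuting positive operators, and their spectra give the predicted probability eigenvalues mentioned above.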

Q-locale: Could one generalize the notion of locale to a quantum locale by using the idea that sets are replaced by sub-spaces of Hilbert space in conventional quantum logic? Q-openness would be defined by identifying the quantum spinors as the initial object, q-Sierpinski space. a (resp. b for the dual category) would define a q-open set in this space. Q-open sets for other quantum spaces would be defined as inverse images of a (resp. b) for morphisms to this space. Only for q=1 would one obtain the q-counterpart of the rather uninteresting topology in which all sets are open and every map is continuous.

Q-locale and HFFs: The q-Sierpinski character of q-spinors would conform with the very special role of the Clifford algebra in the theory of HFFs, in particular the special role of the Jones inclusions to which one can assign spinor representations of SU(2). The Clifford algebra and spinors of the world of classical worlds, identifiable as the Fock space of quark and lepton spinors, is the fundamental example in which 2-spinors and the corresponding Clifford algebra serve as the basic building brick, although tensor powers of any matrix algebra provide a representation of HFF.

Q-measurement theory: Finite measurement resolution (q-quantum measurement theory) means that complex rays are replaced by sub-algebra rays. This would single out the Jones inclusions associated with the SU(2) spinor representation, characterized by the quantum phase q, and would bring in the q-topology and q-spinors. The fuzziness of qqbits of course correlates with the finite measurement resolution.

Q-n-logos: For other q-representations of SU(2) and for representations of compact groups (see the appendix of this) one would obtain something which might have something to do with quantum n-logos, a quantum generalization of n-valued logic. All of these would however be less fundamental and induced by q-morphisms to the fundamental representation in terms of spinors of the world of classical worlds. What would however be very nice is that if these q-morphisms are constructible explicitly, it would become possible to build up q-representations of various groups using the fundamental physical realization, and, as I have conjectured (see this), McKay correspondence and a huge variety of its generalizations would emerge in this manner.

The analogs of Sierpinski spaces: The discrete subgroups of SU(2) and, quite generally, the groups Zn associated with Jones inclusions and leaving the choice of quantization axes invariant, bring to mind n-point analogs of Sierpinski space with the unit element defining the particular point. Note however that n≥3 holds true always, so that one does not obtain the Sierpinski space itself. Could it be that all of these n preferred points belong to any open set? Number theoretical braids, identified as subsets of the intersection of the real and p-adic variants of an algebraic partonic 2-surface, define a second candidate for the generalized Sierpinski space with a set of preferred points. Recall that the generalized imbedding space related to the quantization of Planck constant is obtained by gluing together coverings of M4×CP2→ M4×CP2/Ga×Gb along their common points. The topology in question would mean that if some point in the covering belongs to an open set, all of them do. The interpretation could be that the points of the fiber form a single inseparable quantal unit.
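The n-point analog alluded to above is straightforward to make precise: call a set open iff it contains all n preferred points (plus the empty set). A quick Python check, again my own sketch with an arbitrary toy choice of the set X and the preferred set P, confirms that this defines a topology:

```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

X = frozenset(range(6))
P = frozenset({0, 1, 2})   # n = 3 preferred points (n >= 3, as for the groups Zn above)

# Generalized particular point topology: a set is open iff it contains all of P.
opens = {frozenset()} | {U for U in powerset(X) if P <= U}

# Topology axioms hold: closure under unions and intersections.
assert X in opens
for U in opens:
    for V in opens:
        assert U | V in opens and U & V in opens
```

A set containing only some of the preferred points is not open, which is exactly the "single inseparable quantal unit" behaviour: an open set sees either all n points or none of them.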

For more details see the chapter Was von Neumann Right After All?.

Sunday, May 06, 2007

Witten's new ideas about 1+2-dimensional quantum gravity

Witten spoke on Friday at a string cosmology workshop in New York about his new ideas relating to 1+2 D quantum gravity. Peter Woit listened to the talk and presents his understanding of it at Not Even Wrong. Lubos Motl gives a nice summary of the conformal field theoretic ideas involved. The reason I got interested (even with my miserable technical background) is that Witten's talk gives an interesting perspective on quantum TGD, which reduces to an almost topological QFT for light-like partonic surfaces defined by Chern-Simons action and its fermionic super counterpart.

1. Brief summary of main points

Very concisely, the message seems to be the following.

  • The motivation of Witten is to find an exact quantum theory for blackholes in the 3-D case. Witten proposes that the quantum theory for a 3-D AdS3 blackhole with a negative cosmological constant can be reduced by the AdS3/CFT2 correspondence to a 2-D conformal field theory at the 2-D boundary of AdS3, analogous to the blackhole horizon. This conformal field theory would be a Chern-Simons theory associated with the isometry group SO(1,2)×SO(1,2) of AdS3.

  • This conformal theory would have the so-called monster group (see also the posting of Lubos) as the group of its discrete hidden symmetries. The primary fields of the corresponding conformal field theory would form representations of this group.

2. Questions

  1. Why a negative cosmological constant? The answer is that Λ=0 does not allow 1+2 D blackholes, and Witten believes that the Λ>0 case is non-perturbatively unstable.

    Remark: The situation changes in TGD framework, where AdS3 is replaced by a generic light-like 3-surface so that only Λ=0 situation is encountered.

  2. Why should the monster group act as a symmetry group?

    • The existence of this kind of conformal theory has been demonstrated already earlier. Lubos Motl gives a nice description of how this symmetry results when one compactifies chiral bosonic fields in 24-dimensional space to a torus defined by the Leech lattice.

    • The crucial observation is that the partition function of this conformal field theory contains no term coming from massless particles. Hence one can hope that it could correspond, via the AdS correspondence, to a 1+2 D quantum gravitational theory which does not allow any gravitons, since the empty space Einstein equations stating the vanishing of the Ricci tensor also imply the vanishing of the curvature tensor.

  3. Could these results generalize to the 1+3 D case? According to Witten this is not the case.

    Remark: In the TGD framework the lightlike partonic 3-surfaces are boundaries of 4-D space-time sheets providing classical physics representations for the partonic quantum dynamics required by quantum measurement theory. A kind of inverse of holography. The generalization is therefore trivial in the TGD framework.

  4. Does one obtain this 3-D quantum gravity from string or M-theory, in the sense that this 3-D gravity would emerge by compactification? Lubos is pessimistic about this.

3. How could this relate to Quantum TGD?

There are very strong resemblances between Witten's model and the formulation of quantum TGD at the parton level.

  1. In quantum TGD holography would be realized in the 4-D case by identifying light-like 3-surfaces as the basic dynamical objects and 4-D space-time sheets as surfaces containing partonic lightlike surfaces (of arbitrarily large size) as analogs of blackhole horizons. It should be emphasized that general coordinate invariance in the 4-D sense allows the assumption about light-likeness. The additional conformal symmetries correspond to the symmetries predicted already much before the realization of the fundamental role of light-likeness, following from the fact that space-time surfaces correspond to preferred extremals of Kähler action analogous to Bohr orbits.

    Partonic 3-surfaces would be something much more general than blackhole horizons. They define the "world of classical worlds", the arena of quantum dynamics. In Witten's theory these degrees of freedom are frozen. The absolutely essential point would be a huge extension of 2-D conformal invariance to the 3-D case, made possible by lightlikeness implying metric 2-dimensionality. Obviously this picture provides a different approach to 4-D gravitational holography, using light-like 3-surfaces as the basic dynamical objects.

  2. AdS3 is replaced with a light-like partonic 3-surface. The Euclidian partonic 2-surface corresponds to the boundary of AdS3. This surface is determined as the intersection of the light-like 3-surface and the future or past lightcone of M4×CP2 and is thus unique. There is thus no need for the analog of a black hole horizon to result as a singularity of Einstein equations.

    These light-cones are an essential element in the definition of S-matrix and are associated with each argument of N-point function. They are also essential for the TGD inspired many-sheeted Russian doll cosmology. Jones inclusions and related quantization of Planck constant make them also necessary and they relate very closely to the representation of choice of quantization axes at the level of space-time, imbedding space, and "world of classical worlds" (everything quantal must have geometric correlates).

  3. Vacuum Einstein's equations are satisfied in the following sense. Due to the effective 2-dimensionality of the induced metric the situation is effectively 2-dimensional, so that the Einstein tensor vanishes identically and the vacuum Einstein equations are satisfied for Λ=0. Gravitation would become purely topological in the absence of the all-important attribute "light-like". Note that the curvature tensor is non-vanishing, but since the time direction disappears from the metric there can be no propagating waves.

  4. Chern-Simons action for the induced Kähler form - or equivalently, for the induced classical color gauge field proportional to the Kähler form and having Abelian holonomy - corresponds to the Chern-Simons action in Witten's theory. Also the fermionic counterpart of this action, for induced spinors and dictated by super-conformal symmetry, is present.

    The very notion of light-likeness involves induced metric implying that the theory is almost topological but not quite. This small but important distinction guarantees that the theory is physically interesting.

  5. In Witten's theory the gauge group corresponds to the isometry group SO(1,2)×SO(1,2) of AdS3. The group of isometries of a lightlike 3-surface is something much, much mightier. It corresponds to the conformal transformations of the 2-dimensional section of the 3-surface, made local with respect to the radial lightlike coordinate in such a manner that the radial scaling compensates the conformal scaling of the metric produced by the conformal transformation.

    The direct TGD counterpart of the Witten's gauge group is thus infinite-dimensional and essentially same as the group of 2-D conformal transformations! Presumably this can be interpreted in terms of the extension of conformal invariance implied by the presence of ordinary conformal symmetries associated with 2-D cross section plus "conformal" symmetries with respect to the radial lightlike coordinate.

  6. The monster group does not have any special role in the TGD framework. However, all finite groups and, as it seems, also compact groups can appear as groups of dynamical symmetries at the partonic level in the general framework provided by the inclusions of hyper-finite factors of type II1 (see this). Compact groups and their quantum counterparts would relate closely to a hierarchy of Jones inclusions associated with the TGD based quantum measurement theory with finite measurement resolution defined by inclusion, as well as to the generalization of the imbedding space related to the hierarchy of Planck constants. Discrete groups would correspond to the number theoretical braids providing representations of Galois groups for extensions of rationals realized as braidings (see this).

  7. To make it clear, I am not suggesting that AdS3/CFT2 correspondence should have a TGD counterpart. If it had, a reduction of TGD to a closed string theory would take place. The almost-topological QFT character of TGD excludes this on general grounds.

    [I think this was asked in the blog of Woit by someone before Woit censored out both the link to my blog and the question, to keep the discussion "scientific" in the sense of uncritical worship of Witten: sad that due to his well-known background Peter lacks the competence to decide which is crackpottery and which is not, so that he must apply the methods of Prussian school masters.]

    More concretely, the dynamics would be effectively 2-dimensional if the radial superconformal algebras associated with the light-like coordinate acted as pure gauge symmetries. Concrete manifestations of the genuinely 3-D character are the following.

    • Generalized super-conformal representations decompose into infinite direct sums of stringy super-conformal representations.

    • In p-adic thermodynamics, which successfully explains particle massivation, the radial conformal symmetries act as dynamical symmetries crucial for the massivation, interpreted as a generation of a thermal conformal weight.

    • The maxima of the Kähler function defining the Kähler geometry in the world of classical worlds correspond to special light-like 3-surfaces analogous to the bottoms of valleys in a spin glass energy landscape, meaning that there is an infinite number of different 3-D lightlike surfaces associated with a given 2-D partonic configuration, each giving rise to a different background affecting the dynamics in quantum fluctuating degrees of freedom (see this). This is the analog of the landscape in the TGD framework, but with a direct physical interpretation in, say, living matter.
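Point 3 of the list above uses the standard fact that in two dimensions the Einstein tensor vanishes identically, whatever the metric. This can be verified symbolically; the following sympy sketch (my illustration) does the check for the round 2-sphere, where the curvature itself is certainly non-vanishing:

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
x = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])    # round 2-sphere metric
ginv = g.inv()
n = 2

def Gamma(a, b, c):
    """Christoffel symbol Gamma^a_{bc}."""
    return sum(sp.Rational(1, 2) * ginv[a, d] *
               (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b]) - sp.diff(g[b, c], x[d]))
               for d in range(n))

def Ricci(b, c):
    """R_{bc} = d_a Gamma^a_{bc} - d_b Gamma^a_{ac}
              + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{bd} Gamma^d_{ac}."""
    return sp.simplify(sum(
        sp.diff(Gamma(a, b, c), x[a]) - sp.diff(Gamma(a, a, c), x[b])
        + sum(Gamma(a, a, d) * Gamma(d, b, c) - Gamma(a, b, d) * Gamma(d, a, c)
              for d in range(n))
        for a in range(n)))

Ric = sp.Matrix(n, n, Ricci)
R = sp.simplify(sum(ginv[i, j] * Ric[i, j] for i in range(n) for j in range(n)))
G = (Ric - sp.Rational(1, 2) * R * g).applyfunc(sp.simplify)   # Einstein tensor

assert sp.simplify(Ric[0, 0] - 1) == 0     # curvature does not vanish: R_{theta theta} = 1
assert G == sp.zeros(n, n)                  # but G_{bc} = 0 identically in 2-D
```

The same vanishing happens for any 2-metric, since in 2-D the Ricci tensor is always R_{bc} = (R/2) g_{bc}.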

For more details see the chapter Construction of Configuration Space Kähler Geometry from Symmetry Principles: part II.

About microscopic description of dark matter

Every step of progress induces a handful of worried questions about consistency with the existing network of beliefs, and almost as a rule the rules must be modified slightly or made more precise.

The construction of a model for the detection of gravitational radiation, assuming that gravitons correspond to a gigantic gravitational constant, was the last step of progress. It was carried out in TGD and Astrophysics; see also the earlier posting. One can say that dark gravitons are Bose-Einstein condensates of ordinary gravitons. This suggests that Bose-Einstein condensates of some kind could accompany and perhaps even characterize also the dark variants of ordinary elementary particles. The question is whether the new picture is consistent with the earlier dark rules.

1. Higgs boson Bose-Einstein condensate as a characterizer of Planck constant

The following picture is the simplest I have been able to imagine hitherto.

  1. Suppose that darkness corresponds to the darkness of the field bodies (em, Z0, W,...) of the elementary particle, so that the elementary particle proper is not affected in the transition to the large hbar phase. This stimulates the idea that some Bose-Einstein condensate associated with the field body provides a microscopic description of the darkness, and that one can relate the value of hbar to the properties of the Bose-Einstein condensate.

  2. Since the spin of the particle is not affected in the transition, it would seem that the bosons in question are Lorentz scalars. Hence a Bose-Einstein condensate of Higgs bosons suggests itself as the relevant structure. Higgs would have a double role, since the coherent state of Higgs bosons associated with the field body would be responsible for, or at least closely relate to, the contribution to the mass of the fermion usually identified in terms of a coupling to Higgs. The ground state would correspond to a coherent state annihilated by the new annihilation operators unitarily related to the original ones. The Bose-Einstein condensate would be obtained as a many-Higgs state obtained by applying these creation operators and would not be an eigenstate of particle number in the old basis.

  3. As a rule, quantum classical correspondence is a good guideline. Suppose that the field body corresponds to a pair of positive and negative energy MEs connected by wormhole contacts representing the bosons forming the Bose-Einstein condensate. This structure could be more or less universal. In the general case MEs carry light-like gauge currents and a light-like Einstein tensor. These currents can also vanish, and should do so for the ground state. MEs could carry coherent states of both gauge bosons and gravitons, but these would not be present in the ground state. The CP2 part of the trace of the second fundamental form, transforming as an SO(4) vector and as a doublet with respect to the groups SU(2)L and SU(2)R, is the only possible candidate for the classical Higgs field. The Fourier spectrum of CP2 coordinates has only light-like longitudinal momenta, so that four-momenta are slightly tachyonic for non-vanishing transverse momenta. This state of affairs might be a space-time correlate for the tachyonic character of Higgs.

  4. The quantum numbers of the particle should not be affected in the transition changing the value of Planck constant. The simplest explanation is that the Higgs bosons have a vanishing net energy. This is possible since in the case of bosons the two wormhole throats have energies of opposite sign. Indeed, if the energies, spins, and em charges of the fermion and antifermion at the wormhole throats are of opposite sign, one is left with a coherent state of zero energy Higgs particles as a microscopic description for a constant value of the Higgs field.

  5. How do the properties of the Bose-Einstein condensate of Higgs relate to the value of Planck constant? MEs should remain invariant under the discrete groups Zna and Znb, and the bosons at the sheets of the multiple covering should be in identical states. The number na×nb of zero energy Higgs bosons in the Bose-Einstein condensate would characterize the darkness at the microscopic level.

2. How does this affect the view about particle massivation?

This scenario would allow one to add some details to the general picture about particle massivation, reducing to p-adic thermodynamics plus Higgs mechanism, both of them having a description in terms of conformal weight.

  1. The mass squared equals the p-adic thermal average of the conformal weight. There are two contributions to this thermal average: one from the p-adic thermodynamics for super conformal representations, and one from the thermal average related to the spectrum of generalized eigenvalues λ of the modified Dirac operator D. The Higgs expectation value appears in the role of a mass term in the Dirac equation, just like λ in the modified Dirac equation. For the zero modes of D, λ vanishes.

  2. There are good motivations to believe that λ is expressible as a superposition of zeros of Riemann zeta or some more general zeta function. The problem is that λ is complex. Since the Dirac operator is essentially the square root of the d'Alembertian (the mass squared operator), the natural interpretation of λ would be as a complex "square root" of the conformal weight.

    Confession: The earlier interpretation of λ as a complex conformal weight looks rather stupid in light of this observation. It seems that there is again some updating to do;-)!

    This encourages one to consider the interpretation in terms of a vacuum expectation of the square root of a Virasoro generator, that is, of the generators G of the super Virasoro algebra, or something analogous. The super generators G of the super-conformal algebra carry fermion number in the TGD framework, where the Majorana condition does not make sense physically. The modified Dirac operators for the two possible choices t+/- of the light-like vector appearing in the eigenvalue equation DΨ = λ tk+/-ΓkΨ could however define a bosonic algebra resembling the super-conformal algebra.

    The p-adic thermal expectation values of the contractions of t-kΓkD+ and t+kΓkD- should coincide with the vacuum expectations of Higgs and its conjugate. This makes sense if the two generalized eigenvalue spectra of D are complex conjugates of each other. Note that D+ and D- would be the same operator but with different definitions of the generalized eigenvalue, and hermitian conjugation would map the two kinds of eigen modes to each other. The real contribution to the mass squared would thus come naturally as <λλ*>. Of course, <H>=<λ> is only a hypothesis encouraged by the internal consistency of the physical picture, not a proven mathematical fact.

3. Questions

This still leaves some questions.

  1. Does the p-adic thermal expectation <λ> dictate <H>, or vice versa? Physically it would be rather natural that the presence of a coherent state of Higgs wormhole contacts induces the mixing of the eigen modes of D. On the other hand, the quantization of the p-adic temperature Tp suggests that the Higgs vacuum expectation is dictated by Tp.

  2. Also the phase of <λ> should have a physical meaning. Could the interpretation of the imaginary part of <λ> make possible the description of dissipation at the fundamental level?

  3. Is p-adic thermodynamics consistent with the quantal description as a coherent state? The approach based on p-adic variants of finite temperature QFTs associated with the legs of generalized Feynman diagrams might resolve this question neatly, since thermodynamical states would be genuine quantum states in this approach, made possible by zero energy ontology.

For more details see the chapter Does TGD Predict the Spectrum of Planck Constants? of "Towards S-Matrix".

Saturday, May 05, 2007

The First Edge of a Cube

I learned in Kea's blog about a posting concerning something related to n-categories: the posting was The First Edge of the Cube. I did not understand much of it. I however tried to make something out of it, and this boiled down to a comment in Kea's blog, whose polished version, with a couple of additional questions, I attach below.
Dear Kea,

I wish I could connect these n-transports to something having a concrete physical meaning! Some years ago I tried to understand n-parallel transport (or perhaps it was something related;-)) in terms of simple geometric mental images. I try to formulate my mis-understandings using the ancient terminology still used by physicists like me and these mental images. No arrows nor commuting diagrams, which make me mad!

  • n=1: One starts with a parallel transport from point a to point b along a curve C1(a,b). The 1-parallel transport defines a map between fibers.

  • n=2: The 1-parallel transport along C1(a,b) is parallelly transported to a 1-parallel transport along a curve C1(c,d). One can say that one parallelly transports a curve instead of a point. The 2-connection would define this parallel transport of parallel transport. One obtains a kind of square-like structure C2(a,b|c,d).

  • n>2: One can continue this and obtains at the n-th level a parallel transport of a parallel transport of.....

Some comments.

  • The ordered exponential representation for parallel transport suggests that n=1 parallel transport could define n-parallel transport. Probably something trivial and uninteresting.

  • If the n-connection is non-flat, the n-parallel transport depends on how the curve evolves from the initial state to the final state.

  • A physically highly attractive possibility is generalized general coordinate invariance stating that the parallel transport depends only on the n-surface spanned by the curve. Is n-parallel transport induced by 1-parallel transport the only solution to this requirement?

  • One can wonder about the counterparts of geodesic lines. 1-parallel transport leaves the tangent vector field of a geodesic line invariant. Should n-parallel transport leave invariant the n-form defining the tangent spaces of a geodesic n-surface? For n-parallel transport induced by 1-parallel transport, geodesic sub-manifolds would probably result. What is the n-counterpart of the equations of a geodesic line? Could one model the behaviour of extended objects in gravitational fields using these kinds of equations? Could one model the effect of non-gravitational forces on the motion by using an n-connection not induced from 1-parallel transport?

  • One could also generalize the notion of the holonomy group. The 2-holonomy group would be associated with cylinder-like surfaces C2(a,b|a,b) with topology D×S1. At higher levels one would have topology D×S1×S1 and so on. One could also consider a closed curve at the n=1 level and get a hierarchy of n-holonomy groups associated with n-tori. Of course also other topologies can result if the parallel transport is such that the surface develops pinches. Could one generalize the notion so that one could assign, say, a 2-parallel transport to a 2-torus? What to do when the curve for 1-parallel transport decomposes into two separate pieces? Just hop? Why not?
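To tie at least the n=1 level of these musings to something computable: below is a small Python sketch (my own toy example, abelian U(1) case with a connection chosen by hand) of 1-parallel transport as an ordered product of infinitesimal holonomies along a discretized path. The holonomy around a closed loop then measures the curvature through the enclosed area, the simplest illustration of the point above that for a non-flat connection the transport sees the surface and not just the curve:

```python
import numpy as np

def A(p):
    """U(1) connection 1-form A = (-y/2) dx + (x/2) dy, constant curvature F = dA = dx ^ dy."""
    xx, yy = p
    return np.array([-yy / 2.0, xx / 2.0])

def transport(path):
    """1-parallel transport: ordered product of exp(i A . dx) over the segments of a
    discretized path (the ordering is immaterial here only because U(1) is abelian)."""
    phase = 1.0 + 0j
    for p0, p1 in zip(path[:-1], path[1:]):
        phase *= np.exp(1j * A(0.5 * (p0 + p1)) @ (p1 - p0))
    return phase

# Counterclockwise unit square: holonomy = exp(i * flux) = exp(i * area) by Stokes.
t = np.linspace(0.0, 1.0, 400)[:, None]
corners = [np.array(c, float) for c in [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]]
path = np.concatenate([c0 + t * (c1 - c0) for c0, c1 in zip(corners[:-1], corners[1:])])

assert np.allclose(transport(path), np.exp(1j * 1.0), atol=1e-3)
```

In the non-abelian case the same product becomes genuinely path-ordered, and an induced 2-transport of a curve would be built from exactly these segment holonomies.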

Some years ago I assigned this kind of hierarchical structure of parallel transports to the hierarchical structure defined by infinite primes (see this). I believe that these kinds of abstractions about abstractions about..., thoughts about thoughts about..., statements about statements about..., and repeated second quantization represent fundamental new physics especially relevant for quantum consciousness theories.

Cheers,

Matti

Friday, May 04, 2007

About Equivalence Principle and weaknesses of General Relativity

I wrote a lengthy email to Arkady Khodolenko explaining my views about Equivalence Principle (EP) and possible weaknesses of General Relativity. After having done all this work I realized that it would be a pity not to add the email to the blog page, after some necessary polishing of typos and the addition of some juicy marketing details about the relationship to string models at the end (forgive me, I do my best but I cannot resist this temptation!;-)).

Dear Arkady,

the best manner to tell my own view about the weaknesses of GRT is by comparing the GRT and TGD based views about gravitational and inertial four-momentum, about star models and black hole like singularities, and about cosmology.

1. Problems associated with the definition of four-momentum

The starting point of TGD was the well-known fact that energy-momentum cannot be defined as a conserved Noether charge in General Relativity. This means that no general definition of even gravitational mass exists for general solutions of the field equations. One can argue that the weakness of gravitation, allowing a perturbative approach around Minkowski space, saves the situation. In strong gravitational fields the situation is however very unsatisfactory.

The assumption that space-times are 4-surfaces in H=M4×CP2 cures this problem: exact Poincare invariance is realized at the level of H (of the M4 factor). The space-time surface itself can even have Euclidean signature of metric.

2. The identification of Minkowski coordinates

This picture also resolves the problem of the physical identification of Minkowski coordinates, which is highly non-trivial in GRT and makes the interpretation of the post-Newtonian approximation difficult. The reason is that the Minkowski coordinates of M4 define a unique choice of coordinates for the space-time surface. Lorentz invariance of the space-time surface defines a unique cosmological time, and the cosmological principle reduces to Lorentz invariance of the space-time surface, implying a Robertson-Walker metric.

3. Equivalence Principle

I have been thinking a lot about what form of Equivalence Principle it is possible to realize in the TGD framework.

Gravitational four-momentum and inertial four-momentum are both well-defined in the TGD framework. Inertial four-momentum is conserved but gravitational four-momentum is not. Therefore Einstein's equations cannot hold true in general, except perhaps as macroscopic structural equations analogous to D = εE of electrodynamics under some additional conditions.

The recent formulation of Equivalence Principle (EP) in the TGD framework at the 3-D parton level is the following. In quantum TGD light-like 3-surfaces ("partons") are fundamental objects and holography holds true in the sense that the 4-D space-time surfaces associated with these 3-D surfaces provide a classical representation of quantum physics, making possible quantum measurement theory among other things. EP holds true in the sense that the inertial four-momentum of a space-time sheet is the temporal average of the non-conserved gravitational four-momentum. This form of EP must have some counterpart at the 4-D level but Einstein's equations cannot be it.

4. Zero energy ontology

A further distinction from GRT is what I call zero energy ontology, which says that all conserved Noether charge type quantum numbers of physical states vanish. This means that physical states consist of positive and negative energy parts with opposite energies, separated by some typical temporal distance T. Only in time scales short compared to T does positive energy ontology apply. In longer time scales the state could be regarded as a quantum fluctuation: something pops up from vacuum and disappears. This resolves the problem of what the initial values at the moment of the big bang were. Quantum jumps replace the entire 4-D Universe (or a superposition of them) with a new one all the "subjective" time. The entire 4-D Universe is created again and again. This resolves also the basic problem of quantum measurement theory.

Only this interpretation, suggested very strongly by quantum TGD, is consistent with classical TGD, which predicts that the imbeddings of Robertson-Walker cosmologies are always vacuum extremals with respect to inertial four-momentum but non-vacuum extremals with respect to gravitational four-momentum (in general not conserved). The interpretation would be that in the cosmological time resolution the positive and negative energy parts of states are not seen: one has a quantum fluctuation below the resolution. The gravitational four-momentum density, which is not a conserved Noether charge, can however be non-vanishing.

5. The problem of cosmological constant

The cosmological constant is one of the basic problems of GRT based theories. The reason why I take TGD based cosmology very seriously is that it predicts that the mass density of all globally imbeddable Robertson-Walker cosmologies is always sub-critical. Cosmologies with over-critical or critical density of gravitational mass make a transition to a sub-critical one after some time interval. Thus the large value of the cosmological constant is not a problem in TGD.

6. Black-hole singularities and the model for the asymptotic state of a star

In the TGD framework the model for the asymptotic state of a star is based on the assumption that an inertial vacuum extremal is in question and that gravitational four-momentum densities are conserved locally. This gives a modification of the minimal surface equation in which the contraction of the contravariant metric with the second fundamental form is replaced with the contraction of the Einstein tensor (plus metric if the cosmological constant is non-vanishing) with the second fundamental form.

  • Schwarzschild metric results as a vacuum extremal but the solution fails below some radius because of non-imbeddability in the 8-D space. Black-hole singularities are therefore not possible in the TGD framework.

  • More complex vacuum extremals, which are not gravitational vacua, represent the star interior. The solutions always represent rotating systems carrying non-vanishing gauge charges. The special feature is a dynamo-like structure meaning the existence of strong electric and magnetic fields orthogonal to each other. The solution is stationary and the collapse to a point-like object is impossible.

  • Rotating black holes do not allow an 8-D imbedding, and for small vacuum extremals representing perturbations of the Schwarzschild metric of a rotating system the predicted gravimagnetic field equals that of GRT only at the equator but becomes strong near the poles. Unfortunately, the satellite tests are carried out at the equator.

Here it was, and it is actually essentially the same as the original email. I want however to add some comments about the relationship of TGD to string models.

  • Light-like 3-surfaces as basic dynamical objects imply that the super-conformal symmetries of string models generalize to larger symmetries by the metric 2-dimensionality of these surfaces. The reverse of quantum holography results in the sense that 4-D space-time surfaces provide a classical representation of the quantum dynamics at the parton level (essential for quantum measurement theory). There is thus a deep and completely unexpected connection between quantum measurement theory and quantum gravitational holography.

  • Fermions and gauge bosons are not string like objects in the TGD framework. This is a very important distinction from string models. A generalization of stringy mathematics of course applies inside partonic 3-surfaces by generalized super-conformal symmetries but this is a different level.

  • Gravitons are necessarily parton-antiparton bound states, that is superpositions of parton-antiparton states with parton and antiparton connected by gauge flux, so that they and many many other similar states have a stringy character. Gravitational constant is proportional to the p-adic length scale squared so that an entire spectrum of variants of gravitation with increasing value of G is possible. p-Adic length scale hypothesis suggests however Mersenne primes as preferred ones, and M127 = 2^127 − 1 corresponds to the largest Mersenne defining a not completely super-astrophysical p-adic length scale. The observed gravitation corresponds to this Mersenne. This p-adic length scale corresponds also to the electron, and the color flux tubes associated with this Mersenne are crucial in the nuclear string model. This means a totally unexpected connection between gravitation and nuclear physics. A model of cold fusion, whose experimental verification was discussed in a previous posting, is one of the implications of the new nuclear physics.
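The special role of Mersenne primes invites a small check: the primality of M127 = 2^127 − 1 can be verified in milliseconds with the classical Lucas-Lehmer test. A minimal sketch:

```python
def lucas_lehmer(p: int) -> bool:
    """Lucas-Lehmer test: for an odd prime p, M_p = 2^p - 1 is prime
    iff the sequence s -> s^2 - 2 (mod M_p), started from 4, reaches 0
    after p - 2 steps."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# M_127 = 2^127 - 1 is indeed a Mersenne prime, as is M_107;
# M_11 = 2047 = 23 * 89 is not, although 11 itself is prime.
for p in (107, 127, 11):
    print(p, lucas_lehmer(p))
```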

Thursday, May 03, 2007

Cold fusion - hot news again

Cold fusion, whose history begins with the announcement of Fleischmann and Pons in 1989, is gradually making its way through the thick walls of arrogant dogmatism and prejudices, and - expressing it less diplomatically - of collective academic stupidity. The name of Frank Gordon is associated with the breakthrough experiment. Congratulations to the pioneers.

There are popular articles in Nature and New Scientist. Unfortunately these articles are not accessible to everyone, including me. The article Cold Fusion - Extraordinary Evidence, Cold fusion is real should however be available to anyone.

A few weeks ago I revised the earlier model of cold fusion. The model explains nicely the selection rules of cold fusion and also the observed transmutations in terms of exotic states of nuclei for which the color bonds connecting A≤4 nuclei to a nuclear string can also be charged. This makes possible a neutral variant of the deuteron nucleus, making it possible to overcome the Coulomb wall.

It seems that the emission of highly energetic charged particles, which cannot be due to chemical reactions and could emerge from cold fusion, has been demonstrated beyond doubt by Frank Gordon's team using detectors known as CR-39 plastics, coin-sized detectors already used earlier in hot fusion research. The method is both cheap and simple. The idea is that travelling charged particles shatter the bonds of the plastic's polymers, leaving pits or tracks in the plastic. Under the conditions claimed to make cold fusion possible (1 deuterium per 1 Pd nucleus, making possible in the TGD based model the phase transition of D to its neutral variant by the emission of an exotic dark W boson with interaction range of order atomic radius) tracks and pits appear in the detector during a short period of time.

For details see the new chapter Nuclear String Hypothesis of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy". The older model is discussed in the chapter TGD and Nuclear Physics.

Sunday, April 29, 2007

Precise definition of fundamental length as a royal road to TOE

The visit to Kea's blog inspired a little comment about the notion of fundamental length. During my morning walk I realized that a simple conceptual analysis of this notion, usually taken for granted, allows one to say something highly non-trivial about the basic structure of quantum theory of gravitation.

1. Stringy approach to Planck length as fundamental length fails

In string models in their original formulation Planck length is identified as the fundamental length and appears in the string tension. As such this means nothing unless one defines Planck length operationally. This means that one must refer to the standard meter stick in Paris or something more modern. This is very very ugly if one is speaking about a theory of everything. Perhaps this is why the writers of books about strings prefer not to go into the details of what one means by Planck length;-). One could of course mumble something about the failure of Riemannian geometry at Planck lengths but this means going outside the original theory, and the question remains why this failure occurs at a length scale which is just this particular fraction of the meter stick in Paris.

2. Fundamental length as length of closed geodesic

What seems clear is that a theory of everything should identify the fundamental length as something completely inherent to the structure of the theory, not something in Paris. The most natural manner to define the fundamental length is as the length of a closed geodesic. Unfortunately, flat Minkowski space does not have closed geodesics. If one however adds compact dimensions to Minkowski space, the problem disappears. If one requires that the unit is unique, only symmetric spaces for which all geodesics have unit length remain in consideration.

3. Identification of fundamental length in terms of spontaneous compactification fails

The question is how to do this. In super-string models and M-theory spontaneous compactification brings in the desired closed geodesics. Usually the lengths of the geodesics however vary. Even worse, the length is defined by a particular solution of the theory rather than something universal, so that we are in Paris again. It seems that the compact dimensions cannot be dynamical but must be in some sense God-given.

4. The approach based on non-dynamical imbedding space works

In TGD the imbedding space is not dynamical but fixed by the requirement that the "world of classical worlds", consisting of light-like 3-surfaces representing orbits of partons (whose size can actually be anything), has a well-defined Kähler geometry. Already the Kähler geometry of loop spaces is unique and possesses a Kac-Moody group as isometries. Essentially a constant curvature space is in question. The reason for the uniqueness is that the mathematically respectable existence of the Riemann connection is not at all obvious in the infinite-dimensional context. The fact that the curvature scalar is infinite however strongly suggests that strings are not fundamental objects after all.

In the 3-dimensional situation the existential constraints are much stronger. A generalization of Kac-Moody symmetries is expected to be necessary for the existence of the geometry of the world of classical worlds. The uniqueness of infinite-dimensional Kähler geometric existence would thus become the Mother of All Principles of Physics.

This principle seems to work in the desired manner. The generalization of conformal invariance is possible only for 4-dimensional space-time surfaces and an imbedding space of the form H=M4×S, and number theoretical arguments involving octonions and quaternions fix the symmetric space S (also a Kähler manifold) to S=CP2. Standard model symmetries lead to the same identification.

The conclusion is that in TGD the universal unit of length is explicitly present in the definition of the theory, and that TGD in principle predicts the length of the meter stick in Paris using the CP2 geodesic length as unit, rather than expressing Planck length in terms of the length of the meter stick in Paris. It is actually enough to predict the Compton lengths of particles correctly (that is, their masses) in terms of the CP2 size. p-Adic mass calculations indeed predict particle masses correctly, and the p-adic length scale hypothesis brings in p-adic length scales as a hierarchy of derived and much more practical units of length. In particular, an inhabitant of the many-sheeted space-time can measure distances at a given space-time sheet using a smaller space-time sheet as a unit.

Thursday, April 26, 2007

In what sense is dark matter dark?

The notion of dark matter as something which has only gravitational interactions brings to mind the concept of ether and is very probably only an approximate characterization of the situation. As I have gradually developed the notion of dark matter as a hierarchy of phases of matter with an increasing value of Planck constant, the naivete of this characterization has indeed become obvious. While writing yesterday's long posting Gravitational radiation and large value of gravitational Planck constant I understood what the detection of dark gravitons might mean.

During the last night I realized that dark matter is dark only in the sense that the process of receiving the dark bosons (say gravitons) mediating the interactions with other levels of the dark matter hierarchy, in particular ordinary matter, differs so dramatically from that predicted by a theory with a single value of Planck constant that the detected dark quanta are unavoidably identified as noise. Dark matter is there and interacts with ordinary matter; living matter in general, and our own EEG in particular (identified in standard neuroscience as noise, which however correlates with contents of consciousness, good grief!), provide the most dramatic examples of this interaction. Hence we can consider dropping "dark matter" from our vocabulary altogether and replacing "dark" with the spectrum of Planck constants characterizing the particles (dark matter) and their field bodies (dark energy).

A. Background

A.1 Generalization of the imbedding space concept

The idea of a quantized Planck constant in ordinary space-time is not promising, since in a given interaction vertex the values of Planck constants should be identical and it is difficult to imagine how this could be realized mathematically. With the realization that the hierarchy of Jones inclusions might relate directly to the value hierarchy of Planck constants emerged the idea of a modification of the imbedding space obtained by gluing together copies H → H/Ga×Gb, with Ga and Gb discrete subgroups of SU(2) associated with Jones inclusions, along their common points (see this and this). Ga and Gb could be restricted to be cyclic, thus leaving the choice of quantization axis invariant. A book-like structure results, with the different copies of H analogous to the pages of the book. Each sheet corresponds to a particular kind of dark matter or dark energy depending on whether it corresponds to a particle or its field body.

A.2 Darkness at the level of elementary particles

Elementary particles could be maximally quantum critical in the sense that the corresponding partonic 2-surfaces belong to the 4-D intersection of all copies of the imbedding space (assuming Ga and Gb are cyclic and leave the quantization axes invariant), so that one cannot say which value of Planck constant they correspond to. The most conservative criterion for darkness at the elementary particle level is that elementary particles are quantum critical systems, so that only their field bodies are dark, and that the particle space-time sheet to which I assign the p-adic prime p characterizing the particle corresponds to its em field body mediating its electromagnetic self-interactions. Also the Compton length as determined by the em interaction would characterize this field body: Compton length would be a completely operational concept. This option is implied by the strong hypothesis that elementary particles are maximally quantum critical, meaning that they belong to the sub-space of H left invariant by all groups Ga×Gb leaving the quantization axis invariant, so that all dark variants of a particle identified as a 2-D partonic surface would be identical.

The implication would be that a particle possesses a field body associated with each interaction, and an extremely rich repertoire of phases emerges if these bodies are allowed to be dark and characterized by p-adic primes. Planck constant would be assigned to a particular interaction of the particle rather than to the particle itself. This conforms with the formula for the gravitational Planck constant hbar_gr = 2^11 × GMm (not the most general formula but giving the order of magnitude; for details see this), whose dependence on the particle masses indeed forces the assignment of this constant to the gravitational field body as something characterizing the interaction rather than the particle.
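To get a feeling for the orders of magnitude, one can evaluate hbar_gr = 2^11 GMm for the Sun-Earth pair. The formula is written in natural units; reading it in SI units as hbar_gr = 2^11 GMm/c is my own dimensional interpretation, so only the qualitative outcome, enormously larger than hbar, should be trusted:

```python
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8     # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s
M_sun   = 1.989e30 # kg
M_earth = 5.972e24 # kg

# hbar_gr = 2^11 * G*M*m, read here in SI units as 2^11 * G*M*m / c (my assumption)
hbar_gr = 2**11 * G * M_sun * M_earth / c
ratio = hbar_gr / hbar
print(f"hbar_gr / hbar ~ {ratio:.1e}")  # of order 10^73, i.e. astronomically large
```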

Of course, nothing prevents one from accepting the existence of elementary particles which are not completely quantum critical, and the subgroups of Ga×Gb define a hierarchy for which the elementary particle proper is also dark. This extends the repertoire and leads to ideas like the N-atom, in which electrons correspond to N-sheeted partonic 2-surfaces so that as many as N electrons can be in an identical quantum state in the sense the word is used in single-sheeted space-time. I have proposed applications of the N-atom and the hierarchy of subgroups to basic biology (see this, this, and this).

B. How do the various levels of the dark matter hierarchy interact?

At the classical level the interaction between the various levels of the hierarchy means that electric and magnetic fluxes flow between sectors of the imbedding space with different values of Planck constant. Faraday's induction law made it clear from the beginning that the levels of the hierarchy must interact.

It also became clear that dark bosons can decay to ordinary bosons in a phase transition that I have called de-coherence. For instance, a dark boson defining an N-sheeted covering of M4 decays in this process to N 1-sheeted coverings. The N-fold value of Planck constant means that energy is conserved if the frequency is not changed and the "Riemann sheets" cease to form a folded connected structure in the process.

So called massless extremals (see this, this, and this) define n(Ga)×n(Gb)-fold coverings of H/Ga×Gb, n(Ga)-fold coverings of CP2, and n(Gb)-fold coverings of M4. They are ideal representatives of dark bosonic quanta. The proposal was that dark EEG photons with energies above thermal energy at room temperature can have non-negligible quantum effects on living matter, although their frequencies would correspond to ridiculously small energies for the ordinary value of Planck constant. Those who are weak can combine their forces! Marxism (or synergy, as you will) at the elementary particle level!
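The statement about dark EEG photons can be quantified. For a photon at an EEG frequency to carry an energy above thermal energy at room temperature, the Planck constant must be scaled up by a huge factor; the 10 Hz alpha-band frequency used below is my illustrative choice:

```python
h   = 6.626e-34  # Planck constant, J s
k_B = 1.381e-23  # Boltzmann constant, J/K

f = 10.0   # Hz, a typical EEG (alpha band) frequency -- illustrative choice
T = 300.0  # K, room temperature

E_ordinary = h * f    # energy of an ordinary 10 Hz photon
E_thermal  = k_B * T  # thermal energy scale at room temperature

# Minimal scaling N of Planck constant for E = N*h*f to exceed k_B*T
N = E_thermal / E_ordinary
print(f"ordinary 10 Hz photon: {E_ordinary:.1e} J, thermal scale: {E_thermal:.1e} J")
print(f"required Planck constant scaling: N > {N:.1e}")  # roughly 6e11
```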

Quite generally, field bodies can mediate interactions between particles at any level of the hierarchy. The visualization is in terms of the book metaphor. A virtual boson, or more generally the 3-D partonic variant of the field body mediating the interaction, emitted by a particle at a given page leaks via the rim of the book to another page. The mediating 2-surface must become partially quantum critical at some stage of the process. This applies both to the static topological field quanta (quanta of electric and magnetic fluxes) connecting particles in bound states and to the dynamical topological field quanta exchanged in scattering (say MEs). Thus dark matter and ordinary matter interact; only the value of Planck constant associated with the mediators of the interaction is different, and this should explain the apparent darkness.

C. What does this mean experimentally?

At this moment I would say that dark matter has standard interactions but because of the large value of Planck constant these interactions occur in a different manner. Even gravitons would be dark. A single graviton with a large value of hbar generates a much larger effect than an ordinary graviton with the same frequency.

The good news is that the possibility to detect gravitational radiation improves dramatically. The bad news is that experimenters firmly believe in the dogma of a single universal Planck constant and continue to eliminate the signals which are quite too strong as shot noise, seismic noise, and all kinds of noise produced by the environment. Ironically, not only gravitational radiation but also dark gravitons(!) might have been detected long ago, but we continue to get the frustrating null result just because we have a wrong theory.

The same would apply to the dark quanta of gauge interactions: we might be receiving continual direct signals about dark matter but misinterpreting them. The mystery of dark matter would be generated by a theoretical prejudice, just as the notion of ether was a century ago.

From the foregoing it should be clear that "dark" is just an unfortunate letter combination which I happened to pick up as I started this business. My sincere apologies! To sum up some basic points:

  1. In the TGD Universe dark matter interacts with ordinary matter and is detectable, but the interactions are realized as bursts of collinear quanta resulting when a dark boson de-coheres to bosons at a lower level of the hierarchy. This quantum jump corresponds to some characteristic time interval at the level of the space-time correlates, and things look classical from the point of view of a detector at the lowest level of the hierarchy. After the elimination of noise, the time averages of the detection rates over a sufficiently long time interval should be identical to those predicted by a theory based on the ordinary Planck constant.

  2. In the previous posting I told about additional fascinating aspects related to the detection of dark gravitons, due to the fact that the gravitational Planck constant hbar_gr = 2^11 GMm (in the simplest case) of an absorbed dark graviton characterizes the field body connecting detector and source and is proportional to the masses of receiver and source. Both the total energy of the dark graviton and the duration of the process induced by the receival of a large hbar graviton are proportional to the masses of the receiving and emitting systems and thus carry information about the mass of the distant source. Some day this could make possible the gravitational counterpart of atomic spectroscopy. This additional information theoretic candy gives one further good reason to take the hierarchy of Planck constants seriously.

  3. Planck constant should carry information about the interacting systems also in the case of other dark interactions. In the case of em interactions the condition hbar = 2^11 Z1Z2e^2 or its generalization should hold true when the perturbative approach fails for the em interaction between two charged systems; Z1Z2e^2 > 1 is the naive criterion for this to happen. Heavy ion collisions would be an obvious application (I discussed the RHIC findings as one of the earliest attempts to develop the ideas by applying them, see this). A gamma ray burst might also be an outcome of a single very dark boson giving rise to a precisely targeted pulse of radiation. The criterion should apply also to self-interactions and suggests that in the case of heavy nuclei the electromagnetic field body of the nucleus becomes dark. Color confinement also provides a natural application.

  4. Dark photons with a large value of hbar could transmit large energies through long distances, and their phase conjugate variants could make possible a new kind of energy transfer mechanism essential in the TGD based quantum model of metabolism, with possible technological applications as well. Various kinds of sharp pulses suggest themselves as a manner to produce dark bosons in the laboratory. Interestingly, after having given us alternating electricity, Tesla spent the rest of his professional life experimenting with effects generated by electric pulses. Tesla claimed that he had discovered a new kind of invisible radiation, scalar wave pulses, which could make possible wireless communications and energy transfer on the scale of the globe (for a possible but not the only TGD based explanation see this). This notion of course did not conform with Maxwell's theory, which had just gained general acceptance, so that Tesla's fate was to spend his last years labelled a crackpot. Great experimentalists seem to see what is there rather than what theoreticians tell them they should see. They are often also visionaries too much ahead of their time.
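The perturbative criterion of point 3 is easy to make concrete. Reading Z1Z2e^2 > 1 in units hbar = c = 1, with e^2 identified as the fine-structure constant α ≈ 1/137 (my reading of the convention), the condition becomes Z1Z2 > 137:

```python
ALPHA_INV = 137.036  # inverse fine-structure constant, 1/alpha

def em_perturbation_fails(Z1: int, Z2: int) -> bool:
    """Naive criterion Z1*Z2*e^2 > 1 (hbar = c = 1, e^2 read as alpha),
    i.e. Z1*Z2 > 137, for the breakdown of the perturbative treatment
    of the em interaction between charges Z1*e and Z2*e."""
    return Z1 * Z2 > ALPHA_INV

print(em_perturbation_fails(79, 79))  # gold-gold collision at RHIC: True
print(em_perturbation_fails(1, 1))    # two protons: False
print(em_perturbation_fails(12, 12))  # self-interaction threshold near Z ~ 12: True
```

The last line illustrates the remark about heavy nuclei: Z^2 exceeds 137 already for Z around 12.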

For more details see the chapter TGD and Astrophysics of "Classical Physics in Many-Sheeted Space-time". For the applications of dark matter ideas to biosystems and living matter see the online books at my homepage and the links in the text.

Gravitational radiation and large value of gravitational Planck constant

Gravitational waves have been discussed on both Lubos's blog and Cosmic Variance. This provided the stimulus for looking at how TGD based predictions for gravitational waves differ from the classical predictions. The article Gravitational Waves in Wikipedia provides excellent background material which I have used in the following. This posting is an extended and twice corrected version of the original.

The description of gravitational radiation provides a stringent test for the idea of a dark matter hierarchy with arbitrarily large values of Planck constant. In accordance with quantum classical correspondence, one can take consistency with the classical formulas as a constraint allowing one to deduce information about how dark gravitons interact with ordinary matter. In the following, standard facts about gravitational radiation are discussed first and then the TGD based view about the situation is sketched.

A. Standard view about gravitational radiation

A.1 Gravitational radiation and the sources of gravitational waves

Classically gravitational radiation corresponds to small deviations of the space-time metric from the empty Minkowski space metric (see this). Gravitational radiation is characterized by polarization, frequency, and the amplitude of the radiation. At quantum mechanical level one speaks about gravitons characterized by spin and light-like four-momentum.

The amplitude of gravitational radiation is proportional to the quadrupole moment of the emitting system, which excludes systems possessing a rotational axis of symmetry as classical radiators. Planetary systems produce gravitational radiation at the harmonics of the rotational frequency. The power of the gravitational radiation from a planetary system is given by

P = dE/dt = (32/5) × G^4 M1^2 M2^2 (M1+M2) / (c^5 R^5).

This formula can be taken as a convenient quantitative reference point.

Planetary systems are not very effective radiators. Because of their small radius and rotational asymmetry, supernovas are much better candidates in this respect. Also binary stars and pairs of black holes are good candidates. In 1993 Russell Hulse and Joe Taylor received the Nobel Prize for proving indirectly the existence of gravitational radiation. The Hulse-Taylor binary consists of an ordinary star and a pulsar, with stellar masses around 1.4 solar masses. Their distance is only a few solar radii. Note that pulsars have a small radius, typically of order 10 km. The distance between the stars can be deduced from the Doppler shift of the signals sent by the pulsar. The radiated power is about 10^22 times that from the Earth-Sun system, basically due to the small value of R. Gravitational radiation induces a loss of total energy and a reduction of the distance between the stars, and this can be measured.
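The quoted factor of 10^22 can be checked against the quadrupole formula P = (32/5) G^4 (M1M2)^2 (M1+M2)/(c^5 R^5) for a circular binary. The Hulse-Taylor orbit is actually quite eccentric, which boosts the true power by another order of magnitude, so only orders of magnitude are meaningful in this sketch:

```python
G = 6.674e-11    # m^3 kg^-1 s^-2
c = 2.998e8      # m/s
M_SUN = 1.989e30 # kg

def gw_power(m1: float, m2: float, r: float) -> float:
    """Quadrupole-formula power radiated by a circular binary, in watts."""
    return (32.0 / 5.0) * G**4 * (m1 * m2)**2 * (m1 + m2) / (c**5 * r**5)

# Earth-Sun system: ~200 W
p_es = gw_power(5.972e24, M_SUN, 1.496e11)
# Hulse-Taylor binary: two ~1.4 solar-mass stars, separation a few solar radii
p_ht = gw_power(1.4 * M_SUN, 1.4 * M_SUN, 1.95e9)

print(f"Earth-Sun   : {p_es:.0f} W")
print(f"Hulse-Taylor: {p_ht:.1e} W")
print(f"ratio ~ {p_ht / p_es:.0e}")  # of order 10^21-10^22
```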

A.2 How to detect gravitational radiation?

Concerning the detection of gravitational radiation, the problems are posed by the extremely weak intensity and by the large distance reducing this intensity further. The amplitude of gravitational radiation is measured by the deviation of the metric from the Minkowski metric, denoted by h.

A Weber bar (see this) provides one possible manner to detect gravitational radiation. It relies on a resonant amplification of gravitational waves at the resonance frequency of the bar. For a gravitational wave with an amplitude h ≈ 10^-20 the distance between the ends of a bar with a length of 1 m should oscillate with an amplitude of 10^-20 meters, so extremely small effects are in question. For the Hulse-Taylor binary the amplitude is about h = 10^-26 at Earth. By increasing the size of the apparatus one can increase the amplitude of the stretching.

Laser interferometers provide a second method for detecting gravitational radiation, with the test masses at distances varying from hundreds of meters to kilometers (see this). LIGO (the Laser Interferometer Gravitational Wave Observatory) consists of three devices: the first is located at Livingston, Louisiana, and the other two at Hanford, Washington. The system consists of light storage arms with lengths of 2-4 km at an angle of 90 degrees. The vacuum tubes in the storage arms carrying the laser radiation have a length of 4 km. One arm is stretched and the other shortened, and the interferometer is ideal for detecting this. The gravitational waves should create stretchings not longer than 10^-17 meters, which is of the same order of magnitude as the intermediate gauge boson Compton length. LIGO can detect a stretching which is even shorter than this. The detected amplitudes can be as small as h ≈ 5×10^-22.
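The strain figures translate into absolute displacements through ΔL ≈ h·L, the convention used in the Weber bar example above; a quick check of the numbers quoted:

```python
def stretch(h: float, L: float) -> float:
    """Displacement amplitude dL ~ h * L for strain h over a baseline of L metres."""
    return h * L

# Weber bar: 1 m bar at strain 1e-20 oscillates by 1e-20 m
print(stretch(1e-20, 1.0))    # 1e-20
# LIGO: 4 km arm at the smallest detectable strain ~5e-22
print(stretch(5e-22, 4.0e3))  # 2e-18, i.e. well below a proton radius
```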

B. Gravitons in TGD

In this subsection two models for dark gravitons are discussed. A spherical dark graviton (or briefly, giant graviton) would be emitted in quantum transitions of, say, a dark gravitational variant of the hydrogen atom. The giant graviton is expected to de-cohere into topological light rays, which are the TGD counterparts of plane waves and are expected to be detectable by human-built detectors.

B.1 Gravitons in TGD

Contrary to what a naive application of Mach's principle would suggest, gravitational radiation is possible in empty space in general relativity. In the TGD framework it is not possible to speak about small oscillations of the metric of empty Minkowski space imbedded canonically in M4×CP2, since Kähler action is non-vanishing only in fourth order in the small deformation and the deviation of the induced metric is quadratic in the deformation. The same applies to the induced gauge fields. Even the induced Dirac spinors associated with the modified Dirac action, fixed uniquely by super-symmetry, allow only vacuum solutions in this kind of background. Mathematically this means that both the perturbative path integral approach and canonical quantization fail completely in the TGD framework. This led to the vision about physics as the Kähler geometry of the "world of classical worlds", with the quantum states of the universe identified as the modes of classical configuration space spinor fields.

The resolution of various conceptual problems is provided by the parton picture and the identification of elementary particles as light-like 3-surfaces associated with the wormhole throats. Gauge bosons correspond to pairs of wormhole throats and fermions to topologically condensed CP_2 type extremals having only a single wormhole throat.

Gravitons are string-like objects in a well-defined sense. This follows from the spin 2 property alone together with the fact that partonic 2-surfaces allow only free many-fermion states. This forces gauge bosons to be wormhole contacts, whereas gravitons must be identified as pairs of wormhole contacts (bosons) or of fermions connected by flux tubes. The strong resemblance to string models encourages the belief that general relativity defines the low energy limit of the theory. Of course, if one accepts the dark matter hierarchy and a dynamical Planck constant, the notion of a low energy limit itself becomes somewhat delicate.

B.2 Model for the giant graviton

The detector, giant graviton, source, and topological light ray will be denoted simply by D, G, S, and ME in the following. Consider first the model for the giant graviton.

  1. The orbital plane defines the natural quantization axis of angular momentum. The giant graviton and all dark gravitons correspond to n_a-fold coverings of CP_2 by M^4 points, which means that one has a quantum state whose fermionic part remains invariant under the transformations φ → φ+2π/n_a. This means in particular that the ordinary gravitons associated with the giant graviton have the same spin, so that the giant graviton can be regarded as a Bose-Einstein condensate in spin degrees of freedom. Only the orbital part of the state depends on the angle variables and corresponds to a partial wave with a small value of L.

  2. The total angular momentum of the giant graviton must correspond to the change of angular momentum in the quantum transition between the initial and final orbit. The orbital angular momentum in the direction of the quantization axis should be a small multiple of the dark Planck constant associated with the system formed by the giant graviton and the source. These states correspond to Bose-Einstein condensates of ordinary gravitons in an eigenstate of orbital angular momentum with ordinary Planck constant. Unless an S-wave is in question, the intensity pattern of the gravitational radiation depends on the direction in a characteristic non-classical manner. The coherence of the dark graviton regarded as a Bose-Einstein condensate of ordinary gravitons is what distinguishes the situation in the TGD framework from that in GRT.

  3. If all elementary particles, gravitons included, are maximally quantum critical systems, a giant graviton should contain r(G,S) = n_a/n_b ordinary gravitons. This number is not an integer for n_b > 1. A possible interpretation is that in this case gravitons possess fractional spin, corresponding to the fact that a rotation by 2π gives a point in the n_b-fold covering of an M^4 point by CP_2 points. In any case, this gives an estimate for the number of ordinary gravitons and the radiated energy per solid angle. This estimate follows also from energy conservation for the transition. The requirement that the average power equals the prediction of GRT allows one to estimate the geometric duration associated with the transition. The condition hbar·ω = E_f − E_i is consistent with the identification of hbar for the pair of systems formed by the giant graviton and the emitting system.

B.3 Dark graviton as topological light ray

The second kind of dark graviton is the analog of a plane wave with a finite transversal cross section. TGD indeed predicts what I have called topological light rays, or massless extremals (MEs), as a very general class of solutions to field equations (see this, this, and this).

MEs are typically cylindrical structures carrying induced gauge fields and a gravitational field without dissipation and dispersion and without weakening with distance. These properties are ideal for targeted long distance communications, which inspires the hypothesis that they play a key role in living matter (see this and this) and make possible a completely new kind of communications over astrophysical distances. Large values of Planck constant allow one to resolve the problem posed by the fact that for long distances the energies of these quanta would be below the thermal energy of the receiving system.

Giant gravitons are expected to decay into dark gravitons of this kind, having a smaller value of Planck constant, via de-coherence, and it is these gravitons which are detected. Quantitative estimates indeed support this expectation.

At the space-time level, dark gravitons at the lower levels of the hierarchy would naturally correspond to n_a-Riemann-sheeted (r = GmE/v_0 = n_a/n_b for m >> E) variants of topological light rays ("massless extremals", MEs), which define a very general family of solutions to the field equations of TGD (see this). The n_a-sheetedness is with respect to CP_2 and means that every point of CP_2 is covered by n_a M^4 points related by a rotation by a multiple of 2π/n_a around the propagation direction assignable to the ME. n_b-sheetedness with respect to M^4 is possible but does not play a significant role in the following considerations. Using the same loose language as in the case of the giant graviton, one can say that r = n_a/n_b copies of the same graviton have suffered topological condensation on this kind of ME. A more precise statement would be n_a gravitons with the fractional unit hbar_0/n_a of spin.

C. Detection of gravitational radiation

One should also understand how the description of gravitational radiation at the space-time level relates to the picture provided by general relativity, in order to see whether the existing measurement scenarios really measure gravitational radiation as it appears in TGD. There are more or less obvious questions to be answered (or perhaps obvious only after considerable work).

What is the value of the dark gravitational constant which must be assigned to the measuring system and the gravitational radiation from a given source? Is the detection of the primary giant graviton possible by human means, or is it possible to detect only the dark gravitons produced in the sequential de-coherence of the giant graviton? Do dark gravitons enhance the possibility to detect gravitational radiation, as one might expect? What are the limitations on detection due to energy conservation in the de-coherence process?

C.1 TGD counterpart for the classical description of detection process

The oscillation of the distance between two masses defines a simplified picture of the reception of gravitational radiation. Now the ME would correspond to an n_a-"Riemann-sheeted" (with respect to CP_2) graviton with each sheet oscillating with the same frequency. The classical interaction would suggest that the measuring system topologically condenses on the topological light ray, so that the distance between the test masses, measured along the topological light ray in the direction transverse to the direction of propagation, starts to oscillate.

Obviously the classical behavior is essentially the same as predicted by general relativity at each "Riemann sheet". If all elementary particles are maximally quantum critical systems, and therefore also gravitons, then gravitons can be absorbed at each step of the process, and the number of absorbed gravitons and the absorbed energy are r-fold.

C.2. Sequential de-coherence

Suppose that the detecting system has some mass m and that the gravitational interaction is mediated by the gravitational field body connecting the two systems.

The Planck constant must characterize the system formed by the dark graviton and the measuring system. In the case that E is comparable to m or larger, the expression r = hbar/hbar_0 must be replaced with the relativistically invariant formula in which m and E are replaced with the energies in the center of mass system. This gives

r = GmE/[v_0(1+β)(1−β)^(1/2)] , β = −1 + (1+2/x)^(1/2) , x = E/2m .

Assuming m >> E_0, this gives to a good approximation

r = Gm_1·E_0/v_0 = G^2·m_1·m·M/v_0^2 .

Note that in the interaction of identical masses, ordinary hbar is possible for m ≤ v_0^(1/2)·M_Pl. For v_0 = 2^(-11) the critical mass corresponds roughly to the mass of a water blob of radius 1 mm.

One can interpret the formula by saying that de-coherence splits off from the incoming dark graviton a dark piece having energy E_1 = (Gm_1·E_0/v_0)·ω, which makes the fraction E_1/E_0 = (Gm_1/v_0)·ω of the energy of the graviton. At the n:th step of the process the system would split off from the dark graviton of the previous step the fraction

E_n/E_0 = (Gω/v_0)^n × ∏_i m_i

of the total emitted energy E_0. The de-coherence process would proceed in steps such that the typical masses of the measuring systems decrease gradually as the process goes downwards in the length and time scale hierarchy. At large distances this splitting process should lead to a situation in which the original spherical dark graviton has split into ordinary gravitons with the same angular distribution as predicted by GRT.

The splitting process should stop when the condition r ≤ 1 is satisfied and the topological light ray carrying the gravitons becomes a 1-sheeted covering of M^4. For E << m this gives GmE ≤ v_0, so that m >> E implies E << M_Pl. For E >> m this gives G·E^(3/2)·m^(1/2) ≤ 2v_0, or

E/m ≤ (2v_0/Gm^2)^(2/3) .
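The sequential splitting described above can be sketched as a cumulative product of per-step fractions. The masses and the prefactor below are made-up placeholder values (natural units), not numbers from the text:

```python
# Schematic de-coherence cascade: at step i a fraction f_i = (G*omega/v0)*m_i
# of the remaining energy splits off to a measuring system of mass m_i.
def energy_fraction(masses, g_omega_over_v0):
    """Return E_n/E_0 after splitting at each of the given masses."""
    frac = 1.0
    for m in masses:
        frac *= g_omega_over_v0 * m
    return frac

# Decreasing placeholder masses: the fraction reaching small systems
# shrinks geometrically as the cascade descends the hierarchy.
fraction = energy_fraction([5.0, 1.0, 0.2], 0.3)
```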

C.3. Information theoretic aspects

The value of r = hbar/hbar_0 depends on the mass of the detecting system and on the energy of the graviton, which in turn depends on the de-coherence history in a corresponding manner. Therefore the total energy absorbed from the pulse codes, via the value of r, information about the masses appearing in the de-coherence process. For a process involving only a single step, the value of the source mass can be deduced from this data. This could some day provide totally new means of deducing information about the masses of distant objects: something totally new from the point of view of classical and string theory descriptions of gravitational radiation. This kind of information theoretic bonus gives a further good reason to take the notion of quantized Planck constant seriously.

If one makes the stronger assumption that the values of r correspond to ruler-and-compass rationals, expressible as ratios of the number theoretically preferred integers of the form n = 2^k∏_s F_s, where the F_s are distinct Fermat primes (only five are known), very strong constraints on the masses of the systems participating in the de-coherence sequence result. Analogous conditions appear also in the Bohr orbit model for the planetary masses, and the resulting predictions were found to hold with an accuracy of a few per cent. One cannot therefore exclude the fascinating possibility that the de-coherence process might in a very clever manner code information about the masses of the systems involved in its steps.
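For concreteness, the integers of the form n = 2^k × (product of distinct Fermat primes) mentioned above can be generated programmatically; this sketch lists them up to a given bound:

```python
from itertools import combinations

# The five known Fermat primes.
FERMAT_PRIMES = [3, 5, 17, 257, 65537]

def ruler_and_compass_integers(limit):
    """Integers n = 2^k * (product of distinct known Fermat primes), n <= limit."""
    odd_parts = [1]
    for count in range(1, len(FERMAT_PRIMES) + 1):
        for combo in combinations(FERMAT_PRIMES, count):
            product = 1
            for f in combo:
                product *= f
            odd_parts.append(product)
    allowed = set()
    for odd in odd_parts:
        n = odd
        while n <= limit:
            allowed.add(n)
            n *= 2
    return sorted(allowed)
```

These are exactly the side counts of regular polygons constructible with ruler and compass; for example, up to 20 the list is 1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20 (note that 7, 9, 11, 13, 14, 18, 19 are excluded).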

C.4. During what time interval does the interaction with the dark graviton take place?

If the duration of the bunch is T = E/P, where P is the classically predicted radiation power in the detector and T the detection period, the average power during the bunch is identical to that predicted by GRT. Also T would be proportional to r, and would therefore code information about the masses appearing in the sequential de-coherence process.

An alternative, and more attractive, possibility is that T is always the same and corresponds to r = 1. The intuitive justification is that absorption occurs simultaneously on all r "Riemann sheets". This would multiply the power by a factor r and dramatically improve the possibilities to detect gravitational radiation. The measurement philosophy based on standard theory would however reject such events, occurring at an r times lower rate, as noise (shot noise, seismic noise, and other noise from the environment). This might relate to the failure to detect gravitational radiation.

D. Quantitative model

In this subsection a rough quantitative model for the de-coherence of the giant (spherical) graviton into topological light rays (MEs) is discussed, and the situation is treated quantitatively for a hydrogen atom type model of the radiating system.

D.1. Leakage of the giant graviton to sectors of imbedding space with smaller value of Planck constant

Consider first the model for the leakage of giant graviton to the sectors of H with smaller Planck constant.

  1. Giant graviton leaks to sectors of H with a smaller value of Planck constant via quantum critical points common to the original and final sector of H. If ordinary gravitons are quantum critical they can be regarded as leakage points.

  2. It is natural to assume that the resulting dark graviton corresponds to a radial topological light ray (ME). The discrete group Z_{n_a} acts naturally as rotations around the direction of propagation of the ME. The Planck constant associated with the ME-G system should, by the general criterion, be given by the general formula already described.

  3. Energy should be conserved in the leakage process. The secondary dark graviton receives the fraction ΔΩ/4π = S(ME)/(4πr^2) of the energy of the giant graviton, where S(ME) is the transversal area of the ME and r the radial distance from the source. Energy conservation gives

    [S(ME)/(4πr^2)] × hbar(G,S)·ω = hbar(ME,G)·ω ,

    or

    S(ME)/(4πr^2) = hbar(ME,G)/hbar(G,S) ≈ E(ME)/M(S) .

    The larger the distance, the larger the area of the ME. This restricts the measurement efficiency at large distances for realistic detector sizes, since the number of detected gravitons must be proportional to the ratio S(D)/S(ME) of the areas of the detector and the ME.

D.2. The direct detection of giant graviton is not possible for long distances

Primary detection would correspond to a direct flow of energy from the giant graviton to the detector. Assume that the source is modellable using the large-hbar variant of the Bohr orbit model for the hydrogen atom. Denote by r = n_a/n_b the rational defining the Planck constant as hbar = r × hbar_0.

For G-S system one has

r(G,S) = GME/v_0 = GMm·v_0 × k/n^3 ,

where k is a numerical constant of order unity and m refers to the mass of planet. For Hulse-Taylor binary m≈ M holds true.

For D-G system one has

r(D,G) = GM(D)·E/v_0 = GM(D)·m·v_0 × k/n^3 .

The ratio of these rationals is in general of order M(D)/M.

Suppose first that the detector has a disk-like shape of linear size d. This gives for the total number n(D) of ordinary gravitons going to the detector the estimate

n(D) = (d/r)^2 × n_a(G,S) = (d/r)^2 × GMm·v_0 × n_b(G,S) × k/n^3 .

If the actual area of the detector is smaller than d^2 by a factor x, one has

n(D)→ xn(D) .

n(D) cannot be smaller than the number of ordinary gravitons estimated using the Planck constant associated with the detector: n(D) ≥ n_a(D,G) = r(D,G)·n_b(D,G). This gives the condition

d/r ≥ (M(D)/M(S))^(1/2) × (n_b(D,G)/n_b(G,S))^(1/2) × (k/(x·n^3))^(1/2) .

Suppose for simplicity that n_b(D,G)/n_b(G,S) = 1, M(D) = 10^3 kg, M(S) = 10^30 kg, and r = 200 Mpc ≈ 10^9 ly, which is a typical distance for binaries. For x = 1, k = 1, n = 1 this gives roughly d ≥ 10^-4 ly ≈ 10^11 m, which is roughly the size of the solar system. By the energy conservation condition, the entire solar system would be the natural detector in this case. Huge values of n_b(G,S), or a corresponding reduction of n_b(D,G), would be required to improve the situation. Therefore the direct detection of the giant graviton by human-made detectors is excluded.
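The order of magnitude claimed above is easy to reproduce; the following sketch uses the quoted values with the n_b ratio, x, k, and n all set to one:

```python
import math

# Direct-detection bound d/r >= (M(D)/M(S))^(1/2), with the values quoted above.
M_D = 1e3                 # detector mass, kg
M_S = 1e30                # source mass, kg
distance = 1e9 * 9.46e15  # 10^9 light years, in meters
d_min = math.sqrt(M_D / M_S) * distance
# d_min is about 3e11 m: of the order of the solar system size, as stated.
```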

D.3. Secondary detection

The previous argument leaves only secondary detection into consideration. Assume that the ME results from the primary de-coherence of a giant graviton. Also longer de-coherence sequences are possible, and analogous conditions can be deduced for them.

Energy conservation gives

S(D)/S(ME)× r(ME,G) = r(D,ME) .

This gives an expression for S(ME) for a given detector area:

S(ME) = [r(ME,G)/r(D,ME)] × S(D) ≈ [E(G)/M(D)] × S(D) .

From S(ME) = [E(ME)/M(S)] × 4πr^2 one obtains

r = (E(G)·M(S)/(E(ME)·M(D)))^(1/2) × S(D)^(1/2)

for the distance at which the ME is created. The distances of the binaries studied by LIGO are of order D = 10^24 m. Using E(G) ≈ M·v_0^2 and assuming M = 10^30 kg and S(D) = 1 m^2 (just for definiteness), one obtains r ≈ 10^25 (kg/E(ME)) m. If the ME is generated at distance r ≈ D, and if one has S(ME) ≈ 10^6 m^2 (from the size scale of LIGO), one obtains from the equation for S(ME) the estimate E(ME) ≈ 10^-25 kg ≈ 10^-8 Joule.

D.4 Some quantitative estimates for gravitational quantum transitions in planetary systems

To get a concrete grasp of the situation it is useful to study the energies of dark giant gravitons in the case of a planetary system, assuming the Bohr model.

The expressions for the energies of dark gravitons can be deduced from those of the hydrogen atom using the replacements Ze^2 → 4πGMm and hbar → GMm/v_0. It is assumed here that the second mass m is much smaller. The energies are given by

E_n = E_1/n^2 , E_1 = (Zα)^2 × m/4 = (Ze^2/(4π·hbar))^2 × m/4 → m·v_0^2/4 .

E_1 defines the energy scale. Note that v_0 defines a characteristic velocity if one writes this expression in terms of the classical kinetic energy using the virial theorem T = −V/2 for circular orbits. This gives E_n = T_n = m·v_n^2/2 = m·v_0^2/4n^2, giving

v_n = (v_0/2^(1/2))/n . Orbital velocities are thus quantized as sub-harmonics of the universal velocity v_0/2^(1/2) = 2^(-23/2), and scaling v_0 by 1/n does not lead out of the set of allowed velocities.
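The closure of the set of allowed velocities under v_0 → v_0/m can be checked directly, since the scaled n:th velocity coincides with the (n·m):th velocity of the original sequence:

```python
import math

v0 = 2 ** -11  # in units of c, the value used in the text

def v(n, v0=v0):
    # orbital velocity of the n:th Bohr orbit, v_n = (v0 / sqrt(2)) / n
    return (v0 / math.sqrt(2)) / n
```

For example, v(3, v0/5) equals v(15, v0): scaling v_0 by 1/5 simply relabels the orbits.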

The Bohr radius scales as r_0 = hbar/(Zα·m) → GM/v_0^2.

For v_0 = 2^(-11) this gives r_0 = 2^22·GM ≈ 4×10^6·GM. In the case of the Sun this is above the solar radius, though not by too much.

The frequency ω(n,n−k) of the dark graviton emitted in the n → n−k transition and the orbital rotation frequency ω_n are given by

ω(n,n−k) = (v_0^3/GM) × (1/(n−k)^2 − 1/n^2) ≈ k·ω_n ,

ω_n = v_0^3/(GM·n^3) .

At the large n limit the emitted frequencies are harmonics of the orbital rotation frequency, so that quantum classical correspondence holds true. For small values of n the emitted frequencies differ from the harmonics of the orbital frequency.

The energy emitted in n→n-k transition would be

E(n,n−k) = m·v_0^2 × (1/(n−k)^2 − 1/n^2) ,

and is obviously enormous. A single spherical dark graviton would be emitted in the transition and should decay into gravitons with smaller values of hbar. The bunch-like character of the detected radiation might serve as the signature of the process. The bunch-like character of the liberated dark gravitational energy means coherence and might play a role in the coherent locomotion of living matter. For a pair of systems with masses m = 1 kg this would mean Gm^2/v_0 ≈ 10^20, meaning that the exchanged dark graviton corresponds to a bunch containing about 10^20 ordinary gravitons. The energies of the graviton bunches would correspond to the differences of the gravitational energies between the initial and final configurations, which in principle would allow one to deduce the Bohr orbits between which the transition took place. Hence dark gravitons could make possible the analog of spectroscopy in astrophysical length scales.

E. Generalization to gauge interactions

The situation is expected to be essentially the same for gauge interactions. The first guess is that one has r = Q_1Q_2·g^2/v_0, where g is the coupling constant of the appropriate gauge interaction. v_0 need not be the same as in the gravitational case. The value of Q_1Q_2·g^2 for which perturbation theory fails defines a plausible estimate for v_0; the naive guess would be v_0 ≈ 1. In the case of gravitation this interpretation would mean that the perturbative approach fails for GM_1M_2 = v_0. For r > 1 the Planck constant is quantized with rational values, with ruler-and-compass rationals as favored values. For gauge interactions r would have rather small values. The above criterion applies to the field body connecting two gauge charged systems. One can generalize this picture to the self interactions assignable to the "personal" field body of the system. In this case the condition would read Q^2·g^2/v_0 >> 1.

E.1 Applications

One can imagine several applications.

  • A possible application would be to electromagnetic interactions in heavy ion collisions.

  • Gamma ray bursts might be one example of dark photons with a very large value of Planck constant. The MEs carrying gravitons could also carry gamma rays, and this would amplify the value of Planck constant for them too.

  • Atomic nuclei are good candidates for systems whose electromagnetic field body is dark. The value of r would be r = Z^2·e^2/v_0, with v_0 ≈ 1. The electromagnetic field body could become dark already for Z > 3, or even for Z = 3. This suggests a connection with the nuclear string model (see this), in which A < 4 nuclei (with Z < 3) form the basic building bricks of the heavier nuclei, identified as nuclear strings formed from these structures, which are themselves strings of nucleons.

  • Color confinement for light quarks might involve dark gluonic field bodies.

  • Dark photons with a large value of hbar could transmit large energies through long distances, and their phase conjugate variants could make possible a new kind of energy transfer mechanism (see this) essential in the TGD based quantum model of metabolism, with possible technological applications as well. Various kinds of sharp pulses suggest themselves as a manner of producing dark bosons in the laboratory. Interestingly, after having given us alternating electricity, Tesla spent the rest of his professional life experimenting with effects generated by electric pulses. Tesla claimed that he had discovered a new kind of invisible radiation, scalar wave pulses, which could make possible wireless communications and energy transfer on the scale of the globe (see this for a possible but not the only TGD based explanation).
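The nuclear estimate in the list above can be checked numerically, assuming the convention e^2 = 4πα (an assumption; the text does not fix its units) and v_0 ≈ 1:

```python
import math

alpha = 1 / 137.036  # fine structure constant

def r_em(Z, v0=1.0):
    # r = Z^2 * e^2 / v0 with e^2 = 4*pi*alpha (convention assumed here)
    return Z ** 2 * 4 * math.pi * alpha / v0

# r_em(3) is about 0.83 and r_em(4) about 1.47, so r crosses unity between
# Z = 3 and Z = 4, consistent with darkness "for Z > 3 or even for Z = 3".
```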

E.2 In what sense dark matter is dark?

The notion of dark matter as something which has only gravitational interactions brings to mind the concept of the ether and is very probably only an approximate characterization of the situation. As I have gradually developed the notion of dark matter as a hierarchy of phases of matter with increasing values of Planck constant, the naivete of this characterization has indeed become obvious.

If the proposed view is correct, dark matter is dark only in the sense that the process of receiving the dark bosons (say gravitons) mediating the interactions with other levels of the dark matter hierarchy, in particular ordinary matter, differs so dramatically from that predicted by the theory with a single value of Planck constant that the detected dark quanta are unavoidably identified as noise. Dark matter is there and interacts with ordinary matter; living matter in general, and our own EEG in particular, provide the most dramatic examples of this interaction. Hence we could consider dropping "dark matter" from the glossary altogether and replacing the attribute "dark" with the spectrum of Planck constants characterizing the particles (dark matter) and their field bodies (dark energy).

For more details see the chapter TGD and Astrophysics of "Classical Physics in Many-Sheeted Space-time".

Wednesday, April 25, 2007

Would I fish if fish had face?

Yesterday evening I was contacted by a friend. During the phone discussion he told me that during a flight some Finnish physicist had started to talk with him. During the discussion my friend had mentioned my name and to his surprise had received a very strong negative reaction.

My friend, who knows me personally, of course wondered why. It had become clear that mentioning my name touches very sensitive emotional nerves. Is there some reason? Does the mentioning of my name induce the same kind of weird aggression in all my Finnish colleagues? I told as my impression that this negative attitude characterizes most of the collective. I also proposed that it might reflect the characteristic socio-pathology of Finnish society, which one might refer to as "misunderstood equality". It is very difficult for us in Finland to tolerate the feeling that one of us, just one of these ordinary Finnish people, might possibly have done something that might distinguish him or her among other Finns some day. It is easy to find historical reasons for this syndrome.

I however felt that this explanation was not all of it. During the night I saw a strange dream which gave a hint about the deeper psychology. I was fishing (not among my day-time hobbies!). Strangely, half of the fish was out of the water. It had taken the bait in its mouth and was just about to swallow it. There was however something that disturbed me. The "face" of the fish was very clever, I might even say human. It was clear to me that the fish would feel horror and pain if it were to swallow the bait. I woke up and decided that I would not continue my dream hobby if it depended on me.

When I began to ponder this, I realized the connection with the phone discussion. What made the situation so unpleasant was that I saw the "face" of the fish and its ability to suffer. I also felt that in the dream I was both the fisher and the fish, whereas in this something that we are used to calling reality, my colleagues were the fisher and I was the fish. The dream clearly wanted me to imagine what it is to be in the position of my colleagues and to think what they have felt during these 28 years. I have of course done this many times in my attempts to understand, but not in this context.

During the years I have repeatedly encountered two obvious but strange things which the dream expressed symbolically. First of all, most colleagues have avoided personal contact with me to the extent that the situation has often reached comic proportions. On the other hand, colleagues have wanted to label me as something so weird that it simply cannot belong to the same species. During the first years I was labelled as a kind of idiot savant lacking all forms of intelligence and even the ability to feel insults (sic!).

Later the idea of some kind of psychopath was added to the repertoire of my psychopathologies. I got a very concrete demonstration of this when I visited Kazan more than a decade ago. One of the young professors, who had just labelled my work as pure rubbish in an official statement, had allowed the locals to understand that I am more or less a complete psychopath who has cold-heartedly rejected his family (I had divorced at that time).

In light of the dream I find it easy to understand these strange and cruel behaviors of the community towards the individual. Dehumanization is the only possible justification for the cruel behavior of the collective against the individual, and it saves individuals from feeling directly how unethical it is. But dehumanization is possible only if you do not see the victim. If you see that the victim is an intelligent living creature able to suffer, you cannot continue. You wake up, as I did from my dream. Without the refusal of personal contact these people could not continue their dreaming.

The net age has made this painful conflict even more difficult, since it has provided me with communication channels making it even more obvious that I am an intelligent human being and even a theoretical physicist. 15 books on my home page and a name in Wikipedia in the category "Physicists" make it very difficult to seriously continue believing that I am a miserable crackpot. It is clear that I can write. Probably also my ability to talk would become manifest if some academic instance in Finland would invite me to tell about my work. I can express myself also in my blog, and anyone can become convinced that the label of a mentally retarded psychopath is not from this world.

My belief is that the reason why this situation has reached the verge of collective madness is that for a collective it is extremely difficult to admit that the path once light-heartedly chosen is wrong. Even if it becomes obvious that something horribly wrong and cruel is being done. As perfectly normal and benevolent individuals these people certainly feel that they are doing something very bad, and they have probably sometimes experienced that I have become a fish with a human face, crying like mad for pain and, even more horrible, refusing to die. This kind of unresolved psychological conflict must be very painful as it continues. For myself, I have done my best not to evoke these feelings in my colleagues, as anyone in this kind of position is biologically programmed to do (Stockholm syndrome, not that one;-)!)

My story is of course a rather tame version of what happens everywhere around the world all the time. There is however something very special about the community of theoretical physics. There is horrible competition, and the situation is not improved by the fact that the community resembles in many respects primitive communities dominated by archaic myth figures (Newton, Einstein,...). There is something very primitive in these persons being given the status of God, and in the fact that an individual who just dares to think aloud can be professionally destroyed by dooming him to be a sufferer of the Einstein syndrome.

If each of us had the social maturity of a sixty year old at the age of twenty, I would not be writing this. At the first pole of the problem is the self-centeredness of a young person and his poor ability to put himself in the shoes of another human being. At the other pole is the desire for social acceptance and our very poor ability to resist social pressures. No institutional reform can serve as a fast miracle cure, since each individual must start his personal social evolution from scratch. Cultural evolution is needed. Although this evolution has been very fast and perhaps exponential on the biological time scale, it is desperately slow when you take the human lifetime as a time unit.

There are however flashes of light from the darkness now and then. Just a few days ago I experienced something historic. The number of those Finnish colleagues who have contacted me during these years on their own initiative can be counted using the fingers of a single hand; actually a single finger is enough! Now I must add a second finger to my counting system;-). A colleague, who has already resigned, sent me an email and asked my opinion about something as a theoretical physicist. My opinion! As a theoretical physicist!! I was totally embarrassed about my own happiness and had to work hard to calm myself down!

P.S. I have two especially active net-enemies in Finland: Lauri Gröhn and "Optimistx". Both of these fellows have an attitude to truth which brings to my mind the notion of "creative book-keeping". It is frightening to see how deep a hatred my thoughts generate in these fellows. Finnish readers can find my comments about these skeptic militants in Finnish here.