Saturday, January 31, 2009

A comment about thermodynamics of dark black holes

Lubos Motl had an excellent posting about the thermodynamics of black holes. Unfortunately I am too busy with updates to give a detailed response. Just a hasty comment about the thermodynamics of dark black holes, inspired by the vision of dark matter as a hierarchy of phases with a non-standard value of Planck constant, realized in terms of a book-like structure of the generalized imbedding space (a generalization of H=M4×CP2) with pages labeled by the values of Planck constant, and with phase transitions changing Planck constant interpreted as leakage between different pages of the Big Book.

Suppose we accept the identification of dark matter in astrophysical length scales as matter with a gigantic gravitational Planck constant, as suggested by the Bohr orbitology of planetary orbits. For instance, hbar = GM^2/v0, v0 = 1/4, would hold true for an ideal black hole with Planck length (hbar G)^(1/2) equal to the Schwarzschild radius 2GM. Since black hole entropy is inversely proportional to hbar, this would predict a black hole entropy of the order of a single bit. This of course looks totally nonsensical if one believes in standard thermodynamics. For the star with mass equal to 10^40 Planck masses discussed in the example of Lubos, the entropy associated with the initial state of the star would be roughly the number of atoms in the star, about 10^60. Black hole entropy proportional to GM^2/hbar would be of order 10^80 provided the standard value of hbar is used as unit.
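The scaling argument can be checked with simple arithmetic. A minimal sketch in Planck units (G = c = 1, standard hbar = 1), using only the rough scalings quoted above rather than precise Bekenstein-Hawking coefficients:

```python
# Order-of-magnitude check of black hole entropy scaling with hbar.
# All values are rough scalings in Planck units, taken from the text.

M = 1e40            # stellar mass in Planck masses (Lubos's example)
v0 = 0.25           # the assumed value v0 = 1/4

S_standard = M**2        # S ~ G M^2 / hbar with hbar = 1: about 10^80
hbar_gr = M**2 / v0      # gravitational Planck constant hbar_gr = G M^2 / v0
S_dark = M**2 / hbar_gr  # entropy with hbar -> hbar_gr: equals v0, order one

print(S_standard)   # ~1e80
print(S_dark)       # 0.25: "of the order of a single bit"
```

The point of the sketch is only that substituting hbar_gr for hbar cancels the GM^2 factor entirely, leaving an entropy of order v0.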

This stimulates some questions.

  1. Does the second law pose an upper bound on the value of hbar of a dark black hole from the requirement that the black hole has at least the entropy of the initial state? The maximum value of hbar would be given by the ratio of the black hole entropy to the entropy of the initial state, about 10^20 in the example of Lubos, to be compared with GM^2/v0 ≈ 10^80.

  2. Or should one generalize thermodynamics in the manner suggested by zero energy ontology by making an explicit distinction between subjective time (the sequence of quantum jumps) and geometric time? The arrow of geometric time would correlate with that of subjective time. One can argue that geometric time has opposite directions for the positive and negative energy parts of the zero energy state, interpreted in standard ontology as the initial and final states of a quantum event. If the second law held true with respect to subjective time, the formation of an ideal dark black hole would destroy entropy only from the point of view of an observer with the standard arrow of geometric time. The behavior of phase conjugate laser light would be a more mundane example. Do self-assembly processes serve as an example of a non-standard arrow of geometric time in biological systems? In fact, a zero energy state is geometrically analogous to a big bang followed by a big crunch. One can however criticize the basic assumption as an ad hoc guess. One should really understand the arrow of geometric time. This is discussed in detail in the article About the Nature of Time.
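The bound of the first question follows from simple ratios. A sketch using the order-of-magnitude numbers quoted in the text (all inputs are rough):

```python
# Second-law bound on the dark Planck constant: the black hole must carry
# at least the entropy of the initial star. Order-of-magnitude inputs.

S_initial = 1e60        # entropy of the star, ~ number of atoms
S_bh_standard = 1e80    # S ~ G M^2 / hbar with the standard hbar

# S_bh scales as 1/hbar, so demanding S_bh >= S_initial when hbar is
# multiplied by a factor r gives r <= S_bh_standard / S_initial:
hbar_max_ratio = S_bh_standard / S_initial   # ~1e20

v0 = 0.25
M = 1e40                                     # mass in Planck masses
hbar_gr_ratio = M**2 / v0                    # GM^2/v0 ~ 4e80 in hbar units

print(hbar_max_ratio)                        # ~1e20
print(hbar_gr_ratio > hbar_max_ratio)        # True: far above the bound
```

So the ideal dark black hole value GM^2/v0 exceeds the second-law bound by some sixty orders of magnitude, which is what forces the choice between the two options above.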

Monday, January 19, 2009

Important notice about links which do not work anymore!!

Readers have perhaps noticed that the discovery of a new long-lived particle in CDF, predicted by TGD already around 1990, turned out to be one of the most fantastic breakthroughs of TGD. The reported findings could be explained and even predicted at a quantitative level, and a lot of testable predictions follow from the model, as one might expect since essentially a leptonic variant of QCD is predicted.

My influential colleagues in Helsinki University, who for the last 31 years have told on every possible occasion that I am totally mad, became understandably very angry. Since they could not punish Nature for behaving according to the predictions of TGD, they decided to punish me. I am not allowed to use the university computer for my homepage anymore. This would not be a problem as such, but they also refused to redirect visitors to my new homepage. These idiots have reached their holy goal: TGD has more or less disappeared from the web.

I have not yet had time and energy to update links from the blog to my homepage. When the link fails to work the recipe is however very simple: replace

in the link with

and everything should work. Thank you very much for your kind attention.

Matti Pitkänen

The recent view about the construction of configuration space spinor structure

During the last five years both the mathematical and physical understanding of quantum TGD has developed dramatically. Some ideas have died, and a large number of conjectures have turned out to be unnecessarily strong, unnecessary, or simply wrong. The outcome is that the books about basic TGD do not correspond to the actual situation in the theory. Therefore I decided to perform a major cleaning operation, throwing away the obsolete stuff and making good arguments more precise. Good housekeeping is not my only motivation: this kind of process, although it challenges the ego, is always extremely fruitful. The basic goal has been to replace the perspective as it was five years ago with one which is the outcome of the development of visions and concepts such as: the fundamental description of quantum TGD as almost topological QFT in terms of the modified Dirac action for fermions at light-like 3-surfaces identified as the basic objects of the theory; zero energy ontology; finite measurement resolution as a fundamental physical principle realized in terms of Jones inclusions and having number theoretic braids as space-time correlate; the generalization of S-matrix to M-matrix; number theoretical universality and number theoretical compactification, reducing standard model symmetries to number theory and allowing one to solve some basic problems of quantum TGD; the realization of the hierarchy of Planck constants in terms of the generalization of the imbedding space concept; the discovery of a hierarchy of symplectic fusion algebras providing a concrete understanding of super-symplectic conformal invariance; and so on.

I started the cleaning up process from the chapter Configuration Space Spinor Structure and I glue below the abstract.

Quantum TGD should be reducible to the classical spinor geometry of the configuration space. In particular, physical states should correspond to the modes of the configuration space spinor fields. The immediate consequence is that configuration space spinor fields cannot, as one might naively expect, be carriers of a definite spin and unit fermion number. Concerning the construction of the configuration space spinor structure there are some important clues.

1. Geometrization of fermionic statistics in terms of configuration space spinor structure

The great vision has been that the second quantization of the induced spinor fields can be understood geometrically in terms of the configuration space spinor structure in the sense that the anti-commutation relations for configuration space gamma matrices require anti-commutation relations for the oscillator operators for free second quantized induced spinor fields.

  1. One must identify the counterparts of second quantized fermion fields as objects closely related to the configuration space spinor structure. The Ramond model has as its basic field the anti-commuting field Γ^k(x), whose Fourier components are analogous to the gamma matrices of the configuration space and which behaves like a spin 3/2 fermionic field rather than a vector field. This suggests that the complexified gamma matrices of the configuration space are analogous to spin 3/2 fields and therefore expressible in terms of the fermionic oscillator operators, so that their anti-commutativity naturally derives from the anti-commutativity of the fermionic oscillator operators.

    As a consequence, configuration space spinor fields can have arbitrary fermion number and there would be hopes of describing the whole physics in terms of configuration space spinor field. Clearly, fermionic oscillator operators would act in degrees of freedom analogous to the spin degrees of freedom of the ordinary spinor and bosonic oscillator operators would act in degrees of freedom analogous to the 'orbital' degrees of freedom of the ordinary spinor field.

  2. The classical theory for the bosonic fields is an essential part of the configuration space geometry. It would be very nice if the classical theory for the spinor fields were somehow contained in the definition of the configuration space spinor structure. The properties of the modified massless Dirac operator associated with the induced spinor structure are indeed very physical. The modified massless Dirac equation for the induced spinors predicts a separate conservation of baryon and lepton numbers. The differences between quarks and leptons result from the different couplings to the CP2 Kähler potential. In fact, these properties are shared by the solutions of the massless Dirac equation of the imbedding space.

  3. Since TGD should have a close relationship to ordinary quantum field theories, it would be highly desirable that the second quantized free induced spinor field would somehow appear in the definition of the configuration space geometry. This is indeed true if the complexified configuration space gamma matrices are linearly related to the oscillator operators associated with the second quantized induced spinor field on the space-time surface and/or its boundaries. There is actually no deep reason forbidding the gamma matrices of the configuration space from being half-odd-integer spin objects, whereas in the finite-dimensional case this is not possible in general. In fact, in the finite-dimensional case the equivalence of the spinorial and vectorial vielbeins forces the spinor and vector representations of the vielbein group SO(D) to have the same dimension, and this is possible only for D=8-dimensional Euclidean space. This coincidence might explain the success of 10-dimensional superstring models, for which the physical degrees of freedom effectively correspond to an 8-dimensional Euclidean space.

  4. It took a long time to realize that the ordinary definition of the gamma matrix algebra in terms of the anti-commutators {γ_A, γ_B} = 2g_AB must in TGD context be replaced with {γ_A^†, γ_B} = i J_AB, where J_AB denotes the matrix elements of the Kähler form of the configuration space. The presence of the Hermitian conjugation is necessary because configuration space gamma matrices carry fermion number. This definition is numerically equivalent with the standard one in the complex coordinates. The realization of this delicacy is necessary in order to understand how the square of the configuration space Dirac operator comes out correctly.

  5. The only possible option is that the second quantized induced spinor fields are defined at the 3-D light-like causal determinants associated with a 4-D space-time sheet. The unique partonic dynamics is an almost topological QFT defined by the Chern-Simons action for the induced Kähler gauge potential and by the modified Dirac action constructed from it by requiring super-conformal symmetry. The resulting theory has all the desired super-conformal symmetries and is exactly solvable at the parton level. It is light-like 3-surfaces rather than generic 3-surfaces which are the fundamental dynamical objects in this approach.

    The classical dynamics of the interior of space-time surface defines a classical correlate for the partonic quantum dynamics and provides a realization of quantum measurement theory. It is determined by the vacuum functional identified as the Dirac determinant. There are good arguments suggesting that it reduces to an exponent of absolute extremum of Kähler action in each region of the space-time sheet where the Kähler action density has a definite sign.
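The claim in point 1 above, that anti-commutativity of the configuration space gamma matrices can be inherited from fermionic oscillator operators, can be illustrated with a toy Jordan-Wigner construction. This is only a two-mode sketch; the coefficients c1, c2 are arbitrary illustrative numbers, not TGD data:

```python
import numpy as np

# Two fermionic modes via the Jordan-Wigner construction: a_m, a_n^dagger
# satisfy {a_m, a_n^dagger} = delta_mn and {a_m, a_n} = 0.
a = np.array([[0., 1.], [0., 0.]])   # single-mode annihilation operator
I2 = np.eye(2)
Z = np.diag([1., -1.])               # Jordan-Wigner string

a1 = np.kron(a, I2)
a2 = np.kron(Z, a)

def anticomm(x, y):
    return x @ y + y @ x

# A "complexified gamma matrix" built linearly from oscillator operators,
# as in the text; c1, c2 are arbitrary complex coefficients.
c1, c2 = 0.6, 0.8j
G = c1 * a1 + c2 * a2
Gd = G.conj().T

# {G, G^dagger} = (|c1|^2 + |c2|^2) * identity and {G, G} = 0: the
# anti-commutativity of the gammas derives purely from that of the
# fermionic oscillator operators.
print(np.allclose(anticomm(G, Gd), (abs(c1)**2 + abs(c2)**2) * np.eye(4)))
print(np.allclose(anticomm(G, G), np.zeros((4, 4))))
```

Note that G itself carries fermion number (it only annihilates), which is the finite-dimensional analog of the reason the Hermitian conjugate appears in the modified anti-commutator of point 4.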

2. Modified Dirac equation for induced classical spinor fields

The identification of the light-like partonic 3-surfaces as carriers of elementary particle quantum numbers inspired by the TGD based quantum measurement theory forces the identification of the modified Dirac action as that associated with the Chern-Simons action for the induced Kähler gauge potential. At the fundamental level TGD would be almost-topological super-conformal QFT in the sense that only the light-likeness condition for the partonic 3-surfaces would involve the induced metric. Chern-Simons dynamics would thus involve the induced metric only via the generalized eigenvalue equation for the modified Dirac operator involving the light-like normal of X3l subset X4. N=4 super-conformal symmetry emerges as a maximal Super-Kac Moody symmetry for this option. The application of D to any generalized eigen-mode gives a zero mode and zero modes and generalized eigen-modes define a cohomology.

The basic idea is that the Dirac determinant defined by the eigenvalues of DC-S can be identified as the exponent of the Kähler action for a preferred extremal. There are however two problems. Without further conditions the eigenvalues of DC-S are functions of the transversal coordinates of X3l and the standard definition of the Dirac determinant fails. The second problem is how to feed the information about the preferred extremal into the eigenvalue spectrum. The solution of these problems is discussed below.

The eigen modes of the modified Dirac equation are interpreted as generators of exact N=4 super-conformal symmetries in both the quark and lepton sectors. These super-symmetries correspond to pure super gauge transformations and no spartners of ordinary particles are predicted: in particular, the N=2 space-time super-symmetry generated by the right-handed neutrino is absent, contrary to the earliest beliefs. There is no need to emphasize the experimental implications of this finding.

An essential difference with respect to standard super-conformal symmetries is that Majorana condition is not satisfied, the super generators carry quark or lepton number, and the usual super-space formalism does not apply. The situation is saved by the fact that super generators of super-conformal algebras anticommute to Hamiltonians of symplectic transformations rather than vector fields representing the transformations.

Configuration space gamma matrices, identified as super generators of super-symplectic or super Kac-Moody algebras (depending on the CH coordinates used), are expressible in terms of the oscillator operators associated with the eigen modes of the modified Dirac operator. The number of generalized eigen modes turns out to be finite, so that standard canonical quantization does not work unless one restricts consideration to the set of points defined as the intersection of the number theoretic braid with the partonic 2-surface. The interpretation is in terms of finite measurement resolution, and the surprising thing is that this notion is implied by the vacuum degeneracy of Kähler action.

3. The exponent of Kähler function as Dirac determinant for the modified Dirac action

Although quantum criticality in principle predicts the possible values of Kähler coupling strength, one might hope that there exists an even more fundamental approach involving no coupling constants, predicting even quantum criticality, and realizing quantum gravitational holography.

  1. The Dirac determinant defined by the product of Dirac determinants associated with the light-like partonic 3-surfaces X3l associated with a given space-time sheet X4 is the simplest candidate for a vacuum functional identifiable as the exponent of the Kähler function. One can of course worry about the finiteness of the Dirac determinant. p-Adicization requires that the eigenvalues belong to a given algebraic extension of rationals. This restriction would imply a hierarchy of physics corresponding to different extensions and could automatically imply the finiteness and algebraic number property of the Dirac determinants if only a finite number of eigenvalues contributed. The regularization would be performed by physics itself if this were the case.

  2. The basic problem has been how to feed the information about the preferred extremal of Kähler action into the eigenvalue spectrum of the C-S Dirac operator DC-S at the light-like 3-surface X3l. The identification of the preferred extremal became possible via boundary conditions at X3l dictated by number theoretical compactification. The basic observation is that the Dirac equation associated with the 4-D Dirac operator DK defined by Kähler action can be seen as a conservation law for a super current. By restricting the super current to flow along X3l, by requiring that its normal component vanishes, one obtains a singular solution of the 4-D modified Dirac equation restricted to X3l. The "energy" spectrum for the solutions of DK corresponds to the spectrum of eigenvalues for DC-S, and the product of the eigenvalues defines the Dirac determinant in the standard manner. Since the eigenmodes are restricted to those localized to regions of non-vanishing induced Kähler form, the number of eigen modes is finite and therefore also the Dirac determinant is finite. The eigenvalues can also be algebraic numbers.

  3. It remains to be proven that the product of eigenvalues gives rise to the exponent of Kähler action for the preferred extremal of Kähler action. At this moment the only justification for the conjecture is that this is the only thing one can imagine. The identification of super-symplectic conformal weights as zeros of the zeta function defined by the eigenvalues of the modified Dirac operator would couple them to the dynamics defined by the Kähler action.

  4. A long-standing conjecture has been that the zeros of Riemann Zeta are somehow relevant for quantum TGD. Riemann zeta is however naturally replaced by the Dirac zeta defined by the eigenvalues of DC-S. It is closely related to Riemann Zeta since the spectrum consists essentially of the cyclotron energy spectra for solutions localized to regions of non-vanishing induced Kähler magnetic field, and hence is in good approximation integer valued up to some cutoff integer. In zero energy ontology the Dirac zeta function associated with these eigenvalues defines a "square root" of thermodynamics, assuming that the energy levels of the system in question are expressible as logarithms of the eigenvalues of the modified Dirac operator, which define a kind of fundamental constants. Critical points correspond to approximate zeros of Dirac zeta, and if the Kähler function vanishes at criticality, as it indeed should, the thermal energies at critical points are in first order approximation proportional to the zeros themselves, so that a connection between quantum criticality and approximate zeros of Dirac zeta emerges.

  5. The discretization induced by the number theoretic braids reduces the world of classical worlds to an effectively finite-dimensional space, and the configuration space Clifford algebra reduces to a finite-dimensional algebra. The interpretation is in terms of finite measurement resolution represented in terms of a Jones inclusion M subset N of HFFs, with M taking the role of complex numbers. The finite-D quantum Clifford algebra spanned by fermionic oscillator operators is identified as a representation of the coset space N/M describing physical states modulo measurement resolution. In the sectors of the generalized imbedding space corresponding to non-standard values of Planck constant, a quantum version of the Clifford algebra is in question.
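The bookkeeping behind points 2-4 is elementary once the spectrum is finite: the Dirac determinant is a finite product, its logarithm is the candidate Kähler action, and the same spectrum defines the Dirac zeta. A sketch with a made-up eigenvalue list standing in for the actual spectrum of DC-S:

```python
import math

# A finite, illustrative eigenvalue spectrum standing in for the
# generalized eigenvalues of the Chern-Simons Dirac operator (finite
# by the vacuum degeneracy argument in the text).
eigenvalues = [1.0, 2.0, 3.0, 5.0, 7.0]

# Dirac determinant = product of eigenvalues; the conjecture is that it
# equals exp(K), with K the Kähler action of the preferred extremal.
det_D = math.prod(eigenvalues)
K = sum(math.log(lam) for lam in eigenvalues)   # candidate Kähler action
print(math.isclose(det_D, math.exp(K)))         # True by construction

# Dirac zeta built from the same spectrum: zeta_D(s) = sum_i lambda_i^(-s).
def dirac_zeta(s):
    return sum(lam**(-s) for lam in eigenvalues)

print(dirac_zeta(2.0))
```

With a finite spectrum the identity det = exp(sum of logs) is trivial; the non-trivial physical content is entirely in the claim that the sum of logarithms equals the Kähler action of the preferred extremal.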

4. Super-conformal symmetries

The almost topological QFT property of the partonic formulation, based on Chern-Simons action and the corresponding modified Dirac action, allows a rich structure of N=4 super-conformal symmetries. In particular, the generalized Kac-Moody symmetries correspond to X3-local isometries respecting the light-likeness condition. A rather detailed view about various aspects of the super-conformal symmetries emerges, leading to the identification of fermionic anti-commutation relations and explicit expressions for configuration space gamma matrices and the Kähler metric. This picture is consistent with the conditions posed by p-adic mass calculations.

Number theoretical considerations play a key role and lead to the picture in which effective discretization occurs so that partonic two-surface is effectively replaced by a discrete set of algebraic points belonging to the intersection of the real partonic 2-surface and its p-adic counterpart obeying the same algebraic equations. This implies effective discretization of super-conformal field theory giving N-point functions defining vertices via discrete versions of stringy formulas.

For the updated version of the chapter see Configuration Space Spinor Structure of "Physics as Infinite-Dimensional Geometry".

Vision about quantization of Planck constant

The quantization of Planck constant has been a basic theme of TGD since 2005, and the perspective in the earlier version of this chapter reflected the situation about a year and a half after the basic idea, stimulated by the finding of Nottale that planetary orbits could be seen as Bohr orbits with an enormous value of Planck constant given by hbar_gr = GM1M2/v0, v0 ≈ 2^-11 for the inner planets. The general form of hbar_gr is dictated by Equivalence Principle. This inspired the ideas that the quantization is due to a condensation of ordinary matter around dark matter concentrated near Bohr orbits and that dark matter is in a macroscopic quantum phase in astrophysical scales.
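The Bohr-orbit rule can be checked against the inner planets: with hbar_gr = GMm/v0, the quantization m·v·r = n·hbar_gr together with the circular-orbit condition v² = GM/r gives v = v0/n and r_n = n²·GM/v0². A sketch using standard astronomical values; this only illustrates Nottale's numerical fit, not a TGD derivation:

```python
G = 6.674e-11        # m^3 kg^-1 s^-2
M_sun = 1.989e30     # kg
c = 2.998e8          # m/s
v0 = c * 2**-11      # ~1.46e5 m/s, Nottale's value for the inner planets

def bohr_radius(n):
    # m*v*r = n*hbar_gr with hbar_gr = G*M*m/v0, plus v^2 = G*M/r,
    # gives v = v0/n and r_n = n^2 * G*M / v0^2 (independent of m).
    return n**2 * G * M_sun / v0**2

# Observed mean orbital radii (m): Mercury (n=3), Venus (n=4), Earth (n=5).
orbits = {3: 5.79e10, 4: 1.082e11, 5: 1.496e11}
for n, r_obs in orbits.items():
    print(n, bohr_radius(n) / r_obs)   # ratios within ~10% of 1
```

The planet mass m drops out of r_n, which is why the same v0 can fit all three inner planets at consecutive values of n.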

The second crucial empirical input was provided by anomalies associated with living matter, to mention only the effects of ELF radiation at EEG frequencies on the vertebrate brain and the anomalous behavior of ionic currents through the cell membrane. If the value of Planck constant is large, the energy of EEG photons is above thermal energy and one can understand the effects on both physiology and behavior. If the ionic currents through the cell membrane have a large Planck constant, the scale of quantum coherence is large and one can understand the observed low dissipation in terms of quantum coherence.
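The energetic part of the EEG argument is simple arithmetic: an ordinary photon at ~10 Hz lies far below thermal energy at body temperature, so only a hugely scaled Planck constant lifts it above kT. A sketch (the 10 Hz frequency is a typical alpha-band value chosen for illustration):

```python
h = 6.626e-34        # Planck constant, J s
k_B = 1.381e-23      # Boltzmann constant, J/K

f_eeg = 10.0         # typical EEG (alpha band) frequency, Hz
T_body = 310.0       # body temperature, K

E_photon = h * f_eeg      # ~6.6e-33 J: ordinary EEG photon energy
E_thermal = k_B * T_body  # ~4.3e-21 J

print(E_photon < E_thermal)  # True: ordinary EEG photons drown in thermal noise

# Minimum scaling of Planck constant needed to lift the photon above
# thermal energy; the ~1e12 factor follows from these numbers alone.
scaling_needed = E_thermal / E_photon
print(f"{scaling_needed:.2e}")
```

So a Planck constant scaled up by roughly twelve orders of magnitude is the minimum needed for EEG photons to have physiological effects via this mechanism.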

1. The evolution of mathematical ideas

From the beginning the basic challenge -besides the need to deduce a general formula for the quantized Planck constant- was to understand how the quantization of Planck constant is mathematically possible. From the beginning it was clear that since particles with different values of Planck constant cannot appear in the same vertex, a generalization of space-time concept is needed to achieve this.

During the last five years or so many deep ideas, both physical and mathematical, related to the construction of quantum TGD have emerged, and this has led to a profound change of perspective in this and also other chapters. The overall view about TGD is described briefly here.

  1. More than five years ago I realized that the von Neumann algebras known as hyperfinite factors of type II1 (HFFs) are highly relevant for quantum TGD, since the Clifford algebra of the configuration space ("world of classical worlds", WCW) is a direct sum over HFFs. Jones inclusions are a particular class of inclusions of HFFs, and quantum groups are closely related to them. This led to a conviction that Jones inclusions can provide a detailed understanding of what is involved and predict a very simple spectrum for the Planck constants associated with M4 and CP2 degrees of freedom (later I replaced M4 by its light cone M4± and finally by the causal diamond CD defined as the intersection of future and past light-cones of M4).

  2. The notion of zero energy ontology replaces physical states with zero energy states consisting of pairs of positive and negative energy states at the light-like boundaries δM4±×CP2 of CDs forming a fractal hierarchy containing CDs within CDs. In standard ontology a zero energy state corresponds to a physical event, say a particle reaction. This led to the generalization of S-matrix to M-matrix, identified as a Connes tensor product characterizing time-like entanglement between positive and negative energy states. M-matrix is a product of the square root of a density matrix and a unitary S-matrix, just like a Schrödinger amplitude is a product of modulus and phase, which means that thermodynamics becomes part of quantum theory and thermodynamical ensembles are realized as single particle quantum states. This led also to a solution of the long-standing problem of understanding how the geometric time of the physicist is related to experienced time, identified as a sequence of quantum jumps interpreted as moments of consciousness in TGD inspired theory of consciousness, which can also be seen as a generalization of quantum measurement theory (see this).

  3. Another closely related idea was the emergence of measurement resolution as a basic element of quantum theory. Measurement resolution is characterized by an inclusion M subset N of HFFs, with M characterizing the measurement resolution in the sense that the action of M creates states which cannot be distinguished from each other within the measurement resolution used. Hence complex rays of state space are replaced with M rays. One of the basic challenges is to define the nebulous factor space N/M having the finite fractional dimension N:M given by the index of the inclusion. It was clear that this space should correspond to a quantum counterpart of the Clifford algebra of the world of classical worlds, reduced to a finite-quantum-dimensional algebra by the finite measurement resolution (see this).

  4. The realization that light-like 3-surfaces at which the signature of the induced metric of the space-time surface changes from Minkowskian to Euclidean are ideal candidates for the basic dynamical objects, besides the light-like boundaries of the space-time surface, was a further decisive step of progress. This led to the vision that quantum TGD is almost topological quantum field theory ("almost" because light-likeness brings in the induced metric) characterized by Chern-Simons action for the induced Kähler gauge potential of CP2. Together with zero energy ontology this led to a generalization of the notion of Feynman diagram: lines correspond to light-like 3-surfaces and vertices to 2-D partonic surfaces at which these 3-D surfaces meet. This means a strong departure from the string model picture. The interaction vertices should be given by N-point functions of a conformal field theory with second quantized induced spinor fields defining the basic fields, in terms of which also the gamma matrices of the world of classical worlds could be constructed as super generators of super conformal symmetries (see this).

  5. By quantum classical correspondence finite measurement resolution should have a space-time correlate. The obvious guess was that this correlate is discretization at the level of the construction of M-matrix. In the almost-TQFT context the effective replacement of light-like 3-surfaces with braids, the basic objects of TQFTs, is the obvious guess. Also number theoretic universality, necessary for the p-adicization of quantum TGD by a process analogous to the completion of rationals to reals and various p-adic number fields, requires discretization, since only rational and possibly some algebraic points of the imbedding space (in suitable preferred coordinates) allow interpretation both as real and p-adic points. It was clear that the construction of M-matrix boils down to the precise understanding of number theoretic braids (see this).

  6. The interaction with M-theory dualities (see this) led to a handful of speculations about dualities possible in the TGD framework, and one of these dualities, M8-M4×CP2 duality, eventually led to a unique identification of number theoretic braids. The dimensions of the partonic 2-surface, space-time, and imbedding space strongly suggest that classical number fields, or more precisely their complexifications, might help to understand quantum TGD. If the choice of imbedding space is unique because of the uniqueness of infinite-dimensional Kähler geometric existence of the world of classical worlds, then the standard model symmetries coded by M4×CP2 should have some deeper meaning, and the most obvious guess is that M4×CP2 can be understood geometrically. SU(3) belongs to the automorphism group of octonions as well as of hyper-octonions M8, identified as a subspace of complexified octonions with Minkowskian signature of the induced metric. This led to the discovery that hyper-quaternionic 4-surfaces in M8 can be mapped to M4×CP2 provided their tangent space contains a preferred M2 subset M4 subset M4×E4. Years later I realized that the map generalizes so that M2 can depend on the point of X4. The interpretation of M2(x) is both as a preferred hyper-complex (commutative) sub-space of M8 and as a local plane of non-physical polarizations, so that a purely number theoretic interpretation of gauge conditions emerges in the TGD framework. This led to rapid progress in the construction of quantum TGD. In particular, the challenge of identifying the preferred extremal of Kähler action associated with a given light-like 3-surface X3l could be solved, and the precise relation between the M8 and M4×CP2 descriptions was understood (see this).

  7. Also the challenge of reducing quantum TGD to the physics of second quantized induced spinor fields found a resolution recently (see this). Years ago it became clear that the vacuum functional of the theory must be the Dirac determinant associated with the induced spinor fields, so that the theory would predict all coupling parameters from quantum criticality. Even more, the vacuum functional should correspond to the exponent of Kähler action for a preferred extremal. The problem was that the generalized eigenmodes of the Chern-Simons Dirac operator allow the generalized eigenvalues to be arbitrary functions of the two coordinates transversal to the light-like direction of X3l. The progress in the understanding of number theoretic compactification however made it possible to understand how the information about the preferred extremal of Kähler action is coded into the spectrum of eigen modes.

    The basic idea is simple, and I actually discovered it more than half a decade ago but forgot! The generalized eigen modes of the 3-D Chern-Simons Dirac operator DC-S correspond to the zero modes of a 4-D modified Dirac operator defined by Kähler action, localized to X3l, so that the induced spinor fields can be seen as 4-D spinorial shock waves. This led to a concrete interpretation of the eigenvalues as analogous to the cyclotron energies of a fermion in the classical electro-weak magnetic fields defined by the induced spinor connection, and a connection with anyon physics emerges by the 2-dimensionality of the evolving system. It was also possible to identify the boundary conditions for the preferred extremal of Kähler action, the analog of a Bohr orbit, at X3l, and to see how general coordinate invariance allows one to use any light-like 3-surface X3 subset X4(X3l), instead of only the wormhole throat, to second quantize the induced spinor field.

  8. It came as a total surprise that due to the huge vacuum degeneracy of induced spinor fields the number of generalized eigenmodes identified in this manner is finite. The good news was that the theory is manifestly finite and zeta function regularization is not needed to define the Dirac determinant. The manifest finiteness had actually been a must from the beginning. The apparently bad news was that the Clifford algebra of WCW constructed from the oscillator operators is bound to be finite-dimensional. The resolution of the paradox comes from the realization that this algebra represents the somewhat mysterious coset space N/M, so that finite measurement resolution and the notion of inclusion are coded by the vacuum degeneracy of Kähler action, and the maximally economical description in terms of inclusions emerges automatically.

  9. A unique identification of number theoretic braids also became possible, and relates to the construction of the generalized imbedding space by gluing together singular coverings and factor spaces of CD\M2 and CP2\S2I to form a book-like structure. Here M2 is a preferred plane of M4 defining the quantization axes of energy and angular momentum, and S2I is one of the two geodesic spheres of CP2. The interpretation of the selection of these sub-manifolds is as a geometric correlate for the selection of quantization axes, and the CD defining the basic sector of the world of classical worlds is replaced by a union corresponding to these choices. Number theoretic braids come in two variants dual to each other, corresponding to the intersection of M2 with the M4 projection of X3l on one hand, and of S2I with the CP2 projection of X3l on the other hand. This is the simplest option and would mean that the points of the number theoretic braid belong to M2 (S2I) and are thus quantum critical, although the entire X2 at the boundaries of CD belongs to a fixed page of the Big Book. This means a solution of the long-standing problem of understanding in what sense the TGD Universe is quantum critical. The phase transitions changing Planck constant correspond to tunneling, represented geometrically by a leakage of the partonic 2-surface from one page of the Big Book to another.

  10. Many other steps of progress have occurred during the last years. Much earlier it had become clear that the basic difference between TGD and string models is that in the TGD framework the super algebra generators are non-hermitian and carry quark or lepton number (see this). The super-space concept is unnecessary because super generators anticommute to Hamiltonians of bosonic symmetries rather than to the corresponding vector fields. This makes it possible to avoid the Majorana condition of superstring models fixing the space-time dimension to 10 or 11. During the last years a much more precise understanding of super-symplectic and super Kac-Moody symmetries has emerged. The generalized coset representation for these two Super Virasoro algebras generalizes Equivalence Principle and predicts as a special case the equivalence of gravitational and inertial masses. The coset construction also provides a justification for p-adic thermodynamics, which is in apparent conflict with super-conformal invariance. The construction of the fusion rules of symplectic QFT as an analog of conformal QFT led to the notion of number theoretic braid and to an explicit construction of a hierarchy of algebras realizing the symplectic fusion rules and the notion of finite measurement resolution (see this). This approach led to a formulation of generalized Feynman diagrams and coupling constant evolution in terms of operads, tailor-made for a mathematical realization of the notion of coupling constant evolution. One of the future challenges is to combine the symplectic fusion algebras with the realization of the hierarchy of Planck constants.

2. The evolution of physical ideas

The evolution of physical ideas related to the hierarchy of Planck constants and to dark matter as a hierarchy of phases of matter with non-standard values of Planck constant was much faster than the evolution of the mathematical ideas, and quite a number of applications have been developed during the last five years.

  1. The basic idea was that ordinary matter condenses around dark matter which is a phase of matter characterized by non-standard value of Planck constant.

  2. The realization that non-standard values of Planck constant give rise to charge and spin fractionization and to anyonization led to a precise identification of the prerequisites of the anyonic phase (see this). If the partonic 2-surface, which can have even astrophysical size, surrounds the tip of CD, the matter at the surface is anyonic and particles are confined at this surface. Dark matter could be confined inside this kind of light-like 3-surfaces, around which ordinary matter condenses. If the radii of the basic pieces of these nearly spherical anyonic surfaces - glued to a connected structure by flux tubes mediating gravitational interaction - are given by Bohr rules, the findings of Nottale can be understood. Dark matter would resemble to a high degree matter in black holes, replaced in the TGD framework by light-like partonic 2-surfaces with a minimum size of order the Schwarzschild radius rS given by a scaled up Planck length: rS ~ (hbar G)^(1/2). Black hole entropy, being inversely proportional to hbar, is predicted to be of order unity, so that a dramatic modification of the picture about black holes is implied.

  3. Darkness is a relative concept, due to the fact that particles at different pages of the book cannot appear in the same vertex of a generalized Feynman diagram. Phase transitions in which the partonic 2-surface X2 during its travel along X3l leaks to a different page of the book are however possible and change Planck constant, so that particle exchanges of this kind allow particles at different pages to interact. The interactions are strongly constrained by charge fractionization and are essentially phase transitions involving many particles. Classical interactions are also possible. This allows one to conclude that we are actually observing dark matter via classical fields all the time and have perhaps even photographed it (see this).

  4. Perhaps the most fascinating applications are in biology. The anomalous behavior of ionic currents through the cell membrane (low dissipation, quantal character, no change when the membrane is replaced with an artificial one) has a natural explanation in terms of dark supra currents. This leads to a vision about how dark matter and phase transitions changing the value of Planck constant could relate to the basic functions of the cell, the functioning of DNA and amino acids, and the mysteries of bio-catalysis. It also leads to a model for EEG, interpreted as a communication and control tool of the magnetic body, which contains dark matter and uses the biological body as a motor instrument and sensory receptor. One especially shocking outcome is the emergence of the genetic code of vertebrates from the model of dark nuclei as nuclear strings (see this).

3. Brief summary about the generalization of the imbedding space concept

A brief summary of the basic vision might help the reader to assimilate the more detailed representation of the generalization of the imbedding space.

  1. The hierarchy of Planck constants cannot be realized without generalizing the notions of imbedding space and space-time, since particles with different values of Planck constant cannot appear in the same interaction vertex. This suggests some kind of book like structure for both the M4 and CP2 factors of the generalized imbedding space.

  2. Schrödinger equation suggests that Planck constant corresponds to a scaling factor of the M4 metric whose value labels different pages of the book. A scaling of the M4 coordinates such that the original metric results in the M4 factor is possible, so that the scaling of hbar corresponds to the scaling of the size of the causal diamond CD, defined as the intersection of future and past directed light-cones. The light-like 3-surfaces having their 2-D ends at the light-like boundaries of CD are in a key role in the realization of zero energy states. The infinite-dimensional spaces formed by these 3-surfaces define the fundamental sectors of the configuration space (world of classical worlds). Since the scaling of CD does not simply scale space-time surfaces, the coding of radiative corrections into the geometry of space-time sheets becomes possible, and Kähler action can be seen as an expansion in powers of hbar/hbar0.

  3. Quantum criticality of the TGD Universe is one of the key postulates of quantum TGD. The most important implication is that Kähler coupling strength is analogous to a critical temperature. The exact realization of quantum criticality would be in terms of critical sub-manifolds of M4 and CP2 common to all sectors of the generalized imbedding space. Quantum criticality would mean that the two kinds of number theoretic braids, assignable to the M4 and CP2 projections of the partonic 2-surface, belong by the definition of number theoretic braids to these critical sub-manifolds. At the boundaries of the CD associated with the positive and negative energy parts of the zero energy state in a given time scale, partonic 2-surfaces belong to a fixed page of the Big Book, whereas the light-like 3-surface decomposes into regions corresponding to different values of Planck constant, much as matter decomposes into several phases at thermodynamical criticality.

  4. The connection with Jones inclusions was originally a purely heuristic guess based on the observation that the finite groups characterizing Jones inclusions also characterize the pages of the Big Book. The key observation is that Jones inclusions are characterized by a finite subgroup G ⊂ SU(2) and that this group also characterizes the singular covering or factor spaces associated with CD or CP2, so that the pages of the generalized imbedding space could indeed serve as correlates for Jones inclusions. The elements of the included algebra M are invariant under the action of G, and M takes the role of complex numbers in the resulting non-commutative quantum theory.

  5. The understanding of quantum TGD at the parton level led to the realization that the dynamics of Kähler action realizes finite measurement resolution in terms of a finite number of modes of the induced spinor field. This automatically implies cutoffs in the representations of various super-conformal algebras, typical for the representations of quantum groups closely associated with Jones inclusions. The Clifford algebra spanned by the fermionic oscillator operators would provide a realization for the factor space N/M of hyper-finite factors of type II1, N being identified as the infinite-dimensional Clifford algebra of the configuration space and the included algebra M as determining the finite measurement resolution. The resulting quantum Clifford algebra has anti-commutation relations dictated by the fractionization of fermion number, so that its unit becomes r = hbar/hbar0. The SU(2) Lie algebra transforms to its quantum variant corresponding to the quantum phase q = exp(i2π/r).

  6. Jones inclusions appear in two variants corresponding to N:M < 4 and N:M = 4. The tentative interpretation is in terms of singular G-factor spaces and G-coverings of M4 or CP2 in some sense. The alternative interpretation in terms of the two geodesic spheres of CP2 would mean an asymmetry between M4 and CP2 degrees of freedom.

  7. Number theoretic Universality suggests an answer to why the hierarchy of Planck constants is necessary. One must be able to define the notion of angle - or at least the notion of phase and of trigonometric functions - also in the p-adic context. All that one can achieve naturally is the notion of phase defined as a root of unity, introduced by allowing an algebraic extension of the p-adic number field containing the phase when needed. In the framework of TGD inspired theory of consciousness this inspires a vision about cognitive evolution as the gradual emergence of increasingly complex algebraic extensions of p-adic numbers, involving also the emergence of an improved angle resolution expressible in terms of phases exp(i2π/n) up to some maximum value of n. The coverings and factor spaces would realize these phases geometrically, and the quantum phases q naturally assignable to Jones inclusions would realize them algebraically. Besides the p-adic coupling constant evolution based on the hierarchy of p-adic length scales there would be a coupling constant evolution with respect to hbar, associated with angular resolution.
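The integers n expressible as a product of distinct Fermat primes and a power of two, singled out above as the number theoretically simple phase resolutions exp(i2π/n), are easy to enumerate. A small illustrative sketch (not part of the original text; only the five known Fermat primes are used):

```python
# Enumerate integers n = 2^k * (product of distinct Fermat primes),
# i.e. the number-theoretically simple phase resolutions exp(i*2*pi/n).
# Known Fermat primes: 3, 5, 17, 257, 65537.
from itertools import combinations

FERMAT_PRIMES = [3, 5, 17, 257, 65537]

def simple_phase_orders(n_max):
    """Return sorted n <= n_max of the form 2^k * product of distinct Fermat primes."""
    products = {1}
    for r in range(1, len(FERMAT_PRIMES) + 1):
        for combo in combinations(FERMAT_PRIMES, r):
            p = 1
            for f in combo:
                p *= f
            products.add(p)
    orders = set()
    for p in products:
        k = p
        while k <= n_max:
            orders.add(k)
            k *= 2
    return sorted(orders)

print(simple_phase_orders(40))
# -> [1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40]
```

Note that n = 7 is the smallest excluded order, corresponding to the heptagon that cannot be constructed with ruler and compass.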

For the updated version of the chapter see Does TGD Predict a Spectrum of Planck Constants? of "Towards S-matrix".

Saturday, January 17, 2009

Unidentified spectral noise as a new experimental support for quantum TGD?

The news of yesterday morning came in an email from Jack Sarfatti: the gravitational wave detectors in the GEO600 experiment have been plagued by unidentified noise in the frequency range 300-1500 Hz. Craig J. Hogan has proposed an explanation in terms of a holographic Universe. By reading the paper I learned that the assumptions needed are essentially those of quantum TGD. Light-like 3-surfaces as basic objects, holography, effective 2-dimensionality - these are some of the terms appearing repeatedly in the article.

Maybe this means a new discovery giving support for TGD. I hope that it does not make my life even more difficult in Finland. Readers have perhaps noticed that the discovery of a new long-lived particle in CDF, predicted by TGD already around 1990, turned out to be one of the most fantastic breakthroughs of TGD, since the reported findings could be explained at a quantitative level. The side effect was that Helsinki University no longer allowed me to use the computer for my homepage, and they also refused to redirect visitors to my new homepage. The goal was achieved: I have more or less disappeared from the web. It seems that TGD is becoming really dangerous and the power holders of science are getting nervous.

In any case, I could not resist the temptation to spend the day with this problem, although I had firmly decided to use all my available time for updating the basic chapters of quantum TGD.

1. The experiment

Consider first the graviton detector used in the GEO600 experiment. The detector consists of two long arms (each 600 meters long) - essentially rulers of equal length. An incoming gravitational wave causes a periodic stretching of the arms: the lengths of the rulers vary. The detection of gravitons means that a laser beam is used to keep record of the varying length difference. This is achieved by splitting the laser beam into two pieces using a beam splitter. After this the beams travel through the arms and bounce back to interfere in the detector. The interference pattern tells whether the beams spent slightly different times in the arms due to the stretching caused by the incoming gravitational radiation. The problem of the experimenters has been the presence of an unidentified noise in the range 100-1500 Hz.

The prediction of Measurement of quantum fluctuations in geometry by Craig Hogan, published in Phys. Rev. D 77, 104031 (2008), is that the holographic geometry of space-time should induce fluctuations of the classical geometry with a spectrum which is completely fixed. Hogan's prediction is very general and - if I have understood correctly - the fluctuations depend only on the duration (or length) of the laser beam, using Planck length as a unit. Note that there is no dependence on the length of the arms: the fluctuations characterize only the laser beam. Although Planck length appears in the formula, the fluctuations need not have anything to do with gravitons but could be due to the failure of the classical description of laser beams. The great surprise was that Hogan's prediction for the noise is of the same order of magnitude as the unidentified noise bothering the experimenters in the range 100-700 Hz.

2. Hogan's theory

Let us try to understand Hogan's theory in more detail.

  1. The basic quantitative prediction of the theory is very simple. The spectral density of the noise at high frequencies is given by h_H = t_P^(1/2), where t_P = (hbar G)^(1/2) is the Planck time (c=1). At low frequencies h_H is proportional to 1/f, just like 1/f noise. The power density of the noise is given by t_P, and a connection with the poorly understood 1/f noise appearing in electronic and other systems is suggestive. The prediction depends only on the Planck scale, so it should be very easy to kill the model if one is able to reduce the noise from other sources below the critical level t_P^(1/2). The model also predicts the distribution characterizing the uncertainty in the direction of arrival of a photon in terms of the ratio l_P/L. Here L is the length of the beam or, equivalently, its duration. A further prediction is that the minimal uncertainty in the arrival time of photons is given by Δt = (t_P t)^(1/2) and increases with the duration of the beam.

  2. Both quantum and classical mechanisms are discussed as explanations of the noise. Gravitational holography is the key assumption behind both models. Gravitational holography states that at the fundamental level space-time geometry has two space dimensions instead of three, and that the third dimension emerges via holography. A further assumption is that light-like (null) 3-surfaces are the fundamental objects. Sounds familiar!
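As a rough numerical check of the orders of magnitude quoted above, one can evaluate the Planck time, the high-frequency spectral density t_P^(1/2), and the minimal arrival-time uncertainty Δt = (t_P t)^(1/2); the sample beam durations below are illustrative choices, not values from Hogan's paper:

```python
# Order-of-magnitude check of the predictions quoted above (SI units).
import math

hbar = 1.054571817e-34   # J s
G = 6.67430e-11          # m^3 kg^-1 s^-2
c = 2.99792458e8         # m/s

t_P = math.sqrt(hbar * G / c**5)   # Planck time, ~5.4e-44 s
h_high = math.sqrt(t_P)            # high-frequency spectral density, Hz^-1/2

print(f"Planck time t_P ~ {t_P:.2e} s")
print(f"high-frequency spectral density t_P^(1/2) ~ {h_high:.2e} Hz^-1/2")

# Minimal uncertainty in photon arrival time, dt = (t_P * t)^(1/2),
# for two illustrative beam durations:
for t in (1 / 700, 1.0):
    print(f"beam duration {t:.3g} s -> dt ~ {math.sqrt(t_P * t):.2e} s")
```

The resulting spectral density ~2.3e-22 per root hertz is indeed in the same ballpark as the sensitivity of the GEO600-class detectors discussed here.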

2.1 Heuristic argument

The model starts from an optics inspired heuristic argument.

  1. Consider a light ray of length L which ends at an aperture of size D. This gives rise to a diffraction spot of size λL/D. The resulting uncertainty in the transverse position of the source is minimized when the size of the diffraction spot equals the aperture size. This gives for the transverse uncertainty of the position of the source Δx = (λL)^(1/2). The orientation of the ray can be determined with a precision Δθ = (λ/L)^(1/2): the shorter the wavelength, the better the precision. Planck length is believed to pose a fundamental limit to the precision. The conjecture is that the transverse indeterminacy of Planck wave length quantum paths corresponds to the quantum indeterminacy of the metric itself. What this means is not quite clear to me.

  2. The basic outcome of the model is that the uncertainty in the arrival times of the photons after reflection is proportional to

    Δt = t_P^(1/2) × (sin θ)^(1/2) × sin(2θ),

    where θ denotes the angle of incidence on the beam splitter. In the normal direction Δt vanishes. The proposed interpretation is in terms of Brownian motion of the distance between beam splitter and detector, the idea being that each reflection from the beam splitter adds uncertainty. This is essentially due to the replacement of the light-like surface with a new one orthogonal to it, inducing a measurement of the distance between detector and beam splitter.
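To get a feel for these numbers, here is a quick numerical sketch; taking λ equal to the Planck length follows Hogan's conjecture, the 600 m arm length is from the experiment above, and the final loop simply evaluates the Δt expression quoted above at a few angles:

```python
# Diffraction-heuristic numbers with lambda = Planck length, L = 600 m.
import math

hbar, G, c = 1.054571817e-34, 6.67430e-11, 2.99792458e8
l_P = math.sqrt(hbar * G / c**3)   # Planck length, ~1.6e-35 m
L = 600.0                          # GEO600 arm length, m

dx = math.sqrt(l_P * L)            # transverse position uncertainty (lambda*L)^(1/2)
dtheta = math.sqrt(l_P / L)        # angular uncertainty (lambda/L)^(1/2)
print(f"dx ~ {dx:.2e} m, dtheta ~ {dtheta:.2e} rad")

# Arrival-time spread factor t_P^(1/2) * (sin theta)^(1/2) * sin(2*theta),
# vanishing in the normal direction theta = 0:
t_P = l_P / c
for deg in (0, 30, 45, 90):
    th = math.radians(deg)
    spread = math.sqrt(t_P) * math.sqrt(math.sin(th)) * math.sin(2 * th)
    print(f"theta = {deg:3d} deg -> spread factor ~ {spread:.2e}")
```

The transverse uncertainty ~1e-16 m for a 600 m path shows why such holographic noise, if real, sits just at the edge of interferometer sensitivity.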

This argument has some aspects which I find questionable.

  1. The assumption of Planck wave length waves is certainly questionable. The underlying idea is that it leads to the classical formula involving the aperture size, which is then eliminated from the basic formula by requiring optimal angular resolution. One might argue that a special status for waves with Planck wave length breaks Lorentz invariance, but since the experimental apparatus defines a preferred coordinate system this need not be a problem.

  2. Unless one is ready to forget the argument leading to the formula for Δθ, one can argue that the description of the holographic interaction between distant points induced by these Planck wave length waves in terms of an aperture of size D = (l_P L)^(1/2) should have some more abstract physical counterpart. Could elementary particles as extended 2-D objects (as in TGD) play the role of ideal apertures at which radiation with Planck wave length arrives? If one gives up the assumption about Planck wave length radiation, the uncertainty increases as λ. In my opinion one should be able to deduce the basic formula without this kind of argument.

2.2 Argument based on uncertainty principle for waves with Planck wave length

The second argument can do without diffraction but still uses Planck wave length waves.

  1. One considers the interactions of Planck wave length radiation at a null surface at two different times, corresponding to normal coordinates z1 and z2. From the standard uncertainty relation between the momentum and position of the incoming particle one deduces an uncertainty relation for the transverse position operators x(zi), i=1,2. The uncertainty comes from the uncertainty of x(z2) induced by the uncertainty of the transverse momentum px(z1). The uncertainty relation is deduced by assuming that (x(z2)-x(z1))/(z2-z1) equals the ratio of transverse and longitudinal wave vectors. This relates x(z2) to px(z1), and the uncertainty relation can be deduced. The uncertainty increases linearly with z2-z1. Geometric optics is used to describe the propagation between the two points, and this should certainly work in a situation in which the wavelength is the Planck wavelength, if the notion of a Planck wave length wave makes sense. From this formula the basic predictions follow.

  2. Hogan emphasizes that the basic result is obtained also classically by assuming that the light-like surfaces describing the propagation of light between the end points of an arm perform a Brownian-like random walk in the directions transverse to the direction of propagation. I understand this to mean that the Planck wave length wave is not absolutely necessary in this approach.
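The Brownian-motion reading of the classical argument - transverse variance accumulating linearly in path length, hence rms spread growing like L^(1/2) as in Δx = (λL)^(1/2) - can be illustrated with a toy random walk (step size and walk counts are arbitrary illustrations, not anything from Hogan's paper):

```python
# Toy random walk: variance of transverse displacement grows linearly
# with the number of steps (the "path length"), so rms spread ~ sqrt(L).
import random
import statistics

def transverse_variance(n_steps, n_walks=4000, step=1.0, seed=1):
    """Sample variance of the transverse displacement after n_steps random kicks."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_walks):
        x = 0.0
        for _ in range(n_steps):
            x += rng.choice((-step, step))
        finals.append(x)
    return statistics.pvariance(finals)

v100 = transverse_variance(100)
v400 = transverse_variance(400)
# Quadrupling the path length should roughly quadruple the variance:
print(v100, v400, v400 / v100)
```

The ratio comes out close to 4, i.e. doubling the path length doubles the variance squared spread, which is exactly the square-root growth the heuristic formula encodes.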

2.3 Description in terms of equivalent gravitonic wave packet

Hogan also discusses an effective description of the holographic noise in terms of a gravitational wave packet passing through the system.

  1. The holographic noise at frequency f has an equivalent description in terms of a gravitational wave packet of frequency f and duration T = 1/f passing through the system. In this description the variance for the length difference of the arms is given by the standard formula for a gravitational wave packet

    Δl^2/l^2 = h^2 f,

    where h characterizes the spectral density of the gravitational wave.

  2. For high frequencies one obtains

    h = h_P = t_P^(1/2).

  3. For low frequencies the model predicts

    h = (f_res/f) × t_P^(1/2).

    Here f_res characterizes the inverse of the residence time in the detector and is estimated to be about 700 Hz in the GEO600 experiment.

  4. The predictions of the theory are compared to the unidentified noise in the frequency range 100-600 Hz, which introduces an amplifying factor varying from 7 to 1. The orders of magnitude are the same.
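The piecewise spectrum in items 2-3 and the resulting arm-length fluctuation can be evaluated numerically; the 700 Hz residence frequency and 600 m arm length are the values quoted above, and Δl²/l² = h²f is the wave packet relation from item 1:

```python
# Equivalent gravitational-wave-packet description of the holographic noise.
import math

hbar, G, c = 1.054571817e-34, 6.67430e-11, 2.99792458e8
t_P = math.sqrt(hbar * G / c**5)   # Planck time
f_res = 700.0                      # Hz, residence frequency estimated for GEO600
arm = 600.0                        # m, GEO600 arm length

def h_spectral(f):
    """Spectral density: t_P^(1/2) at high f, (f_res/f)*t_P^(1/2) below f_res."""
    h_P = math.sqrt(t_P)
    return h_P if f >= f_res else (f_res / f) * h_P

for f in (100.0, 300.0, 700.0):
    h = h_spectral(f)
    dl_over_l = h * math.sqrt(f)   # from Delta l^2 / l^2 = h^2 * f
    print(f"f = {f:5.0f} Hz: h ~ {h:.2e} Hz^-1/2, Delta l ~ {arm * dl_over_l:.2e} m")

# The amplifying factor f_res/f runs from 7 at 100 Hz down to 1 at 700 Hz:
print(h_spectral(100.0) / h_spectral(700.0))
```

This reproduces the factor-of-7 enhancement at the low end of the 100-600 Hz comparison window mentioned above.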

3. TGD based model

In the TGD based model for the claimed noise one can avoid the assumption about waves with Planck wave length. Rather, Planck length corresponds to the size of the transversal cross section of the so called massless extremals (MEs) assignable to photons, orthogonal to the direction of propagation. Further elements are the so called number theoretic braids leading to a discretization of quantum TGD at the fundamental level. The mechanism inducing the distribution of the travel times of the reflected photon is due to the transverse extension of MEs and the discretization in terms of number theoretic braids. Note that also in Hogan's model it is essential that one can speak about the position of a particle in the beam.

3.1 Some background

Consider first the general picture behind the TGD inspired model.

  1. What Hogan emphasizes can be condensed into the following statement: the transverse indeterminacy of Planck wave length seems likely to be a feature of 3+1-D space-time emerging as a dual of quantum theory on a 2+1-D null surface. In TGD light-like 3-surfaces indeed are the fundamental objects, and the 4-D space-time surface is in a holographic relation to these light-like 3-surfaces. The analog of conformal invariance in the light-like radial direction implies that partonic 2-surfaces are actually the basic objects at short scales, in the sense that one has 3-dimensionality only in a discretized sense.

  2. The interpretation as an almost topological quantum field theory, the notion of finite measurement resolution, number theoretical universality making possible the p-adicization of quantum TGD, and the notion of quantum criticality all lead to a fundamental description in terms of discrete point sets. These are defined as intersections of what I call number theoretic braids with partonic 2-surfaces X2 at the boundaries of causal diamonds, identified as intersections of future and past directed light-cones forming a fractal hierarchy. These 2-surfaces X2 correspond to the ends of light-like 3-surfaces. Only the data from this discrete point set is used in the definition of the M-matrix: there is however a continuum of selections of this data set, corresponding to different directions of the light-like ray at the boundary of the light-cone; in detection one of these directions is selected and corresponds to the direction of the beam in the recent case.

  3. Fermions correspond to a CP2 vacuum extremal with Euclidian signature of the induced metric condensed to a space-time sheet with Minkowskian signature; the light-like wormhole throat, for which the 4-metric is degenerate, carries the quantum numbers. Bosons correspond to wormhole contacts consisting of a piece of CP2 type vacuum extremal connecting two space-time sheets with Minkowskian signature of the induced metric. The strands of number theoretic braids carry fermionic quantum numbers, and the discretization is interpreted as a space-time correlate for the finite measurement resolution implying the effectively grainy nature of the 2-surfaces.

3.2 The model

Consider now the TGD inspired model for a laser beam of fixed duration T.

  1. In the TGD framework beams of photons, and perhaps also photons themselves, would have so called massless extremals as space-time correlates. The identification of gauge bosons as wormhole contacts means that there is a pair of MEs connected by a piece of CP2 type vacuum extremal carrying a fermion and an antifermion at the wormhole throats defining light-like 3-surfaces. The intersection of an ME with the light-cone boundary would represent a partonic 2-surface, and any transverse cross section of the M4 projection of the ME is possible.

  2. The reflection of an ME has a description in terms of generalized Feynman diagrams, for which the incoming lines correspond to the light-like 3-surfaces and vertices to the partonic 2-surfaces at which the MEs are glued together. In the simplest model this surface defines a transverse cross section of both the incoming and the outgoing ME. The incoming and outgoing braid strands end at different points of the cross section, because if two points coincide the N-point correlation function vanishes. This means that in the reflection the distribution of the positions of the braid points, representing the exact positions of the photon, changes in a non-deterministic manner. This induces a quantum distribution of the transverse coordinates associated with the braid strands, and in the detection a state function reduction occurs, fixing the positions of the braid strands.

  3. The transversal cross section has maximum area when it is parallel to the ME. In this case the area is, apart from a numerical constant, equal to d×L, where L is the length defined by the duration of the laser beam defining the length of the ME, and d is the diameter of the orthogonal cross section of the ME. This makes natural the assumption that the distribution of the positions of points in the cross section is Gaussian with variance equal to d×L. The distribution proposed by Hogan is obtained if d is given by the Planck length. This would mean that the minimum area for a cross section of an ME is very small, about S = hbar×G. This might make sense if the ME represents a laser beam.

  4. The assumption susceptible to criticism is that for the primordial ME representing a photon the size of the cross section orthogonal to the direction of propagation is always given by the Planck length. This assumption of course replaces Hogan's Planck wave. Note that the classical four-momentum of an ME is massless. One could however argue that in the quantum situation the transverse momentum squared is a well defined quantum number of order Planck mass squared.

  5. In the TGD Universe a single photon would differ from an infinitely narrow ray by having a thickness defined by the Planck length. There would be just a single braid strand, and its position would change in the reflection. The most natural interpretation indeed is that the pair of space-time sheets associated with the photon consists of MEs with different transversal size scales: the larger ME could represent the laser beam. The noise would come from the lowest level of the hierarchy. One could argue that the natural size for the M4 projection of the wormhole throat is of order the CP2 size R and therefore roughly 10^4 Planck lengths. If the cross section has an area of order R^2, the spectral density would be roughly a factor 100 larger than for Planck length, and this might predict too large a holographic noise in the GEO600 experiment if the value of f_res is correct. The assumption that the Gaussian characterizing the position distribution of the wormhole throat is very strongly concentrated near the center of an ME with transverse size given by R looks un-natural.

  6. It is important to notice that a single reflection of the primordial ME corresponds to minimal spectral noise. Repeated reflections of the ME in different directions gradually increase its transversal size, so that the outcome is a cylindrical ME with radius of order L = cT, where T is the duration of the ME. At this limit the spectral density of the noise would be T^(1/2), meaning that the uncertainty in the frequency assignable to the arrival time of photons would be of the same order as the oscillation period f = 1/T assignable to the original ME. The interpretation is that the repeated reflections gradually generate noise and destroy the coherence of the laser beam. This would however happen at the single particle level rather than for a member of a fictive ensemble. Quite literally, the photon would get old! This interpretation conforms with the fact that in the TGD framework thermodynamics becomes part of quantum theory, and the thermodynamical ensemble is represented at the single particle level in the sense that the time-like entanglement coefficients between the positive and negative energy parts of the zero energy state define the M-matrix as a product of a square root of a diagonal density matrix and of the S-matrix.

  7. The notion of number theoretic braid is essential for the interpretation of what happens in detection. In detection the positions of the ends of the number theoretic braid are measured, and this measurement fixes the exact time spent by the photons during their travel. A similar position measurement appears also in Hogan's argument. Thus the overall picture is more or less the same as in the popular representation, where the grainy nature of space-time is also emphasized.

  8. I already mentioned the possible connection with poorly understood 1/f noise appearing in very many systems. The natural interpretation would be in terms of MEs.
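Item 5's "factor 100" follows directly from the d×L variance assumed in item 3: the spectral density scales as the rms transverse spread, i.e. as d^(1/2) at fixed L. A one-line check, where R ≈ 10^4 × l_P is the rough CP2-to-Planck-length ratio used above:

```python
# If the transverse position distribution has variance d*L, the rms spread
# (and hence the spectral density) scales like d^(1/2) at fixed L.
import math

ratio_d = 1.0e4                     # R / l_P, rough value used in the text
amplification = math.sqrt(ratio_d)  # enhancement relative to d = l_P
print(amplification)                # -> 100.0
```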

3.3 The relationship with hierarchy of Planck constants

It is interesting to combine this picture with the vision about the hierarchy of Planck constants (I am just now developing in detail the representation of the ideas involved from the perspective given by the intense work during the last five years).

  1. If one accepts that dark matter corresponds to a hierarchy of phases of matter labeled by a hierarchy of Planck constants with arbitrarily large values, one must conclude that the Planck length l_P, proportional to hbar^(1/2), also has a spectrum. Primordial photons would have a transversal size scaling as hbar^(1/2). One can consider the possibility that for large values of hbar the transversal size saturates to the CP2 length R ≈ 10^4 × l_P. The spectral density of the noise would scale as hbar^(1/4), at least up to the critical value hbar_cr = R^2/G, which is in the range [2.3683, 2.5262] × 10^7 in units of hbar_0. The preferred values of hbar are number theoretically simple integers expressible as a product of distinct Fermat primes and a power of 2. hbar_cr/hbar_0 = 3 × 2^23 is an integer of this kind and belongs to the allowed range of critical values.

  2. The order of magnitude of the gravitational Planck constant assignable to the space-time sheets mediating gravitational interaction is gigantic - of order hbar_gr ≈ GM^2 - so that the noise assignable to gravitons would be gigantic in astrophysical scales unless R serves as an upper bound for the transverse size of both primordial gauge bosons and gravitons.

  3. If ordinary photonic space-time sheets are in question, hbar has its standard value. For dark photons, which I have proposed to play a key role in living matter, the situation changes, and Δl^2/l^2 would scale like hbar^(1/2) at least up to the critical value of Planck constant. Above this value of Planck constant the spectral density would be given by R, and Δl^2/l^2 would scale like R/l and Δθ like (R/l)^(1/2).
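The arithmetic in item 1 is easy to verify: 3 × 2^23 is of the required Fermat-prime-times-power-of-two form and lands inside the quoted range for hbar_cr/hbar_0:

```python
# Verify that 3 * 2^23 (Fermat prime 3 times a power of two) lies in the
# quoted critical range [2.3683, 2.5262] x 10^7 for hbar_cr / hbar_0 = R^2/G.
n = 3 * 2**23
print(n)                            # 25165824
print(2.3683e7 <= n <= 2.5262e7)    # True
```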

For details and background see the updated chapter Quantum Astrophysics of "Physics in Many-Sheeted Space-time".