https://matpitka.blogspot.com/2006/05/

Monday, May 29, 2006

Physical states are zero energy states: an ontology consistent with the existing world view?

The factorizing S-matrices seem unavoidable as basic building blocks of the S-matrix in the TGD framework, and it might be that the S-matrix, which depends on the von Neumann inclusion characterizing the limitations of the quantum measurer, could quite generally reduce to a tensor product of these S-matrices in partonic degrees of freedom. The basic problem has been that these S-matrices are essentially trivial as far as scattering in momentum degrees of freedom is concerned. After many wrong guesses an incredibly simple solution to the puzzle emerged, and the almost triviality of the factorizing S-matrices turned from a curse into a blessing.

1. Zero energy states as the ultimate building block of matter

The idea is to modify the original vision about particle reactions somewhat, in complete consistency with the prediction that in the TGD Universe all quantum states have vanishing total quantum numbers. Instead of thinking of the scattering of positive energy particles as the creation of a zero energy state from vacuum, the original vision, one considers the scattering of zero energy states. I have proposed already earlier that this scattering and also higher level scatterings occur, but that this process corresponds to a higher level process in the cognitive hierarchy than the scattering that we detect in the laboratory. What is nice is that one can deduce the scattering rates for the illusory scattering of positive energy particles from this higher level S-matrix as thermal expectation values, in the sense von Neumann would have defined them. One could of course assume that also positive energy states are there, but scattering for them would be rather trivial and they would not correspond to observed particles.

In the new rather Buddhistic ontology zero energy states are identified as experienced events and objective reality in the conventional sense becomes only an illusion. Before the new view can be taken seriously one must demonstrate how the illusion about positive energy reality is created and why it is so stable.

  1. The very fact that the factorizing S-matrices are trivial apart from the changes in the internal degrees of freedom means that the event pairs are extremely stable once they are generated (how they are generated is an unavoidable question to be addressed below). Infinite sequences of transitions between states with the same positive and negative energies occur. What is nice is that this makes it possible to test the predictions of the theory by experiencing the transition again and again.

  2. Statistical physics becomes statistical physics for an ensemble consisting of zero energy states |m+, n-> including also their time reversals |n+, m->. In the usual kinetics one deduces the equilibrium values of various particle densities from the ratios of the rates for the transitions m+ → n+ and their reversals n+ → m+, so that the densities are given by n(n+)/n(m+) = Γ(m+ → n+)/Γ(n+ → m+). In the present situation the same formula can be used to define the particle number densities in kinetic equilibrium using the proposed identification of the transition probabilities.

  3. Because of the stability of the zero energy states, one can construct many-particle systems consisting of zero energy states and can speak about the density of zero energy states per volume. Also the densities n+,i (n-,i) of initial (final) states of a given type can be defined, and the densities of positive energy states can be identified as the densities assignable to ordinary matter. Also the densities of the particles contained by these states can be defined. It would seem that the new ontology can reproduce the standard ontology as something which is not necessary but to which we are accustomed and which does not produce too much harm.

  4. The sequence of quantum jumps between zero energy states defines also a sequence between the initial (final) states of the quantum jumps and, as far as momentum and color degrees of freedom are concerned, this sequence represents a rather immutable reality if the S-matrix is factorizing.
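The kinetic-equilibrium formula of point 2 can be illustrated with a small numerical toy (my own sketch; the rate values are arbitrary made-up numbers, not derived from TGD): the stationary densities of a two-state master equation reproduce the ratio of the forward and reverse transition rates.

```python
# Toy check of kinetic equilibrium: the stationary densities of a
# two-state master equation satisfy n(n+)/n(m+) = Gamma(m+ -> n+)/Gamma(n+ -> m+).
# The rate values below are illustrative, not derived from TGD.
gamma_mn = 0.3   # rate for the transition m+ -> n+
gamma_nm = 0.1   # rate for the reversal   n+ -> m+

# Evolve dn_m/dt = -gamma_mn*n_m + gamma_nm*n_n to stationarity (Euler steps).
n_m, n_n = 1.0, 0.0
dt = 0.01
for _ in range(20000):
    flow = gamma_mn * n_m - gamma_nm * n_n
    n_m -= flow * dt
    n_n += flow * dt

ratio = n_n / n_m                 # stationary density ratio n(n+)/n(m+)
expected = gamma_mn / gamma_nm    # prediction of the equilibrium formula
```

At stationarity the net flow vanishes, which is exactly the detailed-balance statement used in the text.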

2. How does the quantum measurement theory generalize?

There are also important questions related to the quantum measurement theory. The zero modes associated with the interior degrees of freedom of the space-time surface represent classical observables entangled with partonic observables, and this entanglement is reduced in the quantum jump. Negentropy Maximization Principle (NMP) is the TGD based proposal for the variational principle governing the statistical dynamics of quantum jumps. NMP states that entanglement negentropy tends to be maximized in the reduction of entanglement. The number theoretic variants of Shannon entropy, which make sense for rationally or even algebraically entangled states, can be negative, so that the corresponding negentropy is positive and NMP can also lead to the generation of this kind of entanglement, giving rise to a highly stable bound state entanglement.

Does this picture generalize to the new framework in which zero energy states become physical states? Factorizing S-matrices describe partonic dynamics and should be responsible for generating entanglement in the partonic degrees of freedom. One should understand also the S-matrix generating entanglement between zero modes and partonic degrees of freedom and quantum classical correspondence is the only guideline in the recent situation.

3. Understanding quantum computation in the new ontology?

The understanding of what really happens in quantum computation, in particular topological quantum computation, is a challenge for the present framework, since the theory of quantum computation relies heavily on Hamiltonian time evolution, which cannot be an exact description in the new ontology. The basic element is entanglement between positive and negative energy states. It is generated by time evolution in the standard framework, whereas in the present framework the creation of the quantum computer program and its realization reduces to the creation of a zero energy state realizing this entanglement. Note that also entanglement between positive energy states can be used for quantum computational purposes.

The problem is obvious: the creation of a quantum computer program requires the creation of a zero energy state realizing the program. Can one allow quantum jumps creating zero energy states representing the desired program? The extreme stability of the zero energy states against the evolution defined by a factorizing S-matrix does not allow zero energy states to pop up from vacuum, since the four-momenta are in this case vanishing. Must we accept that we are passive spectators who just observe the already existing zero energy states representing quantum computer programs as we drift towards the geometric future along a larger space-time sheet?

It seems that this is not necessary: p-adic physics as a physics of intentionality and cognition suggests how the obstacle could be overcome at the level of principle. For zero energy states, p-adic-to-real transitions and vice versa are in principle possible, and I have in fact proposed a general quantum model for how intentions might be transformed into actions in this manner. In the reverse direction the process corresponds to the formation of a cognitive representation of a zero energy physical state.

In the degrees of freedom corresponding to configuration space spinors the situation is very much the same as for reals: rational, and more generally algebraic number based, physics applies in both cases. p-Adic space-time sheets however differ dramatically from their real counterparts, since they have only rational (algebraic) points in common with real space-time sheets, and p-adic transcendentals are infinite as real numbers. The S-matrix elements for p-adic-to-real transitions can be formulated using n-point functions restricted to these rational points common to matter and mind stuff. If this picture is not terribly wrong, it would be possible to generate zero energy states from vacuum, and the construction of quantum computer programs would basically be a long and tedious process involving very many intentional acts.

One can of course, make a further question. What about the generation of intentions: can p-adic space-time sheets and quantum numbers pop up spontaneously from vacuum? What kind of p-adic space-time sheets and quantum numbers assignable to their partonic 2-surfaces can do so spontaneously?

Here an interesting aspect of the p-adic conservation laws lends a helping hand. p-Adic integration constants are pseudo constants in the sense that a quantity having vanishing (say) time derivative can depend on a finite number of pinary digits t_n of the time coordinate t = ∑_n t_n p^(-n). Could one think that quantum jumps can generate from vacuum exact vacuum states as vacuum tensor factors of the configuration space spinor, and that in subsequent quantum jumps a factorizing p-adic S-matrix, conserving quantum numbers only in the p-adic sense, transforms this state into a non-trivial zero energy state which then transforms into a real state in an intentional act? Note that if the conserved quantum numbers are integers they are automatically pseudo constants. p-Adic conservation laws could also allow the p-adic zero energy states to pop up directly from vacuum.
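The notion of a pseudo constant can be made concrete with a toy function that depends only on the lowest pinary digits of its argument: such a function is locally constant, so its derivative vanishes even though the function is not globally constant. A minimal sketch for non-negative integers (the choice p = 3 and the two-digit cutoff are arbitrary illustrative assumptions):

```python
# Pinary digits of a non-negative integer t in base p:
# t = sum_n t_n p^n with digits t_n in {0, ..., p-1}.
def pinary_digits(t, p, n_digits):
    digits = []
    for _ in range(n_digits):
        digits.append(t % p)
        t //= p
    return digits

# A "pseudo constant": depends only on the two lowest pinary digits,
# hence is locally constant in the p-adic topology.
def pseudo_constant(t, p=3):
    d = pinary_digits(t, p, 2)
    return d[0] + p * d[1]

# Changing t by a multiple of p^2 (a p-adically small shift) leaves
# the value unchanged.
val_a = pseudo_constant(7)       # 7 = 1 + 2*3, digits [1, 2]
val_b = pseudo_constant(7 + 9)   # 16 = 7 + 3^2, same two lowest digits
```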

Real-to-p-adic transitions would represent the transformation of reality into cognition; they would also be possible and would mean the destruction of zero energy states of the universe. The characteristic and perhaps the defining feature of living matter would be its highly developed ability to reconstruct reality by performing p-adic-to-real transitions and their reversals.

The chapter Construction of Quantum Theory of "Towards S-matrix" represents the detailed construction as it is now (it could change!).

Sunday, May 28, 2006

S-matrix for the scattering of zero energy states representing ordinary scattering events as a solution to the problems?

The properties of the factorizing S-matrices are extremely beautiful, but it seems that we have been trying to make sense of them in a wrong interpretational framework. The proper understanding of the situation might require a radically new interpretation relying on the idea that particle reactions correspond in the TGD framework to the creation of zero energy states from vacuum. This idea has not been used in any manner hitherto.

1. Fundamental scattering as a scattering of zero energy states representing particle reactions

In the TGD framework the initial and final states of a particle reaction form a zero energy state, and it is the scattering of these states representing particle reactions that we actually observe in the TGD Universe.

  1. What is observed are not particles in the initial and final states but particle reactions identified as zero energy states consisting of positive and negative energy particles, which we interpret as the initial and final states of the particle reaction. This interpretation might well require an appropriate generalization of the notion of S-matrix and a modification of the notion of unitarity, which must still be present in some form.

    What comes to mind is the construction of a factorizing S-matrix for zero energy states consisting of the particles of the initial and final states having arbitrary momenta. This scattering would describe scattering between zero energy states interpreted in terms of ordinary particle reactions.

  2. The basic property of factorizing S-matrices is that they affect only the internal degrees of freedom: the scattering between zero energy states therefore does not affect the initial and final momenta of the positive and negative energy states, so that the curses of the factorizing S-matrices become blessings. The quantum jumps describing the scattering make it possible to experience these reactions consciously and affect only internal degrees of freedom which are not detected.

  3. One should be even ready to give up the cherished Lorentz invariance and color symmetries, since the Jones inclusions associated with the scattering experiment could mean a symmetry breaking caused by the selection of the subgroups SO(1,1)× SO(2) of SO(1,3) and U(2) of SU(3). In the present picture these choices affect even the geometry and topology of the imbedding space, and they reflect directly the effect of the experimenter on the measurement situation, which simply cannot be neglected as is usually done axiomatically. One could also speak about a number theoretic breaking of symmetries induced by the requirement that the fundamental commutative sub-manifolds of the imbedding space are 2-dimensional.

  4. One can generalize the construction of factorizing S-matrices without difficulties to the case in which the incoming and outgoing states are zero energy states representing particle reactions. Pass-by is possible also between negative and positive energy states if pass-by is considered for the projections of the rapidities iπ - η_i to the real plane. The construction goes through as such for the other tensors. Also the crossing symmetry and the other symmetries of the factorizing S-matrices generalize in an obvious manner.

2. Generalization of unitarity conditions

Consider now the unitarity conditions for the scattering between zero energy states observed as particle reactions. The aim is to derive the ordinary unitarity conditions from the unitarity in the scattering of zero energy states as what might be regarded as thermal averages, the motivation for the averaging coming from the fact that the internal degrees of freedom affected by the scattering between zero energy states are not detected.

The unitarity conditions for the scattering of zero energy states read formally as

∑_{a+ b-} S_{m+n-, a+b-} S*_{r+s-, a+b-} = δ_{m+,r+} δ_{n-,s-} ,

where the summed final zero energy states are denoted by (a+, b-). The sum over the final zero energy states can also be written as a trace over the product of matrices labelled by the incoming zero energy states:

Tr(S_{m+n-} S†_{r+s-}) = δ_{m+,r+} δ_{n-,s-} .

One can put s- = n- on both sides and perform the sum over n- to get

∑_{n-} Tr(S_{m+n-} S†_{r+n-}) = δ_{m+,r+} ∑_{n-} δ_{n-,n-} .

For factors of type II_1 the sum ∑_{n-} δ_{n-,n-} is equal to the trace Tr(Id) = 1 of the identity matrix, so that one obtains

∑_{n-} Tr(S_{m+n-} S†_{r+n-}) = δ_{m+,r+} .

One could also divide the left hand side by Tr(Id) to get the average as one usually understands it.

The usual unitarity condition would read

∑_{n-} S_{m+ → n-} S*_{r+ → n-} = δ_{m+,r+} .

This condition would be replaced with an average over the final states in the scattering described by a factorizing S-matrix.

The interpretation of the result would be as a thermal expectation value of the unitarity condition in the sense of hyper-finite factors of type II_1. This averaging is necessary if we do not have any control over the scattering between zero energy states: this scattering is just a means to become conscious of the existence of the state that we usually interpret as a change of state. What looks like a very non-physical feature of the factorizing S-matrices turns into a victory, since the trace is only over final states which are characterized by the same collection of momenta and the same particle number, and the uncertainties relate only to the internal degrees of freedom which we cannot measure and whose basic function is to make it possible to consciously perceive the particle reaction as a zero energy state.
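A finite-dimensional caricature of the trace-averaged condition (my own toy, not the actual hyper-finite factor of type II_1): with the trace normalized so that Tr(Id) = 1, any S-matrix that is unitary in the undetected internal degrees of freedom satisfies Tr(S S†) = 1, the diagonal case m+ = r+ of the averaged condition above.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random unitary acting on the undetected internal degrees of freedom,
# obtained from the QR decomposition of a complex Gaussian matrix.
dim = 8
z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
s, _ = np.linalg.qr(z)   # s is unitary

def ntrace(a):
    """Trace normalized so that ntrace(Id) = 1, mimicking a type II_1 factor."""
    return np.trace(a) / a.shape[0]

# Unitarity in the undetected degrees of freedom: S S^dagger = Id,
# so the normalized trace of S S^dagger equals 1.
avg_unitarity = ntrace(s @ s.conj().T)
```

In the genuine II_1 setting the normalization is intrinsic rather than a division by the dimension; the toy only illustrates the bookkeeping.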

3. Should one accept spontaneous breaking of Lorentz symmetry?

The proposed picture does not seem to provide any way to achieve unitarity by summing over all choices of (M2,E2) and (S2,S1) pairs, since in a more general scattering situation the crucial transversal or longitudinal momentum exchange can occur only in very special situations. Hence a statistical averaging of the probabilities over the Lorentz group seems to be the only manner to achieve Lorentz invariance.

An interesting question is whether a breaking of Lorentz symmetry is already encountered in hadronic scattering in the quark model description, which involves the reduction of the Lorentz group to SO(1,1)× SO(2) and longitudinal and transverse momenta.

4. Summary

To sum up, quantum classical correspondence combined with the number theoretical view about conformal invariance could fix almost uniquely the dependence of the S-matrix on the cm degrees of freedom and on the net momenta and color quantum numbers associated with the various lightcones whose tips define the arguments of the n-point function. If one is ready to accept the new view about a scattering event as a scattering between two zero energy states, with the initial resp. final states represented as the positive resp. negative energy particles of the state, one obtains a physically highly non-trivial counterpart of the S-matrix. One must also accept the generalization of the unitarity condition to what might be regarded as a thermal average, stating explicitly the assumption that the experimenter does not have control over the scattering between zero energy states, whose basic function is to make the zero energy state representing the ordinary scattering consciously observed. The properties of the factorizing S-matrices are ideal for this purpose.

The price to be paid is the breaking of the full Lorentz and color invariances, which at the level of Jones inclusions means a change of the geometry and topology of the imbedding space and space-time. This kind of breaking of course happens in a realistic experimental situation. Lorentz invariance is obtained only in a statistical sense.

The most fascinating aspect of the new interpretation is that the factorizing and physically almost trivial S-matrices of integrable 2-D systems, generalized to S-matrices describing scattering between states with vanishing net quantum numbers, could be imbedded into the 4-D theory in such a manner that the resulting S-matrix could be physically completely non-trivial and perhaps even realistic.

The basic requirement on the S-matrix between zero energy states is its almost triviality. This motivates the hope that the tensor factoring of the 2-D factorizing S-matrices could be extended also to configuration space degrees of freedom, so that each complex configuration space dimension would contribute one factorizing S-matrix to the tensor product.
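The hoped-for tensor factoring is at least consistent at the level of linear algebra: a tensor product of unitary factors is again unitary, so assigning one factorizing S-matrix per degree of freedom preserves unitarity of the total S-matrix. A small numerical sketch (the 2x2 rotation factors are arbitrary stand-ins for the actual factorizing S-matrices):

```python
import numpy as np

def unitary_2x2(theta):
    """A simple 2x2 unitary (rotation), a stand-in for one factorizing S-matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]], dtype=complex)

# Tensor product of three factors, one per "configuration space dimension".
factors = [unitary_2x2(t) for t in (0.3, 1.1, 2.0)]
s_total = factors[0]
for f in factors[1:]:
    s_total = np.kron(s_total, f)

# The tensor (Kronecker) product of unitaries is unitary.
deviation = np.max(np.abs(s_total @ s_total.conj().T - np.eye(8)))
```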

The chapter Construction of Quantum Theory of "Towards S-matrix" represents the detailed construction as it is now (it could change!).

Tuesday, May 23, 2006

Precise definition of the notion of unitarity for Connes tensor product

The Connes tensor product for free fields provides an extremely promising manner to define the S-matrix, and I have worked out the master formula in considerable detail. The subfactor N subset of M in a Jones inclusion represents the degrees of freedom which are not measured. Hence the infinite number of degrees of freedom of M reduces to a finite number of degrees of freedom associated with the quantum Clifford algebra M/N and the corresponding quantum spinor space.

The previous physical picture helps to characterize the notion of unitarity precisely for the S-matrix defined by Connes tensor product. For simplicity restrict the consideration to configuration space spin degrees of freedom.

  1. The Tr(Id)=1 condition implies that it is not possible to define the S-matrix in the usual sense, since the probabilities for individual scattering events would vanish. The Connes tensor product means that in quantum measurement particles are described using the finite-dimensional quantum state spaces M/N defined by the inclusion. For standard inclusions they would correspond to a single Clifford algebra factor C(8). This integration over the unobserved degrees of freedom is nothing but the analog of the transition from super-string models to the effective field theory description, and defines the TGD counterpart of the renormalization process.

  2. The intuitive mathematical interpretation of the Connes tensor product is that N takes the role of the coefficient field of the state space instead of the complex numbers. Therefore the S-matrix must be replaced with an N-valued S-matrix in the tensor product of finite-dimensional state spaces. The notion of N-unitarity makes sense, since matrix inversion is defined as S_ij → S†_ji and does not require division (note that i and j label states of M/N). Also the generalization of hermiticity makes sense: the eigenvalues of a matrix with N-Hermitian elements are N-Hermitian matrices, so that a single eigenvalue is abstracted to an entire spectrum of eigenvalues. A kind of quantum representation of the conceptualization process is in question and might have direct relevance to the TGD inspired theory of consciousness. The exponentiation of a matrix with N-Hermitian elements gives a unitary matrix.

  3. The projective equivalence of quantum states generalizes: two states differing by multiplication by an N-unitary matrix represent the same ray in the state space. By adjusting the N-unitary phases of the states suitably it might be possible to reduce the S-matrix elements to ordinary complex vacuum expectation values for the states created by using elements of the quantum Clifford algebra M/N, which would mean the reduction of the theory to a TGD variant of conformal field theory or effective quantum field theory.

  4. The probabilities Pij for the general transitions would be given by

    Pij = Nij† Nij ,

    and are in general N-valued unless one requires

    Pij=pijeN ,

    where eN is the projector to N. Nij is therefore proportional to an N-unitary matrix. The S-matrix is trivial in N degrees of freedom, which conforms with the interpretation that the N degrees of freedom remain entangled in the scattering process.

  5. If the S-matrix is non-trivial in N degrees of freedom, these degrees of freedom must be treated statistically by summing over the probabilities for the initial states. The only mathematical expression that one can imagine for the scattering probabilities is given by

    pij = TrN(Nij† Nij) ,

    where TrN denotes the trace over the N degrees of freedom. The trace means that one has a probability distribution for the initial states in N degrees of freedom such that each state appears with the same probability, which indeed was von Neumann's guiding idea. By the conservation of energy and momentum in the scattering this assumption reduces to the basic assumption of thermodynamics.

  6. An interesting question is whether also the momentum degrees of freedom should be treated as a factor of type II_1, although they do not correspond directly to configuration space spin degrees of freedom. This would make it possible to get rid of the mathematically unattractive squares of delta functions in the scattering probabilities.
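The claim of point 2 above, that exponentiating a matrix with N-Hermitian elements gives a unitary matrix, can be checked in a drastically simplified finite-dimensional toy model where N is replaced by 2x2 complex matrices: a block matrix whose blocks satisfy H_ij = H_ji† is Hermitian as an ordinary matrix, so exp(iH) is unitary.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_block():
    """A random complex 2x2 block, caricaturing an element of N."""
    return rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

# Build a 2x2 matrix of 2x2 blocks with H_ij = H_ji^dagger ("N-Hermitian").
h00 = rand_block(); h00 = h00 + h00.conj().T   # diagonal blocks Hermitian
h11 = rand_block(); h11 = h11 + h11.conj().T
h01 = rand_block()
H = np.block([[h00, h01], [h01.conj().T, h11]])  # block-Hermitian = Hermitian

# exp(iH) via the spectral decomposition of the Hermitian matrix H.
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(1j * evals)) @ evecs.conj().T

deviation = np.max(np.abs(U @ U.conj().T - np.eye(4)))  # unitarity check
```

The infinite-dimensional II_1 case is of course far beyond this sketch; the toy only shows that N-hermiticity implies unitarity of the exponential at the block-matrix level.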

For details see the chapter Was von Neumann Right After All of "TGD: an Overview".

Tree like structure of the extended imbedding space

The quantization of hbar in multiples of the integer n characterizing the quantum phase q = exp(iπ/n) in M4 and CP2 degrees of freedom separately means also separate scalings of the covariant metrics by n^2 in these degrees of freedom. The question is how these copies of the imbedding space are glued together. The gluing of different p-adic variants of the imbedding space along rationals and the general physical picture suggest how the gluing operation must be carried out.

Two imbedding spaces with different scaling factors of the metrics are glued directly together only if either the M4 or the CP2 scaling factor is the same, and only along M4 or CP2. This gives a kind of evolutionary tree (actually in a rather precise sense, as the quantum model for evolutionary leaps as phase transitions increasing hbar(M4) demonstrates!). In this tree vertices represent a given M4 (CP2) and lines represent CP2:s (M4:s) with different values of hbar(CP2) (hbar(M4)) emanating from it, much like lines from a vertex of a Feynman diagram.

  1. In the phase transition between different hbar(M4):s the projection of the 3-surface to M4 becomes a single point, so that a cross section of a CP2 type extremal representing an elementary particle is in question. Elementary particles could thus leak easily between different M4:s, and this could occur in large hbar(M4) phases in living matter and perhaps even in the quantum Hall effect. Wormhole contacts, which have a point-like M4 projection, would allow the topological condensation of space-time sheets with a given hbar(M4) at those with a different hbar(M4), in accordance with the heuristic picture.

  2. In the phase transition between different CP2:s the CP2 projection of the 3-surface becomes a point, so that the transition can occur in regions of the space-time sheet with a 1-D CP2 projection. The regions of a connected space-time surface corresponding to different values of hbar(CP2) can be glued together. For instance, the gluing could take place along a surface X3 = S2× T (T corresponds to the time axis) analogous to a black hole horizon. The CP2 projection would be a single point at this surface. The contribution from the radial dependence of the CP2 coordinates to the induced metric, giving ds^2 = ds^2(X3) + g_rr dr^2 at X3, implies a radial gravitational acceleration, and one can say that a gravitational flux is transferred between the different imbedding spaces.

    Planetary Bohr orbitology predicting that only 6 per cent of matter in solar system is visible suggests that star and planetary interiors are regions with large value of CP2 Planck constant and that only a small fraction of the gravitational flux flows along space-time sheets carrying visible matter. In the approximation that visible matter corresponds to layer of thickness Δ R at the outer surface of constant density star or planet of radius R, one obtains the estimate Δ R=.12R for the thickness of this layer: convective zone corresponds to Δ R=.3R. For Earth one would have Δ R≈ 70 km which corresponds to the maximal thickness of the crust. Also flux tubes connecting ordinary matter carrying gravitational flux leaving space-time sheet with a given hbar (CP2) at three-dimensional regions and returning back at the second end are possible. These flux tubes could mediate dark gravitational force also between objects consisting of ordinary matter.

Concerning the mathematical description of this process, the selection of the origin of M4 or CP2 as a preferred point is somewhat disturbing. In the case of M4 the problem disappears, since the configuration space is a union over the configuration spaces associated with the future and past light cones of M4: CH = CH+ ∪ CH-, with CH± = ∪_{m in M4} CH±,m. In the case of CP2 the same interpretation is necessary in order not to lose SU(3) invariance, so that one would have CH± = ∪_{h in H} CH±,h. A somewhat analogous but simpler book-like structure results in the fusion of different p-adic variants of H along common rationals (and perhaps also common algebraics in the extensions).

For details see the chapter Does TGD Predict the Spectrum of Planck Constants of "TGD: an Overview".

More precise view about S-matrix

In the TGD framework the non-triviality of the S-matrix would basically result from the replacement of the ordinary tensor product with the Connes tensor product for free fields. Each Jones inclusion defines a different Connes tensor product. According to a speculation of Jones, the fusion rules of conformal field theories are equivalent with the Connes tensor product, at least for tensor products of Kac-Moody representations. Various conformal field theories would thus represent different Jones inclusions characterizing the limitations of the measurer in various kinds of quantum measurement situations, characterized by ADE diagrams or extended ADE diagrams. Somewhat ironically, in an ideal measurement the S-matrix would be trivial! In the string model context this approach does not work since one has necessarily c = h = 0.

The picture about the S-matrix has been developing rapidly, and it is now clear that the 27 year old dream is finally realized: I understand the S-matrix (or rather S-matrices) in the TGD framework. Below are comments about results that have emerged during the last week.

1. Effective 2-dimensionality and quantum classical correspondence

The requirement that quantum measurement theory in the sense of TGD emerges as a part of the construction of the S-matrix makes it possible to add details to the master formula and to derive also non-trivial predictions. The effective 2-dimensionality of the configuration space metric (only the deformations of the 2-D partonic surfaces appear in the line element) suggests that the incoming space-time sheets and the space-time sheet representing the vertex intersect only along 2-D partonic surfaces. This allows the incoming space-time sheets to have classical conserved charges identical with a maximal subset of commuting quantum numbers. The functional integral over 3-surfaces is very much analogous to the 2-dimensional functional integral of Euclidian string models. The vertex 3-surface in turn represents the quantum jump at the space-time level.

The 3-surface representing an N-vertex can be decomposed into parts representing the vertices/propagators of conventional tree diagrams as space-like/light-like portions. No loops appear. By their strong classical non-determinism the CP2 type extremals representing elementary particles are unique as classical representations of the propagation of off-mass-shell particles. For other light-like causal determinants, such as those assignable to massless extremals, only massless momentum exchanges are typically possible (not however in 2-particle scattering), so that the scattering becomes extremely deterministic. The prediction is that even the gravitational interaction could be strong in lower-dimensional regions of the phase space corresponding to light-like momentum exchanges.

2. Reduction of the Connes tensor product to fusion in conformal field theories

There are good reasons to expect that Connes tensor product corresponds to the fusion procedure of conformal field theories so that various conformal field theories at partonic boundary components would characterize various measurement situations.

The realization that the various super-conformal algebras at the partonic space-time sheets have dual hyper-quaternionic representations, and that the restriction of these representations to commuting hyper-complex planes or real lines has a strong mathematical and physical justification, makes it possible to write formulas for the S-matrix in terms of the n-point functions of the conformal field theory characterizing the Jones inclusion applying to the quantum measurement situation, by just replacing the complex argument with a hyper-complex argument.

The differences between TGD and string models become clear, and one can formulate precisely where string models go wrong. In the TGD framework breaking of conformal symmetry can occur, and mass squared corresponds to a genuine conformal weight rather than a contribution causing the vanishing of the conformal weight. Hence the vanishing of the conformal parameters (c,h), which was originally believed to make string models unique, is not necessary. Furthermore, four-momentum does not appear in the Super Virasoro generators as it does in string models, so that the super generators can carry fermion number, since the Majorana character of the super generators becomes unnecessary. Hence the dimension D=10 or 11 for the imbedding space is not necessary.

One can understand why gravitation is so weak as compared to the gauge interactions. Also particle massivation and the p-adic mass scale are coded into the statistical properties of the zitterbewegung of CP2 type extremals (the projection to Minkowski space is a random light-like curve).

3. What went wrong with string models?

Things went wrong with string models at many levels. With my personal background it is not difficult to see what went wrong at the level of mathematical and physical understanding. Basically, the wrong view about conformal invariance implied that the theory made sense only in critical dimensions. Because this is so important I want to recapitulate: the basic mistake was the idea that mass squared compensates the contributions to the conformal weight obtained in the ordinary conformal theory. This led to c=h=0, to a wrong realization of the super generators and to Majorana conditions, and the rest we know. It is not surprising that such powerful constraints led to what looked like a highly unique theory.

p-Adic mass calculations made it obvious to me that this view about conformal invariance is wrong. Thermal mass squared in the string model sense means breaking of Lorentz invariance: mass squared must result as a thermal average of conformal weight. This requires that physical states have non-vanishing conformal weights, that mass squared corresponds to this weight, and that momentum does not appear in the Super Virasoro generators. Ordinary Euclidean conformal theory for partonic 2-surfaces and its dual for hyper-complex planes of Minkowski space become the basic mathematical tools in the construction of physical states and S-matrix.

There are also many other things that went wrong: I shared for a long time the belief in the stringy description of particle decay in terms of trouser diagrams, until physical arguments forced me to interpret this process as a space-time correlate for what happens in the double slit experiment. Mathematically this was of course obvious from the beginning, since what happens for induced spinor fields is just propagation via different routes: it is extremely artificial to build vertices in this framework. A direct generalization of Feynman diagrams as singular manifolds obtained by gluing surfaces together along their ends turned out to be the only acceptable topological description of particle decay, but one can argue that this is mathematically too singular a process. It is: and this description indeed turned out to be only effective when the master formula for the S-matrix emerged.

Let us return to the consequences of misunderstood conformal invariance. I can well understand the excitement when people realized that perhaps only a little work might reveal the theory of everything. The resulting physics was of course not about this world. It is quite understandable that this sparked the dream that Kaluza-Klein might save the theory, and eventually we had the landscape, the anthropic principle, and even the brand new vision that the physics we experience every day could be almost anything and that even at the level of principle it is impossible to predict anything. Now the theory is sold using the exact opposites of the arguments that were used two decades ago.

What is so sad in this is that an immense amount of high-level technical work is being carried out in an attempt to get something sensible out of a dead theory.

Conformal field theories represent just the opposite of string model hype. Instead of ad hoc quantization recipes and the uncritical introduction of poorly defined notions like spontaneous compactification and effective field theories, rigorous formulation and understanding of the theory has been the primary goal. The formalism generalizes as such to the TGD framework and finds a new interpretation in the framework of quantum measurement theory.

Details can be found here, here, and here.

Friday, May 19, 2006

Does the quantization of Planck constant transform integer quantum Hall effect to fractional quantum Hall effect?

The TGD based model for topological quantum computation inspired the idea that Planck constant might be dynamical and quantized. The work of Nottale (astro-ph/0310036) gave a strong boost to the concrete development of the idea, and it took a year and a half to end up with a proposal about how basic quantum TGD could allow a quantization of the Planck constants associated with M4 and CP2 degrees of freedom, such that the scaling factor of the metric in M4 degrees of freedom corresponds to the scaling of hbar in CP2 degrees of freedom and vice versa (see the new chapter Does TGD Predict the Spectrum of Planck constants?). The dynamical character of the scaling factors of the M4 and CP2 metrics makes sense if space-time and imbedding space, and in fact the entire quantum TGD, emerge from a local version of an infinite-dimensional Clifford algebra existing only in dimension D=8.

The predicted scaling factors of Planck constant correspond to the integers n defining the quantum phases q=exp(iπ/n) characterizing Jones inclusions. A more precise characterization of Jones inclusion is in terms of group

Gb subset of SU(2) subset of SU(3)

in CP2 degrees of freedom and

Ga subset of SL(2,C)

in M4 degrees of freedom. In the quantum group phase, space-time surfaces have an exact symmetry such that to a given point of M4 corresponds an entire Gb orbit of CP2 points and vice versa. Thus the space-time sheet becomes an N(Ga)-fold covering of CP2 and an N(Gb)-fold covering of M4. This allows an elegant topological interpretation for the fractionization of quantum numbers. The integer n corresponds to the order of the maximal cyclic subgroup of G.

In the scaling hbar0 → n×hbar0 of the M4 Planck constant the fine structure constant would scale as

α = e^2/(4π×hbar×c) → α/n ,

and the formula for Hall conductance would transform to

σH =να → (ν/n)× α .

Fractional quantum Hall effect would be integer quantum Hall effect, but with a scaled-down α. The apparent fractional filling fraction ν = m/n would directly code the quantum phase q=exp(iπ/n) in the case that m obtains all possible values. A complete classification of the possible phase transitions yielding fractional quantum Hall effect in terms of finite subgroups G subset of SU(2) subset of SU(3) given by ADE diagrams would emerge (An, D2n, E6 and E8 are possible). What would also be nice is that CP2 would make itself directly manifest at the level of condensed matter physics.
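The bookkeeping of the claim above is easy to make concrete. Below is a minimal sketch (my own illustration, following the formulas of this posting): an integer Hall phase with filling m, but with hbar scaled by n so that α → α/n, gives the same conductance as an ordinary phase with apparent fractional filling ν = m/n.

```python
# Sketch: fractional QHE as integer QHE with scaled-down fine structure constant.
# Hypothetical toy model following the formulas sigma_H = nu*alpha, alpha -> alpha/n.

ALPHA = 1 / 137.035999  # standard fine structure constant

def hall_conductance(nu, alpha=ALPHA):
    """Hall conductance sigma_H = nu * alpha, in the convention of the text."""
    return nu * alpha

def dark_hall_conductance(m, n):
    """Integer filling m in a phase where hbar -> n*hbar, i.e. alpha -> alpha/n."""
    return hall_conductance(m, ALPHA / n)

# The dark phase with integer filling m looks like an ordinary phase with
# apparent fractional filling nu = m/n:
for m, n in [(1, 3), (2, 5), (3, 7)]:
    assert abs(dark_hall_conductance(m, n) - hall_conductance(m / n)) < 1e-15
    print(f"m={m}, n={n}: apparent filling fraction nu = {m}/{n}")
```

The point of the sketch is only that the observable σH cannot distinguish (m, α/n) from (m/n, α); the physical content is in which integers n are allowed.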

For more details see the chapter Topological Quantum Computation in TGD Universe, the chapter Was von Neumann Right After All?, and the chapter Does TGD predict the Spectrum of Planck Constants?.

Tuesday, May 16, 2006

Large values of Planck constant and coupling constant evolution

There has been an intensive evolution of ideas induced by the understanding of large values of Planck constant. This motivated a separate chapter which I christened "Does TGD Predict the Spectrum of Planck Constants?". I have commented earlier about various ideas related to this topic and comment here only on the newest outcomes.

1. hbargr as CP2 Planck constant

What gravitational Planck constant means has been somewhat unclear. It turned out that hbargr can be interpreted as the Planck constant associated with CP2 degrees of freedom, and its huge value implies that also the von Neumann inclusions associated with M4 degrees of freedom enter the picture, meaning that dark matter cosmology has a quantal lattice-like structure with the lattice cell given by Ha/G, Ha the a=constant hyperboloid of M4+ and G a subgroup of SL(2,C). The quantization of cosmic redshifts provides support for this prediction.

2. Is Kähler coupling strength invariant under p-adic coupling constant evolution?

Kähler coupling strength is the only coupling parameter in TGD. The original great vision is that Kähler coupling strength is analogous to a critical temperature and thus uniquely determined. Later I concluded that Kähler coupling strength could depend on the p-adic length scale, because the prediction for the gravitational coupling strength was otherwise nonsensical. This motivated the assumption that the gravitational coupling is RG invariant in the p-adic sense.

The expression of the basic parameter v0 = 2^-11 appearing in the formula hbargr = GMm/v0 in terms of the basic parameters of TGD leads to the unexpected conclusion that αK in the electron length scale can be identified as the electro-weak U(1) coupling strength αU(1). This identification, or actually something slightly more complex (see below), is what group theory suggests, but I had given it up since the resulting evolution for the gravitational coupling predicted G to be proportional to Lp^2 and thus completely unphysical. However, if gravitational interactions are mediated by space-time sheets characterized by a Mersenne prime, the situation changes completely, since M127 is the largest Mersenne prime not defining a super-astrophysical p-adic length scale.

The second key observation is that all classical gauge fields and gravitational field are expressible using only CP2 coordinates and classical color action and U(1) action both reduce to Kähler action. Furthermore, electroweak group U(2) can be regarded as a subgroup of color SU(3) in a well-defined sense and color holonomy is abelian. Hence one expects a simple formula relating various coupling constants. Let us take αK as a p-adic renormalization group invariant in strong sense that it does not depend on the p-adic length scale at all.

The relationship between the couplings must involve αU(1), αs and αK. The formula 1/αU(1) + 1/αs = 1/αK states that the sum of the U(1) and color actions equals the Kähler action. It is consistent with the decrease of the color coupling and the increase of the U(1) coupling with energy and implies a common asymptotic value 2αK for both. The hypothesis is consistent with the known facts about color and electroweak evolution and predicts correctly the confinement length scale as the p-adic length scale assignable to gluons. The hypothesis reduces the evolution of αs to the calculable evolution of electro-weak couplings: the importance of this result is difficult to overestimate.
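The sum rule can be checked numerically using the values quoted in the May 12 posting further down this page (1/αU(1) ≈ 105.35 and 1/αK ≈ 106.38 at the electron length scale); this is my own back-of-the-envelope sketch, not a calculation from the chapters.

```python
# Sanity check of 1/alpha_U(1) + 1/alpha_s = 1/alpha_K at the electron length scale.
# Values copied from elsewhere on this page; the code only solves the sum rule.

inv_alpha_K  = 106.379    # 1/alpha_K, p-adic RG invariant (quoted below)
inv_alpha_U1 = 105.3531   # 1/alpha_U(1) at electron length scale (quoted below)

inv_alpha_s = inv_alpha_K - inv_alpha_U1  # solve the sum rule for alpha_s
alpha_s = 1 / inv_alpha_s
print(f"alpha_s at electron length scale ~ {alpha_s:.2f}")  # ~0.97: order unity, confinement

# If the two couplings meet asymptotically, 2/alpha = 1/alpha_K gives alpha = 2*alpha_K:
alpha_common = 2 / inv_alpha_K
assert abs(2 / alpha_common - inv_alpha_K) < 1e-9
```

That αs comes out of order unity at the electron (i.e. long) length scale is at least qualitatively what confinement requires.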

For more details see the chapter Does TGD Predict the Spectrum of Planck Constants? of "TGD: an Overview".

Friday, May 12, 2006

Could the basic parameters of TGD be fixed by a number theoretical miracle?

If the v0 deduced to have the value v0 = 2^-11 appearing in the expression for the gravitational Planck constant hbargr = GMm/v0 is identified as the rotation velocity of distant stars in the galactic plane, it is possible to express it in terms of the Kähler coupling strength and the string tension as v0^-2 = 2×αK×K,

αK(p) = aπ/log(pK) , K = R^2/G .

The value of K is fixed to a high degree by the requirement that the electron mass scale comes out correctly in p-adic mass calculations. The uncertainties related to second order contributions in p-adic mass calculations however leave the precise value open. Number theoretic arguments suggest that K is expressible as the product of the primes p ≤ 23: K = 2×3×5×7×11×13×17×19×23 .

If one assumes that αK is of the order of the fine structure constant in the electron length scale, the value of the parameter a cannot be far from unity. A more precise condition would result by identifying αK with the weak U(1) coupling strength: αK = αU(1) = αem/cos^2(θW) ≈ 1/105.3531 ,

sin^2(θW) ≈ 0.23120(15) ,

αem = 0.00729735253327 .

Here the values refer to the electron length scale. If the formula v0 = 2^-11 is exact, it poses both quantitative and number theoretic conditions on the Kähler coupling strength. One must of course remember that the exact expression for v0 corresponds to only one particular solution, and even the smallest deformation of the solution can change the number theoretical anatomy completely. In any case one can pose the following questions.

  1. Could one understand why v0 ≈ 2^-11 must hold true?
  2. What number theoretical implications does the exact formula v0 = 2^-11 have in case it is consistent with the above listed assumptions?

1. Are the ratios π/log(q) rational?

The basic condition stating that the gravitational coupling constant is a renormalization group invariant dictates the dependence of the Kähler coupling strength on the p-adic prime. The exponent of the Kähler action for a CP2 type extremal is rational if K is an integer, as assumed: this is essential for the algebraic continuation of the rational physics to p-adic number fields. This gives the general formula αK = aπ/log(pK). Since K is an integer, this means that v0^2 is of the form

v0^2 = q×log(pK)/π , q rational,

if a is rational.

  1. Since v0^2 should be rational for a rational value of a, the minimal conclusion would be that the number log(p0K)/π should be rational for some preferred prime p = p0. If this miracle occurs, the p-adic coupling constant evolution of the Kähler coupling strength, the only coupling constant in TGD, would be completely fixed. The same would also hold true for the ratio of the CP2 length scale to the Planck length, characterized by K^(1/2).

  2. A more general conjecture would be that log(q)/π is rational for rational q: this conjecture turns out to be wrong, as discussed in the previous posting. The rationality of π/log(q) for a single rational q would however imply that exp(π) is an algebraic number. This would indeed look extremely nice, since the algebraic character of exp(π) would conform with the algebraic character of the phases exp(iπ/n). Unfortunately exp(π) is known to be transcendental, so this is not the case. Hence one loses the extremely attractive possibility of fixing the basic parameters of the theory completely from number theory.
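The chain of implications behind the last remark is worth writing out (a standard argument; the transcendence result is the outside input):

π/log(q) = m/n rational ⟹ log(q) = (n/m)×π ⟹ exp(π) = q^(m/n) ,

so exp(π) would be algebraic. But exp(π) = (-1)^(-i) is transcendental by the Gelfond-Schneider theorem, and hence π/log(q) is irrational for every rational, in fact every algebraic, q ≠ 0, 1.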

The condition v0 = 2^-m, m = 11, allows one to deduce the value of a as

a = (log(pK)/π) × (2^(2m-1)/K) .

The condition that αK is of the order of the fine structure constant for p = M127 = 2^127 - 1, defining the p-adic length scale of the electron, indeed implies that m = 11 is the only possible value, since the value of a is scaled by a factor 4 in m → m+1.

The value of αK in the length scale Lp0, in which the condition of the first equation holds true, is given by

1/αK = 2^(-21)×K ≈ 106.379 .
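The arithmetic of this section is easy to verify; the following few lines of Python (my own check, all input values copied from the text above) reproduce both 106.379 and 105.3531 and the less-than-one-per-cent deviation discussed below.

```python
# Numerical check of the values quoted in this posting.
from math import prod

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23]
K = prod(primes)                 # K = 223092870
inv_alpha_K = K / 2**21          # 1/alpha_K = 2^-21 * K
print(f"K = {K}, 1/alpha_K = {inv_alpha_K:.3f}")   # ~106.379

# Electroweak comparison: alpha_U(1) = alpha_em / cos^2(theta_W)
sin2_thetaW = 0.23120
alpha_em = 0.00729735253327
inv_alpha_U1 = (1 - sin2_thetaW) / alpha_em
print(f"1/alpha_U(1) = {inv_alpha_U1:.4f}")        # ~105.353

deviation = abs(inv_alpha_K - inv_alpha_U1) / inv_alpha_U1
assert deviation < 0.01   # smaller than one per cent, as claimed below
```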

2. What is the value of the preferred prime p0?

The condition for v0 can hold only for a single p-adic length scale Lp0. This correspondence would presumably mean that gravitational interaction is mediated along the space-time sheets characterized by p0, or even that gravitons are characterized by p0.

  1. If the same p0 characterizes all ordinary gauge bosons with their dark variants included, one would have p0 = M89 = 2^89 - 1.

  2. One can however argue that dark gravitons, and dark bosons in general, can correspond to a different Mersenne prime than ordinary gauge bosons. Since Mersenne primes larger than M127 define super-astrophysical length scales, M127 is the unique candidate. M127 indeed defines a dark length scale in the TGD inspired quantum model of living matter. This predicts 1/αU(1)(M127) = 106.379, to be compared with the experimental estimate 1/αU(1)(M127) = 105.3531 deduced above. The deviation is smaller than one per cent, which indeed sets bells ringing!

This agreement seems to provide dramatic support for the general picture but one must be very cautious.

  1. The identification of the Kähler coupling strength as the U(1) coupling strength poses strong conditions on the p-adic length scale evolution of the Weinberg angle, using the knowledge about the evolution of the electromagnetic coupling constant. The condition reads

    cos^2(θW)(89) = [log(M127×K)/log(M89×K)] × [αem(M127)/αem(M89)] × cos^2(θW)(127) .

    Using the experimental value 1/αem(M89) ≈ 128 predicted by the standard model one obtains sin^2(θW)(89) = 0.0479. There is a bad conflict with experimental facts unless the experimentally determined value of the Weinberg angle corresponds to the M127 space-time sheet.

    I will leave the implications of this conflict to a future posting.
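The evolution formula above can be evaluated directly (again my own check: K and the coupling values are taken from the text, and 1/αem(M127) ≈ 137 is the standard low-energy value):

```python
# Sketch: evaluate the Weinberg angle evolution formula quoted above,
# with M_n = 2^n - 1 and K the product of primes up to 23 from the text.
from math import log, prod

K = prod([2, 3, 5, 7, 11, 13, 17, 19, 23])
M89, M127 = 2**89 - 1, 2**127 - 1

cos2_127 = 1 - 0.23120            # cos^2(theta_W) at M_127 (from the text)
alpha_em_127 = 0.00729735253327   # ~1/137, electron length scale
alpha_em_89 = 1 / 128             # standard model value at M_89

cos2_89 = (log(M127 * K) / log(M89 * K)) * (alpha_em_127 / alpha_em_89) * cos2_127
sin2_89 = 1 - cos2_89
print(f"sin^2(theta_W)(89) = {sin2_89:.4f}")   # ~0.048, close to the 0.0479 quoted
```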

The reader interested in details is recommended to look at previous postings and the new chapter Does TGD Predict the Spectrum of Planck Constants? of the book "TGD: an Overview" and the chapter TGD and Astrophysics of the book "Physics in Many-Sheeted Space-Time".

Tuesday, May 09, 2006

New results in planetary Bohr orbitology

The understanding of how the quantum octonionic local version of infinite-dimensional Clifford algebra of 8-dimensional space (the only possible local variant of this algebra) implies entire quantum and classical TGD led also to the understanding of the quantization of Planck constant. In the model for planetary orbits based on gigantic gravitational Planck constant this means powerful constraints on the number theoretic anatomy of gravitational Planck constants and therefore of planetary mass ratios. These very stringent predictions are immediately testable.

1. Preferred values of Planck constants and ruler and compass polygons

The starting point is that the scaling factor of the M4 Planck constant is given by the integer n characterizing the quantum phase q = exp(iπ/n). The evolution in phase resolution in p-adic degrees of freedom corresponds to the emergence of algebraic extensions allowing an increasing variety of phases exp(iπ/n) expressible p-adically. This evolution can be assigned to the emergence of increasingly complex quantum phases and the increase of Planck constant.

One expects that quantum phases q = exp(iπ/n) which are expressible using only square roots of rationals are number theoretically very special, since they correspond to algebraic extensions of p-adic numbers involving only square roots, which should emerge first; therefore systems involving these values of q should be especially abundant in Nature.

The n-gons in question are obtainable by ruler and compass construction: Gauss showed that these polygons, which could be called Fermat polygons, have

nF = 2^k × ∏_s F(n_s)

sides/vertices: all the Fermat primes F(n_s) in this expression must be different. The analog of the p-adic length scale hypothesis emerges since the larger Fermat primes are near a power of 2. The known Fermat primes F_n = 2^(2^n) + 1 correspond to n = 0,1,2,3,4 with F0 = 3, F1 = 5, F2 = 17, F3 = 257, F4 = 65537. It is not known whether there are higher Fermat primes. Also n = 3, 5, 15 multiples of p-adic length scales, clearly distinguishable from them, are predicted, and this prediction is testable in living matter.
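Gauss's characterization is easy to turn into a small generator for the allowed integers nF; this is a generic illustration of the ruler-and-compass condition, not code from the chapters.

```python
# Generate the ruler-and-compass constructible polygon counts
# n_F = 2^k * (product of distinct Fermat primes) up to a bound (Gauss).
from itertools import combinations

FERMAT_PRIMES = [3, 5, 17, 257, 65537]   # all Fermat primes currently known

def fermat_polygons(bound):
    ns = set()
    for r in range(len(FERMAT_PRIMES) + 1):
        for combo in combinations(FERMAT_PRIMES, r):
            p = 1
            for f in combo:
                p *= f              # product of distinct Fermat primes
            n = p
            while n <= bound:
                if n >= 3:          # a polygon needs at least 3 sides
                    ns.add(n)
                n *= 2              # multiply in powers of 2
    return sorted(ns)

print(fermat_polygons(20))
# [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20] -- note 7, 9, 11, 13, 14, 18, 19 are absent
```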

2. Application to planetary Bohr orbitology

The understanding of the quantization of Planck constants in M4 and CP2 degrees of freedom led to a considerable progress in the understanding of the Bohr orbit model of planetary orbits proposed by Nottale, whose TGD version initiated "the dark matter as macroscopic quantum phase with large Planck constant" program.

Gravitational Planck constant is given by

hbargr/hbar0= GMm/v0

where an estimate for the value of v0 can be deduced from the known masses of the Sun and planets. This gives v0 ≈ 4.6×10^-4.

Combining this expression with the above derived expression one obtains

GMm/v0 = nF = 2^k × ∏_s F(n_s)

In practice only the Fermat primes 3,5,17 appearing in this formula can be distinguished from a power of 2 so that the resulting formula is extremely predictive. Consider now tests for this prediction.

  1. The first step is to look whether planetary mass ratios can be reproduced as ratios of integers nF of this kind. This turns out to be the case if one accepts Nottale's proposal in which outer planets correspond to v0/5: TGD provides a mechanism explaining this modification of v0. The accuracy is better than 10 per cent.

  2. The second step is to look whether GMm/v0 for, say, Earth allows the expression above. It turns out that there is a discrepancy: allowing the second power of 17 in the formula one obtains an excellent fit, but only the first power is allowed. Something goes wrong! 16 is the nearest power of two available and gives for v0 the value 2^-11 deduced from biological applications and consistent with the p-adic length scale hypothesis. Amusingly, v0(exp) = 4.6×10^-4 equals 1/(2^7×F2) = 4.5956×10^-4 within the experimental accuracy.

    A possible solution of the discrepancy is that the empirical estimate for the factor GMm/v0 is too large, since m contains also the visible mass not actually contributing to the gravitational force between dark matter objects. M is known correctly from the knowledge of the gravitational field of the Sun. The assumption that the dark mass is a fraction 1/(1+ε) of the total mass for Earth gives 1+ε = 17/16 in an excellent approximation. This gives for the fraction of the visible matter the estimate ε = 1/16 ≈ 6 per cent. The estimate for the fraction of visible matter in the cosmos is about 4 per cent, so that the estimate is reasonable and would mean that most of the planetary and solar mass would also be dark, as TGD indeed predicts and for which there are already several pieces of experimental evidence (consider only the evidence, discussed earlier in this blog, that the photosphere has a solid surface).
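The two numerical coincidences invoked above are quickly checked (my own arithmetic check, values from the text):

```python
# Check the v0 coincidence and the dark-matter fraction quoted above.
v0_exp = 4.6e-4                      # empirical estimate from planetary fits

v0_F = 1 / (2**7 * 17)               # 1/(2^7 * F_2), with F_2 = 17
print(f"1/(2^7*17) = {v0_F:.4e}")    # 4.5956e-04, matching v0 within the accuracy
assert abs(v0_F - v0_exp) / v0_exp < 0.01

# Visible-matter fraction: 1 + eps = 17/16 gives eps = 1/16
eps = 17/16 - 1
print(f"visible fraction eps = {eps:.4f}")   # 0.0625, i.e. ~6 per cent
```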

To sum up, it seems that everything is now ready for the great revolution. I would be happy to share this flood of discoveries with colleagues, but all depends on what the establishment decides. In my humble opinion twenty-one years in a theoretical desert should be enough for even the most arrogant theorist. There is now a book of 800 A4 pages about TGD at Amazon: Topological Geometrodynamics, so that it is much easier to learn what TGD is about.

The reader interested in details is recommended to look at the chapter Was von Neumann Right After All? of the book "TGD: an Overview"

and the chapter TGD and Astrophysics of the book "Classical Physics in Many-Sheeted Space-Time".

Sunday, May 07, 2006

Connes tensor product as universal interaction, quantization of Planck constant, McKay correspondence, etc...

It seems that the discussion in Peter Woit's blog, in John Baez's This Week's Finds, and in Lubos Motl's blog happens to touch very closely on what I have worked with during the last weeks: ADE and Jones inclusions.

1. Some background.

  1. It has been clear for a few years that TGD could emerge from the mere infinite-dimensionality of the Clifford algebra of the infinite-dimensional "world of classical worlds" and from the number theoretical vision in which classical number fields play a key role and determine the imbedding space and space-time dimensions. This would fix the "world of classical worlds" completely.

  2. The infinite-dimensional Clifford algebra is a standard representation for the von Neumann algebra known as a hyper-finite factor of type II1. In the TGD framework the infinite tensor power of C(8), the Clifford algebra of 8-D space, would be the natural representation of this algebra.

2. How to localize infinite-dimensional Clifford algebra?

The basic new idea is to make this algebra local: a local Clifford algebra as a generalization of the gamma field of string models.

  1. Represent the Minkowski coordinate of Md as a linear combination of gamma matrices of D-dimensional space. This is the first guess. One fascinating finding is that this notion can be quantized, and classical Md is a genuine quantum Md with coordinate values given by eigenvalues of commuting Hermitian operators built from the matrix elements. Euclidean space is not obtained in this manner! Minkowski signature is something quantal! The standard quantum group Gl(2,q)(C) gives M4.

  2. Form power series of the Md coordinate represented as a linear combination of gamma matrices, with coefficients in the corresponding infinite-D Clifford algebra. You would get the tensor product of two algebras.

  3. There is however a problem: one cannot distinguish the tensor product from the original infinite-D Clifford algebra. D=8 is however an exception! You can replace the gammas in the expansion of the M8 coordinate by hyper-octonionic units, which are non-associative (or by octonionic units in the quantum complexified-octonionic case). Now you cannot anymore absorb the tensor factor into the Clifford algebra, and you get a genuine M8-localized factor of type II1. Everything is determined by infinite-dimensional gamma matrix fields analogous to conformal super fields with z replaced by a hyper-octonion.

  4. Octonionic non-associativity actually reproduces the whole of classical and quantum TGD: space-time surfaces must be associative sub-manifolds, hence hyper-quaternionic surfaces of M8. Representability as surfaces in M4xCP2 follows naturally, as does the notion of the configuration space of 3-surfaces, etc.
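The non-associativity these items lean on is easy to exhibit concretely. Below is a minimal Cayley-Dickson construction of the octonions from quaternion pairs (my own generic illustration, nothing TGD-specific): associativity fails as soon as a unit from the doubled half enters a product, while the quaternionic subalgebra stays associative.

```python
# Octonions via Cayley-Dickson doubling of quaternions (w, x, y, z) tuples.

def qmul(a, b):
    """Quaternion product."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qconj(a):
    w, x, y, z = a
    return (w, -x, -y, -z)

def qadd(a, b, sign=1):
    return tuple(u + sign*v for u, v in zip(a, b))

def omul(p, q):
    """Octonion product on quaternion pairs: (a,b)(c,d) = (ac - d*b, da + bc*)."""
    a, b = p
    c, d = q
    return (qadd(qmul(a, c), qmul(qconj(d), b), -1),
            qadd(qmul(d, a), qmul(b, qconj(c))))

Q1 = (1.0, 0.0, 0.0, 0.0)
QI = (0.0, 1.0, 0.0, 0.0)
QJ = (0.0, 0.0, 1.0, 0.0)
Q0 = (0.0, 0.0, 0.0, 0.0)

e1 = (QI, Q0)      # units from the quaternionic half
e2 = (QJ, Q0)
e4 = (Q0, Q1)      # unit from the half added by the doubling
e3 = omul(e1, e2)  # = (k, 0), still quaternionic

# Associativity fails once the new unit is involved ...
assert omul(omul(e1, e2), e4) != omul(e1, omul(e2, e4))
# ... but holds inside the quaternionic (associative) subalgebra:
assert omul(omul(e1, e2), e3) == omul(e1, omul(e2, e3))
```

The failure of associativity exactly when the "new" half enters is the structural fact that lets the tensor factor resist absorption in the D=8 case.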

3. Connes tensor product for free fields as a universal definition of interaction quantum field theory

This picture has profound implications. Consider first the construction of S-matrix.

  1. A non-perturbative construction of the S-matrix emerges. The deep principle is simple. The canonical outer automorphism for von Neumann algebras defines a natural candidate for the unitary transformation giving rise to the propagator. This outer automorphism is trivial for II1 factors, meaning that all lines appearing in Feynman diagrams must be on mass shell states satisfying Virasoro conditions. You can allow all possible diagrams: all on mass shell loop corrections vanish by unitarity and what remains are diagrams with a single N-vertex!

  2. At the 2-surface representing the N-vertex, the space-time sheets representing generalized Bohr orbits of incoming and outgoing particles meet. This vertex involves a von Neumann trace (finite!) of localized gamma matrices expressible in terms of fermionic oscillator operators and defining free fields satisfying Super Virasoro conditions.

  3. For free fields the ordinary tensor product would not give an interacting theory. What makes the S-matrix non-trivial is that the *Connes tensor product* is used instead of the ordinary one. This tensor product is a universal description for interactions and we can forget perturbation theory! Interactions result as a deformation of the tensor product. The unitarity of the resulting S-matrix is unproven, but I dare believe that it holds true.

  4. The subfactor N defining the Connes tensor product has an interpretation in terms of the interaction between the experimenter and the measured system, and each interaction type defines its own Connes tensor product. Basically N represents the limitations of the experimenter. For instance, IR and UV cutoffs could be seen as primitive manners to describe what N describes much more elegantly. At the limit when N contains only a single element, the theory would become a free field theory, but this is an ideal situation never achievable.

4. The quantization of Planck constant and ADE hierarchies

The quantization of Planck constant has been the basic theme of TGD for more than one and a half years and leads also to the understanding of ADE correspondences (index ≤ 4 and index = 4) from the point of view of Jones inclusions.

  1. The new view allows one to understand how and why Planck constant is quantized and gives an amazingly simple formula for the separate Planck constants assignable to M4 and CP2 and appearing as scaling constants of their metrics. This happens in terms of a mild generalization of standard Jones inclusions. The emergence of the imbedding space means only that the scalings of these metrics have a spectrum: no landscape.

  2. In the ordinary phase the Planck constants of M4 and CP2 are the same and have their standard values. Large Planck constant phases correspond to situations in which a transition occurs to a phase in which quantum groups appear. These situations correspond to standard Jones inclusions in which the Clifford algebra is replaced with a sub-algebra of its G-invariant elements. G is a product Ga×Gb of subgroups of SL(2,C) and SU(2)L×U(1), which also acts as a subgroup of SU(3). Space-time sheets are n(Gb)-fold coverings of M4 and n(Ga)-fold coverings of CP2, generalizing the picture which has emerged already. An elementary study of these coverings fixes the values of the scaling factors of the M4 and CP2 Planck constants to the orders of the maximal cyclic sub-groups. The mass spectrum is invariant under these scalings.

  3. This predicts automatically arbitrarily large values of Planck constant and assigns the preferred values of Planck constant to quantum phases q = exp(iπ/n) expressible in terms of square roots of rationals: these correspond to polygons obtainable by compass and ruler construction. In particular, the experimentally favored values of hbar in living matter correspond to these special values of Planck constant. This model reproduces also the other aspects of the general vision. The subgroups of SL(2,C) in turn can give rise to a re-scaling of the SU(3) Planck constant. The most general situation can be described in terms of Jones inclusions for fixed point subalgebras of number theoretic Clifford algebras defined by Ga×Gb in SL(2,C)×SU(2).

  4. These inclusions (apart from those for which Ga contains an infinite number of elements) are represented by ADE or extended ADE diagrams depending on the value of the index. The group algebras of these groups give rise to additional degrees of freedom which make it possible to construct the multiplets of the corresponding gauge groups. For index ≤ 4 all gauge groups allowed by the ADE correspondence (An, D2n, E6, E8) are possible, so that TGD seems to be able to mimic these gauge theories. For index = 4 all ADE Kac-Moody groups are possible and again mimicry becomes possible: TGD would be a kind of universal physics emulator, but it would be anyonic dark matter which would perform this emulation.

  5. Large hbar phases provide good hopes of realizing topological quantum computation. There is an additional new element. For quantum spinors, state function reduction cannot be performed unless the quantum deformation parameter equals q=1. The reason is that the components of a quantum spinor do not commute: it is however possible to measure the commuting operators representing the moduli squared of the components, giving the probabilities associated with 'true' and 'false'. The universal eigenvalue spectrum for probabilities does not in general contain (1,0), so that quantum qubits are inherently fuzzy. State function reduction would occur only after a transition to the q=1 phase, and decoherence is not a problem as long as it does not induce this transition.

For details see the chapter Was von Neumann Right After All? of "TGD: an Overview" at my homepage.

Matti Pitkanen