Friday, August 25, 2006

Absolute extremum property for Kähler action implies dynamical Kac-Moody and super conformal symmetries

The absolute extremization of Kähler action, in the sense that the value of the action is maximal or minimal for a space-time region where the sign of the action density is definite, is a very attractive idea. Both maxima and minima seem to be possible and could correspond to the quaternionic (associative) and co-quaternionic (co-associative) space-time sheets emerging naturally in the number-theoretic approach to TGD.

It seems now clear that the fundamental formulation of TGD is as an almost-topological conformal field theory for light-like partonic 3-surfaces. The action principle is uniquely the Chern-Simons action for the Kähler gauge potential of CP2 induced to the space-time surface. This approach predicts the basic super Kac-Moody and superconformal symmetries to be present in TGD and extends them. The quantum fluctuations around the classical solutions of the field equations break these supersymmetries partially.

The Dirac determinant for the modified Dirac operator associated with the Chern-Simons action defines the vacuum functional, and the guess is that it equals the exponent of Kähler action for the absolute extremal. The plausibility of this conjecture would increase considerably if one could show that the absolute extrema of Kähler action also possess appropriately broken superconformal symmetries. This has been a long-lived conjecture, but only quite recently was I able to demonstrate it by a simple argument.

The extremal property for Kähler action with respect to variations of the time derivatives of the initial values, keeping h^k fixed at X³, implies the existence of an infinite number of conserved charges assignable to the small deformations of the extremum and to H isometries. An infinite number of local conserved supercurrents assignable to the second variations and to the covariantly constant right-handed neutrino is implied as well. The corresponding conserved charges vanish, so that the interpretation as dynamical gauge symmetries is appropriate. This result provides strong support for the view that the local extremal property is indeed consistent with the almost-topological QFT property at the parton level.

The starting point is the field equations for the second variations. If the action contains only derivatives of the field variables, one obtains for the small deformations δh^k of a given extremal

∂_α J^{α,k} = 0 ,

J^{α,k} = (∂²L/∂h^k_α ∂h^l_β) δh^l_β ,

where h^k_α denotes the partial derivative ∂_α h^k. A simple example is the action for a massless scalar field, in which case the conservation law reduces to the conservation of the current defined by the gradient of the scalar field. The addition of a mass term spoils this conservation law.
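To spell out the scalar-field example, here is a sketch of the computation (standard field theory, stated only to make the conservation law and its breaking explicit):

```latex
% Massless case: the second-variation current is the gradient of the deformation.
\mathcal{L} = \tfrac{1}{2}\,\partial_\mu\varphi\,\partial^\mu\varphi
\;\Rightarrow\;
J^\alpha
 = \frac{\partial^2 \mathcal{L}}
        {\partial(\partial_\alpha\varphi)\,\partial(\partial_\beta\varphi)}\,
   \partial_\beta\,\delta\varphi
 = \partial^\alpha \delta\varphi ,
\qquad
\partial_\alpha J^\alpha = \Box\,\delta\varphi = 0 .
```

Adding the mass term −(1/2)m²φ² changes the equation of motion of the deformation to □δφ + m²δφ = 0, so that ∂_α J^α = −m²δφ no longer vanishes: this is the breaking of the conservation law mentioned above.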

If the action is general coordinate invariant, the field equations read as

D_α J^{α,k} = 0 ,

where D_α is now the covariant derivative and index raising is achieved using the metric of the imbedding space.

The field equations for the second variation state the vanishing of a covariant divergence, and one obtains conserved currents by contracting this equation with the covariantly constant Killing vector fields j^A_k of M⁴ translations. This means that the second variations define the analog of a local gauge algebra in M⁴ degrees of freedom:

D_α J^{A,α}_n = 0 ,

J^{A,α}_n = J^{α,k}_n j^A_k .

Conservation for the Killing vector fields reduces to the contraction of a symmetric tensor with D_k j_l, which vanishes by the Killing equation. The reason for the symmetry is that the action depends on the induced metric and Kähler form only.
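The vanishing can be made explicit by a schematic reconstruction of the argument; here T^{kl} is an auxiliary symbol (not used elsewhere in the text) for the symmetric tensor appearing in the contraction:

```latex
% Covariant divergence of the current contracted with a Killing vector j^A_k:
D_\alpha\!\left(J^{\alpha,k} j^A_k\right)
 = \underbrace{\left(D_\alpha J^{\alpha,k}\right) j^A_k}_{=\,0\ \text{by field eqs.}}
 + \underbrace{J^{\alpha,k}\,\partial_\alpha h^l}_{\equiv\, T^{lk}}\, D_l j^A_k
 = \tfrac{1}{2}\,T^{lk}\left(D_l j^A_k + D_k j^A_l\right) = 0 .
```

The symmetry of T^{lk} follows because the action depends only on the induced metric and the induced Kähler form, and the bracket in the last step vanishes by the Killing equation for the M⁴ translation.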

Covariantly constant right-handed neutrino spinors Ψ_R also define a collection of conserved supercurrents associated with the small deformations of the extremum:

J^α_n = J^{α,k}_n γ_k Ψ_R .

The second variation also gives a total divergence term which yields contributions at the two 3-dimensional ends of the space-time sheet as the difference

Q_n(X³_f) − Q_n(X³) = 0 ,

Q_n(Y³) = ∫_{Y³} d³x J_n ,

J_n = J^{t,k} h_{kl} δh^l_n .

The contribution of the fixed end X³ vanishes. For an extremum with respect to the variations of the time derivatives ∂_t h^k at X³, the total variation must vanish. This implies that the charges Q_n defined by the second variations vanish identically:

Q_n(X³_f) = ∫_{X³_f} d³x J_n = 0 .

Since the second end can be chosen arbitrarily, one obtains an infinite number of conditions analogous to the Virasoro conditions. The analogs of an unbroken loop group symmetry for H isometries and of an unbroken local supersymmetry generated by the right-handed neutrino result. Thus the extremal property is a necessary condition for the realization of the gauge symmetries present at the partonic level also at the level of the space-time surface. The breaking of the supersymmetries could perhaps be understood in terms of the breaking of these symmetries for light-like partonic 3-surfaces which are not extremals of the Chern-Simons action.

For more details see the chapter Construction of Configuration Space Kähler Geometry from Symmetry Principles: Part II of "TGD: Physics as Infinite-Dimensional Geometry".

Thursday, August 24, 2006

Dark matter based model for Pioneer and flyby anomalies

This has been a very enjoyable period for a dark matter aficionado. During the last month I have had an opportunity to apply the TGD based vision about dark matter to about five existing or completely new anomalies. Just yesterday I learned about new findings related to the Pioneer and flyby anomalies which challenge the standard theory of gravitation.

I have earlier proposed a model for the Pioneer anomaly resulting as a by-product of an explanation of another anomaly, which can be understood if cosmic expansion is compensated by a radial contraction of the solar system in local Robertson-Walker coordinates. The recent findings reported here allow one to sharpen the model, suggesting a universal primordial mass density associated with the solar system. The facts about the flyby anomaly lead to a model in which the anomalous energy gain of the spacecraft in the Earth-craft rest system results when it passes through a spherical layer of dark matter containing Earth's orbit (this is of course too stringent a model). These structures are predicted by the model explaining the Bohr quantization of planetary orbits to serve as templates for the condensation of visible matter around them.

1. Explanation of Pioneer anomaly in terms of dark matter

I have proposed an explanation of the Pioneer anomaly as a prediction of a model explaining the claimed radial acceleration of planets, which is such that it compensates the cosmological expansion of the planetary system. The correct prediction is that the anomalous acceleration is given by the Hubble constant. The precise mechanism making this possible was not proposed.

A possible mechanism is based on the presence of dark matter increasing the effective solar mass. Since the acceleration anomaly is constant, a dark matter density behaving like ρ_d = (3/4π)(H/Gr), where H is the Hubble constant, giving M(r) ∝ r², is required. For instance, at the orbital radius R_J of Jupiter the dark mass would be about (δa/a) M_S ≈ 1.3×10⁻⁴ M_S and would become comparable to M_S at about 100 R_J = 520 AU. Note that the standard theory for the formation of planetary systems assumes a solar nebula of radius of order 100 AU having 2-3 solar masses. For Pluto at a distance of 38 AU the dark mass would be about one per cent of the solar mass. This model would suggest that planetary systems are formed around dark matter systems with a universal mass density. For this option dark matter could perhaps be seen as taking care of the contraction compensating for the cosmic expansion by using a suitable dark matter distribution.
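As a sanity check on the numbers quoted above, the M(r) ∝ r² scaling can be applied directly. The sketch below takes the quoted value 1.3×10⁻⁴ M_S at Jupiter's orbital radius (5.2 AU) as its only input; it just illustrates the scaling and is not an independent derivation:

```python
# Dark mass enclosed within radius r for a density rho_d ~ 1/r, which gives
# M(r) proportional to r^2. The normalization at Jupiter's orbital radius
# (5.2 AU) is the value quoted in the text, 1.3e-4 solar masses.
R_JUPITER_AU = 5.2            # orbital radius of Jupiter in AU
M_DARK_AT_JUPITER = 1.3e-4    # dark mass at R_J in units of M_S (quoted value)

def dark_mass_fraction(r_au: float) -> float:
    """Dark mass inside radius r (in AU), in units of the solar mass M_S."""
    return M_DARK_AT_JUPITER * (r_au / R_JUPITER_AU) ** 2

# Pluto at 38 AU: about 0.7 per cent of M_S, i.e. of order one per cent.
print(f"Pluto (38 AU): {dark_mass_fraction(38.0):.1e} M_S")

# Radius at which the dark mass equals M_S: r = R_J / sqrt(1.3e-4) ~ 88 R_J,
# i.e. roughly the "about 100 R_J" (~500 AU) quoted in the text.
r_equal = R_JUPITER_AU / M_DARK_AT_JUPITER ** 0.5
print(f"M_dark = M_S at about {r_equal:.0f} AU ({r_equal / R_JUPITER_AU:.0f} R_J)")
```

Both quoted figures (one per cent at Pluto, comparability to M_S at roughly a hundred Jupiter radii) come out consistent with the r² scaling.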

Here the possibility that the acceleration anomaly for Pioneer 10 (11) emerged only after the encounter with Jupiter (Saturn) is raised. The model explaining the anomaly, given by the Hubble constant, as being due to a radial contraction compensating the cosmic expansion would predict that the anomalous acceleration should be observed everywhere, not only outside Saturn. The model in which a universal dark matter density produces the same effect would allow the required dark matter density ρ_d = (3/4π)(H/Gr) to be present only as a primordial density able to compensate the cosmic expansion. The formation of dark matter structures could have modified this primordial density, and visible matter would have condensed around these structures so that only the region outside Jupiter would contain this density. There are also diurnal and annual variations in the acceleration anomaly (see the same article). These variations should be due to the physics of the Earth-Sun system. I do not know whether they can be understood in terms of a temporal variation of the Doppler shift due to the spinning and orbital motion of the Earth with respect to the Sun.

2. Explanation of Flyby anomaly in terms of dark matter

The so-called flyby anomaly might relate to the Pioneer anomaly. The flyby mechanism used to accelerate spacecraft is a genuine three-body effect involving the Sun, a planet, and the spacecraft. Planets rotate around the Sun in an anticlockwise manner, and when the spacecraft arrives from the right-hand side, it is attracted by the planet, is deflected in an anticlockwise manner, and gains energy as measured with respect to the solar center-of-mass system. The energy originates from the rotational motion of the planet. If the spacecraft arrives from the left, it loses energy. What happens is analyzed in the above-linked article using an approximately conserved quantity known as Jacobi's integral

J = e − ω e_z · (r × v) .

Here e is the total energy per mass of the spacecraft, ω is the angular velocity of the planet, e_z is a unit vector normal to the planet's rotational plane, and the various quantities are given with respect to the solar cm system.

This as such is not anomalous, and the flyby effect is routinely used to accelerate spacecraft. For instance, Pioneer 11 was accelerated in the gravitational field of Jupiter to a more energetic elliptic orbit directed to Saturn, and the encounter with Saturn led to a hyperbolic orbit leading out of the solar system.

Consider now the anomaly. The energy of the spacecraft in the planet-spacecraft cm system is predicted to be conserved in the encounter. Intuitively this seems obvious, since the time and length scales of the collision are so short compared to those associated with the interaction with the Sun that the gravitational field of the Sun does not vary appreciably in the collision region. Surprisingly, it turned out that this conservation law does not hold true in Earth flybys. Furthermore, irrespective of whether the total energy with respect to the solar cm system increases or decreases, the energy in the cm system increases during the flyby in the cases considered.

Five Earth flybys have been studied: Galileo-I, NEAR, Rosetta, Cassini, and Messenger, and the article of Anderson and collaborators gives a nice quantitative summary of the findings and of the basic theoretical notions. Among other things, the tables of the article give the deviation δe_{g,S} of the energy gain per mass in the solar cm system from the predicted gain. The anomalous energy gain in the Earth rest (cm) system is δe_E ≈ v·δv and allows one to deduce the change in velocity. The general order of magnitude is δv/v ≈ 10⁻⁶ for Galileo-I, NEAR and Rosetta, but consistent with zero for Cassini and Messenger. For instance, for Galileo-I one has v_{inf,S} = 8.949 km/s and δv_{inf,S} = 3.92 ± 0.08 mm/s in the solar cm system.

Many explanations for the effect can be imagined, but dark matter is the most obvious candidate in the TGD framework. The model for the Bohr quantization of planetary orbits assumes that planets are concentrations of visible matter around dark matter structures. These structures could be tubular structures around the orbit or a nearly spherical shell containing the orbit. The contribution of the dark matter to the gravitational potential increases the effective solar mass M_{eff,S}. This of course cannot explain the acceleration anomaly, which has a constant value.

For instance, if the spacecraft traverses the shell structure, its kinetic energy per mass in the Earth cm system changes by a constant amount not depending on the mass of the spacecraft:

δE/m ≈ v_{inf,E} δv = δV_{gr} = G δM_{eff,S}/R .

Here R is the outer radius of the shell and v_{inf,E} is the magnitude of the asymptotic velocity in the Earth cm system. This very simple prediction should be testable. If the spacecraft arrives from the direction of the Sun, the energy increases. If the spacecraft returns back to the sunny side, the net anomalous energy gain vanishes. This has been observed in the case of the Pioneer 11 encounter with Jupiter (see this).
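The order of magnitude of the anomaly can be checked against the quoted Galileo-I numbers. The sketch below simply evaluates δv/v and the anomalous energy gain per unit mass v_{inf}·δv from the values quoted above; it is an illustration of the bookkeeping, not a fit:

```python
# Flyby-anomaly bookkeeping for Galileo-I, using the numbers quoted in the text.
v_inf = 8949.0       # asymptotic velocity, m/s (8.949 km/s, quoted)
delta_v = 3.92e-3    # anomalous velocity change, m/s (3.92 mm/s, quoted)

# Relative anomaly: of the general order 1e-6 stated in the text.
ratio = delta_v / v_inf
print(f"delta_v / v_inf = {ratio:.1e}")

# Anomalous energy gain per unit mass, delta_E/m ~ v_inf * delta_v,
# which the shell model equates with G * delta_M_eff / R.
delta_e = v_inf * delta_v
print(f"delta_E/m = {delta_e:.1f} m^2/s^2")
```

The resulting δv/v is a few times 10⁻⁷, consistent with the quoted general order of magnitude 10⁻⁶.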

The mechanism would make it possible to deduce the total dark mass of, say, a spherical shell of dark matter. One has

δM/M_S ≈ (δv/v_{inf,E}) × (2K/V) , K = v²_{inf,E}/2 , V = G M_S/R .

For the case considered, δM/M_S ≥ 2×10⁻⁶ is obtained. Note that the amount of dark mass within a sphere of 1 AU implied by the explanation of the Pioneer anomaly would be about 6.2×10⁻⁶ M_S. Since the orders of magnitude are the same, one might consider the possibility that the primordial dark matter has concentrated in spherical shells in the case of the inner planets, as indeed suggested by the model for the quantization of the radii of planetary orbits.

In the solar cm system the energy gain is not constant. Denote by v_{i,E} and v_{f,E} the initial and final velocities of the spacecraft in the Earth cm system. Let δv be the anomalous change of velocity in the encounter, and denote by θ_S the angle between δv and the asymptotic final velocity v_{f,S} of the spacecraft in the solar cm system. One obtains for the corrected e_{g,S} the expression

e_{g,S} = (1/2)[(v_{f,E} + v_P)² − (v_{i,E} + v_P)²] .

This gives for the change δe_{g,S}

δe_{g,S} ≈ (v_{f,E} + v_P) · δv ≈ v_{f,S} δv cos(θ_S) = v_{inf,S} δv cos(θ_S) .

Here v_{inf,S} is the asymptotic velocity in the solar cm system, which is predicted by the theory in excellent approximation. Using the spherical shell as a model for dark matter one can write this as

δe_{g,S} = (v_{inf,S}/v_{inf,E}) × (G δM/R) × cos(θ_S) .

The proportionality of δe_{g,S} to cos(θ_S) should explain the variation of the anomalous energy gain.

For a spherical shell, δv is in the first approximation orthogonal to v_P, since it is produced by a radial acceleration, so that one has in good approximation

δe_{g,S} ≈ v_{f,S} · δv ≈ v_{f,E} · δv ≈ v_{inf,E} δv cos(θ_E) .

For Cassini and Messenger, cos(θ_S) should be rather near to zero, so that v_{inf,E} and v_{inf,S} should be nearly orthogonal to the radial vector from the Sun in these cases. This provides a clear-cut qualitative test for the spherical shell model.

For the TGD based view about astrophysics see the chapter TGD and Astrophysics of "Physics in Many-Sheeted Space-Time".

Thursday, August 17, 2006

Dark matter in astrophysics and in hadronic collisions

The NASA site is announcing that dark matter has been discovered by the Chandra X-ray Observatory. A teleconference will be held on August 21 at 1 pm EDT. John Baez has a nice summary of what is involved in This Week's Finds. There is also some discussion about this in Lubos's blog.

Imagine a collision between two galaxies. The ordinary matter in them collides and gets interlocked due to the mutual gravitational attraction. Dark matter, however, just keeps its momentum and keeps going, leaving behind the colliding galaxies. It seems that something like this has been detected by the Chandra X-ray Observatory, using an ingenious manner of detecting dark matter: collisions of ordinary matter produce a lot of X-rays, and the dark matter outside the galaxies acts as a gravitational lens.

Dark matter is everywhere, and in the TGD Universe the hierarchy of Planck constants provides a detailed quantitative model for it. The basic hypothesis is that the failure of perturbation theory at the limit of strong coupling strength quite generally induces a phase transition in which the Planck constant increases and thus leads to the reduction of the gauge coupling strength in the new phase. This inspires the proposal that valence quarks inside hadrons correspond to a larger value of Planck constant than sea quarks and would indeed be dark matter relative to the sea quarks (see this).

If one takes this picture seriously, one can ask whether something analogous to the collision of galaxy clusters could be seen in hadron physics. There are indeed more than decade-old findings having an intriguing resemblance to the observations above: e-p, p-p and p-antip collisions are in question. For p-p collisions see A. M. Smith et al (1985), Phys. Lett. B 163, p. 267. For e-p collisions see M. Derrick et al (1993), Phys. Lett. B 315, p. 481, and for p-antip collisions see A. Brandt et al (1992), Phys. Lett. B 297, p. 417.

In e-p scattering, anomalous collisions were observed in which the proton scatters essentially elastically whereas jets in the direction of the incoming virtual photon emitted by the electron are observed. These events can be understood if the proton emits a color singlet particle carrying a small fraction of the proton's momentum which scatters from the photon, whereas the valence quark complex continues to move as such. The proposed interpretation was in terms of the Pomeron, a particle-like entity which was originally introduced in Regge theory. Later the notion was given up, since the predicted Regge trajectory did not exist.

The original TGD based explanation was that the Pomeron corresponds to sea partons which reside at a different space-time sheet than the valence quarks (I just realized with some nostalgia that this was not yesterday; it is more than a decade since I invented this little model!). In the anomalous collisions the valence quarks connected by color flux tubes would continue almost without noticing anything, whereas the sea would interact with the virtual photon. Combining this with the proposed identifications of valence and sea quarks as phases of matter characterized by different values of Planck constant, the analogy with the astrophysical situation becomes rather precise.

For the quantization of Planck constant and the dark matter hierarchy see the chapter Does TGD Predict the Spectrum of Planck Constants? of "Towards S-Matrix". See also the chapters p-Adic Mass Calculations: Hadron Masses and p-Adic Mass Calculations: New Physics of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

Tuesday, August 15, 2006

The story of Martin Eden

I have a habit of listening to radio plays on Sunday afternoons and Monday evenings. Last time the play contained a short episode in which Kalle Holmberg, a Finnish director acting in the play, read a fragment of text from the end of a book by Jack London. London had imagined what it is to die by drowning, and to do it by one's own decision. This piece of text was so impressive that I decided to find the book in the local library. I found it, or actually half of it. The book was "Martin Eden", and it had been chopped into two pieces, as I realized when I was back at home with part II of the book.

I read what I had got and it struck my feelings very deeply at a personal level. The book tells about Martin Eden living through a period of life when he has realized that his mission is to become a writer. He devotes all his mental powers to this challenge and learns and works fanatically. He sends his texts to magazines and they are rejected again and again. On some rare occasions a text is published, but he has to learn that this is by no means a guarantee that he will get the promised royalty. At that time it was not easy to become a writer in the USA. There was no social security system, and to become a writer meant readiness to literally starve for years. So did also Martin Eden.

Hunger was however not the worst enemy. The social humiliations were much more difficult to tolerate. At that time people had strong ties to their neighbours. Middle-class people saw Martin as a person afraid of real work, wanting to spend his time in leisure while pretending that he was some sort of a writer.

Those who loved Martin caused him the deepest agony. His sister and his beloved could not but believe that his continual failure to get his texts published proved that he did not have the needed talent. They tried their best to persuade him to get a job just like anyone else in the neighbourhood. He could only tell them that he knew that he had the gift and could not do anything else. He of course saw that they could not believe him. He had to face their pity and to sense that they felt shame for him.

As a serious writer he of course soon learned that the texts published in literary journals were mostly horrible rubbish. The readers of these magazines wanted simple daydreams. He learned to use his evenings to produce these secondary texts using a simple format, and this purposeful rubbish was the only text he got published for a long time.

The catastrophe in personal relationships had to come sooner or later. Martin Eden was a hard-boiled individualist in the style of Nietzsche but ironically got the label of a communist: this was the final catastrophe. What happened was that his friend took him to a meeting of the socialist party. He spoke passionately against socialist ideas as a slave morality, and this stimulated a furious debate among people who had built their world view through hard work and hard life experiences and knew what they were talking about. A local journalist happened to be present: a young and rather stupid man who did not have the slightest idea what he or anyone else in the meeting was arguing about. As a natural opportunist, this young fellow deduced that Martin must be a socialist, since in his simple world all rational people participating in meetings of the socialist party had to be socialists. So he wrote to a local journal telling about Martin as a local leader of the socialist party! Martin Eden now had the label of a communist, and it became impossible for him even to buy food from the local store. People did not want to have any contact with him. Also his beloved left him. He was completely alone.

And then suddenly the success came. For some reason impossible to explain, one of his texts was accepted by a famous literary magazine and the snowball began to roll. The very same magazines which had not bothered even to respond to him during the last years were suddenly fighting for the rights to publish his texts. They were ready to pay him incredible sums of money just for the title of his next work. His books were translated into other languages.

Martin Eden was suddenly a rich and famous man. The people who had refused to recognize him on the street were now inviting him to lunch. These endless invitations to lunch were really funny: no one had suggested a lunch when he was starving. And nothing had changed. He was exactly the same Martin Eden as before. He had not written a single line of new text during this period of success. Suddenly all these people who had regarded him as a total failure and expressed their disgust openly were reading his stories and expressing their admiration. He realized that they were even afraid of him!

Martin Eden had perhaps seen too much to cheat himself. We must have some beliefs, or at least illusions, about our fellow human beings and society in order to survive, but this period of "success" stripped even the last ones off without mercy. The cup became full when his former beloved came to him and with a lot of tears confessed that she had only now learned to really love him. Martin Eden could not feel anything but embarrassment anymore, and by an unlucky accident he had to learn that the tragic episode had been a pre-calculated scene. He was now rich, famous, and desperately alone. He could not hate nor love; his life was devoid of meaning. From this it was not very many paragraphs to the impressive piece of text which I heard in the radio play.

There are many ways to interpret "Martin Eden". It could be seen as a failure of Nietzsche's idea about the Superman. It could be seen as a story about how it is possible to get rid of social illusions to the extent that one is able to resist the basic biological drive to survive and is ready to leave the cycle of Karma. The story could be seen as teaching how a mission can help a human being to tolerate almost every imaginable humiliation, and how the sudden loss of a reason for existence drives a man swimming in money and social welfare into such a deep depression that the only way out is suicide.

To me this story was to a surprising degree a story of my life. For me the courage to think with my own brain and express my thoughts was what the mission to write was for Martin. How painful it was to receive the label of a crackpot as a prize for working for years seven days a week, with five hours of sleep per day, and going through all the social humiliations that Martin also experienced. How painful it was to realize that the real motivation of my academic destroyers was envy, and that it was useless to try to tell this to anyone, because every one of us wants to live in a world in which professors can be trusted. The complete inability to defend oneself against brutal social violence: it is impossible to tell how paralyzing this experience can be.

I did not of course starve physically, but I was starving socially. I had become an academic zombie. It is quite a feeling to realize that colleagues do not want to sit at your table, that you are not welcome to any academic party, that you are not invited to speak in any seminar, that people seem to be even afraid of being seen in your company, that you simply cease to exist in the academic sense. After this kind of experience, you begin to ask what really distinguishes these people, regarded as highly civilized human beings, from primitive savages living in a world of taboos.

With a deep pain I remember also the well-meaning attempts of the people closest to me to persuade me to take an ordinary job. How painful it was to see that they regarded me as a loser who for sheer personal vanity could not admit it. My mother dreamed that her son could some day enjoy human rights in this country famous for its social security. Her dream was not fulfilled: she died this summer at the age of ninety-four. I feel that I could forgive these academic bastards all that they have done to me, but not what they did to these good and honest people who believe that also professors are good and honest people.

Martin Eden experienced also "success". It seems that I too now have to face "success", since the tide is definitely turning in the academic stock market. Slightly more than a decade ago two young Finnish professors stated publicly that my work fails to satisfy every requirement that anyone can imagine scientific work should satisfy. Now I am listed as a candidate for the category "Physicists" in Wikipedia. As Martin Eden did, I definitely know that I am the same person now as I was a decade ago, and what I have done after that relies on what I had done before! Some people were lying more than a decade ago, and this meant professional murder in practice. Elsewhere in society these people would be held responsible for what they have done, but not in the world of science.

There are also definite social signals that the people who previously did not literally see me are experiencing a strange metamorphosis. It has begun with friendly smiles and taps on the shoulder; Nice to see you; How is it going...; ... If this trend continues, I might sooner or later be invited to give a seminar in the Physics Department about my life work! You never know about academic stock markets!

The most amusing piece of social comedy was the appearance of a link to my homepage on the homepage of a young Finnish physicist. Certainly one of the first such links, since my colleagues are well aware of the dangers of their profession. This fellow obviously knew that this is risky business and had decided to be cautious. He added the link under the title "Hörhöt" ("Nuts") to send a positive signal to the community. To give a positive signal in my direction he divided the "Nuts" section into two clearly separate pieces and added a strong indentation to the lower section to create the impression of a separate section whose title had somehow disappeared (I would recommend "Controversially Nuts" as the lost title;-))! My name was the lowest one in the lower list, to maximize the spatial distance to "Nuts". I admit that I had never thought that social calculation could manifest itself as a precise geometric variational principle: maximize the spatial distance between the expressions of the two opposite opinions you are forced to hold simultaneously!

Despite all this, and knowing the fate of Martin Eden, I would love to believe that these people in this corner of the Universe have suddenly become human beings in a deeper sense of the word than the social games usually allow. I confess that this is not easy after these 28 years. I try to tell myself the story about the commandant of a concentration camp with a deep admiration for the music of Bach and living a happy family life. I try to tell myself that the same human beings who are charming, humorous, and genuinely warmhearted individuals can become monsters as members of a collective. Martin Eden did not have this view about humans to help him over his crisis, and this was fatal; but I try to have it, so I have good reasons to be optimistic.

Friday, August 11, 2006

How not to build theories of everything

I have decided to avoid writing about the situation in string models. After all, anyone with a little bit of intellectual honesty admits at least privately that this theory is dead and that the basic reason we still talk about it as the only known (or even possible!) theory of everything is that very few string theoreticians have the moral strength to say "Sorry, what I am doing has become a waste of money and time, so I do not want to receive my salary anymore!".

Now I have however decided to make a few-paragraph exception to my blogging habits. There will be a semester-long program on string phenomenology taking place at the Kavli Institute for Theoretical Physics. Wati Taylor gave the inaugural talk for the program with the title String Vacua and the Quest for Predictions. The main points of the talk are summarized in Not Even Wrong. What made me optimistic was that the recent situation in string theory was openly admitted and the difficulties analyzed. It is really remarkable that a leading theoretician does this, and in this kind of context.

Instead of criticizing string theory and its sociology, I find it much more inspiring to report the growth and evolution of my own brain child TGD as a happy and unashamedly proud father. I hope that these often hastily written postings, besides giving an online documentation of the trial-and-mostly-error process that real scientific work always is, would also demonstrate that many important things have been forgotten during the last 21 years.

  • Sometimes real progress in science requires dramatic paradigm shifts, but these shifts must be inspired by solutions to the problems of the existing theory rather than by the decision that the model which did not work as a theory of hadrons is the only known theory of everything. The basic idea of TGD was stimulated by a simple question. Could it be possible to solve the conceptual problems related to the definition of energy in general relativity by constructing a Poincare invariant theory of gravitation? The answer to this question stimulated a question-answer process which has yielded 16 online books: not too bad taking into account the miserable circumstances in which this work has been done. This continual questioning has also forced me to leave the narrow confines of particle physics and to realize that a theory of everything is unavoidably also a theory of consciousness. It is really sad to see brilliant string theorists continue beating their big-brained heads against the wall after having narrowed their focus to the Planck length scale or to algebraic geometry.

  • What it means when an idea really works. What was achieved during the early days of quantum theory, when no computers were available, looks completely mysterious unless one realizes that the basic idea was so full of life that it took care of itself. The evolution of TGD during the last 28 years without a single coin of research money should teach the same lesson. The string story in turn convincingly proves that all the king's horses and all the king's men are not enough to resuscitate an idea which is dead.

  • Real progress in science is not a series of fads and fashions lasting for a year or two but requires life-long devotion. Groupthink is not the way to achieve paradigm shifts. Real revolutions are made neither in the streets nor in palaces.

  • Sticking to hidden philosophical assumptions paralyzes even the brightest thinker. I have heard again and again the evergreen about how the standard model works too well, that quantum theories of gravitation cannot be tested experimentally, etc. My experience is totally different. Having been forced to give up the reductionistic dogma and having developed a network of ideas that really predicts and explains, I have found myself literally swimming in a flood of well-reported anomalies, and I find myself again and again wondering how it can be possible that these brilliant theoreticians are unaware of all this. The biggest anomaly is of course the very phenomenon of life. If your TOE cannot say anything interesting about life and consciousness, it is better to return to the drawing board and start from scratch again.

During this effusion the inhibitory circuits in my brain have started to counteract (very important for a theoretician to have these troublemakers too) and I hasten to admit that TGD is a zoo of interacting ideas in a vigorous competition for survival rather than a clean and polished theory. Some of these ideas will suffer extinction, and it will require time and certainly a lot of collective effort before a precise formulation and the construction of the calculational machinery are possible. I also admit that TGD is an endless error correction process and as such a wonderful means of getting rid of self-importance and the tendency to have very strong opinions (something which should be a first year course in theoretical physics). Returning again and again to texts first written almost two decades ago is an often frustrating but always rewarding process.

In any case, just this morning I feel optimistic. I feel that theoretical physics is not a completely dead discipline after all. As long as there is life there is also hope.

Wednesday, August 09, 2006

Do nuclear reaction rates depend on environment?

Claus Rolfs and his group have found experimental evidence for the dependence of the rates of nuclear reactions on the condensed matter environment (see this and this). See also the comments of Lubos Motl about these findings; their tone is not too difficult to guess. For instance, the rates for the reactions 50V(p,n)50Cr and 176Lu(p,n) are fastest in conductors. The model explaining the findings has been tested for elements covering a large portion of the periodic table.

1. Debye screening of nuclear charge by electrons as an explanation for the findings

The proposed theoretical explanation is that conduction electrons screen the nuclear charge, or equivalently that the incoming proton gets additional acceleration in the attractive Coulomb field of the electrons, so that the effective collision energy increases. Since the thickness of the Coulomb barrier is thereby reduced, reaction rates below the Coulomb wall increase.

The resulting Debye radius

R_D = 69 (T/(n_eff ρ_a))^(1/2) ,

where ρ_a is the number of atoms per cubic meter and T is measured in Kelvin. R_D is of order 0.01 Angstrom for T = 373 K, n_eff = 1, and a = 1 Angstrom. The theoretical model predicts that the cross section below the Coulomb barrier for X(p,n) collisions is enhanced by the factor
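As a quick numerical check (my own back-of-envelope script, not part of the original analysis), the formula does reproduce the quoted order of magnitude:

```python
import math

# Debye radius R_D = 69 * sqrt(T / (n_eff * rho_a)) in meters,
# with T in Kelvin and rho_a in atoms per cubic meter, as in the text.
def debye_radius(T, n_eff, rho_a):
    return 69.0 * math.sqrt(T / (n_eff * rho_a))

T = 373.0           # K
n_eff = 1.0         # effective number of screening electrons
a = 1e-10           # atomic spacing of 1 Angstrom
rho_a = 1.0 / a**3  # atoms per m^3 for a cubic lattice (simplifying assumption)

RD = debye_radius(T, n_eff, rho_a)
print(RD / 1e-10)   # ~0.013, i.e. of order 0.01 Angstrom
```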

f(E) = (E/(E+U_e)) exp(πη U_e/E) .

Here E is the center of mass energy, η the so-called Sommerfeld parameter, and

U_e = 2.09×10^-11 (Z(Z+1))^(1/2) × (n_eff ρ_a/T)^(1/2) eV

is the screening energy, defined as the Coulomb interaction energy between the projectile nucleus and the electron cloud responsible for the Debye screening. The idea is that at R_D the nuclear charge is almost completely screened, so that the energy of the projectile at this radius is E+U_e, which effectively means a higher collision energy.
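As a sanity check of the U_e formula (my own estimate, using n_eff for lutetium quoted later in the text and a tabulated number density for metallic Lu as an assumption):

```python
import math

# Screening energy U_e = 2.09e-11 * sqrt(Z(Z+1)) * sqrt(n_eff * rho_a / T), in eV.
def screening_energy(Z, n_eff, rho_a, T):
    return 2.09e-11 * math.sqrt(Z * (Z + 1)) * math.sqrt(n_eff * rho_a / T)

Z = 71           # lutetium
n_eff = 2.2      # effective screening electrons, value quoted in the text
rho_a = 3.4e28   # atoms/m^3 for metallic Lu (from density ~9.8 g/cm^3; assumption)
T = 373.0        # K

Ue = screening_energy(Z, n_eff, rho_a, T)
print(Ue / 1e3)  # ~21 keV, in the ballpark of the 21 +/- 6 keV of Rolfs et al
```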

The experimental findings from the study of 52 metals support the expression for the screening factor across the periodic table.

  1. The linear dependence of U_e on Z and the T^(-1/2) dependence on temperature conform with the prediction. Also the predicted dependence on energy has been tested.

  2. The value of the effective number n_eff of screening electrons deduced from the experimental data is consistent with n_eff(Hall) deduced from the quantum Hall effect.

The model suggests that also the decay rates of nuclei, say beta and alpha decay rates, could be affected by electron screening. There is already preliminary evidence for a reduction of the β decay rate of 22Na in Pd (see this), a metal which is also utilized in cold fusion experiments. This might have quite far reaching technological implications. For instance, the artificial reduction of the half-lives of radioactive nuclei could allow an effective treatment of radioactive wastes. An interesting question is whether the screening effect could explain cold fusion and sono-fusion: I have proposed a different model for cold fusion based on large hbar here.

2. Could quantization of Planck constant explain why Debye model works?

The basic objection against the Debye model is that the thermodynamical treatment of electrons as classical particles below the atomic radius is in conflict with the basic assumptions of atomic physics. On the other hand, it is not trivial to invent models reproducing the predictions of the Debye model so that it makes sense to ask whether the quantization of Planck constant predicted by TGD could explain why Debye model works.

TGD predicts that Planck constant is quantized in integer multiples: hbar = n×hbar_0, where hbar_0 is the minimal value of Planck constant, identified tentatively as the ordinary Planck constant. The preferred values for the scaling factor n of hbar correspond to n-polygons constructible using ruler and compass. The values of n in question are given by

n_F = 2^k ∏_i F_(s_i) ,

where the Fermat primes F_s = 2^(2^s) + 1 appearing in the product are distinct. The lowest Fermat primes are 3, 5, 17, 257, 2^16 + 1. In the TGD based model of living matter the especially favored values of hbar come as powers 2^(11k).
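These ruler-and-compass integers n_F are easy to enumerate with a short script (my own illustration, using only the five known Fermat primes):

```python
from itertools import combinations

FERMAT_PRIMES = [3, 5, 17, 257, 65537]  # the known Fermat primes F_0..F_4

def ruler_compass_integers(limit):
    """All n = 2^k * (product of distinct Fermat primes) up to limit."""
    ns = set()
    for r in range(len(FERMAT_PRIMES) + 1):
        for combo in combinations(FERMAT_PRIMES, r):
            p = 1
            for f in combo:
                p *= f
            n = p  # empty product gives pure powers of two
            while n <= limit:
                ns.add(n)
                n *= 2
    return sorted(ns)

print(ruler_compass_integers(30))
# [1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30]
```

Note that 2^11 = 2048, the value favored in the model of living matter, is on the list as a pure power of two.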

It is not at all obvious that ordinary nuclear physics and atomic physics should correspond to the minimum value hbar_0 of Planck constant. The predictions for the favored values of n are not affected if one has hbar(stand) = 2^k hbar_0, k ≥ 0. The non-perturbative character of the strong force suggests that the Planck constant for nuclear physics is not actually the minimal one. As a matter of fact, the TGD based model for the nucleus implies that its "color magnetic body" has a size of order electron Compton length. Also valence quarks inside hadrons have been proposed to correspond to a non-minimal value of Planck constant, since color confinement is definitely a non-perturbative effect. Since the lowest order classical predictions for the scattering cross sections in the perturbative phase do not depend on the value of the Planck constant, testing this issue is not trivial in nuclear physics, where the perturbative approach does not really work.

Suppose that one has n = n_0 = 2^(k_0) > 1 for nuclei, so that their quantum sizes are of order electron Compton length or perhaps even larger. One could even consider the possibility that both nuclei and atomic electrons correspond to n = n_0, and that conduction electrons can make a transition to a state with n_1 < n_0. This transition could actually explain how the electron conductivity is reduced to a finite value. In this state electrons would have Compton length scaled down by the factor n_0/n_1.

For instance, suppose that one has n_0 = 2^(11k_0), as suggested by the model for quantum biology and by the TGD based explanation of the optical rotation of a laser beam in a magnetic field (hep-ex/0507107). The Compton length L_e = 2.4×10^-12 m of the electron would be reduced in the transition k_0 → k_0 - 1 to L_e → 2^-11 L_e ≈ 1.2 fm, which is rather near the proton Compton length since one has m_p/m_e ≈ 0.90×2^11. It is not too difficult to believe that electrons in this state could behave like classical particles with respect to their interaction with nuclei and atoms, so that the Debye model would work.
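The scaled Compton length is easy to verify with standard constants (a back-of-envelope check of my own):

```python
# Electron Compton length scaled down by 2^11 vs the proton Compton length.
h = 6.62607e-34    # Planck constant, J s
c = 2.99792458e8   # speed of light, m/s
m_e = 9.10938e-31  # electron mass, kg
m_p = 1.67262e-27  # proton mass, kg

L_e = h / (m_e * c)       # ~2.43e-12 m
L_e_scaled = L_e / 2**11  # after the k_0 -> k_0 - 1 transition
L_p = h / (m_p * c)       # ~1.32e-15 m
print(L_e_scaled, L_p)    # both of order a femtometer
```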

The basic objection against this model is that anyonic atoms should allow more states than ordinary atoms, since every space-time sheet can carry up to n electrons with identical quantum numbers in the conventional sense. This should have been seen.

3. Electron screening and Trojan horse mechanism

An alternative mechanism is based on the Trojan horse mechanism suggested as a basic mechanism of cold fusion. The idea is that the projectile nucleus enters the region of the target nucleus along a larger space-time sheet and in this manner avoids the Coulomb wall. The nuclear reaction itself occurs conventionally. In conductors the space-time sheet of the conduction electrons is a natural candidate for the larger space-time sheet.

At the conduction electron space-time sheet there is a constant charge density consisting of n_eff electrons in the atomic volume V = 1/n_a. This creates a harmonic oscillator potential in which the incoming proton accelerates towards the origin. The interaction energy at radius r is given by

V(r) = α n_eff r^2/(2a^3) ,

where a is atomic radius.

The proton ends up at this space-time sheet via a thermal kick compensating the harmonic oscillator energy. This occurs with high probability below the radius R for which the thermal energy E = T/2 of the electron corresponds to the energy in the harmonic oscillator potential. This gives the condition

R = (Ta/(n_eff α))^(1/2) × a .

This condition is exactly of the same form as the condition given by Debye model for electron screening but has a completely different physical interpretation.
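The algebra behind this condition can be verified numerically: substituting R back into the oscillator potential must return the thermal energy T/2 (toy numbers in natural units; this only checks the algebra, not the physics):

```python
import math

# Toy values (assumptions): alpha = fine structure constant, the rest arbitrary.
alpha, n_eff, T, a = 1 / 137.0, 2.2, 0.037, 1.0

R = math.sqrt(T * a / (n_eff * alpha)) * a  # claimed capture radius
V = alpha * n_eff * R**2 / (2 * a**3)       # oscillator energy at R
assert abs(V - T / 2) < 1e-12               # matches the thermal energy exactly
```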

Since the proton need not travel through the nuclear Coulomb potential, it effectively gains the energy

E_e = Zα/R = (Zα^(3/2)/a) × (n_eff/(Ta))^(1/2) ,

which would otherwise be lost in the repulsive nuclear Coulomb potential. Note that the contribution of the thermal energy to E_e is neglected. The dependence on the parameters involved is exactly the same as in the Debye model. For T = 373 K in the 176Lu experiment, n_eff(Lu) = 2.2±1.2, and a = a_0 = 0.52 Angstrom (the Bohr radius of hydrogen as an estimate for the atomic radius), one has E_e = 28.0 keV, to be compared with U_e = 21±6 keV of Rolfs et al (a = 1 Angstrom corresponds to 1.24×10^4 eV and 1 K to 10^-4 eV). A slightly larger atomic radius allows one to achieve consistency. The value of hbar does not play any role in this model, since the considerations are purely classical.

An interesting question is what the model says about the decay rates of nuclei in conductors. For instance, if the proton from the decaying nucleus can enter directly onto the space-time sheet of the conduction electrons, the Coulomb wall corresponds to the Coulomb interaction energy of the proton with the conduction electrons at atomic radius and is equal to α n_eff/a, so that the decay rate should be enhanced.

The chapter Does TGD Predict the Spectrum of Planck Constants? of "Towards S-Matrix" contains this piece of text too. See also the chapter TGD and Nuclear Physics of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy"

Monday, August 07, 2006

Some problems of TGD as almost-topological QFT and their resolution

There are some problems involved with the precise definition of quantum TGD as an almost-topological QFT at the partonic level. The resolution of these problems leads to new insights about the relationship between gravitational and inertial mass and to a more precise definition of quantum classical correspondence.

1. Three problems

The proposed view about partonic dynamics is plagued by three problems.

  1. The definition of supercanonical and super-Kac-Moody charges in M4 degrees of freedom poses a problem. These charges are simply vanishing since M4 coordinates do not appear in field equations.

  2. Classical field equations for the C-S action imply that this action vanishes identically, which would suggest that the dynamics does not depend at all on the value of k. The central extension parameter k determines the overall scaling of the eigenvalues of the modified Dirac operator. The eigenvalues scale as 1/k, so that the Dirac determinant scales by a finite power k^N if the number N of the allowed eigenvalues is finite for the algebraic extension considered. A constant N log(k) is added to the Kähler function, and its effect seems to disappear completely in the normalization of states.

  3. The general picture about Jones inclusions and the possibility of separate Planck constants in M4 and CP2 degrees of freedom suggests a close symmetry between M4 and CP2 degrees of freedom at the partonic level. Also in the construction of the geometry for the world of classical worlds the symplectic and Kähler structures of both the light-cone boundary and CP2 play a key role. This symmetry should somehow be coded by the Chern-Simons action.

The first thing to come to mind is the extension of the Kähler form defined at the light-cone boundary to the entire M4 as a self-dual or anti-self-dual instanton. This option looks at first unattractive for several reasons. The Maxwell field in question is not covariantly constant and does not define a generalization of the CP2 Kähler form. A charged Dirac monopole would be in question. Although its action would vanish, the energy density given by the sum of the electrostatic and magnetostatic energies would break the Lorentz invariance of the Robertson-Walker cosmologies unless one extends the configuration space by adding to it light cones with all Lorentz transforms of the instanton field. Something more elegant is perhaps needed. I will discuss this option in a separate posting and show that it does not exclude the option discussed here and could form part of the full story.
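The bookkeeping in problem 2 above, a uniform 1/k scaling of a finite eigenvalue spectrum adding only a constant to the Kähler function, which then cancels in normalization, can be illustrated with a toy computation (eigenvalues and Kähler function values are pure inventions for illustration):

```python
import math

# Toy eigenvalue spectrum of the modified Dirac operator (assumption).
eigs = [1.3, 2.7, 0.8, 4.1]
k = 5.0
N = len(eigs)

# Scaling every eigenvalue by 1/k shifts log(det) by the constant -N*log(k).
shift = sum(math.log(e / k) for e in eigs) - sum(math.log(e) for e in eigs)
assert abs(shift + N * math.log(k)) < 1e-12

# An additive constant in the Kahler function cancels in normalized probabilities.
def probs(K):
    w = [math.exp(x) for x in K]
    s = sum(w)
    return [x / s for x in w]

K = [0.3, 1.1, 0.7]  # toy Kahler function values for three states (assumption)
p1 = probs(K)
p2 = probs([x + N * math.log(k) for x in K])
assert all(abs(x - y) < 1e-12 for x, y in zip(p1, p2))
```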

2. A possible resolution of the problems

A possible cure for the problems described above is based on modifying the Kähler gauge potential by adding to it the gradient of a scalar function Φ with respect to M4 coordinates.

  1. This implies that super-canonical and super Kac-Moody charges in M4 degrees of freedom are non-vanishing.

  2. Chern-Simons action is non-vanishing if the induced CP2 Kähler form is non-vanishing. If the imaginary exponent of C-S action multiplies the vacuum functional, the presence of the central extension parameter k is reflected in the properties of the physical states.

  3. The function Φ could code for the value of k(M4) via a proportionality constant

    Φ = (k(M4)/k(CP2)) × Φ_0 .

    Here k(CP2) is the central extension parameter multiplying the Chern-Simons action for the CP2 Kähler gauge potential. This trick does just what is needed, since it multiplies the Noether currents and supercurrents associated with M4 degrees of freedom by k(M4) instead of k(CP2).

The obvious breaking of U(1) gauge invariance looks strange at first, but it conforms with the fact that in the TGD framework the canonical transformations of CP2 acting as U(1) gauge symmetries give rise not to gauge degeneracy but to spin glass degeneracy, since they act as symmetries of only the vacuum extremals of Kähler action.

3. How to achieve Lorentz invariance?

Lorentz invariance fixes the form of function Φ uniquely as the following argument demonstrates.

  1. Poincare invariance would in any case be broken for a given light-cone in the decomposition of the configuration space into sub-configuration spaces associated with light-cones at various locations of M4. Since the functions Φ associated with the various light cones would be related by a translation, translation invariance would not be lost.

  2. The selection of Φ should not break Lorentz invariance. This is achieved if Φ depends only on the Lorentz proper time a. Momentum currents would be proportional to m^k and become light-like at the boundary of the light-cone. This fits very nicely with the interpretation that matter emanates from the tip of the light cone in Robertson-Walker cosmology.

Lorentz invariance poses even stronger conditions on Φ.

  1. Partonic four-momentum defined as Chern-Simons Noether charge is definitely not conserved and must be identified as gravitational four-momentum whose time average corresponds to the conserved inertial four-momentum assignable to the Kähler action (see this and this). This identification is very elegant since also gravitational four-momentum is well-defined although not conserved.

  2. Lorentz invariance implies that mass squared is a constant of motion. It is interesting to see what expression for Φ results if the gravitational mass, defined as the Noether charge for the C-S action, is conserved. The components of the four-momentum for the Chern-Simons action are given by

    P^k = (∂L_(C-S)/∂(∂_α a)) m^(kl) ∂a/∂m^l .

    Chern-Simons action is proportional to A_α = A_a ∂_α a, so that one has

    P^k ∝ ∂_a Φ × ∂a/∂m_k = ∂_a Φ × m^k/a .

    The conservation of gravitational mass would give Φ ∝ a. Since the CP2 projection must be 2-dimensional, the M4 projection is 1-dimensional, so that mass squared is indeed conserved.

    Thus one can write

    Φ = (k(M4)/k(CP2)) × x × a/R ,

    where R is the radius of a geodesic sphere of CP2 and x a numerical constant which could be fixed by the quantum criticality of the theory. The Chern-Simons action density does not depend on a for this choice.

  3. A rather strong prediction is that only homologically charged partonic 3-surfaces can carry gravitational four-momentum. For CP2 type extremals, ends of cosmic strings, and wormhole contacts the non-vanishing of homological charge looks natural. In fact, for wormhole contacts 3-D CP2 projection suggests itself and is possible only if one allows also quantum fluctuations around light-like extremals of Chern-Simons action. The interpretation could be that for a vanishing homological charge boundary conditions force X4 to approach vacuum extremal at partonic 3-surfaces.

4. Comment about quantum classical correspondence

The proposed general picture allows one to define the notion of quantum classical correspondence more precisely. The identification of the time average of the gravitational four-momentum for the C-S action as the conserved inertial four-momentum associated with the Kähler action at a given space-time sheet of finite temporal duration (recall that we work in the zero energy ontology) is the most natural definition of the quantum classical correspondence, and it generalizes to all charges.

In this framework the identification of gravitational four-momentum currents as those associated with 4-D curvature scalar for the induced metric of X4 could be seen as a phenomenological manner to approximate partonic gravitational four-momentum currents using macroscopic currents, and the challenge is to demonstrate rigorously that this description emerges from quantum TGD.

For instance, one could require that at a given moment of time the net gravitational four-momentum of Int(X4), defined by the combination of the Einstein tensor and metric tensor, equals that associated with the partonic 3-surfaces. This identification, if possible at all, would certainly fix the values of the gravitational and cosmological constants, and it would not be surprising if the cosmological constant turned out to be non-vanishing.

The chapter Construction of Quantum Theory: Symmetries of "Towards S-Matrix" contains this piece of text too.

Sunday, August 06, 2006

Quantum model for Priore's machine

A theoretician sometimes encounters inventions which work but seem to defy all attempts to understand them. Even more, it seems a complete mystery how the inventor ended up with his device, unless one accepts the idea that the inventor was working under the guidance of some higher level conscious entities. Priore's machine, demonstrated to heal cancer, certainly belongs to this category. Although the biological effects of Priore's device are described in great detail, the construction of the machine, which is very complicated, is described only sketchily (see the U.S. Office of Naval Research report on the Priore machine, 16 August 1978, here). This makes it difficult to see what is essential and what is not.

Over the years I have used Priore's machine as a kind of testing tool for the TGD based quantum model of living matter. The frustrating experience has been that I have not been able to understand the real function of Priore's machine, although I have identified many of the most essential pieces of the puzzle. The most recent view about the quantization of Planck constant and about the hierarchy of dark magnetic bodies using a hierarchy of EEGs to control the biological body and to receive information from it motivated me to take a new look at Priore's machine.

And then the flash came! I realized that the microwave radiation with frequency 9.4 GHz corresponds to the frequency of Josephson radiation at the k=1 level of the dark matter hierarchy defined by the hierarchy hbar = 2^(11k) hbar_0, which forms the basis for the TGD based model for the hierarchy of scaled variants of EEG. The corresponding energy turned out to be exactly the Josephson energy of 80 meV associated with the cell membrane resting potential! Thus the function of the microwave radiation could be the regeneration of the Josephson radiation essential for the generation of the scaled up variant of EEG.
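The numerical coincidence is easy to check (my own back-of-envelope computation): the energy of a 9.4 GHz photon scaled by 2^11 lands very close to 80 meV.

```python
# Photon energy E = hbar(dark)/hbar_0 * h * f = 2^11 * h * f, in eV.
h_eV = 4.135667e-15  # Planck constant in eV*s
f = 9.4e9            # microwave frequency used in Priore's machine, Hz

E_dark = 2**11 * h_eV * f
print(E_dark * 1e3)  # ~79.6 meV, close to the 80 meV membrane Josephson energy
```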

The basic function of Priore's machine is to artificially regenerate a level in the control hierarchy involving scaled variants of EEG, which is not present in the cancer tissue so that quantum bio-control fails. Priore's machine would do a lot of things.

  1. Priore's machine would create the magnetic body corresponding to this level; it would also create the rotating plasmoids responsible for the realization of quantum control and carrying the magnetic fields with strengths 612 Gauss and 1224 Gauss used by Priore's machine; and it would regenerate the scaled up variant of the cell membrane with size scaled up by the factor n = 2^11 characterizing the corresponding Planck constant hbar = 2^(11k) hbar_0, k=1.

  2. Priore's machine would establish a scaled up variant of EEG, including the needed cyclotron radiation and Josephson radiation, stimulate plasma wave patterns by microwave radiation at the plasma frequency, and perhaps also generate temporal codewords representing control commands of the immune system by modulating the microwave radiation.

  3. Priore's machine would also provide the metabolic energy needed to achieve all this. In particular, the highly energetic electrons and X rays with energies up to 300 keV would be needed to give the electrons their high cyclotron energies, measured in tens of keV. Electrons could also be kicked to space-time sheets with sub-atomic size scale, and their dropping back would provide metabolic energy quanta up to 258 keV for the k=131 space-time sheet.

The constraint that the generalized EEG is a precisely scaled up version of the ordinary EEG fixes the model for the function of Priore's machine essentially completely.

The chapter Homeopathy in Many-Sheeted Space-Time of "Universe as a Conscious Hologram" contains the model for Priore's machine.

Friday, August 04, 2006

Has dark matter been observed?

The group of G. Cantatore has reported an optical rotation of a laser beam in a magnetic field (hep-ex/0507107). The experimental arrangement involves a magnetic field of strength B = 5 Tesla. The laser beam travels 22000 times forth and back in a direction orthogonal to the magnetic field, travelling 1 m during each pass through the magnet. The wavelength of the laser light is 1064 nm. A rotation of (3.9±0.5)×10^-12 rad/pass is observed.

A possible interpretation of the rotation would be that the component of the photon with polarization parallel to the magnetic field mixes with the QCD axion, one of the many candidates for dark matter. The mass of the axion would be about 1 meV. Mixing would imply a reduction of the corresponding polarization component and thus, in the generic case, induce a rotation of the polarization direction. Note that the laser beam could partially transform to axions, travel through a non-transparent wall, and appear again as ordinary photons.

The disturbing finding is that the rate for the rotation is by a factor 2.8×10^4 higher than predicted. This would have catastrophic astrophysical implications, since stars would rapidly lose their energy via axion radiation.

The motivation for introducing the axion was the large CP breaking predicted by standard QCD. No experimental evidence has been found for this breaking. The idea is to introduce a new broken U(1) gauge symmetry which is arranged to cancel the CP violating terms predicted by QCD. Because axions interact very weakly with ordinary matter, they have also been identified as candidates for dark matter particles.

In the TGD framework there is no special reason to expect large CP violation analogous to that in QCD, although one cannot completely exclude it. Axions are however definitely excluded. TGD predicts a hierarchy of scaled up variants of QCD and of the entire standard model, plus their dark variants corresponding to some preferred p-adic length scales. These scaled up variants play a key role in the TGD based view about the nuclear strong force (see this and this), in the explanation of the anomalous production of e+e- pairs in heavy nucleus collisions near the Coulomb wall (this), in high Tc superconductivity (see this, this, and this), and also in the TGD based model of EEG (see this). Therefore a natural question is whether the particle in question could be a pion of some scaled down variant of QCD having a similar coupling to the electromagnetic field. Also dark variants of this pion could be considered.

What raises optimism is that the Compton length of the scaled down quarks is of the same order as the cyclotron wavelength of the electron in the magnetic field in question. For the ordinary value of Planck constant this option however predicts quite too high a mixing rate. This suggests that dark matter has indeed been observed, in the sense that the pion corresponds to a large value of Planck constant. The encouraging observation here is that the ratio λ_c/λ of the wavelengths of the cyclotron photon and the laser photon is n = 2^11, which corresponds to the lowest level of the biological dark matter hierarchy with levels characterized by the values hbar = 2^(11k) hbar_0, k = 1,2,...
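The claimed ratio can be checked with standard constants (my own estimate; the agreement with 2^11 is at the level of a couple of percent):

```python
import math

# Electron cyclotron wavelength in B = 5 T vs the 1064 nm laser wavelength.
e = 1.602176e-19      # elementary charge, C
m_e = 9.109383e-31    # electron mass, kg
c = 2.99792458e8      # speed of light, m/s
B = 5.0               # magnetic field, Tesla
lam_laser = 1.064e-6  # laser wavelength, m

f_c = e * B / (2 * math.pi * m_e)  # cyclotron frequency, ~140 GHz
lam_c = c / f_c                    # cyclotron wavelength, ~2.1 mm
ratio = lam_c / lam_laser
print(ratio, 2**11)                # ~2013 vs 2048
```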

The most plausible model is the following.

  1. Suppose that the photon transforms first to a dark cyclotron photon associated with the electron at the lowest n = 2^11 level of the biological dark matter hierarchy. Suppose that the coupling of the laser photon to the dark photon can be modelled as a coefficient of the usual amplitude, apart from a numerical factor of order one, equal to α_em(n) ∝ 1/n.

  2. Suppose that the coupling g_πNN for the scaled down hadrons is proportional to α_s^4(n) ∝ 1/n^4, as suggested by a simple model for what happens to the nucleon and the pion at the quark level in the emission of a pion.

Under these assumptions one can understand why only an exotic pion with a mass of about 1 meV couples to laser photons with wavelength λ = 1 μm in a magnetic field B = 5 Tesla. The general prediction is that λ_c/λ must correspond to the preferred values of n characterizing Fermat polygons constructible using only ruler and compass, and that the rate for the rotation of the polarization depends on photon frequency and magnetic field strength in a manner not explained by the model based on photon-axion mixing.

The chapter Does TGD Predict the Spectrum of Planck Constants? of "Towards S-Matrix" contains the detailed calculations.