https://matpitka.blogspot.com/2009/10/

Monday, October 19, 2009

New evidence for macroscopic quantum coherence in living matter

The idea that living systems might be quantum systems emerged around 1980 at the Esalen conference. David Finkelstein - the chief editor of International Journal of Theoretical Physics, in which I was able to publish my works at that time - was the primus motor. Around 1995 an intense period of discussions in email groups began. The Hameroff-Penrose model was one of the models discussed. The books of Penrose had a great impact on the gradual transformation of quantum consciousness into a respectable scientific topic (not everywhere: there are some distant corners of the globe such as my home country where quantum consciousness is still regarded as pseudoscience). At that time I began serious and almost full-time work on the TGD inspired theory of consciousness and quantum biology. The wisdom gained in this process in turn led to progress in the mathematical formulation of quantum TGD proper, made possible by a radically new vision about fundamentals.

The attitude towards the quantum vision about living systems depends on the basic prejudices of the scientist. The average hard-wired guy willing to appear as an authority relies on textbook wisdom and of course immediately declares that quantum effects cannot be significant in the length and time scales involved and that there is absolutely no evidence for them. We should not however trust textbook wisdom and - as I have learned - even less the average physicist;-)! After all, living systems look very quantal and we experience directly what could be called free will. We should rely on what we directly experience and on our ability to think rationally rather than on authorities, and be ready to question also the existing view about quantum physics.

What could biology and neuroscience give to quantum physics? This should be the question. If standard quantum physics does not allow the needed macroscopic quantum phases, we must modify quantum physics. Even quantum consciousness theorists have usually adopted the view that wave mechanics is enough for understanding living matter. Penrose has been an exception since he proposes that quantum gravity could be important. Perhaps it is not a mere coincidence that the persons who most passionately believe that the old theory is enough also have the most limited skills as theorists.

Over the years I have learned that there is a lot of indirect experimental evidence for the quantum view (the strange findings about the functioning of the cell membrane, the effects of ELF em fields on the vertebrate brain,...), and have used these bits of experimental data to develop the TGD based view about quantum physics. This involves the identification of dark energy and dark matter in terms of macroscopic quantum phases with a non-standard large value of Planck constant, the new view about space-time and about the relationship between experienced time and the time of physicists, the new view about quantum states based on zero energy ontology, etc. Also p-adic physics is essential in the proposed view about correlates of cognition and intention. Of course, all this is very speculative, and my frustrating realization has been that a good theory necessarily comes long before the experiments directly testing it.

Over the years, experimentation testing the presence of quantum effects in living matter has begun. And the positive evidence is accumulating. Discover magazine has an article titled Is Quantum Mechanics Controlling Your Thoughts? telling among other things about the latest direct evidence for quantum effects provided by experiments related to photosynthesis and odor perception.

Quantum coherence and photosynthesis

The article summarizes in popular terms the contents of the paper Evidence for wavelike energy transfer through quantum coherence in photosynthetic systems by Fleming and collaborators, reporting evidence for quantum coherence in photosynthesis. The absorption of a photon induces an electron current from the point of capture - the chlorosome - to the reaction centers. The semiclassical theory predicts the dissipation of the electronic energy to be about 20 per cent whereas the observed dissipation is only about 5 per cent. This suggests quantum coherence. The following abstract of the original article summarizes the essentials.

Photosynthetic complexes are exquisitely tuned to capture solar light efficiently, and then transmit the excitation energy to reaction centres, where long term energy storage is initiated. The energy transfer mechanism is often described by semiclassical models that invoke 'hopping' of excited-state populations along discrete energy levels. Two-dimensional Fourier transform electronic spectroscopy has mapped these energy levels and their coupling in the Fenna–Matthews–Olson (FMO) bacteriochlorophyll complex, which is found in green sulphur bacteria and acts as an energy 'wire' connecting a large peripheral light-harvesting antenna, the chlorosome, to the reaction centre. The spectroscopic data clearly document the dependence of the dominant energy transport pathways on the spatial properties of the excited-state wavefunctions of the whole bacteriochlorophyll complex. But the intricate dynamics of quantum coherence, which has no classical analogue, was largely neglected in the analyses—even though electronic energy transfer involving oscillatory populations of donors and acceptors was first discussed more than 70 years ago, and electronic quantum beats arising from quantum coherence in photosynthetic complexes have been predicted and indirectly observed. Here we extend previous two-dimensional electronic spectroscopy investigations of the FMO bacteriochlorophyll complex, and obtain direct evidence for remarkably long-lived electronic quantum coherence playing an important part in energy transfer processes within this system. The quantum coherence manifests itself in characteristic, directly observable quantum beating signals among the excitons within the Chlorobium tepidum FMO complex at 77 K. This wavelike characteristic of the energy transfer within the photosynthetic complex can explain its extreme efficiency, in that it allows the complexes to sample vast areas of phase space to find the most efficient path.

The popular article translates this into the following piece of text.

To unearth the bacteria’s inner workings, the researchers zapped the connective proteins with multiple ultra-fast laser pulses. Over a span of femtoseconds, they followed the light energy through the scaffolding to the cellular reaction centers where energy conversion takes place. Then came the revelation: Instead of haphazardly moving from one connective channel to the next, as might be seen in classical physics, energy traveled in several directions at the same time. The researchers theorized that only when the energy had reached the end of the series of connections could an efficient pathway retroactively be found. At that point, the quantum process collapsed, and the electrons’ energy followed that single, most effective path.

My own interpretation would be the following.

  1. Remarkably long-lived electronic quantum coherence is claimed to be present. The authors propose that a quantum computation like process - a quantum random walk - could be in question. If I have understood correctly, the proposed process can halt only by a state function reduction localizing the electron at the reaction center. Otherwise a completely standard Schrödinger evolution in the network would be in question. The good news is that the average time to find the way from the entrance to the exit in this kind of process is exponentially shorter than in the classical random walk. One can say that the exit plus all other points are always reached after some minimum time, and it is enough to perform the state function reduction localizing the electron at the exit (a toy comparison of the two walks follows this list).

  2. Somewhat confusingly, the popularizers claim that the authors argue (I do not have access to the original article) that the quantum random walk selects the shortest path from the chlorosome to the reaction center. Quantum collapse is a non-deterministic process, and if it selects the path in this particular case, it can select any path with some probability, not always the shortest one. The selection of the shortest path is not necessarily needed, since the quantum random walk with fixed entrance and exit is by its inherent nature exponentially faster than its classical counterpart. The proposed interpretation makes sense only if the state function reduction takes place immediately after the electron's state function at the exit becomes non-vanishing. Does it? I cannot say.
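To make the comparison concrete, here is a minimal toy sketch - my own illustration, not the model of the paper - of a continuous-time quantum walk versus a classical random walk on a chain of N sites. On a simple chain the quantum speedup is ballistic rather than exponential (the exponential separation requires special graphs), but the qualitative point survives. The site count and unit couplings are arbitrary choices.

```python
# Toy sketch: continuous-time quantum walk vs classical random walk
# on a chain, probability of reaching the exit from the entrance.
import numpy as np
from scipy.linalg import expm

N = 8                                # number of sites (illustrative choice)
A = np.zeros((N, N))                 # adjacency matrix of a simple chain
for i in range(N - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

# Quantum walk: amplitude evolves unitarily, psi(t) = exp(-i A t) psi(0).
psi0 = np.zeros(N, dtype=complex)
psi0[0] = 1.0

# Classical walk: probability obeys dp/dt = -L p with the graph Laplacian L.
L = np.diag(A.sum(axis=1)) - A
p0 = np.zeros(N)
p0[0] = 1.0

for t in (1.0, 2.0, 4.0):
    p_quantum = abs((expm(-1j * A * t) @ psi0)[-1]) ** 2   # prob. at exit
    p_classical = (expm(-L * t) @ p0)[-1]                  # prob. at exit
    print(f"t={t}: quantum exit prob={p_quantum:.3f}, "
          f"classical={p_classical:.3f}")
```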

If one accepts this view, the sole problem is to understand how macroscopic quantum coherence is possible in the length scales considered. There are good arguments supporting the view that this is not possible in ordinary quantum mechanics. In the TGD framework the hierarchy of Planck constants suggests that both the macroscopic quantum coherence and the very low dissipation rate are due to a large value of hbar for electrons. For instance, for hbar = 5×hbar0 the naive estimate is that the dissipation rate should be reduced by a factor 1/5 and coherence times and lengths should increase by a factor 5. I have proposed much larger values of hbar in the model of living systems. In particular, the model for high temperature super-conductivity assigns to these systems basic biological length scales from the p-adic length scale hypothesis (the 5 nm thickness of the lipid layer of the cell membrane corresponds to L(149), the 10 nm thickness of the cell membrane to L(151), and the length scale 2.5 μm of the cell nucleus to L(167)). The electron Compton length is scaled up by a factor 2^11 so that it corresponds to the p-adic length scale L(149) = 5 nm. This would scale up the fundamental bio-time scale of .1 seconds, predicted by TGD to be the time scale assignable to the causal diamond of the electron, by a factor 2^22 to about 4×10^5 seconds.
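As a sanity check on these numbers, a small sketch assuming the p-adic length scale convention L(k) = 2^((k-151)/2)·L(151) with L(151) = 10 nm, and the quadratic scaling of the causal diamond time scale with hbar used in the argument above:

```python
# Sanity check on the p-adic length scales and the hbar scaling above.
# Assumption: L(k) = 2^((k-151)/2) * L(151), L(151) = 10 nm.
L151_nm = 10.0

def L(k):
    return L151_nm * 2 ** ((k - 151) / 2)

print(f"L(149) = {L(149):.1f} nm     # lipid layer thickness, 5 nm")
print(f"L(151) = {L(151):.1f} nm    # cell membrane thickness, 10 nm")
print(f"L(167) = {L(167)/1000:.2f} um  # cell nucleus scale, ~2.5 um")

# hbar -> 2^11 * hbar0 scales the Compton length by 2^11 and, with the
# quadratic dependence of the causal diamond time scale assumed above,
# the 0.1 s bio-time scale by 2^22.
T0 = 0.1   # seconds
print(f"scaled time = {T0 * 2**22:.2e} s  # about 4e5 seconds")
```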

For TGD based ideas about photosynthesis see this.

Odor perception and quantum coherence

The article also discusses the work of the biophysicist Luca Turin on odor perception as additional support for the quantum brain. Before going to the article it is good to summarize the basic ideas about sensory qualia (colors, odors, ...) in the TGD inspired theory of consciousness.

  1. In the TGD framework the identification of qualia follows from the identification of the quantum jump as a moment of consciousness. Just as quantum numbers characterize the physical state, the increments of quantum numbers characterize the quantum jump between two states. This leads to a capacitor model of the sensory receptor in which sensory perception corresponds to a generalized dielectric breakdown: various particles carrying quantum numbers flow between the electrodes, and the change of the quantum numbers at the second electrode gives rise to the sensory quale in question.

  2. It is important that sensory qualia are assigned to the sensory receptors rather than to the neural circuitry of the brain as in standard neuroscience. This leads to objections (the phantom leg, for instance) which are circumvented in the TGD based vision about the 4-D brain. For instance, the phantom leg would correspond to a sensory memory resulting from sharing the mental image about pain residing in the geometric past when the leg still existed. A massive back-projection generating virtual sensory input from the brain (or from the magnetic body via the brain) is needed to build the actual percept as a kind of artwork, by filtering out a lot of inessential stuff from the actual sensory input and amplifying the essential features.
  3. The discovery of Callahan that the odor perception of insects seems to be based on IR light inspired my own proposal that photons at IR frequencies could be involved in odor perception, so that odor perception would be seeing by IR light at the molecular level. Even hearing could involve similar "seeing" in the appropriate frequency range. Massless extremals (topological light rays) would serve as a kind of wave guide parallel to axons along which light would propagate as a kind of laser beam between receptor and brain. This would also explain why the mediation of auditory input happens so rapidly.

  4. I have also proposed frequency coding for the sensory qualia. The first proposal, which I dubbed "Spectroscopy of Consciousness", stated that cyclotron frequencies assignable to various biologically important ions - much below the IR range - as such correspond to sensory qualia (the sketch below gives an idea of the frequencies involved). Later I gave up this idea and proposed that the frequencies provide only symbolic representations - define their names, as one might say. The information about qualia and more general sensory data would be represented in terms of cyclotron frequencies inducing dynamical patterns of the cyclotron Bose-Einstein condensates of biologically important ions residing at the magnetic body receiving the sensory information.
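For orientation, a minimal sketch of the cyclotron frequencies f = qB/(2πm) involved, assuming the endogenous magnetic field value B_end = 0.2 gauss that TGD assigns to the magnetic body (the field value and the ion list are the assumptions here; the frequencies land in the ELF range, far below IR):

```python
# Cyclotron frequencies f = qB/(2*pi*m) for some biologically important
# ions in an assumed endogenous field of 0.2 gauss.
import math

e = 1.602176634e-19        # elementary charge, C
u = 1.66053906660e-27      # atomic mass unit, kg
B = 0.2e-4                 # 0.2 gauss in tesla (assumed TGD value)

ions = {"Ca2+": (40.08, 2), "Mg2+": (24.31, 2),
        "K+": (39.10, 1), "Na+": (22.99, 1)}

for name, (mass_u, charge) in ions.items():
    f = charge * e * B / (2 * math.pi * mass_u * u)
    print(f"{name:5s} cyclotron frequency ~ {f:5.1f} Hz")
```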

I attach a small piece of the article here to give a popular summary of the work of Luca Turin.

Quantum physics may explain the mysterious biological process of smell, too, says biophysicist Luca Turin, who first published his controversial hypothesis in 1996 while teaching at University College London. Then, as now, the prevailing notion was that the sensation of different smells is triggered when molecules called odorants fit into receptors in our nostrils like three-dimensional puzzle pieces snapping into place. The glitch here, for Turin, was that molecules with similar shapes do not necessarily smell anything like one another. Pinanethiol [C10H18S] has a strong grapefruit odor, for instance, while its near-twin pinanol [C10H18O] smells of pine needles. Smell must be triggered, he concluded, by some criteria other than an odorant’s shape alone.

What is really happening, Turin posited, is that the approximately 350 types of human smell receptors perform an act of quantum tunneling when a new odorant enters the nostril and reaches the olfactory nerve. After the odorant attaches to one of the nerve’s receptors, electrons from that receptor tunnel through the odorant, jiggling it back and forth. In this view, the odorant’s unique pattern of vibration is what makes a rose smell rosy and a wet dog smell wet-doggy.

The article A spectroscopic mechanism for primary olfactory perception by Turin explains in detail his theory and various experimental tests. Here are the core ideas in more quantitative terms.

  1. The theory originates from the proposal of Dyson (not that Dyson;-)!), made already in 1938, that odor perception might rely on the vibrational spectrum of the odorant rather than its shape alone. The spectrum would be in the wavelength range 2.5-10 μm corresponding to photon energies in the range .5 eV - .125 eV. This vibrational spectrum would be excited by the current of electrons tunneling from the receptor to the odorant molecule.

  2. The proposal is that the odor receptor can be regarded as a pair formed by a source and a sink of electrons. If there is nothing between source and sink, tunneling can occur if there is an electronic energy state with the same energy in both source and sink. If there is an odorant molecule between source and sink with vibrational energy E, tunneling can occur indirectly: the electron can excite a vibrational state with this energy, and tunneling can occur only if the difference of the electron energies in source and sink is E. Therefore the presence of the odor molecule would be detected from the occurrence of tunneling, and the vibrational energy spectrum would characterize the odor molecule.
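A small sketch of the numbers and the resonance logic (only the wavelength-to-energy conversion is standard physics; the tolerance and the two vibrational modes are invented for illustration):

```python
# Photon energies of the 2.5-10 um vibrational band, plus a toy
# resonance check for the inelastic tunneling condition above.
HC_EV_NM = 1239.84          # h*c in eV*nm

def photon_energy_eV(wavelength_nm):
    return HC_EV_NM / wavelength_nm

print(f"2.5 um -> {photon_energy_eV(2500):.3f} eV")   # ~0.5 eV
print(f"10  um -> {photon_energy_eV(10000):.3f} eV")  # ~0.125 eV

def tunneling_allowed(delta_E, vibrational_modes, tol=0.01):
    """Tunneling occurs when the source-sink electron energy
    difference delta_E matches some vibrational quantum of the odorant."""
    return any(abs(delta_E - E) < tol for E in vibrational_modes)

# Hypothetical odorant with two IR-active modes (illustrative values):
modes_eV = [0.20, 0.38]
print(tunneling_allowed(0.20, modes_eV))   # True: resonant channel
print(tunneling_allowed(0.30, modes_eV))   # False: no matching mode
```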

One can compare the model of Turin with TGD based ideas.

  1. The theory of Turin conforms at the general level with the receptor model. The "electrodes" of the sensory capacitor would correspond to the source and sink of electrons and the presence of the odorant molecule between the "electrodes" would induce the current. The current of electrons from the source to the sink should induce the change of total quantum numbers defining the odor quale.

  2. The first thing to notice is that the upper bound .5 eV for the IR energies corresponds to the nominal value of the metabolic energy quantum identified as the energy liberated as a proton drops from the atomic space-time sheet with k=137 to a very large space-time sheet, or in the same process for electron Cooper pairs at the k=149 space-time sheet. If Cooper pairs are involved, the latter process would occur in the length scale defined by the thickness of the lipid layer of the cell membrane (5 nm). The lower bound corresponds to a metabolic energy quantum assignable to k=139 for protons and to k=151 for electrons (the thickness of the cell membrane).

  3. The second point to notice is that TGD predicts a fractal hierarchy of spectra of metabolic energy quanta coming as E(Δk,n) = 2^(-Δk)·E0·(1-2^(-n)), n=1,2,..., converging to E(Δk,∞) = 2^(-Δk)·E0 for a given p-adic length scale characterized by the difference Δk = k-k0. E0 denotes the zero point kinetic energy of the particle at the space-time sheet with p-adic length scale k=k0 and is inversely proportional to the mass of the particle. The transfer of electrons and/or protons between different space-time sheets would accompany any perception for purely metabolic reasons. The simplest option is that, since the electrons at the side of the source receive their energy in this manner, their energy spectrum is given by E(Δk,n) (there is of course some resolution meaning a cutoff in n). The specificity of the receptor would require a preference for some specific metabolic energy quanta E(Δk,n). If this spectrum characterizes the receptor independently of its chemistry, then not only the metabolic energy quanta but also the mechanism of sensory perception is universal. This proposal fails if the receptor always has the same spectrum of E(Δk,n), since then all receptors would detect all odors. (A sketch of the spectrum follows.)
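A small sketch of this fractal spectrum, taking E0 = .5 eV as the nominal k=137 proton quantum mentioned above (the displayed Δk values are illustrative):

```python
# Fractal spectrum E(dk, n) = 2^(-dk) * E0 * (1 - 2^(-n)) of metabolic
# energy quanta, with E0 = 0.5 eV (nominal k=137 proton quantum).
E0 = 0.5   # eV

def E(dk, n):
    if n == float("inf"):
        return 2 ** (-dk) * E0
    return 2 ** (-dk) * E0 * (1 - 2 ** (-n))

for dk in (0, 2):
    line = ", ".join(f"{E(dk, n):.4f}" for n in (1, 2, 3, 4))
    print(f"dk={dk}: {line}, ... -> {E(dk, float('inf')):.4f} eV")
# dk=2 converges to 0.125 eV, the lower bound quoted above.
```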

It is interesting to relate the theory of Turin with the hypothesis of Callahan that the odor perception of insects uses IR light.

  1. Callahan's work (Callahan, P. S. (1977). Moth and Candle: the Candle Flame as a Sexual Mimic of the Coded Infrared Wavelengths from a Moth Sex Scent. Applied Optics. 16(12) 3089-3097) suggests that the IR photons emitted by the odorant in the transitions between vibrational states and received by the odor receptor are basically responsible for the odor perception. Turin in turn proposes that the pattern of vibrational excitations in the odor molecule characterizes the perception. These views are consistent if the pattern of vibrational excitations is in 1-1 correspondence with the flow pattern of electrons between different space-time sheets at the receptors, so that a kind of self-organization pattern results: this is expected to take place in the presence of a metabolic energy feed.

  2. In Callahan's model for the odor perception of insects the simplest odor receptor would "see" the IR light emitted by the odor molecules. Also Turin explains - with different assumptions - that the situation is analogous to that prevailing in the retina in that there are receptors sensitive to characteristic energy ranges of photons. One would expect the odor perception of insects to be something very simple. The so-called vomeronasal organ is known to be responsible for the perception of socially important odors not generating conscious experience at our level of the self hierarchy but having an important effect on behavior (the perfume industry realized this long ago!). The vomeronasal organ could utilize this kind of primitive odor receptor.

  3. The rate for the spontaneous transitions emitting IR light could be rather low. A more advanced receptor would induce more transitions by using tunneling electrons to excite vibrational energy levels in the odorant. This would be like using a lamp to see better! The analogy with the transistor is also suggestive: the small base current induced by the IR radiation generated by the odor molecule would be amplified in the process. Since the source contains electrons in excited states (at smaller space-time sheets), odor molecules could send negative energy photons dropping electrons to the large space-time sheet along which tunneling is possible. Induced emission would cause a domino-like flow of electrons and excitations of the vibrational states of the odor molecule as the counterpart of dielectric breakdown.

  4. What could then be the physical correlates of the primary odor qualia? The increments of some quantum numbers assignable to the electrons at the source should be in question. Could the energies E(k,n) characterizing the receptor define the primary odors? Odors and tastes are indeed very intimately related to metabolic activities;-). A natural consequence would be that the radiation generated by the transfer of electrons between space-time sheets would induce an odor and perhaps also a taste sensation. Organisms serve as food for other organisms, so that an optimal detection of nutrients would be the outcome.

Could one assume that also other receptors use metabolic energy quanta as basic excitation energies?

  1. The first objection is that similar "metabolic qualia" would result in all receptors. This is not a problem if these qualia are not conscious to us but are conscious to neuronal selves. For instance, in the TGD based model for visual colors the increments of color quantum numbers (in the QCD sense!) define the basic colors, which means that colored particles must be in question (the TGD variant of quark color implies the existence of scaled variants of QCD like physics and predicts that also electrons have colored excitations, for which there is indeed growing experimental evidence).

  2. The second objection is that it does not seem possible to identify E(k,n) as excitation energies in the case of vision. The relevant range of photon energies is [1.65, 3.3] eV. By scaling the metabolic energy quantum by a power of 2, the nominal values of the relevant maximal metabolic energy quanta E(k,n=∞) are 2 eV and 4 eV. The series of energies approaching 2 eV from below is 1, 1.5, 1.75, ..., 2 eV, so that the range below 2 eV representing red light would be covered. Above 2 eV the series is 2, 3, 3.5, ..., 4 eV, so that the region above 2 eV (orange, yellow, green, blue, indigo, violet) would contain only a single line at 3 eV (violet). If the incoming photon can kick the electron to an excited state with energy E0 at the smaller space-time sheet, the spectrum contains also the energies E(k,n)+E0. For E0 = 1.3 eV these excitation energies would come as 2.3, 2.8, 3.05, ..., 3.3 eV and cover this range. (The small computation below checks the coverage.)
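In the sketch, E0 = 1.3 eV as above, and the cutoff n ≤ 6 is an arbitrary resolution choice:

```python
# Which lines of the metabolic spectra land in the visible range
# [1.65, 3.3] eV, with and without the assumed 1.3 eV excitation shift.
def lines(E_inf, n_max=6):
    return [E_inf * (1 - 2 ** (-n)) for n in range(1, n_max + 1)]

visible = lambda E: 1.65 <= E <= 3.3

for E_inf in (2.0, 4.0):
    in_range = [round(E, 2) for E in lines(E_inf) if visible(E)]
    print(f"E(inf)={E_inf} eV: visible lines {in_range}")

shifted = [round(E + 1.3, 2) for E in lines(2.0) if visible(E + 1.3)]
print(f"with E0=1.3 eV excitation: {shifted}")   # 2.3, 2.8, 3.05, ..., 3.3
```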

For TGD based view about qualia see the chapter Quantum Model for Qualia of the book "Bio-systems as Conscious Holograms".

Thursday, October 15, 2009

Malevolent backwards causation as source of problems at LHC and other non-conventional ideas

The recent paper by Holger Nielsen and Masao Ninomiya - discussing the quite unconventional idea that signals from the future making the detection of Higgs impossible are responsible for the difficulties of LHC and for why the construction of the SSC (Superconducting Super Collider) was stopped by Congress - has received a lot of attention. After Dennis Overbye wrote about it in the New York Times, bloggers have expressed their views one after another. Sean Carroll wrote a quite balanced and humorous comment trying to convince the reader that not everything in theoretical physics is lost although this paper has appeared in the arXiv. Lubos - the militant of theoretical physics - wrote about the subject in his characteristic highly emotional tone (negative as usual). Also Kea has written about the topic - even twice - and I got an opportunity to tell my remembrances about discussions with Holger, one of the most friendly persons in the known Universe and also one of the very few intelligent life-forms who have shown keen and genuine interest in TGD.

Very few have taken the paper as a joke allowing one to concretize in a humorous manner the delicate, difficult, and yet unresolved questions related to the notion of time. People with strong beliefs firmly based on textbook wisdom about physics as it was in their youth are aggressively attacking the ideas that the joke was meant to concretize. Typical blog behavior, of course.

There are three unconventional ideas involved which tend to be seen as sources of all the evil.

  1. The action defining a quantum field theory could have an imaginary part suppressing some histories (in this case those allowing a successful production of Higgs in the laboratory).

  2. The action could possess space-time non-locality, unlike the actions of quantum field theories usually do.

  3. The idea of backward causation meaning that signals can propagate backward in time: here one should however specify what exactly one means by time and causation.

Since these unconventional ideas relate very closely to the basic distinctions between quantum TGD and the standard approach, I will try to demonstrate that they are not a threat to civilization.

Should we tolerate imaginary part and space-time non-locality of action?

The idea about an imaginary part of the action suppressing some histories need not be crackpottish if properly formulated. There is also a good motivation for something like this: the basic difficulty of both quantum field theories and string models is that the path integral is not well-defined mathematically.

  1. One could try to overcome the problem by adding an imaginary part to the action, so that the phase factor is replaced with a complex exponent and some histories are indeed suppressed: one obtains a well-defined integral concentrated around the minimum of the imaginary part of the action (usually the extremal with a stationary phase defines the perturbation theory). The loss of unitarity is the obvious objection. (A toy illustration follows this list.)
  2. Unfortunately this is not enough. The space-time locality of quantum field theory implies infinities in the n-point functions of the theory, so there is a demand also for non-locality. The problem with non-locality is how to realize it in a non-ad-hoc manner.
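A toy illustration with a single ordinary integral standing in for the path integral (both S and its imaginary part are arbitrary choices): the pure phase integral exists only as a delicate oscillatory limit, whereas the damped one converges absolutely.

```python
# One-dimensional stand-in for the path integral: exp(iS) vs
# exp(iS - S_im), with S_im suppressing large "histories".
import numpy as np

x = np.linspace(-40, 40, 400001)
dx = x[1] - x[0]

S = x ** 2 / 2        # real action: pure oscillating phase exp(iS)
S_im = x ** 4 / 100   # imaginary part of action: suppresses large |x|

# Truncation-sensitive oscillatory sum vs absolutely convergent one.
pure_phase = np.sum(np.exp(1j * S)) * dx
damped = np.sum(np.exp(1j * S - S_im)) * dx

# Exact Fresnel value of the infinite pure-phase integral for reference.
exact = np.sqrt(2 * np.pi) * np.exp(1j * np.pi / 4)
print(f"pure phase integral ~ {pure_phase:.4f} (exact {exact:.4f})")
print(f"damped integral     ~ {damped:.4f}")
```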

It seems that a solution of one problem generates new problems. These new problems are avoided in quantum TGD.

  1. Light-like 3-surfaces (or equivalently space-like 3-surfaces) are taken as the fundamental objects, and the fundamental variational principle assigns to them a unique 4-D space-time surface. This is nothing but quantum holography. Don't be afraid. This is a good thing;-).

  2. The path integral is replaced with a functional integral over 3-surfaces with the exponent of Kähler action for a preferred extremal (space-time surface) defining the analog of a Gaussian. The infinite-dimensional integral over 3-surfaces is well defined since exponential suppression occurs, and local divergences are absent since the counterpart of the action depends in a non-local manner on the 3-surface. This represents a 20-year-old layer of TGD.

  3. The loss of unitarity is not a catastrophe in zero energy ontology, where the S-matrix is replaced with the M-matrix defined as a "complex square root" of the density matrix having the S-matrix as a "multiplicative phase factor", so that quantum theory becomes a "complex square root" of thermodynamics. Quantum field theory at non-zero temperature is a respected branch of theoretical physics, and its TGD counterpart emerges at the level of the fundamental formulation. This layer of TGD is about half a decade old.

Should we tolerate backward causation?

I see nothing crackpottish even in the notion of backward causation. What is crackpottish - or probably just a joke - is to propose that this would explain why Higgs has not been discovered yet. As far as plausibility is concerned, this proposal brings to my mind the brane constructions meant to reproduce standard model symmetries (certainly not intended to be jokes)!

  1. In zero energy ontology physical states are replaced with zero energy states formed by entangled pairs of positive and negative energy states at the opposite light-like boundaries of causal diamonds (CDs) defined as intersections of future and past directed light-cones. Zero energy ontology allows positive energy signals propagating to the geometric future as well as negative energy signals propagating to the geometric past. Negative energy signals justify the notion of backward causation, and this forms the cornerstone of TGD inspired quantum biology and consciousness theory. It also resolves fundamental philosophical problems of theoretical physics posed by some innocent looking questions (What are the total conserved quantum numbers of the Universe and why do they have the values they have?).

  2. When the time scale of observations is larger than the size of CD involved with the phenomenon studied, standard thermodynamics applies. If not, the signals propagating in both time directions are significant somewhat like in standard Feynman diagrammatics. The recent formulation of quantum TGD indeed supports the view that antimatter is in negative energy states near the opposite light-like boundary of CDs. This would conform completely with Feynman's view and explain the generation of matter antimatter asymmetry.

  3. The hierarchy of Planck constants - motivated by the mysteries of dark matter and dark energy plus intriguing observations suggesting quantum effects in both biology and astrophysics - leads to a generalization of the 8-D imbedding space to a book like structure with pages partially characterized by the values of Planck constant. This hierarchy makes quantum coherence possible in arbitrarily long scales, so that there always exist sheets of the many-sheeted space-time at which the second law cannot be applied at all or applies in both directions of geometric time. Biology would represent a basic example of this kind of situation.

  4. Quantum biology is one of the basic applications of quantum TGD, and the basic mechanisms of intentional action, metabolism, and memory rely on backward causation. One must of course make a clear distinction between geometric time and subjective time (identified as a sequence of quantum jumps) in order to avoid paradoxes. The precise articulation of this distinction in the TGD framework has turned out to be an extremely useful exercise and could also be seen as one of the motivations for the TGD inspired theory of consciousness, besides the challenge of making the observer a genuine part of the physical system by introducing the notion of self.

  5. Most importantly, backward causation has experimental support. Libet's paradoxical finding that neural activity precedes the conscious decision finds in this framework a nice explanation without giving up free will. Phase conjugate laser beams provide direct experimental evidence at the level of physics: for instance, they obey the second law in the reversed direction of geometric time, and this even has technological applications.

Since a generally accepted conceptual framework is lacking, theoretical physicists follow Wittgenstein's advice and prefer to be silent about the fascinating phenomena related to backward causation. And about many other things too: it seems that present-day theoretical physics is filled with taboos;-)).

Multiverse as space of quaternionic sub-algebras of local octonionic Clifford algebra?

Multiverses as quantum superpositions of geometric objects are unavoidable in any theory of quantum gravitation starting from a geometric description of gravitation.

The notion of multiverse in the M-theory context is however extremely poorly defined. Should one introduce probability amplitudes in all possible 11-D space-times and try to geometrize this space and show that the spaces of the form Calabi-Yau×circle×M4 appear as the preferred ones? Should one also introduce probability amplitudes for all possible configurations of all possible branes inside a particular 11-D manifold? Should one introduce at the classical level a decomposition of the 11-D space-time into regions in good approximation of the desired form?

To me this is a hopeless mess both mathematically and physically. Like thermodynamics before Boltzmann, whose work his colleagues stubbornly refused to recognize, with tragic consequences (it seems that the situation is equally difficult with the "complex square root" of thermodynamics;-)).

My own modest proposal is the following. Let us start by asking whether the higher-D space-time could be selected uniquely, say by starting from the idea that associativity fixes physics completely.

  1. 8-D space-times with Minkowski signature allow an octonionic representation of gamma matrices as products of octonions and Pauli's sigma matrices. Consider the local Clifford algebra in M8, which is the simplest possible choice.

  2. Ask what are the local associative sub-algebras of this algebra (one could and must also consider co-associative sub-algebras). Associativity corresponds to a restriction of the local Clifford algebra elements to a 4-D (hyper-)quaternionic surface. Quaternionicity means that one can assign a quaternionic plane, not necessarily a tangent plane, to each of its points by some rule. If the 4-D quaternionic planes form an integrable distribution in some sense, we obtain a 4-D space-time surface.

  3. Do these quaternionic local Clifford sub-algebras allow commutative local sub-algebras? They do. This leads to a slicing of a given hyper-quaternionic space-time surface by 2-D stringy surfaces (they are commutative), with the slices parametrized by what I call partonic 2-surfaces (Euclidian string world sheets). In finite measurement resolution, implying discretization, one gets a collection of strings. Could M8 allow slicings by quaternionic local Clifford sub-algebras with slicings parametrized by co-quaternionic sub-algebras? This proposal is not a new one but appears naturally in this context.

  4. These properties imply M8-M4×CP2 duality, that is, a mapping of these surfaces in M8 to M4×CP2 giving standard model symmetries and TGD in its basic form.

  5. The meaning of (hyper-)quaternionicity depends on the criterion assigning to a given point of the space-time surface a quaternionic plane. The classical variational principle provides this criterion. For volume as action (a non-physical choice) one obtains the standard induced gamma matrices spanning the tangent space. For Kähler action one obtains the modified gammas, and the quaternionic sub-algebra does not span the tangent space. This option is the physical one, and besides producing standard model gauge field dynamics it provides the richest structure (quantum criticality, the inclusion hierarchy of super-conformal algebras corresponding to that for HFFs of type II_1, etc.).

The world of classical worlds (WCW) is the multiverse of TGD and can be identified as the space of these quaternionic sub-algebras of the octonionic local Clifford algebra, and entire quantum TGD follows from mere algebra. Quantum states are spinor fields in WCW formed by quaternionic local Clifford sub-algebras. No landscape is obtained in this multiverse. Standard model symmetries are always the fundamental symmetries, having a purely number theoretical meaning. This picture is mathematically precisely defined, with well-developed connections with existing physics. Mathematicians could immediately start to apply their methodology and intuition to develop TGD as a purely mathematical discipline.
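The algebraic core of the proposal - octonions are non-associative, while their quaternionic sub-algebras are associative - is easy to check numerically. A minimal sketch using the Cayley-Dickson construction (the product convention is one of several equivalent ones):

```python
# Octonions as pairs of quaternions via Cayley-Dickson; the associator
# (ab)c - a(bc) vanishes inside the quaternionic sub-algebra but not
# for units reaching outside it.
import numpy as np

def q_mul(a, b):
    """Quaternion product, components (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def q_conj(a):
    return np.array([a[0], -a[1], -a[2], -a[3]])

def o_mul(a, b):
    """Cayley-Dickson product (p,q)(r,s) = (pr - s*q, sp + q r*)."""
    p, q = a[:4], a[4:]
    r, s = b[:4], b[4:]
    return np.concatenate([q_mul(p, r) - q_mul(q_conj(s), q),
                           q_mul(s, p) + q_mul(q, q_conj(r))])

def unit(i):
    e = np.zeros(8)
    e[i] = 1.0
    return e

def associator(a, b, c):
    return o_mul(o_mul(a, b), c) - o_mul(a, o_mul(b, c))

print(associator(unit(1), unit(2), unit(3)))   # quaternionic units: zero
print(associator(unit(1), unit(2), unit(4)))   # outside: non-zero
```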

But first something should be done. Maybe the Nobel committee should follow the strategy it used when it gave the peace prize to Kissinger: Nobels to the leading string gurus! The string wars would cease, the landscape nightmare - the Vietnam of physics - would soon be forgotten, and theoreticians would be eagerly studying physics again;-).

What shook up Saturn's rings in 1984?

The solar system provides a continual supply of surprises. Now New Scientist reports that something shook up Saturn's rings in 1984. No convincing explanation has been found hitherto.

Something warped the inner D ring and also the outer C ring into a ridged spiral-like pattern like the grooves in a vinyl record. The amplitudes of the grooves are about 1 km for the D ring, with a width of about 8,000 km, and about 100 m for the C ring, with a width of about 17,000 km. Recall that Saturn's rings span an annulus with a width of order 60,000 km and with a distance from the planet of the same order of magnitude. Their thickness is only about 20 m, so that the warping of a very thin sheet of paper is an excellent analogy. Warping in a precise mathematical sense means bending of a plane without tearing it (so that the Riemann geometry of the sheet remains flat) and occurs almost spontaneously, as experimentation with a sheet of paper shows. Locally the process would look like an ideal warping of the plane along parallel lines, but in long length scales - thanks to the gravitational pull of Saturn - these lines could become curved and form spirals.

The guess of Matthew Hedman of Cornell University was that some perturbation - perhaps a comet or an asteroid - could have caused this warping by tilting the rings with respect to the plane of Saturn's equator, so that the gravitation of Saturn (Saturn is not a perfect sphere) would have caused tidal forces putting the rings into a wobbling motion and created the spiral grooving pattern. By running the equations of motion backwards in time, Hedman and colleagues showed that the event should have occurred around 1984. The pattern is however so widespread that the explanation in terms of a comet or asteroid must be given up.

The TGD inspired model for the sheets would be as condensations of visible matter around dark matter forming similar structures. Could it be that a quantum counterpart of an earthquake, but at the level of dark matter rings with a large Planck constant and therefore in large length scales, took place? Could this explain why the event was missed by telescopes and spacecraft?

Monday, October 12, 2009

A new cosmological finding challenging General Relativity

I learned this morning about highly interesting new results challenging General Relativity based cosmology. Sean Carroll and Lubos Motl commented on the article A weak lensing detection of a deviation from General Relativity on cosmic scales by Rachel Bean. The article Cosmological Perturbation Theory in the Synchronous and Conformal Newtonian Gauges by Chung-Pei Ma and Edmund Bertschinger allows one to understand the mathematics related to the cosmological perturbation theory necessary for a deeper understanding of the article of Bean.

The message of the article is that under reasonable assumptions General Relativity leads to a wrong prediction for the cosmic density perturbations in the scenario involving cold dark matter and a cosmological constant to explain the accelerated expansion. The following represents my first impressions after reading the article of Rachel Bean and the paper about cosmological perturbation theory.

1. Assumptions

"Reasonable" means at least following assumptions about the perturbation of the metric and of energy momentum tensor.

  1. The perturbations to the Robertson-Walker metric contain only two local scalings parameterized as dτ² → (1+2Ψ)dτ² and dx_i dx^i → (1-2Φ)dx_i dx^i. Vector perturbations and tensor perturbations (gravitational radiation classically) are neglected.

  2. The traceless part (in 3-D sense) of the perturbation of energy momentum tensor vanishes. Geometrically this means that the perturbation does not contain a term for which the contribution to 3-curvature would vanish. In hydrodynamical picture the vanishing of this term would mean that the mass current for the perturbation contains only a term representing incompressible flow. During the period when matter and radiation were coupled this assumption makes sense. The non-vanishing of this term would mean the presence of a flow component - say radiation of some kind- which couples only very weakly to the background matter. Neutrinos would represent one particular example of this kind of contribution.

  3. The model of cosmology used is the so-called ΛCDM (cosmological constant plus cold dark matter).

These assumptions boil down to a simple equation

η= Φ/Ψ=1.

2. The results

The prediction can be tested and Rachel Bean indeed did it.

  1. Ψ makes itself visible in the motion of massive objects such as galaxies since they couple to the Newtonian potential. This motion in turn makes itself visible as detected deviations of the microwave background from the ideal. The so-called Integrated Sachs-Wolfe effect is due to the redshift of microwave photons between the last scattering surface and Earth, caused by the gravitational fields of massive objects. Ordinary matter does not contribute to this effect but dark energy does.

  2. Φ makes itself visible in the motion of light. The so-called weak lensing effect distorts the images of distant objects: the apparent size is larger than the real one, and there is also a distortion of the shape of the object.

From these two data sources Rachel Bean deduces that η differs significantly from the GRT value and concentrates around η=1/3 meaning that the scaling of the time component of the metric perturbation is roughly 3 times larger than for spatial scaling.

3. What could be the interpretation of the discrepancy?

What could η=1/3 mean physically and mathematically?

  1. From Cosmological Perturbation Theory in the Synchronous and Conformal Newtonian Gauges one learns that for neutrinos causing shear stress one has Φ = (1+2R_ν/5)Ψ, where R_ν is the mass fraction of neutrinos: hence η should increase rather than decrease! If this formula generalizes, a negative mass fraction R_ν = -5/3 would be present! Something goes badly wrong if one tries to interpret the result in terms of the perturbations of the density of matter - irrespective of whether it is visible or dark!

  2. What about the perturbations of the density of dark energy? Geometrically η=1/3 would mean that the trace of the metric perturbation defined in terms of the background metric vanishes. This means conservation of the metric determinant under the deformations, so that small four-volumes are not affected. As a consequence, the interaction term T^αβ δg_αβ receives a contribution from G^αβ but not from the cosmological term Λg^αβ. This would suggest that the perturbation is not that of matter but of the vacuum energy density, for which one would have

    Λg^αβ δg_αβ = 0.

The result would not challenge General Relativity (if one accepts the notion of dark energy) but only the assumption about the character of the density perturbation. Instead of matter it would be the density of dark energy which is perturbed.
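A quick symbolic verification of this geometric statement - my own check, using the perturbation ansatz from the assumptions above: requiring the first-order change of the metric determinant to vanish gives exactly η = 1/3, and inserting η = 1/3 into the neutrino shear formula reproduces the absurd R_ν = -5/3.

```python
# Symbolic check: vanishing first-order change of the metric
# determinant, g0^(ab) * delta g_(ab) = 0, forces eta = Phi/Psi = 1/3.
import sympy as sp

Psi, Phi, a = sp.symbols('Psi Phi a', positive=True)

g0 = sp.diag(1, -a**2, -a**2, -a**2)          # background RW metric
g = sp.diag(1 + 2*Psi, -(1 - 2*Phi)*a**2,
            -(1 - 2*Phi)*a**2, -(1 - 2*Phi)*a**2)
dg = g - g0                                   # metric perturbation

trace = sp.simplify((g0.inv() * dg).trace())
print(trace)                                  # -> 2*Psi - 6*Phi
print(sp.solve(sp.Eq(trace, 0), Psi))         # -> [3*Phi], i.e. eta = 1/3

# Neutrino shear formula eta = 1 + 2*R_nu/5 evaluated at eta = 1/3:
Rnu = sp.symbols('R_nu')
print(sp.solve(sp.Eq(sp.Rational(1, 3), 1 + 2*Rnu/5), Rnu))  # -> [-5/3]
```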

4. TGD point of view

What could TGD say about this?

  1. In the TGD framework one has the many-sheeted space-time, the dark matter hierarchy represented by the book like structure of the generalized imbedding space, and dark energy is replaced with dark matter at the pages of the book with a gigantic Planck constant, so that the Compton lengths of ordinary particles are gigantic and the density of matter is constant in long scales, so that one can speak about a cosmological constant in the General Relativity framework. The periods with vanishing 3-curvature are replaced by phase transitions changing the value of Planck constant at some space-time sheets and inducing a lengthening of quantum scales: the cosmology during this kind of period is fixed apart from the parameter telling the maximal duration of the period. Also the early inflationary period would correspond to this kind of phase transition. Obviously, many new elements are involved, so that it is difficult to say anything quantitative.

  2. Quantum criticality means the existence of deformations of the space-time surface for which the second variation of Kähler action vanishes. The first guess would be that cosmic perturbations correspond to this kind of deformations. In principle this would allow a quantitative modeling in the TGD framework. Robertson-Walker metrics correspond to vacuum extremals of Kähler action with an infinite spectrum of this kind of deformations (this is expected to hold true quite generally, although the deformations disappear as one deforms the vacuum extremal more and more).

  3. Why should the four-volumes defined by the Robertson-Walker metric remain invariant under these perturbations, as η=1/3 would suggest? Are the critical perturbations of the energy momentum tensor indeed those for the dominating part of dark matter with gigantic values of Planck constant, having an effective representation in terms of a cosmological constant in GRT, so that the above mentioned equations implying the conservation of four-volume result as a consequence?

  4. The most natural interpretation for the space-time sheets mediating gravitation is as magnetic flux tubes connecting gravitationally interacting objects, and thus string like objects of astrophysical size. For this kind of object the effectively 2-dimensional energy momentum tensor is proportional to the induced metric. Could this mean - as I proposed many years ago, when I still took seriously the notion of the cosmological constant as something fundamental in the TGD framework - that in the GRT description, based on replacing the string like objects with an energy momentum tensor, the resulting energy momentum tensor is proportional to the induced metric? String tension would explain the negative pressure preventing the identification of dark energy in terms of ordinary particles.

For a background see the chapters TGD and Cosmology and Cosmic Strings of the book "Physics in Many-Sheeted Space-time".

Does TGD allow the counterpart of space-time super-symmetry?

The question whether TGD allows space-time super-symmetry or something akin to it has been a longstanding problem. Considerable progress in this respect became possible with the better understanding of the modified Dirac equation. At the same time I learned from Tommaso Dorigo's blog about an almost 15-year-old striking eeγγ + missing transverse energy event detected by the CDF collaboration, for which an explanation in terms of super-symmetry has been proposed.

The p-adic length scale hypothesis, assuming that the mass formulas for particles and sparticles are the same but the p-adic length scale is possibly different, combined with kinematical constraints, fixes the masses of the TGD counterparts of the selectron, higgsino, and Z^0 gaugino to be 131 GeV (just at the upper bound allowed kinematically), 45.6 GeV, and 91.2 GeV (the Z^0 mass) respectively. The masses are consistent with the bounds predicted by the MSSM inspired model.

Instead of typing 6 pages of text in html format I just give a link to the pdf file Does TGD allow the counterpart of space-time supersymmetry?

For a background see the chapter p-Adic Mass Calculations: New Physics of the book "p-Adic Length Scale Hypothesis And Dark Matter Hierarchy".

Friday, October 09, 2009

Octonionic approach to the modified Dirac equation

The recent progress in the understanding of the modified Dirac equation defining quantum TGD at the fundamental level (see this and this) stimulated further progress. I managed to find a general ansatz solving the modified Dirac equation under very general assumptions about the preferred extremal of Kähler action.

A key role in the ansatz is played by the assumption that the modified Dirac equation can be formulated using an octonionic representation of the imbedding space gamma matrices. Associativity requires that the space-time surface is associative in the sense that the modified gamma matrices, expressible in terms of octonionic gamma matrices of H, span a quaternionic sub-algebra at each point of the space-time surface. Also the octonionic spinors at a given point of the space-time surface must be associative: that is, they span the same quaternionic subspace of octonions as the gamma matrices do. Besides this, the 4-D modified Dirac operator defined by Kähler action and the 3-D Dirac operator defined by Chern-Simons action and the corresponding measurement interaction term must commute: this condition must hold true in any case. The point is that the associativity conditions fix the solution ansatz highly uniquely, since the action of the various operators in the Dirac equation is not allowed to lead out from the quaternionic sub-space, and the resulting ansatz makes sense also for ordinary gamma matrices.

It must be emphasized that octonionization is far from a trivial process. The mapping of the sigma matrices of the imbedding space to their octonionic counterparts means a projection of the vielbein group SO(7,1) to G2 acting as the automorphism group of octonions, and only the right handed parts of the electroweak gauge potentials survive, so that only the neutral Abelian part of the classical electroweak gauge field defined in terms of CP2 remains. Moreover, the electroweak holonomy group is mapped to the rotation group, so that electroweak interactions transform to gravitational interactions in the octonionic context! If octonionic and ordinary representations of gamma matrices are physically equivalent, this represents a kind of number theoretical variant of the possibility to represent gauge interactions as gravitational interactions. This effective reduction to electrodynamics is absolutely essential for the associativity and simplifies the situation enormously. The conjecture is that the resulting solutions as such define also solutions of the modified Dirac equation for ordinary gamma matrices.

The additional outcome is a nice formulation for the notion of octo-twistor using the fact that octonion units define a natural analog of Pauli spin matrices having an interpretation as quaternions. The associativity condition reduces the octo-twistors locally to quaternionic twistors, which are more or less equivalent to the ordinary twistors, and their construction recipe might work almost as such. It must be however emphasized that this notion of twistor is local, unlike the standard notion of twistor, since projections of the momentum and the color charge vector to the space-time surface are considered. The two spinors defining the octo-twistor correspond to quark and lepton like spinors having different chirality as 8-D spinors.

The basic motivation for octo-twistors is that they might allow one to overcome the problems caused by massivation in the case of ordinary twistors. One might think that 4-D massive particles correspond to 8-D massless particles. A more refined idea emerges from the modified Dirac equation. The space-time vector field obtained by contracting the space-time projections of the four-momentum and the vector defined by the Cartan color charges might be light-like with respect to the effective metric defined by the anticommutators of the modified gamma matrices. Whether this additional condition is consistent with the field equations for the preferred extremals of Kähler action remains to be seen. Note that the geometry of the space-time sheet depends on momentum and color quantum numbers in accordance with quantum classical correspondence: this is what makes possible the entanglement of classical and quantum degrees of freedom essential for quantum measurement theory.

Since there is not much point in typing the detailed equations, I just give a link to a ten page pdf file Octo-twistors and modified Dirac equation representing the calculations. For details and background see the chapter Twistors, N=4 Super-Conformal Symmetry, and Quantum TGD of the book "Towards M-matrix".

How TGD emerges from number theory?

An interesting possibility is that quantum TGD could emerge from the condition that a local version of a hyper-finite factor of type II1, represented as a local version of an infinite-dimensional Clifford algebra, exists. The conditions are that the "center of mass" degrees of freedom characterizing the position of the causal diamond (CD), defined as an intersection of future and past directed light-cones, separate uniquely from the "vibrational" degrees of freedom represented in terms of octonions, and that for physical states associativity holds true. The resulting local Clifford algebra would be identifiable as the local Clifford algebra of the world of classical worlds (being an analog of local gauge groups and conformal fields).

The uniqueness of M8 and M4×CP2 as well as the role of hyper-quaternionic space-time surfaces as fundamental dynamical objects indeed follow from rather weak conditions if one restricts the consideration to gamma matrices and spinors instead of assuming that M8 coordinates are hyper-octonionic as was done in the first attempts.

  1. The unique feature of M8 and any 8-dimensional space with Minkowski signature of metric is that it is possible to have an octonionic representation of the complexified gamma matrices and of spinors. This does not require octonionic coordinates for M8. The restriction to a quaternionic plane for both gamma matrices and spinors guarantees the associativity.

  2. One can also consider a local variant of the octonionic Clifford algebra in M8. This algebra contains associative subalgebras for which one can assign to each point of M8 a hyper-quaternionic plane. It is natural to assume that this plane is either the tangent plane of a 4-D manifold, defined naturally by the induced gamma matrices defining a basis of the tangent space, or, more generally, defined by the modified gamma matrices coming from a variational principle (these gamma matrices do not define the tangent space in general). Kähler action defines a unique candidate for the variational principle in question. The associativity condition would automatically select the sub-algebras associated with 4-D hyper-quaternionic space-time surfaces.

  3. This vision bears a very concrete connection to quantum TGD. In the octonionic formulation the modified Dirac equation is studied and shown to lead to a highly unique general solution ansatz for the equation, working also for the matrix representation of the Clifford algebra. An open question is whether the resulting solution as such defines also solutions of the modified Dirac equation for the matrix representation of the gammas. Also a possible identification for the 8-dimensional counterparts of twistors as octo-twistors follows: associativity implies that these twistors are very closely related to the ordinary twistors. In the TGD framework octo-twistors provide an attractive manner to get rid of the difficulties posed by massive particles for the ordinary twistor formalism.

  4. Associativity implies hyper-quaternionic space-time surfaces (in a more general sense than usual), and this leads naturally to the notion of WCW and the local Clifford algebra in this space. Number theoretic arguments imply M8-H duality. The resulting infinite-dimensional Clifford algebra would differ from von Neumann algebras in that the Clifford algebra and spinors assignable to the center of mass degrees of freedom of the causal diamond CD would be expressed in terms of octonionic units, although they are associative at space-time surfaces. One can therefore say that quantum TGD follows by assuming that the tangent space of the imbedding space corresponds to a classical number field with maximal dimension.

The importance of this result is that the Universe of Quantum TGD is mathematically completely unique: both classical and quantum dynamics follow from associativity alone.

For details and background see the chapter TGD as a Generalized Number Theory II: Quaternions, Octonions, and their Hyper Counterparts of the book "TGD as a Generalized Number Theory".

Tuesday, October 06, 2009

Zero energy ontology and quantum version of Robertson-Walker cosmology

Zero energy ontology has meant a real quantum leap in the understanding of the exact structure of the world of classical worlds (WCW). There are however still open questions and interpretational problems. The following comments are about a quantal interpretation of Robertson-Walker cosmology provided by zero energy ontology.

  1. The light-like 3-surfaces - or equivalently the corresponding space-time sheets - inside a particular causal diamond (CD) are the basic structural units of the world of classical worlds (WCW). CD (or strictly speaking CD×CP2) is characterized by the positions of the tips of the intersection of the future and past directed light-cones defining it. The Lorentz invariant temporal distance a between the tips allows one to characterize the CDs related by Lorentz boosts, and SO(3) acts as the isotropy group of a given CD. CDs with a given value of a are parameterized by a Lobachevski space - call it L(a) - identifiable as the a² = constant hyperboloid of the future light-cone and having an interpretation as a constant time slice in TGD inspired cosmology.

  2. The moduli space for CDs characterized by a given value of a is M4×L(a). If one poses no restrictions on the values of a, the union of all CDs corresponds to M4×M4+, where M4+ corresponds to the interior of the future light-cone. An F-theorist might get excited about the dimension 12 of M4×M4+×CP2: this is of course just a numerical coincidence.

  3. The p-adic length scale hypothesis follows if it is assumed that a comes as octaves of the CP2 time scale: a_n = 2^n T(CP2). For this option the moduli space would be a discrete union of the spaces M4×L(a_n). A weaker condition would be that a comes as prime multiples of T(CP2). In this case the preferred p-adic primes p ≈ 2^n correspond to a = a_n and would be natural winners in the fight for survival. If a continuum is allowed, the p-adic length scale hypothesis must be a result of dynamics alone. Algebraic physics favors quantization at the level of moduli spaces.

  4. Also unions of CDs are possible. The proposal has been that CDs form a fractal hierarchy in the sense that there are CDs within CDs but that CDs do not intersect. A more general option would allow also intersecting CDs.

Consider now the possible cosmological implications of this picture. In TGD framework Robertson-Walker cosmologies correspond to Lorentz invariant space-time surfaces in M4+ and the parameter a corresponds to cosmic time.

  1. First some questions. Could Robertson-Walker coordinates label CDs rather than points of the space-time surface at the deeper level? Does the parameter a labeling CDs really correspond to cosmic time? Do astrophysical objects correspond to sub-CDs?

  2. An affirmative answer to these questions is consistent with classical causality, since the observer, identified as - say - the upper boundary of a CD, receives classical positive/negative energy signals from sub-CDs arriving with a velocity not exceeding light-velocity. The M4×L(a) decomposition also provides a more precise articulation of the answer to the question how the non-conservation of energy in cosmological scales can be consistent with Poincare invariance. Note also that the empirically favored sub-critical Robertson-Walker cosmologies are unavoidable in this framework, whereas the understanding of sub-criticality is one of the fundamental open problems in General Relativity inspired cosmology.

  3. What objections against this interpretation can one imagine?

    1. Robertson-Walker cosmology reduces to the future light-cone only at the limit of vanishing density of gravitational mass. One could however argue that the scaling factor of the metric of L(a) need not be the a^2 corresponding to M4+ but can be a more general function of a. This would allow all Robertson-Walker cosmologies with sub-critical mass density. This argument makes sense also for the a = a_n option.

    2. Lorentz invariant space-time surfaces in CD provide an elegant and highly predictive model for cosmology. Should one give up this model in favor of the proposed model? This need not be the case. Quantum classical correspondence requires that also the quantum cosmology has a representation at space-time level.

  4. What is then the physical interpretation for the density of gravitational mass in Robertson-Walker cosmology in the new framework? A given CD, characterized by a point of M4×L(a), certainly has a finite gravitational mass identified as the mass assignable to the positive/negative energy state at either the upper or the lower light-like boundary of CD. In zero energy ontology this mass is actually an average over a superposition of pairs of positive and negative energy states with varying energies. Since quantum TGD can be seen as a square root of thermodynamics, the resulting mass has only a statistical meaning. One can assign to a CD a probability amplitude: a wave function in M4×L(a) depending also on various quantum numbers. The cosmological density of gravitational mass would correspond to the quantum average of the mass density determined by this amplitude. Hence the quantum view about cosmology would be statistical, as is also the view provided by standard cosmology.

  5. Could cosmological time really be quantized as a = a_n = 2^n T(CP2)? Note that other values of a are possible at the pages of the book like structure representing the generalized imbedding space, since a scales by the factor r = hbar/hbar0 at these pages. All rational multiples of a_n are possible for the most general option. The quantization of a does not lead to any obvious contradiction, since M4 time would correspond to the time measured in the laboratory, and there is no clock keeping count of the flow of a and telling whether it is really discrete or continuous. It might however be possible to deduce experimental tests for this prediction since it holds true in all scales. Even for elementary particles the time scale a is macroscopic: for electron it is .1 seconds, which defines the fundamental bio-rhythm (the first numerical sketch after this list makes the arithmetic explicit).

  6. The quantization of a encourages one also to consider a quantization of the space of Lorentz boosts characterized by L(a), obtained by restricting the boosts to a subgroup of the Lorentz group. A more concrete picture is obtained from the representation of SL(2,C) as Möbius transformations of the plane.

    1. The restriction to a discrete subgroup of the Lorentz group SL(2,C) is possible and would allow an extremely rich structure. The most general discrete subgroup would be a subgroup of SL(2,QC), where QC could be any algebraic extension of complex rational numbers. In particular, discrete subgroups of the rotation group and the powers L^n of a basic Lorentz boost L = exp(η) corresponding to a motion with fixed velocity v0 = tanh(η) define lattice like structures in L(a). This would effectively mean a cosmology in a 4-D lattice. Note that everything remains fully consistent with the basic symmetries.

    2. The alternative possibility is that all points of L(a) are possible but that the probability amplitude is invariant under some discrete subgroup of SL(2,QC). The first option could be seen as a special case of this.

    3. One can consider also the restriction to discrete subgroups of SL(2,R) known as Fuchsian groups. This would mean a spontaneous breaking of Lorentz symmetry since only boosts in one particular direction would be allowed. The modular group SL(2,Z) and its subgroups known as congruence subgroups define an especially interesting hierarchy of groups of this kind: the tessellations of the hyperbolic plane provide a concrete representation for the resulting hyperbolic geometries.

    4. Is there any experimental support for these ideas? There are indeed claims for a quantization of the cosmic recession velocities of quasars (see Fang, L. Z. and Sato, H. (1985): Is the Periodicity in the Distribution of Quasar Red Shifts an Evidence of Multiple Connectedness of the Universe?, Gen. Rel. and Grav., Vol. 17, No. 11). For non-relativistic velocities this means that in a given direction there are objects for which the corresponding Lorentz boosts are powers of a basic boost exp(η); the second sketch after this list illustrates the resulting velocity spectrum. The effect could be due to a restriction of the allowed Lorentz boosts to a discrete subgroup or to the invariance of the cosmic wave function under this kind of subgroup. These effects should take place in all scales: in particle physics they could manifest themselves as a periodicity of production rates as a function of η, closely related to the so called rapidity variable y.

  7. The possibility of intersecting CDs would mean violent collisions of sub-cosmologies. One could consider a generalized form of the Pauli exclusion principle denying the intersections.
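
As a concrete illustration of the quantization a = a_n = 2^n T(CP2) discussed in item 5 above, the following minimal Python sketch inverts the .1 second time scale of electron to obtain the implied value of T(CP2). The identification of electron with n = 127, the Mersenne prime M_127 = 2^127 - 1 of p-adic mass calculations, is an input assumption imported from elsewhere in TGD rather than something derived in this posting.

```python
# Minimal numerical sketch of the octave quantization a_n = 2^n * T(CP2).
# Assumption: electron corresponds to n = 127 (Mersenne prime M_127),
# as in TGD p-adic mass calculations, with a_127 = 0.1 s as stated above.

A_ELECTRON = 0.1    # seconds, the fundamental bio-rhythm quoted in the text
N_ELECTRON = 127    # assumed Mersenne integer for electron

# Invert a_n = 2^n * T(CP2) to get the implied CP2 time scale.
T_CP2 = A_ELECTRON / 2**N_ELECTRON
print(f"implied T(CP2) ~ {T_CP2:.2e} s")   # ~5.9e-40 s

# A few neighboring octaves: each step in n doubles the time scale.
for n in (125, 126, 127, 128):
    print(f"n = {n}: a_n = {2**n * T_CP2:.3e} s")
```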
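The velocity spectrum implied by the discrete boosts of item 6 is equally easy to illustrate. In the following sketch the basic rapidity η, or equivalently v0 = tanh(η), is a purely illustrative choice and not a fit to the quasar data.

```python
import math

# Illustrative lattice of Lorentz boosts: L^n = exp(n*eta) gives v_n = tanh(n*eta).
# The basic velocity v0 below is an arbitrary illustrative value (assumption).
v0 = 1e-3                      # basic velocity in units of c
eta = math.atanh(v0)           # corresponding basic rapidity

for n in (1, 2, 3, 10, 100, 1000, 3000):
    v_n = math.tanh(n * eta)
    print(f"n = {n:5d}: v_n/c = {v_n:.6f}")
# For n*eta << 1 the spectrum is nearly v0, 2*v0, 3*v0, ... so the recession
# velocities look evenly spaced; for large n they saturate below c, as
# required by Lorentz invariance.
```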

For a background see the chapter TGD and Cosmology of the book "Physics in Many-Sheeted Space-time".

Thursday, October 01, 2009

A new dark matter anomaly

There is an intense flood of exciting news from biology, neuroscience, cosmology, and particle physics which is very interesting from the TGD point of view. Unfortunately, I do not have time and energy to comment on all of it. Special thanks to Mark Williams and Ulla for sending links: I will try to find time to write comments.

One of the most radical parts of quantum TGD is the view about dark matter as a hierarchy of phases of matter with varying values of Planck constant realized in terms of a generalization of the 8-D imbedding space to a book like structure. The latest blow against existing models of dark matter is the discovery of a new strange aspect of dark matter discussed in the popular article Galaxy study hints at cracks in dark matter theories in New Scientist. The original article in Nature is titled Universality of galactic surface densities within one dark halo scale-length. I glue here a short piece of the New Scientist article.

A galaxy is supposed to sit at the heart of a giant cloud of dark matter and interact with it through gravity alone. The dark matter originally provided enough attraction for the galaxy to form and now keeps it rotating. But observations are not bearing out this simple picture. Since dark matter does not radiate light, astronomers infer its distribution by looking at how a galaxy's gas and stars are moving. Previous studies have suggested that dark matter must be uniformly distributed within a galaxy's central region – a confounding result since the dark matter's gravity should make it progressively denser towards a galaxy's centre. Now, the tale has taken a deeper turn into the unknown, thanks to an analysis of the normal matter at the centres of 28 galaxies of all shapes and sizes. The study shows that there is always five times more dark matter than normal matter where the dark matter density has dropped to one-quarter of its central value.

In TGD framework both dark energy and dark matter are assumed to correspond to dark matter in the TGD sense, but with widely different values of Planck constant. The point is that a very large value of Planck constant for dark matter implies that its density is in an excellent approximation constant, as is also the density of dark energy. Planck constant is indeed predicted to be gigantic at the space-time sheets mediating gravitational interaction.

The appearance of the number five as a ratio of mass densities sounds mysterious. Why should the average mass in a large volume be proportional to hbar, at least when hbar is not too large? Intriguingly, the number five appears also in the Bohr model for planetary orbits. The value of the gravitational Planck constant GMm/v0 assignable to the space-time sheets mediating the gravitational interaction between planet and star is gigantic: v0/c ∼ 2^(-11) holds true for the inner planets. For the outer planets v0/c is smaller by a factor 1/5, so that the corresponding gravitational Planck constant is 5 times larger. Do these two fives represent a mere co-incidence?
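
To get a feeling for just how gigantic the gravitational Planck constant is, one can evaluate hbar_gr = GMm/v0 numerically. The following sketch does this for the Sun-Earth pair; the choice of Earth as the test planet is mine and purely illustrative.

```python
# Bohr model gravitational Planck constant hbar_gr = G*M*m/v0.
G       = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c       = 2.998e8     # speed of light, m/s
hbar    = 1.055e-34   # ordinary Planck constant / 2pi, J s
M_sun   = 1.989e30    # kg
m_earth = 5.972e24    # kg (illustrative choice of planet)

v0_inner = c * 2**(-11)   # v0/c ~ 2^-11 for inner planets, as quoted above
v0_outer = v0_inner / 5   # smaller by a factor 1/5 for outer planets

hbar_gr = G * M_sun * m_earth / v0_inner
print(f"hbar_gr = {hbar_gr:.2e} J s")            # ~5.4e39 J s
print(f"hbar_gr / hbar = {hbar_gr / hbar:.2e}")  # ~5e73, gigantic indeed
print(f"outer / inner = {(G*M_sun*m_earth/v0_outer) / hbar_gr:.1f}")  # 5.0
```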

  1. In accordance with TGD inspired cosmology, suppose that visible matter and also the matter which is conventionally called dark matter have emerged from the decay and widening of cosmic strings to magnetic flux tubes. Assume that the string tension can be written as k×hbar/G, with k a numerical constant.

  2. Suppose that the values of hbar come as pairs hbar = n×hbar0 and 5×hbar. Suppose also that for a given value of hbar the length of the cosmic string (if present at all) inside a sphere of radius R is given by L = x(n)×R, with x(n) a numerical constant which can depend on the pair but is the same for the two members of the pair (hbar, 5×hbar). This assumption is supported by the velocity curves of distant stars around galaxies.

  3. These assumptions imply that the masses of matter for a pair (hbar, 5×hbar) in a volume of size R are given by M(hbar) = k×x(n)×hbar×R/G and M(5×hbar) = 5×M(hbar). This would explain the finding if visible matter corresponds to hbar0 and x(n) is much smaller for the pairs (n>1, 5×n) than for the pair (1,5).

  4. One can explain the pairing in TGD framework. Let us accept the earlier hypothesis that the preferred values of hbar correspond to the number theoretically maximally simple quantum phases q = exp(i2π/n) emerging first in the number theoretical evolution, which has a nice formulation in terms of algebraic extensions of rationals and p-adics and the gradual migration of matter to the pages of the book like structure labelled by large values of Planck constant. These number theoretically simple quantum phases correspond to n-polygons drawable by ruler and compass construction. This predicts that the preferred values of hbar correspond to a power of 2 multiplied by a product of distinct Fermat primes F_k = 2^(2^k)+1. The list of known Fermat primes is short: F_k, k=0,1,2,3,4, giving the Fermat primes 3, 5, 17, 257, and 2^16+1. This hypothesis indeed predicts that Planck constants hbar and 5×hbar appear as pairs.

  5. Why should the pair (1, F_1=5) then be favored? Could the reason be that n=5 is also the smallest integer making a universal topological quantum computer possible: the quantum phase q = exp(i2π/5) characterizes the braiding coding for the topological quantum computer program. Or is the reason simply that this pair is the number theoretically simplest pair and therefore must have emerged first in the number theoretic evolution?

  6. This picture supports the view that ordinary matter and what is usually called dark matter are characterized by Planck constants hbar0 and 5×hbar0, and that the space-time sheets mediating gravitational interaction correspond to dark energy, because the density of matter at these space-time sheets must be constant in an excellent approximation since the Compton lengths are so gigantic.

  7. Using the fact that 4 per cent of all matter is visible, this means that n=5 corresponds to 20 per cent of dark matter in the standard sense. The pairs (n>1, 5×n) should contribute the remaining 2 per cent of dark matter. The fractal scaling law

    x(n) proportional to 1/n^r

    allowing pairs defined by all Fermat integers not divisible by 5 would give for the mass fraction of conventional dark matter with n>1 the expression

    p = 6× ∑_k 2^(-k(r-1)) × [2^(-(r-1)) + ∑ n_F^(-(r-1))] × (4/100) = (24/100) × (1-2^(-(r-1)))^(-1) × [2^(-(r-1)) + ∑ n_F^(-(r-1))] .

    Here n_F denotes a Fermat integer which is a product of some of the Fermat primes in the set {3, 17, 257, 2^16+1}. The exponent is r-1 rather than r because the mass of a pair scales as x(n)×n, which is proportional to n^(1-r). The contribution from n = 2^k, k>0, gives the term not included in the sum over n_F. r = 4.945 predicts p = 2.0035, and the mass density of dark matter should scale down as 1/hbar^(r-1) = 1/hbar^3.945. The numerical sketch after this list evaluates these expressions.

  8. The prediction brings to mind the scaling 1/a^(r-1) for the cosmological mass density. The a^(-4) scaling of the radiation dominated cosmology is very near to this scaling. r=5 would predict p=1.9164, which is of course consistent with the data. This inspires the working hypothesis that the density of dark matter as a function of hbar scales just like the density of matter as a function of cosmic time at a particular epoch. In matter dominated cosmology with mass density behaving as 1/a^3 one would have r=4 and p=4.45. In asymptotic cosmology with mass density behaving as 1/a^2 (according to TGD) one would have r=3 and p=11.68.

  9. Living systems would represent a deviation from the "fractal thermodynamics" for hbar, since for the typical values of hbar associated with the magnetic bodies in living systems (say hbar = 2^44 hbar0 for EEG, to guarantee that the energies of EEG photons are above the thermal threshold) the density of the dark matter would be extremely small. Bio-rhythms are assumed to come as powers of 2 in the simplest model for the bio-system: the above considerations raise the question whether these rhythms could be accompanied by 5-multiples and perhaps also by Fermat integer multiples. For instance, the fundamental 10 Hz alpha frequency could be accompanied by a 2 Hz frequency and the 40 Hz thalamocortical resonance frequency by an 8 Hz frequency.
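
The following sketch makes the fractal sum of items 7 and 8 explicit: it enumerates the Fermat integers n_F as products of distinct Fermat primes other than 5 and evaluates p(r) from the formula above, with the exponent r-1 entering because the mass of a pair scales as n^(1-r) as explained there.

```python
from itertools import combinations

# Fermat primes other than 5; Fermat integers n_F are products of distinct ones.
FERMAT = [3, 17, 257, 2**16 + 1]

def fermat_integers():
    """All products of distinct Fermat primes in FERMAT (non-empty subsets)."""
    out = []
    for size in range(1, len(FERMAT) + 1):
        for combo in combinations(FERMAT, size):
            prod = 1
            for f in combo:
                prod *= f
            out.append(prod)
    return out

def p_dark(r):
    """Mass fraction (in per cent) of conventional dark matter with n > 1:
    p = (24/100) * (1 - 2^-(r-1))^-1 * [2^-(r-1) + sum n_F^-(r-1)],
    the 4 per cent of visible matter entering via the factor 6*(4/100)."""
    s = r - 1   # mass of a pair scales as n^(1-r)
    bracket = 2**-s + sum(nF**-s for nF in fermat_integers())
    return 24.0 * bracket / (1.0 - 2**-s)

for r in (4.945, 5.0, 4.0, 3.0):
    print(f"r = {r}: p = {p_dark(r):.4f} per cent")
# Reproduces the values quoted above: 2.0035, 1.9164, 4.45, 11.68.
```
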
This model is an oversimplification obtained by assuming only singular coverings of CD. In principle both coverings and factor spaces of both CD and CP2 are possible. If a singular covering of both CP2 and CD is involved, and if one has n=5 for both, then the ratio of the mass densities is 1/25, or about 4 per cent. This is not far from the experimental ratio of about 4 per cent of the density of visible matter to the total density of ordinary matter, dark matter, and dark energy. I interpret this as an accident: dark energy can correspond to dark matter only if the Planck constant is very large, and a natural place for dark energy is at the space-time sheets mediating gravitational interaction.

Some further observations about the number five are in order. The angle 2π/5 relates closely to the Golden Mean, which appears almost everywhere in biology. n=5 makes itself manifest also in the geometry of DNA (the twist per single nucleotide is π/5, and aromatic 5-cycles appear in DNA nucleotides). Could it be that the electron pairs associated with aromatic rings correspond to hbar = 5×hbar0, as I have proposed? Note that the DNA as topological quantum computer hypothesis plays a key role in TGD inspired quantum biology.
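
For the record, the connection between the angle 2π/5 and the Golden Mean Φ = (1+√5)/2 is the elementary identity cos(2π/5) = 1/(2Φ), which the following one-liner verifies.

```python
import math

phi = (1 + math.sqrt(5)) / 2      # Golden Mean
print(math.cos(2 * math.pi / 5))  # 0.309016...
print(1 / (2 * phi))              # 0.309016..., the same number
```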

For a background see the chapter TGD and Astrophysics of the book "Physics in Many-Sheeted Space-time".