https://matpitka.blogspot.com/2007/04/

Sunday, April 29, 2007

Precise definition of fundamental length as a royal road to TOE

A visit to Kea's blog inspired a little comment about the notion of fundamental length. During a morning walk I realized that a simple conceptual analysis of this notion, usually taken for granted, allows one to say something highly non-trivial about the basic structure of a quantum theory of gravitation.

1. Stringy approach to Planck length as fundamental length fails

In string models in their original formulation the Planck length is identified as the fundamental length and appears in the string tension. As such this means nothing unless one defines the Planck length operationally. This means that one must refer to the standard meter stick in Paris or something more modern. This is very ugly if one is speaking about a theory of everything. Perhaps this is why the writers of books about strings prefer not to go into the details of what one means by Planck length;-). One could of course mumble something about the failure of Riemannian geometry at Planck lengths, but this means going outside the original theory, and the question remains why this failure occurs at a length scale which is just this particular fraction of the meter stick in Paris.

2. Fundamental length as length of closed geodesic

What seems clear is that a theory of everything should identify the fundamental length as something completely inherent to the structure of the theory, not something in Paris. The most natural manner to define the fundamental length is as the length of a closed geodesic. Unfortunately, flat Minkowski space does not have closed geodesics. If one however adds compact dimensions to Minkowski space, the problem disappears. If one requires that the unit is unique, this leaves into consideration only symmetric spaces for which all geodesics have the same length.

3. Identification of fundamental length in terms of spontaneous compactification fails

The question is how to do this. In superstring models and M-theory spontaneous compactification brings in the desired closed geodesics. Usually the lengths of the geodesics however vary. Even worse, the length is defined by a particular solution of the theory rather than by something universal, so that we are in Paris again. It seems that the compact dimensions cannot be dynamical but must be in some sense God-given.

4. The approach based on non-dynamical imbedding space works

In TGD the imbedding space is not dynamical but fixed by the requirement that the "world of classical worlds", consisting of light-like 3-surfaces representing orbits of partons (their size can actually be anything), has a well-defined Kähler geometry. Already the Kähler geometry of loop spaces is unique and possesses the Kac-Moody group as its isometries. Essentially a constant curvature space is in question. The reason for the uniqueness is that the mathematically respectable existence of the Riemann connection is not at all obvious in the infinite-dimensional context. The fact that the curvature scalar is infinite however strongly suggests that strings are not fundamental objects after all.

In the 3-dimensional situation the existential constraints are much stronger. A generalization of the Kac-Moody symmetries is expected to be necessary for the existence of the geometry of the world of classical worlds. The uniqueness of infinite-dimensional Kähler geometric existence would thus become the Mother of All Principles of Physics.

This principle seems to work in the desired manner. The generalization of conformal invariance is possible only for 4-dimensional space-time surfaces and an imbedding space of the form H=M4×S, and number theoretical arguments involving octonions and quaternions fix the symmetric space S (also a Kähler manifold) to S=CP2. Standard model symmetries lead to the same identification.

The conclusion is that in TGD the universal unit of length is explicitly present in the definition of the theory, and that TGD in principle predicts the length of the meter stick in Paris using the CP2 geodesic length as the unit, rather than expressing the Planck length in terms of the length of the meter stick in Paris. It is actually enough to predict the Compton lengths of particles (that is, their masses) correctly in terms of the CP2 size. p-Adic mass calculations indeed predict particle masses correctly, and the p-adic length scale hypothesis brings in p-adic length scales as a hierarchy of derived and much more practical units of length. In particular, an inhabitant of the many-sheeted space-time can measure distances at a given space-time sheet using a smaller space-time sheet as a unit.
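As a concrete illustration of such derived units, the following little Python sketch evaluates the p-adic length scale formula L(k)= 2^((k-127)/2)×L(127). The anchoring of the hierarchy to the electron Compton length at k=127 and the listed k values are my illustrative choices, not predictions made in this posting.

# A minimal sketch, assuming the p-adic length scale formula
# L(k) = 2^((k-127)/2) * L(127), with the hierarchy anchored to the
# electron Compton length at k = 127. The k values are illustrative.

L_127 = 2.426e-12  # electron Compton length h/(m_e*c) in meters

def p_adic_length(k):
    """Length scale associated with the p-adic prime p ~ 2^k."""
    return 2 ** ((k - 127) / 2.0) * L_127

for k in (127, 151, 157, 163, 167):
    print("k = %3d: L(k) = %.3e m" % (k, p_adic_length(k)))

# k = 151 gives about 1e-8 m (10 nm, the cell membrane thickness scale),
# illustrating how each level provides a practical derived unit.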

Thursday, April 26, 2007

In what sense is dark matter dark?

The notion of dark matter as something which has only gravitational interactions brings to mind the concept of the ether and is very probably only an approximate characterization of the situation. As I have been gradually developing the notion of dark matter as a hierarchy of phases of matter with an increasing value of Planck constant, the naivete of this characterization has indeed become obvious. While writing yesterday's long posting Gravitational radiation and large value of gravitational Planck constant I understood what the detection of dark gravitons might mean.

During the last night I realized that dark matter is dark only in the sense that the process of receiving the dark bosons (say gravitons) mediating the interactions with other levels of the dark matter hierarchy, in particular ordinary matter, differs so dramatically from that predicted by the theory with a single value of Planck constant that the detected dark quanta are unavoidably identified as noise. Dark matter is there and interacts with ordinary matter, and living matter in general and our own EEG in particular (identified in standard neuroscience as noise which however correlates with the contents of consciousness. Good grief!) provide the most dramatic examples of this interaction. Hence we can consider dropping "dark matter" from our vocabulary altogether and replacing "dark" with the spectrum of Planck constants characterizing the particles (dark matter) and their field bodies (dark energy).

A. Background

A.1 Generalization of the imbedding space concept

The idea of a quantized Planck constant in ordinary space-time is not promising, since in a given interaction vertex the values of Planck constant should be identical and it is difficult to imagine how this could be realized mathematically. With the realization that the hierarchy of Jones inclusions might relate directly to the value hierarchy of Planck constants emerged the idea of a modification of the imbedding space obtained by gluing together copies H→ H/Ga×Gb, where Ga and Gb are discrete subgroups of SU(2) associated with Jones inclusions, along their common points (see this and this). Ga and Gb can be restricted to be cyclic and thus to leave the choice of quantization axes invariant. A book-like structure results, with the different copies of H analogous to the pages of the book. Each sheet corresponds to a particular kind of dark matter or dark energy, depending on whether it corresponds to a particle or to its field body.

A.2 Darkness at the level of elementary particles

Elementary particles could be maximally quantum critical in the sense that the corresponding partonic 2-surfaces belong to the 4-D intersection of all copies of the imbedding space (assuming Ga and Gb are cyclic and leave the quantization axes invariant), so that one cannot say which value of Planck constant they correspond to. The most conservative criterion for darkness at the elementary particle level is that elementary particles are quantum critical systems, so that only their field bodies are dark, and that the particle space-time sheet to which I assign the p-adic prime p characterizing the particle corresponds to its em field body mediating its electromagnetic self-interactions. Also the Compton length, as determined by the em interaction, would characterize this field body. Compton length would be a completely operational concept. This option is implied by the strong hypothesis that elementary particles are maximally quantum critical, meaning that they belong to the subspace of H left invariant by all groups Ga×Gb leaving the quantization axes invariant, so that all dark variants of a particle identified as a 2-D partonic surface would be identical.

The implication would be that a particle possesses a field body associated with each interaction, and an extremely rich repertoire of phases emerges if these bodies are allowed to be dark and characterized by p-adic primes. Planck constant would be assigned to a particular interaction of the particle rather than to the particle itself. This conforms with the formula for the gravitational Planck constant hbargr= 2^11×GMm (not the most general formula but giving the order of magnitude; for details see this), whose dependence on the particle masses indeed forces the assignment of this constant to the gravitational field body, as something characterizing an interaction rather than a particle.

Of course, nothing prevents from accepting the existence of elementary particles, which are not completely quantum critical and the subgroups of Ga×Gb define a hierarchy for which elementary particle proper is also dark. This extends the repertoire and leads to idea like N-atom in which electrons correspond to N-sheeted partonic 2-surfaces so that as many as N electrons can be in identical quantum state in the sense as the word is used in single sheeted space-time. I have proposed applications of N-atom and hierarchy of subgroups to the basic biology (see this, this, and this.)

B. How do the various levels of the dark matter hierarchy interact?

At the classical level the interaction between the various levels of the hierarchy means that electric and magnetic fluxes flow between sectors of the imbedding space with different values of Planck constant. Faraday's induction law made it clear from the beginning that the levels of the hierarchy must interact.

It also became clear that dark bosons can decay to ordinary bosons in a phase transition that I have called de-coherence. For instance, a dark boson defining an N-sheeted covering of M4 decays in this process to N 1-sheeted coverings. The N-fold value of Planck constant means that energy is conserved if the frequency is not changed, while the "Riemann sheets" cease to form a folded connected structure in the process.

The so-called massless extremals (see this, this, and this) define n(Ga)×n(Gb)-fold coverings of H/Ga×Gb, n(Ga)-fold coverings of CP2, and n(Gb)-fold coverings of M4. They are ideal representatives of dark bosonic quanta. The proposal was that dark EEG photons with energies above the thermal energy at room temperature can have non-negligible quantum effects on living matter, although their frequencies would correspond to ridiculously small energies for the ordinary value of Planck constant. Those who are weak can combine their forces! Marxism (or synergy, as you will) at the elementary particle level!
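To get a feeling for the orders of magnitude, the following back-of-the-envelope sketch (the 10 Hz EEG frequency and T=300 K are my illustrative choices) estimates how large the scaling r= hbar/hbar0 must be for a single EEG photon to carry more than the thermal energy at room temperature:

# Rough estimate: how much must Planck's constant be scaled for a
# single quantum of frequency f to exceed the thermal energy k_B*T?
# f = 10 Hz (EEG alpha band) and T = 300 K are illustrative choices.

h = 6.626e-34    # Planck constant, J*s
k_B = 1.381e-23  # Boltzmann constant, J/K

f = 10.0         # Hz
T = 300.0        # K

E_ordinary = h * f              # ordinary 10 Hz photon energy
E_thermal = k_B * T             # thermal energy scale at 300 K
r_min = E_thermal / E_ordinary  # required scaling r = hbar/hbar_0

print("ordinary quantum: %.2e J" % E_ordinary)
print("thermal energy  : %.2e J" % E_thermal)
print("required r      : %.1e" % r_min)  # about 6e11

So scalings of order 10^11-10^12 would be needed: ridiculously small energies indeed for the ordinary value of Planck constant.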

Quite generally, field bodies can mediate interactions between particles at any level of the hierarchy. The visualization is in terms of the book metaphor. A virtual boson, or more generally the 3-D partonic variant of the field body mediating the interaction, emitted by a particle at a given page leaks via the rim of the book to another page. The mediating 2-surface must become partially quantum critical at some stage of the process. This applies both to the static topological field quanta (quanta of electric and magnetic fluxes) connecting particles in bound states and to the dynamical topological field quanta exchanged in scattering (say MEs). Thus dark matter and ordinary matter interact; only the value of Planck constant associated with the mediators of the interaction is different, and this should explain the apparent darkness.

C. What does this mean experimentally?

At this moment I would say that dark matter has the standard interactions, but because of the large value of Planck constant these interactions occur in a different manner. Even gravitons would be dark. A single graviton with a large value of hbar generates a much larger effect than an ordinary graviton with the same frequency.

The good news is that the possibility to detect gravitational radiation improves dramatically. The bad news is that experimenters firmly believe in the dogma of a single universal Planck constant and continue to eliminate the signals which are quite too strong as shot noise, seismic noise, and all kinds of noise produced by the environment. Ironically, not only gravitational radiation but also dark gravitons(!) might have been detected long ago, but we continue to get the frustrating null result just because we have a wrong theory.

The same would apply to the dark quanta of gauge interactions: we might be receiving direct signals about dark matter continually but misinterpreting them. The mystery of dark matter would be generated by a theoretical prejudice, just as the notion of the ether was a century ago.

From the foregoing it should be clear that "dark" is just an unfortunate letter combination which I happened to pick up as I started this business. My sincere apologies! To sum up some basic points:

  1. In the TGD Universe dark matter interacts with ordinary matter and is detectable, but the interactions are realized as bursts of collinear quanta resulting when a dark boson de-coheres to bosons at a lower level of the hierarchy. This quantum jump corresponds to some characteristic time interval at the level of the space-time correlates, and things look classical from the point of view of a detector at the lowest level of the hierarchy. After the elimination of noise, the time averages for the detection rates over a sufficiently long time interval should be identical to those predicted by a theory based on the ordinary Planck constant.

  2. In the previous posting I told about additional fascinating aspects related to the detection of dark gravitons, due to the fact that the gravitational Planck constant hbargr= 2^11×GMm (in the simplest case) of an absorbed dark graviton characterizes the field body connecting detector and source and is proportional to the masses of receiver and source. Both the total energy of the dark graviton and the duration of the process induced by the reception of a large hbar graviton are proportional to the masses of the receiving and emitting systems and thus carry information about the mass of the distant source. Some day this could make possible the gravitational counterpart of atomic spectroscopy. This additional information theoretic candy gives one further good reason to take the hierarchy of Planck constants seriously.

  3. Planck constant should carry information about the interacting systems also in the case of other dark interactions. In the case of em interactions the condition hbar= 2^11×Z1Z2e^2, or its generalization, should hold true when the perturbative approach fails for the em interaction between two charged systems. Z1Z2e^2> 1 is the naive criterion for this to happen. Heavy ion collisions would be an obvious application (I discussed the RHIC findings as one of the earliest attempts to develop the ideas by applying them, see this). A gamma ray burst might also be the outcome of a single very dark boson giving rise to a precisely targeted pulse of radiation. The criterion should apply also to self-interactions and suggests that in the case of heavy nuclei the electromagnetic field body of the nucleus becomes dark. Color confinement also provides a natural application.

  4. Dark photons with a large value of hbar could transmit large energies through long distances, and their phase conjugate variants could make possible a new kind of energy transfer mechanism essential in the TGD based quantum model of metabolism and having also possible technological applications. Various kinds of sharp pulses suggest themselves as a manner to produce dark bosons in the laboratory. Interestingly, after having given us alternating electricity, Tesla spent the rest of his professional life experimenting with effects generated by electric pulses. Tesla claimed that he had discovered a new kind of invisible radiation, scalar wave pulses, which could make possible wireless communications and energy transfer on the scale of the globe (for a possible but not the only TGD based explanation see this). This notion of course did not conform with Maxwell's theory, which had just gained general acceptance, so Tesla's fate was to spend his last years as a crackpot. Great experimentalists seem to see what is there rather than what theoreticians tell them they should see. They are often also visionaries too much ahead of their time.

For more details see the chapter TGD and Astrophysics of "Classical Physics in Many-Sheeted Space-time". For the applications of dark matter ideas to biosystems and living matter see the online books at my homepage and the links in the text.

Gravitational radiation and large value of gravitational Planck constant

Gravitational waves have been discussed on both Lubos's blog and Cosmic Variance. This provided the stimulus to look at how the TGD based predictions for gravitational waves differ from the classical predictions. The article Gravitational Waves in Wikipedia provides excellent background material which I have used in the following. This posting is an extended and twice corrected version of the original.

The description of gravitational radiation provides a stringent test for the idea of a dark matter hierarchy with arbitrarily large values of Planck constant. In accordance with quantum classical correspondence, one can take consistency with the classical formulas as a constraint allowing one to deduce information about how dark gravitons interact with ordinary matter. In the following the standard facts about gravitational radiation are discussed first, and then the TGD based view about the situation is sketched.

A. Standard view about gravitational radiation

A.1 Gravitational radiation and the sources of gravitational waves

Classically, gravitational radiation corresponds to small deviations of the space-time metric from the empty Minkowski space metric (see this). Gravitational radiation is characterized by polarization, frequency, and amplitude. At the quantum mechanical level one speaks about gravitons characterized by spin and light-like four-momentum.

The amplitude of gravitational radiation is proportional to the quadrupole moment of the emitting system, which excludes systems possessing a rotational axis of symmetry as classical radiators. Planetary systems produce gravitational radiation at harmonics of the rotational frequency. The power of the gravitational radiation from a planetary system is given by

P= dE/dt= (32/5)×G^4 M1^2 M2^2 (M1+M2)/R^5 (in units with c=1).

This formula can be taken as a convenient quantitative reference point.

Planetary systems are not very effective radiators. Because of their small radius and rotational asymmetry, supernovas are much better candidates in this respect. Also binary stars and pairs of black holes are good candidates. Russell Hulse and Joe Taylor received the 1993 Nobel Prize for the discovery of a binary pulsar whose orbital decay provided the first indirect proof of the existence of gravitational radiation. The Hulse-Taylor binary consists of a pulsar and a companion neutron star, with masses around 1.4 solar masses. Their distance is only a few solar radii. Note that pulsars have a small radius, typically of order 10 km. The distance between the stars can be deduced from the Doppler shift of the signals sent by the pulsar. The radiated power is about 10^22 times that from the Earth-Sun system, basically due to the small value of R. Gravitational radiation induces a loss of total energy and a reduction of the distance between the stars, and this can be measured.
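As a sanity check on these orders of magnitude, the following sketch evaluates the circular-orbit quadrupole formula (with the factors of c restored for SI units) for the Earth-Sun system and for a Hulse-Taylor-like binary. The binary parameters, two 1.4 solar mass stars separated by three solar radii on a circular orbit, are my rough stand-ins, so only the powers of ten matter:

# Rough check of the circular-orbit quadrupole formula
# P = (32/5) * G^4/c^5 * (M1*M2)^2 * (M1+M2) / R^5
# for the Earth-Sun system and a Hulse-Taylor-like binary.
# The binary parameters are illustrative round numbers, not fitted values.

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_sun = 1.989e30   # kg
R_sun = 6.96e8     # m

def quadrupole_power(M1, M2, R):
    """Radiated power (W) for two masses on a circular orbit of radius R."""
    return (32.0/5.0) * G**4 / c**5 * (M1*M2)**2 * (M1+M2) / R**5

P_earth_sun = quadrupole_power(5.97e24, M_sun, 1.496e11)    # ~2e2 W
P_binary = quadrupole_power(1.4*M_sun, 1.4*M_sun, 3*R_sun)  # ~5e23 W

print("Earth-Sun: %.1e W" % P_earth_sun)
print("binary   : %.1e W" % P_binary)
print("ratio    : %.1e" % (P_binary / P_earth_sun))

The circular approximation gives a ratio of a few times 10^21; the large orbital eccentricity of the real binary pushes this toward the 10^22 quoted above.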

A.2 How to detect gravitational radiation?

Concerning the detection of gravitational radiation, the problems are posed by the extremely weak intensity, further reduced by the large distance to the source. The amplitude of gravitational radiation is measured by the deviation of the metric from the Minkowski metric, denoted by h.

The Weber bar (see this) provides one possible manner to detect gravitational radiation. It relies on a resonant amplification of gravitational waves at the resonance frequency of the bar. For a gravitational wave with an amplitude h≈ 10^-20 the distance between the ends of a bar with a length of 1 m should oscillate with an amplitude of 10^-20 meters, so extremely small effects are in question. For the Hulse-Taylor binary the amplitude is about h= 10^-26 at Earth. By increasing the size of the apparatus one can increase the amplitude of the stretching.

Laser interferometers provide a second possible method for detecting gravitational radiation. The masses are at distances varying from hundreds of meters to kilometers (see this). LIGO (the Laser Interferometer Gravitational Wave Observatory) consists of three devices: the first one is located at Livingston, Louisiana, and the other two at Hanford, Washington. The system consists of light storage arms with lengths of 2-4 km at an angle of 90 degrees. The vacuum tubes in the storage arms carrying the laser radiation have a length of 4 km. One arm is stretched and the other shortened, and the interferometer is ideal for detecting this. The gravitational waves should create stretchings no longer than 10^-17 meters, which is of the same order of magnitude as the intermediate gauge boson Compton length. LIGO can detect a stretching which is even shorter than this. The detected amplitudes can be as small as h≈ 5×10^-22.
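Since the strain h is dimensionless, the measurable displacement is simply h times the baseline length. A trivial sketch with the numbers quoted above:

# Strain h is dimensionless: a wave of amplitude h changes a baseline
# of length L by roughly dx = h * L. Numbers are those quoted in the text.

def displacement(h, L):
    """Approximate change (m) of a baseline of length L under strain h."""
    return h * L

print("Weber bar, h = 1e-20, L = 1 m  : dx = %.1e m" % displacement(1e-20, 1.0))
print("LIGO arm,  h = 5e-22, L = 4 km : dx = %.1e m" % displacement(5e-22, 4.0e3))

# The LIGO figure, ~2e-18 m, is indeed below the ~1e-17 m scale quoted
# above for the stretchings to be detected.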

B. Gravitons in TGD

In this subsection two models for dark gravitons are discussed. A spherical dark graviton (or briefly giant graviton) would be emitted in quantum transitions of, say, a dark gravitational variant of the hydrogen atom. The giant graviton is expected to de-cohere into topological light rays, which are the TGD counterparts of plane waves and are expected to be detectable by human-built detectors.

B.1 Gravitons in TGD

Contrary to what a naive application of Mach's principle would suggest, gravitational radiation is possible in empty space in general relativity. In the TGD framework it is not possible to speak about small oscillations of the metric of the empty Minkowski space imbedded canonically in M4×CP2, since Kähler action is non-vanishing only in fourth order in the small deformation and the deviation of the induced metric is quadratic in the deformation. The same applies to the induced gauge fields. Even the induced Dirac spinors associated with the modified Dirac action fixed uniquely by super-symmetry allow only vacuum solutions in this kind of background. Mathematically this means that both the perturbative path integral approach and canonical quantization fail completely in the TGD framework. This led to the vision about physics as the Kähler geometry of the "world of classical worlds", with the quantum states of the universe identified as the modes of classical configuration space spinor fields.

The resolution of various conceptual problems is provided by the parton picture and the identification of elementary particles as light-like 3-surfaces associated with wormhole throats. Gauge bosons correspond to pairs of wormhole throats and fermions to topologically condensed CP2 type extremals having only a single wormhole throat.

Gravitons are string like objects in a well defined sense. This follows from the mere spin 2 property and the fact that partonic 2-surfaces allow only free many-fermion states. This forces gauge bosons to be wormhole contacts, whereas gravitons must be identified as pairs of wormhole contacts (bosons) or of fermions connected by flux tubes. The strong resemblance with string models encourages one to believe that general relativity defines the low energy limit of the theory. Of course, if one accepts the dark matter hierarchy and a dynamical Planck constant, the notion of a low energy limit itself becomes somewhat delicate.

B.2 Model for the giant graviton

Detector, giant graviton, source, and topological light ray will be denoted by D, G, S, and ME in the following. Consider first the model for the giant graviton.

  1. The orbital plane defines the natural quantization axis of angular momentum. The giant graviton and all dark gravitons correspond to na-fold coverings of CP2 by M4 points, which means that one has a quantum state whose fermionic part remains invariant under the transformations φ→ φ+2π/na. This means in particular that the ordinary gravitons associated with the giant graviton have the same spin, so that the giant graviton can be regarded as a Bose-Einstein condensate in spin degrees of freedom. Only the orbital part of the state depends on the angle variables and corresponds to a partial wave with a small value of L.

  2. The total angular momentum of the giant graviton must correspond to the change of angular momentum in the quantum transition between the initial and final orbit. The orbital angular momentum in the direction of the quantization axis should be a small multiple of the dark Planck constant associated with the system formed by the giant graviton and the source. These states correspond to Bose-Einstein condensates of ordinary gravitons in an eigenstate of orbital angular momentum with the ordinary Planck constant. Unless an S-wave is in question, the intensity pattern of the gravitational radiation depends on the direction in a characteristic non-classical manner. The coherence of the dark graviton, regarded as a Bose-Einstein condensate of ordinary gravitons, is what distinguishes the situation in the TGD framework from that in GRT.

  3. If all elementary particles, gravitons included, are maximally quantum critical systems, the giant graviton should contain r(G,S)= na/nb ordinary gravitons. This number is not an integer for nb>1. A possible interpretation is that in this case gravitons possess a fractional spin, corresponding to the fact that a rotation by 2π gives a point in the nb-fold covering of an M4 point by CP2 points. In any case, this gives an estimate for the number of ordinary gravitons and for the radiated energy per solid angle. This estimate follows also from energy conservation for the transition. The requirement that the average power equals the prediction of GRT allows one to estimate the geometric duration associated with the transition. The condition hbar×ω = Ef-Ei is consistent with the identification of hbar for the pair of systems formed by the giant graviton and the emitting system.

B.3 Dark graviton as topological light ray

The second kind of dark graviton is an analog of a plane wave with a finite transversal cross section. TGD indeed predicts what I have called topological light rays, or massless extremals (MEs), as a very general class of solutions to the field equations (see this, this, and this).

MEs are typically cylindrical structures carrying induced gauge fields and a gravitational field without dissipation and dispersion and without weakening with distance. These properties are ideal for targeted long distance communications, which inspires the hypothesis that they play a key role in living matter (see this and this) and make possible a completely new kind of communications over astrophysical distances. Large values of Planck constant allow one to resolve the problem posed by the fact that for long distances the energies of these quanta would be below the thermal energy of the receiving system.

Giant gravitons are expected to decay via de-coherence to dark gravitons of this kind having a smaller value of Planck constant, and it is these gravitons which are detected. Quantitative estimates indeed support this expectation.

At the space-time level, dark gravitons at the lower levels of the hierarchy would naturally correspond to na-Riemann-sheeted (r=GmE/v0=na/nb for m>>E) variants of topological light rays ("massless extremals", MEs), which define a very general family of solutions to the field equations of TGD (see this). The na-sheetedness is with respect to CP2 and means that every point of CP2 is covered by na M4 points related by rotations by multiples of 2π/na around the propagation direction assignable to the ME. nb-sheetedness with respect to M4 is possible but does not play a significant role in the following considerations. Using the same loose language as in the case of the giant graviton, one can say that r=na/nb copies of the same graviton have suffered a topological condensation onto this kind of ME. A more precise statement would be na gravitons with the fractional unit hbar0/na for spin.

C. Detection of gravitational radiation

One should also understand how the description of gravitational radiation at the space-time level relates to the picture provided by general relativity, in order to see whether the existing measurement scenarios really measure gravitational radiation as it appears in TGD. There are more or less obvious questions to be answered (or perhaps obvious only after considerable work).

What is the value of the dark gravitational Planck constant which must be assigned to the measuring system and the gravitational radiation from a given source? Is the detection of the primary giant graviton possible by human means, or is it possible to detect only the dark gravitons produced in the sequential de-coherence of the giant graviton? Do dark gravitons enhance the possibility to detect gravitational radiation, as one might expect? What are the limitations on detection due to energy conservation in the de-coherence process?

C.1 TGD counterpart for the classical description of detection process

The oscillation of the distance between two masses defines a simplified picture of the reception of gravitational radiation. Now the ME would correspond to an na-"Riemann-sheeted" (with respect to CP2) graviton with each sheet oscillating with the same frequency. The classical interaction would suggest that the measuring system topologically condenses at the topological light ray, so that the distance between the test masses, measured along the topological light ray in the direction transverse to the direction of propagation, starts to oscillate.

Obviously the classical behavior is essentially the same as that predicted by general relativity at each "Riemann sheet". If all elementary particles, and therefore also gravitons, are maximally quantum critical systems, then gravitons can be absorbed at each step of the process, and the number of absorbed gravitons and the absorbed energy are r-fold.

C.2. Sequential de-coherence

Suppose that the detecting system has mass m, and suppose that the gravitational interaction is mediated by the gravitational field body connecting the two systems.

The Planck constant must characterize the system formed by the dark graviton and the measuring system. In the case that E is comparable to m or larger, the expression for r= hbar/hbar0 must be replaced with the relativistically invariant formula in which m and E are replaced with the energies in the center of mass system. This gives

r= GmE/[v0(1+β)(1-β)^(1/2)] , β= x(-1+(1+2/x)^(1/2)) , x= E/2m .

Assuming m>>E0 this gives to a good approximation

r= Gm1E0/v0= G^2 m1mMω/v0^2 .

Note that in the interaction of identical masses ordinary hbar is possible for m≤ v0^(1/2)×MPl. For v0= 2^-11 the critical mass corresponds roughly to the mass of a water blob of radius 1 mm.

One can interpret the formula by saying that the de-coherence splits off from the incoming dark graviton a dark piece having energy E1= (Gm1E0/v0)ω, which is a fraction E1/E0= (Gm1/v0)ω of the energy of the graviton. At the n:th step of the process the system would split off from the dark graviton of the previous step the fraction

En/E0= (Gω/v0)^n×∏i mi

of the total emitted energy E0. The de-coherence process would proceed in steps such that the typical masses of the measuring systems decrease gradually as the process goes downwards in the length and time scale hierarchy. At large distances this splitting process should lead to a situation in which the original spherical dark graviton has split into ordinary gravitons, with the angular distribution being the same as predicted by GRT.

The splitting process should stop when the condition r≤ 1 is satisfied and the topological light ray carrying the gravitons becomes a 1-sheeted covering of M4. For E<<m this gives GmE≤ v0, so that m>>E implies E<<MPl. For E>>m this gives GE^(3/2)m^(1/2)< 2v0, or

E/m≤ (2v0/Gm^2)^(2/3) .

C.3. Information theoretic aspects

The value of r= hbar/hbar0 depends on the mass of the detecting system and on the energy of the graviton, which in turn depends on the de-coherence history in a corresponding manner. Therefore the total energy absorbed from the pulse codes, via the value of r, information about the masses appearing in the de-coherence process. For a process involving only a single step, the value of the source mass can be deduced from this data. This could some day provide totally new means of deducing information about the masses of distant objects: something totally new from the point of view of classical and string theories of gravitational radiation. This kind of information theoretic bonus gives a further good reason to take the notion of quantized Planck constant seriously.

If one makes the stronger assumption that the values of r correspond to ruler-and-compass rationals, expressible as ratios of the number theoretically preferred integers of the form n= 2^k×∏sFs, where the Fs are distinct Fermat primes (only five are known), very strong constraints on the masses of the systems participating in the de-coherence sequence result. Analogous conditions appear also in the Bohr orbit model for the planetary masses, and the resulting predictions were found to hold true within a few per cent. One cannot therefore exclude the fascinating possibility that the de-coherence process might in a very clever manner code information about the masses of the systems involved in its steps.
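For concreteness, the sketch below enumerates the ruler-and-compass integers n= 2^k×∏sFs, with the Fs ranging over distinct Fermat primes; the cutoff is arbitrary and serves only to show how sparse the preferred values are:

# Enumerate "ruler-and-compass" integers n = 2^k * (product of distinct
# Fermat primes), the number theoretically preferred values mentioned in
# the text. The cutoff is arbitrary and purely illustrative.

from itertools import combinations

FERMAT_PRIMES = [3, 5, 17, 257, 65537]  # the five known Fermat primes

def ruler_and_compass_integers(limit):
    values = set()
    for r in range(len(FERMAT_PRIMES) + 1):
        for combo in combinations(FERMAT_PRIMES, r):
            product = 1
            for f in combo:
                product *= f
            n = product  # multiply by powers of two up to the cutoff
            while n <= limit:
                values.add(n)
                n *= 2
    return sorted(values)

print(ruler_and_compass_integers(100))
# [1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40, 48,
#  51, 60, 64, 68, 80, 85, 96]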

C.4. During what time interval does the interaction with the dark graviton take place?

If the duration of the bunch is T= E/P, where P is the classically predicted radiation power in the detector and T the detection period, the average power during the bunch is identical to that predicted by GRT. Also T would be proportional to r, and would therefore code information about the masses appearing in the sequential de-coherence process.

An alternative, and more attractive, possibility is that T is always the same and corresponds to r=1. The intuitive justification is that absorption occurs simultaneously for all r "Riemann sheets". This would multiply the power by a factor r and dramatically improve the possibilities to detect gravitational radiation. The measurement philosophy based on the standard theory would however reject these kinds of events, occurring with a frequency smaller by a factor 1/r, as being due to noise (shot noise, seismic noise, and other noise from the environment). This might relate to the failure to detect gravitational radiation.

D. Quantitative model

In this subsection a rough quantitative model for the de-coherence of the giant (spherical) graviton to topological light rays (MEs) is developed, and the situation is examined quantitatively for a hydrogen atom type model of the radiating system.

D.1. Leakage of the giant graviton to sectors of the imbedding space with a smaller value of Planck constant

Consider first the model for the leakage of the giant graviton to sectors of H with a smaller value of Planck constant.

  1. The giant graviton leaks to sectors of H with a smaller value of Planck constant via quantum critical points common to the original and final sectors of H. If ordinary gravitons are quantum critical, they can be regarded as leakage points.

  2. It is natural to assume that the resulting dark graviton corresponds to a radial topological light ray (ME). The discrete group Zna acts naturally as rotations around the direction of propagation of the ME. The Planck constant associated with the ME-G system should, by the general criterion, be given by the general formula already described.

  3. Energy should be conserved in the leakage process. The secondary dark graviton receives the fraction ΔΩ/4π= S(ME)/4πr^2 of the energy of the giant graviton, where S(ME) is the transversal area of the ME and r the radial distance from the source. Energy conservation gives

    (S(ME)/4πr^2)×hbar(G,S)ω= hbar(ME,G)ω ,

    or

    S(ME)/4πr^2= hbar(ME,G)/hbar(G,S)≈ E(ME)/M(S) .

    The larger the distance, the larger the area of the ME. This means a restriction on the measurement efficiency at large distances for realistic detector sizes, since the number of detected gravitons must be proportional to the ratio S(D)/S(ME) of the areas of the detector and the ME.

D.2. The direct detection of the giant graviton is not possible at long distances

Primary detection would correspond to a direct flow of energy from the giant graviton to the detector. Assume that the source is modellable using the large hbar variant of the Bohr orbit model for the hydrogen atom. Denote by r= na/nb the rational defining the Planck constant as hbar= r×hbar0.

For the G-S system one has

r(G,S)= GME/v0= GMmv0× k/n^3 ,

where k is a numerical constant of order unity and m refers to the mass of the planet. For the Hulse-Taylor binary m≈ M holds true.

For the D-G system one has

r(D,G)= GM(D)E/v0= GM(D)mv0× k/n^3 .

The ratio of these rationals is of order M(D)/M.

Suppose first that the detector has a disk like shape. This gives for the total number n(D) of ordinary gravitons going to the detector the estimate

n(D)= (d/r)^2× na(G,S)= (d/r)^2× GMmv0× nb(G,S)× k/n^3 .

If the actual area of the detector is smaller than d^2 by a factor x, one has

n(D)→ xn(D) .

n(D) cannot be smaller than the number of ordinary gravitons estimated using the Planck constant associated with the detector: n(D)≥ na(D,G)= r(D,G)×nb(D,G). This gives the condition

d/r≥ (M(D)/M(S))^(1/2)× (nb(D,G)/nb(G,S))^(1/2)× (k/xn^3)^(1/2) .

Suppose for simplicity that nb(D,G)/nb(G,S)=1, M(D)= 10^3 kg, M(S)= 10^30 kg, and r= 200 Mpc ≈ 10^9 ly, which is a typical distance for binaries. For x=1, k=1, n=1 this gives roughly d≥ 10^-4 ly ≈ 10^11 m, which is roughly the size of the solar system. By the energy conservation condition the entire solar system would be the natural detector in this case. Huge values of nb(G,S) would be required to improve the situation. Therefore the direct detection of the giant graviton by human made detectors is excluded.
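The arithmetic behind this estimate is reproduced below; the detector mass, source mass and distance are the illustrative round values of the text:

# Reproduce the order-of-magnitude estimate for direct giant graviton
# detection with x = k = n = 1 and nb(D,G) = nb(G,S), in which case the
# condition reduces to d/r >= sqrt(M(D)/M(S)).

import math

LY = 9.46e15   # light year in meters

M_D = 1e3      # detector mass, kg (text value)
M_S = 1e30     # source mass, kg (text value)
r = 1e9 * LY   # source distance ~200 Mpc ~ 1e9 ly (text value)

d_min = r * math.sqrt(M_D / M_S)
print("d_min = %.1e m = %.1e ly" % (d_min, d_min / LY))

# ~3e11 m, i.e. of the order of the Earth-Sun distance: the "detector"
# would have to be of solar system size, as stated in the text.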

D.3. Secondary detection

The previous argument leaves only secondary detection into consideration. Assume that the ME results from the primary de-coherence of a giant graviton. Also longer de-coherence sequences are possible, and one can deduce analogous conditions for these.

Energy conservation gives

(S(D)/S(ME))× r(ME,G)= r(D,ME) .

This gives an expression for S(ME) for a given detector area:

S(ME)= (r(ME,G)/r(D,ME))× S(D)≈ (E(G)/M(D))× S(D) .

From S(ME)= (E(ME)/M(S))×4πr^2 one obtains

r= (E(G)M(S)/E(ME)M(D))^(1/2)× S(D)^(1/2)

for the distance at which the ME is created. The distances of the binaries studied at LIGO are of order D= 10^24 m. Using E(G)≈ Mv0^2 and assuming M= 10^30 kg and S(D)= 1 m^2 (just for definiteness), one obtains r≈ 10^25×(kg/E(ME)) m. If the ME is generated at a distance r≈ D, and if one has S(ME)≈ 10^6 m^2 (from the size scale of LIGO), one obtains from the equation for S(ME) the estimate E(ME)≈ 10^-25 kg ≈ 10^-8 Joule.

D.4 Some quantitative estimates for gravitational quantum transitions in planetary systems

To get a concrete grasp of the situation it is useful to study the energies of dark giant gravitons in the case of a planetary system, assuming the Bohr model.

The expressions for the energies of dark gravitons can be deduced from those of the hydrogen atom using the replacements Ze^2→ 4πGMm, hbar→ GMm/v0. I have assumed that the second mass is much smaller. The energies are given by

En= E1/n^2 , E1= (Zα)^2×m/4= (Ze^2/4π×hbar)^2× m/4→ m×v0^2/4 .

E1 defines the energy scale. Note that v0 defines a characteristic velocity if one writes this expression in terms of the classical kinetic energy using the virial theorem T= -V/2 for circular orbits. This gives En= Tn= m×vn^2/2= m×v0^2/4n^2, giving

vn= (v0/2^(1/2))/n . Orbital velocities are quantized as sub-harmonics of the universal velocity v0/2^(1/2)= 2^(-23/2), and the scaling of v0 by 1/n does not lead out from the set of allowed velocities.

The Bohr radius scales as r0= hbar/(Zα×m)→ GM/v0^2 .

For v0= 2^-11 this gives r0= 2^22×GM ≈ 4×10^6 GM. In the case of the Sun this is roughly ten times the solar radius, i.e. not too far above it.

The frequency ω(n,n-k) of the dark graviton emitted in the n→ n-k transition and the orbital rotation frequency ωn are given by

ω(n,n-k)= (v0^3/GM)× (1/(n-k)^2- 1/n^2)≈ k×ωn ,

ωn= v0^3/(GM×n^3) .

At the large n limit the emitted frequencies are harmonics of the orbital rotation frequency, so that quantum classical correspondence holds true. For low values of n the emitted frequencies differ from the harmonics of the orbital frequency.
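To attach numbers to these formulas, the sketch below evaluates the orbital periods Tn= 2π/ωn for the Sun with v0= 2^-11 (in units of c); the comparison with Mercury is only a suggestive aside, not a fit:

# Evaluate the Bohr orbit frequencies omega_n = v0^3*c^3/(G*M*n^3) for
# the Sun, with v0 = 2^-11 in units of c. The Mercury comparison is an
# illustrative aside.

import math

G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s
M = 1.989e30    # solar mass, kg
v0 = 2.0**-11   # dimensionless velocity parameter

for n in range(1, 5):
    omega_n = v0**3 * c**3 / (G * M * n**3)
    T_n = 2.0 * math.pi / omega_n
    print("n = %d: T = %7.1f days" % (n, T_n / 86400.0))

# n = 3 gives ~83 days, close to Mercury's 88 day orbital period, in the
# spirit of the Bohr orbit model assumed above.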

The energy emitted in the n→ n-k transition would be

E(n,n-k)= m×v0^2× (1/(n-k)^2- 1/n^2) ,

and is obviously enormous. A single spherical dark graviton would be emitted in the transition and should decay to gravitons with smaller values of hbar. The bunch like character of the detected radiation might serve as a signature of the process. The bunch like character of the liberated dark gravitational energy means coherence and might play a role in the coherent locomotion of living matter. For a pair of systems with masses m= 1 kg this would mean Gm^2/v0≈ 10^20, meaning that the exchanged dark graviton corresponds to a bunch containing about 10^20 ordinary gravitons. The energies of the graviton bunches would correspond to the differences of the gravitational energies between the initial and final configurations, which in principle would allow one to deduce the Bohr orbits between which the transition took place. Hence dark gravitons could make possible the analog of spectroscopy in astrophysical length scales.
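The 10^20 estimate can be checked as follows; note that the exact power of ten depends on the Planck mass convention (a factor of 8π), which is left open here:

# Number of ordinary gravitons in a dark graviton exchanged between two
# masses m = 1 kg: r = G*m^2/(v0*hbar*c). The factor 8*pi ambiguity
# between Planck mass conventions decides between ~4e18 and the ~1e20
# quoted in the text.

import math

G = 6.674e-11     # m^3 kg^-1 s^-2
hbar = 1.055e-34  # J*s
c = 2.998e8       # m/s
v0 = 2.0**-11

m = 1.0           # kg
r_plain = G * m**2 / (v0 * hbar * c)  # ~4e18
r_reduced = 8.0 * math.pi * r_plain   # reduced Planck mass convention, ~1e20

print("r (G)      = %.1e" % r_plain)
print("r (8*pi*G) = %.1e" % r_reduced)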

E. Generalization to gauge interactions

The situation is expected to be essentially the same for gauge interactions. The first guess is that one has r= Q1Q2g^2/v0, where g is the coupling constant of the appropriate gauge interaction. v0 need not be the same as in the gravitational case. The value of Q1Q2g^2 for which perturbation theory fails defines a plausible estimate for v0. The naive guess would be v0≈ 1. In the case of gravitation this interpretation would mean that the perturbative approach fails for GM1M2= v0. For r>1 Planck constant is quantized with rational values, with ruler-and-compass rationals as favored values. For gauge interactions r would have rather small values. The above criterion applies to the field body connecting two gauge charged systems. One can generalize this picture to the self interactions assignable to the "personal" field body of the system. In this case the condition would read Q^2g^2/v0>>1.
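With the normalization e^2= 4πα (my assumption, chosen because it reproduces the Z≈3 nuclear estimate in the list below) the naive criterion can be evaluated explicitly:

# Evaluate the naive darkness criterion r = Q1*Q2*e^2/v0 > 1 for em
# interactions with v0 ~ 1, assuming the normalization e^2 = 4*pi*alpha.
# This choice reproduces the Z >~ 3 estimate for nuclei made below.

import math

alpha = 1.0 / 137.036
e2 = 4.0 * math.pi * alpha  # ~0.092

def r_em(Q1, Q2, v0=1.0):
    return Q1 * Q2 * e2 / v0

print("two-charge threshold Q1*Q2 > %.1f" % (1.0 / e2))             # ~11
print("self-interaction threshold Z > %.1f" % math.sqrt(1.0 / e2))  # ~3.3
print("heavy ion example r(Au,Au) = %.0f" % r_em(79, 79))           # ~570

Heavy ion collisions thus satisfy the criterion by a wide margin, and a single nucleus would satisfy it already around Z= 3-4.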

E.1 Applications

One can imagine several applications.

  • A possible application would be to electromagnetic interactions in heavy ion collisions.

  • Gamma ray bursts might be one example of dark photons with a very large value of Planck constant. The MEs carrying gravitons could also carry gamma rays, and this would amplify the value of Planck constant for them too.

  • Atomic nuclei are good candidates for systems whose electromagnetic field body is dark. The value of hbar would be r= Z^2e^2/v0, with v0≈ 1. The electromagnetic field body could become dark already for Z>3 or even for Z=3. This suggests a connection with the nuclear string model (see this), in which A<4 nuclei (with Z<3) form the basic building bricks of the heavier nuclei, identified as nuclear strings formed from these structures, which themselves are strings of nucleons.

  • Color confinement for light quarks might involve dark gluonic field bodies.

  • Dark photons with a large value of hbar could transmit large energies through long distances, and their phase conjugate variants could make possible a new kind of energy transfer mechanism (see this) essential in the TGD based quantum model of metabolism and having also possible technological applications. Various kinds of sharp pulses suggest themselves as a manner to produce dark bosons in the laboratory. Interestingly, after having given us alternating electricity, Tesla spent the rest of his professional life experimenting with effects generated by electric pulses. Tesla claimed that he had discovered a new kind of invisible radiation, scalar wave pulses, which could make possible wireless communications and energy transfer on the scale of the globe (see this for a possible but not the only TGD based explanation).

E.2 In what sense is dark matter dark?

The notion of dark matter as something which has only gravitational interactions brings to mind the concept of the ether and is very probably only an approximate characterization of the situation. As I have been gradually developing the notion of dark matter as a hierarchy of phases of matter with an increasing value of Planck constant, the naivete of this characterization has indeed become obvious.

If the proposed view is correct, dark matter is dark only in the sense that the process of receiving the dark bosons (say gravitons) mediating the interactions with other levels of the dark matter hierarchy, in particular ordinary matter, differs so dramatically from that predicted by the theory with a single value of Planck constant that the detected dark quanta are unavoidably identified as noise. Dark matter is there and interacts with ordinary matter, and living matter in general and our own EEG in particular provide the most dramatic examples of this interaction. Hence we could consider dropping "dark matter" from the glossary altogether and replacing the attribute "dark" with the spectrum of Planck constants characterizing the particles (dark matter) and their field bodies (dark energy).

For more details see the chapter TGD and Astrophysics of "Classical Physics in Many-Sheeted Space-time".

Wednesday, April 25, 2007

Would I fish if fish had a face?

Yesterday evening I was contacted by a friend. During the phone discussion he told me that during a flight some Finnish physicist had started to talk with him. During the discussion my friend had mentioned my name and to his surprise had received a very strongly negative reaction.

My friend, who knows me personally, of course wondered why this was. It had become clear that mentioning my name touches very sensitive emotional nerves. Is there some reason? Does the mentioning of my name induce the same kind of weird aggression in all my Finnish colleagues? I told him my impression that this negative attitude characterizes most of the collective. I also proposed that it might reflect the characteristic socio-pathology of Finnish society, which one might refer to as "misunderstood equality". It is very difficult for us in Finland to tolerate the feeling that one of us, just one of these ordinary Finnish people, might possibly have done something that might possibly distinguish him or her among other Finns some day. It is easy to find historical reasons for this syndrome.

I however felt that this explanation was not all of it. During the night I saw a strange dream which gave a hint about the deeper psychology. I was fishing (not one of my hobbies during day-time!). Strangely, half of the fish was out of the water. It had taken the bait into its mouth and was just about to swallow it. There was however something that disturbed me. The "face" of the fish was very clever, I might say even human. It was clear to me that the fish would feel horror and pain if it were to swallow the bait. I woke up and decided that I would not continue my dream hobby if it depended on me.

When I began to ponder this, I realized the connection with the phone discussion. What made the situation so unpleasant was that I saw the "face" of the fish and its ability to suffer. I also felt that in the dream I was both the fisher and the fish, whereas in this something that we are used to calling reality my colleagues were the fisher and I was the fish. The dream clearly wanted me to imagine what it is to be in the position of my colleagues and think what they have felt during these 28 years. I have of course done this many times in my attempts to understand, but not in this context.

Over the years I have repeatedly encountered two obvious but strange things which the dream expressed symbolically. First of all, most colleagues have avoided personal contact with me to the extent that the situation has often reached comic proportions. On the other hand, colleagues have wanted to label me as something so weird that it simply cannot belong to the same species. During the first years I was labelled as a kind of idiot savant lacking all forms of intelligence and even the ability to feel insults (sic!).

Later the idea about some kind of psychopath was added to the repertoire of my psychopathologies. I got a very concrete demonstration of this as I visited Kazan more than a decade ago. One of the young professors, who had just labelled my work as pure rubbish in an official statement, had allowed the locals to understand that I am more or less a complete psychopath who has cold-heartedly rejected his family (I had divorced at that time).

In light of the dream I find it easy to understand these strange and cruel behaviors of the community towards an individual. Dehumanization is the only possible justification for the cruel behavior of the collective against an individual, and it saves individuals from feeling directly how unethical it is. But dehumanization is possible only if you do not see the victim. If you see that the victim is an intelligent living creature able to suffer, you cannot continue. You wake up, as I did from my dream. Without the refusal of personal contacts these people could not continue their dreaming.

The net age has made this painful conflict even more difficult, since it has provided me with communication channels making it even more obvious that I am an intelligent human being and even a theoretical physicist. 15 books on my home page and a name in Wikipedia in the category "Physicists" make it very difficult to seriously continue believing that I am a miserable crackpot. It is clear that I can write. Probably also my ability to talk would become manifest if some academic instance in Finland were to invite me to tell about my work. I can express myself also in my blog, and anyone can become convinced that the label of a mentally retarded psychopath is not from this world.

My belief is that the reason why this situation has reached the verge of collective madness is that for a collective it is extremely difficult to admit that the path once light-heartedly chosen is wrong. Even if it becomes obvious that something horribly wrong and cruel is being done. As perfectly normal and benevolent individuals these people certainly feel that they are doing something very bad and have probably sometimes experienced that I have become a fish with a human face, crying like mad for pain and, even more horrible, refusing to die. This kind of unresolved psychological conflict must be very painful as it continues. For myself, I have done my best not to evoke these feelings in my colleagues, as anyone in this kind of position is biologically programmed to do (Stockholm syndrome, not that one;-)!)

My story is of course a rather tame version of what happens everywhere around the world all the time. There is however something very special in the community of theoretical physics. There is horrible competition, and the situation is not improved by the fact that the community resembles in many respects primitive communities dominated by archaic myth figures (Newton, Einstein,...). There is something very primitive in that these persons are given the status of God, and that an individual who just dares to think aloud can be professionally destroyed by dooming him to be a sufferer of the Einstein syndrome.

If each of us had the social maturity of a sixty-year-old at the age of twenty, I would not be writing this. At one pole of the problem is the self-centeredness of a young person and his poor ability to put himself in the shoes of another human being. At the other pole is the desire for social acceptance and our very poor ability to resist social pressures. No institutional reform can serve as a fast miracle cure, since each individual must start his personal social evolution from scratch. Cultural evolution is needed. Although this evolution has been very fast and perhaps exponential on the biological time scale, it is desperately slow when you take the human lifetime as a time unit.

There are however flashes of light in the darkness now and then. Just a few days ago I experienced something historic. The number of those Finnish colleagues who have contacted me during these years on their own initiative can be counted using the fingers of a single hand; actually a single finger is enough! Now I must add a second finger to my counting system;-). A colleague, who has already resigned, sent me an email and asked my opinion about something as a theoretical physicist. My opinion! As a theoretical physicist!! I was totally embarrassed about my own happiness and had to work hard to calm myself down!

P.S. I have two especially active net-enemies in Finland: Lauri Gröhn and "Optimistx". Both of these fellows have an attitude to truth which brings to my mind the notion of "creative book-keeping". It is frightening to see how deep a hatred my thoughts generate in these fellows. Finnish readers can find my comments about these skeptic militants in Finnish here.

Monday, April 23, 2007

De-coherence and the differential topology of nuclear reactions

I have already described the basic ideas of the nuclear string model in previous postings (such as this). The nuclear string model allows a topological description of nuclear decays in terms of closed string diagrams, and it is interesting to look at what characteristic predictions follow without going into detailed quantitative modelling of stringy collisions, possibly using some variant of string models.

In the de-coherence process explaining giant resonances, eye-glass type singularities of the closed nuclear string appear and make possible nuclear decays as decays of a closed string to closed strings.

  1. At the level of 4He sub-strings the simplest singularities correspond to 4→ 3+1 and 4→ 2+2 eye-glass singularities. The first one corresponds to the low energy GR and the second to one of the higher energy GRs. They can naturally lead to decays in which a nucleon or deuteron is emitted in the decay process. The singularities 4→ 2+1+1 resp. 4→ 1+1+1+1 correspond to eye-glasses with 3 resp. 4 lenses and mean the decay of 4He to a deuteron and two nucleons resp. 4 nucleons. The prediction is that the emission of a deuteron requires a considerably larger excitation energy than the emission of a single nucleon. For GR at the level of A=3 nuclei analogous considerations apply. Taking into account the possible tunnelling of the nuclear strings from the nuclear space-time sheet modifies this simple picture.

  2. For GR in the scale of the entire nucleus the corresponding singular configurations typically make possible the emission of an alpha particle. If only stringy excitations matter, considerably smaller collision energies should be able to induce the emission of alpha particles than the emission of nucleons. The excitation energy needed for the emission of an alpha particle is predicted to increase with A, since the number n of 4He nuclei increases with A. For instance, for Z=N=2n nuclei n→ n-1+1 would require the excitation energy (2n-1)Ec= (A/2-1)Ec, Ec≈ 0.2 MeV. The tunnelling of the alpha particle from the nuclear space-time sheet can modify the situation. (A sketch of this bookkeeping is given after the next paragraph.)

The decay process allows a differential topological description. Quite generally, in the de-coherence process n→ (n-k)+k the color magnetic flux through the closed string must be reduced from n units to n-k units through the first closed string and to k units through the second one. The reduction of the color magnetic fluxes means the reduction of the total color binding energy from n^2×Ec to ((n-k)^2+k^2)×Ec, and the kinetic energy of the colliding nucleons should provide this energy.
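A small sketch of this bookkeeping, using the binding energy scale Ec≈ 0.2 MeV quoted above (the example nuclei are my illustrative choices):

# Bookkeeping for the de-coherence n -> (n-k) + k of a closed nuclear
# string carrying n units of color magnetic flux: the binding energy
# drops from n^2*Ec to ((n-k)^2 + k^2)*Ec, so the collision must supply
# the difference 2*k*(n-k)*Ec. Ec ~ 0.2 MeV as quoted in the text.
# (Item 2 above counts one unit more, (2n-1)*Ec, depending on whether
# the emitted singlet retains a flux unit of its own.)

E_c = 0.2  # MeV

def splitting_cost(n, k):
    """Energy (MeV) needed to split an n-unit string into (n-k) + k."""
    return (n**2 - ((n - k)**2 + k**2)) * E_c  # = 2*k*(n-k)*E_c

# alpha particle emission (k = 1) from Z = N = 2n nuclei with n = A/4
# alpha sub-strings, e.g. O-16 (n = 4), S-32 (n = 8), Ca-40 (n = 10):
for n in (4, 8, 10):
    print("n = %2d: alpha emission costs %.1f MeV" % (n, splitting_cost(n, 1)))

# The cost grows linearly with n, and hence with A, reproducing the
# prediction that heavier nuclei need larger excitation energies.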

Faraday's law, which is essentially a differential topological statement, requires the presence of a time dependent color electric field making possible the reduction of the color magnetic fluxes. The holonomy group of the classical color gauge field G^A_αβ is always Abelian in the TGD framework, being proportional to H^A×J_αβ, where the H^A are color Hamiltonians and J_αβ is the induced Kähler form. Hence it should be possible to treat the situation in terms of the induced Kähler field alone. Obviously, the change of the Kähler (color) electric flux in the reaction corresponds to the change of the Kähler (color) magnetic flux. The change of the color electric flux occurs naturally in a collision situation involving changing induced gauge fields.

For more details see the chapter Nuclear String Hypothesis of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

Quantum criticality, hierarchy of Planck constants, and category theory

I have used some time to ponder the problem of whether category theoretic structures might have direct topological counterparts at the space-time level, with objects possibly identified as the space-time correlates of elementary particles and morphisms as the space-time correlates of their (bound state) interactions.

I became again aware of this problem as I was playing with various interpretations of darkness, understood in terms of a nonstandard value of Planck constant, and with the modification of the imbedding space obtained via the replacement H→ H/Ga×Gb, where Ga and Gb are discrete subgroups of SU(2) associated with Jones inclusions and the different copies of H are glued together along their common points. Ga and Gb could be restricted to be cyclic, thus leaving the choice of quantization axis invariant. A book like structure results, with the different copies of H analogous to the pages of the book. Probably brane people work with analogous structures.

The most conservative form of darkness is that only the field bodies of particles are dark, and that the particle space-time sheet, to which I assign the p-adic prime p characterizing the particle, corresponds to its em field body. Also the Compton length, as determined by the em interaction, would characterize this field body; Compton length would thus be a completely operational concept. This option is implied by the strong hypothesis that elementary particles are maximally quantum critical, meaning that they belong to the subspace of H left invariant by all groups Ga×Gb leaving the quantization axis invariant, so that all dark variants of a particle identified as a 2-D partonic surface would be identical.

The implication would be that a particle possesses a field body associated with each of its interactions, and an extremely rich repertoire of phases emerges if these bodies are allowed to be dark and characterized by p-adic primes. Planck constant would be assigned to a particular interaction of the particle rather than to the particle itself. This conforms with the formula for the gravitational Planck constant hbargr = GMm/v0, v0 ≈ 2⁻¹¹, whose dependence on the particle masses indeed forces the assignment of this constant to the gravitational field body.

What I realized is that if elementary particles are maximally quantum critical, they would be analogous to objects, and the field bodies mediating interactions between them would be analogous to morphisms. The basic structures of category theory would have a direct implementation at the level of the many-sheeted space-time. This looks nice. On the other hand, the basic composition law f(A→B)f(B→C)=f(A→C) for morphisms would mean that particle interactions also obey a composition law. The hierarchy of p-adic primes and Planck constants together with the composition law would imply a hierarchy of interactions, and one could not speak about a single universal interaction between particles. As a matter of fact, one cannot speak about a uni-directional arrow: rather, a bi-directional arrow would be in question.

Wednesday, April 18, 2007

New strange finding about dark matter

Below is a popular summary of the article Dark Energy-Dark Matter Interaction and the Violation of the Equivalence Principle from the Abell Cluster A586 by O. Bertolami, F. Gil Pedro and M. Le Delliou. The article reports an experimental finding interpreted as a possible violation of Equivalence Principle. This finding certainly provides important information about the real nature of dark matter, but I would not interpret it in terms of a failure of Equivalence Principle.

"If Galileo could have dropped a lump of dark matter and a lump of normal matter from the top of the Leaning Tower of Pisa, he might have expected them to fall at the same rate," says Orfeu Bertolami at the Instituto Superior Técnico in Lisbon, Portugal. "But he would have been wrong." Bertolami and his colleagues studied a galaxy cluster known as Abell cluster A586 to see if dark matter and normal matter fall in the same way under gravity. He says this cluster is ideal because it is spherical, suggesting that it has settled down: "The only motion we are seeing now is due to gravity toward the cluster's centre."

The team studied 25 galaxies in the cluster using gravitational lensing - the shift in the apparent position of a light source caused by gravity bending the light. When they analysed the positions of galaxies using conventional models, things just didn't add up. "It only makes sense if the normal matter is falling faster than the dark matter," Bertolami says.

This is the first astronomical observation to suggest that Einstein's principle of equivalence is violated, says Bertolami (www.arxiv.org/astro-ph/0703462 ). "If dark energy interacts with dark matter in some way, it could be affecting its motion."

In the article an estimate for the ratio ρK/ρW of the kinetic and potential energy densities of the visible matter around the Abell cluster is deduced. It is found to be about −0.76 ± 0.05 instead of the −0.5 predicted by the virial theorem. If dark matter accelerates more slowly in the radial direction, its contribution to the potential energy per volume at a given distance is smaller, and the potential energy density remains smaller than otherwise, so that ρK/ρW is larger in magnitude than predicted by the virial theorem.
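For orientation, the −0.5 baseline is nothing but the virial theorem 2⟨K⟩ + ⟨W⟩ = 0. The minimal sketch below checks it for a single test mass on a circular orbit in a point-mass potential; all numerical values are illustrative and not taken from the article.

```python
# Virial baseline for the quoted A586 ratio: for a relaxed self-gravitating
# system 2<K> + <W> = 0, so the kinetic-to-potential ratio is -1/2.
import math

G = 6.674e-11   # m^3 kg^-1 s^-2
M = 1.0e45      # kg, cluster-scale mass (illustrative)
m = 1.0e42      # kg, test galaxy mass (illustrative)
r = 3.0e22      # m, orbital radius, about 1 Mpc (illustrative)

v = math.sqrt(G * M / r)   # circular-orbit speed
K = 0.5 * m * v**2         # kinetic energy
W = -G * M * m / r         # potential energy

print(K / W)   # -> -0.5; the A586 analysis quoted above gives -0.76 +/- 0.05
```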

Let us assume that this finding is not a fake. How could one understand it in the TGD framework? The characteristic difference between ordinary and dark matter in TGD is that dark matter in astronomical length scales corresponds to very large values of Planck constant, forming macroscopic quantum phases. The rate of dissipation is expected to be proportional to 1/hbar and should therefore be much slower for dark matter than for its visible counterpart. This conforms with the much longer time scale of quantum coherence.

Consider in this framework the free fall experiment for objects consisting of visible and dark matter.

  • One cannot just pick up two objects which are both at rest. One must assume that in the initial situation both visible and dark matter have a non-radial velocity component with respect to the galaxy cluster, which is gradually reduced by dissipation so that finally a radial fall towards the center results. For dark matter this reduction occurs at a much slower rate, so that the assumption that dark matter has ended up in a free radial fall need not hold true, or it has done so much later. Therefore the radial component of the velocity for dark matter should be smaller, just as the naive interpretation of the observations suggests.

  • If dark matter is in a quantum state at a Bohr orbit, the situation is like that in the hydrogen atom: dark matter makes transitions to lower orbitals through discrete quantum jumps with the duration of the quantum jump scaling like hbar. It would never end up in a situation in which free radial fall takes place! One would have a situation similar to that encountered more than a century ago with the hydrogen atom: no infrared catastrophe was observed although classical theory predicted it. If so, the solution to the recent depressing situation in quantum gravitation (to put it mildly) would be the same as in the analogous situation a century ago (also at that time physics was thought to be done apart from some minor details!).

The idea about Bohr orbits of astrophysical size sounds of course weird, but the basic finding which led to the evolution of the ideas about dark matter and the spectrum of Planck constants was the Bohr quantization of orbital radii for inner and outer planets with a huge value of Planck constant, hbargr = GMm/v0, v0 ≈ 10⁻⁴. Evidence for a similar quantization has been observed for exoplanets. The explanation would be that dark matter moves along Bohr orbits and the planetary visible matter gravitationally attached to it follows.
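A quick numerical sanity check of the Bohr orbit picture: quantizing angular momentum as L = n·hbargr on a circular orbit gives rn = n²GM/v0² and vn = v0/n, independently of the planet mass. The sketch below applies this to the inner planets; the value v0 ≈ 4.8×10⁻⁴c is my assumed input, consistent with the order of magnitude quoted above.

```python
# Gravitational Bohr orbits: L = n * hbar_gr with hbar_gr = GMm/v0 and
# v^2/r = GM/r^2 give r_n = n^2 * GM / v0^2 (the mass m cancels).
GM_SUN = 1.327e20          # m^3 s^-2
C = 2.998e8                # m/s
V0 = 4.8e-4 * C            # m/s, assumed value of v0

r1 = GM_SUN / V0**2        # innermost Bohr radius
planets = {"Mercury": 5.79e10, "Venus": 1.08e11, "Earth": 1.50e11}  # orbit radii, m

for name, r in planets.items():
    n = (r / r1) ** 0.5    # effective principal quantum number
    print(f"{name}: n = {n:.2f}")   # comes out near the integers 3, 4, 5
```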

For references about planetary Bohr orbitology and details about TGD based model for it, see the chapter TGD and Astrophysics of "Classical Physics in Many-Sheeted Space-Time".

Tuesday, April 17, 2007

Chimps more evolved than humans?

The story Chimps 'more evolved' than humans in the latest New Scientist should give some food for thought to believers in the standard model of genetic evolution. The story summarizes an article published in the Proceedings of the National Academy of Sciences (DOI: 10.1073/pnas.0701705104).
It is time to stop thinking we are the pinnacle of evolutionary success – chimpanzees are the more highly evolved species, according to new research.

Evolutionary geneticist Jianzhi Zhang and colleagues at the University of Michigan in Ann Arbor, US, compared DNA sequences for 13,888 genes shared by human, chimp and rhesus macaques.

For each DNA letter at which the human or chimp genes differ from our shared ancestral form – inferred from the corresponding gene in macaques – researchers noted whether the change led to an altered protein. Genes that have been transformed by natural selection show an unusually high proportion of mutations leading to altered proteins.

Zhang's team found that 233 chimp genes, compared with only 154 human ones, have been changed by selection since chimps and humans split from their common ancestor about 6 million years ago.

This contradicts what most evolutionary biologists had assumed. "We tend to see the differences between us and our common ancestor more easily than the differences between chimps and the common ancestor," observes Zhang.

The result makes sense, he says, because until relatively recently the human population has been smaller than that of chimps, leaving us more vulnerable to random fluctuations in gene frequencies. This prevents natural selection from having as strong an effect overall.

Now that the macaque genome has been sequenced, biologists will be able to learn more about the differences between the apes.

How to interpret this? In my own TGD world view I can imagine two interpretations.

1. Could this relate to introns?

The study looks at changes in genes coding for proteins. Most (more than 95 per cent) of the human genome however consists of introns, which do not code for proteins but could express themselves in some other manner, say electromagnetically. For instance, language could reduce at the basic level to gene expression, but one proceeding through, say, electromagnetic wave patterns. The portion of introns in the genome increases steadily as one climbs up the evolutionary tree. It might be that our evolution is basically the evolution of introns and the corresponding gene expression.

I have proposed the notion of a memetic code along these lines as a third code in the hierarchy of codes labelled by the Mersenne primes of the so called Combinatorial Hierarchy. Mersenne numbers are defined as Mn = 2ⁿ − 1 and the hierarchy is defined by the recursion M(n) = 2^M(n−1) − 1. More explicitly, the levels are given by

3 = 2² − 1, 7 = 2³ − 1, 127 = 2⁷ − 1, M127 = 2¹²⁷ − 1.

It is not known whether the higher Mersennes in the hierarchy are primes. 7 would correspond to the lowest level, 127 to the genetic code, and M127 to what I call the memetic code, perhaps related to cultural evolution. One motivation is that the fundamental 10 Hz biorhythm, appearing also in the alpha band, corresponds to the secondary p-adic time scale associated with M127. At the DNA level the memetic codons would consist of sequences of 21 genetic codons, and this kind of structural element should be found in the intronic sequences if they relate to the memetic code.
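For concreteness, the sketch below generates the first levels of the hierarchy and tests their primality (sympy's isprime suffices here); the level after M127 has about 2¹²⁷ binary digits, far beyond any primality test.

```python
# Combinatorial Hierarchy: starting from 2, iterate M -> 2^M - 1.
# Levels: 3, 7, 127, 2^127 - 1, all of which happen to be primes.
from sympy import isprime

level = 2
for _ in range(4):
    level = 2**level - 1
    print(f"{len(str(level))}-digit level, prime: {isprime(level)}")
```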

As a matter of fact, M127 plays a fundamental role in TGD based physics: it codes for the p-adic length scale of the electron and the graviton. Also electro-pions, which are bound states of color excitations of leptons, correspond to M127: their existence was suggested by an old anomaly in heavy ion collisions near the Coulomb wall, and quite recently evidence for the muo-pion has emerged. The exotic quarks appearing in the nuclear string model, about which I have been talking a lot recently, also correspond to M127.

2. Could the notions of super- and hypergenome relate to the finding?

TGD inspired biology leads to the notions of super- and hyper-genome, which could also allow one to understand the strange discovery. The explanation need not exclude the intron interpretation.

  • The key notion is that of the magnetic body, having a size much larger than the biological body. The notion of field body is forced by what I call topological field quantization. The magnetic body carrying dark matter would be the quantum controlling agent using the biological body as a motor instrument and sensory receptor. The magnetic body has an onion-like fractal structure consisting of magnetic bodies within magnetic bodies. EEG, consisting of dark photons with a large Planck constant (to guarantee that EEG photon energies are above the thermal energy at room temperature), and its fractal counterparts would be the tools of quantum control and communication.

  • Motor control would naturally take place via the genome: magnetic flux sheets of the magnetic body would traverse the DNA strands. Communication of sensory data to the magnetic body would in turn take place from the cell membranes, which are full of sensory receptors. If one accepts the hypothesis about the hierarchy of increasing values of Planck constant explaining dark matter as macroscopic quantum phases, flux quantization implies a surprising result: the number of DNAs traversed by a single magnetic flux sheet is very large if the flux sheets carry dark matter. Thus genomes would arrange into larger structures, which could relate directly to the body scale coherence of biological activities.

  • The super-genome would be formed by flux sheets containing the genomes associated with a single organ or even organism, arranged like the pages of a book, with the lines of text on each page formed by sequences of genomes.

  • For the hyper-genome the lines of text would consist of super-genomes of different organisms, even those belonging to different species. Each great leap in evolution at the level of the individual would bring in a new level of the dark matter hierarchy, with a larger Planck constant and a scaled up characteristic quantum time scale relating directly to the time scale of planned action and memory. The explosive evolution of the hyper-genome would distinguish us from our cousins in the jungle, whose genetic evolution has been restricted to that of the genome and super-genome. The emergence of language would have launched our cultural evolution.

For more details see the chapters of (say) TGD and EEG.

Saturday, April 14, 2007

TGD prediction for gravitomagnetic field differs from GRT prediction

I realized that imbeddability in the post-Newtonian approximation is questionable if one assumes vacuum extremal property, as I began to look in detail at small deformations of the simplest imbedding of the Schwarzschild metric.

1. Simplest candidate for the metric of a rotating star

The simplest candidate for the metric of a rotating star is obtained by assuming a vacuum extremal imbeddable to M4 × S2, where S2 is the geodesic sphere of CP2 with vanishing homological charge and vanishing induced Kähler form. Use coordinates (Θ,Φ) for S2 and spherical coordinates (t,r,θ,φ) for the space-time surface, identifiable as M4 spherical coordinates apart from a scaling and an r-dependent shift in the time coordinate.

  1. For the Schwarzschild metric one has Φ= ωt

    and

    u= sin(Θ)= f(r),

    f is fixed highly uniquely by the imbedding of the Schwarzschild metric, and asymptotically one must have

    u =u0 + C/r

    in order to obtain the gtt = 1 − 2GM/r (= 1 + Φgr) behavior for the induced metric.

  2. The small deformation giving rise to the gravitomagnetic field and metric of rotating star is given by

    Φ = ωt+nφ

    There is an obvious analogy with the phase of a Schrödinger amplitude for an angular momentum eigenstate with Lz = n, which conforms with quantum classical correspondence.

  3. The non-vanishing component of Ag is proportional to the gravitational potential Φgr

    Agφ = gtφ = (n/ω)Φgr.

  4. A little calculation (cross-checked symbolically in the sketch after this list) gives for the magnitude of Bgθ from the curl of Ag the expression

    Bgθ = (n/ω) × (1/sin(θ)) × 2GM/r³.

    In the plane θ=π/2 one has a dipole field, and the value of n is fixed by the angular momentum of the star.

  5. Quantization of angular momentum is obtained for a given value of ω. This becomes clear by comparing the field with a dipole field in the θ= π/2 plane. Note that GJ, where J is the angular momentum, takes the role of the magnetic moment in Bg (see this). ω appears as a free parameter analogous to energy in the imbedding and means that the unit of angular momentum varies. In the TGD framework this could be interpreted in terms of a dynamical Planck constant, having in the most general case any rational value but with a spectrum of number theoretically preferred values. Dark matter is interpreted as phases with a large value of Planck constant, which makes possible macroscopic quantum coherence even in astrophysical length scales. Dark matter would induce quantum like effects on visible matter. For instance, the periodicity of small n states might be visible as patterns of visible matter with a discrete rotational symmetry (could this relate to the strange goings on in Saturn? See also the Red Square!).
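The following sketch redoes the little calculation of item 4 with sympy. The identification of the physical φ-component as Agφ/(r sinθ), with Agφ = (n/ω)Φgr and Φgr = −2GM/r, is my reading of the conventions above rather than something spelled out in the text.

```python
# theta-component of the curl of a vector field with only a phi-component,
# in spherical coordinates: (curl A)_theta = -(1/r) d(r A_phi)/dr.
import sympy as sp

r, theta, G, M, n, w = sp.symbols('r theta G M n omega', positive=True)

Phi_gr = -2*G*M/r                             # g_tt = 1 + Phi_gr = 1 - 2GM/r
A_phi = (n/w) * Phi_gr / (r*sp.sin(theta))    # assumed physical phi-component

B_theta = -sp.diff(r*A_phi, r) / r
print(sp.simplify(B_theta))   # -> -2*G*M*n/(omega*r**3*sin(theta))
```

The magnitude reproduces the (n/ω) × (1/sin(θ)) × 2GM/r³ quoted above.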

2. Comparison with the dipole field

The simplest candidate for the gravitomagnetic field differs in many respects from a dipole field.

  1. The gravitomagnetic field has a 1/r³ dependence, so that the distance dependence is the same as in GRT.

  2. The gravitomagnetic flux flows along the z-axis in opposite directions on the two sides of the z=0 plane, emanates radially from the z-axis, and flows along a spherical surface. Hence the radial component of Bg would vanish, whereas for a dipole field it would be proportional to cos(θ).

  3. The dependence on the angle θ of the spherical coordinates is 1/sin(θ) (this conforms with the radial flux from the z-axis), whereas for a dipole field the magnitude of Bgθ has the dependence sin(θ). In the z=0 plane the magnitude and direction coincide with those of the dipole field, so that satellites moving at the gravitomagnetic equator would not distinguish between GRT and TGD, since also the radial component of Bg vanishes there.

  4. For other orbits the effects would be non-trivial, and in the vicinity of the flux tube formally arbitrarily large effects are predicted because of the 1/sin(θ) behavior, whereas GRT predicts sin(θ) behavior. Therefore TGD could be tested using satellites near the gravitomagnetic North pole (see the sketch after this list).

  5. The strong gravitomagnetic field near the poles causes a gravitomagnetic Lorentz force and could be responsible for the formation of the jets emanating from black hole like structures and for galactic jets. This additional force might also have played some role in the formation of planetary systems, and the plane in which the planets move might correspond to the plane θ=π/2, where the gravitomagnetic force has no component orthogonal to the plane. The same applies in the case of galaxies.
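To see how fast the two predictions separate away from the equator, normalize both angular profiles to unity at θ = π/2 and compare (a pure illustration, with no input beyond the 1/sin(θ) and sin(θ) forms stated above):

```python
# Ratio of the TGD angular profile 1/sin(theta) to the dipole profile sin(theta),
# both normalized to 1 at the equator; the ratio is 1/sin^2(theta).
import math

for deg in (90, 60, 30, 10, 5):
    th = math.radians(deg)
    ratio = (1.0 / math.sin(th)) / math.sin(th)
    print(f"theta = {deg:2d} deg: TGD/dipole = {ratio:6.1f}")
```

At 5 degrees from the pole the two predictions already differ by two orders of magnitude, which is why a polar satellite would be decisive.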

3. Consistency with the model for the asymptotic state of star

In the TGD framework the natural candidates for the asymptotic states of a star are solutions of field equations for which the gravitational four-momentum is locally conserved. Vacuum extremals must therefore satisfy the field equations resulting from the variation of Einstein's action (possibly with a cosmological constant) with respect to the induced metric. Quite remarkably, the solution representing the asymptotic state of the star is necessarily rotating (see this).

The proposed picture is consistent with the model for the asymptotic state of the star. Also the magnetic parts of the ordinary gauge fields have essentially similar behavior. This is actually obvious, since the CP2 coordinates are the fundamental dynamical variables and the field line topologies of the induced gauge fields and the induced metric are therefore very closely related.

Addition: Lubos Motl's blog tells us that the error bars are still twice the size of the predicted frame dragging effect. Already this information would have killed the (strongly) TGD inspired model, had the satellite not been at the equator! The sad conclusion is that, unless my blog page inspires a new many year project with a satellite nearer to one of the poles, which does not seem very plausible, we lose the possibility to kill GRT or TGD for years to come.

For the background see the chapter TGD and GRT of "Classical Physics in Many-Sheeted Space-Time".

Gravity Probe B results are coming today

Kea has already informed us that the Gravity Probe B results will be discussed today by C. W. Francis Everitt from Stanford at 8.30 local time. There are also links in Kea's blog. Here is a slightly reformatted abstract of the talk.

The NASA Gravity Probe B (GP-B) orbiting gyroscope test of General Relativity, launched from Vandenberg Air Force Base on 20 April, 2004, tests two consequences of Einstein's theory:

  1. the predicted 6.6 arc-s/year geodetic effect due to the motion of the gyroscope through the curved space-time around the Earth;

  2. the predicted 0.041 arc-s/year frame-dragging effect due to the rotating Earth.

The mission has required the development of cryogenic gyroscopes with drift-rates 7 orders of magnitude better than the best inertial navigation gyroscopes. These and other essential technologies, for an instrument which once launched must work perfectly, have come into being as the result of an intensive collaboration between Stanford physicists and engineers, NASA and industry. GP-B entered its science phase on August 27, 2004 and completed data collection on September 29, 2005. Analysis of the data has been in continuing progress during and since the mission. This paper will describe the main features and challenges of the experiment and announce the first results.

The Confrontation between General Relativity and Experiment gives an excellent summary of various tests of GRT. The predictions tested by GP-B relate to gravitomagnetic effects. The field equations of general relativity in the post-Newtonian approximation with a choice of a preferred frame can in a good approximation (gij = −δij) be written in a form highly reminiscent of Maxwell's equations, with the gtt component of the metric defining the counterpart of the scalar potential giving rise to the gravito-electric field, and gti the counterpart of the vector potential giving rise to the gravitomagnetic field.

A rotating body generates a gravitomagnetic field, so that bodies moving in the gravitomagnetic field of a rotating body experience the analog of the Lorentz force, and a gyroscope suffers a precession similar to that suffered by a magnetic dipole in a magnetic field (the Thirring-Lense effect, or frame dragging). Besides this there is the geodetic precession due to the motion of the gyroscope in the gravito-electric field, present even in the case of a non-rotating source, which might perhaps be understood in terms of a gravito-Faraday law. Both of these effects are tested by GP-B.
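As a sanity check on the two numbers quoted in the abstract, the sketch below evaluates the standard post-Newtonian formulas for a circular polar orbit at 642 km altitude: the geodetic rate (3/2)GMv/(c²r²) and the orbit-averaged frame dragging rate GJ/(2c²r³). Earth's spin angular momentum J ≈ 5.86×10³³ kg·m²/s is an assumed input.

```python
# GP-B cross-check: geodetic and frame-dragging precession rates for a
# circular polar orbit at 642 km altitude, from the standard GEM formulas.
import math

G, c = 6.674e-11, 2.998e8
M, J = 5.972e24, 5.86e33            # Earth's mass (kg) and spin angular momentum
r = 6.371e6 + 6.42e5                # orbital radius (m)
ARCSEC_PER_RAD, SEC_PER_YEAR = 206265.0, 3.156e7

v = math.sqrt(G * M / r)            # circular-orbit speed
geodetic = 1.5 * G * M * v / (c**2 * r**2)
frame_drag = G * J / (2 * c**2 * r**3)

for name, rate in (("geodetic", geodetic), ("frame dragging", frame_drag)):
    print(f"{name}: {rate * SEC_PER_YEAR * ARCSEC_PER_RAD:.3f} arc-s/year")
# -> about 6.6 and 0.041 arc-s/year, matching the abstract
```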

In the following I say something general about how TGD and GRT differ, and also something about the predictions of TGD concerning the GP-B experiment.

1. TGD and GRT?

Consider first basic differences between TGD and GRT.

  1. In TGD local Lorentz invariance is replaced by exact Poincare invariance at the level of the imbedding space H= M4× CP2. Hence one can use unique global Minkowski coordinates for the space-time sheets and gets rid of the problems related to the physical identification of the preferred coordinate system.

  2. General coordinate invariance holds true in both TGD and GRT.

  3. The basic difference between GRT and TGD is that in the TGD framework the gravitational field is induced from the metric of the imbedding space. One important cosmological implication is that the imbeddings of the Robertson-Walker metric for which the gravitational mass density is critical or overcritical fail after some value of cosmic time. Also the classical gauge potentials are induced from the spinor connection of H, so that the geometrization applies to all classical fields. Very strong constraints between fundamental interactions at the classical level are implied, since the CP2 coordinates are the fundamental dynamical variables at the level of the macroscopic space-time.

  4. Equivalence Principle holds in TGD only in a weak form, in the sense that the gravitational energy momentum currents (rather than a tensor) are not identical with the inertial energy momentum currents. Inertial four-momentum currents are conserved but gravitational ones are not. This explains the non-conservation of gravitational mass in cosmological time scales. At the more fundamental parton level (light-like 3-surfaces, to which an almost-topological QFT is assigned) the inertial four-momentum can be regarded as the time-average of the non-conserved gravitational four-momentum, so that Equivalence Principle would hold in an average sense. The non-conservation of gravitational four-momentum relates very closely to particle massivation.

2. TGD and GP-B

There are excellent reasons to expect that the Maxwellian picture holds true to a good accuracy if one uses Minkowski coordinates for the space-time surface. In fact, TGD allows static solutions with a 2-D CP2 projection for which the prerequisites of the Maxwellian interpretation are satisfied (the deviations of the spatial components gij of the induced metric from −δij are negligible).

The Schwarzschild and Reissner-Nordström metrics allow imbeddings as 4-D surfaces in H, but the Kerr metric assigned to rotating systems probably does not. If this is indeed the case, the gravitomagnetic field of a rotating object in the TGD Universe cannot be identical with the exact prediction of GRT, but could be so in the post-Newtonian approximation. The scalar and vector potentials correspond to four field quantities, and the number of CP2 coordinates is four. Imbedding as a vacuum extremal with a 2-D CP2 projection guarantees automatically the consistency with the field equations, but requires the orthogonality of the gravito-electric and -magnetic fields. This holds true in the post-Newtonian approximation in the situation considered. This indeed suggests that, apart from the restrictions caused by the failure of the global imbedding at short distances, one can imbed the post-Newtonian approximation into H in the approximation gij = −δij. If so, the predictions for the Thirring-Lense effect would not differ measurably. The predictions for the geodetic precession, involving only the scalar potential, would be identical.

There are some reasons to think that gravitomagnetic fields might have a surprise in store. The physicists M. Tajmar and C. J. Matos and their collaborators working for ESA (the European Space Agency) have made an amazing claim of having detected strong gravimagnetism, with a gravitomagnetic field having a magnitude about 20 orders of magnitude higher than predicted by General Relativity (arXiv.org gr-qc 0603032; arXiv.org gr-qc 0603033; Phys. Rev. Lett. 62 (8), 845-848; Phys. Rev. B 42 (13), 7885-7893). A possible TGD based explanation of the effect is discussed here.

To sum up, TGD predicts that the geodetic precession should come out as in GRT, but the Thirring-Lense effect might differ from the prediction of GRT.

For more details see the chapter TGD and GRT of "Classical Physics in Many-Sheeted Space-Time".

Thursday, April 12, 2007

About the phase transition transforming ordinary deuterium to exotic deuterium in cold fusion

In the previous posting I already told about a model of cold fusion based on the nuclear string model, which predicts that ordinary nuclei have exotic charge states. In particular, the deuterium nucleus possesses a neutral exotic state, which would make it possible to overcome the Coulomb wall and make cold fusion possible.

1. The phase transition

The exotic deuterium at the surface of the Pd target seems to form patches (for a detailed summary see TGD and Nuclear Physics). This suggests that a condensed matter phase transition involving also nuclei is involved. A possible mechanism giving rise to this kind of phase would be a local phase transition in the Pd target involving both D and Pd. In the above reference it was suggested that deuterium nuclei transform in this phase transition to "ordinary" di-neutrons connected by a charged color bond to Pd nuclei. In the recent case the di-neutron could be replaced by a neutral D.

The phase transition transforming a neutral color bond to a negatively charged one would certainly involve the emission of a W+ boson, which must be exotic in the sense that its Compton length is of the order of atomic size, so that it could be treated as a massless particle and the rate for the process would be of the same order of magnitude as for electromagnetic processes. One can imagine two options.

  1. Exotic W+ boson emission generates a positively charged color bond between the Pd nucleus and the exotic deuteron, as in the previous model.

  2. The exchange of exotic W+ bosons between ordinary D nuclei and Pd induces the transformation Z→Z+1, inducing an alchemic phase transition Pd→Ag. The most abundant Pd isotopes, with A=105 and 106, would transform to states of the same mass but chemically equivalent with the two lightest long-lived Ag isotopes. 106Ag is unstable against β+ decay to Pd, and 105Ag transforms to Pd via electron capture. For 106Ag (105Ag) the rest energy is 4 MeV (2.2 MeV) higher than for 106Pd (105Pd), which suggests that the resulting silver cannot be genuine.

    This phase transition need not be favored energetically, since the energy loaded into the electrolyte could induce it. The energies should (and in the recent scenario could) correspond to energies typical for condensed matter physics. The densities of Ag and Pd are 10.49 g·cm⁻³ and 12.023 g·cm⁻³, so that the phase transition would expand the volume by a factor 12.023/10.49 ≈ 1.146, corresponding to a linear expansion by a factor (12.023/10.49)^(1/3) ≈ 1.0465 (see the sketch below). The porous character of Pd would allow this. The needed critical packing fraction for Pd would guarantee one D nucleus per Pd nucleus with a sufficient accuracy.
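The arithmetic behind the quoted factor, for the record (the cube root of the density ratio gives the linear expansion):

```python
# Pd -> Ag expansion: the volume grows by the density ratio,
# linear dimensions by its cube root.
rho_Ag, rho_Pd = 10.49, 12.023                 # g/cm^3
volume_factor = rho_Pd / rho_Ag
print(volume_factor, volume_factor ** (1/3))   # -> 1.146..., 1.0465...
```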

2. Exotic weak bosons seem to be necessary

The proposed phase transition cannot proceed via the exchange of ordinary W bosons. Rather, W bosons having a Compton length of the order of atomic size are needed. These W bosons could correspond to a scaled up variant of the ordinary W bosons having a smaller mass, perhaps even of the order of the electron mass. They could also be dark in the sense that the Planck constant for them would have the value h = n·h0, implying the scaling up of their Compton size by n. For n ≈ 2⁴⁸ the Compton length of the ordinary W boson would be of the order of atomic size, so that for interactions below this length scale weak bosons would be effectively massless. A p-adically scaled up copy of weak physics with a large value of Planck constant could be in question. For instance, the W bosons could correspond to the nuclear p-adic length scale L(k=113) and n = 2¹¹.
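A quick scale estimate for the last option. The p-adic length scale convention L(k) = 2^((k−127)/2)·L(127) with L(127) ≈ 2.4×10⁻¹² m (the electron's p-adic length scale) is assumed here from standard TGD usage rather than taken from this posting.

```python
# W boson at the nuclear p-adic length scale L(113), with Compton size
# scaled up by n = 2^11 via a dark Planck constant.
L127 = 2.4e-12                            # m, assumed electron p-adic length scale

def L(k):
    return 2 ** ((k - 127) / 2) * L127    # assumed p-adic length scale convention

n = 2 ** 11
print(f"L(113) = {L(113):.1e} m, n * L(113) = {n * L(113):.1e} m")
# -> ~1.9e-14 m (nuclear) and ~3.8e-11 m (atomic), as the text suggests
```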

For more details see the chapter TGD and Nuclear Physics and the new chapter Nuclear String Hypothesis of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

Wednesday, April 11, 2007

LSND and MiniBooNE are consistent in TGD Universe

The MiniBooNE group has published its first findings concerning neutrino oscillations in the mass range studied in the LSND experiment. For the results see the press release, the guest posting of Dr. Heather Ray in Cosmic Variance, and the more technical article A Search for Electron Neutrino in the Δm² = 1 eV² scale by the MiniBooNE group.

1. The motivation for MiniBooNE

Neutrino oscillations are not well understood. Three experiments, LSND, atmospheric neutrinos, and solar neutrinos, show oscillations, but in widely different mass squared regions (1 eV², 3×10⁻³ eV², and 8×10⁻⁵ eV²). This is the problem.

In the TGD framework the explanation would be that neutrinos can appear in several p-adically scaled up variants with different mass scales, and therefore different scales for the mass squared differences Δm², so that one should not try to explain the results of these experiments using a single neutrino mass scale. TGD is however not mainstream physics, so that colleagues stubbornly try to put all feet in the same shoe (Dear feet, I am sorry for this: I can assure you that I have done my best to tell the colleagues, but they do not want to listen;-)).

One can of course understand the stubbornness of colleagues. In the single-sheeted space-time where colleagues still prefer to live it is very difficult to imagine that the neutrino mass scale would depend on the neutrino energy (on the space-time sheet at which topological condensation occurs, using TGD language), since neutrinos interact so extremely weakly with matter. The best known attempt to assign a single mass scale to all neutrinos has been based on the use of so called sterile neutrinos, which do not have electro-weak couplings. This approach is an ad hoc trick and rather ugly mathematically.

2. The result of MiniBooNE experiment

The purpose of the MiniBooNE experiment was to check whether the LSND result Δm² = 1 eV² is genuine. The group used a muon neutrino beam and looked at whether transformations of muon neutrinos to electron neutrinos occur in the mass squared region considered. No such transitions were found, but there was evidence for transformations at low neutrino energies.

What looks at first like an over-diplomatic formulation of the result was

MiniBooNE researchers showed conclusively that the LSND results could not be due to simple neutrino oscillation, a phenomenon in which one type of neutrino transforms into another type and back again.

rather than a direct refutation of the LSND results.

3. LSND and MiniBooNE are consistent in TGD Universe

An inhabitant of the many-sheeted space-time would not regard the previous statement as a mere diplomatic use of language. It is quite possible that the neutrinos studied in MiniBooNE have suffered topological condensation at a different space-time sheet than those in LSND, if they are in a different energy range. To see whether this is the case, let us look more carefully at the experimental arrangements.

  1. In the LSND experiment an 800 MeV proton beam entered a water target, and the muon neutrinos resulted from the decay of the produced pions. The muon neutrinos had energies in the 60-200 MeV range. This one can learn from the article Evidence for νμ → νe oscillations from LSND.

  2. In the MiniBooNE experiment an 8 GeV proton beam entered a Beryllium target, and the muon neutrinos resulted from the decay of the produced pions and kaons. The resulting muon neutrinos had energies in the range 300-1500 MeV, to be compared with 60-200 MeV! This is it! This one can learn from the article A Search for Electron Neutrino in the Δm² = 1 eV² scale by the MiniBooNE group.

Let us try to make this more explicit.
  1. Neutrino energy ranges are quite different, so that the experiments need not be directly comparable. The mixing obeys the analog of the Schrödinger equation for a free particle, with energy replaced by Δm²/E, where E is the neutrino energy. The mixing probability as a function of the distance L from the source of muon neutrinos is in the 2-component model given by

    P = sin²(2θ) sin²(1.27Δm²L/E), Δm² in eV², L in m, E in MeV.

    The characteristic length scale for mixing is L = E/Δm². If L is sufficiently small, the mixing is fifty-fifty already before the muon neutrinos enter the system where the measurement is carried out, and no energy dependent mixing is detected within the length scale resolution used. If L is considerably longer than the size of the measuring system, no mixing is observed either. Therefore the result can be understood if Δm² is much larger or much smaller than E/L, where L is the size of the measuring system and E is the typical neutrino energy (see the sketch after this list).

  2. The MiniBooNE experiment found evidence for the appearance of electron neutrinos at low neutrino energies (below 500 MeV), which means direct support for the LSND findings and for the dependence of the neutrino mass scale on its energy relative to the rest system defined by the space-time sheet of the laboratory.

  3. Uncertainty Principle inspires the guess Lp ∝ 1/E, implying mp ∝ E. Here E is the energy of the neutrino with respect to the rest system defined by the space-time sheet of the laboratory. Solar neutrinos indeed have the lowest energy (below 20 MeV) and the lowest value of Δm². However, atmospheric neutrinos have energies starting from a few hundred MeV, and Δm² is only by a factor of order 10 higher. This suggests that the growth of Δm² with E² is slower than linear. It is perhaps not the energy alone which matters but the space-time sheet at which the neutrinos topologically condense. MiniBooNE neutrinos above 500 MeV would topologically condense at space-time sheets for which the p-adic mass scale is higher than in the LSND experiment, and one would have Δm² >> 1 eV², implying maximal mixing in a length scale much shorter than the size of the experimental apparatus.

  4. One could also argue that topological condensation occurs in condensed matter and that no topological condensation occurs for high enough neutrino energies, so that the neutrinos remain massless. One can even consider the possibility that the p-adic length scale Lp is proportional to E/m0², where m0 is proportional to the mass scale associated with non-relativistic neutrinos. The p-adic mass scale would then obey mp ∝ m0²/E, so that the characteristic mixing length would be by a factor of order 100 longer in the MiniBooNE experiment than in LSND.
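Item 1 can be made concrete with a few lines of code. The sketch below evaluates the oscillation phase 1.27Δm²L/E for the two experiments with Δm² = 1 eV²; the source-detector distances (about 30 m for LSND, about 541 m for MiniBooNE) are the published values, and the prefactor sin²(2θ) is left out since only the phase matters for the argument.

```python
# Two-flavor mixing: P = sin^2(2 theta) * sin^2(1.27 dm2 L / E),
# with dm2 in eV^2, L in metres and E in MeV (so the phase is in radians).
import math

def phase(dm2_ev2, L_m, E_mev):
    return 1.27 * dm2_ev2 * L_m / E_mev

for name, L, energies in (("LSND", 30.0, (60, 100, 200)),
                          ("MiniBooNE", 541.0, (300, 800, 1500))):
    for E in energies:
        x = phase(1.0, L, E)
        print(f"{name}: E = {E:4d} MeV -> phase = {x:5.2f} rad, sin^2 = {math.sin(x)**2:.2f}")

# For dm2 >> E/(1.27 L) the phase winds rapidly and averages to maximal mixing;
# for dm2 << E/(1.27 L) essentially no mixing develops, as argued in item 1.
```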

To sum up, in the TGD Universe LSND and MiniBooNE are consistent and provide additional support for the dependence of the neutrino mass scale on neutrino energy.

For more details see the chapter p-Adic Mass Calculations: Elementary Particle Masses of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".