## Thursday, May 26, 2005

### Is it possible to predict the occurrence of super string revolutions?

As I mentioned in my previous posting, there will be a panel discussion about the next superstring revolution at the Strings 05 conference, to be held at the University of Toronto July 11-16.
It seems that these revolutions occur more or less periodically, roughly once in a decade. I would bet on an eleven-year period, since it would suggest a correlation with sunspot maxima causing intense magnetic storms, which are known to have a strong effect on sensitive people, as the statistics from mental hospitals demonstrate. A strictly periodic occurrence would certainly diminish the media value of these events, since people would soon just nod and say something like "Good grief, I am again one string revolution older; only two and a half revolutions and I will have my 70^{th} birthday, so the time goes!". An unpredictability of, say, plus or minus 2 years would be much better. People could make bets about the precise moment of the event in a world lottery. No doubt, a new branch of science called string-ology would emerge, aiming to predict the precise occurrence of revolutions by identifying precursors: foreshocks in physics-oriented web blogs, weird behavior of young researchers in physics departments, and an increase in the number and propagation speed of rumours about what Witten is doing just now. Predictability would be highly desirable, since university people could receive a warning about what is to be expected, so that panic behavior could be avoided and the number of academic victims minimized. Matti Pitkänen

## Wednesday, May 25, 2005

### Superstring Revolution or TGD Liberation?

Lubos Motl wrote a nice posting about the next super string revolution, making a list of questions. What astonished and delighted me was the openness to new ideas, which gives hope of getting out of the recent dead end.
I would perhaps express my dream differently and speak about TGD liberation rather than super string revolution. M-theorists have developed enormous know-how about advanced mathematics, and since M-theory has produced practically no experimental predictions, I see no deep reason why one should stick to the M-theory context. Standard model symmetries are of course something totally different.
In order to proceed, all of us should do just what Lubos proposes and what I did almost 27 years ago: ask what might be wrong in the recent approach, which has developed from GUT type unifications to M-theory, sharing with them the basic interpretation of the quantum number spectrum.
There are three kinds of key questions I can imagine asking.
a) What are the deep questions which might be asked even by non-professionals and which are not properly answered in the recent conceptual framework of physics?
b) What can we really conclude about basic symmetries using experimental facts from particle physics and cosmology?
c) Is the recent form of quantum field/string theory, based on the poorly defined path integral concept, final? Could one imagine generalizations of the basic quantum theory?

### A. Philosophical questions

- Do we really understand the notions of inertial and gravitational energy?
The notions of energy and momentum are well defined in Special Relativity but become poorly defined in General Relativity. Poincare momenta are conserved, but gravitational momenta are not conserved in cosmological length scales. Should one weaken the Equivalence Principle somehow? Could one unify special and general relativities?
Space-time as a 4-surface in M^{4}×S does this. Poincare invariance holds at the level of the imbedding space. The classical gravitational field is identified as the induced metric. This also means a generalization of string models, with the string replaced by a 3-surface. One basic prediction is that Poincare energy can have both signs. The proposal is that gravitational energy corresponds to the absolute value of inertial energy and is thus non-conserved and in general non-vanishing for inertial vacua. Robertson-Walker cosmologies are predicted to be inertial vacua: in other words, inertial mass density should vanish in cosmological length scales. The possibility of negative energies and signals propagating backwards in geometric time has deep implications both for the understanding of living matter and for energy/communication technology. Phase conjugate photons could perhaps be interpreted as negative energy photons. Questions: Is the identification of space-time as a 4-surface in the 8-D space M^{4}×CP_{2} consistent with experimental facts? Is d=8 too low a dimension for classical gravitation? Many-sheetedness allows extreme flexibility, and the cosmological predictions (mass density cannot be over-critical) provide strong support for the proposal.
- Should one generalize the geometrization of physics program to an infinite-dimensional context?
The configuration space of 3-surfaces, the world of classical worlds, would be the basic object to be endowed with Kähler geometry and spinor structure. Quantum states of the Universe would be modes of classical spinor fields in CH.
- General coordinate invariance for space-time surfaces becomes a much more general concept. The definition of the configuration space Kähler metric assigns a unique space-time surface to a given 3-surface. Space-time as a Bohr orbit of a 3-surface.
- Is the infinite-dimensional Kähler geometric existence unique? Loop space Kähler geometries are unique, and in the higher-dimensional context the constraints from the existence of the Riemann connection are even stronger. Loop space has an infinite Ricci scalar, which signals that something must go wrong with string models. The world of classical worlds would be a union of symmetric spaces for which all points are metrically equivalent: this gives hopes of calculability. The parameters labelling the spaces in the union are non-quantum-fluctuating zero modes identifiable as the classical observables of quantum measurement theory. The requirement of a finite curvature scalar means its vanishing, and the Einstein tensor must vanish as well. A hyper-Kähler structure is expected. The imbedding space is not dynamical but "God given", so that the landscape misery disappears.
- Geometrization of the fermionic oscillator operators and conformal super generators via their identification as configuration space gamma matrices. The new element is that the gammas carry fermion number and half-odd-integer spin. The Majorana condition must be given up already by the separate conservation of quark and lepton numbers, which correspond to different chiralities of M^{4}×CP_{2} spinors. The number-theoretically favoured D=8 for the imbedding space thus becomes possible.
- The Clifford algebra of CH (for which the tangent space is a separable Hilbert space) is a hyper-finite factor of type II_{1}. Quantum groups, conformal theories, ... are an automatic outcome. Jones inclusions correspond to subsystem-system inclusions. The value of hbar characterizes the inclusion. A dynamical hbar is highly suggestive.
- Generalization of 2-D conformal invariance requires D=4 for space-time. Lightlike 3-D CDs are metrically 2-dimensional and allow a generalization of 2-D conformal invariance. Stringy conformal invariance generalizes and fixes the space-time dimension to be D=4. Effective 2-dimensionality means that 2-dimensional partonic surfaces code for physical states. The dependence of states on normal derivatives at boundary components means that there is actually 3-dimensionality. There is a hydrodynamical analogue for this: the fluxes of various conserved quantities over 2-D surfaces surrounding sub-systems code for the evolution of the state of the system in practice.

- Quantum measurement theory
Quantum measurement theory is plagued by paradoxes.
- What differentiates between experienced time and geometric time? There are many paradoxes resulting from the identification of these two times in quantum measurement theory and thermodynamics. A generalization of the notion of quantum jump, in which a deterministic quantum evolution identifiable as a quantum superposition of classical evolutions is replaced with a new one, would resolve these logical paradoxes. Experienced time would correspond to the sequence of quantum jumps and would correspond to geometric time only under special conditions. Some implications: the space-time surface as a living system; a four-dimensional brain and a quantum model of memory.
- Configuration space (the world of classical worlds) zero modes as genuine classical variables characterizing the shape and other non-quantum-fluctuating aspects of the space-time surface?
- Quantum and consciousness: should one make the conscious observer part of the system studied? Matter and the theory of matter as parts of a physical system in a generalized sense. von Neumann inclusions could realize the hierarchy of matter, theory about matter, theory about the theory of matter, ... Zero energy states entangled by a crossing symmetric S-matrix have an interpretation as a Connes tensor product. The S-matrix would realize the laws of quantum physics in the structure of the zero energy state. A cognitive representation would be in question. The infinite hierarchy of Jones inclusions would correspond to a hierarchy of levels of reflective consciousness.
- Quantum classical correspondence. Also quantum jump sequences should have space-time correlates. This forces the breaking of strict classical determinism. The Maxwell action for the projection of the CP_{2} Kähler form to the space-time surface is a unique choice guaranteeing this. Pure gauge solutions give rise to an infinite vacuum degeneracy, implying non-determinism for non-vacua and making the universe a quantum spin glass.

- Should one give up the reductionistic dogma? Is the reduction to Planck scale an illusion: can we really understand living matter starting from M-theory? Is the Planck scale the fundamental scale, or does it follow as a prediction? Many-sheeted space-time breaks the reductionistic view.

### B. Questions inspired by particle physics and cosmology

- Are standard model symmetries something more fundamental than has been thought?
- Does the standard model gauge group have a much deeper meaning than thought? CP_{2} codes for the standard model gauge group, and M^{4}×CP_{2} predicts conserved B and L. The proton would not decay. There are also other differences. Color corresponds to CP_{2} partial waves. CP_{2} codes for the quaternionic planes of octonionic space containing a fixed complex plane. Standard model symmetries from number theory?
- Are space-time supersymmetries really there? There is no evidence for the super partners yet. Majorana spinors, forced by space-time supersymmetry, lead to M-theory. In TGD, conformal super-symmetries correspond to non-Hermitian super-generators carrying quark and lepton numbers. It seems clear that space-time supersymmetries are absent. The Particle Data Table, the construction of the geometry of the world of classical worlds, and the number theoretic approach all favour space-time dimension 4 and imbedding space dimension 8.
- Are Higgs particles/scalars really there? Could p-adic thermodynamics give the dominant part of the fermionic mass? Electroweak gauge couplings favour the Higgs, but it is not yet clear whether they force its existence. Could the couplings of a possibly existing Higgs to fermions define only a small shift of fermion masses? Small couplings to fermions would explain why the Higgs has not been observed.
- Family replication phenomenon: a topological explanation from effective 2-dimensionality. Fermion generations correspond to 2-dimensional orientable topologies labelled by genus (sphere, torus, etc.). One could perhaps understand why g≥3 topologies are different, presumably heavy and short-lived, on the basis of hyper-ellipticity, which means the existence of a global conformal Z_{2} symmetry. g≤2 surfaces are always hyper-elliptic, but those with g≥3 are not. The ground states in conformal moduli should be maximally symmetric and concentrate strongly around 2-surfaces with Z_{2} conformal symmetry. On the other hand, elementary particle vacuum functionals vanish for hyper-elliptic surfaces for g≥3, so that they must correspond to "higher partial waves" and could be heavy for this reason.

- The problem of mass scales
Elementary particles are characterized by widely different mass scales. This forces one to ask whether the Planck length is really the only fundamental length scale and where the mystery number 10^{38} comes from.
- When I got interested in p-adic numbers at the beginning of the nineties, I soon found that leptons, hadrons, and intermediate gauge bosons seem to correspond naturally to the Mersenne primes M_{127}, M_{107}, M_{89}, if their mass scales are proportional to √p as implied by simple attempts to calculate particle masses. This finding generalized to the p-adic length scale hypothesis, stating that p-adic primes near integer powers of 2 are especially interesting physically. Number theory would thus explain the fundamental mass scales.
- Super-conformal invariance and p-adic thermodynamics for the Virasoro generator L_{0} lead to successful predictions for the elementary particle and hadron mass spectrum, and a lot of new predictions follow. The totally different mass scales of the fermion generations can also be understood on the basis of the p-adic length scale hypothesis.
- A given p-adic topology would serve as an effective topology of a real space-time sheet in an appropriate length scale range. The inherent non-determinism of p-adic differential equations for some value of p would resemble the non-determinism of the Kähler action. p-Adic non-determinism would also characterize various quantum non-determinisms, and the basic prediction would be long range correlations on which local chaos is superposed.
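The arithmetic behind the Mersenne prime assignments above is easy to check. The sketch below is only a back-of-the-envelope illustration: it computes M_{n} = 2^{n} - 1 for the three primes named in the text, the power-of-two ratio of the corresponding √p scales, and the observation that M_{127} itself is of the order of the mystery number 10^{38}.

```python
def mersenne(n: int) -> int:
    """Mersenne number M_n = 2^n - 1."""
    return 2**n - 1

# Mersenne primes the text assigns to particle sectors:
# M_127 (leptons), M_107 (hadrons), M_89 (intermediate gauge bosons).
M89, M107, M127 = mersenne(89), mersenne(107), mersenne(127)

# If mass scales behave like a power of sqrt(p), the ratio between two
# Mersenne sectors is essentially a power of two:
# sqrt(M_107/M_89) ~ 2^((107-89)/2) = 2^9 = 512.
ratio_hadron_ew = (M107 / M89) ** 0.5
print(f"sqrt(M107/M89) = {ratio_hadron_ew:.1f}")  # ~512.0

# The 'mystery number' 10^38 is of the same order as M_127 itself:
print(f"M127 = {M127:.3e}")  # ~1.701e+38
```

The hierarchies come out as clean powers of two; the remaining order-unity factors are exactly what the detailed p-adic mass calculations mentioned in the text are supposed to fix.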
- Cosmological questions
- Why is the mass density not over-critical?
Robertson-Walker cosmologies imbeddable into M^{4}×CP_{2} are necessarily sub-critical or critical. The RW metric during the critical period is unique apart from a parameter characterizing the duration of this period.
- Inflation or quantum criticality? The inflationary scenario is troubled by scalar fields for which no experimental evidence exists. Quantum criticality does not require them and also predicts flat 3-space, since it requires the vanishing of constants with dimensions of length, so that the 3-D curvature scalar for RW cosmology must vanish. The critical period corresponds to a phase transition from a cosmic string dominated phase to ordinary space-time. Similar phase transitions occur as scaled-up versions, with cosmic strings replaced by magnetic flux tubes.
- How to explain the smallness of the cosmological constant and the accelerated cosmic expansion? Many-sheeted space-time with a spectrum of Hubble constants, depending on the size of the space-time sheet, explains the small value of the Hubble constant observed for photons from very distant objects. The acceleration of the expansion can be understood as an apparent effect.
- The problem of initial values: are all physically acceptable universes creatable from vacuum? The inertial energy density vanishes in cosmological scales, whereas the gravitational energy density does not. If the states of the universe are creatable from vacuum (vanishing total conserved quantum numbers), the problem of what the values of the net conserved quantum numbers of the Universe are disappears. Also the problem of initial values disappears. The fine tuning of various parameters can be understood as a result of self-organization by quantum jumps replacing the entire 4-D cosmology (initial values included) with a new one.
- Questions related to quantum gravitation
- What are dark matter and dark energy? Does dark matter reside at larger space-time sheets? Does dark energy correspond to the magnetic energy of magnetic flux tubes, so that the cosmological constant would characterize the density of energy assignable to magnetic flux tubes? Λ depends on the p-adic length scale of the space-time sheet, and its recent smallness can be understood.
- Should one start quantizing gravitation from bound states, just as Bohr did in the case of atomic physics? A length of the order of the Schwarzschild radius is a natural counterpart of the Bohr radius in the quantization of gravitational bound states. hbar_{gr} would have a gigantic value. Ordinary matter would correspond to very large values of the principal quantum number n and small hbar. Could dark matter correspond to small values of n and an astrophysical quantum phase?
- Is there really a need for assuming a dynamical imbedding space? In M-theories it leads to the landscape misery. Everything depends on whether many-sheeted space-time is capable of explaining what is known about gravitation and making new successful predictions.
- What are black holes? Can one identify black-hole-like objects as highly tangled strings at Hagedorn temperature? Do their scaled-up variants, with an effective gravitational constant defined by a p-adic length scale instead of the Planck length, exist? RHIC observations suggest that this is the case for hadronic matter. 3-D color magnetic flux tubes at Hagedorn temperature containing a gluon condensate would explain the observations. Could black holes be the ultimate quantum computing structures in a macroscopic quantum phase with a large value of hbar?
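The gravitational Bohr quantization item can be made concrete with a rough order-of-magnitude sketch. The formula hbar_gr = GMm/v_0 and the velocity parameter v_0 ≈ 145 km/s are not stated in the text above; they are assumptions borrowed from Nottale-style gravitational Bohr orbit models, used here only to illustrate how gigantic hbar_gr would be for a Sun-Earth bound state.

```python
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # ordinary Planck constant, J s
M_sun   = 1.989e30 # solar mass, kg
m_earth = 5.972e24 # Earth mass, kg
v0      = 1.45e5   # m/s; assumed velocity parameter (Nottale-style ansatz)

# Assumed gravitational Planck constant for the Sun-Earth system:
# hbar_gr = G*M*m/v0 (hypothetical formula, not derived in the text).
hbar_gr = G * M_sun * m_earth / v0

print(f"hbar_gr / hbar ~ {hbar_gr / hbar:.1e}")
```

With these numbers the ratio hbar_gr/hbar comes out above 10^{70}, so "gigantic" is no exaggeration; the planets would sit at enormous principal quantum numbers, consistent with the text's remark about ordinary matter.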

### C. Should quantum theory be modified and/or generalized?

- The Clifford algebra for the tangent space of the world of classical worlds is a basic example of a hyper-finite factor of type II_{1}. Inclusions of hyper-finite factors relate very closely to quantum groups, conformal field theories, quantum spaces, knot invariants, 3-manifold invariants, braidings, etc. There is a unique inherent unitary time evolution operator defining a propagator for a 3-surface (or 2-D partonic surface) as an automorphism of the algebra: for the imbedding of a factor M into N, this corresponds to a unitary rotation. The von Neumann algebra M becomes an extended particle moving inside the algebra N! One obtains generalized Feynman rules. A hierarchy of S-matrices results, corresponding to a hierarchy of inclusions. At the lowest level is the S-matrix for matter. The states entangled by the S-matrix necessarily have vanishing total quantum numbers and would have an interpretation as cognitive states representing the quantum dynamics (S-matrix) in their own structure. A crossing symmetric S-matrix defines a cognitive entanglement consistent with the Connes tensor product. Feynman rules for cognition would result and would reduce to the Feynman rules of matter apart from a different single particle time evolution!
- Is hbar dynamical and quantized? Can one express the quantization in terms of Beraha numbers, closely related to the square of the quantum dimension, as von Neumann inclusions suggest? A variation of the fine structure constant would be a prediction, and there is indeed a variation of about 10^{-6} depending on the determination. Does hbar characterize a Jones inclusion for hyperfinite type II_{1} factors? Can one identify dark matter as a conformally confined, macroscopically quantum coherent phase (the conformal weights of particles are in the most general case complex and expressible in terms of the non-trivial zeros of zeta, and the net conformal weight of a many-particle state must be real)? Could dark matter be staring directly into our face? Dark matter as quantum coherent matter with a large value of hbar? Living matter and dark matter? Dark matter and grey matter?
- Should one give up the poorly defined path integral approach? Are generalized Feynman diagrams always equivalent to tree diagrams or diagrams with a single N-vertex? This would conform with the classical picture in which only generalized Bohr orbits appear instead of all extremals. The reduction implies strong constraints on vertices, allowing an algebraic formulation. Unitarity is implied by the conditions. Coupling constants would be fixed points of ordinary coupling constant evolution. There is an infinite number of fixed points labelled by p-adic primes, and p-adic coupling constant evolution replaces ordinary coupling constant evolution.
- Physics as a generalized number theory?
- p-Adic number fields, and the fusion of p-adic number fields and their extensions with the reals into a larger structure? Does physics in various number fields result from an algebraic continuation of rational-number-based physics? p-Adic physics as the physics of intentionality and imagination, since p-adic non-determinism allows mimicry. Classical non-determinism in the real context always resembles p-adic non-determinism for some p in some length scale range. Does the transformation of intention to action correspond to a quantum jump replacing a p-adic space-time sheet representing intention with a real space-time sheet representing action?
- Quaternions and octonions and space-time dimensions.
Effective 2-dimensionality and the dimensions of space-time and imbedding space relate to the dimensions of the classical division algebras. Hyper-octonions/-quaternions are forced by the requirement that the number-theoretic norm defines a metric with Minkowski signature (hyper-octonions/quaternions correspond to a sub-space of complexified octonions/quaternions obtained by multiplying the non-commuting imaginary units by a commuting √(-1)). The dynamics of the Kähler action defining generalized Bohr orbits could correspond to a purely number theoretic dynamics: space-time surfaces as hyper-quaternionic or co-hyper-quaternionic sub-manifolds of the hyper-octonionic imbedding space.
This approach suggests several number theoretic dualities. In particular, there is a generalization of wave-particle duality relating two descriptions: one identifying the space-time surface as a surface in M^{8}, defined by the conserved classical currents associated with M^{4} translations and SU(3) isometries in the complement of U(2) as functions of the space-time point, and the other identifying it as a surface in M^{4}×CP_{2}. The values of these densities depend on the choice of time coordinate, but general coordinate invariance is achieved by using the lightcone proper time as a preferred time coordinate.
- Infinite primes. The construction is isomorphic to a repeated second quantization of an arithmetic quantum field theory. Infinite primes correspond to Fock states of a supersymmetric arithmetic quantum field theory. The hierarchy of infinite primes could correspond to the hierarchy of space-time sheets, and infinite primes/integers/... would have a representation as 4-surfaces, somewhat like ordinary primes/integers/... have a representation as points of the real axis.
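The Beraha numbers invoked above in connection with the quantization of hbar are elementary to tabulate. That the allowed Jones indices below 4 are exactly B_n = 4cos^2(π/n) for n ≥ 3, the square of the quantum dimension 2cos(π/n), is standard subfactor theory rather than a claim of the text; the sketch below only evaluates the sequence.

```python
from math import cos, pi

def beraha(n: int) -> float:
    """Beraha number B_n = 4*cos(pi/n)**2, the square of the quantum
    dimension 2*cos(pi/n) appearing in Jones inclusions of index < 4."""
    return 4 * cos(pi / n) ** 2

# The first few values: 1, 2, golden ratio squared, 3, ... accumulating at 4.
for n in range(3, 9):
    print(f"B_{n} = {beraha(n):.4f}")
```

B_3 = 1, B_4 = 2, B_5 = φ² ≈ 2.618 and B_6 = 3, with the sequence approaching 4 from below; a dynamical hbar labelled by inclusions would thus take values from this discrete spectrum.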

## Tuesday, May 24, 2005

### Lost in the Landscape

The newest twists in the landscape business further strengthen the pessimism concerning the future of M-theory and super string models as an enterprise deserving the attribute "scientific". Last Wednesday appeared the eprint of Shamit Kachru et al claiming that there exists an infinite number of flux compactifications stabilizing all moduli, rather than "only" 10^{100}, 10^{500}, or 10^{1000} of them. In practice this is a death blow to the hopes that one could find some justification for possibly existing physical compactifications. Peter Woit commented on this in his Not-Even-Wrong blog. Also Lubos Motl, who has openly expressed his skepticism about landscaping, has commented in detail on the mathematics behind the landscaping. Today an eprint of Michael Dine appeared in arXiv.org. Peter Woit has commented on this too, in his posting "Running Scared" today. Although Dine still wants to remain optimistic, the following comment in his conclusions tells something else. "There are many ways, as we have indicated, in which the ideas described here might fail. Perhaps the most dramatic is that the landscape may not exist, or alternatively that there might exist infinite numbers of states, whose existence might require significant rethinking of our basic understanding of string theory and what it might have to do with nature." What is still lacking is a convincing argument demonstrating that M-theory does not allow any compactification reproducing realistic physics. I hope that someone could provide this argument before the status of theoretical physics as a respectable branch of science is completely lost. From the bird's eye view provided by TGD the situation is by no means a surprise, and it is easy to list the most important wrong choices made during these three decades, responsible for the recent catastrophic situation. The misery started with GUTs and with the highly un-imaginative idea of forcing both lepton and quark families into the same multiplet by extending the gauge group. For a reason which has remained a mystery to me, a consensus that the proton must be unstable emerged and led the theorizing onto a totally wrong track.
The outcome was an industry of Grand Unified Theories, which relates to the great theories of physics as the musical creations of a fifteen year old who has just learned the blues formula relate to the symphonies of Beethoven. Kaluza-Klein theories and higher-dimensional super-gravity theories were the next fashions, and it soon became clear that they do not work, since they cannot explain chirality breaking couplings. The problem did not disappear in string model compactifications and should have been enough to kill the whole approach. In conflict with obvious aesthetic arguments, these pragmatic TOE builders however argued that the problem can be swept under the rug by allowing orbifold compactifications, which means replacing the smooth compact manifold by a singular one. This is like replacing the perfectly smooth sphere, as a fundamental geometric construct explaining physical existence from the Planck scale to atomic physics to biology to cosmology, with a tear drop. This kind of pragmatism should squeeze tears from the eyes of anyone having even the most rudimentary sense of mathematical beauty and a respect for the deep mystery of physical existence. The notion of spontaneous compactification is an idea which would have been regarded as absolutely silly if presented by some unknown graduate student. After discovering that quantum gravitation might be obtained from strings moving in a fixed background, these practical thinkers also made the background dynamical. Double gravitation! We would not now be witnessing the flourishing of this exotic flower of pragmatic thinking, had the string world sheet been generalized to a 4-dimensional space-time surface in a fixed non-dynamical imbedding space. A sloppy use of a no-go theorem explains this weird twist in the development of ideas (perhaps the fact that a certain completely unknown person had already made this discovery might also relate to this weird twist, as well as to the birth of M-theory).
The problem is that the huge 2-dimensional conformal invariance is replaced by a finite-dimensional group of conformal transformations in higher dimensions. No-one had time to sit calmly for the necessary five minutes to realize that light-like 3-surfaces of 4-dimensional space-time are metrically 2-dimensional and allow a generalization of 2-dimensional conformal invariance. This also forces space-time to be 4-dimensional. These blunders tell that, in the hype created by the amazing success of the standard model, particle theorists failed to realize that building a theory of everything is not like repairing an old car using the spare parts you happen to find in the garage. Every hypothesis must have a deep mathematical and physical motivation, and ad hoc constructs are doomed to be wrong. All these erroneous choices were amplified to colossal proportions by the vision that all resources must be focused on a single promising idea and that the era of individuals in science is over. The sad fact that most active young M-theorists know practically nothing about particle physics explains why the fatal grand unification hypothesis is taken as an experimental fact. Admittedly, the idea of coding the whole of particle physics into a single gauge group makes things very easy for an algebraic geometer eager to apply his powerful technical tools. While witnessing the downfall of super string models during the last year, I have experienced confused and frustrated feelings, and questions fill my mind. Do these negative results have any impact on the mainstream, or do they continue in the autistic mode, producing papers about a theory having nothing to do with the real world? Is there any hope of communicating to these people the simple fact that the basic problems were solved long ago in the TGD framework?
After all, the TGD approach generalizing string models emerged already in 1978, and my thesis appeared in 1982, two years before the first super string revolution, and the interpretation represented in the thesis has remained essentially intact during these years. Are there any hopes of catching the attention of an M-theorist and forcing him to spend a few days developing a rough view of how brilliantly TGD solves the basic conceptual problems of modern physics, and how the same unique theory emerges from extremely general number theoretical considerations and just a brief inspection of the symmetries of the Particle Data Table? Just wondering. Matti Pitkänen

## Monday, May 16, 2005

### Could microwave induced ionization distinguish between the chemistries of living and dead matter?

Every day brings something new to light in the plasmoid model for primitive life forms. Recall that the simplest plasmoid consists of a topologically quantized dipolar magnetic field, with the ordinary matter residing in the region corresponding to the bar magnet, whereas the magnetic flux tubes/walls carry the dark matter in a state corresponding to large hbar, with hbar_s/hbar ≈ 2^11.
The BE condensate of conformally confined dark electrons sucks microwave energy from the microwave power source by sending phase conjugate (negative energy) laser beams to the source, and in this manner is able to preserve the magnetization and keep the magnetic field rotating. This in turn guarantees the flow of an Ohmic radial current through the ordinary matter and thus the self-organization identified as the basic cause of molecular evolution.
A detailed TGD inspired model for the mysterious microwave induced ionization discussed in the previous posting is as follows.

- A rotating approximately dipole magnetic field is generated and the induced radial Ohmic current generates a negative charge at the space-time sheet carrying the magnetic field inside the region corresponding to the magnet. This space-time sheet corresponds to a space-time sheet larger than k=137 space-time sheet associated with the electrons of the atoms of air.
- The microscopic mechanism generating the radial Ohmic current involves the dropping of electrons of surrounding air from k=137 atomic space-time sheet to the space-time sheet of the magnet and drifting to the region of magnet. The electron in question must have a small kinetic energy so that the large zero point kinetic of electron at k=137 atomic space-time sheet must be emitted as a virtual X ray and be absorbed by a second atom or molecule of air: two-particle process is required by the momentum conservation.
- The zero point kinetic energy of electron is =about .94 keV for k=137 atomic space-time sheet and is enough to induce the ionization of C, O and N molecules (for N the ionization energy of n=1 electron is .87 keV according to Bohr model). The energetic electrons from the ionized atoms in turn excite and ionize further atoms and molecules. That the zero point kinetic energy of k=137 electron is enough to ionize n=1 inner non-valence electrons of C, O and N atoms but not those of heavier atoms, might distinguish them from the chemically similar atoms Si, P and S in the next period of the periodic system and relate to their role as basic building blocks of bio-molecules. A kind of primitive metabolic symbiosis between C, O and N atoms and plasmoids leading to the evolution of more complex bio-molecules would look like a natural outcome of the self-organization process induced by the radial Ohmic current.
- Molecules containing P ions (Z=15) have an exceptional role in the ADP-ATP cycle, and the ionization energy of the n=2 electron of P differs by a factor (15/16)^2 from that of the n=1 electron of O. Hence one can wonder whether the P^+ ions of living matter might lack the inner n=2 electrons of P and possess all valence electrons, and whether the same might apply to Na^+ and Mg^+ with Z=11 and 12. K^+, Ca^{++}, Mn^+ and Fe^{++} with Z=19, 20, 25 and 26, important for the functioning of living matter, might be produced by the same ionization mechanism. The ionization energies of the n=3 electrons of Mn and Fe are nearly equal to that of O. The proposed ionization mechanism is not able to ionize atoms with Z higher than 40.
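The (15/16)^2 ratio quoted above follows directly from the Bohr formula E = 13.6 eV * Z^2/n^2; a quick sanity check (the formula and constant are standard, and the application to inner electrons neglects screening by the other electrons):

```python
def bohr_ionization_energy_eV(Z, n):
    # Bohr model: E = 13.6 eV * Z^2 / n^2; screening by other electrons is neglected
    return 13.6 * Z**2 / n**2

E_P_n2 = bohr_ionization_energy_eV(15, 2)  # phosphorus, n=2 electron
E_O_n1 = bohr_ionization_energy_eV(8, 1)   # oxygen, n=1 electron: 870.4 eV
ratio = E_P_n2 / E_O_n1                    # (15/2)^2 / 8^2 = (15/16)^2
```

The ratio (15/2)^2 / 8^2 = 225/256 = (15/16)^2 indeed holds exactly in the Bohr model.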

## Saturday, May 14, 2005

### TGD based mechanism for microwave induced ionization and drag reduction of air crafts

The progress in developing the TGD based view of dark matter as a macroscopically quantum coherent phase with a large value of Planck constant has been amazingly rapid during the last weeks. In the previous posting I summarized the refinements to the model of the plasmoid as a primitive life form, identified as a rotating topologically quantized magnetic field with the counterpart of the magnet consisting of ordinary matter and flux tubes carrying dark matter playing the role of a quantum controlling intentional agent.
Plasmoids can be generated using microwave radiation: this is possible even in a microwave oven, as I told in the previous posting. Obviously microwave photons have so small energies that they are not expected to generate ionized plasma in a universe obeying the laws of standard physics. This is however what they do. In many-sheeted space-time the mechanism responsible for this, as well as for a wide variety of other free energy phenomena, can be identified. The same mechanism is the star actor in the functioning of living systems and among other things predicts a spectrum of universal metabolic energy currencies.
As often happens, ignorant technologists unashamedly apply phenomena declared impossible by respected academic theorists. The reduction of the drag of aircraft by ionizing the air using microwaves represents one example of the applications of microwave ionization: see the New Scientist articles Plasma magic and Powerful ion engine relies on microwaves.
The measurements performed by Jean Naudin in his Advanced Reduced Drag Aircraft project are related to the work of the Industrial Plasma Engineering Group of the UTK Plasma Sciences Laboratory, which has been supported by NASA Grant NCC 1-223 since October 1, 1995 to develop the applications of a One Atmosphere Uniform Glow Discharge Plasma (OAUGDP) to aerodynamic boundary layer and flow control. These and related applications are described in U. S. Patent 5,669,583, "Method and Apparatus for Covering Bodies with a Uniform Glow Discharge Plasma and Applications Thereof". Wind tunnel measurements for this research were taken in the 7 X 11 inch Low Speed Wind Tunnel of the NASA Langley Research Center's Fluid Modeling and Control Branch, Hampton, VA. More references to the phenomenon can be found on Naudin's web pages.
The TGD inspired explanation for microwave induced ionization is that a rotating, approximately dipolar magnetic field is generated and the resulting radial Ohmic currents induce the dropping of electrons to larger space-time sheets (perhaps magnetic flux tubes); the transformation of the large zero point kinetic energies to kinetic energies above keV induces the ionization. Dark electrons at the magnetic walls are an essential element of the system, taking care of the preservation and rotation of the dark matter magnetic field.
Jean Naudin has studied a system that he calls a glow discharge plasma panel (GDP panel) in an attempt to understand what happens in the generation of plasma by microwaves. The measurements of Naudin are summarized here and here.
The system studied consists of a primary system generating pulses at a frequency of about 6 kHz coupled via an ignition coil to a GDP coil containing no magnetic core. The GDP panel starts to glow and generates a plasma discharge in the surrounding air.
The findings of J. Naudin relate to the behavior of currents breaking the basic rules of circuit theory, in particular the conservation of current. The resolution of the strange findings is based on the presence of the radial Ohmic current in the GDP coil, implying an exchange of charge with the surrounding air and also with the secondary ignition coil, occurring most naturally along the flux tubes of the mutual induction magnetic field connecting these systems.
The analysis of the current conservation anomalies observed by Naudin and their resolution is discussed in the chapter Quantum Coherent Dark Matter and Bio-Systems as Macroscopic Quantum Systems of "Genes, Memes, Qualia, ...".
It is fair to say that the interpretation of TGD has now matured to a level allowing highly non-trivial applications at the high-tech frontier. In this kind of situation it is pretty frustrating to witness how the academic theoretical physics community continues to fill hep-th with increasingly bizarre M-theoretical speculations without the slightest connection with the observed world, since some power holders of science have decided that "M-theory is the only known quantum theory of gravitation".
Matti Pitkänen

## Thursday, May 12, 2005

### Plasmoids as Primitive Life Forms, Large Planck constant, and Dark Matter

The idea that plasmoids, defined as magnetic flux tube structures containing plasma, could define primitive life forms emerged several years ago: see this, this, this, and this chapter of "Genes, Memes, Qualia,...".
A couple of years ago I learned that plasmoids generated in simple electric circuits have been experimentally found to possess basic characteristics of living systems: they can replicate, they can communicate, they form an outer boundary, etc. See also

*Plasma blobs hint at new form of life*, New Scientist vol. 179 issue 2413 – 20, September 2003, page 16.

The work on the model for the strange experimental findings about the behavior of a rotating magnetic system involving a static magnet at the center and smaller cylindrical magnets rolling along it (Searl machine), carried out by the Russian researchers Godin and Roschin, led to the realization that this system might share with living systems a very important basic function, namely remote metabolism based on the time mirror mechanism.

The first key element of the model was the rotating magnetic field generating a radial vacuum electric field with a non-vanishing density of vacuum electric charge, not possible in Maxwell's electrodynamics but observed already by Faraday and swept under the rug since Maxwell's ED had been decreed the TOE of that time. This electric field generates a radial ohmic current charging the system and means that the fundamental prerequisite of self-organization is satisfied: there is a feed of charge and energy forcing the system to self-organize, with dissipation taking the role of Darwinian selector. For instance, if you have a vessel of water in turbulent flow, it comes rapidly to rest by dissipation in the absence of external energy feed. If you feed energy into it by heating it from below, convective flow sets in and becomes more and more complex as the heat power is increased. Now the ohmic current replaces the heating power and implies that the material part of the system starts to evolve and more and more complex structures emerge.

One of the strange features of this system is the appearance of cylindrical magnetic walls spaced at even intervals during the period when the rotational motion of the rollers accelerates spontaneously. The realization was that these magnetic walls provide energy and angular momentum via remote metabolism to the roller magnets.
They could also be seen as a concrete example of the magnetic body central in the TGD based model of living systems. At that time I had not realized that the formation of the radial ohmic current is the quintessential property of the system as a self-organizing system, nor that the simple magnetic flux tube structure defining an elementary plasmoid, say a topologically quantized magnetic dipole field, must rotate around its symmetry axis to generate the radial ohmic current charging the core region containing the ordinary matter. Neither had I discovered the possibility that Planck constant might be dynamical and quantized, and that dark matter and living matter would involve in an essential manner a conformally confined phase with a large value of hbar, very naturally located at the magnetic flux tubes or walls emanating from the core region. These ideas lead to a considerable refinement of the earlier model for plasmoids as primitive life forms discussed in various chapters of Genes, Memes, Qualia, ....

A magnetic body containing dark matter in flux tubes (or walls) and ordinary matter in the core region ("bar magnet") is not quite all that is required to have a system which could be said to be living. Only if the magnetic field of the system is rotating around the axis defined by the fictitious bar magnet does the system possess the basic properties of a very simple living system. This simple model explains basic facts about living matter not easy to understand in the framework of standard chemistry.

- The presence of the radial Ohmic current charges the core and means also a continual transfer of energy through it leading to self-organization. In particular, bio-chemical evolution starts. The simplest model for linear bio-molecules is as a plasmoid with the molecule serving as the counterpart of the "magnet" creating a topologically quantized dipole field.
- The model predicts breaking of mirror symmetry (a second direction of rotation for the magnetic field is preferred by the presence of long range classical Z^0 fields predicted by TGD and breaking mirror symmetry in their couplings): the chiral selection of biomolecules (only left- or right-handed biomolecules appear in living matter) reduces to this.
- The model also explains why biomolecules and the cell have a negative charge, and why the cell has a resting potential making in turn possible nerve pulses. Even more, a simple capacitor model for a sensory receptor explains how a capacitor like system near a di-electric breakdown (as the situation is now, due to the continual charging by the radial current) can generate sensory qualia in the di-electric breakdown characterized by the increments of various quantum numbers in the interior of the system.
- The dark matter part applies remote metabolism and acts as an intentional agent by sending laser beams of dark photons to the "bar magnet" part of the system. The dropping of various charges from atomic and smaller space-time sheets generates a hierarchy of universal metabolic currencies as increments of zero point kinetic energies fixed by the p-adic length scale hypothesis.
- The replication of biomolecules and cells reduces basically to a splitting of the counterpart of the bar magnet in the core of the plasmoid into two parts. The mitosis of the cell indeed brings strongly to mind the pinching of the field lines of a dipole magnetic field leading to the generation of two separate dipole magnetic fields.

## Friday, May 06, 2005

### Dark matter and grey matter

This posting is a continuation of the previous one relating to the interaction of visible and dark matter, so I do not bother to repeat the definitions. A simple order of magnitude model for condensed dark matter, derived by scaling from an elementary model of condensed matter, together with the finding that the basic neuronal modules in cortex have 1 mm size, roughly the predicted size of dark matter super-atoms, suggests that dark matter and grey matter might have a lot in common.

### 1. Simple model for dark condensed matter and dark molecules

The BE condensates containing N_{cr} dark atoms define what might be called dark super-atoms. One cannot avoid asking whether these super-atoms could form molecular structures with a typical distance between super-atoms given by the dark Bohr radius a_d = about .2*X^2 mm, and whether also a dark counterpart of condensed matter could exist. Even super-dark counterparts of bio-molecules can be imagined. One can also wonder whether the 1 mm sized basic structural units of cortex might be visible matter quantum controlled by dark condensed matter. This prelude motivates the following simple scaling arguments allowing to deduce the basic characteristics of the spectrum of dark super-molecules.

- The scale for the vibrational energy spectrum of dark super-molecules would be given by hbar_s*sqrt(k/(N_{cr}*m)) with the elastic constant behaving as k propto 1/a_d^2, so that the dark vibrational energy spectrum would relate by a factor (hbar/hbar_s)/N_{cr} = about 2^{-11}/N_{cr} to the ordinary spectrum of vibrational energies. The scale of the rotational energy spectrum would be hbar_s^2/(N_{cr}*m*a_d^2), being by a factor N_{cr} smaller than for ordinary molecules. Since the ratio of the scales of rotational and vibrational energies is about 10^{-3} for ordinary molecules, these scales would be essentially the same for dark molecules and about 10^{-4}/N_{cr} eV, corresponding to frequencies f < 1 GHz for ordinary light.
- Also dark counterparts of condensed matter phases can be imagined. The lattice constant would be of order a_d and the widths of the electronic energy bands would be below the maxima of the electronic kinetic energies hbar_s^2*pi^2/(2*m_e*a_d^2) = about 10^{-4} eV. A more precise estimate in the case of solids is obtained from the scaling of the Fermi energy determined by the density of electrons, which is at most 2*N_{cr} electrons per atomic volume at a given energy level. In the free electron approximation the Fermi energy determining the width of the band is given by E_F(dark) = about (hbar/hbar_s)^2*N_{cr}^{2/3}*E_F, E_F = (3*pi^2/2^{1/2})^{2/3}*(hbar^2/(a^2*m_e)), and differs from the rough estimate by the factor N_{cr}^{2/3}.
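As a sanity check on the 10^{-4} eV figure above, one can evaluate hbar_s^2*pi^2/(2*m_e*a_d^2) numerically, assuming hbar_s/hbar = 2^{11} and a_d = .2 mm (X = 1) as in the text:

```python
import math

hbar = 1.0546e-34      # J s
m_e = 9.109e-31        # kg
eV = 1.602e-19         # J
hbar_s = 2**11 * hbar  # assumed dark Planck constant, hbar_s/hbar = 2^11
a_d = 0.2e-3           # assumed dark Bohr radius, m (X = 1)

# kinetic energy scale hbar_s^2 * pi^2 / (2 * m_e * a_d^2), in eV
E_band = (hbar_s * math.pi)**2 / (2 * m_e * a_d**2) / eV
```

This gives a few times 10^{-5} eV, of the order of the quoted 10^{-4} eV bound.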

### 2. A connection with bio-photons

The biologically active radiation at UV energies was first discovered by the Russian researcher Gurwitsch using a very elegant experimental arrangement. Gurwitsch christened this radiation mitogenetic radiation since it was especially intense during the division of the cell. A direct proof of the biological activity of mitogenetic radiation consisted of a simple experiment in which either a quartz or a glass plate was put between two samples. The first sample contained already growing onion roots whereas the second sample contained roots which did not yet grow. In the case of the glass plate no stimulation of growth occurred, unlike for the quartz plate. Since ordinary glass is not transparent to UV light whereas quartz is, the conclusion was that the stimulation of growth is due to UV light.

The phenomenon was condemned by skeptics as pseudo science and only modern detection technologies demonstrated its existence (Popp), and mitogenetic radiation became also known as bio-photons (there is also a TGD based model for bio-photons). Bio-photons form a relatively featureless continuum at visible wavelengths continuing also to UV energies, and are believed to be generated by DNA or at least to couple with DNA. The emission of bio-photons is most intense from biologically active organisms, and irradiation by UV light induces an emission of mitogenetic radiation by some kind of amplification mechanism. It has been suggested that bio-photons represent some kind of leakage of coherent light emitted by living matter.

Dark radiation with wavelengths coming as sub-harmonics of the dark atomic distance a_d = about 1 mm is predicted. This radiation would correspond to visible and UV wavelengths for ordinary photons: bio-photons have energies in this energy range. According to the Russian researcher V. M. Injushin, mitochondria emit red light at wavelengths 620 nm and 680 nm corresponding to energies 2 eV and 1.82 eV.
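The quoted wavelength-energy pairs can be checked with E = hc/lambda, and under the assumption hbar_s/hbar = 2^{11} the corresponding dark wavelengths indeed fall in the millimeter range:

```python
hc_eV_nm = 1239.84  # h*c in eV*nm

def photon_energy_eV(wavelength_nm):
    # E = hc / lambda
    return hc_eV_nm / wavelength_nm

E_620 = photon_energy_eV(620.0)  # about 2.0 eV, as quoted
E_680 = photon_energy_eV(680.0)  # about 1.82 eV, as quoted
# dark counterpart: same energy, wavelength scaled up by hbar_s/hbar = 2^11
dark_wavelength_mm = 680.0 * 2**11 * 1e-6  # about 1.4 mm
```

The 680 nm line thus corresponds to a dark photon wavelength of about 1.4 mm, matching the figure quoted below for 1.82 eV radiation.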
According to the same source, the nucleus of the cell sends UV light at wavelengths 190, 280 and 330 nm corresponding to the energies 6.5, 4.4 and 3.8 eV. The interpretation as a kind of leakage of coherent light would conform with the identification in terms of BE condensates of dark photons with hbar_s/hbar = about 2^{11} emitted at wavelengths varying in the range .3-1.25 mm and decaying to photons with energies in the visible and UV range. For instance, 1.82 eV radiation corresponds to a dark photon wavelength of 1.4 mm for v_0(eff) = 2^{-11}. A bio-control of ordinary bio-matter at the sub-cellular level performed by dark matter from the millimeter length scale could be in question. This proposal conforms with the fact that 1 mm defines the scale of the blobs of neurons serving as structural units in cortex.

The analysis of Kirlian photographs has shown that the pattern of visible light emitted by various body parts, for instance the ear, codes information about other body parts. These bio-holograms, for which I have proposed a TGD based general model, could be realized as dark photon laser beams. In the phantom DNA effect a chamber containing DNA is irradiated with visible laser light and the DNA generates as a response coherent visible radiation at the same wavelength. Strangely enough, the chamber continues to emit weak laser light even after the removal of the DNA. This effect could be due to the decay of a dark photon BE condensate remaining in the chamber. Also the findings of Peter Gariaev about the effects of visible laser light on DNA, in particular the stimulated emission of radio waves in the kHz-MHz frequency range, might relate to dark photons somehow.

For more details see the chapter Quantum Coherent Dark Matter and Bio-Systems as Macroscopic Quantum Systems of "Genes, Memes, Qualia,...".

Matti Pitkänen

## Thursday, May 05, 2005

### How do dark matter and visible matter interact?

How dark matter and visible matter interact is the question that I have pondered for a few weeks in the light of the results achieved in understanding the role of von Neumann algebras in quantum TGD. I have posted two variants of the model for this interaction but found it better to delete the postings. I hope that the third version is already nearer to the final one.
The basic hypothesis that the value of hbar is dynamical, quantized, and becomes large at the verge of a transition to a non-perturbative phase in the ordinary sense of the word has fascinating implications. In particular, dark matter would correspond to a large value of hbar and could be responsible for the properties of living matter. In order to test the idea experimentally, a more concrete model for the interaction of ordinary matter and dark matter must be developed, and here experimental input and consistency with the earlier quantum model of living matter are of course of considerable help.

### 1. Basic implications from the scaling of hbar

It is relatively easy to deduce the basic implications of the scaling of hbar.

- If the rate for a process is non-vanishing classically, it is not affected in the lowest order. For instance, scattering cross sections for, say, electron-electron scattering and e^+e^- annihilation are not affected in the lowest order since the increase of the Compton length compensates for the reduction of alpha_{em}. The photon-photon scattering cross section, which vanishes classically and is proportional to alpha_{em}^4*hbar^2/E^2, scales down as 1/hbar^2.
- Higher order corrections coming as powers of the gauge coupling strength alpha are reduced since alpha = g^2/(4*pi*hbar) is reduced. Since one has hbar_s/hbar = alpha*Q_1*Q_2/v_0, alpha*Q_1*Q_2 is effectively replaced with a universal coupling strength v_0 = about 4.6*10^{-4}. In the case of QCD the paradoxical sounding implication is that alpha_s becomes very small in the non-perturbative phase.
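The replacement alpha -> v_0 can be made explicit: with hbar -> hbar_s = (alpha*Q_1*Q_2/v_0)*hbar, the scaled coupling strength g^2/(4*pi*hbar_s) loses its dependence on alpha. A minimal sketch (the function name is mine):

```python
def dark_coupling_strength(alpha, Q1, Q2, v0=4.6e-4):
    # hbar_s/hbar = alpha*Q1*Q2/v0, so the scaled coupling is
    # alpha_dark = alpha * (hbar/hbar_s) = v0/(Q1*Q2), independent of alpha
    hbar_ratio = alpha * Q1 * Q2 / v0
    return alpha / hbar_ratio

a1 = dark_coupling_strength(1 / 137.0, 1, 1)  # electromagnetic input
a2 = dark_coupling_strength(0.1, 1, 1)        # strong-coupling input
# both come out as v0 = 4.6e-4: the input coupling cancels
```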

### 2. Simple model for dark atoms

The model for dark atoms requires a model for dark nuclei and dark electrons.

- The simplest model for a dark nucleus is as a blob of N_{cr} dark nuclei surrounded by dark electrons at a distance which is scaled by a factor k^2, k=hbar_s/hbar, from that for the ordinary atom. Thus the dark nucleus looks point-like from the point of view of the electron cloud. As far as electrons are considered there are two options.

a) Dark electrons could behave as independent particles with the interaction strength with the dark nucleus given by N*Z*alpha. The critical value N_{cr} of N is determined from the condition of criticality for the single electron-dark nucleus interaction: [X]=1, X=N_{cr}*Z*alpha. Here [X] denotes the largest integer smaller than X. Criticality implies hbar_s/hbar = X/v_0 = about 1/v_0. Note that the condition 1 <= X < 2 implies that hbar_s/hbar is in the range [1/v_0, 2/v_0).

b) Dark electrons are in the same state apart from the values of the super-canonical conformal weights making it possible to satisfy fermionic statistics. They behave like a single super-electron with mass N_{cr}*m_e and em charge N_{cr}*e. In this case the criticality condition reads [X]=1, X = N_{cr}^2*Z*alpha. Criticality again implies hbar_s/hbar = X/v_0 = about 1/v_0.
- The binding energy scale E propto Z^2*alpha_{em}^2*m_e of atoms scales as 1/hbar^2, so that partially dark matter for which protons have a large value of hbar does not interact appreciably with visible light. A scaled down spectrum of atomic binding energies would be the experimental signature of dark atoms. The resulting binding energy spectrum is independent of the atom in the approximation X=1. The binding energy scale defined by the ionization energy E_0 = Z^2*alpha^2*m_e/4 as given by the Bohr model is replaced with E_0 = v_0^2*m_e/(4*X^2) = about 26.5/X^2 meV. Different values of X allow to distinguish between different atoms since the energy scales differ by up to a factor 1/4 from the maximal one. It should be noticed that the resting potential of the neuron is around 64 meV (the value varies within considerable limits, up to 80 meV).
- The ionization wavelength for *ordinary* hbar would be about 46.5*X^2 microm, which for X=1 is below the maximal size of a neuron, about 100 microm. For hbar_s = hbar/v_0 the wavelength is given by 46.5*X^3/v_0 microm = about 9.4*X^3 cm, which happens to be the size scale of a brain hemisphere for X=1.
- The Bohr radius of the dark atom scales as hbar^2 and is given by a_d = (X/v_0)^2*a_0 = about .2*X^2 mm < .8 mm (a_0 = hbar/(alpha_{em}*m_e)). The size of the basic multi-neuron modules in cortex is about 1 mm. These intriguing observations give hints about the possible role of dark atoms in the functioning of living matter and brain.
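The numbers in this list follow from the ordinary Bohr formulas by substituting alpha -> v_0. A sketch assuming v_0 = 4.6*10^{-4} and X = 1 (the constants are standard; small differences from the quoted values reflect rounding of v_0):

```python
v0 = 4.6e-4              # assumed value of v_0
m_e_c2_eV = 511.0e3      # electron rest energy, eV
hc_eV_nm = 1239.84       # h*c in eV*nm
a0_m = 5.29e-11          # ordinary Bohr radius, m

E0_eV = v0**2 * m_e_c2_eV / 4         # about 27 meV, compare quoted 26.5 meV
lam_ord_um = hc_eV_nm / E0_eV * 1e-3  # ionization wavelength, about 46 microm
lam_dark_cm = lam_ord_um * 1e-4 / v0  # about 10 cm, compare quoted 9.4 cm
a_d_mm = a0_m / v0**2 * 1e3           # dark Bohr radius, about 0.25 mm
```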

### 3. How do dark photons transform to ordinary photons?

The transitions of dark atoms naturally correspond to coherent transitions of the entire dark electron BE condensate and thus generate N_{cr} dark photons which have complex conformal weights, are conformally confined, and behave thus like laser beams. Dark photons do not interact directly with visible matter. The simplest guess is that the transformation of dark photon BE condensates to ordinary photons corresponds to a loss of coherence by conformal liberation, in which the conformal weights of the photons become real. An open question is whether even ordinary laser beams could be identified as beams of dark photons. Note that the transition from dark to ordinary photons implies a reduction of the wavelength, and thus also of the coherence length, by a factor v_0.

The dark-visible transition should also have a space-time correlate. The so called topological light rays or MEs ("massless extremals") represent a crucial deviation of TGD from Maxwell's ED and have all the properties characterizing macroscopic classical coherence. Therefore MEs are excellent candidates for the space-time correlate of a BE condensate of dark photons. MEs carry in general a superposition of harmonics of some basic frequency determined by the length of the ME. A natural expectation is that the frequency of the classical field corresponds to the generalized de Broglie frequency of the dark photon and is thus by a factor hbar/hbar_s lower than for ordinary photons. In a completely analogous manner the de Broglie wavelength is scaled up by k = hbar_s/hbar. Classically the decay of dark photons to visible photons would mean that an oscillation with frequency f inside the topological light ray transforms to an oscillation of frequency f/k such that the intensity of the oscillation is scaled up by a factor k. Furthermore, the ME in question could naturally decompose into 1 < N_{cr} <= 137 ordinary photons in case that dark atoms are in question.
Of course, also MEs could decay to lower level MEs, and this has an interpretation in terms of the hierarchy of dark matters to be discussed next.

### 4. Hierarchy of dark matters and hierarchy of minds

The notion of dark matter is only a relative concept in the sense that dark matter is invisible from the point of view of ordinary matter. One can imagine an entire hierarchy of dark matter structures corresponding to the hierarchy of space-time sheets for which the p-adic length scales differ by a factor 1/v_0 = about 2^{11}. The BE condensates of N_{cr} ordinary matter particles would serve as dynamical units for "dark dark matter" invisible to the dark matter. The criticality criterion discussed above can be applied at all levels of the hierarchy to determine the value of the dynamical interaction strength for which BE condensates of BE condensates are formed.

This hierarchy would give rise to a hierarchy of the values hbar_n/hbar coming as powers v_0^{-n}, as well as a hierarchy of wavelengths of the same energy coming as powers v_0^{n}. For zero point kinetic energies proportional to hbar^2 this hierarchy would come in powers v_0^{-2n}, for magnetic interaction energies proportional to hbar in powers v_0^{-n}, whereas for atomic energy levels the hierarchy would come in powers v_0^{2n} (assuming that this hierarchy makes sense). The most interesting new physics would emerge from the interaction between length scales differing by powers of v_0, made possible by the decay of BE condensates of dark photons to ordinary photons having wavelength shorter by a factor about v_0. This interaction could provide the royal road to the quantitative understanding of how living matter manages to build up extremely complex coherent interactions between different length and time scales. In the time domain the dark matter hierarchy could allow to understand how moments of consciousness organize into a hierarchy with respect to the time scales of the moment of consciousness coming as 2^{11k} multiples of the CP_2 time scale.
Even the human life span could be seen as a single moment of consciousness at the k=14^{th} level of the dark matter hierarchy, whereas a single day in human life would correspond to k=12.

### 5. Realization of intentional action and hierarchy of dark matters

How are long length scales able to control the dynamics in short length scales, so that the extremely complex process extending down to atomic length scales realizing my intention to write this word is possible? This question has remained without a convincing answer in recent day biology, and there are strong objections against the idea that this process is planned and initiated at the neuronal level.

I have proposed a concrete mechanism for the realization of intentional action in terms of the time mirror mechanism involving the emission of negative energy photons and proceeding as a cascade in the reversed direction of geometric time from long to short length scales. This cascade would induce as a reaction analogous processes proceeding in the normal direction of geometric time and would correspond to the neural correlates of intentional action in a very general sense of the word. The counterparts of the negative energy signals propagating to the geometric past would be phase conjugate (negative energy) laser beams identifiable as Bose-Einstein condensates of dark photons. In the time reflection these beams would transform to positive energy dark photons eventually decaying to ordinary photons. The space-time correlate would be MEs decaying into MEs and eventually to CP_2 type extremals representing ordinary photons.

The realization of intentional action, as desires of a boss expressed to a lower level boss, would naturally be represented by the decay of the phase conjugate dark laser beam to lower level laser beams decaying to lower level laser beams decaying to... . This would represent the desire for action, whereas the time reflection at some level would represent the realization of the desire as a stepwise decay to lower level laser beams and eventually to ordinary photons. The strong quantitative prediction is that these levels correspond to length and time scale hierarchies coming in powers of 1/v_0 = about 2^{11}.

### 6. Wavelength hierarchy, coherent metabolism, and proton-electron mass ratio

The fact that a given wavelength corresponds to energies related to each other by scaling with powers of v_0 provides a mechanism allowing to transfer energy from long to short length scales by a de-coherence occurring either in the standard or the reversed direction of geometric time. De-coherence in the reversed direction of time would be associated with mysterious looking processes like self-assembly, allowing thus an interpretation as a normal decay process in the reversed time direction. It is perhaps not an accident that the value of v_0 = about 4.6*10^{-4} is not too far from the ratio m_e/m_p = about 5.3*10^{-4} giving the ratio of the zero point kinetic energies of proton and electron for a given space-time sheet. This coincidence could in principle make possible a metabolic mechanism in which dark protons and ordinary electrons co-operate in the sense that dark protons generate dark photon BE condensates with wavelength lambda transforming to ordinary photons with wavelength v_0*lambda absorbed by ordinary electrons. Some examples are in order to illustrate these ideas.

- As already found, in the case of dark atoms the scaling of binding energies as 1/hbar^2 allows the coupling of the about 9 cm scale of the brain hemisphere with the length scale of about 50 microm of a large neuron. N_{cr} <= 137 ordinary IR photons would be emitted in a single burst and interact with the neuron.
- For a non-relativistic particle in a box of size L the energy scale is given by E_1 = hbar^2*pi^2/(2*m*L^2), so that the visible photons emitted would have energy scaled up by a factor (hbar_s/hbar)^2 = about 4*10^6. The collective dropping of N_{cr} dark protons to a larger space-time sheet would liberate a laser beam of dark photons with energy equal to the liberated zero point kinetic energy. For instance, for the p-adic length scale L(k=159=3*53) = about .63 microm this process would generate a laser beam of IR dark photons with energy about .5 eV, also generated by the dropping of ordinary protons from the k=137 atomic space-time sheet. There would thus be an interaction between dark protons in the cell length scale and ordinary protons in the atomic length scale. For instance, the dropping of dark protons in the cell length scale could induce the driving of protons back to the atomic space-time sheet essential for the metabolism. A similar argument applies to electrons with the scale of the zero point kinetic energy about 1 keV.
- If the energy spectrum associated with the conformational degrees of freedom of proteins, which corresponds roughly to a frequency scale of 10 GHz, remains invariant in the phase transition to the dark protein state, coherent emissions of dark photons with microwave wavelengths would generate ordinary infrared photons. For instance, metabolic energy quanta of about .5 eV could result from macroscopic Bose-Einstein condensates of 58 GHz dark photons resulting from the oscillations in the conformational degrees of freedom of dark proteins. A second option is that the conformational energies are scaled by hbar_s/hbar (omega would remain invariant). In this case these coherent excitations would generate ordinary photons with energy of about 1 keV able to drive electrons back to the atomic k=137 space-time sheet.
- Since magnetic flux tubes have a profound role in the TGD inspired theory of consciousness, it is interesting to look also at the behavior of effective magnetic transition energies in the phase transition to the dark matter phase. This transition increases the scale of the magnetic interaction energy, so that an anomalously large magnetic spin splitting hbar_s*eB/m in an external magnetic field could serve as a signature of dark atoms. The dark transition energies relate by a factor hbar_s/hbar to the ordinary magnetic transition energies. For instance, in the magnetic field of Earth with the nominal value .5*10^{-4} Tesla the dark electron cyclotron frequency is 6*10^5 Hz and corresponds to an ordinary microwave photon with frequency about 1.2 GHz and wavelength lambda = about 25 cm. For the proton the cyclotron frequency of 300 Hz would correspond to the energy of an ordinary photon with frequency 6*10^5 Hz and could induce electronic cyclotron transitions and spin flips, in turn generating for instance magnetostatic waves. It is easy to imagine a few-step dark matter hierarchy connecting EEG frequencies of dark matter with frequencies of visible light for ordinary photons. This kind of hierarchy would give considerable concreteness to the notion of the magnetic body having the size scale of Earth.
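The figures quoted in these examples can be checked with a few lines of arithmetic. This is only a sanity-check sketch, assuming hbar_s/hbar = 2^{11} = 2048 as a stand-in for the approximate value 1/v_0:

```python
# Sanity check of the scalings in the examples above, assuming
# hbar_s/hbar = 2^11 = 2048 (approximately 1/v_0).
c = 3.0e8                    # speed of light in m/s
scale = 2**11                # assumed hbar_s/hbar

# Binding and zero point kinetic energies scale as hbar^2, i.e. by scale^2.
print(scale**2)              # 4194304, i.e. "about 4*10^6"

# Dark electron cyclotron frequency 6*10^5 Hz maps to an ordinary
# photon of frequency scale * 6*10^5 Hz.
f = scale * 6e5
print(f)                     # about 1.2*10^9 Hz
print(c / f)                 # about 0.24 m, i.e. roughly 25 cm

# Proton cyclotron frequency 300 Hz maps to scale * 300 Hz,
# close to the dark electron cyclotron frequency quoted above.
print(scale * 300)           # 614400 Hz, about 6*10^5 Hz
```

The numbers come out as quoted: the energy scaling is about 4*10^6, the 6*10^5 Hz electron cyclotron frequency maps to roughly 1.2 GHz and 25 cm, and the 300 Hz proton frequency maps to about 6*10^5 Hz.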

### 7. A connection with the scaling law of homeopathy

The value of the parameter 1/v_0 is essentially the ratio of the CP_2 radius and the Planck length (as also the ratio of the Compton lengths of electron and proton) and is rather near to 2^{11}. Interestingly, the much larger number 2*10^{11} = about 25*2^{33} appears in the simplest form of what I have christened the scaling law of homeopathy. This rule has been proposed on the basis of experimental findings but has no convincing theoretical justification. The scaling law of homeopathy states that high frequency em radiation transforms to low frequency radiation and vice versa, preferably with the frequency ratio f_{high}/f_{low} = about 2*10^{11}. I have discussed some mechanisms for the transformation of high energy photons to low energy photons consistent with the rule and proposed a generalization of the rule based on the p-adic length scale hypothesis. For instance, high energy visible photons of frequency f could induce an excitation of the receiving system having the same wavelength lambda but propagating with velocity beta = v/c = about 10^{-11}/2, so that its frequency is f_0 = v/lambda = beta*f. This excitation would in turn couple to photons of frequency f_0 and wavelength lambda_0 = lambda/beta. The proposed hierarchy of dark matter and the ensuing hierarchy of dark laser beams decaying into lower level beams might provide a deeper explanation for the scaling law of homeopathy. First of all, the factor about 2*10^{11} is roughly equal to 3^3*v_0^{-3}, which means that only a three-level hierarchy can be in question and that the sub-harmonic v_0/3 instead of v_0 must appear. Also the model of outer planetary orbits requires the third sub-harmonic v_0/3, and TGD provides a topological explanation for these harmonics in terms of the decay of magnetic flux tubes into three smaller flux tubes, each carrying one third of the original magnetic flux.
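The estimates above are easy to verify directly. A short check, using only the values quoted in the text (1/v_0 = 1950 and the planetary value v_0 = 4.6*10^{-4}):

```python
# Numerical check of the homeopathy scaling factor estimates above.
print(25 * 2**33)            # 214748364800, i.e. about 2.15*10^11

v0 = 1 / 1950                # = about 5.13*10^-4
print(27 / v0**3)            # about 2.0*10^11, matching f_high/f_low

v0_planets = 4.6e-4          # value deduced from planetary orbits
print(27 / v0_planets**3)    # about 2.8*10^11, definitely too large
```

So 25*2^{33} is about 2.15*10^{11}, the sub-harmonic factor 3^3/v_0^3 reproduces 2*10^{11} for v_0 = 1/1950, and the planetary value overshoots at 2.8*10^{11}, as stated.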
If v_0 is replaced with v_0/3, the predicted scaling factor is correct for v_0 = 5.13*10^{-4}, corresponding exactly to 1/v_0 = 1950 (I cannot resist the temptation to mention that this happens to be my own birth year: the world is filled with funny numerological accidents). For v_0 = 4.6*10^{-4} and the n=3 sub-harmonic the scaling factor is 2.8*10^{11} and is definitely too large. It must however be noticed that hbar_s/hbar = 1/v_0 holds true only approximately: in reality hbar_s/hbar belongs to the interval [1/v_0, 2/v_0). This makes it possible that the value of v_0 deduced from the quantization of the planetary orbits is not quite correct but equals v_0(eff) = v_0/X, X \in [1,2). For X = about 1.07 and v_0 = 5.13*10^{-4} one would indeed have v_0(eff) = 4.8*10^{-4}. Even the value v_0 = m_e/m_p = about 5.3*10^{-4} can be considered. Also the value v_0(eff) = 2^{-11}, which is especially natural from the point of view of the p-adic length scale hypothesis, could be achieved as the effective value of v_0. For more details see the chapter "TGD and Astrophysics" of "TGD" and the chapter "Quantum Coherent Dark Matter and Bio-Systems as Macroscopic Quantum Systems" of "Genes, Memes, Qualia,...".
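The interval argument can also be checked numerically. A small sketch, where the helper function `x_factor` is mine and the candidate value pairs are those quoted above; an effective value is admissible when X falls in [1,2):

```python
# For v_0(eff) = v_0/X with X in [1,2), compute X for candidate
# pairs (true v_0, effective v_0) quoted in the text.
def x_factor(v0, v0_eff):
    """Return X = v0/v0_eff; the value is admissible iff 1 <= X < 2."""
    return v0 / v0_eff

print(round(x_factor(5.13e-4, 4.8e-4), 2))   # about 1.07, in [1,2)
print(round(x_factor(5.3e-4, 4.6e-4), 2))    # about 1.15, m_e/m_p as v_0
print(round(x_factor(5.13e-4, 2**-11), 2))   # about 1.05, v_0(eff) = 2^-11
```

All three candidates sit comfortably inside the allowed interval, so none of them is excluded by the hbar_s/hbar ambiguity.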