
Monday, September 22, 2008

Tritium beta decay anomaly and variations in the rates of radioactive processes

The determination of the neutrino mass from the beta decay of tritium leads to a tachyonic mass squared [2,3,4,5]. I have considered several alternative explanations for this long-standing anomaly. The first class of models relies on the presence of a dark neutrino or antineutrino belt around the orbit of Earth. The second class of models relies on the prediction of the nuclear string model that the neutral color bonds connecting nucleons to the nuclear string can also be charged. This predicts a large number of fake nuclei whose proton and neutron numbers only apparently equal those deduced from the mass number.

  1. The 3He nucleus resulting from the decay could be fake (a tritium nucleus with one positively charged color bond making it look like 3He). The idea was that the slightly smaller mass of the fake 3He might explain the anomaly: it however turned out that the model cannot explain the variation of the anomaly from experiment to experiment.

  2. Later (yesterday evening!) I realized that the initial 3H nucleus could also be fake (a 3He nucleus with one negatively charged color bond). It turned out that the fake tritium option has the potential to explain all aspects of the anomaly and also other anomalies related to radioactive and alpha decays of nuclei.

  3. Just one day ago I still believed in the alternative based on the assumption of a dark neutrino or antineutrino belt surrounding Earth's orbit. This model has the potential to explain several aspects of the anomaly satisfactorily but fails in its simplest form to explain the dependence of the anomaly on the experiment. Since the fake tritium scenario is based only on the basic assumptions of the nuclear string model and brings in only new values of kinematical parameters, it is definitely favored.

In the following I shall describe only the models based on the decay of tritium to fake Helium and the decay of fake tritium to Helium.

1. Fake 3He option

Consider first the fake 3He option. Tritium (pnn) would decay with some rate to a fake 3He, call it 3Hef, which is actually a tritium nucleus containing one positively charged color bond and possessing a mass slightly different from that of 3He (ppn).

  1. In this kind of situation the expression for the function K(E,k) differs from K(stand) since the upper bound E0 for the maximal electron energy is modified:

    E0 → E1 = M(3H) − M(3Hef) − m(ν̄) = M(3H) − M(3He) + ΔM − m(ν̄) ,

    ΔM = M(3He) − M(3Hef) .

    Depending on whether 3Hef is heavier/lighter than 3He, E0 decreases/increases. From V_b ∈ [5,100] eV and from the TGD based prediction m(ν̄) ~ .27 eV one can conclude that ΔM should be in the range 5-100 eV.

  2. In the lowest approximation K(E) can be written as

    K(E) = K0(E,E1)θ(E1−E) ≈ (E1−E)θ(E1−E) .

    Here θ(x) denotes the step function and K0(E,E1) corresponds to a massless antineutrino.

  3. If a fraction p of the final state nuclei corresponds to the fake 3He, the function K(E) deduced from the data is a linear combination of the functions K(E,3He) and K(E,3Hef) and is given by

    K(E) = (1−p)K(E,3He) + pK(E,3Hef)

    ≈ (1−p)(E0−E)θ(E0−E) + p(E1−E)θ(E1−E)

    in the approximation m(ν) = 0.

    For m(3Hef) < m(3He) one has E1 > E0, giving

    K(E) = [(E0−E) + p(E1−E0)]θ(E0−E) + p(E1−E)θ(E1−E)θ(E−E0) .

    Below E0 the standard K(E,E0) is thus shifted upwards by the constant term p·ΔM. At E = E0 the derivative of K(E) is infinite, which corresponds to the divergence of the derivative of the square root function in the simpler parametrization using a tachyonic mass. The prediction of the model is the presence of a tail corresponding to the region E0 < E < E1.
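The piecewise linear form of K(E) above is easy to check numerically. A minimal sketch, with illustrative parameter values (E0 taken as the zero of the energy scale, ΔM = 50 eV, p = 0.1) that are assumptions for the plot, not fitted quantities:

```python
import numpy as np

def kurie_mixture(E, E0, dM, p):
    """Kurie-type function for a mixture of genuine and fake final states
    in the massless-antineutrino approximation:
        K(E) = (1-p)(E0-E)theta(E0-E) + p(E1-E)theta(E1-E),  E1 = E0 + dM.
    The clip calls implement the step functions theta."""
    E1 = E0 + dM
    return (1 - p) * np.clip(E0 - E, 0, None) + p * np.clip(E1 - E, 0, None)

# Illustrative values (assumptions): energies in eV relative to the
# standard endpoint E0; dM = 50 eV mass difference, fake fraction p = 0.1.
E0, dM, p = 0.0, 50.0, 0.1
E = np.array([-100.0, 0.0, 25.0, 60.0])

K = kurie_mixture(E, E0, dM, p)
# Below E0 the standard straight line is shifted up by the constant p*dM;
# in E0 < E < E1 only the linear tail p*(E1 - E) survives; above E1, K = 0.
```

The shift p·ΔM evaluated at E = E0 is exactly the quantity V_b discussed below.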

  4. The model does not as such explain the bump near the end point of the spectrum. The decay 3H → 3Hef can be interpreted in terms of an exotic weak decay d → u + W⁻ of the exotic d quark at the end of a color bond connecting nucleons inside 3H. The rate for these interactions cannot differ too much from that for ordinary weak interactions, and the W boson must transform to its ordinary variant before the decay W → e + ν̄. Either the weak decay at the quark level or the phase transition could take place with a considerable rate only for low enough virtual W boson energies, say for energies for which the Compton length of the massless W boson corresponds to the size scale of color flux tubes, predicted to be much longer than the nuclear size. If so, the anomaly would be absent for higher energies and a bump would result.

  5. The value of K(E) at E = E0 is V_b ≡ p(E1−E0). The variation of the fraction p could explain the observed dependence of V_b on the experiment as well as its time variation. It is however difficult to understand how p could vary.

2. Fake 3H option

Assume that a fraction p of the tritium nuclei are fake and correspond to 3He nuclei with one negatively charged color bond.

  1. Repeating the previous calculation gives exactly the same expression for K(E) in the approximation m(ν) = 0 but with the replacement

    ΔM = M(3He) − M(3Hef) → ΔM = M(3Hf) − M(3H) .

  2. In this case it is possible to understand the variations in the shape of K(E) if the fraction of 3Hf varies in time and from experiment to experiment. A possible mechanism inducing this variation is a transition inducing the transformation 3Hf → 3H by an exotic weak decay d + p → u + n, where u and d correspond to the quarks at the ends of color flux tubes. This kind of transition could be induced by the absorption of X-rays, say artificial X-rays or X-rays from the Sun. The inverse of this process in the Sun could generate X-rays which induce this process in a resonant manner at the surface of Earth.

  3. The well-known but poorly understood X-ray bursts from the Sun during solar flares in the wavelength range 1-8 Å correspond to energies in the range 1.6-12.4 keV, 3 octaves in good approximation. This radiation could be partly due to transitions between ordinary and exotic states of nuclei rather than bremsstrahlung resulting from the acceleration of charged particles to relativistic energies. The energy range suggests the presence of three p-adic length scales: the nuclear string model indeed predicts several p-adic length scales for color bonds corresponding to different mass scales for the quarks at the ends of the bonds. This energy range is considerably above the energy range 5-100 eV and suggests the range [4×10⁻⁴, 6×10⁻²] for the values of p. The existence of these excitations would mean a new branch of low energy nuclear physics, which might be dubbed X-ray nuclear physics.
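The quoted energy range is just the photon energy E = hc/λ evaluated at the endpoints of the 1-8 Å band; a quick check using the standard value hc ≈ 12.4 keV·Å (nothing model-specific is assumed here):

```python
import math

HC_KEV_ANGSTROM = 12.398  # h*c expressed in keV * Angstrom (standard value)

def photon_energy_keV(wavelength_angstrom):
    """Photon energy E = hc / lambda for a wavelength given in Angstroms."""
    return HC_KEV_ANGSTROM / wavelength_angstrom

e_short = photon_energy_keV(1.0)       # ~12.4 keV at 1 Angstrom
e_long = photon_energy_keV(8.0)        # ~1.55 keV at 8 Angstrom
octaves = math.log2(e_short / e_long)  # exactly 3, since 8 = 2**3
```

The factor of 8 between the band edges is what makes the "3 octaves" statement exact.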

  4. The approximately 1/2 year period of the temporal variation would naturally correspond to the 1/R² dependence of the intensity of X-ray radiation from the Sun. There is evidence that the period is a few hours longer than 1/2 year, which supports the view that the origin of the periodicity is not purely geometric but relates to the dynamics of X-ray radiation from the Sun. Note that for 2 hours one would have ΔT/T ≈ 2⁻¹¹, which defines a fundamental constant in the TGD Universe and is also near to the electron-proton mass ratio.
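The numerical coincidence can be checked in a few lines; the half-year length and the electron-proton mass ratio below are standard values, not TGD inputs:

```python
half_year_hours = 365.25 * 24 / 2  # about 4383 hours
delta_T = 2.0                      # the ~2 hour excess mentioned above
ratio = delta_T / half_year_hours  # Delta T / T, about 4.56e-4
target = 2.0 ** -11                # about 4.88e-4
me_over_mp = 1.0 / 1836.15         # electron / proton mass ratio, ~5.45e-4
# ratio agrees with 2**-11 to within about 7 percent, and both are of the
# same order of magnitude as the electron-proton mass ratio.
```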

  5. All nuclei could appear as similar anomalous variants. Since both weak and strong decay rates are sensitive to the binding energy, it is possible to test this prediction by finding whether nuclear decay rates show anomalous time variation.

  6. The model could also explain other anomalies of radioactive reaction rates, including the findings of Shnoll [1] and the unexplained fluctuations in the decay rates of 32Si and 226Ra reported quite recently and correlating with 1/R², where R is the distance between Earth and the Sun. 226Ra decays by alpha emission, but the sensitive dependence of the alpha decay rate on the binding energy means that the temporal variation of the fraction of fake 226Ra isotopes could explain the variation of the decay rates. The intensity of the X-ray radiation from the Sun is proportional to 1/R², so that the correlation of the fluctuation with distance would emerge naturally.

  7. Also a dip in the decay rate of 54Mn coincident with a peak in proton and X-ray fluxes during a solar flare has been observed: the proposal is that the neutrino flux from the Sun is also enhanced during the solar flare and induces the effect. A peak in the X-ray flux is a more natural explanation in the TGD framework.

  8. The model predicts an interaction between atomic physics and nuclear physics, which might be of relevance in biology. For instance, the transitions between exotic and ordinary variants of nuclei could yield X-rays inducing atomic transitions or ionization. The wavelength range 1-8 Å for anomalous X-rays corresponds to the range Z ∈ [11,30] for ionization energies. The biologically important ions Na+, Mg++, P-, Cl-, K+, Ca++ have Z = (11, 12, 15, 17, 19, 20). I have proposed that Na+, Cl-, K+ (fermions) are actually bosonic exotic ions forming Bose-Einstein condensates at magnetic flux tubes (see this). The exchange of W bosons between neutral Ne and Ar(gon) atoms (bosons) could yield exotic bosonic variants of Na+ (perhaps even Mg++, which is a boson also as an ordinary ion) and Cl- ions. A similar exchange between Ar atoms could yield exotic bosonic variants of Cl- and K+ (and even Ca++, which is also a boson as an ordinary variant). This transformation might relate to the paradoxical finding that noble gases can act as narcotics. This hypothesis is testable by measuring the nuclear weights of these ions. X-rays from the Sun are not present during night time, and this could relate to the day-night cycle of living organisms. Note that the magnetic bodies are of the size scale of Earth and even larger, so that the exotic ions inside them could be subject to intense X-ray radiation. X-rays could also be dark X-rays with a large Planck constant and thus with a much lower frequency than ordinary X-rays, so that control could be possible.

References

[1] S. E. Shnoll et al (1998), Realization of discrete states during fluctuations in macroscopic processes, Uspekhi Fisicheskikh Nauk, Vol. 41, No. 10, pp. 1025-1035.

[2] V. M. Lobashev et al (1996), in Neutrino 96 (Ed. K. Enqvist, K. Huitu, J. Maalampi). World Scientific, Singapore.

[3] Ch. Weinheimer et al (1993), Phys. Lett. 300B, 210.

[4] J. I. Collar (1996), Endpoint Structure in Beta Decay from Coherent Weak-Interaction of the Neutrino, hep-ph/9611420.

[5] G. J. Stephenson Jr. (1993), Perspectives in Neutrinos, Atomic Physics and Gravitation, ed. J. T. Thanh Van, T. Darmour, E. Hinds and J. Wilkerson (Editions Frontieres, Gif-sur-Yvette), p. 31.

For more details see the chapters TGD and Nuclear Physics and Nuclear String Hypothesis of "p-Adic length scale Hypothesis and Dark Matter Hierarchy".

Monday, September 15, 2008

Zero energy ontology, self hierarchy, and the notion of time

In the previous posting I discussed the most recent view about zero energy ontology and p-adicization program. One manner to test the internal consistency of this framework is by formulating the basic notions and problems of TGD inspired quantum theory of consciousness and quantum biology in terms of zero energy ontology. I have discussed these topics already earlier but the more detailed understanding of the role of causal diamonds (CDs) brings many new aspects to the discussion.

In consciousness theory the basic challenges are to understand the asymmetry between positive and negative energies and between the two directions of geometric time at the level of conscious experience, the correspondence between experienced and geometric time, and the emergence of the arrow of time. One should also explain why human sensory experience is about a rather narrow time interval of about .1 seconds and why memories are about the interior of a much larger CD with a time scale of the order of a lifetime. One should also have a vision about how the evolution of consciousness takes place: how quantum leaps leading to an expansion of consciousness occur.

Negative energy signals to the geometric past - of which phase conjugate laser light represents an example - provide an attractive tool to realize intentional action as a signal inducing neural activities in the geometric past (this would explain Libet's classical findings), a mechanism of remote metabolism, and the mechanism of declarative memory as communications with the geometric past. One should understand how these signals are realized in zero energy ontology and why their occurrence is so rare.

In the following my intention is to demonstrate that TGD inspired theory of consciousness and quantum TGD proper indeed seem to be in tune and that this process of comparison helps considerably in the attempt to develop the TGD based ontology at the level of details.

1. Causal diamonds as correlates for selves

Quantum jump as a moment of consciousness, self as a sequence of quantum jumps, and the self hierarchy with sub-selves experienced as mental images are the basic notions of the TGD inspired quantum theory of consciousness. In the most ambitious program the self hierarchy reduces to a fractal hierarchy of quantum jumps within quantum jumps.

It is natural to interpret CDs as correlates of selves. CDs can be interpreted in two ways: as subsets of the generalized imbedding space or as sectors of the world of classical worlds (WCW). Accordingly, selves correspond to CDs of the generalized imbedding space or sectors of WCW, literally separate interacting quantum Universes. The spiritually oriented reader might speak of Gods. Sub-selves correspond geometrically to sub-CDs. The contents of consciousness of a self are about the interior of the corresponding CD at the level of the imbedding space. For sub-selves the wave function for the position of the tip of the CD brings in the delocalization of the sub-WCW.

The fractal hierarchy of CDs within CDs defines the counterpart for the hierarchy of selves: the quantization of the time scale of planned action and memory as T(k) = 2^k T0 suggests an interpretation for the fact that we experience octaves as equivalent in music.
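The doubling hierarchy T(k) = 2^k T0 is straightforward to enumerate; T0 = 0.1 s below is only an illustrative anchor taken from the .1 second sensory time scale mentioned in the text:

```python
def time_scales(T0, k_max):
    """Hierarchy of time scales T(k) = 2**k * T0 for k = 0 .. k_max."""
    return [T0 * 2 ** k for k in range(k_max + 1)]

scales = time_scales(0.1, 10)  # from 0.1 s up to 102.4 s
# Each step up the hierarchy doubles the time scale, i.e. adds one octave,
# which is the analogy to octave equivalence in music drawn above.
```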

2. Why is sensory experience about such a short time interval?

The CD picture automatically implies the 4-D character of conscious experience, and memories form a part of conscious experience even at the elementary particle level: in fact, the secondary p-adic time scale of the electron is T = .1 seconds, defining a fundamental time scale in living matter. The problem is to understand why sensory experience is about a short interval of geometric time rather than about the entire personal CD with a temporal size of the order of a lifetime. The obvious explanation would be that sensory input corresponds to sub-selves (mental images) which correspond to CDs with T(127) ≈ .1 s (electrons or their Cooper pairs) at the upper light-like boundary of the CD assignable to the self. This requires a strong asymmetry between the upper and lower light-like boundaries of CDs.

  1. The only reasonable manner to explain the situation seems to be that the addition of CDs within CDs in the state construction must always glue them to the upper light-like boundary of the CD along a light-like radial ray from the tip of the past directed light-cone. This conforms with the classical picture according to which classical sensory data arrive from the geometric past with a velocity which is at most the light velocity.

  2. One must also explain the rare but real occurrence of phase conjugate signals understandable as negative energy signals propagating towards the geometric past. The conditions making negative energy signals possible are achieved when the sub-CD is glued to both the past and future directed light-cones at the space-like edge of the CD along light-like rays emerging from the edge. This exceptional case gives negative energy signals traveling to the geometric past. The above mentioned basic control mechanism of biology would represent a particular instance of this situation. Negative energy signals as a basic mechanism of intentional action would explain why living matter seems to be so special.

  3. Geometric memories would correspond to the lower boundaries of CDs and would not in general be sharp, because only the sub-CDs glued to both the upper and lower light-cone boundary would be present. A temporal sequence of mental images, say the sequence of digits of a phone number, could correspond to a sequence of sub-CDs glued to the upper light-cone boundary.

  4. Sharing of mental images corresponds to a fusion of sub-selves/mental images to a single sub-self by quantum entanglement: the space-time correlate for this could be flux tubes connecting the space-time sheets associated with the sub-selves, themselves represented by space-time sheets inside their CDs. It could be that these "episodic" memories correspond to CDs at the upper light-cone boundary of the CD.

On basis of these arguments it seems that the basic conceptual framework of TGD inspired theory of consciousness can be realized in zero energy ontology. Interesting questions relate to how dynamical selves are.

  1. Is the self doomed to live inside the same sub-WCW eternally as a lonely god? This question has already been answered: there are interactions between the sub-CDs of a given CD, and one can think of selves as quantum superpositions of states in CDs, with the wave function having as its argument the tips of the CD, or rather only the second one since T is assumed to be quantized.

  2. Is there a largest CD in the personal CD hierarchy of the self in an absolute sense? Or is the largest CD present only in the sense that the contribution to the contents of consciousness coming from very large CDs is negligible? Long time scales T correspond to low frequencies, and thermal noise might indeed mask these contributions very effectively. Here however the hierarchy of Planck constants and the generalization of the imbedding space would come to the rescue by allowing dark EEG photons to have energies above the thermal energy.

  3. Can selves evolve in the sense that the size of the CD increases in quantum leaps so that the corresponding time scale T = 2^k T0 of memory and planned action increases? Geometrically this kind of leap would mean that the CD becomes a sub-CD of a larger CD, either at the level of conscious experience or in an absolute sense. This leap can occur in two senses: as an increase of the largest p-adic time scale in the personal hierarchy of space-time sheets or as an increase of the largest value of the Planck constant in the personal dark matter hierarchy. At the level of the individual this would mean the emergence of increasingly lower frequencies in the generalization of EEG and of levels of the dark matter hierarchy with a large value of the Planck constant.

  4. In the 2-D illustration, the leap leading to a higher level of the self hierarchy would mean simply the continuation of the CD to the right or left. Since the preferred M² is contained in the tangent space of space-time surfaces, and since the preferred M² plays a key role in the dark matter hierarchy too, one must ask whether the 2-D illustration might have some deeper truth in it.

3. New view about arrow of time

Perhaps the most fundamental problem related to the notion of time concerns the relationship between experienced time and geometric time. The two notions are definitely different: think only of the irreversibility of experienced time versus the reversibility of geometric time, and of the absence of the future in experienced time. Also the deterministic character of dynamics in geometric time is in conflict with the notion of free will supported by direct experience.

In the standard materialistic ontology experienced time and geometric time are identified. In the naivest picture the flow of time is interpreted in terms of the motion of a 3-D time=constant surface of space-time towards the geometric future, without any explanation for why this kind of motion would occur. This identification is plagued by several difficulties. In special relativity the difficulties relate to the impossibility of defining the notion of simultaneity in a unique manner, and the only possible way to save this notion seems to be the replacement of the time=constant 3-surface with the past directed light-cone assignable to the world-line of the observer. In general relativity additional difficulties are caused by general coordinate invariance unless one generalizes the picture of special relativity: problems are however caused by the fact that past light-cones make sense only locally. In quantum physics, quantum measurement theory leads to a paradoxical situation since the observed localization of the state function reduction to a finite space-time volume is in conflict with the determinism of the Schrödinger equation.

TGD forces a new view about the relationship between experienced and geometric time. Although the basic paradox of quantum measurement theory disappears the question about the arrow of geometric time remains.

  1. Selves correspond to CDs and their own sub-WCWs. These sub-WCWs and their projections to the imbedding space do not move anywhere. Therefore the standard explanation for the arrow of geometric time cannot work. Neither can the experience about the flow of time correspond to quantum leaps increasing the size of the largest CD contributing to the conscious experience of the self.

  2. The only plausible interpretation is based on quantum classical correspondence and the fact that space-times are 4-surfaces of the imbedding space. If quantum jump corresponds in the first approximation to a shift of the quantum superposition of space-time sheets towards the geometric past (as quantum classical correspondence suggests), one can indeed understand the arrow of time. Space-time surfaces simply shift backwards with respect to the geometric time of the imbedding space and therefore with respect to the 8-D perceptive field defined by the CD. This creates in the materialistic mind a kind of temporal variant of the train illusion. Space-time as a 4-surface and macroscopic and macro-temporal quantum coherence are absolutely essential for this interpretation to make sense.

Why should this shifting always take place in the direction of the geometric past of the imbedding space? What seems clear is that the asymmetric construction of zero energy states should correlate with the preferred direction. If the question is about probabilities, the basic question would be why the probabilities for shifts in the direction of the geometric past are higher. Here some alternative attempts to answer this question are discussed.

  1. Cognition and time relate to each other very closely and the required fusion of real physics with various p-adic physics of cognition and intentionality could also have something to do with the asymmetry. Indeed, in the p-adic sectors the transcendental values of p-adic light-cone proper time coordinate correspond to literally infinite values of the real valued light-cone proper time, and one can say that most points of p-adic space-time sheets serving as correlates of thoughts and intentions reside always in the infinite geometric future in the real sense. Therefore cognition and intentionality would break the symmetry between positive and negative energies and geometric past and future, and the breaking of arrow of geometric time could be seen as being induced by intentional action and also due to the basic aspects of cognitive experience.

  2. Zero energy ontology also suggests a possible reason for the asymmetry. Standard quantum mechanics encourages the identification of the space of negative energy states as the dual of the space of positive energy states. There are two kinds of duals. The Hilbert space dual is identified as the space of continuous linear functionals from the Hilbert space to the coefficient field and is isometrically anti-isomorphic with the Hilbert space. This justifies the bra-ket notation. In the case of a vector space the relevant notion is the algebraic dual. The algebraic dual can be identified as an infinite direct product of copies of the coefficient field regarded as a 1-dimensional vector space. The direct product is defined as the set of functions from an infinite index set I to the disjoint union of infinitely many copies of the coefficient field indexed by I. The infinite-dimensional vector space itself corresponds to the infinite direct sum, consisting of functions which are non-vanishing for a finite number of indices only. Hence the vector space dual in the infinite-dimensional case contains many more states than the vector space and does not have an enumerable basis.

    If negative energy states correspond to a subspace of the vector space dual containing the Hilbert space dual, the number of negative energy states is larger than the number of positive energy states. This asymmetry could correspond to a better measurement resolution at the upper light-cone boundary, so that the state space at the lower light-cone boundary would be included, via an inclusion of HFFs, in that associated with the upper light-cone boundary. Geometrically this would mean the possibility to glue to the upper light-cone boundary CDs which can be smaller than those associated with the lower one.

  3. The most convincing candidate for an answer comes from consciousness theory. One must also understand why the contents of sensory experience are concentrated around a narrow time interval whereas the time scales of memories and anticipation are much longer. The proposed mechanism is that the resolution of conscious experience is higher at the upper boundary of the CD. Since zero energy states correspond to light-like 3-surfaces, this could be a result of self-organization rather than a fundamental physical law.

    1. The key assumption is that there are CDs within CDs and that the vertices of generalized Feynman diagrams are contained within sub-CDs. It is not assumed that CDs are glued to the upper boundary of the CD, since the arrow of time results from self-organization when the distribution of sub-CDs concentrates around the upper boundary of the CD. A category theoretical formulation for generalized Feynman diagrammatics based on this picture has been developed.

    2. CDs define the perceptive field for the self. Selves are curious about the space-time sheets outside their perceptive field in the geometric future (a relative notion) of the imbedding space and perform quantum jumps tending to shift the superposition of the space-time sheets in the direction of the geometric past (past defined as the direction of the shift!). This creates the illusion that there is a time=constant front of consciousness moving to the geometric future in a fixed background space-time, an analog of the train illusion.

    3. The fact that news comes from the upper boundary of the CD implies that the self concentrates its attention on this region and improves the resolution of sensory experience and quantum measurement there. The sub-CDs generated in this manner correspond to mental images with contents about this region. As a consequence, the contents of conscious experience, in particular sensory experience, tend to be about the region near the upper boundary.

    4. This mechanism in principle allows the arrow of geometric time to vary and to depend on the p-adic length scale and the level of the dark matter hierarchy. Phase transitions forcing the arrow of geometric time to be the same everywhere are however plausible, for the reason that the lower and upper boundaries of a given CD must possess the same arrow of geometric time.

For details see the chapter TGD as a Generalized Number Theory I: p-Adicization Program.

Sunday, September 14, 2008

The most recent vision about zero energy ontology and p-adicization

The generalization of the number concept obtained by fusing reals and p-adics along rationals and common algebraics is the basic philosophy behind p-adicization. This however requires that it is possible to speak about rational points of the imbedding space, and the basic objection against the notion of rational points common to the real and various p-adic variants of the imbedding space is the necessity to fix some special coordinates, in turn implying the loss of manifest general coordinate invariance. The isometries of the imbedding space could save the situation provided one can identify some special coordinate system in which the isometry group reduces to its discrete subgroup. The loss of the full isometry group could be compensated by assuming that WCW is a union over sub-WCWs obtained by applying isometries on a basic sub-WCW with a discrete subgroup of isometries.

The combination of zero energy ontology, realized in terms of a hierarchy of causal diamonds, and the hierarchy of Planck constants, providing a description of dark matter and leading to a generalization of the notion of imbedding space, suggests that it is possible to realize this dream. The article TGD: What Might be the First Principles? provides a brief summary of the recent state of quantum TGD, helping to understand the big picture behind the following considerations.

1. Zero energy ontology briefly

  1. The basic construct in the zero energy ontology is the space CD×CP2, where the causal diamond CD is defined as an intersection of future and past directed light-cones with a time-like separation between their tips, regarded as points of the underlying universal Minkowski space M⁴. In zero energy ontology physical states correspond to pairs of positive and negative energy states located at the boundaries of the future and past directed light-cones of a particular CD. CDs form a fractal hierarchy, and one can glue smaller CDs within a larger CD along the upper light-cone boundary along a radial light-like ray: this construction recipe makes it possible to understand the asymmetry between positive and negative energies, why the arrow of experienced time corresponds to the arrow of geometric time, and also why the contents of sensory experience are located in such a narrow interval of geometric time. One can imagine evolution to occur as quantum leaps in which the size of the largest CD in the hierarchy of personal CDs increases in such a manner that it becomes a sub-CD of a larger CD. The p-adic length scale hypothesis follows if the values of the temporal distance T between the tips of the CD come in powers of 2, T = 2ⁿT0. All conserved quantum numbers for zero energy states have vanishing net values. The interpretation of zero energy states in the framework of positive energy ontology is as physical events, say scattering events, with the positive and negative energy parts of the state interpreted as the initial and final states of the event.

  2. In the realization of the hierarchy of Planck constants CD×CP2 is replaced with a Cartesian product of book-like structures formed by almost-copies of CDs and CP2s defined by singular coverings and factor spaces of CD and CP2, with singularities corresponding to the intersection M²∩CD and the homologically trivial geodesic sphere S² of CP2 for which the induced Kähler form vanishes. The coverings and factor spaces of CDs are glued together along the common M²∩CD. The coverings and factor spaces of CP2 are glued together along the common geodesic sphere S². The choice of a preferred M² as a subspace of the tangent space of X⁴ at all its points, having an interpretation as the space of non-physical polarizations, brings M² into the theory also in a different manner. S² in turn defines a subspace of the much larger space of vacuum extremals as surfaces inside M⁴×S².

  3. The configuration space (the world of classical worlds, WCW) decomposes into a union of sub-WCWs corresponding to different choices of M² and S² and also to different choices of the quantization axes of spin, energy, and color isospin and hyper-charge for each choice of this kind. This means a breaking down of the isometries to a subgroup. This can be compensated by the fact that the union can be taken over the different choices of this subgroup.

  4. p-Adicization requires a further breakdown to discrete subgroups of the resulting subgroups of the isometry groups, but again a union over sub-WCWs corresponding to different choices of the discrete subgroup can be assumed. Discretization relates naturally also to the notion of number theoretic braid.

Consider now the critical questions.

  1. Very naively one could think that center-of-mass wave functions in the union of sectors could give rise to representations of the Poincare group. This does not conform with zero energy ontology, where energy-momentum should be assignable to, say, the positive energy part of the state and where these degrees of freedom are expected to be pure gauge degrees of freedom. If zero energy ontology makes sense, then the states in the union over the various copies corresponding to different choices of M2 and S2 would give rise to wave functions having no dynamical meaning. This would bring in nothing new, so that one could fix the gauge by choosing a preferred M2 and S2 without losing anything. This picture is favored by the interpretation of M2 as the space of longitudinal polarizations.

  2. The crucial question is whether it is really possible to speak about zero energy states for a given sector defined by the generalized imbedding space with fixed M2 and S2. Classically this is possible and conserved quantities are well defined. In the quantal situation the presence of the light-cone boundaries breaks full Poincare invariance although the infinitesimal version of this invariance is preserved. Note that the basic dynamical objects are the 3-D light-like "legs" of the generalized Feynman diagrams.

2. Definition of energy in zero energy ontology

Can one then define the notion of energy for positive and negative energy parts of the state? There are two alternative approaches depending on whether one allows or does not allow wave-functions for the positions of tips of light-cones.

Consider first the naive option for which four momenta are assigned to the wave functions assigned to the tips of CD:s.

  1. The condition that the tips are at time-like distance does not allow a separation into a product but only wave functions of the following kind

    Ψ = exp(ip·m) Θ(m^2) Θ(m^0) × Φ(p) ,  m = m+ - m- .

    Here m+ and m- denote the positions of the tips of the two light-cones and Θ denotes the step function. Φ denotes the configuration space spinor field in the internal degrees of freedom of the 3-surface. One can also introduce a decomposition into particles by introducing sub-CD:s glued to the upper light-cone boundary of CD.

  2. The first criticism is that one has only a local eigenstate of the 4-momentum operators p± = (h/2π) ∇/i everywhere except at the boundaries and at the tips of the CD, with exact translational invariance broken by the two step functions, which have a natural classical interpretation. The second criticism is that the quantization of the temporal distance between the tips to T = 2^k T0 is in conflict with translational invariance and reduces it to a discrete scaling invariance.
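The first criticism admits a simple numerical illustration. The following is a toy 1-D analog of my own making, not TGD dynamics: multiplying a plane wave by a step function destroys the exact momentum eigenstate property, spreading the momentum-space distribution.

```python
import numpy as np

# Toy 1-D analog of the first criticism above (an illustration, not TGD
# dynamics): a plane wave exp(i p0 x) restricted by a step function
# theta(x) is no longer an exact momentum eigenstate - the cutoff spreads
# the momentum-space distribution.
N, L = 4096, 200.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
p0 = 2 * np.pi * 64 / L                     # a momentum on the FFT grid

def momentum_spread(psi):
    """Standard deviation of the momentum distribution |FFT(psi)|^2."""
    prob = np.abs(np.fft.fft(psi))**2
    prob /= prob.sum()
    k = np.fft.fftfreq(N, d=L/N) * 2 * np.pi
    mean = (k * prob).sum()
    return np.sqrt((((k - mean)**2) * prob).sum())

print(momentum_spread(np.exp(1j * p0 * x)))            # ~0: exact eigenstate
print(momentum_spread(np.exp(1j * p0 * x) * (x > 0)))  # > 0: broken by theta
```

The TGD state carries two step functions in M^4 rather than one in 1-D, but the mechanism by which they leave only a local momentum eigenstate is the same.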

The less naive approach relies on the super-conformal structures of quantum TGD, assumes a fixed value of T, and therefore allows the crucial quantization condition T = 2^k T0.

  1. Since the light-like 3-surfaces assignable to the incoming and outgoing legs of the generalized Feynman diagrams are the basic objects, one can hope to have enough translational invariance to define the notion of energy. If translations are restricted to time-like translations acting in the direction of the future (past), then one has local translational invariance of the dynamics for the classical field equations inside δM4± as a kind of semigroup. Also the M4 translations leading from the light-like 2-surfaces to the interior of X4 act as translations. Classically these restrictions correspond to non-tachyonic momenta defining the allowed directions of translations realizable as particle motions. These two kinds of translations have been assigned to super-canonical conformal symmetries at δM4±×CP2 and super Kac-Moody type conformal symmetries at light-like 3-surfaces. Equivalence Principle in the TGD framework states that these two conformal symmetries define a structure completely analogous to a coset representation of conformal algebras so that the four-momenta associated with the two representations are identical.

  2. The condition selecting the preferred extremals of Kähler action is induced by a global selection of M2 as a plane belonging to the tangent space of X4 at all its points. The M4 translations of X4 as a whole in general respect the form of this condition in the interior. Furthermore, if the M4 translations are restricted to M2, also the condition itself - rather than only its general form - is respected. This observation, the earlier experience with the p-adic mass calculations, and also the treatment of quarks and gluons in QCD encourage one to consider the possibility that translational invariance should be restricted to M2 translations so that mass squared, longitudinal momentum, and transversal mass squared would be well defined quantum numbers. This would be enough to realize zero energy ontology. Encouragingly, M2 appears also in the generalization of the causal diamond to a book-like structure forced by the realization of the hierarchy of Planck constants at the level of the imbedding space.

  3. That the cm degrees of freedom for CD would be gauge-like degrees of freedom sounds strange. The paradoxical feeling disappears as one realizes that this is not the case for sub-CD:s, which indeed can have non-trivial correlation functions, with either the upper or the lower tip of the CD playing a role analogous to that of an argument of an n-point function in the QFT description. One can also say that the largest CD in the hierarchy defines the infrared cutoff.

3. p-Adic variants of the imbedding space

Consider now the construction of p-adic variants of the imbedding space.

  1. Rational values of p-adic coordinates are non-negative so that the light-cone proper time a4,+ = √(t^2-z^2-x^2-y^2) is the unique Lorentz invariant choice for the p-adic time coordinate near the lower tip of CD. For the upper tip the identification of a4 would be a4,- = √((t-T)^2-z^2-x^2-y^2). In the p-adic context the simultaneous existence of both square roots would pose additional conditions on T. For 2-adic numbers T = 2^n T0, n ≥ 0 (or more generally T = ∑_(k ≥ n0) b_k 2^k), would make it possible to satisfy these conditions, and this would be one additional reason for T = 2^n T0 implying the p-adic length scale hypothesis. The remaining coordinates of CD are naturally the hyperbolic cosines and sines of the hyperbolic angle η±,4 and the cosines and sines of the spherical coordinates θ and φ.

  2. The existence of the preferred plane M2 of un-physical polarizations suggests that the 2-D light-cone proper times a2,+ = √(t^2-z^2) and a2,- = √((t-T)^2-z^2) can also be considered. The remaining coordinates would naturally be η±,2 and the cylindrical coordinates (ρ,φ).

  3. The transcendental values of a4 and a2 are literally infinite as real numbers and could be visualized as points in the infinitely distant geometric future so that the arrow of time might be said to emerge number theoretically. For the M2 option p-adic transcendental values of ρ are infinite as real numbers so that also spatial infinity could be said to emerge p-adically.

  4. The selection of the preferred quantization axes of energy and angular momentum, unique apart from a Lorentz transformation of M2, would have a purely number theoretic meaning in both cases. One must allow a union over sub-WCW:s labeled by points of SO(1,1). This suggests a deep connection between number theory, quantum theory, quantum measurement theory, and even a quantum theory of mathematical consciousness.

  5. In the case of CP2 there are three real coordinate patches involved. The compactness of CP2 allows one to use the cosines and sines of the preferred angle variables for a given coordinate patch.

    ξ1 = tan(u) × cos(Θ/2) × exp(i(Ψ+Φ)/2) ,

    ξ2 = tan(u) × sin(Θ/2) × exp(i(Ψ-Φ)/2) .

    The ranges of the variables u, Θ, Φ, Ψ are [0,π/2], [0,π], [0,4π], [0,2π] respectively. Note that u naturally has only positive values in the allowed range. S2 corresponds to the values Φ = Ψ = 0 of the angle coordinates.

  6. The rational values of the (hyperbolic) cosine and sine correspond to Pythagorean triangles having sides of integer length and thus satisfying m^2 = n^2 + r^2 (m^2 = n^2 - r^2 in the hyperbolic case). These conditions are equivalent and allow the well-known explicit solution. One can construct a p-adic completion for the set of Pythagorean triangles by allowing p-adic integers which are infinite as real integers as solutions of the conditions m^2 = n^2 ± r^2. These angles correspond to genuinely p-adic directions having no real counterpart. Hence one obtains a p-adic continuum also in the angle degrees of freedom. Algebraic extensions of the p-adic numbers bringing in the cosines and sines of the angles π/n lead to a hierarchy of increasingly refined algebraic extensions of the generalized imbedding space. Since the different sectors of WCW directly correspond to correlates of selves, this means a direct correlation with the evolution of mathematical consciousness. Trigonometric identities allow one to construct points which in the real context correspond to sums and differences of angles.

  7. Negative rational values of the cosines and sines correspond as p-adic integers to infinite real numbers, and it seems that one must use several coordinate patches obtained as copies of the octant (x ≥ 0, y ≥ 0, z ≥ 0). An analogous picture applies in CP2 degrees of freedom.

  8. The expressions for the metric tensor and spinor connection of the imbedding space in the proposed coordinates make sense as p-adic numbers in the algebraic extension considered. The induction of the metric, spinor connection, and curvature makes sense provided that the gradients of the coordinates with respect to the internal coordinates of the space-time surface belong to the extension. The most natural choice of the space-time coordinates is as a subset of the imbedding space coordinates in a given coordinate patch. If the remaining imbedding space coordinates can be chosen to be rational functions of these preferred coordinates, with coefficients in the algebraic extension of p-adic numbers considered, for the preferred extremals of Kähler action, then also the gradients satisfy this condition. This is a highly non-trivial condition on the extremals and, if it works, might fix completely the space of exact solutions of the field equations. Space-time surfaces are also conjectured to be hyper-quaternionic; this condition might relate to the simultaneous hyper-quaternionicity and the Kähler extremal property. Note also that this picture would provide a partial explanation for the decomposition of the imbedding space into sectors dictated also by quantum measurement theory and the hierarchy of Planck constants.
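The well-known explicit solution mentioned in item 6 is Euclid's parametrization of Pythagorean triples. A minimal sketch (the function name is mine) of how the rational cosines and sines arise from it:

```python
from fractions import Fraction
from math import gcd

def primitive_triples(max_u):
    """Euclid's parametrization: for u > v > 0 with u, v coprime and of
    opposite parity, (u^2 - v^2, 2uv, u^2 + v^2) is a primitive
    Pythagorean triple."""
    for u in range(2, max_u + 1):
        for v in range(1, u):
            if (u - v) % 2 == 1 and gcd(u, v) == 1:
                yield u*u - v*v, 2*u*v, u*u + v*v

# Rational points on the unit circle: cos = n/m, sin = r/m with
# m^2 = n^2 + r^2, as in item 6 above.
for n, r, m in primitive_triples(4):
    c, s = Fraction(n, m), Fraction(r, m)
    assert c*c + s*s == 1
    print(f"m = {m}: cos = {c}, sin = {s}")
```

The p-adic completion described in item 6 would then allow the integers m, n, r to become p-adic integers that are infinite as real integers.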

4. p-Adic variants for the sectors of WCW

One can also wonder about the most general definition of the p-adic variants of the sectors of the world of classical worlds.

  1. The restriction to surfaces expressible in terms of rational functions with coefficients which are rational numbers or belong to an algebraic extension of rationals means that the world of classical worlds can be regarded as a discrete set and there would be no difference between the real and p-adic worlds of classical worlds: a rather unexpected conclusion.

  2. One can of course ask whether one should perform a completion also for WCW:s. In the real context this would mean completing the rational coefficients of a rational function to arbitrary real coefficients and perhaps also allowing Taylor and Laurent series as limits of rational functions. In the p-adic case the integers defining the rationals could be allowed to become p-adic transcendentals, infinite as real numbers. Also now Laurent series could be considered.

  3. In this picture there would be a close analogy between the structure of the generalized imbedding space and WCW. Different WCW:s could be said to intersect in the space formed by rational functions with coefficients in an algebraic extension of rationals, just as the real and p-adic variants of the imbedding space intersect along rational points. In the spirit of algebraic completion one might hope that the expressions for the various physical quantities, say the value of Kähler action, Kähler function, or at least the exponent of Kähler function (at least for the maxima of Kähler function), could be defined by analytic continuation of their values from these sub-WCW:s to the various number fields. The matrix elements for p-adic-to-real phase transitions of zero energy states interpreted as intentional actions could be calculated in the intersection of the real and p-adic WCW:s by interpreting everything as real.

For details see the chapter TGD as a Generalized Number Theory I: p-Adicization Program.

Wednesday, September 03, 2008

Dark nuclear strings as analogs of DNA-, RNA- and amino-acid sequences and baryonic realization of genetic code

In an earlier posting I considered the possibility that the evolution of the genome might not be random but be controlled by the magnetic body, and that various DNA sequences might be tested in the virtual world made possible by the virtual counterparts of bio-molecules realized in terms of the homeopathic mechanism as it is understood in the TGD framework. The minimal option is that virtual DNA sequences have flux tube connections to the lipids of the cell membrane so that their quality as hardware of tqc (topological quantum computation) can be tested, but that there is no virtual variant of the transcription and translation machinery. One can however ask whether also virtual amino-acids could be present and whether this could provide deeper insights into the genetic code.

  1. Water molecule clusters are not the only candidates for the representatives of linear molecules. An alternative candidate for the virtual variants of linear bio-molecules is provided by dark nuclei consisting of strings of scaled-up dark variants of neutral baryons bound together by color bonds having the size scale of an atom, which I have introduced in the model of cold fusion and plasma electrolysis, both taking place in a water environment. Colored flux tubes defining braidings would generalize this picture by allowing transversal color magnetic flux tube connections between these strings.

  2. Baryons consist of 3 quarks just as DNA codons consist of three nucleotides. Hence an attractive idea is that codons correspond to baryons obtained as open strings with quarks connected by two color flux tubes. The minimal option is that the flux tubes are neutral. One can also argue that the minimization of Coulomb energy allows only neutral dark baryons. The question is whether the neutral dark baryons constructed as strings of 3 quarks using neutral color flux tubes could realize the 64 codons and whether the 20 amino-acids could be identified as equivalence classes of some equivalence relation between the 64 fundamental codons in a natural manner.

The following model indeed reproduces the genetic code directly from a model of dark neutral baryons as strings of 3 quarks connected by color flux tubes.

  1. Dark nuclear baryons are considered as a fundamental realization of DNA codons and constructed as open strings of 3 dark quarks connected by two colored neutral flux tubes. DNA sequences would in turn correspond to sequences of dark baryons. It is assumed that the net charge of the dark baryons vanishes so that Coulomb repulsion is minimized.

  2. One can classify the states of the open 3-quark string by the total charges and spins associated with the 3 quarks and with the two color bonds. The total em charge of the quarks varies in the range ZB ∈ {2,1,0,-1} and the total color bond charge in the range Zb ∈ {2,1,0,-1,-2}. Only neutral states are allowed. The total quark spin projection varies in the range JB = 3/2, 1/2, -1/2, -3/2 and the total flux tube spin projection in the range Jb = 2, 1, 0, -1, -2. If one takes for a given total charge, assumed to be vanishing, one representative from each class (JB, Jb), one obtains 4×5 = 20 states, which is the number of amino-acids. Thus the genetic code might be realized at the level of baryons by mapping the neutral states with a given spin projection to a single representative state with the same spin projection.

  3. The states of dark baryons in quark degrees of freedom can be constructed as representations of the rotation group and the strong isospin group. The tensor product 2⊗2⊗2 is involved in both cases. Physically it is known that only the representations with isospin 3/2 and spin 3/2 (Δ resonance) and isospin 1/2 and spin 1/2 (proton and neutron) are realized. The spin-statistics problem forced the introduction of quark color (this means that one cannot construct the codons as sequences of 3 nucleons!).

  4. The second nucleon spin doublet has the wrong parity. Using only 4⊕2 for the rotation group would give the degeneracies (1,2,2,1). One however requires the representation 4⊕2⊕2 rather than only 4⊕2 to get 8 states with a given charge. One should transform the wrong-parity doublet to a positive-parity doublet somehow. Since the open string geometry breaks the rotational symmetry to a subgroup of rotations acting along the direction of the string, an attractive possibility is to add a stringy excitation with angular momentum projection L = -1 to the wrong-parity doublet so that the parity comes out correctly. This would give the degeneracies (1,2,3,2).

  5. In the flux tube degrees of freedom the situation is analogous to the construction of mesons from quarks and antiquarks, which gives the pion with spin 0 and the ρ meson with spin 1. States of zero charge correspond to the tensor product 2⊗2 = 3⊕1 for the rotation group. One drops the singlet and takes only the analog of the neutral ρ meson. The tensor product 3⊗3 = 5⊕3⊕1 gives 8+1 states, and keeping only the spin 2 and spin 1 states gives 8 states. The degeneracies of the states with a given spin projection for 5⊕3 are (1,2,2,2,1). The genetic code means a projection of the states of 5⊕3 to those of 5 with the same spin projection.

  6. The genetic code maps the states of (4⊕2⊕2)⊗(5⊕3) to the states of 4×5. The most natural map takes the states with a given spin to the state with the same spin so that the code is unique. This would give the degeneracies D(k) as products of the numbers DB ∈ {1,2,3,2} and Db ∈ {1,2,2,2,1}. The numbers N(k) of amino-acids coded by D(k) codons would be


    [N(1), N(2), N(3), N(4), N(6)] = [2, 7, 2, 6, 3] .

    The correct numbers for the vertebrate nuclear code are (N(1),N(2),N(3),N(4),N(6)) = (2,9,1,5,3). Some kind of symmetry breaking must take place and should relate to the emergence of stop codons. If one codon in a second-type 3-plet becomes a stop codon, the 3-plet becomes a doublet. If 2 codons in a 4-plet become stop codons, it also becomes a doublet, and one obtains the correct result (2,9,1,5,3)!
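The counting in items 4-6 can be checked directly. The following short script is my own verification, with the degeneracies and the stop-codon symmetry breaking taken from the text above:

```python
from collections import Counter

# Degeneracies per spin projection from the text above: quark spin degrees
# of freedom (4+2+2 with the wrong-parity doublet shifted by a stringy
# L = -1 excitation) give (1,2,3,2); flux tube degrees of freedom (5+3)
# give (1,2,2,2,1).
D_B = [1, 2, 3, 2]
D_b = [1, 2, 2, 2, 1]

# D(k) = D_B * D_b: how many codons project to a given representative state.
deg = Counter(dB * db for dB in D_B for db in D_b)

assert sum(k * n for k, n in deg.items()) == 64   # total number of codons
assert sum(deg.values()) == 20                    # number of amino-acids

print([deg[k] for k in (1, 2, 3, 4, 6)])   # [2, 7, 2, 6, 3]

# Symmetry breaking by stop codons: one 3-plet loses one codon and one
# 4-plet loses two codons, each becoming a doublet.
broken = [deg[1], deg[2] + 2, deg[3] - 1, deg[4] - 1, deg[6]]
print(broken)                              # [2, 9, 1, 5, 3]
```

The 1 + 2 codons removed in the last step correspond to the three stop codons of the vertebrate nuclear code.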

The conclusion is that the genetic code can be understood as a map of stringy baryonic states induced by the projection of all states with the same spin projection to a representative state with the same spin projection. The genetic code would be realized at the level of dark nuclear physics and perhaps also at the level of ordinary nuclear physics, and the biochemical representation would be only one particular higher-level representation of the code.

For details see the chapters Homeopathy in Many-Sheeted Space-time of "Bio-Systems as Conscious Holograms" and The Notion of Wave-Genome and DNA as Topological Quantum Computer of "Genes and Memes".

Sunday, August 31, 2008

Could virtual DNAs allow a controlled development of the genome?

In the previous postings I have discussed the TGD based model of homeopathy and phantom DNA, based on the possibility that water molecules receiving the magnetic bodies of bio-molecules in the homeopathic manufacture process can mimic those aspects of these molecules most relevant for their biological functions. By combining these ideas with the DNA as topological quantum computer hypothesis, one ends up with the idea that the evolution of DNA is not just random mutations plus selection, but takes place in a controlled manner, like the development of computer hardware, in a virtual mimicry of the internal chemical milieu in turn providing an abstract representation of the external world.

The fundamental question in evolutionary biology is that of the interaction between genome (G), phenotype (P), and environment (E).

  1. The standard dogma is that the information transfer from G to P is unidirectional and that environment acts on G by inducing random mutations of G, from which E selects the lucky survivors as those with the best ability to reproduce. Lamarckism represents a deviation from standard dogma by assuming direct information transfer from E to G.

  2. Genetic expression is controlled by the environment, at least by silencing, which is like selecting only a few books to be read from a big library. Cell differentiation represents a basic example of selective gene expression. DNA methylation and transposition are accepted to reflect information transfer from E to G, perhaps via P. These modifications are however believed to be short-lasting and not transferred to the offspring, since it is difficult to imagine a mechanism transferring the mutations to the germ cells.

  3. The question however remains whether the arrow G → P-E could actually close into a loop G → P-E-G so that the genome could directly respond to the changing physical environment and could transfer a successful response to the next generation.

In the TGD framework the sequence G → P-E is replaced with a closed loop G-P-M-E, to which E is attached at P by a bidirectional arrow (organisms do also modify their environment actively). The magnetic body (M) thus controls the genome and receives information from the cell membrane (P). The hierarchy of genomes (super-genome, hyper-genome,...) corresponding to the different levels of the dark matter hierarchy allows this loop to be realized in different scales rather than only at the level of a single cell.

The question is whether the magnetic body of the organism or higher level magnetic bodies could modify genomes, super-genomes, and hyper-genomes directly, perhaps by generating mutations of the genome in a short time scale; by monitoring how the genetically modified organism survives in the environment; and - if the outcome of the experiment is successful - replacing the corresponding portion of DNA with the modified DNA both in ordinary cells and in germ cells. One can even ask whether the abstract model of the external environment provided by the internal chemical milieu might be mimicked by the magnetic bodies of water molecule clusters and provide a virtual-world testing ground for the search of favorable mutations.

In the DNA as tqc vision essentially the development of a new computer hardware would be in question, and it should take place in a controlled manner and involve experimentation before going to the market, rather than by random modifications taking place in computer CPUs. A second basic aspect of the DNA as tqc paradigm is that water and bio-molecules live in symbiosis in the sense that the self-organization patterns of the cellular water flow define the tqc programs. The following guess for how the development of the computer hardware might be achieved is just a first guess but might have something to do with reality.

  1. What would be needed is a mechanism generating modifications of DNA rapidly. The mutations should be carried out using a kind of virtual DNA mimicking all the essential aspects of the symbolic dynamics associated with DNA. The magnetic bodies of DNA, consisting of flux tubes connecting the nucleotides of DNA strands to the cell membrane, satisfy these conditions since A, T, G, C are coded to exotic light quarks u, d and antiquarks ū, d̄ at the ends of the flux tubes. DNA nucleotides could be replaced with clusters of water molecules but also other options can be imagined. Note that it does not matter whether one speaks of mimicry of RNA or of DNA molecules.

  2. If the proposed model of phantom DNA and homeopathy is correct, this kind of virtual DNA exists and is generated in the phantom DNA effect as the magnetic bodies of DNA, including of course the magnetic flux tubes connecting the nucleotides to the cell membrane or to the conjugate strand of DNA.

  3. The crucial additional assumption would be that also the reversal of the phantom DNA effect is possible and corresponds to the analog of DNA replication, in which nucleotides attach to the virtual conjugate nucleotides of the virtual DNA strand, or to an RNA strand in turn transformed to a DNA strand by reverse transcription. The hypothesis would have rather strong implications for genetic engineering since homeopathic remedies of genetically engineered DNA sequences could be transferred to cell nuclei just by drinking them.

  4. Phantom DNA sequences could form populations and - as far as their properties as hardware of the topological quantum computer are involved - evolve under the selection pressures of the virtual world defined by the nuclear, cellular, and intercellular water. A competition between components of tqc hardware developed by the higher level magnetic body to realize optimally the tqc programs needed for survival would be in question. The simplest mutation of phantom DNA would replace the quark pairs at the ends of the (wormhole-) magnetic flux tube with new ones and could occur in a very short time scale. Also basic editing operations like cutting and pasting would be possible for these competing phantom DNA sequences. The winners of the competition would be transformed to actual DNA sequences by utilizing the reverse phantom DNA (or RNA) effect and be inserted into the genome. The genetic machinery performing cutting, gluing, and pasting of real DNA in a controlled manner exists. What is needed is the machinery monitoring who the winner is and making the decision to initiate the modification of the real DNA.

  5. The transfer of the mutations to germ cells could be achieved by allowing the population of virtual DNA sequences to infect the water inside the germ cells. The genetic program inducing the modification of DNA by using the winner of the tqc hardware competition should run automatically.

  6. One open question is whether the cellular or perhaps also the extracellular water should represent the physical environment and - if the answer is affirmative - how it achieves this. As a matter of fact, a considerable fraction of the water inside cells is in gel phase, and it might be that the intercellular water, which naturally defines a symbolic representation of the environment, is where the virtual evolution takes place. The internal chemical milieu certainly reflects in an abstract manner the physical environment, and the ability of water molecule clusters to mimic bio-molecules would make the representation of the chemical environment possible. Also sudden changes of the external milieu would be rapidly coded to changes in the internal milieu, which might help to achieve genetic re-organization.

For details see the chapters Homeopathy in Many-Sheeted Space-time of "Bio-Systems as Conscious Holograms" and The Notion of Wave-Genome and DNA as Topological Quantum Computer of "Genes and Memes".

Thursday, August 28, 2008

Phantom DNA effect and the notion of magnetic body

The phantom DNA effect is the fourth anomalous effect which the notion of magnetic body helps to understand (three other effects have been discussed in the three previous postings). In the phantom DNA effect [1] there is elastic scattering of coherent laser radiation from irradiated DNA. When one removes the DNA from the chamber containing it and irradiates the chamber with laser light, a weak pattern of scattered light is still produced, as if a kind of phantom DNA were there. The pattern can last for months.

Four years ago I considered an explanation of the effect based on the dropping of part of the DNA to larger space-time sheets characterized by a larger value of the p-adic prime and remaining in the vessel as the visible DNA is removed. A variant of this explanation inspired by the dark matter hierarchy is that the anomalous scattering takes place on dark DNA at wormhole flux tubes remaining in the vessel. The DNA strands would simply lose their magnetic bodies, which could be stolen by clusters of water molecules so that these become able to mimic DNA molecules as far as their magnetic bodies are concerned.

The most science fictive possibility is that flux tubes connect the vessel boundaries to the removed DNA by wormhole flux tubes which are very long and correspond to a large value of hbar. In this case the scattering would involve a phase transition increasing the value of Planck constant, a travel of the photons to the removed DNA and back, followed by a phase transition back to ordinary photons.

A similar explanation works also in the case of homeopathy and makes it possible to understand why the classic experiments of Benveniste could not be replicated when the experimenters did not know which bottles contained the treated water. In this case the molecules dissolved in water would lose their magnetic bodies as a consequence of the shaking of the homeopathic remedy, and one can say that clusters of water molecules would steal their magnetic coats. This would allow them to mimic the behavior of the molecules, and their presence would allow the immune system to develop a resistance against the real molecules. This of course works only if the cyclotron radiation from the magnetic body is responsible for the biological effects. It is known that em radiation at low frequencies is indeed responsible for the ability of molecules to recognize each other. The generation of cyclotron radiation requires metabolic energy, and the magnetic flux tubes connecting the experimenter to the treated bottle of water (correlates for directed attention) could have served as bridges along which metabolic energy could be transferred by using topological light rays (MEs serving as TGD counterparts of Alfvén waves). The experimentalists certainly had a strong desire to have successful experiments, and this helped to realize the transfer of the metabolic energy.

If this is the correct explanation of the phantom DNA effect and homeopathy, they would provide fundamental research tools for studying the physics of the magnetic bodies of bio-molecules. Since dark matter characterized by large values of Planck constant is expected to reside at the magnetic bodies, also the study of dark matter would become possible using these methods.

For details see the chapter The Notion of Wave-Genome and DNA as Topological Quantum Computer of "Genes and Memes".

References

[1] P. P. Gariaev, V. I. Chudin, G. G. Komissarov, A. A. Berezin and A. A. Vasiliev (1991), Holographic Associative Memory of Biological Systems, Proceedings SPIE - The International Society for Optical Engineering, Optical Memory and Neural Networks, v. 1621, p. 280-291. USA.

The experimental work of William Tiller about intentional imprinting of electronic devices

Phantom DNA is fourth anomalous effect in which the notion of magnetic body provides understanding (other three effects have been discussed in three previous postings). In phantom DNA effect [1] there is an elastic scattering of the coherent laser radiation from irradiated DNA. When one removes the DNA from the chamber containing it, and irradiates it by laser light, a weak pattern of scattered light is still produced: as if there were a kind of phantom DNA there. The pattern can last for months.

For years ago I considered an explanation of the effect based on dropping of part of DNA to larger space-time sheets characterized by larger value of p-adic prime and remaining in the vessel as visible DNA is removed . A variant of this explanation inspired by the dark matter hierarchy is that the anomalous scattering takes place on dark DNA at wormhole flux tubes remaining in the vessel. The DNA strands would simply lose their magnetic bodies which could be stealed by clusters of water molecules so that they become able to mimic DNA molecules as far as their magnetic bodies are considered.

The most science fictive possibility is that the flux tubes connect the vessel boundaries to the removed DNA by wormhole flux tubes which are very long and correspond to a large value of hbar. In this case the scattering would involve a phase transition increasing the value of Planck constant and a travel of photons to the removed DNA and back followed by a phase transition to ordinary photons.

A similar explanation works also in the case of homeopathy and makes it possible to understand why the classic experiments of Benveniste could not be replicated when the experimenters did not know which bottles contained the treated water. In this case the molecules dissolved in water would lose their magnetic bodies as a consequence of the shaking of the homeopathic remedy: one can say that clusters of water molecules would steal their magnetic coats. This would allow them to mimic the behavior of the molecules, and their presence would allow the immune system to develop a resistance against the real molecules. This of course works only if the cyclotron radiation from the magnetic body is responsible for the biological effects. It is known that em radiation at low frequencies is indeed responsible for the ability of molecules to recognize each other. The generation of cyclotron radiation requires metabolic energy, and the magnetic flux tubes connecting the experimenter to the treated bottle of water (correlates for directed attention) could have served as bridges along which metabolic energy could be transferred by using topological light rays (MEs, serving as TGD counterparts of Alfvén waves). The experimenters certainly had a strong desire to have successful experiments, and this helped to realize the transfer of the metabolic energy.

If this is the correct explanation of the phantom DNA effect and homeopathy, they would provide fundamental research tools for studying the physics of the magnetic bodies of bio-molecules. Since dark matter characterized by large values of Planck constant is expected to reside at the magnetic bodies, also the study of dark matter would become possible using these methods.

For details see the chapter The Notion of Wave-Genome and DNA as Topological Quantum Computer of "Genes and Memes".

References

[1] P. P. Gariaev, V. I. Chudin, G. G. Komissarov, A. A. Berezin, A. A. Vasiliev (1991), Holographic Associative Memory of Biological Systems, Proceedings SPIE - The International Society for Optical Engineering, Optical Memory and Neural Networks, v. 1621, p. 280-291, USA.

Local sidereal time, geomagnetic fluctuations, and remote mental interactions

The notion of magnetic body has become a key concept of TGD-inspired quantum biology, and it is nice to see how this notion gradually receives support from the understanding of the mysteries of biology while at the same time becoming more and more concrete. It is also nice that there is no need to reject paranormal phenomena from the TGD based world view: as a matter of fact, the magnetic body is a key player also in the understanding of these phenomena.

The article of J. Spottiswoode (2002), Geomagnetic fluctuations and free response anomalous cognition: a new understanding, submitted to the Journal of Parapsychology discusses two strange findings about remote mental interactions.

The findings

  1. There is a statistical tendency of anomalous cognition (AC, which includes telepathy, clairvoyance, and precognition) performance to concentrate in a 2-hour period around 13.30 of the local sidereal time (ST), which is the time measured using distant stars as a reference and thus running at a slightly different rate than solar time: the lag is DT = 24/365 hours ~ 3.9 minutes during 24 hours.

  2. The anticorrelation between the level of geomagnetic fluctuations and AC performance has also a maximum during 2-hour period around ~ 13.30 ST.
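The sidereal-solar lag quoted in the first finding is easy to check numerically. The sketch below (plain Python, no TGD-specific assumptions) computes the daily drift of sidereal time relative to solar time in two independent ways.

```python
# Daily drift of sidereal time relative to solar (civil) time.

# Method 1: the Earth makes one extra rotation per year relative to the Sun,
# so sidereal time gains a full 24 h over one year, i.e. 24 h / 365.25 d per day.
lag_min_per_day = 24 * 60 / 365.25   # minutes per day

# Method 2: directly from the two day lengths.
solar_day_s = 86400.0       # mean solar day in seconds
sidereal_day_s = 86164.1    # mean sidereal day in seconds
lag_min_direct = (solar_day_s - sidereal_day_s) / 60

print(f"{lag_min_per_day:.2f} min/day, {lag_min_direct:.2f} min/day")
```

Both methods give roughly 3.9 minutes per day, so a fixed sidereal hour such as 13.30 ST drifts through the whole solar day over the course of a year.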

The fact that AC performance is associated with the same sidereal hour suggests the identification of the galactic magnetosphere as a conscious entity involved with remote cognition. For interstellar and galactic magnetic fields the cyclotron time scales correspond to the time scales of human consciousness, so that also these magnetic flux quanta could receive sensory input from the biosphere and control it.

The so-called ap index measures the intensity of the fluctuations of the Earth's magnetic field. If the magnetosphere is a conscious entity, the ap index can be interpreted as a measure of the level of arousal of the magnetospheric mind. The negative correlation between ap and AC performance indicates that AC is most probable when the magnetosphere is in a "calm state of mind". This is natural, since only in this kind of situation does the noise minimally mask the signals from the galactic magnetosphere.

The local magnetic noise produced by the modern high-tech environment is much stronger than the geomagnetic noise, but this does not matter. If artificial magnetic fields correspond to the k_em = 0 level of the dark matter hierarchy, they have no effect on the higher levels of the hierarchy.

The obvious question is why the anticorrelation between anomalous cognition effect size and ap index is highest at 13.30 ST. This finding means that a particular portion of the sky defined by a definite longitude is above the head of a successful anomalous cognizer independently of the time of year. Thus there should be something special in a direction at this longitude.

The explanation of findings

The simplest explanation for these findings goes as follows.

  1. Suppose that there is a higher level conscious entity in the direction 13.30 ST at the galactic magnetic body such that the various cyclotron frequencies involved with the communications with this entity correspond to a typical time scale of anomalous cognition. This conscious entity could have the size of a galaxy, or it could correspond to a flux tube of the galactic magnetic body using the cognizer and target as sensory receptors and motor instruments, just as our magnetic body might use the neurons of our brain or our body parts.

  2. Anomalous cognition could involve positive and negative energy signals to this magnetic body and back so that essentially instantaneous AC events would be possible.

  3. The information transfer between the two kinds of flux tubes is made possible by the topological condensation of the flux tubes of B_E (the Earth's magnetic field) or its dark variant at those of the galactic magnetic field or its dark variant, and would be maximal when both are nearly vertical. Also geomagnetic noise would be transferred via wormhole contacts to the flux tubes of the galactic magnetic field and perturb these communications. Both AC and its anticorrelation with geomagnetic noise would be maximal when the flux tubes of the magnetic fields in question are approximately parallel. Since the flux tubes of B_E are approximately vertical, this is the case when the galactic center is directly above the head. This would explain the special value of sidereal time. One can say that the magnetic flux tubes of the interstellar magnetic field define a kind of cosmic umbilical cord, which might serve as a correlate for the tunnel experience associated with NDEs.

  4. If signals to the geometric past and back (both are possible in the zero energy ontology replacing the standard positive energy ontology in the formulation of quantum TGD) are involved, the time and length scales would be measured using 10^5 years as a unit. The signals themselves would be coded using frequencies characterizing the time scales of neural consciousness, as kinds of ripples on the very slowly oscillating background signal, just as perturbations due to nerve pulses interfere with EEG rhythms. Since remote psychokinesis and anomalous cognition should rely on the same mechanism, the first guess for the time scale involved with these signals is the time lag of 13 to 17 seconds involved with the remote realization of intentions by Qigong masters: the interpretation as a typical duration of charge entanglement was already proposed. It would not be surprising if the time scale of entanglement would determine also the scale of cyclotron frequencies. This would mean the importance of the frequencies in the range .06 to .08 Hz for anomalous cognition.
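The frequency range quoted in the last item follows directly from the 13-17 second time lags via f = 1/T; a minimal check:

```python
# Convert the 13-17 s time lags to frequencies using f = 1/T.
lag_short, lag_long = 13.0, 17.0   # seconds

f_high = 1 / lag_short   # upper end of the band, Hz
f_low = 1 / lag_long     # lower end of the band, Hz

print(f"frequency range: {f_low:.3f} - {f_high:.3f} Hz")
```

Rounded to one significant digit this is the .06 to .08 Hz range mentioned above.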

For details see the chapter Bio-Systems as Conscious Holograms of the book with the same title.

A possible realization of water memory

Benveniste's discovery of water memory initiated a quite dramatic sequence of events. The original experiment involved the homeopathic treatment of water by a human antigen. This meant diluting the water solution of the antigen so that the concentration of the antigen became extremely low. In accordance with homeopathic teachings, human basophils reacted to this solution.

The discovery was published in Nature, and due to the strong polemic raised by the publication of the article, it was decided to test the experimental arrangement. The experimental results were reproduced under the original conditions. Then it was discovered that the experimenters knew which bottles contained the treated water. The modified experiment, in which the experimenters did not possess this information, failed to reproduce the results; the conclusion was regarded as obvious, and Benveniste lost his laboratory, among other things. Obviously any model taking the effect to be real, rather than an astonishingly simplistic attempt of top scientists to cheat, should explain also this finding.

The model based on the notion of field body and a general mechanism of long term memory makes it possible to explain both the memory of water and why it failed under the conditions described.

1. A model for the water memory and homeopathic effect

  1. Also molecules have magnetic field bodies acting as intentional agents controlling the molecules. Nano-motors do not only look like co-operating living creatures but actually are such. The field body of a molecule contains, besides the static magnetic and electric parts, also dynamical parts characterized by frequencies and temporal patterns of fields. To be precise, one must speak of both field bodies and relative field bodies characterizing the interactions of molecules. The right-brain-sings, left-brain-talks metaphor might generalize to all scales, meaning that representations based on both frequencies and temporal pulse patterns with a single frequency could be utilized.

  2. The effects of a complex bio-molecule on other bio-molecules (say of an antigen on a basophil) in water could be characterized to some degree by the temporal patterns associated with the dynamical part of its field body, and bio-molecules could recognize each other via these patterns. This would mean that a symbolic level would be present already in the interactions of bio-molecules. Cyclotron frequencies are the most natural candidates for the frequency signatures, and the fact that frequencies in the 10 kHz range are involved supports this view.

  3. The original idea was that water molecule clusters are able to mimic the bio-molecules themselves: say, their vibrational and rotational spectra could coincide with those of the molecules in a reasonable approximation. A more natural idea is that they can mimic their field bodies. Homeopathy could rely on an extremely simple effect: water molecule clusters would steal the magnetic bodies of the molecules used to manufacture the homeopathic remedy. The shaking of the bottle containing the solution would enhance the probability for a bio-molecule to lose its magnetic body in this manner. For instance, water could produce fake copies of, say, antigens, recognized by basophils reacting accordingly, if the reaction is based on the interaction with the magnetic body of the antigen.

  4. The basic objection against this picture is that it does not explain why the repeated dilution works. Rather, it seems that the dilution of molecules reduces also the density of the mimicking pseudo-molecules. Even more, the potency of the homeopathic remedy is claimed to increase as the dilution factor increases. Also alcohol is used instead of water, so that also alcohol must allow the homeopathic mechanism. (I am grateful to Ulla Matfolk for the questions which made me realize these objections.)

    1. The only way out seems to be that the magnetic bodies, or the water molecule clusters having these magnetic bodies, can replicate. The shaking of the remedy could provide the needed metabolic energy, so that the population of magnetic bodies grows to a limiting density determined by the metabolic energy feed. In principle it would be possible to infect an unlimited amount of water by these pseudo-molecules. While in the bottle the population would be in a dormant state, but in the body of the patient it would wake up, form a population of molecular actors, and stimulate the immune system to develop an immune response to the real molecule.

    2. The potency of the homeopathic remedy is claimed to increase with the increased dilution factor. This would suggest that the continued dilution and shaking also increases the density of pseudo-molecules, perhaps by feeding metabolic energy to the system or by some other mechanism.

    3. Also magnetic bodies must replicate in cell replication, and their role as intentional agents controlling bio-matter requires that this replication serves as a template for biochemical replication. One can indeed interpret the images of cell replication in terms of the replication of a dipole type magnetic field. This process is very simple and could have preceded biological replication. The question is therefore whether water is actually a living system in the presence of a proper metabolic energy feed. Also water's ability near the critical point for freezing to form nice patterns correlating with sound stimuli might be due to the presence of the molecular actors.

    4. This picture fits nicely with the vision that the evolution of water as this kind of life form might have happened separately and that pre-biotic chemical life forms have formed a symbiosis with living water. In the model of DNA as a topological quantum computer, the asymptotic self-organization patterns of the water flow in the vicinity of lipid layers indeed define quantum computer programs by inducing the braiding of the magnetic flux tubes connecting DNA nucleotides to lipids, so that this symbiosis would have brought in a new kind of information processing tool.

  5. The magnetic body of the molecule could mimic the vibrational and rotational spectra using harmonics of cyclotron frequencies. Cyclotron transitions could produce dark photons with large Planck constant, whose ordinary counterparts resulting from de-coherence would have large energies due to the large value of hbar and could thus induce vibrational and rotational transitions. This would provide a mechanism by which the molecular magnetic body could control the molecule. Note that also antigens possibly dropped to the larger space-time sheets could produce the effect on basophils. The transformation of large Planck constant photons to ordinary ones would reduce the frequency of the photon by the factor hbar_0/hbar: this kind of reduction represents a basic finding about water memory. The so-called scaling law states that the favored scaling factor corresponds to hbar/hbar_0 ~ 2×10^10.

  6. There is considerable experimental support for Benveniste's discovery that bio-molecules in a water environment are represented by frequency patterns, and several laboratories are replicating the experiments of Benveniste, as I learned from the lecture of Yolene Thomas at the 7th European SSE Meeting held in Röros. The scale of the frequencies involved is around 10 kHz and as such does not correspond to any natural molecular frequencies. Cyclotron frequencies associated with electrons or dark ions accompanying these macromolecules would be a natural identification if one accepts the notion of the molecular magnetic body. For ions the magnetic fields involved would have a magnitude of order .03 Tesla if 10 kHz corresponds to a scaled-up alpha band. Also Josephson frequencies would be involved if one believes that EEG has fractally scaled-up variants in molecular length scales.
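The correspondence between a ~10 kHz ion cyclotron frequency and a field of order .03 Tesla in the last item can be checked from the standard cyclotron formula f = qB/(2π m). The ion masses below are illustrative choices of mine, not taken from the text.

```python
import math

# Cyclotron frequency f = q*B / (2*pi*m) at B = 0.03 T.
Q = 1.602176634e-19      # elementary charge, C
U = 1.66053907e-27       # atomic mass unit, kg
B = 0.03                 # field strength, T (order of magnitude from the text)

def cyclotron_hz(mass_u, charge=1):
    """Cyclotron frequency in Hz for an ion of given mass (in u) and charge."""
    return charge * Q * B / (2 * math.pi * mass_u * U)

# Illustrative ions: a proton and a heavier biologically relevant ion (Ca+, ~40 u).
for name, mass in [("proton", 1.007), ("Ca+ (40 u)", 40.08)]:
    print(f"{name}: {cyclotron_hz(mass) / 1e3:.1f} kHz")
```

A singly charged ion of mass around 40 u indeed lands near 10 kHz at .03 T, while a bare proton sits at a few hundred kHz; which dark ions are actually meant is left open in the text.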

2. Why Benveniste's experiments could not be replicated?

Consider now the argument explaining the failure to replicate the experiments of Benveniste.

  1. The magnetic bodies of water molecules need metabolic energy for communications with their "biological body" using the fractally scaled analog of EEG. There is no obvious source for this energy in water. The model for protein folding and for DNA as a topological quantum computer assumes that magnetic flux tubes connecting the subject person and the target of directed attention serve as correlates for directed attention at the molecular level. This should be true also in macroscopic scales, so that the experimentalist and the bottle containing the treated water should be connected by magnetic flux tubes. If the experimenter has directed his attention to the bottle of water, the resulting magnetic flux tubes could allow a transfer of metabolic energy as radiation along massless extremals parallel to the flux tubes, defining TGD counterparts of Alfvén waves. The experimenter's strong motivation to replicate the experiments would help to realize the transfer of the metabolic energy. Experimenters not knowing which bottles were treated did not have these flux tube bridges to the bottles and were not able to provide the needed metabolic energy, and the magnetic bodies of the antigens failed to generate the cyclotron radiation making them visible to the basophils.

  2. If this interpretation is correct, then Benveniste's experiment would demonstrate, besides water memory, also psychokinesis and a direct action of the desires of the experimenters on physics at the microscopic level. Furthermore, the mere fact that we know something about some object, or direct attention to it, would mean a concrete interaction of our magnetic body with the object.

For details see the chapter Homeopathy in Many-Sheeted Space-time of "Bio-Systems as Conscious Holograms".

Sunday, August 10, 2008

TGD prediction for Higgs mass is consistent with the most recent bounds 115-135 GeV

In a previous posting I already made some passing comments concerning the newest data about the Higgs mass. Because of the importance of this topic for TGD, and for all theories claiming to be able to predict the value of the Higgs mass, a separate posting is in order.

Let us first sum up the new information about Higgs mass that emerged in the beginning of August 2008.
  1. A press release from Tevatron excluded the possibility that the mass is in a narrow interval around 170 GeV, roughly the average of the above mentioned mass values. Ironically, this mass value corresponds exactly to the Higgs mass predicted by the non-commutative variant of the standard model of Alain Connes (Alain Connes has already commented on the result).
  2. The second piece of information, discussed in detail in Tommaso Dorigo's blog, gives much stronger limits on the Higgs mass. The first plot discussed in Tommaso's blog is obtained by combining an enormous amount of information except that coming from LEPII and Tevatron, and at the 1 sigma limit bounds the Higgs mass to the interval 57-100 GeV, with the favored value around 80 GeV. At 2 sigma the interval is 39-156 GeV. If one includes also the information from LEPII and Tevatron, one obtains the mass range 115-135 GeV.

Consider now these results in TGD framework.
  1. The basic prediction of p-adic mass calculations is that elementary particles can appear in several mass scales differing by powers of 2^(1/2). Quarks do so in the TGD based model for hadron masses. This explains also why neutrinos seem to appear in several mass scales. Also the Higgs could appear in two mass scales, as the experiments giving two values of mass differing by a factor of 8 suggest: this point I have discussed earlier in my blog. A convenient manner to parametrize the TGD prediction is as

    m = 2^((94-k)/2) × 129 GeV.

  2. TGD would predict mass 129 GeV for k=94, which is near the upper end of the allowed interval 115-135 GeV obtained by combining all data. If these limits are taken absolutely seriously, one can say that TGD is able to predict correctly also the Higgs mass. Recalling that the prediction is exponentially sensitive to the value of the integer k, this could be regarded as the final triumph of TGD.
  3. The reported results are consistent with the proposal that the Higgs appears with at least two different mass values. All these mass values and even others could be there depending on the experimental conditions. k=95 would predict mass 91 GeV, which is near the upper bound of the 1 sigma range 57-100 GeV with LEPII and Tevatron data excluded. k=97 would predict mass 45.5 GeV, belonging to the lower boundary of the 2 sigma range. In particular, also the mass value 182 GeV is possible, not too far from the 160 GeV mass of the could-this-be-Higgs bump about which there was a lot of discussion some time ago in Tommaso Dorigo's blog (see for instance this).
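The half-octave ladder above is simple enough to tabulate. The sketch below uses the parametrization m(k) = 2^((94-k)/2) × 129 GeV, with the sign convention in which larger k corresponds to a longer p-adic length scale and hence a smaller mass, since that convention reproduces the mass values quoted in these postings.

```python
# p-adic half-octave mass ladder: m(k) = 2**((94 - k) / 2) * 129 GeV.
# Sign convention: larger k = longer p-adic length scale = smaller mass.

def padic_mass_gev(k, m94=129.0):
    """Predicted mass in GeV for p-adic integer k, anchored at m(94) = 129 GeV."""
    return m94 * 2 ** ((94 - k) / 2)

for k in (91, 93, 94, 95, 97):
    print(f"k = {k}: {padic_mass_gev(k):6.1f} GeV")
```

This reproduces the values discussed in the text: roughly 45.5 GeV, 91 GeV, 129 GeV, 182 GeV, and 363 GeV all sit on the same half-octave ladder.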

For details about p-adic mass calculations see the first chapters of the book p-Adic Length Scale Hypothesis and Hierarchy of Planck constants. The predictions for elementary particle masses including Higgs mass can be found at p-Adic particle Massivation: Elementary Particle Masses.

Saturday, August 09, 2008

Low energy physics need not be dirty and ugly

I have not had much time for blogging, since I have been in a very difficult life situation after the funding which lasted for half a year ceased. It was a wonderful time. Now I must try to find some job and must leave TGD for some time. Beware of billionaires interested in quarter science (an even worse idea than quarter economy);-)!

Despite my life situation and working for a long time near the burnout limit, I have been following blog discussions and the lively Higgs debate in Tommaso Dorigo's blog inspired me to write some lines.

I think very highly of Lubos Motl as a theoretical physicist. He has a lot of realism and realizes that theoretical physics at the top level is very, very abstract thinking. What astonishes, however, are some of his dogmatic beliefs. The first belief you can guess. The second is the belief that coupling constant evolution implies that low energy physics must be something chaotic and unpredictable. Why so? Could it be that all this stuff looks ugly because we do not understand it? Could something important be missing from our conceptualization?

The generalization of real physics to a fusion of real and various p-adic physics identifies this missing something and indeed leads to beautiful formulas at low energies. The basic vision is that p-adic physics at short distances corresponds to real physics at very long distances: mere continuity and smoothness in the p-adic sense give extremely powerful constraints on real physics at long length scales. There are also powerful number theoretic existence conditions involved: consider only the generalization of the Boltzmann weight to an integer power of p, quantizing the p-adic temperature to T = 1/n, as appears in the mass calculations relying on p-adic thermodynamics for mass squared represented as the scaling generator L_0.

Therefore the two (or actually very many) notions of nearness (p-adic for various primes, and real) change the situation completely by allowing one to approach coupling constant evolution from two directions. Low energy physics ceases to be the dust bin containing the dirty things. Simple universal formulas based on p-adic fractality emerge. For instance, the discretization of coupling constant evolution to half octaves in length scale and octaves in time scale brings in a hierarchy of mass scales coming as half octaves, and p-adic primes near powers of 2 are strongly favored. Masses are precisely predictable, etc. The most important applications are in biology, and one of the basic predictions is a direct connection between biology and elementary particle physics via the assignment of the .1 second time scale to the electron in zero energy ontology: the alpha rhythm indeed defines a fundamental time scale in biology.

What is so beautiful is that p-adic space-time sheets, most of whose points are at spatial and temporal infinity in the real sense, make their presence directly visible via mass formulas. The notion of infinity ceases to be something for mystics only and receives a strict physical meaning.

One particular implication, related also to the problem of the Higgs mass, is that elementary particles can appear in several mass scales differing by powers of 2^(1/2). Quarks do so in the TGD based model for hadron masses. This explains also why neutrinos seem to appear in several mass scales. Also the Higgs could appear in two mass scales, as the experiments giving two values of mass differing by a factor of 8 suggest: this I have discussed in my blog already earlier. The average of these masses would have been not too far from the 170 GeV predicted by the non-commutative variant of the standard model of Alain Connes, and is now excluded. The discussion in Tommaso's blog concerned the recently reported bounds 115-135 GeV for the Higgs mass. Recall that the data discussed in an earlier posting suggested mass values which were around 31 GeV and 420 GeV with quite wide error bars (really!).

In the TGD framework the new bounds would correspond to those for a heavier version of the Higgs: the evidence for a much lower mass from other experiments should still be there. The p-adic length scale hypothesis would predict mass 129 GeV for k=94, which is near the upper end of the allowed interval 115-135 GeV. k=97 gives 45.5 GeV, which could correspond to a lighter variant of the Higgs. k=91 would give mass 363 GeV. All these mass values and even others could be there depending on the experimental conditions. Perhaps it is time to start thinking about the basics again instead of just taking averages or neglecting half of the data!