https://matpitka.blogspot.com/2007/12/

Friday, December 28, 2007

The rise of the Super Hype Theory

In his This Week's Hype series Peter Woit reports the most recent achievements of Super Hype Theory, a successor to M-theory, which in turn followed Super String Theory. This particular hype was published in Nature (see for instance this). Also the press release by Lancaster University should give a good view of this particular breakthrough. To speak seriously, one could expect this kind of trash in New Scientist, but times are changing as theoretical particle physics goes downhill at an accelerating pace.

Very briefly: in superfluid helium one can have vortices which consist of ordinary fluid and are analogous to the magnetic vortices in a superconductor near the phase transition. In some very, very esoteric models of cosmology it is assumed that a collision of so called 3-D branes created the Big Bang. You remember correctly: branes are those higher-D objects assumed to come out of string theory by some "non-perturbative mechanism". In plain English: branes were simply decreed to exist since they promised to save the theory after it had become clear that the Kaluza-Klein scenario does not work. Unfortunately, branes did not keep their promise.

The orbits of these particular branes are 4-D surfaces identified as space-time surfaces. In TGD 4-D surfaces are fundamental objects, without the loss of the crucial conformal invariance. About this the empire remains silent, since it would be too humiliating to admit that a penniless fellow from Finland who has never had an academic position was right, so it is better to continue with superhyping. These brane collisions are believed to produce, besides the Big Bang, also vortex like defects known as cosmic strings. That string like defects exist also in superfluids is taken as a test of the superstring model. Convincing?

Try however to stay serious, because this is "serious" science by the basic criteria of "serious" science: it receives funding and is led by professors. In this same spirit, a leading popular science authority in Finland stated some time ago in public that nowadays real science can be done only by professors and their students because it has become so complex. When the innocent interviewer wondered how Einstein - a mere clerk in a patent office - could then have built his theories, the answer was that at that time physics was so incredibly simple that the ability to take a square root was enough to build a theory of gravitation. Well, after a little thought we of course realize that Einstein was not any Einstein after all.

In the middle of this superhype we should however not forget that the analogy between super-fluidity and the big bang is real and has been known for a long time. Only the claim that it has something to do with testing string theory is really weird. The defects in superfluids are string like objects (very thin 3-D surfaces in TGD), as are also cosmic strings (not in the sense of gauge theories however), magnetic flux tubes, etc. The TGD based nuclear model relies on color flux tubes connecting nucleons (yes! - this was not a typo!). Cosmic strings populate the TGD Universe during primordial cosmology and later transform to magnetic flux tubes, which play a central role also in the recent Universe and especially so in living matter. For instance, the model for topological quantum computation that I have been developing during the last month assumes that (also color-) magnetic flux quanta define the strands of braids.

The lesson is that if one accepts fractality (say p-adic fractality and the fractality implied by the hierarchy of Planck constants, as in TGD), the study of superfluids can give very important theoretical insights even about cosmology and astrophysics. But for God's sake - not in the manner claimed in this particular This Week's Hype.

P.S. See Oswald Spengler's comment in the comment section of This Week's Hype.

Thursday, December 27, 2007

Winds of change

Times are changing. Some time ago we learned in Peter Woit's blog that particle physics funding has dropped dramatically in the US. Iraq is one explanation. A much more plausible explanation is that it does not make sense to build gigantic accelerators if there is no theory able to make testable predictions. The gigantic data stream is completely useless unless we have theories making clear predictions.

A decade ago anyone talking seriously about quantum biology was regarded as a madman, and in certain circles near to me this is still the situation ("No nonsense", as these fellows express their deep convictions about what is possible and what is not). Also the idea that number theory might have something to do with physics is sheer madness for these besserwissers of science. However, from a recent blog posting of Peter Woit we learn that the agency known as DARPA is starting to fund research closely related to two basic branches of TGD which I have been developing for more than a decade.

The first program, "Geometric Langlands and Quantum Physics", is the application of the Geometric Langlands program to fundamental physics: there is a book about the "TGD as a Generalized Number Theory" program at my homepage, containing also a chapter about the number theoretical Langlands program, which might be unified with the geometric one using the notion of infinite prime.

The second program is titled "Biological Quantum Field Theory". I have 8 books about the TGD inspired theory of consciousness and of quantum biology at my homepage. DNA as a topological quantum computer is one particular application that I have been intensively developing during the last month. The irony is that it is the military which seems to be the first to realize what is important, rather than the so called "serious" scientists.

DNA as a topological quantum computer: IX

In previous postings I, II, III, IV, V, VI, VII, VIII I have discussed various aspects of the idea that DNA could act as a topological quantum computer using the fundamental braiding operation as a universal 2-gate.

There are several grand visions about the TGD Universe. One of them is as a topological quantum computer in a very general sense. Visions of this kind are always oversimplifications, but the extreme generality of the braiding mechanism suggests that also simpler systems than DNA might be performing tqc. The detailed model for tqc performed by DNA indeed leads to the idea that so called water memory could be realized in terms of braidings.
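For readers unfamiliar with tqc it may help to see how a braiding operation acts as a quantum gate in the standard, non-TGD setting. The sketch below uses the well-known Fibonacci anyon representation (the F and R matrices are the textbook ones); it is meant only as generic tqc background and makes no claim about the TGD realization.

```python
import numpy as np

# Standard Fibonacci anyon F- and R-matrices (textbook values from the
# topological quantum computation literature; nothing TGD specific here).
phi = (1 + np.sqrt(5)) / 2
F = np.array([[1 / phi,           1 / np.sqrt(phi)],
              [1 / np.sqrt(phi), -1 / phi         ]])
R = np.diag([np.exp(-4j * np.pi / 5), np.exp(3j * np.pi / 5)])

# Braid generators acting on the qubit spanned by the two fusion channels
# of three Fibonacci anyons: exchanging anyons 1,2 gives R, exchanging
# anyons 2,3 gives F R F (F is its own inverse).
sigma1 = R
sigma2 = F @ R @ F

# Both generators are unitary, and they do not commute - this is why long
# braid words can approximate an arbitrary 1-qubit gate (universality).
for s in (sigma1, sigma2):
    assert np.allclose(s @ s.conj().T, np.eye(2))
print("commutator norm:", np.linalg.norm(sigma1 @ sigma2 - sigma2 @ sigma1))
```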

A. Braid strands as flux tubes of color magnetic body

The flux tubes defining the braid strands carry a magnetic field when the supra current is on. In the TGD Universe all classical fields are expressible in terms of the four CP2 coordinates and their gradients, so that em, weak, color and gravitational fields are not independent, as they are in the standard model framework. In particular, the ordinary classical em field is necessarily accompanied by a classical color field in the case of non-vacuum extremals. This predicts color and ew fields in arbitrarily long scales, and quantum classical correspondence forces the conclusion that there exists a fractal hierarchy of electro-weak and color interactions.

Since the classical color gauge field is proportional to the Kähler form, its holonomy group is Abelian, so that effectively a U(1)×U(1) ⊂ SU(3) gauge field is in question. The generation of color flux requires colored particles at the ends of the color flux tube, so that the presence of pairs of quark and antiquark assignable to the pairs of wormhole throats at the ends of the tube is unavoidable if one accepts quantum classical correspondence.

In the case of the cell, a highly idealized model for the color magnetic flux tubes is as the flux tubes of a dipole field. The preferred axis could be determined by the position of the centrosomes forming a T-shaped structure. The DNA strands would define the idealized dipole creating this field: DNA is indeed negatively charged, and electronic currents along DNA could create the magnetic field. The flux tubes of this field would go through the nuclear and cell membranes and return back unless they end up in another cell. This is indeed required by the proposed model of tqc.

It has been assumed that the initiation of tqc means that the supra current ceases and induces the splitting of the braid strands. The magnetic flux need not however disappear completely. As a matter of fact, its presence, forced by the conservation of magnetic flux, seems to be crucial for the conservation of the braiding. Indeed, during tqc the magnetic and color magnetic flux could return from the lipid to DNA along another space-time sheet at a distance of order CP2 radius from it. Long ago I proposed that this kind of structures - which I christened "wormhole magnetic fields" - might play a key role in living matter. The wormhole contacts having quark and antiquark at their opposite throats and coding for A, T, C, G would define the places where the current flows to the "lower" space-time sheet to return back to DNA. Quarks would also generate the remaining magnetic field, and the supra current could indeed cease.

The fact that classical em fields and thus classical color fields are always present for non-vacuum extremals means that the motion of any kind of particles (space-time sheets), say a water flow, induces a braiding of the magnetic flux tubes associated with the molecules in water, provided that the temporary splitting of the flux tubes is possible. Hence the prerequisites for tqc are met in an extremely general situation, and tqc involving DNA could have developed from a much simpler form of tqc performed by water, perhaps giving rise to what is known as water memory (see this, this and this). This would also suggest that the braiding operation is induced by a controlled flow of cellular water.

B. Water memory: general considerations

With a few exceptions, so called "serious" scientists remain silent about the experiments of Benveniste and others relating to water memory (see this, this and this), in order to avoid association with the very ugly word "homeopathy".

Benveniste's discovery of water memory initiated a quite dramatic sequence of events. The original experiment involved the homeopathic treatment of water by a human antigen. This meant diluting the water solution of the antigen so that its concentration became extremely low. In accordance with homeopathic teachings, human basophils reacted to this solution.

The discovery was published in Nature, and due to the strong polemic raised by the publication of the article, it was decided to test the experimental arrangement. The experimental results were reproduced under the original conditions. Then it was discovered that the experimenters knew which bottles contained the treated water. A modified experiment, in which the experimenters did not possess this information, failed to reproduce the results; the conclusion was regarded as obvious, and Benveniste lost his laboratory, among other things. Obviously any model taking the effect as real - rather than as an astonishingly simplistic attempt of top scientists to cheat - should explain also this finding.

The model based on the notion of the field body and on the general mechanism of long term memory allows one to explain both the memory of water and why it failed under the conditions described.

  1. Also molecules have magnetic field bodies acting as intentional agents controlling the molecules. Nano-motors do not only look like co-operating living creatures but are such. The field body of a molecule contains, besides the static magnetic and electric parts, also dynamical parts characterized by frequencies and temporal patterns of the fields. To be precise, one must speak of both field bodies and relative field bodies characterizing the interactions of molecules. The "right brain sings - left brain talks" metaphor might generalize to all scales, meaning that representations based both on frequencies and on temporal pulse patterns with a single frequency could be utilized.

    The effect of a complex bio-molecule on other bio-molecules (say of an antigen on a basophil) in water could be characterized to some degree by the temporal patterns associated with the dynamical part of its field body, and bio-molecules could recognize each other via these patterns. This would mean that a symbolic level would be present already in the interactions of bio-molecules.

    If water can mimic the field bodies of molecules - at least their vibrational and rotational spectra - using water molecule clusters, then water can produce fake copies of, say, antigens, which basophils recognize and react to accordingly.

    Also the magnetic body of the molecule could mimic the vibrational and rotational spectra using harmonics of cyclotron frequencies. Cyclotron transitions could produce dark photons, whose ordinary counterparts resulting from de-coherence would have large energies due to the large value of hbar and could thus induce vibrational and rotational transitions. This would provide a mechanism by which the molecular magnetic body could control the molecule. Note that also the antigens possibly dropped to the larger space-time sheets could produce the effect on basophils.

  2. There is considerable experimental support for Benveniste's discovery that bio-molecules in a water environment are represented by frequency patterns, and several laboratories are replicating the experiments of Benveniste, as I learned from the lecture of Yolene Thomas at the 7th European SSE Meeting held in Röros [4]. The scale of the frequencies involved is around 10 kHz and as such does not correspond to any natural molecular frequencies. Cyclotron frequencies associated with electrons or with dark ions accompanying these macromolecules would be a natural identification if one accepts the notion of the molecular magnetic body. For ions the magnetic fields involved would have a magnitude of order 0.03 Tesla if 10 kHz corresponds to a scaled-up alpha band (a back-of-envelope check of this estimate is sketched after this list). Also Josephson frequencies would be involved if one believes that EEG has fractally scaled up variants in molecular length scales.

  3. Suppose that the representations of bio-molecules in water memory rely on pulse patterns representing bit sequences. The simplest realization of a bit would be as a laser like system, with bit 1 represented by a population inverted state and bit 0 by the ground state. Bits could be arranged in sequences spatially or by a variation of the zero point energy defining the frequency: for instance, an increase of frequency with time would define a temporal bit sequence. Many-sheeted lasers are the natural candidates for the laser like systems in question, since they rely on universal metabolic energy quanta. Memory recall would involve the sending of negative energy phase conjugate photons inducing a partial transition to the ground state. The presence of a metabolic energy feed would be necessary in order to preserve the memory representations.
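As a back-of-envelope check of the 10 kHz / 0.03 Tesla estimate in item 2, the following sketch computes ordinary cyclotron frequencies f = qB/(2πm); the particular ions listed are my illustrative choice, not part of the original argument.

```python
import math

# Cyclotron frequency f = q*B/(2*pi*m) in a B = 0.03 Tesla field.
e, u = 1.602e-19, 1.661e-27      # elementary charge (C), atomic mass unit (kg)
B = 0.03                         # Tesla, the field strength quoted above

# Illustrative ion selection: (charge in units of e, mass in u).
ions = {"proton": (1, 1.007), "Mg2+": (2, 24.31), "Ca2+": (2, 40.08), "Fe2+": (2, 55.85)}
for name, (Z, A) in ions.items():
    f = Z * e * B / (2 * math.pi * A * u)
    print(f"{name:7s} f_c = {f / 1e3:7.1f} kHz")
# Divalent ions of mass ~20-60 u land in the ~15-40 kHz range, i.e. the
# order of magnitude of the ~10 kHz frequency scale discussed above.
```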

C. Water memory in terms of molecular braidings

It is interesting to look at water memory from the point of view of tqc. Suppose that the molecules and water particles (space-time sheets with a size of, say, cell length scale) are indeed connected by color flux tubes defining the braid strands, and that a splitting of the braid strands can take place, so that a water flow can give rise to a braiding pattern and a tqc like process.

The shaking of the bottle containing the diluted homeopathic remedy is an essential element in the buildup of water memories also in the experiments of Benveniste. Just like the vigorous flow of sol near the inner monolayer, this process would create a water flow, and this flow would create a braiding pattern which could provide a representation for the presence of the molecules in question. Note that the hardware of the braiding could carry information about the molecules (cyclotron frequencies for ions, for instance).

The model for the formation of scaled down variants of memories in the hippocampus discussed above suggests that each half period of the theta rhythm corresponds to a tqc followed by a non-computational period during which the outcome of the tqc is expressed as 4-D nerve pulse patterns involving cyclotron frequencies and the Josephson frequency. The Josephson currents during the second half period would generate dark Josephson radiation communicating the outcome of the calculation to the magnetic body. An entire hierarchy of EEGs with varying frequency scales would be present, corresponding to the onion like structure of the magnetic body. This pattern would provide an electromagnetic representation for the presence of the antigen and could be mimicked artificially [1,2,3].

This picture might apply also in the case of water memory.

  1. The shaking might drop some fraction of the antigen molecules to dark space-time sheets, where they generate a dark color magnetic field. Because of the large value of Planck constant, super-conductivity along the color flux tubes running from the molecular space-time sheets could still be present.

  2. The TGD based model of super-conductivity involves double layered structures with the same p-adic length scale as the cell membrane (see this). The universality of the p-adic length scale hierarchy suggests that this kind of structures - but with a much lower voltage over the bilayer - could be present also in water. Interestingly, the Josephson frequency ZeV/hbar would then be much lower than for the cell membrane, so that for a given value of hbar the time scale of memory, and hence of memory recall, could be much longer (a numerical comparison is sketched after this list).

  3. Also in the case of the homeopathic remedy the communication of the result of the tqc to the magnetic body would take place via Josephson radiation. From the point of view of the magnetic body, the Josephson radiation resulting from the shaking induced tqc would replace the homeopathic remedy with a field pattern. The magnetic bodies of basophils could be cheated to produce an allergic reaction by mimicking the signal representing the outcome of this tqc. This kind of cheating was indeed done in the later experiments of Benveniste involving very low frequency electromagnetic fields in the kHz region, allowing no identification in terms of molecular transitions (magnetic body and cyclotron frequencies) [1].
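To illustrate the scaling argument of item 2, here is a minimal sketch using the text's convention f = ZeV/hbar for the Josephson frequency. The membrane voltage is the standard resting potential of about 70 mV; the much lower "water bilayer" voltage is a purely hypothetical placeholder chosen only to show how the time scale stretches.

```python
# Josephson frequency in the convention of the text, f = Z*e*V/hbar.
e, hbar = 1.602e-19, 1.055e-34   # C, J*s
Z = 2                            # charge of a Cooper pair in units of e

def josephson_frequency(V):
    return Z * e * V / hbar

V_membrane = 0.07   # V, typical cell membrane resting potential
V_water = 0.001     # V, hypothetical much lower voltage over a water bilayer

f_m, f_w = josephson_frequency(V_membrane), josephson_frequency(V_water)
print(f"membrane: {f_m:.2e} Hz, water: {f_w:.2e} Hz")
# The characteristic time scale 1/f is 70 times longer for the lower
# voltage, suggesting a correspondingly longer memory (recall) time scale.
print(f"time scale ratio: {f_m / f_w:.0f}")
```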

D. Why did the experimenter have to know which bottle contained the treated water?

Why did the experimenter have to know which bottle contained the treated water? The role of the experimenter eliminates the possibility that the (magnetic bodies of) clusters of water molecules able to mimic the (magnetic bodies of) antigen molecules electromagnetically are present in the solution at the geometric now and produce the effect. The earlier explanation for the experimenter's role was based on the idea that memory storage requires metabolic energy and that the experimenter provides it. The tqc picture suggests a variant of this model in which the experimenter makes possible the recall of the memories of water, represented as braiding patterns and realized via tqc.

D.1 Does the experimenter provide the metabolic energy needed to store the memories of water?

What could then be the explanation for the failure of the modified experiment? Each memory recall reduces the occupation of the states representing bit 1, and a continual metabolic energy feed is needed to preserve the bit sequence representations of antibodies using laser like systems as bits. This metabolic energy feed must come from some source.

By the universality of metabolic energy currencies, the population inverted many-sheeted lasers in living organisms define the most natural source of the metabolic energy. Living matter is however fighting for metabolic energy, so that there must be some system willing to provide it. The biological bodies of the experimenters are the best candidates in this respect. In this case the experimenters had even excellent motivations to provide the metabolic energy. If this interpretation is correct, then Benveniste's experiment would demonstrate besides water memory also psychokinesis and a direct action of the desires of the experimenters on physics at the microscopic level. Furthermore, the mere fact that we know something about some object, or direct our attention to it, would mean a concrete interaction of our magnetic body with the object.

D.2 Does the experimenter make possible long term memory recall?

The alternative explanation is that the experimenter makes possible long term memory recall, which also requires metabolic energy.

  1. If the braiding pattern represents the water memory, the situation changes, since the robustness of the braiding pattern suggests that this representation is still present in the geometric past (which is replaced with a new one many times). If the dark variants of the molecules created in the process are still in the water, the braid representation of water memories could be available even in the geometric now, but it is better not to make this assumption. The challenge is to understand how this information can be made conscious.

  2. What is certainly needed is that the system performs the tqc again. This would mean a fractal quantum jump involving the unitary U process and a state function reduction leading to the generation of a generalized EEG pattern. Only the sums and differences of the cyclotron frequency and the Josephson frequency would matter, so that the details of the flow inducing the braiding do not matter. The shaking process might be continuing all the subjective time in the geometric past, so that the problem is how to receive information about its occurrence. The experimenter might actually help in this respect, since the mechanism of intentional action initiates the action in the geometric past by a negative energy signal.

  3. If the magnetic body of the water in the geometric now can entangle with the geometric past, tqc would regenerate the experience about the presence of the antigen by a sharing and fusion of mental images. One can however argue that water cannot have memory recall in this time scale, since water is a quite simple creature and levels with a large enough hbar might not be present. It would seem that here the experimenter must come to the rescue.

  4. The function of the experimenter's knowledge about which bottle contains the homeopathic solution could be simply to generate time-like entanglement in the required long time scale by serving as a relay station. The entanglement sequence would be water now - experimenter now - water in the past, with "now" and "past" understood in the geometric sense. The crucial entanglement bridge between the magnetic body of the water and the experimenter would be created during the manufacture of the homeopathic remedy.

Note that these explanations do not exclude each other. It is quite possible that the experimenter also provides the metabolic energy for the bit representation of water memories, possibly induced by the long term memory recall.

This picture is of course just one possible model and cannot be taken literally. The model however suggests that the magnetic bodies of molecules indeed define the braiding; that the generalized EEG provides a very general representation for the outcome of tqc; that liquid flow provides the manner to build tqc programs - and also that shaking and sudden pulses are the concrete manner to induce visible-dark phase transitions. All this might be very valuable information if one some day in the distant future tries to build topological quantum computers in the laboratory.

E. Little personal reminiscence about flow

I cannot resist the temptation to bore the reader with something which I have already told quite too many times. The reason why I started to seriously ponder consciousness was a wonderful experience around 1985 or so, which lasted from one to two weeks - I do not remember precisely. To tell quite honestly, and knowing the reactions induced in some hard nosed "serious" scientists: my experience was that I was enlightened. The depth and beauty of this state of consciousness was absolutely stunning, and it was very hard to gradually realize that I would not get this state back.

To characterize the period of my life which I would without hesitation choose if I had to select the most important weeks of my life, the psychologist needed only two magic words - acute psychosis. The psychologist had even firmly predicted that I would soon fall into a totally autistic state! This after some routine examinations (walking along a straight line and similar tests). What incredible idiots an uncritical belief in science can make of us!

This experience made clear with a single stroke that in many respects the existing psychology does not differ much from the medicine of the Middle Ages. The benevolent people believing in this trash - modern psychologists - can cause horrible damage and suffering to their patients. As I started the serious building of a consciousness theory and learned neuroscience and biology, I began to grasp at a more general level how insane the vision of official neuroscience and biology about consciousness was. We laugh at the world view of the people of the Middle Ages, but equally well they could laugh at our modern views about what we are.

Going back to the experience: during it I saw my thoughts as extremely vivid and colorful patterns bringing to mind the paintings of Dali and Bosch. What was strange was the continual and very complex flow at the background, consisting of separate little dots. I can see this flow even now by closing my eyes lightly when in a calm state of mind. I have proposed many explanations for it and tried to figure out what this flow tries to tell me. It sounds pompous and a little bit childish in this cynical world, but this is the first time that I dare to hope of having understood the deeper message I know is there.

References

[1] J. Benveniste et al (1988). Human basophil degranulation triggered by very dilute antiserum against IgE. Nature 333:816-818.

[2] J. Benveniste et al (198?). Transatlantic transfer of digitized antigen signal by telephone link. Journal of Allergy and Clinical Immunology 99:S175 (abs.). For recent work about digital biology and further references to the work of Benveniste and collaborators see this.

[3] L. Milgrom (2001), Thanks for the memory. An article in the Guardian about the work of professor M. Ennis of Queen's University Belfast supporting the observations of Dr. J. Benveniste about water memory.

[4] E. Strand (editor) (2007), Proceedings of the 7th European SSE Meeting August 17-19, 2007, Röros, Norway. Society of Scientific Exploration.

For details see the chapter DNA as Topological Quantum Computer of "Genes and Memes".

Wednesday, December 26, 2007

DNA as a topological quantum computer: VIII

In previous postings I, II, III, IV, V, VI, VII I have discussed various aspects of the idea that DNA could act as a topological quantum computer using the fundamental braiding operation as a universal 2-gate.

In the following I will first consider the realization of the basic braiding operation: this requires some facts about phospholipids, which are summarized first. Also the realization of braid color is discussed. This requires the coding of the DNA color A, T, C, G to a property of the braid strand which is conserved, meaning that after the halting of tqc only strands with the same color can reconnect. This requires a long range correlation between the lipid and the DNA nucleotide. It seems that the strand color cannot be chemical. Quark color is an essential element of the TGD based model of high Tc superconductivity and provides a solution to the problem: the four neutral quark-antiquark pairs, with quark and antiquark at the ends of the color flux tube defining the braid strand, would provide the needed four colors.

A. Some facts about phospholipids

Phospholipids - which form about 30 per cent of the lipid content of the monolayer - contain a phosphate group. The dance of the lipids requires metabolic energy, and the hydrophilic ends of the phospholipids could provide it. They could also couple the lipids to the flow of water in the vicinity of the lipid monolayer, possibly inducing the braiding. Of course, the causal arrow could also be the opposite.

The hydrophilic part of the phospholipid is a nitrogen containing alcohol such as serine, inositol or ethanolamine, or an organic compound such as choline. Phospholipids are classified into phosphoglycerides and sphingomyelin.

A.1 Phosphoglycerides

In cell membranes, phosphoglycerides are the more common of the two, which suggests that they are involved with tqc. One speaks of phosphatidyl X, where X = serine, inositol or ethanolamine is the nitrogen containing alcohol, or X = Ch for the organic compound choline. The shorthand notation PS, PI, PE, PCh is used.

The structure of a phospholipid is most easily explained using the dancer metaphor. The two fatty chains define the hydrophobic feet of the dancer, glycerol and the phosphate group define the body providing the energy for the dance, and serine, inositol, ethanolamine or choline defines the hydrophilic head of the dancer (perhaps "deciding" the dancing pattern).

There is a lipid asymmetry in the cell membrane: PS, PE and PI (the alcohols) reside in the cytoplasmic monolayer, PC (the organic compound) and sphingomyelin in the outer monolayer. Also glycolipids are found only in the outer monolayer. The asymmetry is due to the manner in which the phospholipids are manufactured.

PS in the inner monolayer is negatively charged, and its presence is necessary for the normal functioning of the cell membrane. It activates protein kinase C, which is associated with memory function. PS slows down cognitive decline in animal models. This encourages the thought that the hydrophilic polar end of at least PS is involved with tqc, perhaps with the generation of the braiding via the coupling to the hydrodynamic flow of the cytoplasm in the vicinity of the inner monolayer.

A.2 Fatty acids

The fatty acid chains in phospholipids and glycolipids usually contain an even number of carbon atoms, typically between 14 and 24, making 6 possibilities altogether. The 16- and 18-carbon fatty acids are the most common. Fatty acids may be saturated or unsaturated, with the configuration of the double bonds nearly always cis. The length and the degree of unsaturation of the fatty acid chains have a profound effect on membrane fluidity, as unsaturated lipids create a kink preventing the fatty acids from packing together as tightly, thus decreasing the melting point (increasing the fluidity) of the membrane. Besides the number of carbon atoms, the number of unsaturated cis bonds and their positions characterize the lipid. Quite generally, there are 3n carbons after each bond. The creation of an unsaturated bond by removing an H atom from the fatty acid could be an initiating step in the basic braiding operation, creating room for the dancers. The bond should be created on both neighboring lipids simultaneously.

B. How could the braiding operation be induced?

One can imagine several models for what might happen during the braiding operation in the lipid bilayer. One such view is the following.

  1. The creation of an unsaturated bond, involving the elimination of an H atom from the fatty acid, would lead to a cis configuration and create the room needed by the dancers. This operation should be performed on both lipids participating in the braiding operation. After the braiding it might be necessary to add the H atom back to stabilize the situation. The energy needed to perform either or both of these operations could be provided by the phosphate group.

  2. The hydrophilic ends of the lipids couple the lipids to the surrounding hydrodynamic flow in the case that the lipids are able to move. This coupling could induce the braiding. The primary control of tqc would thus be exercised via the hydrodynamic flow, by generating localized vortices. There is considerable evidence for water memory, but its mechanism remains poorly understood. If also water memory is realized in terms of braid strands connecting fluid particles, DNA tqc could have evolved from water memory.

  3. The sol-gel phase transition is conjectured to be important for the quantum information processing of the cell (see this). In this transition, which can occur cyclically (also at EEG frequencies), actin filaments are assembled and lead to a gel phase resembling a solid. The sol phase could correspond to tqc and the gel phase to the phase following the halting of tqc. Actin filaments might be assignable with braid strands or bundles of them and would shield the braiding. Also microtubules might shield bundles of braid strands.

  4. Only the inner braid strands are directly connected to DNA, which also supports the view that only the inner monolayer suffers a braiding operation during tqc and that the outer monolayer should be in a "frozen" state during it. There is a net negative charge associated with the inner monolayer, possibly relating to its participation in the braiding. The vigorous hydrodynamical flows known to take place below the cell membrane could induce the braiding.

C. How could braid color be realized?

A conserved braid color is not necessary for the model, but it would imply a genetic coding of the tqc hardware, so that sexual reproduction would induce an evolution of the tqc hardware. Braid color would also make the coupling of foreign DNA to the tqc performed by the organism difficult, and would thus realize an immune system at the level of quantum information processing.

The conservation of braid color poses however considerable problems. The concentration of braid strands of the same color into patches would guarantee the conservation but would restrict the possible braidings dramatically. A more attractive option is that the strands of the same color find each other automatically by energy minimization after the halting of tqc. The electromagnetic Coulomb interaction would be the most natural candidate for the interaction in question. Braid color would define a faithful genetic code at the level of nucleotides. It would induce a long range correlation between the properties of the DNA strand and the dynamics of the cell immediately after the halting of tqc.

C.1 Chemical realization of color is not plausible

The idea that color could be a chemical property of the phospholipids does not seem plausible. The lipid asymmetry of the inner and outer monolayers excludes the assignment of color to the hydrophilic groups PS, PI, PE, PCh. Fatty acids have N = 14, ..., 24 carbon atoms, with N = 16 and 18 the most common cases, so that one could consider the possibility that the 4 most common feet pairs correspond to the four colors. It is however extremely difficult to understand how a long range correlation between the DNA nucleotide and the fatty acid pair could be created.

C.2 Could quark pairs code for braid color?

It seems that the color should be a property of the braid strand itself. In the TGD inspired model of high Tc super-conductivity (see this), wormhole contacts having u and dc, and d and uc, quarks at their two wormhole throats feed the electron's gauge flux to the larger space-time sheet (here qc denotes the antiquark of q). The long range correlation between the electrons of the Cooper pairs is created by color confinement for an appropriate scaled up variant of chromodynamics, such variants being allowed by TGD. Hence neutral pairs of colored quarks, whose members are located at the ends of the braid strand acting like a color flux tube connecting the nucleotide to the lipid, could code DNA color to QCD color.

For pairs like udc with a net em charge, the quark and the antiquark have the same sign of em charge and tend to repel each other. Hence the minimization of the electromagnetic Coulomb energy favors the neutral configurations uuc, ddc and ucu, dcd, coding for A, G (say) and their conjugates T and C. After the halting of tqc only these pairs would form with a high probability. A reconnection of the strands would mean the formation of a short color flux tube between the strands and the annihilation of a quark pair to a gluon. Note that a single braid strand would connect a DNA color to its conjugate rather than to an identical color, so that braid strands connecting two DNA strands (conjugate strands) should always traverse an even (odd) number of cell membranes.
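The Coulomb argument can be made explicit with a small sketch enumerating the ordered (quark, antiquark) assignments to the two ends of a braid strand; the mapping of the neutral pairs to A, G, T, C follows the "(say)" convention above and is not unique.

```python
from fractions import Fraction

# Em charges of u, d and their antiquarks uc, dc (qc = antiquark of q).
Q = {"u": Fraction(2, 3), "d": Fraction(-1, 3),
     "uc": Fraction(-2, 3), "dc": Fraction(1, 3)}

# Ordered pairs: quark at the DNA end and antiquark at the lipid end, or
# vice versa - the two orderings are distinct braid colors.
pairs = [(a, b) for a in ("u", "d") for b in ("uc", "dc")] + \
        [(a, b) for a in ("uc", "dc") for b in ("u", "d")]

for a, b in pairs:
    net = Q[a] + Q[b]
    verdict = "neutral -> allowed color" if net == 0 else f"net charge {net}, disfavored"
    print(f"{a}{b}: {verdict}")
# Exactly four neutral ordered pairs survive - uuc, ddc, ucu, dcd -
# matching the four DNA letters A, G, T, C in the convention of the text.
```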

For details see the chapter DNA as Topological Quantum Computer of "Genes and Memes".

Monday, December 24, 2007

Is the description of accelerated expansion in terms of a cosmological constant possible in the TGD framework?

The introduction of a cosmological constant seems to be the only manner to explain the accelerated expansion and related effects in the framework of General Relativity. As summarized in the previous posting, TGD allows a different explanation of these effects. I will not however go into this here, but rather comment on the notion of vacuum energy and on the possibility of describing the accelerated expansion in terms of a cosmological constant in the TGD framework.

The term vacuum energy density is bad use of language, since De Sitter space - a solution of the field equations with cosmological constant at the limit of vanishing energy momentum tensor - carries vacuum curvature rather than vacuum energy. Thus theories with a non-vanishing cosmological constant represent a family of gravitational theories for which the vacuum solution is not flat, so that Einstein's basic identification matter = curvature is given up. No wonder Einstein regarded the introduction of the cosmological constant as the biggest blunder of his life.

De Sitter space is representable as the hyperboloid a^2 - u^2 = -R^2, where one has a^2 = t^2 - r^2 and r^2 = x^2 + y^2 + z^2. The symmetries of de Sitter space are maximal, but the Poincare group is replaced with the Lorentz group of 5-D Minkowski space, and translations are not symmetries. The value of the cosmological constant is Λ = 3/R^2. From the point of view of conformal invariance, the presence of a non-vanishing dimensional constant is a feature raising strong suspicions about the correctness of the underlying physics.

1. Imbedding of De Sitter space as a vacuum extremal

De Sitter space is possible as a vacuum extremal in the TGD framework. There exists an infinite number of imbeddings as a vacuum extremal into M4×CP2. Take any infinitely long curve X in CP2 not intersecting itself (one might argue that an infinitely long curve is somewhat pathological) and introduce a coordinate u for it such that its induced metric is ds^2 = du^2. De Sitter space allows the standard imbedding to M4×X as a vacuum extremal. The imbedding can be written as u = ±[a^2 + R^2]^(1/2), so that one has r^2 < t^2 + R^2. The curve in question must fill at least a 2-D submanifold of CP2 densely. An example is a torus filled densely by the curve φ = αψ, where α is an irrational number. Note that even the slightest local deformation of this object induces an infinite number of self-intersections. The space-time sheet fills densely a 5-D set in this case. One can ask whether this kind of objects might be the analogs of branes in the TGD framework. As a matter of fact, CP2 projections of 1-D vacuum extremals could give rise to both the analogs of branes and the strings connecting them if the space-time surface contains both regular and "brany" pieces. It is not clear whether the 2-D Lagrangian manifolds can fill densely a D > 3-dimensional submanifold of CP2.
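As a cross-check, here is a minimal sketch (my own computation, using hyperbolic coordinates inside the future lightcone, and remembering that the curve coordinate u contributes -du^2 to the induced metric) of why the imbedding u = ±[a^2 + R^2]^(1/2) indeed yields the de Sitter metric:

```latex
% M^4 metric inside the future lightcone, a = lightcone proper time:
ds^2_{M^4} = da^2 - a^2\, d\Omega_{H_3}^2 .
% The imbedding u^2 = a^2 + R^2 gives du = (a/u)\,da, so the induced metric is
ds^2 = \left(1 - \frac{a^2}{a^2+R^2}\right) da^2 - a^2\, d\Omega_{H_3}^2
     = \frac{R^2}{a^2+R^2}\, da^2 - a^2\, d\Omega_{H_3}^2 .
% The substitution a = R\sinh(\tau/R) brings this to the form
ds^2 = d\tau^2 - R^2\sinh^2(\tau/R)\, d\Omega_{H_3}^2 ,
% i.e. de Sitter space in its hyperbolic slicing, with \Lambda = 3/R^2.
```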

It might be that the 2-D Lagrangian manifolds representing the CP2 projections of the most general vacuum extremals can fill densely a D > 3-dimensional sub-manifold of CP2. One can imagine the construction of very complex Lagrangian manifolds by gluing together pieces of 2-D Lagrangian sub-manifolds along arbitrary 1-D curves. One could also rotate a 2-D Lagrangian manifold along a 2-torus - just as one rotates a point along the 2-torus in the above example - to get a dense filling of a 4-D volume of CP2.

The M4 projection of the imbedding corresponds to the region a^2 > -R^2 containing the future and past lightcones. If u varies only in the range [0, u0], only the hyperboloids with a^2 in the range [-R^2, -R^2 + u0^2] are present in the foliation. In zero energy ontology the space-like boundaries of this piece of De Sitter space, which must have u0^2 > R^2, would be the carriers of the positive and negative energy states. The boundary corresponding to u0 = 0 is space-like and analogous to the orbit of a partonic 2-surface. For u0^2 < R^2 there are no space-like boundaries and the interpretation as a zero energy state is not possible. Note that the restriction u0^2 > R^2, plus the choice of the branch of the imbedding corresponding to the future or past directed lightcone, is natural in the TGD framework.

2. Could a negative cosmological constant make sense in the TGD framework?

The questionable feature of a slightly deformed De Sitter metric as a model for the accelerated expansion is that the value of R would be of the same order of magnitude as the recent age of the Universe. Why should just this value of cosmic time be so special? In the TGD framework one could of course consider space-time sheets having a De Sitter cosmology characterized by a varying value of R. Also, the replacement of one spatial coordinate with a CP2 coordinate implies a very strong breaking of translational invariance. Hence the explanation relying on the quantization of the gravitational Planck constant looks more attractive to me.

It is however always useful to do an exercise in challenging one's cherished beliefs.

  1. Could the complete failure of the perturbation theory around the canonically imbedded M4 make De Sitter cosmology a natural vacuum extremal? De Sitter space appears also in the models of inflation, and the long range correlations might have something to do with the intersections between distant points of 3-space resulting from very small local deformations. Could both the slightly deformed De Sitter space and the quantum critical cosmology represent cosmological epochs in the TGD Universe?

  2. Gravitational energy, defined as a non-conserved Noether charge in terms of the Einstein tensor, is in TGD infinite for De Sitter cosmology (Λ as a characterizer of vacuum energy). If one includes in the gravitational momentum also the metric tensor term, the gravitational four-momentum density vanishes (Λ as a characterizer of vacuum curvature). TGD does not involve Einstein-Hilbert action as a fundamental action, and the gravitational energy momentum tensor should be dictated by a finiteness condition, so that a negative cosmological constant might make sense in TGD.

  3. The imbedding of De Sitter cosmology involves the choice of a preferred lightcone, as does also the quantization of Planck constant. The quantization of Planck constant involves the replacement of the lightcones of M4×CP2 by their finite coverings and orbifolds glued together along a quantum critical sub-manifold. Finite pieces of De Sitter space are obtained for rational values of α, and there is a covering of the lightcone by CP2 points. How can I be sure that there does not exist a deeper connection between the descriptions based on the cosmological constant and on the phase transitions changing the value of Planck constant?

Note that Anti de Sitter space, which has a similar imbedding to a 5-D Minkowski space but with two time like dimensions, does not possess this kind of imbedding. Very probably no imbeddings exist, so that TGD would allow only the imbeddings of cosmologies with the correct sign of Λ, whereas M-theory predicts the wrong sign for it. Note also that the Anti de Sitter space appearing in AdS-CFT dualities contains closed time-like loops and is therefore also physically questionable.
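For completeness, the standard hyperboloid representations behind this remark (a textbook fact, stated in the coordinates used above):

```latex
% De Sitter space: one time-like direction in the 5-D ambient space,
dS_4:\quad t^2 - x^2 - y^2 - z^2 - u^2 = -R^2
\;\Longleftrightarrow\; a^2 - u^2 = -R^2,\qquad a^2 = t^2 - r^2 ,
% so the fifth coordinate u is space-like and can be identified with the
% arc length coordinate of a curve in CP_2.
% Anti de Sitter space: two time-like directions in the ambient space,
AdS_4:\quad t^2 + u^2 - x^2 - y^2 - z^2 = R^2 ,
% so u is time-like, which blocks the analogous imbedding to M^4 x CP_2,
% whose internal directions are all space-like.
```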

For details see the chapter Quantum Astrophysics of "Classical Physics in Many-Sheeted Space-Time".

Sunday, December 23, 2007

Two stellar components in the halo of Milky Way

The Bohr orbit model for astrophysical objects suggests that also the galactic halo should have a modular structure analogous to that of the planetary system or the rings of Saturn, rather than the structure predicted by a continuous mass distribution. Quite recently it was reported that the halo of the Milky Way - earlier thought to consist of a single component - seems to consist of two components (see the article of Carollo et al in Nature; see also this and this).

Even more intriguingly, the stars in these halos rotate in opposite directions. The average velocities of rotation are about 25 km/s and 50 km/s for the inner and outer halos respectively. The inner halo corresponds to the range 10-15 kpc of orbital radii and the outer halo to 15-20 kpc. Already the constancy of the rotational velocity is strange, and its increase is even stranger. The orbits in the inner halo are more eccentric, with axial ratio rmin/rmax ≈ 0.6. For the outer halo the ratio varies in the range 0.9-1.0. The abundances of elements heavier than lithium are about 3 times higher in the inner halo, which suggests that it was formed earlier.

The Bohr orbit model would explain the halos as being due to the concentration of visible matter around ring like structures of dark matter in a macroscopic quantum state with a gigantic gravitational Planck constant. This would explain also the opposite directions of rotation.

One can consider two alternative models predicting a constant rotation velocity for circular orbits. The first model allows circular orbits with an arbitrary plane of rotation; the second model, and the hybrid of the two, only orbits in the galactic plane.

  1. The original model assumes that the galactic matter has resulted from the decay of a cosmic string like object, so that the mass inside a sphere of radius R is M(R) ∝ R.
  2. In the second model the gravitational acceleration is due to the gravitational field of a cosmic string like object transversal to the galactic plane. The string creates no force parallel to the string but a 1/ρ radial acceleration orthogonal to it. Of course, there is also the gravitational force created by the galactic matter itself. One can also associate cosmic string like objects with the circular halos themselves, and it seems that this is needed in order to explain the latest findings.

The big difference in the average rotation velocities <vφ> for the inner and outer halos cannot be understood solely in terms of the high eccentricity of the orbits of the inner halo tending to reduce <vφ>. Using the conservation laws of angular momentum (L = m vφ ρ, so that vφ(ρ) = vmin ρmax/ρ, with vmin the velocity at the apocenter ρmax) and of energy in the Newtonian approximation, one has <vφ> = ρmax vmin <1/ρ>. This gives the bounds

vmin < <vφ> < vmax = vmin [ρmax/ρmin] ≈ 1.7 vmin .

For both models one has v = v0 = k^(1/2), k = TG (T is the effective string tension) for circular orbits. Internal consistency would require vmin < <vφ> ≈ 0.5 v0 < vmax ≈ 1.7 vmin. On the other hand, vmax < v0 and thus vmin > 0.6 v0 must hold true, since the sign of the radial acceleration at ρmin is positive. Obviously 0.5 v0 > vmin > 0.6 v0 is a contradiction.

The big increase of the average rotation velocity suggests that the inner and outer halos correspond to closed cosmic string like objects around which the visible matter has condensed. The inner string like object would create an additional gravitational field experienced by the stars of the outer halo. An increase of the effective string tension by a factor x corresponds to an increase of <vφ> by a factor x^(1/2). An increase by a factor 2, plus the higher eccentricity, could explain the ratio of the average velocities (see the numerical check below).
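A numerical sanity check of the two arguments above, using the observed averages 25 km/s and 50 km/s; the arithmetic is mine and only repeats the estimates in the text.

```python
import math

v_inner, v_outer = 25.0, 50.0   # km/s, observed average rotation velocities
ecc_boost_max = 1 / 0.6         # rho_max/rho_min ~ 1.7 for the inner halo

# Single-tension model: eccentricity can stretch <v_phi> by at most ~1.7,
# short of the observed factor 2 between the halos.
print(f"max eccentricity factor {ecc_boost_max:.2f} < observed ratio "
      f"{v_outer / v_inner:.1f}")

# Two-string model: v ~ sqrt(T), so tension increased by x boosts v by sqrt(x).
x = 2.0
remaining = (v_outer / v_inner) / math.sqrt(x)
print(f"tension factor {x:.0f} supplies sqrt({x:.0f}) = {math.sqrt(x):.2f}; "
      f"eccentricity must supply {remaining:.2f} <= {ecc_boost_max:.2f}: "
      f"{remaining <= ecc_boost_max}")
```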

For details see the new chapter Quantum Astrophysics of "Classical Physics in Many-Sheeted Space-Time".

Wednesday, December 19, 2007

Experimental evidence for accelerated expansion is consistent with the TGD based model

There are several pieces of evidence for accelerated expansion, which need not imply a cosmological constant, although this is the interpretation usually adopted. It is interesting to see whether this evidence is indeed consistent with the TGD based interpretation.

A. The four pieces of evidence for accelerated expansion

A.1 Supernovas of type Ia

Supernovas of type Ia define standard candles, since their luminosity varies in an oscillatory manner and the period is proportional to the luminosity. The period gives the luminosity, and from this the distance can be deduced by using Hubble's law: d = cz/H0, where H0 is Hubble's constant. The observation was that the farther the supernova was, the dimmer it was compared to what it should have been. In other words, Hubble's constant increased with distance and the cosmic expansion was accelerating rather than decelerating, as predicted by the standard matter dominated and radiation dominated cosmologies.
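For concreteness, a minimal sketch of the Hubble-law distance estimate d = cz/H0; the value of H0 and the sample redshifts are illustrative assumptions, not taken from the posting.

```python
# Hubble's law distance estimate d = c*z/H0, valid for small redshifts z.
c = 3.0e5     # speed of light, km/s
H0 = 70.0     # Hubble constant, km/s/Mpc (illustrative round value)

def hubble_distance_mpc(z):
    return c * z / H0

for z in (0.01, 0.05, 0.1):   # illustrative supernova redshifts
    print(f"z = {z:4.2f}  ->  d = {hubble_distance_mpc(z):6.0f} Mpc")
# Accelerated expansion shows up as supernovae at a given z being dimmer,
# i.e. farther, than the naive estimate with a constant H0 would predict.
```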

A.2 Mass density is critical and 3-space is flat

It is known that the contribution of ordinary and dark matter explaining the constant velocity of distant stars rotating around galaxies is about 25 per cent of the critical density. Could it be that the total mass density is critical?

From the anisotropy of the cosmic microwave background one can deduce that this is the case. What criticality means geometrically is that 3-space, defined as a surface with a constant value of cosmic time, is flat. This is reflected in the spectrum of the microwave radiation. The angular size of the spots representing small anisotropies in the microwave background temperature is 1 degree, and this corresponds to flat 3-space. If one had dark matter instead of dark energy, the size of the spots would be 0.5 degrees!

Thus in a cosmology based on general relativity the cosmological constant remains the only viable option. The situation is different in the TGD based quantum cosmology, which relies on sub-manifold gravity and on the hierarchy of gravitational Planck constants.

A.3 The energy density of vacuum is constant in the size scale of big voids

It was observed that the density of dark energy would be constant in the scale of 10^8 light years. This length scale corresponds to the size of the big voids containing galaxies at their boundaries.

A.4 Integrated Sachs-Wolfe effect

Also the so called Integrated Sachs-Wolfe effect supports accelerated expansion. Here very slow variations of the mass density are considered. These correspond to gravitational potentials. Cosmic expansion tends to flatten them, but mass accretion to form structures compensates this effect, so that the gravitational potentials are unaffected and there is no effect on the CMB. The situation changes if dark matter is replaced with dark energy: the accelerated expansion flattening the gravitational potentials wins the tendency of mass accretion to make them deeper. Hence if a photon passes by an over-dense region, it receives a little energy. Similarly, a photon loses energy when passing by an under-dense region. This effect has been observed.

B. Comparison with TGD

The minimum TGD based explanation for the accelerated expansion involves only the fact that the imbeddings of critical cosmologies correspond to accelerated expansion. A more detailed model allows one to understand why the critical cosmology appears during some periods.

B.1 Accelerated expansion in classical TGD

The first observation is that the critical cosmologies (flat 3-space) imbeddable to the 8-D imbedding space H correspond to negative pressure cosmologies and thus to accelerating expansion. The negativity of the counterpart of pressure in the Einstein tensor is due to the fact that the space-time sheet is forced to be a 4-D surface in the 8-D imbedding space. This condition is analogous to a force forcing a particle to stay on the surface of a 2-sphere and gives rise to what could be called a constraint force. Gravitation in TGD is sub-manifold gravitation, whereas in GRT it is manifold gravitation. This would be the minimum interpretation, involving no assumptions about the mechanism giving rise to the critical periods.

B.2 Accelerated expansion and hierarchy of Planck constants

One can go one step further and introduce the hierarchy of Planck constants. The basic difference between the TGD and GRT based cosmologies is that TGD cosmology is quantum cosmology. Smooth cosmic expansion is replaced by an expansion occurring in discrete jerks corresponding to increases of the gravitational Planck constant. At the space-time level this means the replacement of the 8-D imbedding space H with a book like structure containing almost-copies of H with various values of Planck constant as pages, glued together along a critical manifold through which a space-time sheet can leak between the sectors with different values of hbar. This process is the geometric correlate for the phase transition changing the value of Planck constant.

During these phase transition periods the critical cosmology applies and automatically predicts accelerated expansion. Neither a genuine negative pressure due to "quintessence" nor a cosmological constant is needed. Note that quantum criticality replaces inflationary cosmology and predicts a unique cosmology apart from a single parameter. Criticality also explains the fluctuations in the microwave temperature as the long range fluctuations characterizing criticality.

B.3 Accelerated expansion and flatness of 3-space

Observations 1) and 2) about the supernovae and the critical cosmology (flat 3-space) are consistent with this cosmology. In TGD dark energy must be replaced with dark matter, because the mass density is critical during the phase transition. This does not lead to wrongly sized spots, since it is the increase of Planck constant which induces the accelerated expansion, understandable also as a constraint force due to the imbedding to H.

B.4 The size of large voids is the characteristic scale

The TGD based model in its simplest form assigns the critical periods of expansion to the large voids of size 10^8 ly. Also larger and smaller regions can experience similar periods, and dark space-time sheets are expected to obey the same universal "cosmology", apart from a parameter characterizing the duration of the phase transition. Observation 3), that just this length scale defines the scale below which the dark energy density is constant, is consistent with the TGD based model.

The basic prediction is a jerkwise cosmic expansion, with jerks analogous to quantum transitions between the states of an atom increasing the size of the atom. The discovery of large voids with a size of order 10^8 ly but an age much longer than that of the galactic large voids conforms with this prediction (see this). On the other hand, it is known that the size of galactic clusters has not remained constant over very long time scales, so that jerkwise expansion indeed seems to occur.

B.5 Do cosmic strings with negative gravitational mass cause the phase transition inducing accelerated expansion?

Quantum classical correspondence is the basic principle of quantum TGD and suggests that the effective antigravity manifested by the accelerated expansion might have some kind of concrete space-time correlate. A possible correlate is super heavy cosmic string like objects at the centers of the large voids, which have a negative gravitational mass under very general assumptions. The repulsive gravitational force created by these objects would drive the galaxies to the boundaries of the large voids. At some stage the pressure of the galaxies would become too strong and induce a quantum phase transition forcing an increase of the gravitational Planck constant and an expansion of the void taking place much faster than the outward drift of the galaxies. This process would repeat itself. In the average sense the cosmic expansion would not be accelerating.

For details see the chapter Quantum Astrophysics of "Classical Physics in Many-Sheeted Space-Time".

Tuesday, December 18, 2007

DNA as a topological quantum computer: VII

In previous postings I, II, III, IV, V, VI I have discussed how DNA topological quantum computation (tqc) could be realized.

If the Josephson current through the cell membrane ceases during tqc, tqc manifests itself as the presence of only an EEG rhythm characterized by an appropriate cyclotron frequency (see posting VI). Synchronous neuron firing and the various rhythms dominating during sleep and meditation might therefore relate to tqc. The original idea that a phase shift of EEG is induced by the voltage initiating tqc - although wrong - was however useful in that it inspired the question whether the initiation of tqc could have something to do with what is known as place coding by phase shifts performed by hippocampal pyramidal cells (see this and this). The answer turns out to be negative, but playing with this idea provides important insights about the construction of quantum memories and demonstrates the amazing explanatory power of the paradigm once again.

The model also makes explicit important conceptual differences between tqc à la TGD and tqc in the ordinary sense of the word: in particular those related to the different views about the relation between subjective and geometric time.

  1. In TGD tqc corresponds to the unitary process U followed by a state function reduction and preparation. It replaces a configuration space ("world of classical worlds") spinor field with a new one. Configuration space spinor fields represent generalizations of the time evolution of the Schrödinger equation, so that a quantum jump occurs between entire time evolutions. Ordinary tqc corresponds to a Hamiltonian time development starting at time t = 0 and halting at t = T by a state function reduction.

  2. In TGD the expression of the result of tqc is essentially a 4-D pattern of gene expression (a spiking pattern in the present case). In ordinary tqc it would be a 3-D pattern emerging as the computation halts at time t=T. Each moment of consciousness can be seen as a process in which a kind of 4-D statue is carved by starting from a rough sketch and proceeding to shorter details, building fractally scaled down variants of the basic pattern. Our life cycle would be a particular example of this process and would be repeated again and again, but of course not as an exact copy of the previous one.

1. Empirical findings

The place coding by phase shifts was discovered by O'Keefe and Recce. Y. Yamaguchi describes the vision in which memory formation by so-called theta phase coding is essential for the emergence of intelligence. It is known that hippocampal pyramidal cells have a "place property", being activated at a specific "place field" position defined by an environment consisting of recognizable objects serving as landmarks. The temporal change of the percept is accompanied by a sequence of place unit activities. The theta cells exhibit a change in firing phase distributions relative to the theta rhythm, and the relative phase with respect to the theta phase gradually increases as the rat traverses the place field. In a cell population the temporal sequence is transformed into a phase shift sequence of firing spikes of individual cells within each theta cycle.

Thus a temporal sequence of percepts is transformed into a phase shift sequence of individual spikes of neurons within each theta cycle, along a linear array of neurons effectively representing the time axis. Essentially a time-compressed representation of the original events is created, bringing to mind a temporal hologram. Each event (object or activity in the perceptive field) is represented by the firing of one particular neuron at time τ_n measured from the beginning of the theta cycle. τ_n is obtained by scaling down the real time value t_n of the event. Note that there is some upper bound for the total duration of memory if the scaling factor is constant.
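
A minimal sketch of this time compression in code, assuming a constant scaling factor and an 8 Hz theta cycle (both numbers are placeholders chosen only for illustration):

    def theta_phase_code(event_times, scale, theta_period):
        """Map real event times t_n [s] to firing times tau_n = scale*t_n
        within a theta cycle; events with tau_n beyond the cycle cannot be
        represented - the upper bound mentioned above."""
        return [scale * t for t in event_times if scale * t < theta_period]

    # events spanning 15 s compressed into a 125 ms theta cycle (8 Hz);
    # the last event falls outside the cycle and is lost
    print(theta_phase_code([1.0, 3.5, 7.2, 15.0], scale=0.01, theta_period=0.125))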

This scaling down - story telling - seems to be a fundamental aspect of memory. Our memories can even abstract the entire life history to a handful of important events represented as a story lasting only a few seconds. This scaling down is thought to be important not only for the representation of contextual information but also for memory storage in the hippocampus. Yamaguchi and collaborators have also found that the gradual phase shift occurs during one half of the theta cycle, whereas firings during the other half cycle show no correlation. One should also find an interpretation for this.

2. TGD based interpretation of findings

How does this picture relate to the TGD based 4-D view about memory, in which primary memories are stored in the brain of the geometric past?

  1. The simplest option is the initiation of a tqc like process in the beginning of each theta cycle of period T, having geometric duration T/2. The transition T→ T/2 conforms nicely with the fundamental hierarchy of time scales coming as powers of 2, defining the hierarchy of measurement resolutions and associated with inclusions of HFFs. In this picture the increasing phase shift cannot correspond to the phase shift associated with the initiation of tqc. That firing is random during the second half of the cycle could simply mean that no tqc is performed and that the second half is used to code the actual events of the "geometric now".

  2. In accordance with the vision about the hierarchy of Planck constants defining a hierarchy of time scales of long term memories and of planned action, the scaled down variants of memories would be obtained by downward scaling of the Planck constant for the dark space-time sheet representing the original memory. In principle a scaling by any factor 1/n (actually by any rational) is possible and would imply the scaling down of the geometric time span of tqc and of the light-like braids (see the sketch after this list). One would have tqcs inside tqcs and braids within braids (flux quanta within flux quanta). The coding of the memories to braidings would be an automatic process, as would be - almost automatically - the formation of their zoomed down variants.

  3. A mapping of the time evolution defining the memory to a linear array of neurons would take place. This can be understood if the scaled down variant (scaled down value of hbar) of the space-time sheet representing the original memory is parallel to the linear neuron array and contains at the scaled down time value t_n a stimulus forcing the nth neuron to fire. The 4-D character of the expression of the outcome of tqc allows one to achieve this automatically without a complex program structure.
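
A toy illustration of the nested time spans implied by the 1/n scaling of the Planck constant (a sketch of the proposed mechanism only; the 125 ms theta period is an assumed example value):

    def zoomed_spans(theta_period, n_values):
        """Scaling hbar by 1/n compresses a tqc of geometric span T/2 to
        (T/2)/n, giving tqcs within tqcs at ever shorter time scales."""
        return {n: (theta_period / 2) / n for n in n_values}

    print(zoomed_spans(theta_period=0.125, n_values=[1, 2, 4, 8]))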

To sum up, it seems that the scaling of the Planck constant of time like braids provides a further fundamental mechanism, not present in standard tqc, allowing one to build fractally scaled down variants not only of memories but of tqcs in general. The ability to simulate in a shorter time scale is certainly a very important prerequisite of intelligent and planned behavior. This ability has also a space-like counterpart: it will be found that the scaling of the Planck constant associated with space-like braids connecting bio-molecules might play a fundamental role in DNA replication, the control of transcription by proteins, and the translation of mRNA to proteins. A further suggestive conclusion is that the period T associated with a given EEG rhythm defines a sequence of tqcs having geometric span T/2 each: the rest of the period would be used to perceive the environment of the geometric now. The fractal hierarchy of EEGs would mean that there are tqcs within tqcs in a very wide range of time scales.

For details see the new chapter DNA as Topological Quantum Computer of "Genes and Memes".

Monday, December 17, 2007

DNA as topological quantum computer: VI

In previous postings I have discussed how DNA topological quantum computation (tqc) could be realized (see this, this, this, this, and this). A more detailed model for braid strands leads to an understanding of how the high Tc superconductivity assigned with the cell membrane (see this) could relate to tqc.

1. Are space-like braids A-braids or B-braids or hybrids of these?

If space-like braid strands are identified as idealized structures obtained from 3-D tube like structures by replacing them with 1-D strands, one can regard the braiding as a purely geometrical knotting of braid strands.

The simplest realization of the braid strand would be as a hollow cylindrical surface connecting a conjugate DNA nucleotide to the cell membrane and going through the 5- and/or 6-cycles associated with the sugar backbone of the conjugate DNA nucleotides. The free electron pairs associated with the aromatic cycles would carry the current creating the needed magnetic field.

There are two extreme options. For the B-option the magnetic field is parallel to the strand and the vector potential rotates around it. For the A-option the vector potential is parallel to the strand and the magnetic field rotates around it. The general case corresponds to a hybrid of these options and involves a helical magnetic field, vector potential, and current.

  1. For the B-option a current flowing around the cylindrical tube in the transversal direction would generate the magnetic field. The splitting of the flux tube would require that the magnetic flux vanishes, which requires that the current goes to zero in the process. This would make possible the selection of the part of the DNA strand participating in tqc.

  2. For the A-option the magnetic field lines of the braid would rotate around the cylinder. This kind of field is created by a current in the direction of the cylinder. In the beginning of tqc the strand would split, the current of electron pairs would stop flowing, and the magnetic field would disappear. Also now the initiation of the computation would require stopping the current, and this should be done selectively at DNA.

The general conclusion would be that the control of tqc should rely on currents of electron pairs (perhaps Cooper pairs) associated with the braid strands.
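
As a rough classical illustration of the two field geometries (an idealized textbook sketch treating the strand as an infinite cylinder; all numerical values are placeholders):

    from math import pi

    MU0 = 4 * pi * 1e-7  # vacuum permeability [T*m/A]

    def b_option_axial_field(surface_current_density):
        """B-option: an azimuthal surface current K [A/m] on the cylinder
        gives a uniform axial field B = mu0*K inside (solenoid limit)."""
        return MU0 * surface_current_density

    def a_option_azimuthal_field(axial_current, radius):
        """A-option: an axial current I [A] gives an azimuthal field
        B(r) = mu0*I/(2*pi*r) at distance r from the axis (Ampere's law)."""
        return MU0 * axial_current / (2 * pi * radius)

    print(b_option_axial_field(1e-3))            # K = 1 mA/m
    print(a_option_azimuthal_field(1e-9, 1e-8))  # I = 1 nA at r = 10 nm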

Supra currents would have quantized values and are therefore very attractive candidates. The (supra) currents could also bind lipids into pairs so that they would define a single dynamical unit in the 2-D hydrodynamical flow. One can also think that a Cooper pair with electrons assignable to different members of a lipid pair binds it into a single dynamical unit.

2. Do supra currents generate the magnetic fields?

Energetic considerations favor the possibility that supra currents create the magnetic fields associated with the braid strands. The supra current would be created by a voltage pulse ΔV, which gives rise to a constant supra current after it has ceased. The supra current would be destroyed by a voltage pulse of opposite sign. Therefore voltage pulses could define an elegant fundamental control mechanism allowing one to select the parts of the genome participating in tqc. This kind of voltage pulse could be collectively initiated at the cell membrane or at DNA. Note that a constant voltage gives rise to an oscillating supra current.

The Josephson current through the cell membrane would also be responsible for the dark Josephson radiation determining that part of EEG which corresponds to the correlate of neuronal activity (see this). Note that TGD predicts a fractal hierarchy of EEGs and that the ordinary EEG is only one level in this hierarchy. The pulse initiating or stopping tqc would correspond in EEG to a phase shift by a constant amount

ΔΦ = Z e ΔV T/hbar ,

where T is the duration of the pulse and ΔV its magnitude.
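
A back-of-the-envelope helper for this formula (a sketch only: Z, the pulse parameters, and any dark value of hbar are inputs to be supplied, not predictions of the model):

    E = 1.602176634e-19     # elementary charge [C]
    HBAR = 1.054571817e-34  # reduced Planck constant [J*s]

    def josephson_phase_shift(Z, delta_V, T, hbar=HBAR):
        """Phase shift Delta_Phi = Z*e*Delta_V*T/hbar accumulated during a
        voltage pulse of magnitude delta_V [V] and duration T [s]; passing
        a scaled-up hbar models the dark variants discussed in the text."""
        return Z * E * delta_V * T / hbar

    # e.g. Z = 2 (Cooper pair), a 1 mV pulse of 1 ms, hbar scaled by 10^13
    print(josephson_phase_shift(2, 1e-3, 1e-3, hbar=1e13 * HBAR))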

The contribution of the Josephson current to EEG, responsible for the beta and theta bands interpreted as satellites of the alpha band, should be absent during tqc so that only the EEG rhythm would be present. The periods dominated by the EEG rhythm should be observed as EEG correlates of problem solving situations (say a mouse in a maze) presumably involving tqc. The dominance of slow EEG rhythms during sleep and meditation would have an interpretation in terms of tqc.

3. Topological considerations

The existence of a supra current for the A- or B-braid requires that the flow allows a complex phase exp(iΨ) such that the supra current is proportional to grad Ψ. This requires integrability in the sense that one can assign to the flow lines of A or B (a combination of them in the case of an A-B braid) a coordinate variable Ψ varying along the flow lines. In the case of a general vector field X this requires grad Ψ = ΦX, giving rot X = -(grad Φ/Φ)× X as an integrability condition. This condition defines what is known as a Beltrami flow (see this).
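
For completeness, the one-line derivation of the integrability condition (standard vector calculus, written in LaTeX):

    \nabla\Psi = \Phi X \;\Rightarrow\; 0 = \nabla\times\nabla\Psi
    = \nabla\Phi\times X + \Phi\,\nabla\times X
    \;\Rightarrow\; \nabla\times X = -\frac{\nabla\Phi}{\Phi}\times X ,

so that in particular X·(rot X) = 0, since (grad Φ/Φ)× X is orthogonal to X.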

A perturbation of the flux tube which spoils integrability in a region covering the entire cross section of the flux tube means either the loss of superconductivity or the disappearance of the net supra current. In the case of the A-braid, the topological mechanism causing this is the increase of the dimension of the CP2 projection of the flux tube so that it becomes 3-D (see this); in this reference I have also considered the possibility that the 3-D character of the CP2 projection is what transforms living matter to a spin glass type phase in which very complex self-organization patterns emerge. This would conform with the idea that tqc takes place in this phase.

For details see the new chapter DNA as Topological Quantum Computer of "Genes and Memes".

Friday, December 14, 2007

Are the abundances of heavier elements determined by cold fusion in interstellar medium?

According to the standard model, elements not heavier than Li were created in the Big Bang. Heavier elements were produced in stars by nuclear fusion, ended up in interstellar space in supernova explosions, and were gradually enriched in this process. The Lithium problem forces one to take this theoretical framework with a grain of salt.

The work of Kervran [1] suggests that cold nuclear reactions are occurring with considerable rates, not only in living matter but also in inorganic matter. Kervran indeed proposes that also the abundances of elements on Earth and the planets are to a high degree determined by nuclear transmutations, and discusses some examples. For instance, new mechanisms for the generation of O and Si would change dramatically the existing views about the evolution of planets and the prebiotic evolution of Earth.

This inspires the question whether elements heavier than Li could be produced in interstellar space by cold nuclear reactions. In the following I consider a model for this. The basic prediction is that the abundances of heavier elements should not depend on time if interstellar production dominates. The prediction is consistent with recent experimental findings seriously challenging the standard model.

1. Are heavier nuclei produced in the interstellar space?

The TGD based model for cold fusion by plasma electrolysis using heavy water explains many other anomalies: for instance, the H1.5O anomaly of water and the Lithium problem of cosmology (the amount of Li is considerably smaller than predicted by Big Bang cosmology, and the explanation is that part of it transforms to dark Li with a larger value of hbar, present in water). The model allows one to understand the surprisingly detailed discoveries of Kervran about nuclear transmutations in living matter (often by bacteria) via possible slight modifications of the mechanisms proposed by Kervran.

If this picture is correct, it would have dramatic technological implications. Cold nuclear reactions could provide not only a new energy technology but also a manner to produce artificially various elements, say metals. The treatment of nuclear wastes might be carried out by inducing cold fissions of radioactive heavy nuclei to stable products by allowing them to collide with dark Lithium nuclei in water so that the Coulomb wall is absent. Amazingly, there are bacteria which can live in the extremely harsh conditions provided by a nuclear reactor, where anything biological should die. Perhaps these bacteria carry out this process in their own bodies.

The model also encourages one to consider a simple model for the generation of heavier elements in the interstellar medium: what is nice is that the basic prediction differentiating this model from the standard model is consistent with recent experimental findings. The assumptions are the following.

  1. Dark nuclei X(3k,n), that is nuclear strings of the form Li(3,n), C(6,n), F(9,n), Mg(12,n), P(15,n), Ar(18,n), etc., form as fusions of Li strings. n = Z, Z+1 is the most plausible value of n. There is also 4He present, but as a noble gas it need not play an important role in the condensed matter phase (say interstellar dust). The presence of water necessitates that of Li(3,n) if one accepts the proposed model as such.

  2. The resulting nuclei are in general stable against spontaneous fission by energy conservation. The binding energy of He(2,2) is however exceptionally high, so that alpha decay can occur in dark nuclear reactions between the X(3k,n), allowed by the considerable reduction of the Coulomb wall. The induced fissions X(3k,n)→ X(3k-2,n-2)+He(2,2) produce nuclei with atomic number Z mod 3 = 1 such as Be(4,5), N(7,7), Ne(10,10), Al(13,14), S(16,16), K(19,20),... Similar nuclear reactions make possible a further alpha decay of Z mod 3 = 1 nuclei to give nuclei with Z mod 3 = 2 such as B(5,6), O(8,8), Na(11,12), Si(14,14), Cl(17,18), Ca(20,20),..., so that the most stable isotopes of light nuclei could result in these fissions (see the sketch after this list).

  3. Dark nuclear fusions of already existing nuclei can also create nuclei heavier than Fe. Only the gradual decrease of the binding energy per nucleon for nuclei heavier than Fe poses restrictions on this process.
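
A small bookkeeping sketch of the proposed alpha decay chains (purely illustrative; n = Z is assumed for the seed strings, so neutron numbers can differ by one from the most stable isotopes listed above):

    SYMBOL = {3: "Li", 4: "Be", 5: "B", 6: "C", 7: "N", 8: "O", 9: "F",
              10: "Ne", 11: "Na", 12: "Mg", 13: "Al", 14: "Si", 15: "P",
              16: "S", 17: "Cl", 18: "Ar", 19: "K", 20: "Ca"}

    def alpha_chain(Z, n, steps=2):
        """Successive emissions of He(2,2) from a dark nucleus X(Z,n):
        each step maps (Z,n) -> (Z-2,n-2), walking the residue class
        Z mod 3 through 0 -> 1 -> 2, the three families in the text."""
        chain = [(Z, n)]
        for _ in range(steps):
            Z, n = Z - 2, n - 2
            if Z < 3:
                break
            chain.append((Z, n))
        return chain

    for Z in (6, 9, 12, 15, 18):  # C, F, Mg, P, Ar seed strings
        print(" -> ".join(f"{SYMBOL[z]}({z},{m})" for z, m in alpha_chain(Z, Z)))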

2. The abundances of nuclei in interstellar space should not depend on time

The basic prediction of the TGD inspired model is that the abundances of the nuclei in interstellar space should not depend on time if the rates are so high that an equilibrium situation is reached rapidly. The hbar increasing phase transition of the nuclear space-time sheet determines the time scale in which equilibrium sets in. The standard model makes a different prediction: the abundances of the heavier nuclei should gradually increase as the nuclei are repeatedly re-processed in stars and blown out to interstellar space in supernova explosions.

Amazingly, there is empirical support for this highly non-trivial prediction [2]. Quite surprisingly, the 25 measured elemental abundances (elements up to Sn(50,70) (tin) and Pb(82,124) (lead)) of a 12 billion year old galaxy turned out to be very nearly the same as those for the Sun. For instance, the oxygen abundance was 1/3 of that estimated for the Sun. The standard model would predict that the abundances should be 0.01-0.1 of those for the Sun, as measured for stars in our galaxy. The conjecture was that there must be some unknown law guaranteeing that the distribution of stars of various masses is time independent. The alternative conclusion would be that heavier elements are created mostly in interstellar gas and dust.

3. Could also "ordinary" nuclei consist of protons and negatively charged color bonds?

The model strongly suggests that also ordinary stable nuclei consist of protons, with a proton plus a negatively charged color bond behaving effectively like a neutron. Note however that I have also considered the possibility that the neutron halo consists of protons connected by negatively charged color bonds to the main nucleus. The smaller mass of the proton would favor it as a fundamental building block of the nucleus, and negatively charged color bonds would be a natural manner to minimize the Coulomb energy. The fact that the neutron does not suffer a beta decay to a proton in the nuclear environment provided by stable nuclei would also find an explanation.

  1. The ordinary shell model of the nucleus would make sense in length scales in which a proton plus a negatively charged color bond looks like a neutron.

  2. The strictly nucleonic strong nuclear isospin is non-vanishing for ground state nuclei if all nucleons are protons. This assumption of the nuclear string model is crucial for quantum criticality, since it implies that binding energies are not changed in the scaling of hbar if the length of the color bonds is not changed. The quarks of the charged color bonds however give rise to a compensating strong isospin, and a color bond plus proton behaves in a good approximation like a neutron.

  3. Beta decays might pose a problem for this model. The electrons resulting in beta decays of this kind of nuclei consisting of protons should come from the beta decay of a d-quark neutralizing a negatively charged color bond. The nuclei generated in high energy nuclear reactions would presumably contain genuine neutrons and suffer beta decays in which the d quark is a nucleonic quark. The questions are how much the rates for these two kinds of beta decays differ and whether existing facts about beta decays could kill the model.

References

[1] C. L. Kervran (1972), Biological transmutations, and their applications in chemistry, physics, biology, ecology, medicine, nutrition, agriculture, geology, Swan House Publishing Co.

[2] J. Prochaska, J. C. Howk, A. M. Wolfe (2003), The elemental abundance pattern in a galaxy at z = 2.626, Nature 423, 57-59. See also Distant elements of surprise.

For details see the chapter Nuclear String Hypothesis of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

Wednesday, December 12, 2007

DNA as topological quantum computer: V

In previous postings I have discussed how DNA topological quantum computation (tqc) could be realized (see this, this, this, and this). It is useful to try to imagine how gene expression might relate to the halting of tqc. There are of course myriads of alternatives for detailed realizations, and one can only play with thoughts to build a reasonable guess about what might happen.

1. Qubits for transcription factors and other regulators

Genetics is consistent with the hypothesis that genes correspond to those tqc moduli whose outputs determine whether genes are expressed or not. The naive first guess would be that the value of a single qubit determines whether the gene is expressed or not. The next guess replaces "is" with "can be".

Indeed, gene expression involves promoters, enhancers and silencers (see this). Promoters are portions of the genome near genes, recognized by proteins known as transcription factors. Transcription factors bind to the promoter and recruit RNA polymerase, an enzyme that synthesizes RNA. In prokaryotes RNA polymerase itself acts as the transcription factor. For eukaryotes the situation is more complex: at least seven transcription factors are involved in the recruitment of the RNA polymerase II catalyzing the transcription of the messenger RNA. There are also transcription factors for transcription factors, and a transcription factor for the transcription factor itself.

The implication is that several qubits must have the value "Yes" for the actual expression to occur, since several transcription factors are in general involved in the expression of a gene. In the simplest situation this would mean that the computation halts in a measurement of a single qubit for a subset of genes, including at least those coding for transcription factors and other regulators of gene expression.
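
In logical terms the simplest version of this is just a conjunction over the measured transcription factor qubits; a minimal sketch (the factor names are placeholders, not a claim about the actual factors involved):

    def can_be_expressed(tf_qubits):
        """A gene 'can be' expressed only if every transcription factor
        qubit measured at the halting of tqc reads 'Yes' (a logical AND)."""
        return all(tf_qubits.values())

    print(can_be_expressed({"TF-A": True, "TF-B": True, "TF-C": False}))  # False
    print(can_be_expressed({"TF-A": True, "TF-B": True, "TF-C": True}))   # True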

2. Intron-exon qubit

Genes would have very many final states, since each nucleotide is expected to correspond to at least a single qubit. Without further measurements the state of the nucleotides would remain highly entangled for each gene. Also these other qubits are expected to become increasingly important during evolution.

For instance, eukaryotic gene expression involves transcription of RNA and splicing out of the pieces of RNA which are not translated to amino acids (introns). Also the notion of gene is known to become increasingly dynamical during the evolution of eukaryotes, so that the expressive power of the genome increases. A single qubit associated with each codon, telling whether it is spliced out or not, would allow maximal flexibility. Tqc would define what genes are, and the expressive power of genes would be due to the evolution of tqc programs: very much like in the case of ordinary computers. The stop codon and start codon would automatically tell where the gene begins and ends if the corresponding qubit is "Yes". In this picture the old-fashioned static genes of prokaryotes without splicings would correspond to tqc programs for which the portions of the genome with a given value of the splicing qubit are connected.
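
A toy sketch of such a per-codon splicing qubit (the codons and the qubit mask are arbitrary illustrative values):

    def splice(codons, splice_qubits):
        """Retain codon i iff its splicing qubit reads 'Yes' (True); the
        retained codons define the exons of a dynamically determined gene."""
        return [c for c, keep in zip(codons, splice_qubits) if keep]

    transcript = ["ATG", "GCT", "TTA", "CGA", "GGC", "TAA"]
    mask = [True, True, False, False, True, True]  # middle codons spliced out
    print(splice(transcript, mask))  # ['ATG', 'GCT', 'GGC', 'TAA']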

3. What about braids between DNA, RNA, tRNA and amino acids?

This simplified picture might have created the impression that amino acids are quantum outsiders obeying classical biochemistry. For instance, transcription factors would in this picture end up at the promoter by a random process, and "Print" would only increase the density of the transcription factor. If DNA is able to perform tqc, it would however seem very strange if it were happy with this rather dull realization of the other central functions of the genetic apparatus.

One can indeed consider, besides the dark braids connecting DNA and its conjugate - crucial for the success of replication - also braids connecting DNA to mRNA and other forms of RNA, mRNA to tRNA, and tRNA to amino acids. These braids would provide the topological realization of the genetic code and would increase dramatically the precision and effectiveness of transcription and translation, if these processes correspond to quantum transitions at the level of dark matter leading more or less deterministically to the desired outcome at the level of visible matter, be it the formation of a DNA double strand, of a DNA-mRNA association, of an mRNA-tRNA association, or of a tRNA-amino acid association.

For instance, a temporary reduction of the value of the Planck constant for these braids would contract them to such a small size that these associations would result with a high probability. The increase of the Planck constant for braids could in turn induce the transfer of mRNA from the nucleus, and the opening of the DNA double strand during transcription and mitosis.

Also DNA-amino acid braids might be possible in some special cases. The braiding between the regions of DNA at which proteins bind could be a completely general phenomenon. In particular, the promoter region of a gene could be connected by braids to the transcription factors of the gene, and the halting of tqc in a printing command could induce a reduction of the Planck constant for these braids, inducing the binding of the transcription factor to the promoter region. In a similar manner, the region of DNA at which RNA polymerase binds could be connected by braid strands to the RNA polymerase.

For details see the new chapter DNA as Topological Quantum Computer of "Genes and Memes".

Tuesday, December 11, 2007

One-element fields as Galois fields associated with infinite primes?

Kea mentioned John Baez's This Week's Finds 259, where John talked about the one-element field - a notion inspired by the q = exp(i2π/n) → 1 limit for quantum groups. This limit suggests that the notion of a one-element field for which 0 = 1 - a kind of mathematical phantom for which multiplication and sum should be identical operations - could make sense. A physicist might not be attracted by this kind of identification.

In the following I want to articulate some comments from the point of view of quantum measurement theory and its generalization to q-measurement theory, which I proposed some years ago (see this).

I also consider an alternative interpretation in terms of Galois fields assignable to infinite primes, which form an infinite hierarchy. These Galois fields have an infinite number of elements, but the map to the real world effectively reduces the number of elements to 2: 0 and 1 remain different.

1. q→ 1 limit as transition from quantum physics to effectively classical physics?

At the q→1 limit of quantum groups, q-integers become ordinary integers and n-D vector spaces reduce to n-element sets. For quantum logic the reduction would mean that the 2^N-D spinor space becomes a 2^N-element set. N qubits are replaced with N bits. This brings to mind what happens in the transition from wave mechanics to classical mechanics. This might relate in an interesting manner to quantum measurement theory.
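
For reference, one standard convention for q-integers (in LaTeX), which makes the statement "q-integers become ordinary integers" explicit:

    [n]_q = \frac{q^{n}-q^{-n}}{q-q^{-1}} \longrightarrow n
    \quad \text{as} \quad q = e^{i2\pi/m} \to 1 \;(m\to\infty).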

Strictly speaking, the q→1 limit corresponds to the limit q = exp(i2π/n), n→∞, since only roots of unity are considered. This also corresponds to Jones inclusions at the limit when the discrete group Zn, or its extension to contain reflections - both subgroups of SO(3) - has an infinite number of elements. Therefore this limit, where the field with one element appears, might have a concrete physical meaning. Does the system at this limit behave very classically?

In the TGD framework this limit can correspond to either infinite or vanishing Planck constant, depending on whether one considers orbifolds or coverings. For the vanishing Planck constant one should have classicality: at least naively! In perturbative gauge theory higher order corrections come as powers of g^2/(4πhbar), so that also these corrections vanish and one has the same predictions as given by classical field theory.

2. Q-measurement theory and q→ 1 limit.

Q-measurement theory differs from quantum measurement theory in that the coordinates of the state space, say the spinor space, are non-commuting. Consider in the sequel q-spinors for simplicity.

Since the components of a quantum spinor do not commute, one cannot perform a state function reduction. One can however measure the moduli squared of both spinor components, which indeed commute as operators and have an interpretation as probabilities for spin up or down. They have a universal spectrum of eigenvalues. The interpretation would be in terms of fuzzy probabilities and finite measurement resolution, but maybe in a different sense than in the case of HFFs. Probability would become the observable instead of spin for q not equal to 1.

At the q→1 limit quantum measurement becomes possible in the standard sense of the word and one obtains spin down or up. This in turn means that the projective ray representing quantum states is replaced with one of n possible projective rays defining the points of an n-element set. For HFFs of type II_1 it would be N-rays which would become points, N being the included algebra. One might also say that state function reduction is forced by this mapping to a single object at the q→1 limit.

One might say that the set of orthogonal coordinate axes replaces the state space in quantum measurement. We make this replacement of a space with coordinate axes all the time at the blackboard. The quantum consciousness theorist inside me adds that this means a creation of symbolic representations and that the function of quantum classical correspondence is to build symbolic representations for quantum reality at the space-time level.

The q→1 limit should have space-time correlates by quantum classical correspondence. A TGD inspired geometro-topological interpretation for the projection postulate might be that quantum measurement at the q→1 limit corresponds to a leakage of the 3-surface to a dark sector of the imbedding space with q→1 (Planck constant near 0 or ∞ depending on whether one has an n→∞ covering or a division of M4 or CP2 by a subgroup of SU(2) becoming infinite cyclic - very roughly!), and the Hilbert space is indeed effectively replaced with n rays. For q not equal to 1 one would have only probabilities for different outcomes, since things would be fuzzy.

In this picture classical physics and classical logic would be the physical counterpart for the shadow world of mathematics and would result only as an asymptotic notion.

3. Could 1-element fields actually correspond to Galois fields associated with infinite primes?

The finite field G_p corresponds to integers modulo p, with product and sum taken modulo p. An alternative representation is in terms of the phases exp(i k2π/p), k = 0,...,p-1, with sum and product performed in the exponent. The question is whether one could define these fields also for infinite primes (see this) by identifying the elements of the field as phases exp(i k2π/Π), with k a finite integer and Π an infinite prime (recall that they form an infinite hierarchy). Formally this makes sense. The one-element field would be replaced with an infinite hierarchy of Galois fields with an infinite number of elements!
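
Spelled out in LaTeX for an ordinary finite prime p (the infinite-prime case is the formal analog with p replaced by Π):

    k \leftrightarrow e^{i2\pi k/p}, \qquad
    k+l \leftrightarrow e^{i2\pi k/p}\,e^{i2\pi l/p} = e^{i2\pi(k+l)/p}, \qquad
    k\cdot l \leftrightarrow \bigl(e^{i2\pi k/p}\bigr)^{l} = e^{i2\pi kl/p},

so that both the sum and the product of G_p are indeed performed in the exponent.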

The probabilities defined by the components of the quantum spinor make sense only as real numbers, and one can indeed map them to real numbers by interpreting q as an ordinary complex number. This would give the same results as the q→1 limit, and one would have effectively a one-element field but actually a Galois field with an infinite number of elements.

If one allows k to be also an infinite integer, but not larger than Π in the real sense, the phases exp(i k2π/Π) would be well defined as real numbers and could differ from 1. All real numbers in the range [-1,1] would be obtained as values of cos(k2π/Π), so that this limit would effectively give the real numbers.

This relates also interestingly to the question whether the notion of a p-adic field makes sense for infinite primes. The p-adic norm of any infinite-p p-adic number would be a power of Π: either infinite, zero, or 1. Excluding infinitely normed numbers, one would have effectively only the p-adic integers in the range 1,...,Π-1 and thus only the Galois field G_Π, representable also in terms of quantum phases.

I conclude with a nice string of text from John's page:

What's a mathematical phantom? According to Wraith, it's an object that doesn't exist within a given mathematical framework, but nonetheless "obtrudes its effects so convincingly that one is forced to concede a broader notion of existence".

and unashamedly propose that perhaps Galois fields associated with infinite primes might provide this broader notion of existence! In an equally unashamed tone I ask whether there also exists a hierarchy of conscious entities at the q=1 levels in the real sense, and whether we might identify ourselves as this kind of entities. Note that if cognition corresponds to p-adic space-time sheets, our cognitive bodies have literally infinite geometric size in the real sense.