https://matpitka.blogspot.com/2010/12/

Tuesday, December 28, 2010

The revolution taking place in genetics

I received an extremely interesting popular article (thanks to Kalle) about the profound revolution taking place in genetics. See this.

It is fair to say that genetic determinism is falling down, and the revolution waiting just around the corner will be more profound than anything that has taken place in biology before. The term "genome's dark matter" expresses what has been discovered during the last few years. The motivation for the term is the strong analogy with the dark matter of physics. In the TGD framework this analogy might be much more than an analogy.

The basic anomalies discussed in the article are the following.

  1. Trans-generational inheritance. Stretches of DNA that were present in the parents' or grandparents' genomes but are absent from the offspring's genome nevertheless affect the traits of the offspring.
  2. Context sensitivity of a gene's effect: the effect of a gene is highly sensitive to its environment in the DNA.
  3. In many cases genes explain only about 10 percent of a disease's heritability: this is the "missing heritability" problem.

What makes this so interesting from the TGD point of view is that a few years ago I developed a model of DNA as a topological quantum computer (see this). I summarize the basic building bricks of the model before relating it to the Mendelian anomalies.

The notion of magnetic body

The notion of the magnetic body as an intentional agent using the biological body as a motor instrument and sensory receptor, with communications taking place in terms of a fractal generalization of EEG, is the key idea. Each physical system consisting of matter has a magnetic body. The magnetic body of a given living organism has a fractal onion-like structure with layer sizes varying from sub-cellular scales to the scales assignable to EEG frequencies (Earth size) and even above, up to the scale of light-life and maybe beyond to scales characterizing the evolution of species.

An immediate implication is the notion of collective DNA expression, made possible by an interaction of DNA strands in which they belong to common magnetic flux sheets: in this manner not only the DNAs of cells, organelles, organs, and a single organism but also those of groups of organisms can form coherent structures expressing themselves in a synchronous manner. This is a testable prediction.

DNA as topological quantum computer

Topological quantum computation is based on braiding: various braiding patterns for braid strands define the tqc programs. There are two types of braids: time-like and space-like.

  1. The cell membrane is a 2-D liquid, and the flow of lipids, affected by the flow of cellular liquid and, in the case of neurons, also by nerve pulse patterns, induces braiding. This braiding takes place dynamically on the 2-D parquet defined by the cell membrane in the time direction, and the dance metaphor applies to it. A running tqc program can be seen as dancing.
  2. The magnetic flux tubes connecting DNA nucleotides to the lipids of the nuclear membrane and cell membrane, and possibly also to the membranes of other cells, define the space-like braid strands. Since the flux tubes connected to DNA strands are like threads connecting the feet of dancers to the walls of the dance hall, the resulting space-like braiding codes the tqc program to memory, which is highly robust as a topological invariant.

There is a kind of duality between time-like and space-like braidings. This is a new element compared to the conventional quantum computation paradigm. Combined with the idea that memories are stored in the geometric past in zero energy ontology, this gives an extremely elegant memory storage mechanism.
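The robustness claim above can be made concrete with a toy example. The sketch below is plain Python and not TGD-specific; the function name and the permutation representation are my own illustrative choices. It represents a braid word as a sequence of Artin generators and extracts the induced permutation of strand endpoints, a crude topological invariant: deforming the strands without cutting them cannot change it, which is the sense in which braiding-coded memory is robust.

```python
# A braid on n strands as a word in Artin generators s_1 .. s_{n-1}.
# A positive integer i stands for s_i, a negative one for its inverse.
# The permutation of strand endpoints is unchanged by continuous
# deformation of the strands, illustrating the robustness of
# topologically coded data.

def braid_permutation(word, n):
    """Return the permutation of n strand endpoints induced by a braid word."""
    strands = list(range(n))
    for g in word:
        i = abs(g) - 1                      # s_i crosses strands i and i+1
        strands[i], strands[i + 1] = strands[i + 1], strands[i]
    return strands

# Inserting a cancelling pair s_1 s_1^-1 deforms the braid but leaves the
# induced permutation intact:
print(braid_permutation([1, 2, 1], 3))             # [2, 1, 0]
print(braid_permutation([1, -1, 1, 2, 1], 3))      # [2, 1, 0]
```

The permutation is of course only a shadow of the full braid group element (which also remembers over/under crossings), but it suffices to illustrate why a program coded in braiding survives continuous deformation of the strands.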

Implications for genetics

This vision has profound implications for genetics.

  1. Genes define only the hardware of tqc. The software is defined by the braidings. Introns, whose portion steadily increases with evolutionary level and exceeds 95 per cent in humans, have traditionally been interpreted as junk DNA (believe it or not!;-)). In this framework introns correspond naturally to the part of the genome specialized in tqc: from the point of view of tqc it does not matter much whether the intronic portions correspond to repeating sequences (interpreted as a signal of junk-ness) or not.
  2. The evolution of topological quantum computation programs would be far more important than the evolution of the genome, and the huge differences between species with almost the same genome (such as we and our cousins) could be understood in terms of cultural evolution due to the evolution of topological quantum computer programs. For a long time the evolution would have been that of tqc programs rather than of hardware, as suggested by the fact that the size and details of the genome do not matter much. This suggests that the appearance of eukaryotes (and multi-cellulars) meant the emergence of introns and perhaps also of cultural evolution as the evolution of quantum software and collective magnetic bodies.

Implications for Mendelian anomalies

This vision also suggests how to understand the origin of the Mendelian anomalies.

  1. Trans-generational inheritance might be understood as an inheritance of tqc programs carrying indirect information also about the genome of the parents. If one accepts the TGD vision about organisms as 4-D structures, one must of course be ready even to ask whether genetic effects could also take place via the mediation of the magnetic bodies assignable to structures formed by several generations.
  2. The context sensitivity of the effect of a particular gene could be understood in this picture, since the programs are determined not by a single gene but by longer portions of DNA. Individual genes do not matter much when one tries to understand the genetic correlates of autism, schizophrenia, and other complex diseases related to functions rather than mere structure. If one speaks about structure, such as the color of flowers, the situation is of course very simple and the Mendelian approach works well. An interesting question is how closely the structure-function dichotomy, the exon-intron dichotomy, and the hardware-software dichotomy correspond to each other.
  3. High-level diseases would be programming errors much more often than hardware problems. This would solve the "missing heritability" problem.

What is amusing is that the physicist's dark matter would indeed be behind the "genome's dark matter": magnetic flux tubes are indeed assumed to be carriers of dark matter, dark quarks in fact. In the proposed model a key role is played by quarks with a large Planck constant, meaning that their Compton lengths are scaled up, giving them a size scale of at least the order of cell size!

See the chapter DNA as Topological Quantum Computer of "Genes and Memes" for a discussion containing also additional references.

Monday, December 27, 2010

The arrow of time and self referentiality of consciousness

The understanding of the relationship between experienced time, whose chronon is identified as the quantum jump, and geometric time has remained one of the most difficult challenges of the TGD inspired theory of consciousness. A second difficult problem is the self-referentiality of consciousness.

One should understand the asymmetry between positive and negative energies and between the two directions of geometric time at the level of conscious experience, the correspondence between experienced and geometric time, and the emergence of the arrow of time. One should explain why human sensory experience is about a rather narrow time interval of about 0.1 seconds and why memories are about the interior of a much larger CD (causal diamond) with a time scale of the order of a lifetime. One should have a vision about the evolution of consciousness: how quantum leaps leading to an expansion of consciousness occur.

Negative energy signals to geometric past - about which phase conjugate laser light represents an example - provide an attractive tool to realize intentional action as a signal inducing neural activities in the geometric past (this would explain Libet's classical findings), a mechanism of remote metabolism, and the mechanism of declarative memory as communications with geometric past. One should understand how these signals are realized in zero energy ontology and why their occurrence is so rare.

In the following I try to demonstrate that the TGD inspired theory of consciousness and quantum TGD proper are indeed in tune. I have discussed these problems already earlier; the motivation for this posting is that the discussions with Stephen Paul King in the Time discussion group led to further progress in the understanding of these issues. What I now understand much better is how the self referentiality of consciousness is realized.

Space-time and imbedding space correlates for selves

Quantum jump as a moment of consciousness, self as a sequence of quantum jumps integrating into a self, and the self hierarchy with sub-selves experienced as mental images are the basic notions of the TGD inspired theory of consciousness. In the most ambitious vision the self hierarchy reduces to a fractal hierarchy of quantum jumps within quantum jumps. Quantum classical correspondence demands that selves have correlates both at the level of space-time and at the level of the imbedding space.

At the level of space-time the first guess for the correlates is light-like or space-like 3-surfaces. If one believes in effective 2-dimensionality and quantum holography, partonic 2-surfaces plus their 4-D tangent space distribution would code the information about the space-time correlates. By quantum classical correspondence one can also identify space-time sheets as the correlates, modulo the gauge degeneracy implied by super-conformal symmetries.

It is natural to interpret CDs as correlates of selves at the level of the imbedding space. CDs can be interpreted either as subsets of the generalized imbedding space or as sectors of WCW. Accordingly, selves correspond to CDs of the generalized imbedding space or to sectors of WCW, literally separate interacting quantum Universes. The spiritually oriented reader might speak of Gods. Geometrically, sub-selves correspond to sub-CDs. The contents of consciousness of a self are about the interior of the corresponding CD at the level of the imbedding space. For sub-selves the wave function for the position of the tip of the CD brings in the delocalization in the sub-WCW.

The fractal hierarchy of CDs within CDs is the geometric counterpart of the hierarchy of selves: the quantization of the time scale of planned action and memory as T(k) = 2^k T0 suggests an interpretation for the fact that we experience octaves as equivalent in music.
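The doubling hierarchy can be spelled out numerically. A minimal sketch (the reference scale T0 = 0.1 s is chosen here only for illustration):

```python
# Time-scale hierarchy T(k) = 2^k * T0: adjacent levels differ by an
# exact factor of two, i.e. by one octave.

T0 = 0.1  # seconds; illustrative reference scale

def T(k):
    return (2 ** k) * T0

print([T(k) for k in range(4)])             # [0.1, 0.2, 0.4, 0.8]
print([T(k + 1) / T(k) for k in range(3)])  # [2.0, 2.0, 2.0]
```

The point is simply that the hierarchy is geometric with ratio 2, so shifting k by one corresponds to an octave shift, the interval perceived as equivalence in music.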

Why is sensory experience about such a short time interval?

The CD picture automatically implies the 4-D character of conscious experience, and memories form a part of conscious experience even at the elementary particle level. Amazingly, the secondary p-adic time scale of the electron is T = 0.1 seconds, defining a fundamental time scale in living matter. The problem is to understand why sensory experience is about a short interval of geometric time rather than about the entire personal CD with a temporal size of the order of a lifetime. The explanation would be that sensory input corresponds to subselves (mental images) with T ≈ 0.1 s at the upper light-like boundary of the CD in question. This requires a strong asymmetry between the upper and lower light-like boundaries of CDs.
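The quoted value T = 0.1 s can be checked with a back-of-the-envelope computation. The sketch below is my own arithmetic, assuming the usual p-adic conventions: the electron corresponds to the Mersenne prime p = M_127 = 2^127 - 1, and the secondary p-adic time scale is sqrt(p) times the electron's Compton time h/(m_e c^2).

```python
# Order-of-magnitude check of the secondary p-adic time scale of the electron.
h = 6.626e-34        # Planck constant, J*s
me_c2 = 8.187e-14    # electron rest energy, J
compton_time = h / me_c2          # ~8.1e-21 s

p = 2 ** 127 - 1                  # Mersenne prime M_127 assigned to the electron
T2 = p ** 0.5 * compton_time      # secondary p-adic time scale
print(T2)                         # ~0.1 s, the order of magnitude quoted above
```

The result is about 0.1 s, consistent with the fundamental time scale quoted in the text.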

The localization of the contents of the sensory experience to the upper light-cone boundary and local arrow of time could emerge as a consequence of self-organization process involving conscious intentional action. Sub-CDs would be in the interior of CD and self-organization process would lead to a distribution of CDs concentrated near the upper or lower boundary of CD. The local arrow of geometric time would depend on CD and even differ for CD and sub-CDs.

  1. The localization of the contents of sensory experience to a narrow time interval would be due to the concentration of sub-CDs representing mental images near either boundary of the CD representing the self.

  2. Phase conjugate signals, identifiable as negative energy signals to the geometric past, are important when the arrow of time differs from the standard one in some time scale. If the arrow of time establishes itself as a phase transition, such situations are rare. Negative energy signals as a basic mechanism of intentional action and of the transfer of metabolic energy would explain why living matter is so special.

  3. Geometric memories would correspond to subselves in the interior of the CD, the oldest of them to the regions near the "lower" boundary of the CD. Since the density of sub-CDs is small there, geometric memories would be rare and not sharp. A temporal sequence of mental images, say the sequence of digits of a phone number, would correspond to a temporal sequence of sub-CDs.

  4. Sharing of mental images corresponds to a fusion of sub-selves/mental images into a single sub-self by quantum entanglement: the space-time correlate could be flux tubes connecting the space-time sheets associated with the sub-selves, themselves represented by space-time sheets inside their CDs.

Arrow of time

TGD forces a new view about the relationship between experienced and geometric time. Although the basic paradox of quantum measurement theory disappears, the question about the arrow of geometric time remains. There are actually two times involved: the geometric time assignable to the space-time sheets and the M^4 time assignable to the imbedding space.

Consider first the geometric time assignable to the space-time sheets.

  1. Selves correspond to CDs. The CDs and their projections to the imbedding space do not move anywhere. Therefore the standard explanation for the arrow of geometric time cannot work.

  2. The only plausible interpretation at classical level relies on quantum classical correspondence and the fact that space-times are 4-surfaces of the imbedding space. If quantum jump corresponds to a shift for a quantum superposition of space-time sheets towards geometric past in the first approximation (as quantum classical correspondence suggests), one can understand the arrow of time. Space-time surfaces simply shift backwards with respect to the geometric time of the imbedding space and therefore to the 8-D perceptive field defined by the CD. This creates in the materialistic mind a temporal variant of train illusion. Space-time as 4-surface and macroscopic and macro-temporal quantum coherence are absolutely essential for this interpretation to make sense.

Why should this shifting always take place in the direction of the geometric past of the imbedding space? Does it always do so? The proposed mechanism for the localization of sensory experience to a short time interval suggests an explanation in terms of intentional action.

  1. CD defines the perceptive field of the self. Negentropy Maximization Principle (NMP) or its strengthened form could be used to justify the hypothesis that selves quite universally love to gain information about the unknown. In other words, they are curious about the space-time sheets outside their perceptive field (the future). Therefore they perform quantum jumps tending to shift the superposition of the space-time sheets so that unknown regions of the space-time sheets emerge into the perceptive field. Either the upper or the lower boundary of the CD wins the competition, and the arrow of time results as a spontaneous symmetry breaking. The arrow of time can depend on the CD but tends to be the same for a CD and its sub-CDs. A global arrow of time could establish itself by a phase transition imposing the same arrow of time globally, by a mechanism analogous to a percolation phase transition.

  2. Since the news comes from the upper boundary of the CD, the self concentrates its attention on this region and improves the resolution of sensory experience there. The sub-CDs generated in this manner correspond to mental images with contents about this region. Hence the contents of conscious experience, in particular sensory experience, tend to be about the region near the upper boundary.
  3. Note that the space-time sheets need not continue outside the CD of the self, but the self does not know this and believes that there is something there to be curious about. A quantum jump inducing what reduces to a shift, in a region sufficiently far from the upper boundary of the CD, creates a new piece of space-time surface! The non-continuation of the space-time sheet outside the CD would be a correlate for the fact that the subjective future does not exist.

The emergence of the arrow of time at the level of the imbedding space reduces to a modification of the oldest TGD based argument for the arrow of time, which is wrong as such. If physical objects correspond to 3-surfaces inside the future directed light-cone, then the sequence of quantum jumps implies a diffusion in the direction of increasing light-cone proper time. The modification of the argument goes as follows.

  1. CDs are characterized by their moduli. In particular, the relative coordinate of the tips of the CD has values in the past light cone M^4_- if the future tip is taken as the reference point. An attractive interpretation for the proper time of M^4_- is as cosmic time having quantized values. Quantum states correspond to wave functions in the modular degrees of freedom, and each U process creates a non-localized wave function of this kind. Suppose that state function reduction implies a localization in the modular degrees of freedom, so that the CD is fixed completely apart from its center of mass position, to which a zero four-momentum constant plane wave is assigned. One can expect that in the average sense diffusion occurs in M^4_-, so that the size of the CD tends to increase and the most distant geometric past, defined by the past boundary of the CD, recedes. This is nothing but cosmic expansion. This provides a formulation of the flow of time in terms of a cosmic redshift. The argument applies also to the positions of the sub-CDs inside the CD: their proper time distance from the tip of the CD is also expected to increase.

  2. One can argue that one ends up with a contradiction by exchanging the roles of the upper and lower tips. For the CD itself it is only the proper time distance between the tips which increases, and speaking about "future" and "past" tips is only a convention. For the sub-CDs of a CD the argument would imply that sub-CDs drifting from the opposite tips tend to concentrate in the middle region of the CD, unless either tip is in a preferred position. This requires a spontaneous selection of the arrow of time. One could say that the cosmic expansion implied by the drift in M^4_- "draws" the space-time sheet with it towards the geometric past. The spontaneous generation of the asymmetry between the tips might require the "curious" conscious entities.

The mechanism of self reference

Self reference is perhaps the most mysterious aspect of conscious experience. Formulated in a somewhat loose manner, self reference states that a self can be conscious of being conscious of something. When trying to model this ability in, say, the computer paradigm, one is easily led to an infinite regress. In the TGD framework a weaker form of self referentiality holds true: a self can become conscious that it was conscious of something in previous quantum jump(s). Self reference therefore reduces to memory. The infinite regress is replaced with evolution recreating the Universe again and again and adding new reflective levels of consciousness. It is however essential also to have the experience that a memory is in question in order to have self reference. This knowledge implies that a reflective level is in question.

The mechanism of self reference would reduce to the ability to code information about quantum jump into the geometry and topology of the space-time surface. This representation defines an analog of written text which can be read if needed: memory recall is this reading process. The existence of this kind of representations means quantum classical correspondence in a generalized sense: not only quantum states but also quantum jump sequences responsible for conscious experience can be coded to the space-time geometry. The reading of this text induces self-organization process re-generating the original conscious experience or at least some aspects of it (say verbal representation of it). The failure of strict classical determinism for Kähler action is absolutely essential for the possibility to realize quantum classical correspondence in this sense.

Consider now the problem of coding conscious experience to space-time geometry and topology so that it can be read again in memory recall. Let us first list what I believe to know about memories.

  1. In the TGD framework memories correspond to sub-CDs inside CDs (causal diamonds, defined as intersections of future and past directed light-cones) and are located in the geometric past. This is a fundamental difference from the neuroscience view, according to which memories are in the geometric now. Note that a standard physicist would argue that this does not make sense: by the determinism of field equations one cannot think 4-dimensionally. In TGD, however, the field equations fail to be deterministic in the standard sense: this actually led to the introduction of zero energy ontology.

  2. The reading wakes up mental images, which are essentially 4-D self-organization patterns inside sub-CDs in the geometric past. Metabolic energy is needed to achieve this wake-up. What is needed is the generation of space-time sheets representing the potential mental images making memories possible.

This picture, combined with the mechanism generating the arrow of psychological time and explaining why sensory experience is located in such a short time interval (0.1 seconds, the time scale of the CD associated with the electron by the p-adic length scale hypothesis), allows one to understand the mechanism of self reference. It deserves to be mentioned that the discussion with Stephen Paul King in the Time discussion group served as the midwife for this step of progress.

  1. When the film makes a shift in the direction of the geometric past in a quantum jump, subselves representing mental images, the reaction to the "news", are generated. These correspond to sub-CDs containing space-time surfaces as correlates of the subselves created, and the information content of the immediate conscious experience is about this region of space-time and imbedding space. They are like comment marks added to the film, recording what feelings the news from the geometric future stimulated.

  2. In subsequent quantum jumps the film moves downwards towards the geometric past, and the markings defined in terms of space-time correlates of mental images are shifted backwards with the film, defining the coding of information about previous conscious experiences. In memory recall metabolic energy is fed to these subsystems so that they wake up and regenerate the mental images about the remembered aspects of the previous conscious experience. This would not be possible in positive energy ontology, or if determinism in the strict sense of the word held true.

  3. Something must bring in the essential information that these experiences are memories rather than genuine sensory experiences (say). Something must distinguish between genuine experiences and memories about them. The space-time sheets representing self reference define cognitive representations. If the space-time sheets representing the correlates for self-referential mental images are p-adic, this distinction emerges naturally. That these space-time sheets are in the intersection of real and p-adic worlds is actually enough and also makes possible negentropic entanglement carrying the conscious information. In TGD inspired quantum biology this property is indeed the defining characteristic of life.

  4. There is a quite concrete mechanism for the realization of memories in terms of braidings of magnetic flux tubes, discussed here.

Background material can be found in the chapter About the Nature of Time of "TGD Inspired Theory of Consciousness".

Tuesday, December 21, 2010

TGD based explanation for the soft photon anomaly of hadron physics

There is a quite recent article entitled Study of the Dependence of Direct Soft Photon Production on the Jet Characteristics in Hadronic Z0 Decays discussing one particular manifestation of an anomaly of hadron physics known for two decades: the soft photon production rate in hadronic reactions is higher than expected by an average factor of about four. In the article the soft photons are assigned to the decays of Z0 to quark-antiquark pairs. This anomaly has not reached the attention of particle physicists, which seems to be the fate of anomalies quite generally nowadays: large extra dimensions and blackholes at LHC are much sexier topics of study than the anomalies about which both existing and speculative theories must remain silent.

TGD leads to an explanation of the anomaly in terms of the basic differences between TGD and QCD.

  1. The first difference is due to the induced gauge field concept: both classical color gauge fields and the U(1) part of the electromagnetic field are proportional to the induced Kähler form. The second difference is topological field quantization, meaning that electric and magnetic fluxes are associated with flux tubes. Taken together this means that for neutral hadrons color flux tubes and electric flux tubes can be, and will be, assumed to be one and the same thing. In the case of charged hadrons the em flux tubes must connect different hadrons: this is essential for understanding why neutral hadrons seem to contribute much more effectively to the bremsstrahlung than charged hadrons, which is just the opposite of the prediction of the hadronic inner bremsstrahlung model, in which only charged hadrons contribute. Now both sea and valence quarks of neutral hadrons contribute, but in the case of charged hadrons only valence quarks do so.
  2. Sea quarks of neutral hadrons seem to give the largest contribution to the bremsstrahlung. The p-adic length scale hypothesis, predicting that quarks can appear in several mass scales, represents the third difference, and the experimental findings suggest that sea quarks are lighter than valence quarks by a factor of 1/2, implying that the bremsstrahlung from a given sea quark is more intense by a factor of 4 than that from the corresponding valence quark.
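The factor of 4 in the second point is just the classical 1/m^2 scaling of bremsstrahlung intensity for an accelerated charge, assumed here to carry over to the quark case; spelled out:

```python
# If bremsstrahlung intensity scales as 1/m^2, halving the quark mass
# quadruples the radiated intensity.

def relative_intensity(mass_ratio):
    """Intensity ratio I_sea / I_valence given m_sea / m_valence."""
    return 1.0 / mass_ratio ** 2

print(relative_intensity(0.5))  # 4.0
```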

I do not bother to type further and instead give a link to a pdf file explaining the model. The model can also be found in the chapter p-Adic Mass Calculations: New Physics of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

Sunday, December 19, 2010

Model for the findings about hologram generating properties of DNA

The findings of Peter Gariaev and his collaborators have provided a test bed for many basic ideas in TGD inspired biology. Peter and I worked out a model for some particular findings of his group, providing support for the notion of magnetic body. The interpretation of the data is in terms of a photograph of the magnetic body of the DNA sample, and therefore also of the dark matter at it. The model also provides a more detailed picture of how living systems could build holograms about themselves and their environment and read them. The article will be published in the first issue of the newly founded journal DNADJ (DNA Decipher Journal) appearing in January. The preprint can be found at Scireprints and also at my homepage.

A TGD inspired model is developed for the strange replica structures observed when a DNA sample is irradiated by red, IR, and UV light using two methods by Peter Gariaev and collaborators. The first method produces what is tentatively interpreted as replica images of either the DNA sample or of the five red lamps used to irradiate the sample. The second method produces a replica image of the environment, with replication in the horizontal direction but only at the right-hand side of the apparatus. Also a white phantom variant of the replica trajectory observed in the first experiment is observed and has in the vertical direction the size scale of the apparatus.

The model is developed in order to explain the characteristic features of the replica patterns. The basic notions are the magnetic body, the massless extremal (topological light ray), the existence of Bose-Einstein condensates of Cooper pairs at magnetic flux tubes, and dark photons with a large value of Planck constant, for which macroscopic quantum coherence is possible. The hypothesis is that the first method makes part of the magnetic body of the DNA sample visible, whereas method II would produce a replica hologram of the environment using dark photons and also produce a phantom image of the magnetic tubes made visible by method I. Replicas would result from a mirror hall effect in the sense that the dark photons would move back and forth between the part of the magnetic body made visible by method I, serving as a mirror, and the objects of the environment, serving also as mirrors. What is however required is that not only the outer boundaries of objects visible via ordinary reflection act as mirrors, but also the parts of the outer boundary not usually visible perform the mirror function, so that an essentially 3-D vision providing information about the geometry of the entire object would be in question. Many-sheeted space-time allows this.

The presence of the hologram image for method II requires the self-sustainment of the reference beam only whereas the presence of phantom DNA image for method I requires the self-sustainment of both beams. Non-linear dynamics for the energy feed from DNA to the magnetic body could make possible self-sustainment for both beams simultaneously. Non-linear dynamics for beams themselves could allow for the self-sustainment of reference beam and/or reflected beam. The latter option is favored by data.

Wednesday, December 15, 2010

Preferred extremals of Kähler action and perfect fluids

Lubos Motl had an interesting article about Perfect fluids, string theory, and black holes. It of course takes some self-discipline to get over the M-theory propaganda without getting very angry. Indeed, the article starts with

The omnipresence of very low-viscosity fluids in the observable world is one of the amazing victories of string theory. The value of the minimum viscosity seems to follow a universal formula that can be derived from quantum gravity - i.e. from string theory.

The first sentence definitely surpasses all records in the recorded history of superstring hype (for records see Not Even Wrong). At the end of the propaganda strike Lubos however explains in an enjoyable manner some basic facts about perfect fluids, superfluids, and viscosity, and mentions the effective absence of the non-diagonal components of the stress tensor as a mathematical correlate for the absence of shear viscosity, often identified as viscosity. This comment actually stimulated this posting.

In any case, almost perfect fluids seem to be abundant in Nature. For instance, QCD plasma was originally thought to behave like a gas and therefore to have a rather high viscosity to entropy density ratio x = η/s. Already RHIC found that it behaves like an almost perfect fluid with x near the minimum predicted by AdS/CFT. The findings from LHC gave additional confirmation of the discovery (see this). Also the Fermi gas is predicted, on the basis of experimental observations, to have at low temperatures a low viscosity, roughly 5-6 times the minimal value (see this). This behavior is of course not a prediction of superstring theory but only demonstrates that the AdS/CFT correspondence, applying to conformal field theories as a new kind of calculational tool, allows one to make predictions in parameter regions where standard methods fail. This is fantastic but has nothing to do with predictions of string theory.

In the following I develop the argument that the preferred extremals of Kähler action are perfect fluids apart from the symmetry breaking to space-time sheets. The argument requires some basic formulas, summarized first.

The physics oriented reader not working with hydrodynamics, possibly irritated by the observation that after all these years he still has a rather tenuous understanding of viscosity as a mathematical notion, and willing to refresh his mental images about concrete experimental definitions as well as tensor formulas, can look at the Wikipedia article about viscosity. There one also finds the definition of the viscous part of the stress energy tensor, linear in velocity (the oddness in velocity relates directly to the second law).

  1. The symmetric part of the gradient of velocity gives the viscous part of the stress-energy tensor as a tensor linear in velocity. The velocity gradient decomposes into a pure trace term and a traceless tensor term:

    ∂_iv_j + ∂_jv_i = (2/3)(∂_kv_k)g_ij + [∂_iv_j + ∂_jv_i - (2/3)(∂_kv_k)g_ij] .

    The viscous contribution to the stress tensor is given in terms of this decomposition as

    σ_visc,ij = ζ(∂_kv_k)g_ij + η[∂_iv_j + ∂_jv_i - (2/3)(∂_kv_k)g_ij] .

    From dF_i = σ_ij dS_j it is clear that bulk viscosity ζ gives to the stress tensor a pressure-like contribution with an interpretation in terms of friction opposing compression. Shear viscosity η corresponds to the traceless part of the velocity gradient, often called just viscosity. This contribution to the stress tensor is non-diagonal, corresponds to momentum transfer in directions not parallel to the momentum, and makes the flow rotational. This term is essential for thermal conduction, and thermal conductivity vanishes for ideal fluids.

  2. The 3-D total stress tensor can be written as

    σ_ij = ρ v_iv_j - p g_ij + σ_visc,ij .

    The generalization to the 4-D relativistic situation is simple. One just adds the terms corresponding to energy density and energy flow to obtain

    T^αβ = (ρ+p)u^αu^β - p g^αβ + σ_visc^αβ .

    Here u^α denotes the local four-velocity satisfying u^αu_α = 1. The sign factors relate to the conventions in the definition of the Minkowski metric (signature (1,-1,-1,-1)).

  3. If the flow is such that the flow parameters associated with the flow lines integrate to a global flow parameter, one can identify a new time coordinate t as this flow parameter. This means a transition to a coordinate system in which the fluid is at rest everywhere (comoving coordinates in cosmology), so that the energy momentum tensor reduces to a diagonal term plus the viscous term:

    T^αβ = (ρ+p)g^tt δ^α_t δ^β_t - p g^αβ + σ_visc^αβ .

    In this case the vanishing of the viscous term means that one has a perfect fluid in the strong sense.

    The existence of a global flow parameter means that one has

    v_i = Ψ ∂_iΦ ,

    where Ψ and Φ depend on the space-time point. The proportionality to the gradient of a scalar Φ implies that Φ can be taken as a global time coordinate. If this condition is not satisfied, the perfect fluid property makes sense only locally.
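The bulk/shear decomposition in the first item above is easy to verify numerically: split an arbitrary velocity gradient into a pure trace part and a traceless part, then check that the shear part is traceless and that the two parts sum back to the symmetrized gradient. A minimal sketch in Python/NumPy (the random velocity gradient is just an illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
grad_v = rng.standard_normal((3, 3))  # arbitrary velocity gradient d_i v_j

sym = grad_v + grad_v.T                    # symmetrized gradient d_i v_j + d_j v_i
g = np.eye(3)                              # flat 3-metric
bulk = (2.0 / 3.0) * np.trace(grad_v) * g  # pure trace (bulk viscosity) part
shear = sym - bulk                         # traceless (shear viscosity) part

# The shear part is traceless and the two parts sum back to the symmetrized gradient.
print(np.trace(shear))                 # ~0 up to rounding
print(np.allclose(bulk + shear, sym))  # True
```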

The AdS/CFT correspondence allows one to deduce a lower limit for the coefficient of shear viscosity as

x = η/s ≥ hbar/4π .

This formula holds true in units in which one has kB=1 so that temperature has unit of energy.
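Restoring kB, the bound reads η/s ≥ hbar/(4π kB) and can be evaluated numerically in SI units; the sketch below just plugs in the CODATA constants:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
k_B = 1.380649e-23      # Boltzmann constant, J/K

kss_bound = hbar / (4 * math.pi * k_B)  # eta/s lower bound in SI units, K*s
print(f"eta/s >= {kss_bound:.3e} K*s")  # about 6.08e-13 K*s
```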

What makes this interesting from the TGD point of view is that in the TGD framework the perfect fluid property, in an appropriately generalized sense, indeed characterizes locally the preferred extremals of Kähler action defining the space-time surface.

  1. Kähler action is Maxwell action with the U(1) gauge field replaced with the projection of the CP2 Kähler form, so that the four CP2 coordinates become the dynamical variables at the QFT limit. This means an enormous reduction in the number of degrees of freedom as compared to ordinary unifications. The field equations for Kähler action define the dynamics of space-time surfaces, and this dynamics reduces to conservation laws for the currents assignable to isometries. This means that the system has a hydrodynamic interpretation. This is a considerable difference from ordinary Maxwell equations. Notice however that the "topological" half of Maxwell's equations (Faraday's induction law and the statement that no non-topological magnetic charges are possible) is satisfied.

  2. Even more, the resulting hydrodynamical system allows an interpretation in terms of a perfect fluid. The general ansatz for the preferred extremals of the field equations assumes that the various conserved currents are proportional to a vector field characterized by the so called Beltrami property; the coefficient of proportionality depends on the space-time point and on the conserved current in question. A Beltrami field is by definition a vector field such that the flow parameters assignable to its flow lines integrate to a single global coordinate. This is highly non-trivial, and one of the implications is the almost topological QFT property due to the fact that Kähler action reduces to a boundary term assignable to wormhole throats, which are light-like 3-surfaces at the boundaries of regions of space-time with Euclidian and Minkowskian signatures. The Euclidian regions (or wormhole throats, depending on one's taste) define what I identify as generalized Feynman diagrams.

    Beltrami property means that if the time coordinate for a space-time sheet is chosen to be this global flow parameter, all conserved currents have only a time component. In the TGD framework the energy momentum tensor is replaced with a collection of conserved currents assignable to the various isometries, and the analog of the energy momentum tensor complex constructed in this manner has no counterparts of the non-diagonal components. Hence the preferred extremals allow an interpretation in terms of a perfect fluid without any viscosity.
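A small symbolic check makes the integrability behind such flows concrete: for a flow of the form v_i = Ψ ∂_iΦ the curl is ∇Ψ × ∇Φ, so the helicity density v·(∇×v) vanishes identically. A sketch with SymPy (the particular Ψ and Φ are arbitrary illustrative choices, not taken from the text):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def grad(f):
    return sp.Matrix([sp.diff(f, c) for c in (x, y, z)])

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

# Illustrative (hypothetical) scalar functions Psi and Phi
Psi = x * y
Phi = x + z**2

v = Psi * grad(Phi)                      # flow of the form v_i = Psi * d_i(Phi)
helicity = sp.simplify(v.dot(curl(v)))   # Psi grad(Phi) . (grad(Psi) x grad(Phi))
print(helicity)  # 0
```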

This argument justifies the expectation that TGD Universe is characterized by the presence of low-viscosity fluids. Real fluids of course have a non-vanishing albeit small value of x. What causes the failure of the exact perfect fluid property?

  1. Many-sheetedness of space-time is the underlying reason. The space-time surface decomposes into finite-sized space-time sheets containing topologically condensed smaller space-time sheets containing.... The perfect fluid property holds true only within a given sheet: it fails at wormhole contacts and because the sheet has a finite size. As a consequence, the global flow parameter exists only in a given length and time scale. At the imbedding space level and in zero energy ontology the same would be phrased in terms of the hierarchy of causal diamonds (CDs).

  2. The so called eddy viscosity is caused by the eddies (vortices) of the flow. The space-time sheets glued to a larger one are indeed analogous to eddies, so that the reduction of viscosity to eddy viscosity could make sense quite generally. An analogy is the phase slippage phenomenon of super-conductivity: the total phase increment of the super-conducting order parameter is reduced by a multiple of 2π in phase slippage, so that the average velocity, proportional to the increment of the phase along the channel divided by the length of the channel, is reduced by a quantized amount.

    The standard arrangement for measuring viscosity involves a liquid layer flowing along a plane. The velocity of the flow with respect to the surface increases from v=0 at the lower boundary to v_upper at the upper boundary of the layer. This situation can be regarded as the outcome of a dissipation process and prevails as long as energy is fed into the system. The reduction of the velocity in the direction orthogonal to the layer means that the flow becomes rotational during the dissipation leading to this stationary situation.

    This suggests that the elementary building block of the dissipation process corresponds to the generation of vortices identifiable as cylindrical space-time sheets parallel to the plane of the flow and orthogonal to the velocity of the flow, carrying quantized angular momentum. One expects that the vortices have a spectrum labelled by quantum numbers like energy and angular momentum, so that dissipation takes place in discrete steps by the generation of vortices which transfer energy and angular momentum to the environment and in this manner generate the velocity gradient.

  3. The quantization of the parameter x is suggestive in this framework. If entropy density and viscosity are both proportional to the density n of the eddies, the value of x would equal the ratio of the quanta of entropy and viscosity for a single eddy if all eddies are identical. The quantum would be hbar/4π in the units used, and the suggestive interpretation is in terms of the quantization of angular momentum. One of course expects a spectrum of eddies, so that this simple prediction should hold true only at temperatures for which the excitation energies of the vortices are above the thermal energy. The increase of temperature would suggest that gradually more and more vortices come into play and that the ratio increases in a stepwise manner, bringing to mind the quantum Hall effect. In the TGD Universe the value of hbar can be large in some situations, so that the quantal character of dissipation could become visible even macroscopically. Whether this situation with large hbar is encountered even in the case of the QCD plasma is an interesting question.
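The envisioned stepwise increase of x with temperature can be illustrated with a deliberately crude toy model; the excitation energies below are entirely hypothetical placeholders, chosen only to show the staircase behavior, not derived from TGD:

```python
import math

HBAR = 1.0  # natural units

# Entirely hypothetical excitation energies for vortex species n = 1, 2, 3
excitation_energy = {1: 1.0, 2: 2.5, 3: 4.0}

def x_of_T(T):
    """Toy staircase: x = n*hbar/(4*pi), where n labels the highest vortex
    species whose excitation energy the thermal energy T has reached."""
    active = [n for n, E in excitation_energy.items() if T >= E]
    n_max = max(active) if active else 1  # species 1 sets the lower bound
    return n_max * HBAR / (4 * math.pi)

for T in (0.5, 1.5, 3.0, 5.0):
    print(T, x_of_T(T))
```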

The following poor man's argument tries to make the idea about quantization a little bit more concrete.

  1. The vortices transfer momentum parallel to the plane from the flow. Therefore they must have momentum parallel to the flow given by the total cm momentum of the vortex. Before continuing, some notation is needed. Let the densities of vortices and absorbed vortices be n and n_abs respectively. Denote by v_par resp. v_perp the components of the cm velocity parallel to the main flow resp. perpendicular to the boundary plane. Let m be the mass of the vortex. Denote by S a surface element parallel to the boundary plane.

  2. The flow of the momentum component parallel to the main flow due to the vortices absorbed at S is

    n_abs m v_par v_perp S .

    This momentum flow must be equal to the viscous force

    F_visc = η (v_par/d) S .

    From this one obtains

    η = n_abs m v_perp d .

    If the entropy density is due to the vortices, it equals, apart from possible numerical factors,

    s = n ,

    so that one has

    η/s = m v_perp d .

    This quantity should have the lower bound x = hbar/4π and be perhaps even quantized in multiples of x. Angular momentum quantization suggests itself strongly as the origin of the quantization.

  3. Local momentum conservation requires that the comoving vortices are created in pairs with opposite momenta, thus propagating with opposite velocities v_perp. Only one half of the vortices is absorbed, so that one has n_abs = n/2. A vortex has a quantized angular momentum associated with its internal rotation. Angular momentum is generated in the flow since the vortices flowing downwards are absorbed at the boundary surface.

    Suppose that the distance between their center of mass lines parallel to the plane is D = ε d, with ε a numerical constant not too far from unity. The vortices of the pair, moving in opposite directions, have the same angular momentum m v_perp D/2 relative to the center of mass line between them. Angular momentum conservation requires that the sum of these relative angular momenta cancels the sum of the angular momenta associated with the vortices themselves. Quantization of the total angular momentum for the pair of vortices gives

    η/s = n hbar/ε .

    Matching with the minimal value x = hbar/4π would give

    ε = 4π .

    One should understand why D = 4π d, four times the circumference of the largest circle contained in the boundary layer, should define the minimal distance between the vortices of the pair. This distance is larger than the distance d between maximally sized vortices of radius d/2 just touching each other. This distance obviously increases as the thickness of the boundary layer increases, suggesting that also the radius of the vortices scales like d.

  4. One cannot of course take this detailed model too literally. What is however remarkable is that the quantization of angular momentum and a dissipation mechanism based on vortices identified as space-time sheets could indeed explain why the lower bound for the ratio η/s is so small.
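The bookkeeping of the argument above can be checked symbolically, following the text's own steps (momentum balance, η/s = m v_perp d, and angular momentum quantization with D = ε d); a sketch with SymPy:

```python
import sympy as sp

n_abs, m, v_par, v_perp, d, S, eta, eps, hbar = sp.symbols(
    'n_abs m v_par v_perp d S eta epsilon hbar', positive=True)

# Momentum balance: flux of parallel momentum through S equals the viscous force.
eta_sol = sp.solve(sp.Eq(n_abs * m * v_par * v_perp * S,
                         eta * (v_par / d) * S), eta)[0]
print(eta_sol)  # equals n_abs*m*v_perp*d

# With eta/s = m*v_perp*d as in the text, angular momentum quantization
# m*v_perp*D = hbar with D = eps*d gives eta/s = hbar/eps.
ratio = m * v_perp * d
v_perp_q = sp.solve(sp.Eq(m * v_perp * eps * d, hbar), v_perp)[0]
ratio_q = sp.simplify(ratio.subs(v_perp, v_perp_q))
print(ratio_q)  # hbar/epsilon

# Matching the lower bound hbar/(4*pi) fixes eps.
eps_val = sp.solve(sp.Eq(ratio_q, hbar / (4 * sp.pi)), eps)[0]
print(eps_val)  # 4*pi
```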

For background see the chapter Does the Modified Dirac Equation Define the Fundamental Action Principle? of "Quantum TGD as Infinite-dimensional Spinor Geometry".

Tuesday, December 07, 2010

A possible explanation of Shnoll effect

I have already earlier mentioned the work of the Russian scientist Shnoll concerning random fluctuations. This work spans four decades and has finally started to gain recognition also in the West. By good luck I found on the web an article by Shnoll about strange regularities in what should be random fluctuations. Then Dainis Zeps provided me with a whole collection of similar articles! Thank you Dainis!

The findings of Shnoll led within a few days to considerable progress in the understanding of the relation between the p-adic and real probability concepts; the relationship between p-adic physics and the quantum groups emerging naturally in the TGD based view about finite measurement resolution; the relationship of the hierarchy of Planck constants (in particular the gigantic gravitational Planck constant assignable to the space-time sheets mediating gravitation) to small-p p-adicity; and also the understanding of the experimental implications of the many-sheetedness of space-time in concrete measurement situations, in which the measurement apparatus also means a non-trivial topology of space-time.

The most important conclusion is that the basic vision about the TGD Universe seems to manifest itself directly in quantum fluctuations due to quantum coherence in astrophysical scales in practically all kinds of experiments, even in the distributions of financial variables! Needless to tell how far reaching the implications are for quantum gravity. This is one of the biggest surprises of my un-paid professional life, which outshines even the repeated surprises caused by the incredibly unintelligent response of most colleagues to my work. I attach below the abstract of a brand new preprint titled A Possible Explanation of Shnoll Effect.

Shnoll and collaborators have discovered strange repeating patterns in the random fluctuations of physical observables such as the number n of nuclear decays in a given time interval. Instead of a single peak, periodically occurring peaks are observed in the distribution of the number N(n) of measurements producing n events in a series of measurements as a function of n. The positions of the peaks are not random, and the patterns depend on position and time, varying periodically in time scales possibly assignable to the Earth-Sun and Earth-Moon gravitational interactions.

These observations suggest a modification of the expected probability distributions, but it is very difficult to imagine any physical mechanism for this in the standard physics framework. Rather, a universal deformation of the predicted probability distributions would be in question, requiring something analogous to the transition from classical physics to quantum physics.

The hint about the nature of the modification comes from the TGD inspired quantum measurement theory proposing a description of the notion of finite measurement resolution in terms of inclusions of so called hyper-finite factors of type II_1 (HFFs) and closely related quantum groups. Also p-adic physics, another key element of TGD, is expected to be involved. A modification of a given probability distribution P(n|λ_i) for a positive integer valued variable n characterized by rational-valued parameters λ_i is obtained by replacing n and the integers characterizing λ_i with so called quantum integers depending on the quantum phase q_m = exp(i2π/m). The quantum integer n_q must be defined as the product of the quantum counterparts p_q of the primes p appearing in the prime decomposition of n. One has p_q = sin(2πp/m)/sin(2π/m) for p ≠ P and p_q = P for p = P. Here m must satisfy m ≥ 3, m ≠ p, and m ≠ 2p.
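The definition of the quantum integer can be made concrete with a short script; this is only a sketch of the rule quoted above (trial-division factorization, with no attempt to enforce the conditions m ≠ p and m ≠ 2p beyond a basic check on m):

```python
import math

def prime_factors(n):
    """Trial-division factorization: return {prime: multiplicity} for n >= 1."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def quantum_integer(n, m, P):
    """Quantum counterpart n_q of n for the quantum phase q_m = exp(i*2*pi/m):
    p_q = sin(2*pi*p/m)/sin(2*pi/m) for primes p != P, and p_q = P for p = P."""
    if m < 3:
        raise ValueError("m must satisfy m >= 3")
    nq = 1.0
    for p, k in prime_factors(n).items():
        pq = P if p == P else math.sin(2 * math.pi * p / m) / math.sin(2 * math.pi / m)
        nq *= pq ** k
    return nq

print(quantum_integer(2, m=5, P=2))  # 2.0: the p-adic prime maps to itself
print(quantum_integer(3, m=5, P=2))  # negative: quantum integers can be < 0
```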

The quantum counterparts of positive integers can be negative. Therefore the quantum distribution is defined first as a p-adic valued distribution and then mapped by the so called canonical identification I to a real distribution. The map takes p-adic -1 to P and the powers P^n to P^(-n), maps the other quantum primes to themselves, and requires that the mean value of n is the same for the distribution and its quantum variant. The map I satisfies I(∑ P^n) = ∑ I(P^n). The resulting distribution has peaks located periodically with periods coming as powers of P. Also periodicities with peaks corresponding to n = n_+ n_-, with n_+,q > 0 and fixed n_-,q < 0, appear.
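The canonical identification I can be sketched for non-negative integers by mapping the base-P digit expansion ∑ a_k P^k to ∑ a_k P^(-k); note that the full map described above also handles p-adic -1 and the quantum primes, which this minimal version omits:

```python
def canonical_identification(n, P):
    """Map I: sum_k a_k P^k  ->  sum_k a_k P^(-k), applied to the base-P
    digit expansion of a non-negative integer n."""
    result, k = 0.0, 0
    while n > 0:
        n, digit = divmod(n, P)
        result += digit * P ** (-k)
        k += 1
    return result

# Example: 86 = 1 + 2*5 + 3*5**2 maps to 1 + 2/5 + 3/25 = 1.52
print(canonical_identification(86, 5))

# A power P^k maps to P^(-k), consistent with I(sum P^n) = sum I(P^n)
print(canonical_identification(125, 5))  # equals 5**-3
```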

The periodic dependence of the distributions would be most naturally assignable to the gravitational interaction of the Earth with the Sun and the Moon and therefore to the periodic variation of the Earth-Sun and Earth-Moon distances. The TGD inspired proposal is that the p-adic prime P and the integer m characterizing the quantum distribution are determined by a process analogous to a state function reduction, and that their most probable values depend on the deviation ΔR of the distance R through the formulas Δp/p ≈ k_p ΔR/R and Δm/m ≈ k_m ΔR/R. The p-adic primes assignable to elementary particles are very large, unlike the primes which could characterize the empirical distributions. The hierarchy of Planck constants allows the gravitational Planck constant assignable to the space-time sheets mediating gravitational interactions to have gigantic values, and this allows p-adicity with small values of the p-adic prime P.

For details see the new chapter A Possible Explanation of Shnoll Effect of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".