Tuesday, December 28, 2010

The revolution taking place in genetics

I received an extremely interesting popular article (thanks to Kalle) about the profound revolution taking place in genetics. See this.

It is fair to say that genetic determinism is falling, and the revolution waiting just around the corner will be more profound than anything that has taken place in biology before. The term "genome's dark matter" expresses what has been discovered during the last few years. The motivation for the term is the strong analogy with the dark matter of physics. In the TGD framework this analogy might be much more than an analogy.

The basic anomalies discussed in the article are the following.

  1. Trans-generational inheritance. Stretches of DNA which were present in the parents' or grandparents' genomes but are not present in the genome of the offspring affect the traits of the offspring.
  2. Context sensitivity of a gene's effect: the effect of a gene is highly sensitive to its environment within the DNA.
  3. In many cases genes explain only 10 percent of a disease's heritability: this is the "missing heritability" problem.

What makes this so interesting from the TGD point of view is that a few years ago I developed a model of DNA as a topological quantum computer (see this). I summarize the basic building bricks of the model before relating it to the Mendelian anomalies.

The notion of magnetic body

The notion of the magnetic body as an intentional agent using the biological body as a motor instrument and sensory receptor, with communications taking place in terms of a fractal generalization of EEG, is the key idea. Each physical system consisting of matter has a magnetic body. The magnetic body of a given living organism has a fractal onion-like structure with layer sizes varying from sub-cellular scales to the scales assignable to EEG frequencies (Earth size) and even above, up to the light-life scale and maybe beyond to scales characterizing the evolution of species.

An immediate implication is the notion of collective DNA expression made possible by the interaction of DNA strands belonging to common magnetic flux sheets: in this manner not only the DNAs of cells and organelles, organs, and single organisms but also those of groups of organisms can form coherent structures expressing themselves in a synchronous manner. This is a testable prediction.

DNA as topological quantum computer

Topological quantum computation is based on braiding: the various braiding patterns of braid strands define the tqc programs. There are two types of braids: time-like and space-like.

  1. The cell membrane is a 2-D liquid, and the flow of lipids, affected by the flow of cellular liquid and, in the case of neurons, also by nerve pulse patterns, induces braiding. This braiding takes place dynamically in the time direction on the 2-D parquet defined by the cell membrane, and the dance metaphor applies to it: a running tqc program can be seen as dancing.
  2. The magnetic flux tubes connecting DNA nucleotides to the lipids of the nuclear membrane and cell membrane, and possibly also to the membranes of other cells, define the space-like braid strands. Since the flux tubes connected to the DNA strands are like threads connecting the feet of the dancers to the walls of the dance hall, the resulting space-like braiding codes the tqc program into memory, which is highly robust as a topological invariant.

There is a kind of duality between time-like and space-like braidings. This is a new element to the conventional quantum computation paradigm. Combined with the idea that memories are stored in geometric past in zero energy ontology this gives an extremely elegant memory storage mechanism.
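
The robustness of a braiding-coded program as a topological invariant can be illustrated with a toy model (my illustration, not part of the TGD model itself): represent each braid generator by the permutation it induces on strand endpoints. Braid words that are topologically equivalent, such as the two sides of the braid relation σ1σ2σ1 = σ2σ1σ2, then induce the same permutation, so the "program output" survives deformations of the strands.

```python
# Toy illustration: braid generator sigma_i exchanges strands i and i+1.
# Crossing directions are ignored; only the permutation level is modeled.

def apply_braid(word, n):
    """Return the endpoint permutation induced by a braid word on n strands.
    word is a list of 1-based generator indices i, each crossing strands
    i and i+1."""
    perm = list(range(n))
    for i in word:
        perm[i - 1], perm[i] = perm[i], perm[i - 1]
    return perm

# The braid relation sigma1 sigma2 sigma1 = sigma2 sigma1 sigma2:
# two different words, same topological content at the permutation level.
print(apply_braid([1, 2, 1], 3))  # [2, 1, 0]
print(apply_braid([2, 1, 2], 3))  # [2, 1, 0]
```

A full treatment would track crossing signs (the braid group, not its symmetric-group quotient), but the toy already shows why the stored information is insensitive to how the strands are deformed.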

Implications for genetics

This vision has profound implications for genetics.

  1. Genes define only the hardware of tqc. The software is defined by the braidings. Introns, whose portion steadily increases as the evolutionary level becomes higher and is more than 95 per cent in humans, have traditionally been interpreted as junk DNA (believe it or not! ;-)). In this framework introns correspond naturally to the part of the genome specialized in tqc: from the point of view of tqc it does not matter much whether the intronic portions correspond to repeating sequences (interpreted as a signal of junk-ness) or not.
  2. The evolution of topological quantum computation programs would be far more important than the evolution of the genome, and the huge differences between species with almost the same genome (such as we and our cousins) could be understood in terms of cultural evolution due to the evolution of topological quantum computer programs. The evolution would have been for a long time an evolution of tqc programs rather than of hardware, as suggested by the fact that the size and details of the genome do not matter much. This suggests that the appearance of eukaryotes (and multi-cellulars) meant the emergence of introns and perhaps also of cultural evolution as the evolution of quantum software and collective magnetic bodies.

Implications for Mendelian anomalies

This vision also suggests how to understand the origin of the Mendelian anomalies.

  1. Trans-generational inheritance might be understood as an inheritance of tqc programs carrying indirect information also about the genome of the parents. If one accepts the TGD vision about organisms as 4-D structures, one must of course be ready even to ask whether genetic effects could also take place via the mediation of the magnetic bodies assignable to structures formed by several generations.
  2. The context sensitivity of the effect of a particular gene could be understood in this picture since the programs are determined not by a single gene alone but by longer portions of DNA. Individual genes do not matter much when one tries to understand genetic correlates for autism, schizophrenia, and other complex diseases related to functions rather than mere structure. If one speaks about structure, such as the color of flowers, the situation is of course very simple and the Mendelian approach works well. An interesting question is how closely the structure-function dichotomy, the exon-intron dichotomy, and the hardware-software dichotomy correspond to each other.
  3. High-level diseases would more often be programming errors than hardware problems. This would solve the "missing heritability" problem.

What is amusing is that the physicist's dark matter would indeed be behind "genome's dark matter": magnetic flux tubes are indeed assumed to be carriers of dark matter, dark quarks in fact. In the proposed model a key role is played by quarks with a large Planck constant, meaning that their Compton length is scaled up to at least the size scale of a cell!

See the chapter DNA as Topological Quantum Computer of "Genes and Memes" for a discussion containing also additional references.

Monday, December 27, 2010

The arrow of time and self referentiality of consciousness

The understanding of the relationship between experienced time, whose chronon is identified as the quantum jump, and geometric time has remained one of the most difficult challenges of the TGD inspired theory of consciousness. A second difficult problem is the self-referentiality of consciousness.

One should understand the asymmetry between positive and negative energies and between the two directions of geometric time at the level of conscious experience, the correspondence between experienced and geometric time, and the emergence of the arrow of time. One should explain why human sensory experience is about a rather narrow time interval of about 0.1 seconds and why memories are about the interior of a much larger CD with a time scale of the order of a lifetime. One should have a vision about the evolution of consciousness: how quantum leaps leading to an expansion of consciousness occur.

Negative energy signals to the geometric past - of which phase conjugate laser light is an example - provide an attractive tool to realize intentional action as a signal inducing neural activities in the geometric past (this would explain Libet's classical findings), a mechanism of remote metabolism, and a mechanism of declarative memory as communications with the geometric past. One should understand how these signals are realized in zero energy ontology and why their occurrence is so rare.

In the following I try to demonstrate that the TGD inspired theory of consciousness and quantum TGD proper are indeed in tune. I have discussed these problems already earlier; the motivation for this posting is that the discussions with Stephen Paul King in the Time discussion group led to further progress in the understanding of these issues. What I now understand much better is how the self-referentiality of consciousness is realized.

Space-time and imbedding space correlates for selves

Quantum jump as a moment of consciousness, self as a sequence of quantum jumps integrating into a larger unit, and the self hierarchy with sub-selves experienced as mental images are the basic notions of the TGD inspired theory of consciousness. In the most ambitious vision the self hierarchy reduces to a fractal hierarchy of quantum jumps within quantum jumps. Quantum classical correspondence demands that selves have correlates both at the level of space-time and at the level of the imbedding space.

At the level of space-time the first guess for the correlates is light-like or space-like 3-surfaces. If one believes in effective 2-dimensionality and quantum holography, partonic 2-surfaces plus their 4-D tangent space distribution would code the information about the space-time correlates. By quantum classical correspondence one can also identify space-time sheets as the correlates, modulo the gauge degeneracy implied by the super-conformal symmetries.

It is natural to interpret CDs as correlates of selves at the level of the imbedding space. CDs can be interpreted either as subsets of the generalized imbedding space or as sectors of WCW. Accordingly, selves correspond to CDs of the generalized imbedding space or sectors of WCW, literally separate interacting quantum Universes. The spiritually oriented reader might speak of Gods. Sub-selves correspond geometrically to sub-CDs. The contents of consciousness of a self are about the interior of the corresponding CD at the level of the imbedding space. For sub-selves the wave function for the position of the tip of the CD brings in the delocalization of the sub-WCW.

The fractal hierarchy of CDs within CDs is the geometric counterpart of the hierarchy of selves: the quantization of the time scale of planned action and memory as T(k) = 2^k T_0 suggests an interpretation for the fact that we experience octaves as equivalent in the experience of music.
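
The octave interpretation is immediate from the scaling law: each step in k doubles the time scale and hence halves the corresponding frequency, which is exactly one musical octave. A trivial numerical check:

```python
# Time scale hierarchy T(k) = 2^k * T0: each step in k doubles the
# time scale, i.e. halves the frequency f(k) = 1/T(k) - one octave
# per step, independently of k.
T0 = 1.0  # arbitrary unit
scales = [2**k * T0 for k in range(5)]
ratios = [scales[k + 1] / scales[k] for k in range(4)]
print(scales)   # doubles at each step
print(ratios)   # constant ratio 2.0, the octave
```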

Why sensory experience is about so short time interval?

The CD picture automatically implies the 4-D character of conscious experience, and memories form part of conscious experience even at the elementary particle level. Amazingly, the secondary p-adic time scale of the electron is T = 0.1 seconds, defining a fundamental time scale in living matter. The problem is to understand why sensory experience is about a short interval of geometric time rather than about the entire personal CD with temporal size of the order of a lifetime. The explanation would be that sensory input corresponds to subselves (mental images) with T ≈ 0.1 s at the upper light-like boundary of the CD in question. This requires a strong asymmetry between the upper and lower light-like boundaries of CDs.
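
The 0.1 second figure can be checked numerically. In the p-adic length scale hypothesis the electron corresponds to the Mersenne prime M_127 = 2^127 - 1, and the secondary time scale is sqrt(p) times the electron's Compton time; this particular form of the scaling law is my reading of the hypothesis, so treat the sketch as illustrative:

```python
import math

h = 6.62607015e-34         # Planck constant, J*s
m_e_c2 = 8.1871057769e-14  # electron rest energy, J

# Compton time of the electron (non-reduced convention, h rather than hbar)
t_compton = h / m_e_c2     # about 8.09e-21 s

# Electron corresponds to the Mersenne prime M_127 = 2^127 - 1, so
# sqrt(p) is about 2^63.5; the secondary p-adic time scale is assumed
# to be T = sqrt(p) * t_compton.
p = 2**127 - 1
T = math.sqrt(p) * t_compton

print(f"T = {T:.3f} s")    # about 0.1 s, matching the quoted value
```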

The localization of the contents of the sensory experience to the upper light-cone boundary and local arrow of time could emerge as a consequence of self-organization process involving conscious intentional action. Sub-CDs would be in the interior of CD and self-organization process would lead to a distribution of CDs concentrated near the upper or lower boundary of CD. The local arrow of geometric time would depend on CD and even differ for CD and sub-CDs.

  1. The localization of the contents of sensory experience to a narrow time interval would be due to the concentration of sub-CDs representing mental images near either boundary of the CD representing the self.

  2. Phase conjugate signals, identifiable as negative energy signals to the geometric past, are important when the arrow of time differs from the standard one in some time scale. If the arrow of time establishes itself as a phase transition, situations of this kind are rare. Negative energy signals as a basic mechanism of intentional action and transfer of metabolic energy would explain why living matter is so special.

  3. Geometric memories would correspond to subselves in the interior of the CD, the oldest of them to the regions near the "lower" boundary of the CD. Since the density of sub-CDs is small there, geometric memories would be rare and not sharp. A temporal sequence of mental images, say the sequence of digits of a phone number, would correspond to a temporal sequence of sub-CDs.

  4. Sharing of mental images corresponds to a fusion of sub-selves/mental images into a single sub-self by quantum entanglement: the space-time correlate could be flux tubes connecting the space-time sheets associated with the sub-selves, themselves represented by space-time sheets inside their CDs.

Arrow of time

TGD forces a new view about the relationship between experienced and geometric time. Although the basic paradox of quantum measurement theory disappears, the question about the arrow of geometric time remains. There are actually two times involved: the geometric time assignable to the space-time sheets and the M4 time assignable to the imbedding space.

Consider first the geometric time assignable to the space-time sheets.

  1. Selves correspond to CDs. The CDs and their projections to the imbedding space do not move anywhere. Therefore the standard explanation for the arrow of geometric time cannot work.

  2. The only plausible interpretation at the classical level relies on quantum classical correspondence and the fact that space-times are 4-surfaces of the imbedding space. If a quantum jump corresponds, in the first approximation, to a shift of a quantum superposition of space-time sheets towards the geometric past (as quantum classical correspondence suggests), one can understand the arrow of time. Space-time surfaces simply shift backwards with respect to the geometric time of the imbedding space and therefore with respect to the 8-D perceptive field defined by the CD. This creates in the materialistic mind a temporal variant of the train illusion. Space-time as a 4-surface and macroscopic and macro-temporal quantum coherence are absolutely essential for this interpretation to make sense.

Why should this shifting always take place in the direction of the geometric past of the imbedding space? Does it always do so? The proposed mechanism for the localization of sensory experience to a short time interval suggests an explanation in terms of intentional action.

  1. The CD defines the perceptive field for the self. Negentropy Maximization Principle (NMP) or its strengthened form could be used to justify the hypothesis that selves quite universally love to gain information about the unknown. In other words, they are curious about the space-time sheets outside their perceptive field (the future). Therefore they perform quantum jumps tending to shift the superposition of the space-time sheets so that unknown regions of the space-time sheets emerge into the perceptive field. Either the upper or the lower boundary of the CD wins in the competition, and the arrow of time results as a spontaneous symmetry breaking. The arrow of time can depend on the CD but tends to be the same for a CD and its sub-CDs. A global arrow of time could establish itself by a phase transition fixing the same arrow of time globally, by a mechanism analogous to a percolation phase transition.

  2. Since the news comes from the upper boundary of the CD, the self concentrates its attention on this region and improves the resolution of sensory experience. The sub-CDs generated in this manner correspond to mental images with contents about this region. Hence the contents of conscious experience, in particular sensory experience, tend to be about the region near the upper boundary.
  3. Note that the space-time sheets need not continue outside the CD of the self, but the self does not know this and believes that there is something there to be curious about. The quantum jump, inducing what reduces to a mere shift in regions sufficiently far from the upper boundary of the CD, creates a new piece of space-time surface! The non-continuation of the space-time sheet outside the CD would be a correlate for the fact that the subjective future does not exist.

The emergence of the arrow of time at the level of the imbedding space reduces to a modification of the oldest TGD based argument for the arrow of time, which is wrong as such. If physical objects correspond to 3-surfaces inside a future directed light-cone, then the sequence of quantum jumps implies a diffusion in the direction of increasing light-cone proper time. The modified argument goes as follows.

  1. CDs are characterized by their moduli. In particular, the relative coordinate for the tips of the CD has values in the past light cone M4- if the future tip is taken as the reference point. An attractive interpretation for the proper time of M4- is as cosmic time having quantized values. Quantum states correspond to wave functions in the modular degrees of freedom, and each U process creates a non-localized wave function of this kind. Suppose that state function reduction implies a localization in the modular degrees of freedom so that the CD is fixed completely apart from its center of mass position, to which a zero four-momentum constant plane wave is assigned. One can expect that in the average sense diffusion occurs in M4- so that the size of the CD tends to increase and the most distant geometric past, defined by the past boundary of the CD, recedes. This is nothing but cosmic expansion. This provides a formulation of the flow of time in terms of a cosmic redshift. The argument applies also to the positions of the sub-CDs inside a CD: their proper time distance from the tip of the CD is also expected to increase.

  2. One can argue that one ends up with a contradiction by exchanging the roles of the upper and lower tips. In the case of the CD itself it is only the proper time distance between the tips which increases, and speaking about "future" and "past" tips is only a convention. For the sub-CDs of a CD the argument would imply that the sub-CDs drifting from the opposite tips tend to concentrate in the middle region of the CD unless one of the tips is in a preferred position. This requires a spontaneous selection of the arrow of time. One could say that the cosmic expansion implied by the drift in M4- "draws" the space-time sheet with it to the geometric past. The spontaneous generation of the asymmetry between the tips might require the "curious" conscious entities.

The mechanism of self reference

Self reference is perhaps the most mysterious aspect of conscious experience. Formulated in a somewhat loose manner, self reference states that a self can be conscious of being conscious of something. When trying to model this ability in, say, the computer paradigm one is easily led to an infinite regress. In the TGD framework a weaker form of self-referentiality holds true: a self can become conscious that it was conscious of something in a previous quantum jump (or jumps). Self reference therefore reduces to memory. The infinite regress is replaced with evolution recreating the Universe again and again and adding new reflective levels of consciousness. It is however essential to also have the experience that a memory is in question in order to have self reference. This knowledge implies that a reflective level is in question.

The mechanism of self reference would reduce to the ability to code information about quantum jump into the geometry and topology of the space-time surface. This representation defines an analog of written text which can be read if needed: memory recall is this reading process. The existence of this kind of representations means quantum classical correspondence in a generalized sense: not only quantum states but also quantum jump sequences responsible for conscious experience can be coded to the space-time geometry. The reading of this text induces self-organization process re-generating the original conscious experience or at least some aspects of it (say verbal representation of it). The failure of strict classical determinism for Kähler action is absolutely essential for the possibility to realize quantum classical correspondence in this sense.

Consider now the problem of coding conscious experience to space-time geometry and topology so that it can be read again in memory recall. Let us first list what I believe to know about memories.

  1. In the TGD framework memories correspond to sub-CDs inside CDs (causal diamonds defined as intersections of future and past directed light-cones) and are located in the geometric past. This means a fundamental difference from the neuroscience view, according to which memories are in the geometric now. Note that a standard physicist would argue that this does not make sense: by the determinism of field equations one cannot think 4-dimensionally. In TGD, however, field equations fail to be deterministic in the standard sense: this actually led to the introduction of zero energy ontology.

  2. The reading wakes up mental images which are essentially 4-D self-organization patterns inside sub-CDs in the geometric past. Metabolic energy is needed to achieve this wake-up. What is needed is the generation of space-time sheets representing the potential mental images making memories possible.

This picture, combined with the mechanism generating the arrow of psychological time and explaining why sensory experience is localized to such a short time interval (0.1 seconds, the time scale of the CD associated with the electron by the p-adic length scale hypothesis), allows one to understand the mechanism of self reference. It deserves to be mentioned that the discussion with Stephen Paul King in the Time discussion group served as the midwife for this step of progress.

  1. When the film makes a shift in the direction of the geometric past in a quantum jump, subselves representing mental images, the reaction to the "news", are generated. These correspond to sub-CDs containing space-time surfaces as correlates of the subselves created, and the information content of the immediate conscious experience is about this region of space-time and imbedding space. They are like additional comment marks on the film giving information about what feelings the news from the geometric future stimulated.

  2. In subsequent quantum jumps the film moves downwards towards the geometric past, and the markings defined in terms of the space-time correlates of mental images are shifted backwards with the film, coding the information about previous conscious experiences. In memory recall metabolic energy is fed to these subsystems so that they wake up and regenerate the mental images about the remembered aspects of the previous conscious experience. This would not be possible in positive energy ontology or if determinism held true in the strict sense of the word.

  3. Something must bring in the essential information that these experiences are memories rather than, say, genuine sensory experiences. Something must distinguish between genuine experiences and memories about them. The space-time sheets representing self reference define cognitive representations. If the space-time sheets representing the correlates of self-referential mental images are p-adic, this distinction emerges naturally. That these space-time sheets are in the intersection of the real and p-adic worlds is actually enough and also makes possible the negentropic entanglement carrying the conscious information. In TGD inspired quantum biology this property is indeed the defining characteristic of life.

  4. There is a quite concrete mechanism for the realization of memories in terms of braidings of magnetic flux tubes, discussed here.

Background material can be found in the chapter About the Nature of Time of "TGD Inspired Theory of Consciousness".

Tuesday, December 21, 2010

TGD based explanation for the soft photon anomaly of hadron physics

There is quite a recent article entitled Study of the Dependence of Direct Soft Photon Production on the Jet Characteristics in Hadronic Z0 Decays discussing one particular manifestation of an anomaly of hadron physics known for two decades: the soft photon production rate in hadronic reactions is higher than expected by an average factor of about four. In the article the soft photons are assigned to the decays of Z0 to quark-antiquark pairs. This anomaly has not caught the attention of particle physicists, which seems to be the fate of anomalies quite generally nowadays: large extra dimensions and blackholes at LHC are much sexier topics of study than the anomalies about which both existing and speculative theories must remain silent.

TGD leads to an explanation of the anomaly in terms of the basic differences between TGD and QCD.

  1. The first difference is due to the induced gauge field concept: both the classical color gauge fields and the U(1) part of the electromagnetic field are proportional to the induced Kähler form. The second difference is topological field quantization, meaning that electric and magnetic fluxes are associated with flux tubes. Taken together this means that for neutral hadrons color flux tubes and electric flux tubes can be, and will be, assumed to be one and the same thing. In the case of charged hadrons the em flux tubes must connect different hadrons: this is essential for understanding why neutral hadrons seem to contribute much more effectively to the bremsstrahlung than charged hadrons - just the opposite of the prediction of the hadronic inner bremsstrahlung model, in which only charged hadrons contribute. Now both the sea and valence quarks of neutral hadrons contribute, but in the case of charged hadrons only the valence quarks do so.
  2. The sea quarks of neutral hadrons seem to give the largest contribution to the bremsstrahlung. The p-adic length scale hypothesis, predicting that quarks can appear in several mass scales, represents the third difference, and the experimental findings suggest that sea quarks are lighter than valence quarks by a factor of 1/2, implying that the bremsstrahlung from a given sea quark is by a factor of 4 more intense than that from the corresponding valence quark.
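
The factor of 4 quoted above follows from the standard scaling of bremsstrahlung intensity with the inverse square of the emitter's mass, so halving the mass scale quadruples the rate. A one-line check (assuming the 1/m^2 scaling):

```python
# Bremsstrahlung intensity scales like 1/m^2 for an emitter of mass m,
# so halving the quark mass scale quadruples the emission rate.
def brems_ratio(m_sea_over_m_valence):
    return 1.0 / m_sea_over_m_valence**2

print(brems_ratio(0.5))  # 4.0
```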

I do not bother to type further and give a link to a pdf file explaining the model. The model can also be found in the chapter p-Adic Mass Calculations: New Physics of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

Sunday, December 19, 2010

Model for the findings about hologram generating properties of DNA

The findings of Peter Gariaev and his collaborators have provided a test bed for many basic ideas of TGD inspired biology. Together with Peter we worked out a model for some particular findings of his group providing support for the notion of the magnetic body. The data are interpreted in terms of a photograph of the magnetic body of the DNA sample and therefore also of the dark matter at it. The model also provides a more detailed picture of how living systems could build holograms about themselves and their environment and read them. The article will be published in the first issue of the newly founded journal DNADJ (DNA Decipher Journal) appearing in January. The preprint can be found at Scireprints and also at my homepage.

A TGD inspired model is developed for the strange replica structures observed when a DNA sample is irradiated by red, IR, and UV light using two methods by Peter Gariaev and collaborators. The first method produces what is tentatively interpreted as replica images of either the DNA sample or of the five red lamps used to irradiate the sample. The second method produces a replica image of the environment, with replication in the horizontal direction but only on the right-hand side of the apparatus. Also a white phantom variant of the replica trajectory observed in the first experiment is observed and has in the vertical direction the size scale of the apparatus.

The model is developed in order to explain the characteristic features of the replica patterns. The basic notions are the magnetic body, the massless extremal (topological light ray), the existence of Bose-Einstein condensates of Cooper pairs at magnetic flux tubes, and dark photons with a large value of Planck constant for which macroscopic quantum coherence is possible. The hypothesis is that the first method makes part of the magnetic body of the DNA sample visible, whereas method II would produce a replica hologram of the environment using dark photons and would also produce a phantom image of the magnetic tubes becoming visible by method I. Replicas would result from a mirror hall effect in the sense that the dark photons would move back and forth between the part of the magnetic body becoming visible by method I and serving as a mirror and the objects of the environment serving also as mirrors. What is however required is that not only the outer boundaries of objects visible via ordinary reflection act as mirrors but also the parts of the outer boundary not usually visible perform the mirror function, so that an essentially 3-D vision providing information about the geometry of the entire object would be in question. Many-sheeted space-time allows this.

The presence of the hologram image for method II requires the self-sustainment of the reference beam only, whereas the presence of the phantom DNA image for method I requires the self-sustainment of both beams. Non-linear dynamics for the energy feed from DNA to the magnetic body could make possible the self-sustainment of both beams simultaneously. Non-linear dynamics for the beams themselves could allow the self-sustainment of the reference beam and/or the reflected beam. The latter option is favored by the data.

Wednesday, December 15, 2010

Preferred extremals of Kähler action and perfect fluids

Lubos Motl had an interesting article about Perfect fluids, string theory, and black holes. It of course takes some self-discipline to get over the M-theory propaganda without getting very angry. Indeed, the article starts with

The omnipresence of very low-viscosity fluids in the observable world is one of the amazing victories of string theory. The value of the minimum viscosity seems to follow a universal formula that can be derived from quantum gravity - i.e. from string theory.

The first sentence is definitely something which surpasses all records in the recorded history of superstring hype (for records see Not Even Wrong). At the end of the propaganda strike Lubos however explains in an enjoyable manner some basic facts about perfect fluids, super-fluids, and viscosity, and mentions the effective absence of the non-diagonal components of the stress tensor as a mathematical correlate for the absence of shear viscosity, often identified as viscosity. This comment actually stimulated this posting.

In any case, almost perfect fluids seem to be abundant in Nature. For instance, the QCD plasma was originally thought to behave like a gas and therefore to have a rather high viscosity to entropy density ratio x = η/s. Already RHIC found that it however behaves like an almost perfect fluid with x near the minimum predicted by AdS/CFT. The findings from LHC gave additional confirmation of the discovery (see this). Also the Fermi gas is found on the basis of experimental observations to have at low temperatures a low viscosity, roughly 5-6 times the minimal value (see this). This behavior is of course not a prediction of superstring theory but only demonstrates that the AdS/CFT correspondence, applying to conformal field theories as a new kind of calculational tool, allows one to make predictions in parameter regions where standard methods fail. This is fantastic but has nothing to do with predictions of string theory.
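
The "minimum" referred to here is the Kovtun-Son-Starinets bound η/s ≥ ħ/(4π k_B), derived via the AdS/CFT correspondence. Its numerical value is easy to evaluate:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
k_B = 1.380649e-23      # Boltzmann constant, J/K

# Kovtun-Son-Starinets (KSS) bound on shear viscosity over entropy density
eta_over_s_min = hbar / (4 * math.pi * k_B)
print(f"eta/s >= {eta_over_s_min:.2e} K*s")  # about 6.1e-13 K*s
```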

In the following I develop the argument that the preferred extremals of Kähler action are perfect fluids apart from the symmetry breaking to space-time sheets. The argument requires some basic formulas, summarized first.

The physics oriented reader not working with hydrodynamics - and possibly irritated by the observation that after all these years he still has a rather tenuous understanding of viscosity as a mathematical notion - can refresh his mental images about concrete experimental definitions as well as tensor formulas from the Wikipedia article about viscosity. There one also finds the definition of the viscous part of the stress energy tensor, which is linear in velocity (the oddness in velocity relates directly to the second law). The decomposition into bulk viscosity and shear viscosity is summarized below.

  1. The symmetric part of the gradient of velocity gives the viscous part of the stress-energy tensor as a tensor linear in velocity. The velocity gradient decomposes to a traceless tensor term and a term reducing to a scalar.

    ∂_iv_j + ∂_jv_i = (2/3)∂_kv_k g_ij + (∂_iv_j + ∂_jv_i - (2/3)∂_kv_k g_ij) .

    The viscous contribution to stress tensor is given in terms of this decomposition as

    σ_visc,ij = ζ ∂_kv_k g_ij + η (∂_iv_j + ∂_jv_i - (2/3)∂_kv_k g_ij) .

    From dF_i = σ_ij dS_j it is clear that bulk viscosity ζ gives to the energy momentum tensor a pressure like contribution having an interpretation in terms of friction opposing compression. Shear viscosity η corresponds to the traceless part of the velocity gradient, often called just viscosity. This contribution to the stress tensor is non-diagonal, corresponds to momentum transfer in directions not parallel to the momentum, and makes the flow rotational. This term is essential for thermal conduction, and thermal conductivity vanishes for ideal fluids.

  2. The 3-D total stress tensor can be written as

    σ_ij = ρ v_iv_j - p g_ij + σ_visc,ij .

    The generalization to a 4-D relativistic situation is simple. One just adds terms corresponding to energy density and energy flow to obtain

    T^αβ = (ρ + p) u^α u^β - p g^αβ + σ_visc^αβ .

    Here u^α denotes the local four-velocity satisfying u^αu_α = 1. The sign factors relate to the conventions in the definition of the Minkowski metric ((1,-1,-1,-1)).

  3. If the flow is such that the flow parameters associated with the flow lines integrate to a global flow parameter, one can identify a new time coordinate t as this flow parameter. This means a transition to a coordinate system in which the fluid is at rest everywhere (comoving coordinates in cosmology) so that the energy momentum tensor reduces to a diagonal term plus the viscous term.

    T^αβ = (ρ + p) g^tt δ^α_t δ^β_t - p g^αβ + σ_visc^αβ .

    In this case the vanishing of the viscous term means that one has perfect fluid in strong sense.

    The existence of a global flow parameter means that one has

    v_i = Ψ ∂_iΦ .

    Ψ and Φ depend on space-time point. The proportionality to a gradient of scalar Φ implies that Φ can be taken as a global time coordinate. If this condition is not satisfied, the perfect fluid property makes sense only locally.
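The decomposition above is straightforward to check numerically. Below is a minimal Python/numpy sketch (the 3x3 gradient matrix is an arbitrary illustrative choice, not data) that splits a symmetrized velocity gradient into its scalar part and traceless shear part and assembles the viscous stress of the formula above:

```python
import numpy as np

# Arbitrary illustrative velocity gradient G_ij = d_i v_j on a flat 3-metric.
rng = np.random.default_rng(0)
G = rng.normal(size=(3, 3))
g = np.eye(3)

S = G + G.T                          # symmetrized gradient  d_i v_j + d_j v_i
div_v = np.trace(G)                  # d_k v_k

bulk  = (2.0 / 3.0) * div_v * g      # scalar (compression) part
shear = S - bulk                     # traceless (shear) part

assert abs(np.trace(shear)) < 1e-12  # the shear part is indeed traceless
assert np.allclose(bulk + shear, S)  # the two pieces sum back to the gradient

# Viscous stress with bulk viscosity zeta and shear viscosity eta:
zeta, eta = 0.5, 0.1
sigma_visc = zeta * div_v * g + eta * shear
```

The shear piece carries the non-diagonal entries responsible for momentum transfer between flow layers.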

AdS/CFT correspondence allows one to deduce a lower limit for the coefficient of shear viscosity as

x= η/s≥ hbar/4π .

This formula holds true in units in which one has kB=1 so that temperature has unit of energy.
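Restoring SI units, the bound reads η/s ≥ hbar/(4π k_B). A quick numerical evaluation with standard CODATA values (the factor 5.5 stands for the rough 5-6 multiple quoted above for the cold Fermi gas):

```python
import math

hbar = 1.054571817e-34   # J s
kB   = 1.380649e-23      # J/K

x_min = hbar / (4 * math.pi * kB)   # minimal eta/s in SI units, ~6.1e-13 K s
fermi_gas = 5.5 * x_min             # cold Fermi gas: roughly 5-6 times the bound

print(f"minimal eta/s   = {x_min:.2e} K s")
print(f"Fermi gas eta/s ~ {fermi_gas:.2e} K s")
```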

What makes this interesting from the TGD point of view is that in the TGD framework the perfect fluid property, in an appropriately generalized sense, indeed characterizes locally the preferred extremals of Kähler action defining the space-time surface.

  1. Kähler action is Maxwell action with the U(1) gauge field replaced with the projection of the CP2 Kähler form, so that the four CP2 coordinates become the dynamical variables at the QFT limit. This means an enormous reduction in the number of degrees of freedom as compared to ordinary unifications. The field equations for Kähler action define the dynamics of space-time surfaces, and this dynamics reduces to conservation laws for the currents assignable to isometries. This means that the system has a hydrodynamic interpretation. This is a considerable difference from ordinary Maxwell equations. Notice however that the "topological" half of Maxwell's equations (Faraday's induction law and the statement that no non-topological magnetic charges are possible) is satisfied.

  2. Even more, the resulting hydrodynamical system allows an interpretation in terms of a perfect fluid. The general ansatz for the preferred extremals of the field equations assumes that the various conserved currents are proportional to a vector field characterized by the so called Beltrami property; the coefficient of proportionality depends on the space-time point and on the conserved current in question. A Beltrami field is by definition a vector field such that the time parameters assignable to its flow lines integrate to a single global coordinate. This is highly non-trivial, and one of the implications is the almost topological QFT property due to the fact that Kähler action reduces to a boundary term assignable to wormhole throats, which are light-like 3-surfaces at the boundaries of the regions of space-time with Euclidian and Minkowskian signatures. The Euclidian regions (or wormhole throats, depending on one's tastes) define what I identify as generalized Feynman diagrams.

    Beltrami property means that if the time coordinate for a space-time sheet is chosen to be this global flow parameter, all conserved currents have only time component. In TGD framework energy momentum tensor is replaced with a collection of conserved currents assignable to various isometries and the analog of energy momentum tensor complex constructed in this manner has no counterparts of non-diagonal components. Hence the preferred extremals allow an interpretation in terms of perfect fluid without any viscosity.

This argument justifies the expectation that TGD Universe is characterized by the presence of low-viscosity fluids. Real fluids of course have a non-vanishing albeit small value of x. What causes the failure of the exact perfect fluid property?

  1. Many-sheetedness of the space-time is the underlying reason. The space-time surface decomposes into finite-sized space-time sheets containing topologically condensed smaller space-time sheets containing.... The perfect fluid property holds true only within a given sheet: it fails at wormhole contacts and because the sheet has a finite size. As a consequence, the global flow parameter exists only in a given length and time scale. At the imbedding space level and in zero energy ontology the same would be phrased in terms of the hierarchy of causal diamonds (CDs).

  2. The so called eddy viscosity is caused by eddies (vortices) of the flow. The space-time sheets glued to a larger one are indeed analogous to eddies, so that the reduction of viscosity to eddy viscosity could make sense quite generally. Also the phase slippage phenomenon of super-conductivity involves a similar mechanism: the total phase increment of the super-conducting order parameter is reduced by a multiple of 2π in a phase slippage, so that the average velocity, proportional to the increment of the phase along the channel divided by the length of the channel, is reduced by a quantized amount.

    The standard arrangement for measuring viscosity involves a liquid layer flowing along a plane. The velocity of the flow with respect to the surface increases from v=0 at the lower boundary to v_upper at the upper boundary of the layer: this situation can be regarded as the outcome of a dissipation process and prevails as long as energy is fed into the system. The reduction of the velocity in the direction orthogonal to the layer means that the flow becomes rotational during the dissipation leading to this stationary situation.

    This suggests that the elementary building block of the dissipation process corresponds to the generation of a vortex, identifiable as a cylindrical space-time sheet parallel to the plane of the flow, orthogonal to the velocity of the flow, and carrying quantized angular momentum. One expects that the vortices have a spectrum labelled by quantum numbers like energy and angular momentum, so that dissipation takes place in discrete steps by the generation of vortices, which transfer energy and angular momentum to the environment and in this manner generate the velocity gradient.

  3. The quantization of the parameter x is suggestive in this framework. If entropy density and viscosity are both proportional to the density n of the eddies, the value of x would equal the ratio of the quantum of entropy to the viscosity per eddy, η/n, if all eddies are identical. The quantum would be hbar/4π in the units used, and the suggestive interpretation is in terms of the quantization of angular momentum. One of course expects a spectrum of eddies, so that this simple prediction should hold true only at temperatures for which the excitation energies of the vortices are above the thermal energy. The increase of temperature would suggest that gradually more and more vortices come into play and that the ratio increases in a stepwise manner, bringing in mind quantum Hall effect. In the TGD Universe the value of hbar can be large in some situations, so that the quantal character of dissipation could become visible even macroscopically. Whether this situation with large hbar is encountered even in the case of QCD plasma is an interesting question.

The following poor man's argument tries to make the idea about quantization a little bit more concrete.

  1. The vortices transfer momentum parallel to the plane from the flow. Therefore they must have momentum parallel to the flow, given by the total cm momentum of the vortex. Before continuing some notations are needed. Let the densities of vortices and of absorbed vortices be n and n_abs respectively. Denote by v_par resp. v_perp the components of the cm velocity parallel to the main flow resp. perpendicular to the boundary plane. Let m be the mass of the vortex. Denote by S the area of a surface parallel to the boundary plane.

  2. The flow of the momentum component parallel to the main flow due to the vortices absorbed at S is

    n_abs m v_par v_perp S .

    This momentum flow must be equal to the viscous force

    F_visc = η (v_par/d) × S .

    From this one obtains

    η = n_abs m v_perp d .

    If the entropy density is due to the vortices, it equals apart from possible numerical factors to

    s= n

    so that one has

    η/s = m v_perp d .

    This quantity should have the lower bound x = hbar/4π and might even be quantized in multiples of x. Angular momentum quantization strongly suggests itself as the origin of the quantization.

  3. Local momentum conservation requires that the comoving vortices are created in pairs with opposite momenta and thus propagate with opposite velocities v_perp. Only one half of the vortices is absorbed, so that one has n_abs = n/2. A vortex has quantized angular momentum associated with its internal rotation. Angular momentum is generated to the flow since the vortices flowing downwards are absorbed at the boundary surface.

    Suppose that the distance between their center of mass lines parallel to the plane is D = ε d, ε a numerical constant not too far from unity. The vortices of the pair moving in opposite directions have the same angular momentum m v_perp D/2 relative to the center of mass line between them. Angular momentum conservation requires that the sum of these relative angular momenta cancels the sum of the angular momenta associated with the vortices themselves. Quantization of the total angular momentum for the pair of vortices gives

    η/s = n hbar/ε .

    Quantization condition would give

    ε =4π .

    One should understand why D = 4π d - four times the circumference of the largest circle contained in the boundary layer - should define the minimal distance between the vortices of the pair. This distance is larger than the distance d for maximally sized vortices of radius d/2 just touching each other. This distance obviously increases as the thickness of the boundary layer increases, suggesting that also the radius of the vortices scales like d.

  4. One cannot of course take this detailed model too literally. What is however remarkable is that the quantization of angular momentum and a dissipation mechanism based on vortices identified as space-time sheets could indeed explain why the lower bound for the ratio η/s is so small.
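The bookkeeping of the poor man's argument above can be checked with plain numbers. A minimal Python sketch, assuming as in the text that n_abs = n/2 and s ∼ n (the numerical values are arbitrary illustrations, and the factor 1/2 is one of the "possible numerical factors" the argument ignores):

```python
# Arbitrary illustrative values for vortex density n, vortex mass m,
# velocities v_par and v_perp, surface area S and layer thickness d.
n, m, v_par, v_perp, S, d = 2.0, 3.0, 1.5, 0.7, 4.0, 0.2
n_abs = n / 2                                   # only half of the vortices absorbed

momentum_flux = n_abs * m * v_par * v_perp * S  # parallel momentum through S
eta = momentum_flux * d / (v_par * S)           # solve eta*(v_par/d)*S = flux
assert abs(eta - n_abs * m * v_perp * d) < 1e-12

s = n                                           # entropy density ~ vortex density
ratio = eta / s                                 # eta/s = m*v_perp*d up to the factor 1/2
assert abs(ratio - m * v_perp * d / 2) < 1e-12
```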

For background see the chapter Does the Modified Dirac Equation Define the Fundamental Action Principle? of "Quantum TGD as Infinite-dimensional Spinor Geometry".

Tuesday, December 07, 2010

A possible explanation of Shnoll effect

I have already earlier mentioned the work of the Russian scientist Shnoll concerning random fluctuations. This work spans four decades and has finally started to gain recognition also in the west. By good luck I found on the web an article by Shnoll about strange regularities in what should be random fluctuations. Then Dainis Zeps provided me with a whole collection of similar articles! Thank you Dainis!

The findings of Shnoll led within a few days to considerable progress in the understanding of the relation between p-adic and real probability concepts, of the relationship between p-adic physics and the quantum groups emerging naturally in the TGD based view about finite measurement resolution, of the relationship between the hierarchy of Planck constants (in particular the gigantic gravitational Planck constant assignable to the space-time sheets mediating gravitation) and small-p p-adicity, and also of the experimental implications of the many-sheetedness of space-time in concrete measurement situations, in which also the measurement apparatus brings in a non-trivial topology of the space-time.

The most important conclusion is that the basic vision about the TGD Universe seems to manifest itself directly in quantum fluctuations due to quantum coherence in astrophysical scales in practically all kinds of experiments - even in the distributions of financial variables! Needless to say how far reaching the implications are for quantum gravity. This is one of the biggest surprises of my un-paid professional life, which outshines even the repeated surprises caused by the incredibly unintelligent response of most colleagues to my work! I glue below the abstract of a brand new preprint titled A possible Explanation of Shnoll Effect.

Shnoll and collaborators have discovered strange repeating patterns of random fluctuations of physical observables such as the number n of nuclear decays in a given time interval. Periodically occurring peaks for the distribution of the number N(n) of measurements producing n events in a series of measurements as a function of n are observed instead of a single peak. The positions of the peaks are not random, and the patterns depend on position and time, varying periodically in time scales possibly assignable to the Earth-Sun and Earth-Moon gravitational interactions.

These observations suggest a modification of the expected probability distributions but it is very difficult to imagine any physical mechanism in the standard physics framework. Rather, a universal deformation of predicted probability distributions would be in question requiring something analogous to the transition from classical physics to quantum physics.

The hint about the nature of the modification comes from the TGD inspired quantum measurement theory proposing a description of the notion of finite measurement resolution in terms of inclusions of so called hyper-finite factors of type II_1 (HFFs) and closely related quantum groups. Also p-adic physics - another key element of TGD - is expected to be involved. A modification of a given probability distribution P(n|λ_i) for a positive integer valued variable n characterized by rational-valued parameters λ_i is obtained by replacing n and the integers characterizing λ_i with so called quantum integers depending on the quantum phase q_m = exp(i2π/m). The quantum integer n_q must be defined as the product of the quantum counterparts p_q of the primes p appearing in the prime decomposition of n. One has p_q = sin(2πp/m)/sin(2π/m) for p ≠ P and p_q = P for p = P, where P denotes the p-adic prime. m must satisfy m ≥ 3, m ≠ p, and m ≠ 2p.

The quantum counterparts of positive integers can be negative. Therefore the quantum distribution is defined first as a p-adic valued distribution and then mapped by so called canonical identification I to a real distribution. I takes the p-adic -1 to P and the powers P^n to P^(-n), maps the other quantum primes to themselves, and the requirement is that the mean value of n is the same for the distribution and its quantum variant. The map I satisfies I(∑ P^n) = ∑ I(P^n). The resulting distribution has peaks located periodically with periods coming as powers of P. Also periodicities with peaks corresponding to n = n_+n_-, with (n_+)_q > 0 and fixed (n_-)_q < 0, appear.
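To make the notion of quantum integer concrete, here is a small Python sketch of the first step of the construction (the quantum integers themselves, before the canonical identification); the choices m = 7 and P = 2 are arbitrary illustrations, not values suggested by the data:

```python
import math
from functools import reduce

def prime_factors(n):
    """Prime factorization of n as a list with multiplicity."""
    factors, p = [], 2
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:
        factors.append(n)
    return factors

def quantum_prime(p, m, P):
    """p_q = sin(2*pi*p/m)/sin(2*pi/m) for p != P, and p_q = P for p = P."""
    if p == P:
        return float(P)
    return math.sin(2 * math.pi * p / m) / math.sin(2 * math.pi / m)

def quantum_integer(n, m, P):
    """n_q = product of the quantum counterparts of the primes dividing n."""
    return reduce(lambda acc, p: acc * quantum_prime(p, m, P), prime_factors(n), 1.0)

m, P = 7, 2
for n in range(2, 7):          # avoid n divisible by m = 7, excluded by m != p
    print(n, round(quantum_integer(n, m, P), 3))

assert quantum_integer(4, m, P) == 4.0   # powers of P map to themselves
assert quantum_integer(5, m, P) < 0      # quantum integers can indeed be negative
```

The last assertion illustrates the point of the paragraph above: for m = 7 the quantum counterpart of the prime 5 is negative, which is what forces the detour through the p-adic valued distribution.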

The periodic dependence of the distributions would be most naturally assignable to the gravitational interaction of Earth with the Sun and the Moon and therefore to the periodic variation of the Earth-Sun and Earth-Moon distances. The TGD inspired proposal is that the p-adic prime P and the integer m characterizing the quantum distribution are determined by a process analogous to a state function reduction, and that their most probable values depend on the deviation ΔR of the distance R through the formulas Δp/p ≈ k_p ΔR/R and Δm/m ≈ k_m ΔR/R. The p-adic primes assignable to elementary particles are very large, unlike the primes which could characterize the empirical distributions. The hierarchy of Planck constants allows the gravitational Planck constant assignable to the space-time sheets mediating gravitational interactions to have gigantic values, and this allows p-adicity with small values of the p-adic prime P.

For details see the new chapter A Possible Explanation of Shnoll Effect of "p-Adic Length Scale Hypothesis and Dark Matter Hierarchy".

Monday, November 22, 2010

Concentric circles in WMAP and Penrose's vision about pre Big-Bang

Phil Gibbs told in Vixra log about the new paper by Penrose and Gurzadyan entitled Concentric circles in WMAP data may provide evidence of violent pre-Big-Bang activity. I do not really understand how the circles of enhanced coherence for temperature fluctuations could follow from Penrose's theory - probably because I do not even understand Penrose's theory;-). Ulla asked in Vixra log about the possibility that they could have Big-Bang origin. One can also ask whether they could have post Big-Bang origin. The following is a typo free version of my comment in Vixra log with some little additions.

If one allows more general 1-D curves as curves of coherence one can perhaps consider something like this.

  1. These curves should correspond to intersections of 2-D surfaces at which the inhomogeneities of the mass density causing the temperature fluctuations tend to be concentrated. There would be enhanced coherence along these curves if the mass density at these surfaces is reasonably constant.

  2. The intersections of these 2-D surfaces with the sphere from which the radiation comes for a given red-shift would define 1-D curves of exceptional coherence. Concentric circles of coherence, however, look like something too specific.

  3. More concretely, what comes to mind at the level of experimental facts are the large voids of scale 10^8 light years having galaxies at their boundaries. If one believes in a fractal universe, this kind of honeycomb like structure should appear in a hierarchy of discrete length scales.

One can continue even further to the murky depths of TGD.

  1. Suppose that one believes that the flatness of 3-space is not due to inflation but due to a period of quantum criticality (dark matter energy as phases with gigantic value of Planck constant). Criticality is characterized by long range fluctuations and fractality. Therefore one indeed expects a fractal honeycomb manifesting itself as a hierarchy of coherence circles in WMAP.

  2. Diving still deeper and taking seriously the p-adic length scale hypothesis, zero energy ontology, and causal diamonds, one would end up with the possibility that the fractal honeycomb has preferred size scales of cells coming as half octaves or powers of two or a subset of them. This fractal structure would expand in various scales by discrete jumps, involving perhaps the increase of the gravitational hbar as the size of the CD is scaled up by two.

  3. We are now very deep -near to the bottom- and there is still some oxygen left which we can waste to our last speculations. Could it be that these astro quantum jumps have occurred even at the planetary level: could large hbar quantum version of Expanding Earth Theory make sense;-). Now we have no oxygen anymore. I am sorry. Maybe I should have warned;-).
Speaking seriously, one can ask whether these reckless speculations could share something with Penrose's theory. Minkowskian conformal invariance is an essential element of Penrose's theory. Minkowskian conformal symmetry is important also in the recent proposal for how twistor diagrams generalize in the TGD framework, so that a connection at some abstract level can be imagined.

  1. Causal diamonds are defined as intersections of future and past directed light-cones and bring very strongly in mind Penrose diagrams. They indeed define causal units and thus serve as their analogs, with the upper and lower light-like surfaces of the CD serving as the analogs of the conformal infinities at which the positive and negative energy parts of zero energy states reside. The negative/positive energy part of a zero energy state looks like a big crunch or a big bang depending on in which time direction one looks.

  2. There would be a fractal hierarchy of these CDs within CDs - a fractal hierarchy of cosmologies within cosmologies. Note however that the notion of cosmology is somewhat generalized to include also elementary particles at the lower end of the hierarchy;-). For an electron with the ordinary value of Planck constant this sub-cosmology has a duration of .1 seconds in Minkowski time, which happens to correspond to the fundamental bio-rhythm.

Addition: I listened to a popular science program about inflation yesterday on the radio. These programs are very conservative - as is science in Finland - but of high quality, which cannot be said about many popular science TV programs made with big money on the other side of the ocean.

The possible failure of Gaussianity - Gaussianity meaning a complete absence of long range correlations in the spectrum of temperature (mass density) fluctuations - is the hot topic of cosmology nowadays, and the data from the Planck satellite might resolve the problem within two years. The claim of Penrose and Gurzadyan is that these correlations are visible already in the WMAP data. Note that the appearance of non-Gaussian correlations is different from what one would expect from field theory models on the basis of correlation functions, since the prediction is correlations on 2-D surfaces. An interesting possibility is that holography allows one to interpret these surfaces as structures assignable to partonic 2-surfaces.

I learned that the presence of these correlations would kill all inflationary models with only a single inflaton field. Theoreticians are however skilled, and in the worst horror scenario this could lead to decades of resuscitation attempts bringing in more and more complexity. Already now there exists a huge number of almost working inflationary scenarios.

Also TGD - or at least a big and beautiful part of it - is in the danger zone. The absence of long range correlations would kill the explanation of the flatness of three-space as a consequence of quantum criticality - long range correlations in a large range of scales are after all the basic prediction of criticality.

Addition: Lubos Motl wrote a second posting about the finding of Penrose and Gurzadyan. Lubos claims that Penrose and Gurzadyan have rediscovered the already known WMAP excess at the L=40 partial wave, which indeed represents a deviation from the predictions of the inflationary scenario. This exceptionally large harmonic would produce the circles, but what is its origin? Could it be just what Penrose and Gurzadyan are claiming, or merely a single partial wave which happens to be especially strong - and what would that mean? Concentric surfaces intersecting the z=constant sphere?

Addition: The preprints of Moss et al and Wehus et al claim that the concentric circles of Penrose and Gurzadyan can be reproduced in simulations. Penrose and Gurzadyan have responded to the critics and claim that the simulations cannot reproduce the entire series of circles and that there are also other delicate misinterpretations. Personally I remain confused.

I would be happy if someone would list the reasons for believing that exponential expansion is the only explanation for the flatness of 3-space. Is there convincing evidence for exponential expansion? Or is it only flatness which is seen as evidence for the inflation?

Of course, exponential expansion would smooth out the inhomogeneities, but the Lorentz invariance of the space-time sheet representing the cosmological evolution could alone explain this, since ground states tend to be highly symmetric. There is also a sequence of quantum jumps replacing a quantum superposition of cosmologies in the 4-D sense (making sense in the TGD framework, where the strong form of holography holds true) with a new one in each quantum jump, and this evolution could lead to a situation in which the superposition involves only small deformations of a Lorentz invariant cosmology. A kind of approach to a highly symmetric ground state, but in the 4-D sense.

Friday, November 19, 2010

Michelson-Morley experiment revisited again

Almost a year ago I told about a variant of the Michelson-Morley experiment performed by Martin Grusenick. He found that the interference pattern changed during vertical rotation. The experiment generated enthusiasm in people taking seriously the notion of aether, although the explanation in terms of aether was excluded by very simple considerations. I proposed a possible explanation of the effect (assuming it is real) in terms of new gravitational physics possibly provided by TGD. The model involved one parameter whose size determined whether the effect is there or not.

I am grateful to Frank Pearce for informing me almost a year later (20.11.2010) that he has carried out the Grusenick experiment again. There is a small movement of the interference pattern during vertical rotation but nothing comparable to that detected by Grusenick, so that the effect is very probably an artifact due to instabilities associated with the central mirror. An excellent video, Vertical Michelson Morley Interferometer Experiment 11 12 2010, about Pearce's version of the experiment can be found here.

Thursday, November 18, 2010

What might be the origin of Fermi bubbles?

The Fermi-LAT satellite has produced new interesting data about the central region of the Milky Way, commented on already by Lubos. The following is the abstract of the article Giant Gamma-ray Bubbles from Fermi-LAT: AGN Activity or Bipolar Galactic Wind? by Meng Su, Tracy R. Slatyer and Douglas P. Finkbeiner.

Data from the Fermi-LAT reveal two large gamma-ray bubbles, extending 50 degrees above and below the Galactic center, with a width of about 40 degrees in longitude. The gamma-ray emission associated with these bubbles has a significantly harder spectrum (dN/dE ≈ E^-2) than the IC emission from electrons in the Galactic disk, or the gamma-rays produced by decay of pions from proton-ISM collisions. There is no significant spatial variation in the spectrum or gamma-ray intensity within the bubbles, or between the north and south bubbles. The bubbles are spatially correlated with the hard-spectrum microwave excess known as the WMAP haze; the edges of the bubbles also line up with features in the ROSAT X-ray maps at 1.5-2 keV. We argue that these Galactic gamma-ray bubbles were most likely created by some large episode of energy injection in the Galactic center, such as past accretion events onto the central massive black hole, or a nuclear starburst in the last ≈ 10 Myr. Dark matter annihilation/decay seems unlikely to generate all the features of the bubbles and the associated signals in WMAP and ROSAT; the bubbles must be understood in order to use measurements of the diffuse gamma-ray emission in the inner Galaxy as a probe of dark matter physics. Study of the origin and evolution of the bubbles also has the potential to improve our understanding of recent energetic events in the inner Galaxy and the high-latitude cosmic ray population.

The article contains a long, highly technical description of the findings. I of course do not have real understanding of this side. The results however seem rather clear. There is a bubble and its mirror image (bringing to Lubos's mind the infinity symbol) with center at the galactic nucleus. In these regions there is no significant spatial variation of the gamma ray intensity. The radius of the bubbles is about 5 kiloparsecs. The X-ray maps in the energy range 1.5-2 keV show features consistent with the identification as Fermi bubbles. Also the WMAP microwave excess known as the microwave haze is consistent with the Fermi bubbles.

What could be the mechanism producing the gamma rays?

  1. I do not regard the explanation in terms of a catastrophic event as terribly convincing, since I find it difficult to understand the symmetry with respect to the galactic plane for this option. Rather, the notion of many-sheeted space-time would suggest that quasi-static structures are involved.

  2. The fractality of the TGD Universe encourages one to consider analogies with the physics of stellar objects, maybe even planets. The nearest and best known example is Earth itself with its magnetic field and the associated van Allen radiation belts. Could the Fermi bubbles correspond to a dipole magnetic field with the dipole in the direction of the galactic plane? Could the dipole correspond to the galactic bar? The estimates for its size vary in a surprisingly large range of 1-5 kiloparsecs but are reasonably near to the size scale of the bubbles, which is around 10 kiloparsecs.

  3. One could perhaps imagine analogs of the van Allen radiation belts with very energetic particles originating from the galactic bar and traveling along flux quanta (back and forth?) and radiating gamma rays, whose energies would vary up to 100 GeV at least. In the magnetohydrodynamic approximation the charged particles would move along the field lines of the magnetic field. There is however the energy conserving Lorentz force present (plus possibly the Coulomb force caused by an electric field), and the resulting acceleration causes the emission of radiation.

  4. In the TGD framework the decay of the string like object defined by the bar could generate extremely energetic particles - say electrons - propagating along the flux quanta. The emission of gamma rays would be especially intensive near the turning points, where the direction of motion of the charged particle changes most rapidly. This would occur near the galactic plane, where the direction of the magnetic field changes.

One could bring more TGD to this picture.

  1. In the TGD vision galaxies are like pearls in a necklace represented by long cosmic strings. These cosmic strings are not the cosmic strings of GUTs but string like 3-surfaces predicted by TGD, dominating the very early cosmology and gradually developing an increasing size for their Minkowski space projection, which in the ideal case is 2-D. These cosmic strings would be responsible for the magnetic fields filling the Universe but very poorly understood in the standard cosmology. The jet orthogonal to the plane of the galaxy would be associated with the "big cosmic string" defining the necklace.

  2. Also the galaxies themselves could correspond to decay products of cosmic strings. They could be closed or open: maybe the two cases correspond to elliptic and spiral galaxies. The galactic bar of spiral galaxies could correspond to the decay products of an open cosmic string. A straight cosmic string with length L corresponds to an object with a mass which is roughly 10^-4 times the mass of a black hole with radius L. If the cosmic string is highly entangled, its mass can approach that of a black hole, and the supermassive galactic black hole with a size of about 150 light seconds could correspond to a highly entangled piece of cosmic string.

  3. Open cosmic strings could decay to ordinary matter at their ends. During the quasar phase this process would be especially effective and could be partially responsible for gamma ray beams in the direction of the string, producing gamma ray bursts. Kind of cosmic firecrackers would be in question, and the galactic bar could represent the remnants of this kind of firecracker, still emitting matter travelling along the flux quanta of the galactic magnetic field and producing the gamma radiation.

  4. There should be strong magnetic fields associated with these objects. In TGD Universe they would correspond to space-time sheets defining flux tubes or flux sheets of magnetic field. Charged particles would travel along these flux sheets and dissipate their energy by emitting gamma rays.
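The mass scaling in point 2 is easy to put into numbers. A sketch taking the quoted 150 light-second size and the 10^-4 string-to-black-hole mass ratio at face value (both figures are from the text; the Schwarzschild relation M = Lc²/2G is standard):

```python
# Mass of a black hole whose Schwarzschild radius is L = 150 light-seconds,
# and the mass of a straight cosmic string of the same length, taken to be
# roughly 1e-4 of it as stated in the text.
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s
M_sun = 1.989e30         # solar mass, kg

L = 150 * c                      # 150 light-seconds, metres
M_bh = L * c**2 / (2 * G)        # black hole mass with Schwarzschild radius L
M_string = 1e-4 * M_bh           # straight-string mass per the quoted ratio

print(f"M_bh ~ {M_bh / M_sun:.1e} M_sun, straight string ~ {M_string / M_sun:.1e} M_sun")
```

A 150 light-second Schwarzschild radius corresponds to roughly 10^7 solar masses, the right ballpark for a supermassive galactic black hole.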

An interesting question is whether it makes sense to speak about cyclotron states at the flux tubes of the magnetic field and to assign the X rays or microwaves to cyclotron emission.

  1. The d'Alembertian for relativistic motion in a constant magnetic field reduces to free motion in the longitudinal direction and a harmonic oscillator in the transversal directions. By generalizing the non-relativistic formula one obtains the dispersion relation

    E = (p^2 + 2n×hbar×eB)^(1/2) .

    Here p is the momentum parallel to the flux tube. The magnetic contribution to the energy is completely negligible in the gamma ray range unless the magnetic field is very strong.

  2. The unit of transversal magnetic energy corresponding to Δn=1 in the formula for the energy is hbar×eB/E, obtained by replacing mass with energy in the classical formula. For relativistic energies these energies form a continuum. Magnetic confinement suggests that the value of the galactic magnetic field should be such that the transversal contribution to the energy is comparable to the longitudinal energy. For a magnetic field of 1 Tesla the magnetic length L = sqrt(hbar/eB) is of order nm, which corresponds to an energy of order 10^2 eV. For the ordinary value of Planck constant 100 GeV would require a magnetic field of magnitude of order 10^9 Tesla. 10^8 Tesla is the value of B assigned with typical pulsars, but there is also a class of pulsars with a 1000 times stronger magnetic field. One might expect magnetic fields with a strength of at least 10^8 Tesla in the galactic nucleus.

  3. How does the scaling of Planck constant for dark matter affect the situation? Large values of Planck constant correspond to integer multiples hbar = n×hbar0 of the ordinary Planck constant and to coverings of the imbedding space. This suggests that the total magnetic flux remains constant and is only divided between the n sheets, so that the formula for the energy is unaffected for a given sheet of the covering.
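The numbers entering points 2 and 3 can be checked directly from the dispersion relation (restoring the factors of c). A minimal sketch in SI units:

```python
import math

# Magnetic length l_B = sqrt(hbar/(e*B)) and the n=1 transverse energy
# unit sqrt(2*hbar*e*B*c^2) from E = (p^2 c^2 + 2n*hbar*e*B*c^2)^(1/2),
# evaluated for B = 1 Tesla.
hbar = 1.054571817e-34   # reduced Planck constant, J s
e = 1.602176634e-19      # elementary charge, C
c = 2.99792458e8         # speed of light, m/s
B = 1.0                  # magnetic field, tesla

l_B = math.sqrt(hbar / (e * B))            # magnetic length, m
E1 = math.sqrt(2 * hbar * e * B * c**2)    # n=1 transverse unit, J
print(f"l_B = {l_B:.2e} m, E1 = {E1 / e:.1f} eV")

# Dark-matter scaling per point 3: hbar -> n*hbar with the flux divided
# over n sheets (B -> B/n) leaves hbar*e*B, and hence the spectrum, intact.
n = 1e6
E1_dark = math.sqrt(2 * (n * hbar) * e * (B / n) * c**2)
print(abs(E1_dark - E1) / E1 < 1e-12)   # True: spectrum unaffected per sheet
```

With B = 1 T this gives l_B of a few times 10^-8 m and E1 of order 10 eV, so the figures quoted above should be read as rough order-of-magnitude estimates.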

Monday, November 15, 2010

Octonions at the Institute for Advanced Study

Atiyah was to give last week a talk entitled Quantum Gravity and the Riemann Hypothesis at the Institute for Advanced Study. This talk was canceled, and Kea tells that the title of the talk was Octonions and the four forces of Physics. In the conference program the title of his talk held on Friday, November fifth, was Exploring the Geometry Behind the Quantum Universe. So what did Atiyah really talk about? Kea also informs that Atiyah concluded his talk with

You can regard what I say as nonsense, or you can claim that you know it already, but you cannot make these two claims together.

This does not give any clue.

The reason why I am worrying about what Atiyah really did on Friday, November fifth, is very TGD-centered. As you should already know, TGD is more or less what I am - or more precisely, what has been left of me under the squeeze of cruel academic forces. If Atiyah indeed talked about octonions, I have the courage (in my pitifully crackpottish manner of course) to wonder whether my humble and shy octonionic message from the bottomest bottom of the hierarchy could have reached the Olympian heights so that even Atiyah gets interested;-). In my heart I of course know that I must understand that I am just a poor classless pariah as compared to these Brahmins of Science and it is incredibly blasphemous to even imagine that they might be interested in something that I have said!

The basic problem with octonions and quaternions has been how to bring them to physics in a manner consistent with what we already know.

  1. One should build a connection with standard model quantum numbers. Here the solution comes from the observation that the SU(3) subgroup of octonion automorphisms can take the role of the color group. This observation leads in TGD to what I call M8-M4× CP2 duality, giving M4× CP2 and therefore standard model symmetries a unique number theoretic status.

  2. The signature of the imbedding space metric and of the space-time metric remains a problem. Here complexified octonions and hyper-octonionic subspaces with Minkowskian signature provide a way out and mean the replacement of the number field with an algebra. The same applies to hyper-quaternions.

  3. Non-associativity is the third tough problem: the ideas about octonionic and quaternionic quantum theory do not work. Here however a simple solution suggests itself: associativity as the number theoretical realization of the fundamental variational principle selecting 4-D space-time surfaces as quaternionic, and thus associative, sub-manifolds of an 8-D space-time with octonionic structure. Everything would reduce to number theory: space-time dimension, standard model symmetries, dynamics, and much more.

  4. A further tough problem is what octonionic and quaternionic structure really means. The attempts to make sense of the notions of quaternion/octonion analyticity (restricted to real analyticity) lead to a conflict with what we know about wave equations. The notion of (hyper-)quaternionic and (hyper-)octonionic representations of gamma matrices turned out to be the optimal answer to the question of what one means with quaternionic and octonionic structures. The first guess is that the induced gamma matrices span a quaternionic subspace at each point of the space-time surface.

    This works if volume defines the action behind space-time dynamics, but not for Kähler action. One must replace the induced gammas with modified gammas. Modified gamma matrices would span a hyper-quaternionic sub-space of octonionic gammas at each point of the space-time surface. Do these surfaces define preferred extremals of Kähler action (and does the same hold for any general coordinate invariant action principle)? This is the question.
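The non-associativity in point 3 can be exhibited concretely. Below is a minimal sketch using the Cayley-Dickson doubling construction (reals to complexes to quaternions to octonions); the basis ordering is one common convention, so individual signs depend on it, but the failure of associativity does not:

```python
# Octonions via Cayley-Dickson doubling: elements are nested pairs
# bottoming out in floats; conjugation flips the second half at each level.
def cd_add(x, y):
    if isinstance(x, tuple):
        return (cd_add(x[0], y[0]), cd_add(x[1], y[1]))
    return x + y

def cd_neg(x):
    if isinstance(x, tuple):
        return (cd_neg(x[0]), cd_neg(x[1]))
    return -x

def cd_conj(x):
    if isinstance(x, tuple):
        return (cd_conj(x[0]), cd_neg(x[1]))
    return x

def cd_mul(x, y):
    if not isinstance(x, tuple):
        return x * y
    a, b = x
    c, d = y
    # Cayley-Dickson product: (a, b)(c, d) = (ac - d*b, da + bc*)
    return (cd_add(cd_mul(a, c), cd_neg(cd_mul(cd_conj(d), b))),
            cd_add(cd_mul(d, a), cd_mul(b, cd_conj(c))))

def octonion(coeffs):
    # pack 8 real coefficients into nested pairs
    def pack(v):
        if len(v) == 1:
            return v[0]
        h = len(v) // 2
        return (pack(v[:h]), pack(v[h:]))
    return pack(list(coeffs))

def flatten(x):
    if isinstance(x, tuple):
        return flatten(x[0]) + flatten(x[1])
    return [x]

# basis units e0..e7
e = [octonion([1.0 if i == j else 0.0 for i in range(8)]) for j in range(8)]

lhs = cd_mul(cd_mul(e[1], e[2]), e[4])   # (e1 e2) e4
rhs = cd_mul(e[1], cd_mul(e[2], e[4]))   # e1 (e2 e4)
print(flatten(lhs))
print(flatten(rhs))   # differs from lhs in sign: octonions are non-associative
```

Both products are proportional to the same basis unit but with opposite signs, so the associator does not vanish; restricting to a quaternionic subalgebra (any single Fano line) restores associativity, which is the selection principle described above.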

Addition. From Gil Kalai's blog I learned that Atiyah indeed talked about classical number fields and physics and proposed that the four number fields could correspond to the fundamental interactions, with gravitation assigned to octonions. This idea looks to me more like numerology. Notice however that interactions other than gravitation could be described in a field theory framework using 4-D Minkowski space, which can be interpreted in terms of hyper-quaternionic flat space. When gravitation comes into play, one must have more general hyper-quaternionic sub-manifolds of hyper-octonionic space, with hyper-quaternionicity defined in terms of modified gamma matrices and their octonionic representation.

Tuesday, November 09, 2010

Entropic gravity again

There has been a lot of talk about Verlinde's entropic gravity. The fashion and fancy hysteria familiar from the breakthrough of superstring models has begun. The enthusiasm is certainly partly because there are only a few formulas and it is easy to tinker with them. I am also afraid that often the real motivation is the hope to deduce a formula giving a place in the history of theoretical physics with a minimum amount of hard work. Verlinde has received two million euros of funding for the entropic gravity program. I hope that something emerges from entropic gravity, but the horror scenario is that the story of superstrings repeats itself.

Some comments about the lectures of Verlinde

I listened to the lectures of Verlinde and I must say that they failed to make me enthusiastic about the idea. I try to articulate the reasons for the lack of enthusiasm.

  1. I see the identification of gravitation as a force of any kind as something horribly ugly. Everyone in the field of course knows that the realization that the gravitational force is not actually a force at all was the fantastic discovery of Einstein which led to general relativity, whose super-symmetric version promises to be the second candidate for a UV finite quantum field theory ever discovered. For me it is horribly light-hearted to give up General Coordinate Invariance and Equivalence Principle and replace them with some thermodynamical analogies and a hopelessly fuzzy notion of emergent space.

  2. In any case, the priority number one would be the formulation of entropic gravity in a general coordinate invariant manner, or finding out whether this is possible at all. Can the thermodynamics of holographic screens indeed lead to a genuine emergence of space-time with a metric? Or is this notion of emergence actually a similar self-deception as the emergence of continuous space-time from something discrete?

    How should one formulate a thermodynamics treating space-time coordinates as macroscopic thermodynamical parameters such that general coordinate invariance for these parameters emerges? What dictates which thermodynamical parameters playing the role of space-time coordinates are allowed? What distinguishes between the space-time coordinates and other thermodynamical parameters? Why don't we experience other generalized thermodynamical coordinates as analogous to space-time coordinates? What distinguishes between the coordinates of the screen and the coordinates of the space-time interior? Why should the dimension of the screen be just two?

    How does the space-time metric, defining the distance between space-time points appearing as thermodynamical parameters, emerge from thermodynamics, in case this notion has any meaning at all? How can one formulate the theory at a practical level without relying again and again on the basic notions of special and general relativity, making the arguments hopelessly circular?

  3. At least on the basis of the lectures I got the impression that no dramatic progress in answering these questions has been made yet. Of course it could be that the lectures are "popular" and for this reason so fuzzy. In some arguments one takes black holes as the starting point. One also brings in stuff from M-theory suggesting that the gravitational force between branes emerges from the interactions mediated by strings. At the same time the basic idea is however that no quantization of gravitation is needed since gravitation is indeed an entropic force. The formula for the entropic force in the thermodynamics of polymers is discussed, there is the good old Newton's formula for the gravitational force, and there is also the Schwarzschild metric and the black hole horizon. How on Earth can all these more or less contradictory ideas be mutually consistent? Please, do not try to tell me that some duality brings in the harmony.

New theories have always emerged from genuinely new ideas and new concepts. Formulas are the final outcome. In my humble opinion, tinkering with the formulas of existing theories is trying to bring life to dead bones, and it is dangerous because one forgets that the formulas make sense only in some context. The basic problem of physics after the first superstring revolution has been the increasing loss of conceptual economy. The loose use of dualities has led to a final loss of intellectual control, and the field seems to have collapsed into a copious production of loose arguments. A thorough turnaround is unavoidable sooner or later, and I am afraid that both M-theory and entropic gravity end up in the recycle bin in this process.

Is 4-D holography enough?

The approach also involves holography in a strong form, and this is something beautiful. I see no need to complicate things by introducing fuzzy ideas about gravitation as an entropic force. I developed long ago a beautiful theory in which space-time dimension four is completely unique.

  1. The strong form of General Coordinate Invariance (GCI) plus sub-manifold gravity are all that is needed. GCI alone implies a strong form of holography, meaning that either light-like 3-surfaces or space-like 3-surfaces at the boundaries of causal diamonds (CDs, defined as intersections of future and past directed light-cones) can be taken as the basic dynamical objects. This implies effective 2-dimensionality: 2-D partonic surfaces and their 4-D tangent space data at the boundaries of CDs dictate quantum physics.

  2. The space-time interior, defining the analog of a Bohr orbit, realizes quantum classical correspondence. This connection of GCI with quantum theory was something totally unexpected, to say nothing of the geometrization of fermion statistics in terms of the gamma matrices of the world of classical worlds (the space of 3-surfaces).

  3. In this framework it is also easy to see what goes wrong with entropic gravity. In TGD Universe all interactions - also gravitation - can be described in terms of generalized Feynman graphs having light-like 3-surfaces as lines. The classical fields - including the induced metric - at space-time surfaces provide the classical correlates of these interactions required by quantum classical correspondence. The mere realization of the necessity of quantum classical correspondence might have saved us from the idea that gravitation is nothing but a macroscopic entropic force.

One might think that these discoveries alone could have some effect on colleagues, but it seems that they are completely deaf to anything which does not come from the mouths of big names. This opportunistic attitude is the second basic disease of theoretical physics today: it does not matter what you say, what matters is who you are.

Constraint force instead of entropic force?

Entropic force does not solve the problems of general relativity based cosmology, and it is only a matter of time before the claim that there is no microscopic gravitation is shown to be wrong. There is also an article arguing that entropic gravity is in conflict with the behavior of ultracold neutrons in the gravitational field of Earth (see this), but these kinds of voices are probably not heard by young career builders.

TGD however predicts a force which resolves the big problems of general relativity both at the classical and quantum level. This force is the constraint force due to the condition that space-time surfaces are sub-manifolds of M4× CP2. It is a somewhat abstract force, since it acts in the world of classical worlds. This force should replace entropic force as the hot topic of theoretical physics. As a matter of fact, it should have become the hot topic already decades ago. Sub-manifold gravitation leads also to the geometrization of elementary particle quantum numbers and the geometrization of classical gauge fields. Both the condition that standard model quantum numbers are obtained and the number theoretic vision fix the imbedding space uniquely to M4× CP2.

The huge number of unphysical degrees of freedom is the reason for the problems of both general relativity and M-theory, and sub-manifold gravitation implies a huge reduction of degrees of freedom as compared to Einstein's theory. Let me present some examples.

  1. The sub-manifold constraint resolves the basic difficulty of GRT based cosmologies posed by the estimate for the natural value of the cosmological constant, which is by a factor of order 10^120 too large: the solutions with infinite duration are sub-critical simply by the embeddability condition. Critical and sub-critical solutions are determined apart from a parameter coding for the finite duration of this kind of cosmology.

  2. The mere quantum criticality requiring flatness of 3-space in TGD inspired cosmology replaces inflation, whose failure was also basically due to the exponential increase of non-physical degrees of freedom. Quantum criticality also implies accelerating expansion during critical periods: the negative pressure term is essentially due to the sub-manifold constraint. As already noticed, criticality fixes the Robertson-Walker cosmology apart from a parameter characterizing its duration.

  3. The landscape catastrophe of M-theory is also due to the inflationary growth of unobserved and very probably non-existing degrees of freedom.
    1. What one started with 26 years ago was string theory in a 10-D background, giving a nice description of what was believed to represent gravitonic scattering amplitudes in terms of Feynman diagrams.
    2. The problem was that our space-time is 4-dimensional. Instead of asking whether one could replace strings with 3-surfaces - something very natural, and done by me 6 years before the first superstring revolution - the idea of spontaneous compactification was introduced. Besides stringy gravitation one now had also the classical 10-D gravitation instead of having just the 4-D gravitation of Einstein's theory. Impressive! The price to be paid for these additional degrees of freedom was that one had to understand why space-time looks 4-dimensional. Still no one knows the answer, and the dreams about a TOE were buried a long time ago!
    3. But even this did not work! One had to introduce also branes and the result was super-exponential increase of unobserved degrees of freedom and landscape catastrophe.
    4. Did you think that this was enough? No! It seems that F-theory with a 12-D target space might give some hope of reproducing the standard model quantum number spectrum.
    Good grief! Is it possible that no one in the hegemony realized what was happening?!

  4. I hope that the reader could get some impression of the deep frustration that I have felt during these years as I have been witnessing this odyssey from something might-be-reasonable to completely obvious nonsense. But even this is not enough! The Gods really hate me! It is quite possible that I must witness also the success of these so-called phenomenological approaches to gravitation. Maybe the vision about gravitation as an entropic force is some day the only game in town! In any case, no one need come and tell me that I did not warn them!

Friday, November 05, 2010

Why are positrons so shy?

There is really dramatic news in New Scientist. Positronium atoms, consisting of a positron and an electron, scatter particles almost as if they were lonely electrons! This is called the cloaking effect for the positron (the article is here). If this is not a bad joke, it is something totally devastating from the point of view of QED and all that we have believed until this day;-).

I have said the words "many-sheeted space-time" and "dark matter hierarchy" so many times that it should be easy to guess that the following arguments will be attempts to understand the cloaking of positron in terms of these notions.

  1. Let us start with the erratic argument, since it comes first to mind. If positron and electron correspond to different space-time sheets and the scattered particles are at the space-time sheet of the electron, then they do not see the positron's Coulombic field at all. The objection is obvious. If the positron interacts with the electron with its full electromagnetic charge to form a bound state, the corresponding electric flux at the electron's space-time sheet is expected to combine with the electric flux of the electron, so that positronium would look like a neutral particle after all. Does the electric flux of the positron return back to the space-time sheet of positronium at some distance larger than the radius of the atom? Why should it do this? No obvious answer.

  2. Assume that the positron is dark but still interacts classically with the electron via the Coulomb potential. In TGD Universe darkness means that the positron has a large hbar and a Compton size much larger than usual: the positronic wormhole throat (actually a wormhole contact, but this is a minor complication) would have a more or less constant wave function in the volume of this larger space-time sheet, characterized by the zoomed up Compton length of the electron. The scattering particle would see a pointlike electron plus a background charge diffused in a much larger volume. If hbar is large enough, the effect of this constant charge density on the scattering is small and only the electron would be seen.

  3. As a matter of fact, I have proposed this kind of mechanism to explain how the Coulomb wall, which is the basic argument against cold fusion, could be overcome by the incoming deuteron nucleus (see this). Some fraction of the deuteron nuclei in the palladium target would be dark and have a large size, just as the positron in the above example. It is also possible that only the protons of these nuclei are dark. I have also proposed that dark protons explain the effective chemical formula H1.5O of water in the scattering by neutrons and electrons in the attosecond time scale (see this). The connection with cloaked positrons is highly suggestive.

  4. Also one of the TGD inspired proposals for the absence of antimatter is that antiparticles reside at different space-time sheets as dark matter and are apparently absent (see this). Also the modified Dirac equation with a measurement interaction term suggests that fermions and antifermions reside at different space-time sheets, and in particular that bosons correspond to wormhole contacts (see this). Cloaked positrons (shy, as was also their discoverer Dirac!) might provide experimental support for these ideas.
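The suppression invoked in point 2 is essentially a form-factor effect: in the Born approximation a charge smeared uniformly over a ball of radius R is multiplied by a form factor that vanishes for momentum transfers q much larger than 1/R. A standard quantum-mechanics sketch, not a TGD-specific calculation:

```python
import math

# Born-approximation form factor of a uniform charge ball of radius R:
# F(q) = 3 (sin qR - qR cos qR) / (qR)^3.
# For qR << 1 the probe sees the full charge (F -> 1); for qR >> 1 the
# smeared charge is nearly invisible, while a point charge (F = 1)
# scatters at full strength -- the "cloaking" of the diffuse dark positron.
def form_factor(qR):
    return 3.0 * (math.sin(qR) - qR * math.cos(qR)) / qR**3

print(form_factor(0.001))   # close to 1: long wavelength sees the full charge
print(form_factor(100.0))   # |F| well below 1e-3: charge effectively invisible
```

So the larger the zoomed up Compton length relative to the probe's wavelength, the more completely the positron's charge drops out of the scattering amplitude.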

The recent view about the detailed structure of elementary particles forces one to consider the above proposal in more detail.

  1. According to this view all particles are weak string like objects having wormhole contacts at their ends, with magnetically charged wormhole throats (four altogether) at the ends of the string like objects, whose length is given by the weak length scale, connected by a magnetic flux tube at both space-time sheets. Topological condensation means that these structures in turn are glued to larger space-time sheets, and this generates one or more wormhole contacts, for which a particle interpretation is also highly suggestive and which could serve as space-time correlates for interactions described in terms of particle exchanges. As far as electrodynamics is concerned, the second ends of the weak strings containing neutrino pairs are effectively non-existent. In the case of fermions also only the second wormhole throat, carrying the fermion number, is effectively present, so that for practical purposes the weak string is responsible only for the massivation of the fermions. In the case of photons both wormhole throats carry fermion number.

  2. An interesting question is whether the formation of bound states of two charged particles at the same space-time sheet could involve magnetic flux tubes connecting the magnetically charged wormhole throats associated with the two particles. If so, Kähler magnetic monopoles would be part of even atomic and molecular physics. I have proposed already earlier that gravitational interaction in astrophysical scales involves magnetic flux tubes. These flux tubes would have an interpretation as analogs of, say, photons responsible for the bound state energy. In principle it is indeed possible that the energies of the two wormhole throats are of opposite sign for a topological sum contact, so that the net energy of the wormhole contact pair responsible for the interaction could be negative.

  3. Also the interaction of positron and electron would be based on topological condensation at the same space-time sheet and the formation of wormhole contacts mediating the interaction. Also now the bound states could be glued together by magnetically charged wormhole contacts. In the case of a dark positron the details of the interaction are rather intricate, since the dark positron would correspond to a multi-sheeted structure analogous to a Riemann surface, with different sheets identified in terms of the roots of the equation relating the generalized velocities defined by the time derivatives of the imbedding space coordinates to the corresponding canonical momentum densities.

Friday, October 29, 2010

Magnetic monopoles at sixties

Old age is usually associated with wisdom and similar virtues. In my case this association unfortunately fails, and therefore the first morning at sixties gives me the authority for free associations about everything under the heaven, and magnetic monopoles are a good place to start from. The evidence for condensed matter monopoles is accumulating (see this and this) and the question is whether they really represent some new physics. Perhaps this is the case.

Dirac monopoles are mathematically singular and cannot therefore be tolerated in the elite circles of theoretical physics appreciating the good manners coded by gauge invariance. Since I frantically want to belong to the elite, I am happy that TGD provides me with homological monopoles, which can exist gracefully because of the homological non-triviality of CP2. Homological non-triviality means that CP2 has non-contractible 2-surfaces such as spheres: this does not mean that it would have a hole, as a lazy popularizer usually says. Rather, this kind of sphere is a 2-dimensional analog of a circle winding around a torus, not allowing contraction to a point without cutting. The imbedding of CP2 to some higher dimensional space would contain a hole in some sense.

The weak form of electric-magnetic duality - a purely TGD based notion - implies that all elementary particles correspond to pairs of wormhole contacts. Each contact has two throats carrying magnetic monopole charge, and each throat is connected to the corresponding throat of the second contact. This makes altogether four wormhole throats, so that even the graviton can be constructed in this manner. The length of the magnetic flux tube defining the string like object corresponds to the weak length scale, about 10^-17 m. All particles would be this kind of string like objects, "weak" strings.

Emergence gives excellent hopes about the realization of exact Yangian invariance and twistor Grassmannian program without infrared and UV divergences (see this). Emergence states that at the fundamental level there are only massless(!) wormhole throats carrying many-fermion states identifiable in terms of representations for the analog of space-time super-symmetry algebra with generators identified as fermionic oscillator operators. Masslessness applies also to the building blocks of virtual particles meaning a totally new interpretation of loop corrections and manifest UV finiteness. Also the vibrational degrees of freedom of partonic 2-surfaces are present as bosonic degrees of freedom and correspond to orbital degrees of freedom for the spinor fields of world of classical worlds (WCW) whereas fermionic degrees of freedom define WCW spin degrees of freedom. The dark variants of the elementary particles having large value of hbar have zoomed up size and in living matter these scaled up elementary particles would be the key players in the drama of life.

Quite recently I realized that the dark variants of elementary particles identified in this manner are more or less the same thing as the wormhole magnetic fields that I introduced more than a decade ago (see this), suggesting that their Bose-Einstein condensates and coherent states could be crucial for understanding living matter. At that time I did not of course realize the connection with ordinary elementary particle physics and proposed these exotics as a new kind of particle like objects. These flux tubes have become the basic structures in TGD inspired quantum biology. For instance, the model for DNA as topological quantum computer assumes that the nucleotides of DNA and the lipids of the cell membrane are connected by this kind of flux tubes with quarks at their ends, and the braiding of the flux tubes codes for topological quantum computations.

If this picture is correct, quantum biology might be to a high degree a collection of zoomed up variants of elementary particle physics at very high density! Also the super-partners and scaled up hadrons would be present. If this is true, we would be able to study elementary particle interiors by zooming them up to the scales of living matter! There would be no need for the followers of LHC! Living matter could be an enormous particle physics laboratory providing physicists with incredibly refined research facilities;-). But are these facilities meant for us, after we finally have realized that we ourselves are the most refined laboratory? Or are the physicists already there? If so, who might these physicists from higher levels of the self hierarchy be;-)?

By the way, this crazy speculation might have been inspired also by the earlier finding that the model of dark nucleons allows one to map the spectrum of nucleon states to RNA, DNA, tRNA triplets and aminoacids, and also reproduces the vertebrate genetic code in a very natural manner (see this and this).