Thursday, September 18, 2014

Is cosmic expansion a mere coordinate effect?

There is a very interesting article about cosmic expansion - or rather, a claim about the absence of cosmic expansion.

The argument, based on the experimental findings of a team of astrophysicists led by Eric Lerner, goes as follows. In non-expanding cosmology, and also in the space around us (Earth, Solar System, Milky Way), similar objects look fainter and smaller as they recede: their surface brightness remains constant. In Big Bang theory objects should actually appear fainter but bigger, so that the surface brightness - total luminosity per unit area - should decrease with distance. Besides this, the cosmic redshift would dim the light.

Therefore in an expanding Universe the most distant galaxies should have a surface brightness hundreds of times dimmer, since the apparent surface area is larger while the total intensity of emitted light stays more or less the same - unless, of course, the total luminosity increases to compensate, which would require a totally ad hoc connection between the dynamics of stars and the cosmic expansion rate.
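For reference, the quantitative version of this argument - the standard Tolman surface brightness test, textbook material added here for orientation rather than taken from the article - reads

$$ SB = \frac{F}{\Delta\Omega} = \frac{L/(4\pi d_L^2)}{A/d_A^2} = \frac{L}{4\pi A}\,\frac{1}{(1+z)^4}, \qquad d_L = (1+z)^2\, d_A , $$

where two powers of (1+z) come from the photon redshift and the time dilation of the photon arrival rate, and two more from the growth of the apparent solid angle. In a static Euclidean universe SB is independent of distance.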

This is not what observations tell us. One could therefore conclude that the Universe does not expand and that Big Bang theory is wrong.

The conclusion is of course wrong: Big Bang theory certainly explains a lot of things. Let me try to summarize what goes wrong in the argument.

  1. It is essential to make clear what time coordinate one is using. When analyzing motions in the Solar System and the Milky Way, one uses the flat Minkowski coordinates of Special Relativity. In this framework one observes no expansion.

  2. In cosmology one uses Robertson-Walker coordinates (a, r, θ, φ); a and r are the relevant ones. In TGD inspired cosmology the R-W coordinates relate to the spherical variant (t, r_M, θ, φ) of Minkowski coordinates by the formulas

    a^2 = t^2 - r_M^2 , r_M = a × r .

    The line element of metric is

    ds^2 = g_aa da^2 - a^2 [dr^2/(1+r^2) + r^2 dΩ^2]

    and at the limit of empty cosmology one has g_aa = 1.

    In these coordinates the light-cone of empty Minkowski space looks like an expanding - albeit empty - cosmology! a is just the light-cone proper time. The reason is that the cosmic time coordinate labels the a = constant hyperboloids (hyperbolic spaces) rather than the M^4 time = constant snapshots (a short verification follows the list). This totally trivial observation is extremely important for the interpretation of cosmic expansion. Often, however, trivial observations are the most difficult ones to make.
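The promised verification - a standard computation, spelled out here for the reader's convenience: substituting $t = a\sqrt{1+r^2}$, $r_M = a r$ (so that $a^2 = t^2 - r_M^2$) into the flat metric $ds^2 = dt^2 - dr_M^2 - r_M^2 d\Omega^2$ gives

$$ dt^2 - dr_M^2 = \left[(1+r^2)da^2 + 2ar\,da\,dr + \tfrac{a^2 r^2}{1+r^2}dr^2\right] - \left[r^2 da^2 + 2ar\,da\,dr + a^2 dr^2\right] = da^2 - \frac{a^2}{1+r^2}\,dr^2 , $$

so that $ds^2 = da^2 - a^2[dr^2/(1+r^2) + r^2 d\Omega^2]$: exactly the empty R-W metric above with $g_{aa} = 1$.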

Cosmic expansion would thus to a high extent be a coordinate effect - but why should one then use R-W coordinates in cosmic scales? Why not Minkowski coordinates?
  1. In Zero Energy Ontology (ZEO) - something very specific to TGD - the use of these coordinates is natural, since zero energy states are pairs of positive and negative energy states localized at the boundaries of causal diamonds (CDs), which are intersections of future and past directed light-cones having pieces of light-cone boundary as their boundaries. The geometry of the CD strongly suggests the use of R-W coordinates associated with either boundary of the CD. The question "Which boundary?" would lead to a digression into the TGD inspired theory of consciousness; I have talked about this in earlier postings.

  2. Thus the correct conclusion is that local objects such as stars and galaxies - and even larger objects - do not participate in the expansion when one looks at the situation in local Minkowski coordinates, which by the way are uniquely defined in the TGD framework since space-time sheets are surfaces in M^4×CP_2. In General Relativity the identification of the local Minkowski coordinates could be a highly non-trivial challenge.

    In the TGD framework local systems correspond to their own space-time sheets, and Minkowski coordinates are natural for the description of the local physics, since a space-time sheet is by definition a space-time region allowing a representation as the graph of a map from M^4 to CP_2. The effects on local physics caused by the CD, inside which the space-time surfaces in question reside, are negligible. Cosmic expansion is therefore not a mere coordinate effect but directly reflects the underlying ZEO.

  3. In General Relativity one cannot assume imbeddability of a generic solution of Einstein's equations to M^4 × CP_2, and this argument does not work. The absence of local expansion has been known for a long time, and Swiss Cheese cosmology has been proposed as a solution. Non-expanding local objects of constant size would be the holes of the Swiss Cheese, and the cheese around them would expand. The holes of the cheese would correspond to space-time sheets in the TGD framework. All space-time sheets can in principle be non-expanding, having suffered topological condensation on larger space-time sheets.

One should also make clear that GRT space-time is only an approximate concept in the TGD framework.
  1. Einstein-Yang-Mills space-time is obtained from the many-sheeted space-time of TGD by lumping together the sheets and describing them as a region of Minkowski space endowed with an effective metric, which is the sum of the flat Minkowski metric and the deviations of the metrics of the sheets from the Minkowski metric. The same procedure is applied to gauge potentials.

  2. The motivation is that a test particle topologically condenses at all space-time sheets present in a given region of M^4, and the effects of the classical fields at these sheets superpose. Thus superposition of fields is replaced with superposition of their effects, and linear superposition with the set-theoretic union of space-time sheets. TGD inspired cosmology assumes that the effective metric obtained in this manner allows imbedding as a vacuum extremal of Kähler action. The justification of this assumption is that it solves several key problems of GRT based cosmology.


  3. The number of field patterns in the TGD Universe is extremely small - given by preferred extremals - and the relationship of TGD to GRT and YM theories is like that of atomic physics to condensed matter physics. In the transition to the GRT-Yang-Mills picture one gets rid of enormous topological complexity, but the extreme simplicity at the level of fields is lost. Only four CP_2 coordinates appear in the role of fields in the TGD framework; at the GRT-Yang-Mills limit they are replaced with a large number of classical fields.

Is evolution 3-adic?

I received an interesting email from Jose Diez Faixat giving a link to his blog. The title of the blog - "Bye-bye Darwin" - tells something about his proposal, and the sub-title "The Hidden rhythm of evolution" tells more. The Darwinian view is that evolution is random and evolutionary pressures select the randomly produced mutations. Rhythm does not fit with this picture.

The observation challenging the Darwinian dogma is that the moments of evolutionary breakthroughs - according to Faixat's observation - seem to come as powers of 3 of some fundamental time scale. There would be precise 3-fractality and an accompanying cyclicity - something totally different from Darwinian expectations.

By looking at the diagrams demonstrating the appearance of powers of 3 as time scales, it became clear that an interpretation in terms of underlying 3-adicity could make sense. I have speculated with the possibility of small-p p-adicity. In particular, the p-adic length scale hypothesis - stating that primes near powers of 2 are especially important physically - could reflect underlying 2-adicity. One can indeed have for each p an entire hierarchy of p-adic length scales coming as powers of p^(1/2); p=2 would give the p-adic length scale hypothesis. The observations of Faixat suggest that also powers of p=3 are important - at least in evolutionary time scales.

Note: The p-adic primes characterizing elementary particles are gigantic. For instance, the Mersenne prime M_127 = 2^127 - 1 characterizes the electron. This scale could relate to the 2-adic scale L_2(127) = 2^(127/2) × L_2(1). The hierarchy of Planck constants coming as h_eff = n×h also predicts that the p-adic length scale hierarchy has scaled-up versions obtained by scaling it by n.

The interpretation would be in terms of p-adic topology as an effective topology in some discretization defined by the scale of resolution. On short scales there would be chaos in the sense of the real topology: this would correspond to Darwinian randomness. On long scales p-adic continuity would imply fractal periodicities in powers of p and possibly of its square root. The reason is that in p-adic topology the system's states at t and t + k p^n, k = 0, 1, ..., p-1, would not differ much for large values of n.

The interpretation relies on p-adic fractality. p-Adic fractals are obtained by assigning to a real function its p-adic counterpart: one maps the real point by the canonical identification

∑_n x_n p^n → ∑_n x_n p^(-n)

to a p-adic number, assigns to it the value of the p-adic variant of the real function with the same analytic form, and maps the value of this function back to a real number by the inverse of the canonical identification. The powers of p then correspond to a fractal hierarchy of discontinuities.
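A minimal numerical sketch of this construction may help; the digit conventions and the resolution cutoff n are choices of mine, not part of the original definition.

```python
def digits_base_p(x, p, n):
    """First n base-p digits of a real number x in [0, 1)."""
    ds = []
    for _ in range(n):
        x *= p
        d = int(x)
        ds.append(d)
        x -= d
    return ds

def padic_variant(f, x, p, n=15):
    """p-Adic counterpart of a real function f at x, in the sense above:
    map x to a p-adic number by (the inverse of) the canonical
    identification, apply f with the same analytic form, and map the value
    back by the canonical identification sum y_k p^k -> sum y_k p^(-k).
    With a finite digit cutoff the p-adic number is just an integer."""
    X = sum(d * p**k for k, d in enumerate(digits_base_p(x, p, n)))
    y = f(X)                      # p-adic-side evaluation (integer arithmetic)
    out, k = 0.0, 1
    while y > 0 and k <= 2 * n:
        out += (y % p) * p**(-k)  # canonical identification back to the reals
        y //= p
        k += 1
    return out

# Plotting padic_variant(lambda u: u * u, x, 2) over x in [0, 1) exhibits
# the fractal hierarchy of discontinuities at powers of p mentioned above.
```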

A possible concrete interpretation is that the moments of evolutionary breakthrough correspond to criticality, and the critical state is universal and very similar for moments which are p-adically near each other.

An amusing coincidence was that I have been working with a model of the 12-note scale based on the observation that the icosahedron is a Platonic solid containing 12 vertices. The scale is represented as a closed non-self-intersecting curve - a Hamiltonian cycle - connecting all 12 vertices: octave equivalence is the motivation for closedness. The cycle consists of edges connecting two neighboring vertices identified as quints - scalings of the fundamental by 3/2 in the Platonic scale. What is amusing is that the scale is obtained essentially from powers of 3, scaled down to the basic octave by suitable powers of 2 (octave equivalence). The faces of the icosahedron are triangles and define 3-chords. A triangle can contain 0, 1, or 2 edges of the cycle, meaning that the 3-chords defined by the faces - and defining the notion of harmony - contain 0, 1, or 2 quints. One obtains a large number of different harmonies, partially characterized by the numbers of 0-, 1-, and 2-quint icosahedral triangles.

The connection with 3-adicity comes from the fact that the Pythagorean quint cycle is nothing but scaling by powers of 3 followed by suitable downward scalings by 2 bringing the frequency back to the basic octave - so 3-adicity might be realized also at the level of music!
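The quint cycle is easy to make explicit; here is a small sketch (the function name and the use of exact rational arithmetic are my choices):

```python
from fractions import Fraction

def quint_cycle(n_quints=12):
    """Successive quints (scalings by 3/2), each folded back into the
    basic octave [1, 2) by octave equivalence (division by 2). The
    numerators are pure powers of 3 - this is where 3-adicity enters."""
    q, ratios = Fraction(1), []
    for _ in range(n_quints):
        q *= Fraction(3, 2)
        while q >= 2:
            q /= 2
        ratios.append(q)
    return sorted(ratios)

print(quint_cycle())  # the 12 Pythagorean frequency ratios

# The cycle does not quite close: 12 quints overshoot 7 octaves by the
# Pythagorean comma 3^12/2^19 = 1.0136..., the source of the 13th note
# (Ab vs G#) mentioned below.
print(float(Fraction(3, 2)**12 / 2**7))
```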

There is also another strange coincidence. The icosahedron has 20 faces, which is the number of amino-acids. This suggests a connection between fundamental biology and the 12-note scale, and leads to a concrete geometric model for amino-acids as 3-chords and for proteins as music consisting of sequences of 3-chords. Amino-acids can be classified into 3 classes using polarity and the basic/acidic/neutral character of the side chain as basic criteria. DNA codons would define the notes of this music, with 3-letter codons coding for 3-chords. One also ends up with a model of the genetic code relying on the symmetries of the icosahedron, starting from some intriguing observations about the symmetries of the code table.

At the level of details the icosahedral model is able to predict the genetic code correctly for 60 codons only, and one must extend it by fusing it with a tetrahedral code. The fusion of the two codes corresponds geometrically to the fusion of the icosahedron with a tetrahedron along a common face, identified as the "empty" amino-acid and coded by 2 stop codons in the icosahedral code and 1 stop codon in the tetrahedral code. The tetrahedral code brings in 2 additional amino-acids, identified as the so-called 21st and 22nd amino-acids discovered a few years ago and coded by stop codons. These stop codons certainly differ somehow from the ordinary ones - it is thought that context somehow defines the difference. In the TGD framework the magnetic body of DNA could define the context.

The addition of the tetrahedron brings one additional vertex, which correlates with the fact that the rational scale does not quite close: 12 quints give a little bit more than 7 octaves, and this forces the introduction of a 13th note - for instance, Ab and G# could differ slightly. Also micro-tubular geometry involves the number 13 in an essential manner.

DNA's humble beginnings as a nutrient carrier; humble - really?

I received from Ulla an interesting link about indications that DNA may have had rather humble beginnings: it would have served as a nutrient carrier. Each nucleotide in the phosphate-deoxyribose backbone corresponds to a phosphate, and nutrient here refers to the phosphate, assumed to carry metabolic energy in a high energy phosphate bond.

In AXP, X = M, D, T, the number of phosphates is 1, 2, 3 respectively. When ATP transforms to ADP it gives away one phosphate to the acceptor molecule, which thus receives metabolic energy. For DNA there is one phosphate per nucleotide, and besides A also T, G, and C are possible.


The attribute "humble" reflects of course the recent view about the role of nutrients and metabolic energy: it is just ordered energy that they are carrying. The TGD view about life suggests that "humble" is quite too humble an attribute.

  1. The basic notion is potentially conscious information. This is realized as negentropic entanglement, for which the entanglement probabilities must be rational numbers (or possibly also algebraic numbers in some algebraic extension of rationals) so that their p-adic norms make sense. The entanglement entropy associated with the density matrix characterizing the entanglement is defined by a modification of the Shannon formula: one replaces the probabilities in the argument of the logarithm with their p-adic norms and finds the prime for which the entropy is smallest (see the sketch at the end of this section). The entanglement entropy defined in this manner can be - and is - negative, unlike the usual Shannon entropy. The interpretation is as information associated with the entanglement. The second law is not violated, since this information is a 2-particle property whereas Shannon entropy is a single-particle property characterizing the average particle.

    The interpretation of negentropic entanglement is as potentially conscious information: the superposition of pairs of states would represent an abstraction or rule whose instances are the pairs of states. The larger the number of pairs, the higher the abstraction level.

  2. Consistency with standard quantum measurement theory gives strong constraints on the form of the negentropic entanglement. The key observation is that if the density matrix is proportional to the unit matrix, standard measurement theory says nothing about the outcome of the measurement, and the entanglement can be preserved; otherwise reduction occurs to one of the states involved. This situation could correspond to negentropic 2-particle entanglement. For several subsystems, each subsystem-complement pair would have a similar density matrix. There is also a connection with dark matter, identified as phases with a non-standard value h_eff = n×h of Planck constant: n defines the dimension of the density matrix. Thus dark matter at magnetic flux quanta would make living matter living.

    In the 2-particle case the entanglement coefficients form a unitary matrix, typically involved with quantum computing systems. The DNA-cell membrane system is indeed assumed to form a topological quantum computer in the TGD framework. The braiding of the magnetic flux tubes connecting nucleotides with the lipids of the cell membrane defines the topological quantum computer program, and its time evolution is induced by the flow of lipids forming a 2-D liquid crystal. This flow can be induced by nearby events and also by nerve pulses.

    Sidestep: Actually pairs of flux tubes are involved, making high temperature superconductivity possible, with the members of Cooper pairs at flux tubes with the same or opposite directions of spin depending on the direction of the magnetic field, and thus in spin S=0 or S=1 states. For a large value of Planck constant h_eff = n×h the spin-spin interaction energy is large and could correspond in living matter to the energies of visible light.


  3. Negentropy Maximization Principle (NMP) is the basic variational principle of the TGD inspired theory of consciousness. NMP states that the gain of negentropic entanglement is maximal in state function reduction, so that negentropic entanglement can be stable.

  4. NMP guarantees that during evolution by quantum jumps recreating the Universe (and the sub-Universes assignable to causal diamonds (CDs)) the information resources of the Universe increase. Just to irritate skeptics, I have spoken about "Akashic records". Akashic records would form books in a universal library and could be read by interaction-free quantum measurement, which preserves entanglement but generates secondary state function reductions providing conscious information about the Akashic records - defining also a model of self.

    Sidestep: Self can be identified as a sequence of state function reductions for which only the first quantum jump is non-trivial at the second boundary of the CD, whereas the other quantum jumps induce a change of the superposition of CDs at the opposite boundary (and of the states at them). Essentially a discretized counterpart of unitary time development would be in question. This allows one to understand how the arrow of psychological time emerges and why the contents of sensory experience are about so narrow a time interval. An act of free will corresponds to the first state function reduction at the opposite boundary and thus involves a change of the arrow of psychological time at some level of the self hierarchy: this prediction is consistent with Libet's findings that a conscious decision implies neural activity initiated before the decision ("before" with respect to geometric time, not subjective time).


In this framework the phosphates could be seen as ends of magnetic flux tubes connecting DNA to the cell membrane and mediating negentropic entanglement with it. The DNA as topological quantum computer vision conforms with the interpretation of the DNA-cell membrane system as "Akashic records". This role of the DNA-cell membrane system would have emerged already before the metabolic machinery, whose function would be to transfer the entanglement of nutrient molecules with some bigger system X to entanglement between biomolecules and X. Some intriguing numerical coincidences suggest that X could be the gravitational Mother Gaia, with flux tubes mediating the gravitational interaction between nutrient molecules and Mother Gaia. This brings in mind Penrose's proposal about the role of quantum gravity. TGD is indeed a theory of quantum gravity predicting that gravitation is quantal in astroscopic length scales.
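The sketch promised above: the number-theoretic entanglement entropy for rational probabilities, as a direct transcription of the modified Shannon formula. Restricting the prime scan to primes dividing some denominator is my shortcut - any other prime gives a non-negative contribution only.

```python
from fractions import Fraction
from math import log

def p_adic_norm(q, p):
    """|q|_p = p^(-v), with v the power of the prime p in the rational q."""
    v, num, den = 0, q.numerator, q.denominator
    while num and num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return float(p) ** (-v)

def p_adic_entropy(probs, p):
    """Modified Shannon formula: S_p = -sum_k P_k log|P_k|_p.
    Unlike ordinary Shannon entropy, S_p can be negative."""
    return -sum(float(P) * log(p_adic_norm(P, p)) for P in probs)

def min_entropy(probs):
    """Find the prime minimizing S_p among primes dividing some
    denominator (the minimum - maximal negentropy - lies there
    whenever any S_p is negative)."""
    primes = set()
    for P in probs:
        n, d = P.denominator, 2
        while d * d <= n:
            while n % d == 0:
                primes.add(d); n //= d
            d += 1
        if n > 1:
            primes.add(n)
    return min((p_adic_entropy(probs, p), p) for p in primes)

# A maximally entangled pair with n = 4 degenerate probabilities:
print(min_entropy([Fraction(1, 4)] * 4))  # (-log 4, p=2): negative entropy
```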

Friday, August 29, 2014

About time delays related to passive aspects of consciousness: once again

Libet's experiments about the strange time delays related to the passive aspects of consciousness serve as a continual source of inspiration and headache. Every time one reads about these experiments, one feels equally confused and must start the explanations from scratch. The following explanation is based on the model of sensory representations on the magnetic canvas outside the body, with size measured by typical EEG wavelengths.

The basic argument leading to this model is the observation that although our brain changes its position and orientation, the mental image of the external world is not experienced to move: it is as if we were looking at some kind of sensory canvas inside the cortex from outside, so that the motion of the canvas does not matter. Or equivalently: the ultimate sensory representation is outside the brain, at a fixed sensory canvas. In this model the objects of the perceptive field are represented on the magnetic canvas. The direction of the object is coded by the direction of the ME located in the brain, whereas its distance is coded by the dominating frequency of the ME, which corresponds to a magnetic transition frequency varying slowly along the radial magnetic flux tubes, so that place coding by magnetic frequency results.

According to the summary of Penrose in his book 'Emperor's New Mind', these experiments tell the following.

  1. With respect to the psychological time of the external observer, the subject person becomes conscious of the electric stimulation of the skin in about .5 seconds. This leaves a considerable amount of time for the construction of the sensory representations.


  2. What is important is that the subject person feels no time delay. For instance, she can tell the time the clock shows when the stimulus starts. This can be understood if the sensory representation, which is basically a geometric memory, takes care that the clock of the memory shows the correct time: this requires a backwards referral of about .5 seconds. Visual and tactile sensory inputs enter the cortex essentially simultaneously, so this is possible. The projection to the magnetic canvas and the generation of the magnetic quantum phase transition might quite well explain the time lapse of .5 seconds.


  3. One can combine an electric stimulation of the skin with a stimulation of the cortex. The electric stimulation of the cortex requires a duration longer than .5 seconds to become conscious. This suggests that the cortical mental image (sub-self) is created only after this critical period of stimulation. A possible explanation is that the stimulation generates a quantum phase transition "waking up" the mental image, so that a threshold is involved.


  4. If the stimulation of the cortex begins (with respect to the psychological time of the observer) not more than .5 seconds before the stimulation of the skin starts, both the stimulation of the skin and of the cortex are experienced separately, but their time ordering is experienced as reversed!

    A crucial question is whether the ordering is changed with respect to the subjective or geometric time of the subject person. If the ordering is with respect to the subjective time of the subject person, as it seems, the situation becomes puzzling. The only possibility seems to be that the cortical stimulus generates a sensory mental image about touch only after it has lasted for .5 seconds.

    In the TGD framework sensory qualia are at the level of sensory organs, so that the sensation of touch assignable to cortical stimulation requires a back-projection from the cortex to the skin (presumably using dark photons producing biophotons as decay products). The mental images generated by direct stimulation of the cortex could be called cognitive: they are created first, and this takes some time. If the construction of the cognitive mental image about the cortical stimulation plus the formation of the back-projection takes at least about .5 seconds, the observations can be understood. A genuine sensory stimulus starts to build the cortical mental image almost immediately: this mental image is then communicated to the magnetic body.

    For instance, assume that the preparation of the cognitive mental image at the cortex takes something like .4 seconds and its communication to the magnetic body about .1 seconds, and that the back-projection is possible only after that and takes roughly the same time to reach the sensory organs at the skin and come back. This would explain the change of the time order of the mental images.


  5. If the stimulation of the cortex begins in the interval T ∈ [.25, .5] seconds after the stimulation of the skin, the latter is not consciously perceived. This effect - known as backward masking - looks really mysterious. It would be interesting to know whether also in this case there is a lapse of .5 seconds before the cortical stimulation is felt.

    If the construction of the cognitive mental image about the direct stimulation of the cortex takes about .4 seconds, it does not allow the buildup of the cognitive mental image associated with the stimulation of the skin. Hence the stimulation of the skin does not create a conscious cognitive or sensory mental image communicated to the magnetic body.

Questions and answers about time

Answering questions is the best possible manner to develop ideas in a more comprehensible form. In this respect the questions of Hamed at my blog have been especially useful. Many of the questions below were originally made by him and inspired the objections, many of them discussed also in previous postings. The answers to these questions have changed during the latest years as the views about self and about the relation between experienced time and geometric time have developed. The following answers are the most recent ones.

Question: The minimalistic option suggests very strongly that our sensory perception can be identified as quantum measurement assignable to state function reductions at the upper or lower boundary of our personal CD. Our sensory perception does not however jump between the future and past boundaries of our personal CD (containing sub-CDs in turn containing...)! Why?

Possible answer: The answer to this question comes from the realization that in ordinary quantum theory state function reductions leaving the reduced state invariant are possible. This must have a counterpart in ZEO. In ZEO reduced zero energy states are superpositions of zero energy states associated with CDs with the second boundary fixed inside the light-cone boundary and the position of the other boundary of the CD varying: one can speak about a wave function in the moduli space of CDs. The basic moduli are the temporal distance between the tips of the CD and the points of the discrete lattice of the 3-D hyperbolic space defined by the Lorentz boosts leaving the second tip invariant.

The repeated state function reductions leave both the fixed boundary and the parts of the zero energy states associated with this boundary invariant. They however induce a dispersion in the moduli space, and the average temporal distance between the tips of the CDs increases. This gives rise to the flow of psychological time and to the arrow of time. Self as the counterpart of the observer can be identified as a sequence of quantum jumps leaving the fixed boundary of the CD invariant. Sensory perception gives information about the varying boundary, and the fixed boundary creates the experience of self as an invariant not changed during the quantum jumps.

The self hierarchy corresponds to the hierarchy of CDs. For instance, we perceive from day to day the - say - positive energy part of a state assignable to a very big CD. Hence the world looks rather stable.

Question: This suggests that our sensory perception actually corresponds to sequences of state function reductions to the two fixed boundaries of the CDs in a superposition of CDs, so that our sensory inputs would alternately be about the upper and lower boundaries of our personal CD. The sleep-awake cycle could correspond to a flip-flop in which the self falls asleep at one boundary and wakes up at the opposite boundary. Doesn't this lead to problems with the arrow of time?

Possible answer: If we measure time relative to the fixed boundary, then the geometric time defined as the average distance between the tips in the superposition of CDs would increase steadily, and we would get older also during sleep. Hence we would experience subjective time to increase. CDs larger than our personal CD, for which the arrow of time remains fixed in the time scale of a life cycle, would provide the objective measure of geometric time.

Question: What is the time scale assignable to my personal CD: the typical wake-up cycle of 24 hours? Or of the order of a life span? Or perhaps shorter?

Possible answer: The durations of the wake-up periods of a self are determined by NMP: death means that NMP favors the next state function reduction to take place at the opposite boundary. The first naive guess is that the duration of the wake-up period is of the same order of magnitude as the geometric time scale of our personal CD. In the wake-up state we would be performing state function reductions repeatedly at - say - the "lower" boundary of our personal CD, and the sensory mental images as sub-CDs would be concentrated near the opposite boundary. During sleep the roles would be interchanged, and the sensory mental images would be at the opposite boundary (dreams, ...).

Question: Are dreams sensory perceptions with the opposite arrow of time, or is some sub-self in a wake-up state experiencing the same arrow of time as we do during the wake-up state? If the arrow is different in dreams, is the "now" of dreams in the past and their "past" in the more recent past of the wake-up state?

Possible answer: Here I can suggest an answer based on my own subjective experiences, and it would be a cautious "yes".

Question: Why do we remember practically nothing about sensory perceptions during the sleep period? (Note that we actively forget dream experiences.)

Possible answer: That we do not have many memories about sleep and dream time existence, and that these memories are unstable, should relate to the change of the arrow of personal time as we wake up. The wake-up state should somehow rapidly destroy the ability to recall memories about dreams and the sleep state. Wake-up memory recall means communications to the geometric past, that is to the boundary of the CD which remains fixed during the wake-up state. In the memory recall of dreams during the wake-up state these communications should take place to the geometric future. Memory recall of dreams would be seeing into the future and much more difficult, since the future is changing in each state function reduction, so that dream memories are erased automatically during wake-up.

Question: Does the return to childhood at old age relate to this flip-flop of the arrow of time in the scale of the life span: do we re-incarnate at biological death at the opposite end of a CD with the scale of the life span?

Possible answer: Maybe this is the case. If this boundary corresponds to the time scale of the life cycle, the memories would be about childhood. Dreams are indeed often located in the past and in childhood.

Use Noether's theorem, not the moment map

Peter Woit has a posting entitled "Use the moment map, not Noether's theorem" in Not Even Wrong. Peter Woit argues that the moment map - a method for dealing with conservation laws associated with continuous symmetries - is superior to Noether's theorem. His claim is wrong.

Noether's theorem is more general than the moment map, since it requires only the Lagrangian formalism; the moment map requires the existence of also the Hamiltonian formalism. In the Lagrangian formalism the invariance of the action under infinitesimal symmetries leads to the expressions for conserved quantities using standard recipes of variational calculus that are easy to understand.
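For the reader's convenience, the Lagrangian recipe in question (textbook material, not specific to TGD): if the action $S = \int d^4x\, L(\phi, \partial_\mu\phi)$ is invariant under the infinitesimal transformation $\phi \to \phi + \epsilon X(\phi)$ (more generally, if $L$ shifts by a total divergence $\epsilon\,\partial_\mu K^\mu$), the current

$$ j^\mu = \frac{\partial L}{\partial(\partial_\mu \phi)}\, X(\phi) - K^\mu $$

satisfies $\partial_\mu j^\mu = 0$ on solutions of the field equations, and $Q = \int d^3x\, j^0$ is the conserved charge. Nothing here requires a phase space, a Hamiltonian, or a preferred time coordinate - only the variational principle.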

The use of the moment map requires that the system allows a transition from the Lagrangian to the Hamiltonian formalism, which requires the introduction of a phase space whose points are labelled by generalized positions and momenta (field values and canonical momentum densities in field theory). In the Lagrangian formalism one has generalized positions and velocities (field values and their time derivatives in field theory). The moment map assigns to a continuous symmetry a Hamiltonian. The continuous flow generated by this Hamiltonian is identifiable as the time development of the system. The value of the Hamiltonian is constant along the orbits of the flow and defines a conserved quantity if the dynamics is time-independent.

What Peter Woit does not notice is that this transition is not always possible.

  1. Already in gauge theories gauge symmetries produce difficulties, since some canonical momentum densities vanish identically and one must perform gauge fixing. The same happens in general relativity: now one must introduce preferred space-time coordinates. Lesson number one is that the Hamiltonian formalism is an essentially Newtonian notion, raising time to a preferred position. In relativistic theories this is not natural.

  2. Also the non-linearity of the basic action principle can cause difficulties, and this tends to be the case for general coordinate invariant action principles, which have geometric content and have surfaces as the extrema of the action. In the case of string models, with the action identified as the surface area of the string world sheet, the situation can still be handled, but TGD represents a textbook example of the failure of the Hamiltonian formalism. The time derivatives of the imbedding space coordinates as functions of the canonical momentum densities of Kähler action are many-valued functions, and there is no hope of solving for them.

    For vacuum extremals this difficulty becomes especially acute. All space-time surfaces having a CP_2 projection which is a Lagrangian manifold - a manifold with vanishing induced Kähler form - are vacuum extremals. In the generic situation the Lagrangian manifolds of CP_2 are 2-dimensional, so the vacuum degeneracy is gigantic. For vacuum extremals the canonical momentum densities vanish identically whereas the time derivatives of the imbedding space coordinates do not. A variation of a vacuum extremal keeping it a vacuum extremal changes the time derivatives but keeps the canonical momenta vanishing.

  3. A 3-surface would represent a point of the configuration space, and the possibility of topology change for the 3-surface destroys even the idea of a Hamiltonian dynamics, which should keep the topology unchanged along the orbit of this point (the 4-surface).

The extreme non-linearity and the vacuum degeneracy have also other implications, which deserve to be mentioned.

  1. Minkowski space would be the vacuum extremal around which the perturbation would be carried out if one took perturbative quantization of general relativity as a model. The action density however vanishes up to fourth order in the derivatives of CP_2 coordinates around Minkowski space, so linearization makes no sense and the perturbative approach fails completely: there are no kinetic terms and one cannot define propagators.

  2. The path integral approach fails - this irrespective of whether it is based on the Lagrangian formalism or on the hopeless attempt to construct a Hamiltonian formalism. This eventually led to the realization that the notion of the "world of classical worlds" (WCW) as an infinite-dimensional geometry consisting of pairs of 3-surfaces at the opposite ends of a causal diamond (in zero energy ontology) provides the only imaginable manner to construct quantum TGD. It means a generalization of Einstein's program of the geometrization of physics from classical physics to entire quantum physics.

  3. The vacuum degeneracy is very much analogous to gauge degeneracy, since the symplectic transformations of CP_2 generate new vacua and act like U(1) gauge transformations on the Kähler gauge potential. These symmetries are not however gauge symmetries, and this makes the situation hopeless if one wants to stay in the Newtonian framework. The vacuum degeneracy and the associated non-determinism also lead to various notions like 4-D spin glass degeneracy and the hierarchy of effective Planck constants allowing an interpretation as a hierarchy of dark matter phases playing a key role in the understanding of living matter as a macroscopic quantum system.

  4. A critical reader could of course ask critical questions. Why stick to Kähler action? Why not use, say, 4-volume? The answer is that 4-volume is the wrong choice for WCW geometry: it would allow only rather small space-time surfaces (also in the temporal direction) and would force the introduction of a dimensional fundamental coupling, which does not give much hope for a divergence-free theory. Kähler action does not suffer from these problems and has many other merits. For instance, Kähler-Dirac action - the supersymmetric fermionic counterpart of Kähler action - leads to the emergence of string world sheets at which the induced spinor modes are localized to guarantee the well-definedness of em charge and which carry vanishing weak gauge potentials. This guarantees also the absence of strong parity breaking effects above the weak scale (which however is proportional to the effective Planck constant, so that strong parity breaking in living matter is obtained).

Tuesday, August 12, 2014

Three news items

Below are comments about three news items which I have seen during the last week and which are interesting from the TGD point of view.

Subterranean ocean

The first news item tells about a subterranean "ocean" consisting of the mineral ringwoodite with a high concentration of water. The ocean is at a depth of about 600 km, which is still considerably smaller than the depth of the inner core of Earth. The conjecture is that this water is responsible for the formation of the ordinary oceans. The model for expanding Earth that I constructed a few years ago led to the prediction that the water below ground in this kind of reservoirs has burst to the surface of Earth during a relatively fast expansion of the Earth radius by a scale factor of 2. This explains the findings that the continents seem to fit nicely together to form a connected continent covering the whole surface of Earth, provided the radius of Earth is one half of its present value. A phase transition increasing the value of the effective Planck constant by a factor of 2 at some level of the hierarchy of space-time sheets would be in question. These discrete expansions would define the counterpart of the continuous cosmological expansion at the level of the many-sheeted space-time.

NASA's quantum space-time ship questioned

Researchers from NASA have reported a new kind of mechanism generating a small momentum without any identifiable counter-momentum required by Newton's laws (see this). The system involves an RF cavity excited at 935 MHz frequency, claimed to cause a thrust on a low-thrust torsion pendulum. The thrust is very small: 30-50 micro-Newtons.

The claim of course raises the eyebrows of physicists, and strong critique has been presented. Instead of labeling the researchers as crackpots, one can ask whether there could be a counter-momentum, but one that we cannot measure with our recent technology. Here TGD might provide a possible answer. Also the system studied has a field body or magnetic body. The small unbalanced momentum would go to the dark matter at the magnetic body, identifiable as a large h_eff = n×h phase.

I have proposed that the transfer of momentum and energy between a system and its magnetic body, or the magnetic body of some other system, could be a basic element of metabolism in living matter. The magnetic body could serve as a kind of fuel storage, storing energy as cyclotron energy. One can imagine several mechanisms. One example is the following. Suppose that there is dark RF radiation with large photon energies at the magnetic body of the RF cavity, which is spontaneously magnetized: the dark RF photons could have been created by this process. A kind of dark Alfvén waves might be in question. Suppose that the flux tubes of the RF cavity reconnect with those of the magnetic body of the pendulum. The dark photons could be transferred to the pendulum, transform to ordinary photons, and provide it with momentum. Since the dark photons at the flux tubes of the magnetic body of the RF cavity are not seen experimentally, an apparent violation of momentum conservation is observed. This is what comes first in mind and is probably not the simplest explanation. What is important is the idea of a large energy and momentum transfer between the system and its field body.


Alternatively, the dark RF photons could be absorbed by the magnetic body of the pendulum, and the momentum could then be transferred to the pendulum. A further option is that the dark photons induce a flow of charged particles from the magnetic body to the pendulum, where they would become dark and suffer a spontaneous magnetization. This would liberate a large energy, since the interaction energy of spin with the magnetic field is proportional to h_eff and therefore large. RF radiation inducing boiling of water could be an analogous effect.

Electron mechanism of anesthesia

Luca Turin is the researcher who found strong evidence that odor perception involves infrared light and that a quantum effect is in question. Now Turin has presented experimental evidence that electron currents through the cell membrane are essential for consciousness, by studying the effects of anesthetics (see this). The effectiveness of an anesthetic correlates directly with its lipid solubility, and it seems that general anesthetics can bind to lipids. The study of Turin and colleagues suggests that anesthetics affect the internal electronic structure of proteins and change the electronic currents through the cell membrane.

In the TGD framework electronic and also ionic supracurrents are essential for consciousness. They run as Cooper pairs of electrons along parallel flux tubes. S=1 Cooper pairs have a large negative spin-spin interaction energy if the magnetic fluxes have the same direction, due to the large value of h_eff to which this energy is proportional; S=0 states of Cooper pairs are possible if the magnetic fluxes have opposite directions. This mechanism could work also for high temperature superconductivity, and it is now known that anti-ferromagnetism - suggesting strongly antiparallel flux tubes as current carriers - is essential for high temperature superconductivity.

The TGD inspired model for the cell membrane assumes that transmembrane proteins act as generalized Josephson junctions. In other words, the Josephson energy contains besides the electrostatic energy difference over the membrane also the difference of the cyclotron energies at the two sides of the membrane, and this contribution dominates over the Josephson energy, which varies - especially so during nerve pulses - and codes neural activity into frequency modulations of the generalized Josephson radiation going to the magnetic body, in this manner communicating the sensory input to it. If h_eff is proportional to the mass of the charged particle, this energy is independent of the particle, and the spectrum of emitted cyclotron photons is universal, corresponding to that of biophotons: being in the visible and UV range, they are optimal for inducing molecular transitions and can thus be applied in biocontrol.
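The universality claim can be made explicit with a one-line computation (assuming $\hbar_{eff} = \hbar_{gr} = GMm/v_0$, a hypothesis discussed in a later posting below): with the cyclotron frequency $\omega_c = qB/m$, the cyclotron energy is

$$ E_c = \hbar_{eff}\,\omega_c = \frac{GMm}{v_0}\cdot\frac{qB}{m} = \frac{GM\,qB}{v_0} , $$

so the particle mass $m$ cancels and the energy spectrum depends only on the charge $q$ and the field strength $B$ - the stated condition for a universal, biophoton-like spectrum.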

If this model is correct, then the action of anesthetics would be simple: they would induce the loss of superconductivity or cut the protein Josephson junctions through the cell membrane. This is consistent with Turin's findings.

Saturday, July 26, 2014

About the origin of Born rule

Lubos has been aggressive again. This time Sean Carroll became the victim of Lubos's verbal attacks. The reason Lubos got angry was the articles of Carroll and his student attempting to derive the Born rule from something deeper: this deeper something was proposed to be the many-worlds fairy tale, as Lubos expresses it. I agree with Lubos about the impossibility of deriving the Born rule in the context of wave mechanics - here the emphasis is on "wave mechanics". I also share his view about the many-worlds interpretation - at least I have not been able to make any sense of it mathematically.

Lubos does not miss the opportunity to personally insult people who tell about their scientific work on blogs. Lubos does not realize that this is really the only communication channel for many scientists. For the outlaws of the academic world, blogs, home pages, some archives, and some journals (of course not read by the "real" researchers enjoying a monthly salary) provide the only manner to communicate their work. The superstring hegemony did a good job in eliminating people who did not play the only game in the town: I too had the opportunity to learn this.

Ironically, also Lubos is an outlaw, probably due to his overly aggressive blog behavior in the past. Perhaps Lubos does not see this as a personal problem since - according to his own words - he has decided not to publish anything without financial compensation, because doing so would make him a communist.

Concerning the Born rule I dare to have a different opinion than Lubos. I need not be afraid of Lubos's insults, since Lubos as a brahmin of science refuses to comment on anything written by inferior human beings like me and even refuses to mention their names: maybe Lubos is afraid that doing so might somehow infect him with the thoughts of the casteless.


Without going into the details of quantum measurement theory, one can say that Born's rule is a bilinear expression in the initial and final states of the quantum mechanical transition amplitude. Bilinearity is certainly something deep, and I will come to that below. Certainly Born's rule gives the most natural expression for the transition amplitude - though demonstrating this is of course not a derivation of it.

  1. One could invent for the transition amplitude formal expressions non-linear in the normalized initial and final states. One can however argue that acceptable expressions must be symmetric in the initial and final states.

  2. The condition that the transition amplitude conserves the quantum numbers associated with symmetries suggests strongly that the transition amplitude is a function of the bilinear transition amplitude between the initial and final states and of the norms of the initial and final states. The standard form for non-normalized states - the inner product divided by the product of the square roots of the norms - is indeed of this form. For instance, one could add exponentials of the norms of the initial and final states.

  3. Projective invariance of the transition amplitude - the independence of the transition probabilities from the normalization - implies that only the standard transition amplitude multiplied by a function - say an exponential - of the modulus squared of the standard amplitude (the transition probability in the standard approach) remains to be considered.

  4. One could however still consider the possibility that the probabilities given by the Born rule are replaced by a function of them: p_ij → f(p_ij) p_ij. Unitarity poses strong constraints on f, and my guess is that f = 1 is the only possibility.
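A back-of-the-envelope argument for why unitarity forces $f = 1$ (my own sketch, not from any source): unitarity requires the modified probabilities to sum to one for every initial state,

$$ \sum_j f(p_{ij})\,p_{ij} = 1 \quad\text{whenever}\quad \sum_j p_{ij} = 1 . $$

Writing $g(p) = f(p)\,p$, three-outcome distributions give $g(p) + g(q) + g(1-p-q) = 1$; together with $g(0) = 0$ and $g(1) = 1$ this makes $g$ additive, $g(p+q) = g(p) + g(q)$, so with mild regularity $g(p) = p$, i.e. $f \equiv 1$.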

Sidestep: To make this more concrete, the proponents of so-called weak measurement theory propose a modification of the formula for the matrix element of an operator A to ⟨i|A|f⟩/⟨i|f⟩. The usual expression contains the product of the square roots of the norms instead of ⟨i|f⟩. This is complete nonsense, since for orthogonal states the expression can give infinity, and for A = I, the unit matrix, it gives the same matrix element between any two states. For some mysterious reason the notion of weak measurement - to be sharply distinguished from interaction-free measurement - has ended up in Wikipedia, and popular journals comment on it enthusiastically as a new revolution in quantum theory.

Consider now the situation in TGD framework.

  1. In the TGD framework the configuration space - the "World of Classical Worlds" (WCW) consisting of pairs of 3-surfaces at the opposite boundaries of causal diamonds (CDs) - is infinite-dimensional, and this sharply distinguishes TGD based quantum theory from wave mechanics. More technically, hyperfinite factors of type II (and possibly also III) replace the factors of type I in the mathematical formulation of the theory.

    Finite measurement resolution is unavoidable and is represented elegantly in terms of inclusions of hyperfinite factors. This means that a single ray of the state space is replaced with an infinite-D subspace whose states cannot be distinguished from each other in the given measurement resolution. The infinite-dimensional character of WCW makes the definition of the inner product for WCW spinor fields extremely delicate.

    Note that WCW spinor fields are formally classical at the WCW level, and state function reduction remains the only genuinely quantal aspect of TGD. At the space-time level one must perform a second quantization of the induced spinor fields to build WCW gamma matrices in terms of fermionic oscillator operators.

  2. WCW spinors are fermionic Fock states associated with a given 3-surface. There are good reasons to believe that the usual bilinear inner product - defined by integration of the spinor bilinear over space (with Euclidian signature) - generalizes, but only under extremely restrictive conditions. The spinor bilinear is replaced with the fermionic Fock space inner product, and this bilinear is integrated over WCW.

    The integration over WCW makes sense only if WCW allows a metric which is invariant under a maximal group of isometries - this fixes WCW, and physics, highly uniquely. To avoid divergences one must also assume that the Ricci scalar vanishes and empty space Einstein equations hold true. The metric determinant is ill-defined and must be cancelled by the Gaussian determinant coming from the exponent of the vacuum functional, which is the exponent of Kähler action if the WCW metric is Kähler, as required by the geometrization of hermitian conjugation, which is a basic operation in quantum theory. One could however still consider the possibility that the probabilities given by the Born rule are replaced by functions of them, p_ij → f(p_ij) p_ij, but unitarity excludes this. Infinite-dimensionality is thus not quite enough: something more is needed unless one assumes unitarity.

  3. Zero Energy Ontology brings in the needed further input. In ZEO the transition amplitudes correspond to time-like entanglement coefficients of the positive and negative energy parts of zero energy states located at the opposite light-like boundaries of the causal diamond. The deep principle is that zero energy states code for the laws of physics as expressed by the S-matrix and its generalizations in ZEO.

    This implies that the transition amplitude is automatically bilinear with respect to the positive and negative energy parts of the zero energy state, which correspond to the initial and final states in positive energy ontology. The question "Why just the Born rule?" disappears in ZEO.

That ZEO gives a justification also for the Born rule is nice, since it has produced a solution also to many other fundamental problems of quantum theory. Consider only the basic problem of quantum measurement theory due to the determinism of the Schrödinger equation contra the non-determinism of state function reduction: Bohr's solution was to give up ontology entirely and take QM as a mere toolbox of calculational rules.

There is also the problem of the relationship between geometric time and experienced time, which ZEO allows to solve, leading to a much more detailed view about what happens in state function reduction. The most profound consequences are at the level of consciousness theory, which is essentially a generalization of ordinary quantum measurement theory intended to make the observer part of the system by introducing the notion of self. ZEO also makes physical theories testable: any quantum state can be achieved from vacuum in ZEO, whereas in the standard positive energy ontology conservation laws make this impossible, so that at the level of principle the testing of the theory becomes impossible without additional assumptions.

Thursday, July 17, 2014

Has the decay of dark photons to visible photons been observed in cosmological scales?

There is an interesting news item in New Scientist shedding new light on the puzzles of dark matter. It has been found that the Universe is too bright: there are too many high energy UV photons in the spectrum. The model calculations suggest also that this excess brightness has emerged lately and was not present in the early universe. The intergalactic space contains more neutral hydrogen - and thus also more ionized hydrogen - than previously thought, and it was hoped that the ionized hydrogen could explain the excess brightness. It is now however clear that 5 times more ionized hydrogen would be required than the theory allows if one accepts the experimental data.


The question is whether dark matter could explain the anomaly.

  1. The usual dark matter candidates have by definition extremely weak interactions - not only with ordinary matter but also with themselves. Therefore it is not easy to explain the finding in terms of ordinary dark matter. The idea of dark matter as a remnant from the early cosmology does not fit naturally with the finding that the surplus UV radiation does not seem to be present in the early Universe.

  2. In TGD dark matter is ordinary matter with large h_eff = n×h and has just the ordinary interactions with itself, but no direct interactions with visible matter. These interactions produce dark radiation with visible and UV energies but with probably much lower frequencies (from E = h_eff × f; see the arithmetic sketch after this list). Energy preserving transformations of dark photons to ordinary photons are an obvious candidate for explaining the surplus UV light.

  3. These transitions are fundamental in the TGD inspired model of quantum biology. Biophotons are in the visible and UV range and are identified as decay products of dark photons in living matter. The fact that the surplus has appeared recently would conform with the idea that higher levels of the dark matter hierarchy have also appeared lately. Could the appearance of UV photons relate to the generation of dark matter responsible for the evolution of life? And could the surplus ionization of hydrogen also relate to this? Ionization is indeed one of the basic characteristics of living matter and makes possible charge separation (see this), which is also a crucial element of TGD inspired quantum biology (see this).
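The arithmetic promised above, with illustrative numbers of my own choosing:

```python
h  = 6.626e-34            # Planck constant, J*s
eV = 1.602e-19            # J per electron volt

E = 2.0 * eV              # a visible-light photon energy (~620 nm), illustrative
n = 1e14                  # h_eff / h, an order of magnitude cited elsewhere here

f_ordinary = E / h        # ~4.8e14 Hz: an ordinary visible photon
f_dark = E / (n * h)      # ~4.8 Hz: same energy at h_eff = n*h - an EEG-range frequency
print(f_ordinary, f_dark)
```

The point is only the scaling E = h_eff × f: a dark photon can carry a biophoton-sized energy at a frequency many orders of magnitude below the visible range.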

Do electrons serve as nutrients?

The New Scientist article about bacteria using electrons as nutrients is very interesting reading, since the reported phenomenon might serve as a test for the TGD inspired idea about metabolism as a transfer of negentropic entanglement (NE) at the fundamental level (see this and this).

  1. NE is always between two systems: the nutrient and something, call it X. The proposal, inspired by a numerical coincidence, was that X could be what I have called Mother Gaia. X could also be something else, say the personal magnetic body. The starting point was the claim that the anomalously high mass of the electronic Cooper pair in a rotating superconductor (slightly larger than the sum of the electron masses!) could be due to a gravimagnetic effect - which is however too strong by a factor of 10^28. This claim was made by a respected group of scientists. Since the effect is proportional to the gravimagnetic Thomson field, itself proportional to the square of Planck constant, the obvious TGD inspired explanation would be h_eff ≈ 10^14 (see this and this).

  2. The gravitational Planck constant h_gr = GMm/v_0, where v_0 is a typical velocity in a system consisting of masses M >> m and m, was introduced originally by Nottale. I proposed that it is a genuine Planck constant assignable to the flux tubes mediating the gravitational interaction between M and m. In the recent case v_0 could be the rotational velocity of Earth around its axis at the surface of Earth (see the numerical sketch after this list).

  3. For electrons, ions, molecules, ..., the value of h_gr would be of the order of 10^14, as required by the gravimagnetic anomaly, and is also of the same order as the h_eff = n×h needed by the hypothesis that the cyclotron energies of these particles are universal (no mass dependence) and in the visible and UV range assigned to biophotons. Biophotons would result from dark photons via a phase transition. This leads to the hypothesis h_eff = h_gr, unifying the two proposals for the hierarchy of Planck constants, at least in microscopic scales.


    Thanks to the Equivalence Principle implying that the gravitational Compton length does not depend on the particle's mass, Nottale's findings can be understood if the h_gr hypothesis holds true only in microscopic scales. This would mean that gravitation in the planetary system is mediated by flux tubes attached to particles rather than to the entire planet, say. One non-trivial implication is that graviton radiation is dark, so that a single graviton carries a much larger energy than in a GRT based theory. The decay of dark gravitons to ordinary gravitons would produce bunches of ordinary gravitons rather than a continuous stream: maybe this could serve as an experimental signature. Gravitational radiation from pulsars is just at the verge of detection if it is what GRT predicts. TGD would predict a pulsed character, and this might prevent its identification if the analysis is based on a GRT based belief system.

  4. In the recent case the model would say that the electrons serving as nutrients have this kind of negentropic entanglement with Mother Gaia. h_gr = h_eff would be of the order of 10^8. Also in nutrients, electrons would be the negentropically entangled entities. If the model is correct, nutrient electrons would be dark and could also form Cooper pairs. This might serve as the eventual test.
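The numerical sketch promised above (all input values are illustrative; in particular the choice of v_0 is one reading of the text, and the result is directly proportional to 1/v_0):

```python
G    = 6.674e-11       # m^3 kg^-1 s^-2
M    = 5.972e24        # Earth mass, kg
hbar = 1.055e-34       # J*s
c    = 2.998e8         # m/s

def hgr_over_hbar(m, v0):
    """h_gr / hbar = G*M*m / (v0 * hbar) for a particle of mass m."""
    return G * M * m / (v0 * hbar)

m_e = 9.109e-31        # electron mass, kg
print(hgr_over_hbar(m_e, v0=465.0))  # ~7e15 with Earth's equatorial rotation speed
print(hgr_over_hbar(m_e, v0=3.0e4))  # ~1e14 with Earth's orbital speed - the order quoted above

# Gravitational Compton length G*M/(v0*c): independent of the particle
# mass m, as the Equivalence Principle argument above requires.
print(G * M / (465.0 * c))           # ~3 km
```

Which v_0 is appropriate is left open here; the exercise only shows that h_gr/ℏ of order 10^14-10^16 indeed comes out for the electron with velocities natural to the Earth system.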

Electrons are certainly fundamental for living matter in the TGD Universe.

  1. The cell membrane is assumed to be a high Tc electronic superconductor (see this). The members of Cooper pairs are at flux tubes carrying opposite magnetic fields, so that the magnetic interaction energy produces a very large binding energy for the large values of h_eff involved: of the order of electron volts! This is also the TGD based general mechanism of high Tc superconductivity: it is now accepted that antiferromagnetism is crucial, and flux tubes carrying fluxes in opposite directions are indeed a very antiferromagnetic kind of thing.

  2. The Josephson energy, proportional to the membrane voltage (E_J = 2eV), is just above the thermal energy at room temperature, meaning minimal metabolic costs.

  3. The electron's secondary p-adic time scale is .1 seconds, the fundamental biorhythm, which corresponds to the 10 Hz alpha resonance.

Wednesday, July 16, 2014

What self is?

The concept of self seems to be absolutely essential for the understanding of the macroscopic and macro-temporal aspects of consciousness and would be the counterpart of the observer in quantum measurement theory.

The original proposal was that self is a conscious entity.

  1. Self corresponds to a subsystem able to remain un-entangled under sequential informational 'time evolutions' U. Exactly vanishing entanglement is practically impossible in ordinary quantum mechanics, and it might be that 'vanishing entanglement' in the condition for the self-property should be replaced with 'subcritical entanglement'. If space-time decomposes into p-adic and real regions, and if the entanglement between regions representing physics in different number fields vanishes, space-time indeed decomposes into selves in a natural manner. Causal diamonds would form natural imbedding space correlates for selves, and their hierarchy would correspond to the self hierarchy.

  2. The intuitive idea, inspired by the formation of bound states of particles, was that self corresponds somehow to an integration of quantum jumps into a single coherent whole. Later I gave up this idea, since it was difficult to understand how the integration could take place.

  3. The next suggestion was that quantum jumps as such correspond to selves. It was however difficult to assign to selves identified in this manner a definite geometric time duration. It is an empirical fact that this kind of duration can be assigned to mental images (identified as subselves).

  4. One could also introduce self as a subsystem which is only potentially conscious, and here the notion of negentropic entanglement suggests an obvious approach based on interaction-free measurement. Negentropy Maximization Principle (NMP) implies that the Universe is like a library with new books emerging continually on its shelves. This would explain evolution. One can however argue that negentropic entanglement - "Akashic records" - gives rise only to a self model rather than to self.

  5. The approach which seems the most convincing relies on the observation that in ZEO sequences of ordinary state function reductions leaving the state unchanged are replaced with sequences for which the part of the zero energy state associated with a fixed boundary of the CD remains unchanged in state function reduction, whereas the state at the other end of the CD changes. This is something new and explains the arrow of time and its flow, and self could be understood as a sequence of quantum jumps at the fixed boundary of the CD (with the average location of the second boundary shifted towards the geometric future, as in dispersion). Amusingly, this is in accordance with the original proposal, except that the state function reductions take place at the same boundary of the CD.

    This view is extremely attractive, since it implies that an act of free will interpreted as a genuine state function reduction must mean a reversal of the direction of geometric time at some level of the hierarchy of selves. The proposal has indeed been that sensory perception and motor action are time reversals of each other and that motor action involves sending negative energy signals to the geometric past.

For details and background see the chapter "Self and binding" of "TGD inspired theory of consciousness".

Saturday, July 12, 2014

Post-empirical science or an expansion of scope: which would you choose?

Bee has very interesting comments about the thoughts of Richard Dawid on what the assessment of a physical theory is. Dawid sees that we are making a transition to post-empiricism, in which criteria other than empirical facts serve increasingly to decide whether a physical theory is useful.

Post-empirical science is not an attractive vision of the future of science. For instance, the standard claim during the string theory hegemony has been that string theories are totally exceptional and that the usual criteria do not apply to them. Bee comments also on the notion of "usefulness", which has sociological aspects in the cruel academic world in which we have to live.


People participating in the discussion seem to agree that theory assessment has become increasingly difficult. Philosopher Richard Dawid suggests what I would call giving up.

Why has theory assessment become so difficult? Is this really true? Or could it be that some wrong belief in our scientific belief system has caused this?

Could it be that our idea about what a unified physical theory should be able to describe is badly wrong? When we speak about unification, we take the naive length scale reductionism for granted. We want to believe that everything physical above the weak boson length scale is understood and that the next challenge is to jump directly to the Planck scale (itself a notion based on naive dimensional analysis, which could lead to a totally wrong track concerning the ultimate nature of gravitation!).

In practice this means that we drop from the field of attention entire fields of natural science such as biology and neuroscience - to say nothing about consciousness (conveniently reduced to physics in the materialistic dogma). These fields provide a rich repertoire of what could be seen as anomalies of the existing physical theory, provided we give up the dogma of length scale reductionism and see these anomalies as what they really are: phenomena about whose physical description or correlates we actually don't have the slightest clue. Admitting that we do not actually understand could be the way out of the blind alley.

This kind of expansion of the view about what a theory should explain might be extremely useful and open up new worlds for the theoretician to understand. Theory could no longer degenerate into the question of what happens at the Planck length scale, and it would have a huge number of observations to explain. What are the basic new principles needed? This would become the basic question. One candidate for them is obviously fractality, possibly replacing the naive length scale reductionism. This would bring in also philosophy, but in a good sense, rather than as an attempt to authorize a theory which has turned out incapable of saying anything interesting about the observed world.