Friday, December 06, 2019

Immortal jellyfish in zero energy ontology

The immortal jellyfish - an animal smaller than a fingernail living in the Mediterranean Sea - is almost immortal, being able to lengthen its life cycle by reversing its aging and reverting to the immature polyp stage (see this). This represents a highly interesting time anomaly from the TGD point of view.

In the zero energy ontology (ZEO) based theory of consciousness the basic entity is self, having a causal diamond (CD) as its imbedding space correlate. Zero energy states are superpositions of classical time evolutions identifiable as preferred extremals of the action principle, analogous to Bohr orbits. Quantum jumps/state function reductions (SFRs) replace the zero energy state with a new one.

There are two kinds of state function reductions. Self corresponds to a sequence of unitary time evolutions followed by "small" state function reductions (SSFRs) as TGD counterparts of weak measurements. Self dies in a "big" state function reduction (BSFR) and re-incarnates with an opposite arrow of time. During the sequence of unitary time evolutions followed by SSFRs defining self, the passive boundary of the CD and the members of state pairs at it are not changed, whereas the active boundary recedes from the passive boundary in a statistical sense and also the members of state pairs at it are affected. BSFR creates a new incarnation of self, the unchanging part of self defining a kind of soul. A kind of Karma's cycle would be in question.

At this moment the most plausible view about zero energy ontology (ZEO) is that the sizes of the CDs serving as correlates for selves stay below a fixed upper bound during the sequences of life cycles in opposite time directions. The size scale would be reduced in BSFR and then increase as the active boundary, to which sensory input is assignable, recedes farther away from the fixed passive boundary. The reduction of the size implies that all reincarnations of self experience "childhood". This option makes it possible to avoid the growth of self into an entire sub-cosmology.

The dramatic prediction is that selves and the corresponding CDs have a more or less fixed position in the imbedding space H= M^4×CP_2. Our past would be living and conscious. In particular, our memories live and also evolve in the geometric past, and to remember is to communicate with these entities. M^8-H duality gives support for this picture and allows one to understand the failure of precise determinism of the classical theory.

The notion of self makes sense for systems with arbitrarily large size scales, and one can solve several cosmological time anomalies by assuming that even astrophysical objects correspond to selves, which repeatedly reincarnate with opposite time directions and evolve as the number theoretical vision about TGD predicts.

In the case of the jellyfish this picture suggests that sub-..-sub-selves of the jellyfish at some level of the hierarchy - perhaps the cells of the jellyfish - have a very short life cycle and therefore remain in the immature state. To live in an eternal childhood is to die often enough!

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, December 04, 2019

Three condensed matter surprises

I learned about 3 surprising findings related to condensed matter physics, defying standard quantum theory and having a natural explanation in the TGD framework.

The strange behavior of light

Light does not behave quite in the manner expected (see this). What was studied was the splitting of the photons of a beam entering a crystal into entangled photon pairs. Quantum field theory, based on the idea of a completely point-like particle, predicts that photon pairs should be created at a single point. What was observed was that the members of entangled photon pairs can also be created at separate points. The distance between these points can be about 1/100 micron, which happens to be the size scale of the cell membrane and a fundamental scale in living matter. This length scale is about 100 times the atomic length scale.

Researchers argue that this finding supports a new kind of Uncertainty Principle. I am not quite at ease with this proposal unless it is taken to mean that the particle has a geometric size.

  1. In the TGD Universe the geometric size would be due to the fact that particles are not point-like but correspond to 3-D surfaces whose "orbits" define the basic building bricks of space-time as a 4-D surface in the 8-D space H= M4 × CP2. Particles can exist in superpositions of their variants with different size scales.

  2. p-Adic physics for various primes p, fusing together with real number based physics into what I call adelic physics, would provide physical correlates of cognition and sensory experience. The number theoretic vision assigns to each particle an extension of rationals characterized by so called ramified primes, which are excellent candidates for defining preferred p-adic length scales. The dimension n of the extension, defining a measure for algebraic complexity and serving as a kind of universal IQ, has an interpretation as effective Planck constant heff/h0=n, so that a connection with quantum physics - or rather its TGD based generalization - emerges.

  3. p-Adic mass calculations rely on the p-adic length scale hypothesis stating that primes near powers of 2 are especially interesting physically: massive elementary particles and also hadrons correspond to this kind of primes. The p-adic mass scale would be proportional to 1/√p.

A lot of new physics is predicted.
  1. TGD predicts scaled variants of strong and weak interaction physics corresponding to different values of p, and LHC provides a handful of bumps with an identification as scaled-up variants of ordinary hadrons having masses 512 times higher.

  2. For a given particle several mass scales are in principle allowed. Quite generally, a particle can correspond to several p-adic primes and can therefore exist in states with different masses differing by a power of √2. The existence of this kind of states in the case of neutrinos would solve some problems related to neutrinos and their masses.

  3. In the case of massless particles different p-adic mass scales do not mean that the masses are different (or more precisely, the masses depend on p but are extremely small and below measurement resolution, so that the mass differences cannot be detected). The p-adic length scale defines the geometric size of the particle as a 3-surface, to be distinguished from the quantum size defined by the Compton length. Quantum classical correspondence (QCC) strongly suggests that these two scales are the same or at least closely correlated.
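The arithmetic of the length scale hypothesis above is easy to check numerically. A minimal sketch in Python using sympy; the standard TGD assignments of M_127 to the electron and of M_107 and M_89 to the two hadron physics are assumptions made here for concreteness, not spelled out in the text:

```python
from sympy import isprime

# p-adic length scale hypothesis: physically preferred primes are near
# powers of two, p ≈ 2^k. Assumed assignments: electron to the Mersenne
# prime M_127 = 2^127 - 1, hadrons to M_107, and the conjectured
# scaled-up hadron physics behind the LHC bumps to M_89.
for k in (89, 107, 127):
    assert isprime(2**k - 1)   # all three are Mersenne primes

# With the mass scale proportional to 1/sqrt(p) ~ 2^(-k/2), the ratio of
# the M_89 and M_107 mass scales is 2^((107-89)/2) = 2^9 = 512, the
# factor quoted above for the scaled-up hadrons.
mass_ratio = 2 ** ((107 - 89) // 2)
print(mass_ratio)  # → 512
```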

The hierarchy of Planck constants heff= n× h0, having an interpretation in terms of dark variants of ordinary particles, predicts a second kind of scale hierarchy.
  1. The mass of the dark variant of an elementary particle would not differ from the mass of the ordinary particle, but the Compton size of a dark particle is proportional to n - a good guess is that n=6 corresponds to the ordinary particle and the ordinary value h of heff.

  2. The scales defined by the dark matter hierarchy could relate to p-adic length scales. There could be a kind of resonance coupling for massless particles: a dark massless particle labelled by n and a particle labelled by p-adic prime p could transform into each other at a high rate if the p-adic and dark length scales are nearly the same. This could be very relevant for biology.

The experimental findings could be understood if photons can correspond to several p-adic length scales. The length scale of 10 nm defining the upper bound for the distance between the members of an entangled photon pair in the experiments would correspond to the p-adic length scale L(151), which corresponds to the Gaussian Mersenne prime p= (1+i)^151-1. A simple model for the photon would be a closed flux tube like structure of this length. Also k=157, 163, and 167 define Gaussian Mersenne primes, which is a number theoretical miracle. What is fascinating is that these scales are fundamental biological length scales assignable to the basic structures of DNA.
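The claimed number theoretical miracle can be checked directly: a Gaussian integer whose norm is a rational prime is a Gaussian prime. A minimal sketch with exact integer arithmetic and sympy; the length scales in the comment assume the p-adic convention L(k) ∝ 2^(k/2) with L(151) ≈ 10 nm:

```python
from sympy import isprime

def gaussian_mersenne_norm(k):
    """Norm of the Gaussian integer (1+i)^k - 1, via exact arithmetic."""
    a, b = 1, 0                # represents a + b*i, starting from 1
    for _ in range(k):         # repeated multiplication by (1+i)
        a, b = a - b, a + b
    a -= 1                     # (1+i)^k - 1
    return a * a + b * b

# k = 151, 157, 163, 167 all give prime norms: four Gaussian Mersennes
# in a row, corresponding to p-adic length scales
# L(k) ≈ 2^((k-151)/2) * 10 nm, i.e. roughly 10 nm, 80 nm, 0.64 um
# and 2.5 um - the biologically interesting range.
for k in (151, 157, 163, 167):
    assert isprime(gaussian_mersenne_norm(k)), k
```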

New surprises related to super-conductors

The so called Anderson's theorem, applying to conventional (BCS) super-conductors, states that the addition of non-magnetic impurities does not destroy super-conductivity. It has however been found (see this) that this is not the case for iron based high Tc super-conductors. This gives valuable hints in the still continuing attempts to understand high Tc super-conductivity.

I have been preaching for fifteen years about a new kind of super-conductivity explaining high Tc superconductivity and making living systems high Tc superconductors (see for instance this and this).

  1. The TGD view about magnetic fields differs from the Maxwellian view. The counterparts of Maxwellian magnetic fields are flux quanta - flux tubes or sheets - realized as space-time surfaces (or regions of them). Besides the counterparts of ordinary magnetic fields there are also monopole flux tubes; they appear in all scales and form the basis of the entire TGD view of the Universe. They carry dark matter as heff= n× h0 phases, and for large values of heff> h there is quantum coherence in long scales making super-conductivity along dark magnetic flux tubes possible. This could also explain high Tc superconductivity in iron based super-conductors.

  2. What was found was that the addition of Cobalt atoms destroys the super-conductivity by inducing a quantum phase transition. Anderson's theorem for ordinary super-conductivity however states that non-magnetic perturbations do not affect superconductivity. In the TGD framework the natural interpretation would be that the quantum phase transition reduces the value of heff/h0=n and thus also the quantum coherence length, meaning that the flux tube length is reduced and super-conductivity is possible only in short scales. Note that dark matter is identified as phases with a non-standard value of heff different from h.

  3. Also the nature of the so called energy gap assignable to super-conductors was modified as Cobalt atoms were gradually added to destroy the super-conductivity. This is not surprising if the value of heff was reduced. The reduction of heff in general decreases energies with the other parameters kept constant, and now it would mean a reduction of the energy gap and a loss of superconductivity.

Conductors of electricity, which are poor conductors of heat

The so called Wiedemann-Franz law states that good conductors of electricity are also good conductors of heat: the two conductivities are proportional to each other. A metal found in 2017 however violates this law (see this). Vanadium dioxide (VO2) transforms from an insulator to a conductive metal at 67 degrees Celsius. The experimenters argue that this property could make new technologies possible: for instance, wasted heat from engines could be converted to electricity.

Electrons are found to move in a coordinated, synchronous manner, and this would explain the reduction of the heat conductivity to 1/10 of the expected value. There is however no super-conductivity. The TGD explanation would be in terms of coherence and synchrony induced by the quantum coherence of dark phases of matter with heff/h0=n residing at the magnetic body of the system and controlling it.
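For reference, the Wiedemann-Franz prediction that VO2 falls short of by a factor of ten can be written out explicitly. A minimal sketch; the conductivity value below is an arbitrary illustrative number, not data from the experiment:

```python
import math

# Wiedemann-Franz law: kappa/sigma = L0 * T, with the Lorenz number
# L0 = (pi^2/3) * (k_B/e)^2 ≈ 2.44e-8 W*Ohm/K^2.
k_B = 1.380649e-23   # Boltzmann constant, J/K
e = 1.602176634e-19  # elementary charge, C

L0 = (math.pi**2 / 3) * (k_B / e)**2

# At T = 340 K (just above the 67 C insulator-metal transition) a metal
# with electrical conductivity sigma should have electronic thermal
# conductivity L0*sigma*T; metallic VO2 shows only about a tenth of it.
T = 340.0        # K
sigma = 8.0e5    # S/m, illustrative value only (assumed, not measured)
kappa_expected = L0 * sigma * T
kappa_vo2_like = 0.1 * kappa_expected   # the reported ~1/10 suppression
```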

This forced coherence would also be crucial in living matter: ordinary living matter would not be quantum coherent, but the magnetic body carrying dark matter would force the coherence. In fact, all self-organization processes could involve the magnetic body and dark matter.

See the article Three condensed matter surprises or the chapter Quantum criticality and dark matter .

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, December 02, 2019

Time reversal of blackhole like object observed?

A very strange object behaving like the time reversal of a blackhole - or rather of a blackhole like object (BHE) - has been observed (see this). The blackhole in question is super-massive and lies in the middle of a galaxy cluster.

Usually a blackhole "eats" the surrounding matter and also prevents the formation of stars, since blackholes are powerful emitters of gamma rays - this is not in accordance with the naive view about blackholes. The weird blackhole does not emit gamma rays, and the environment around it cools, which makes star formation possible. Instead of eating the surrounding matter it would feed matter to the surroundings, making star formation possible.

The most obvious TGD identification of the mystery object relies on zero energy ontology allowing both arrows of time. The arrow of time changes in the ordinary state function reduction - the "big" one as opposed to the "small" one corresponding to a weak measurement. This predicts time reversed blackhole like objects analogous to white holes: white hole like objects (WHEs).

WHEs could appear in the very early stages of galactic evolution. They could feed the magnetic energy of monopole flux tubes to the environment, where it would be transformed to ordinary matter in turn forming galaxies. As a matter of fact, portions of monopole flux tubes emanating from the WHE, much like the field lines of a magnetic field, would be formed, and their local thickening and formation of tangles would give rise to stars.

If the time reversal idea is taken very seriously, WHEs should suck gamma rays from the environment, inducing cooling and making star formation easier. This would be dissipation in a non-standard direction of time, identifiable as the basic metabolic mechanism associated with all kinds of self-organization processes: quantum coherence at the level of the magnetic body would be essential and would induce long range coherence of ordinary matter as forced coherence.

A WHE could also be created in a BSFR occurring for a BHE.


See the article Cosmic string model for the formation of galaxies and stars or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Did cosmology have Dark Ages at all?

A potential time anomaly of recent cosmology relates to the "Dark Ages" of the Universe. Between the decoupling of the CMB radiation from matter and the formation of stars there should have been "Dark Ages" during which there was only neutral hydrogen. Star formation generated radiation at energies high enough to ionize hydrogen, and the ionized interstellar gas started to produce radiation.

The 21 cm line of neutral hydrogen serves as its signature. This line is redshifted, and from the lower bound for the redshift one can deduce the time when the "Dark Ages" ended. A popular article (see this) tells that a recent study by Jonathan Pober and collaborators using the Murchison Widefield Array (MWA) radio telescope gave an unexpected result: only a new, lower upper bound for this redshift emerged, the bound corresponding to an observed wavelength of about 2 meters (see this). The conclusion of the experimenters is optimistic: soon the actual upper bound for the redshift should be brought to light.
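As a rough check of what the 2 meter bound means in terms of redshift, one can apply 1 + z = λ_obs/λ_rest to the 21 cm line; a minimal sketch:

```python
# 1 + z = lambda_obs / lambda_rest for the redshifted 21 cm line.
lambda_rest = 0.21106  # m, rest wavelength of the hydrogen 21 cm line
lambda_obs = 2.0       # m, the quoted observed-wavelength bound

z = lambda_obs / lambda_rest - 1
print(round(z, 1))  # → 8.5, i.e. the bound probes redshifts around 8-9
```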

In the TGD based view about cosmology and astrophysics (this) one can formulate two questions.

  1. One can ask whether there were any "Dark Ages" at all!

  2. An alternative question is whether the "Dark Ages" of the distant past prevail there anymore! This would be like asking whether the Hitler of the thirties is still the Hitler we know. The point is that in the TGD framework one must distinguish between subjective time and geometric time, and this leads to some rather dramatic modifications of the prevailing view about time. The following arguments encourage a positive answer to the first question and a negative answer to the second question.


The answer to the first question relies on the TGD based view about nuclear physics, which solves anomalies of standard nuclear physics and leads to a new view about stellar evolution.

  1. In the TGD framework the formation of stars could have been preceded by a pre-stellar period during which dark fusion, giving rise to dark proton sequences - dark nuclei - at monopole flux tubes, took place: this is the Pollack effect in biology. This would have been a "cold fusion" period in stellar evolution and would have occurred spontaneously at low temperatures. It would already have produced abundances not far from the modern ones; indeed, one of the recent surprises is that the abundances at very early periods are already near the modern ones.

  2. The model also predicts the possibility of neutral states for which electrons are at flux tubes parallel to the dark proton flux tubes, the dark protons having the same scaled-up size as the electrons (the non-standard value of heff=nh0 compensating for the proton Compton length, which is smaller by a factor of about 1/2000). In the solar interior dark protons would have the Compton size of the electron, so that heff for them would be about 2000 times larger than h. Also smaller and larger values of heff are possible. For blackholes the protons at flux tubes would be ordinary: heff=h.

  3. The transformation of dark nuclei, having a much smaller binding energy, into ordinary nuclei would have liberated nuclear binding energy, and the resulting photons, with energies extending up to gamma ray energies, would have ionized the neutral hydrogen.
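The "about 2000 times larger" in item 2 can be checked from the Compton length formula λ_C = heff/(mc): a dark proton has the Compton size of the electron exactly when the heff scaling factor equals the proton-to-electron mass ratio. A minimal sketch with CODATA mass values hardcoded:

```python
# Compton length: lambda_C = heff / (m * c). Scaling heff -> n*h makes
# the dark proton's Compton length n times longer, so it matches the
# electron's Compton length when n equals the mass ratio m_p/m_e.
m_p = 1.67262192e-27  # proton mass, kg
m_e = 9.1093837e-31   # electron mass, kg

n = m_p / m_e
print(round(n))  # → 1836, the "about 2000" of the text
```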

Zero energy ontology (ZEO) leads to a negative answer to the question whether Dark Ages still prevail in distant past.
  1. In ZEO the Universe consists, at the level of the imbedding space H=M4 × CP2, of a fractal hierarchy of CDs = cd× CP2, where cd is a causal diamond of M4. CDs have an interpretation as a hierarchy of sub-cosmologies. Each CD defines a correlate for a conscious entity and increases in size in each "small" state function reduction (SSFR) defining a counterpart of a weak measurement. The flow of experienced time corresponds to the increase of the distance between the tips of the CD. The second boundary of the CD is however fixed - passive - as are the members of the state pairs at it defining zero energy states. The active boundary recedes farther away from the passive one. This gives rise to the arrow of time for a given life of the CD.

  2. In a "big" (ordinary) state function reduction (BSFR) the roles of the boundaries of the CD change. Active becomes passive and vice versa. The arrow of time changes. Self dies and reincarnates with the opposite arrow of time. The simplest possibility is that the size of the CD decreases in BSFR, meaning that the formerly passive boundary comes much nearer to the active one. In this case the CD begins to grow from a small size: self has a "childhood". It can then happen that self never reaches a size larger than some upper bound and lives its life again and again. Each life is more evolved, since the extension of rationals associated with the space-time surface increases in a statistical sense in BSFR. This is nothing but Karma's cycle, but in all scales.

  3. At the level of stars this would mean that a star could undergo evolution as a Karma's cycle also in the cosmologically remote past, as an object located at a fixed position in H. The abundances would be more or less the same as for modern stars. This would explain the mystery of stars older than the Universe and also solve other time anomalies of standard cosmology. This explanation is consistent with the first one; actually the first one is needed to explain the abundances of nuclei heavier than Fe and the abundances of the light nuclei Li, B, and Be, which are much higher than the standard model predicts.

See the article Cosmic string model for the formation of galaxies and stars or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.


Sunday, December 01, 2019

Snowflakes and macroscopic quantum criticality

Thanks to Nikolina Benedikovic for a link presenting images of snowflakes. This led to a very interesting discussion adding new details to the view about self-organization in the TGD Universe. Also phase transitions liberating heat suggest themselves as a new manner to generate dark matter in the TGD sense and could provide a way to generate artificial life in the quantum sense.

The link told about snowflakes having incredibly precise symmetry. Their formation is still poorly understood and their precise symmetries remain a mystery. One would expect something like this in atomic length scales, where one has quantum coherence, but certainly not in macroscopic scales. This inspires heretic questions. Could it be that snowflakes reflect quantum coherence in their own size scale? Snowflakes themselves are not macroscopically quantum coherent. What could be the quantum coherent system involved?

I can reveal my cards: this was mere rhetoric. I asked these questions 15 years ago, but in a different context. The outcome of these questions is the TGD view about living matter and matter in general, based now on adelic physics providing a number theoretic vision about TGD (see for instance this and this).

The answer would be a magnetic body containing dark matter as heff=nh0 phases (h/h0=6 is a good guess) and inducing the self-organization of ordinary matter: the quantum coherence of dark matter would induce the ordinary long range coherence of ordinary matter. The relevance for quantum biology would be that the highly problematic quantum coherence of ordinary bio-matter would not be needed.

Could this explain snowflakes as impossibly perfect designs - as self-organization patterns forced on ordinary matter by the quantum coherent magnetic body of water? I remember someone saying that snowflakes are like zoom-ups of atomic systems reflecting basic molecular symmetries. They could indeed be analogous to zoom-ups of atomic systems with a zooming factor given by n. Quite concretely, the lengths of hydrogen bonds would be scaled up by n.

Concerning a concrete model for snowflakes there is a clear hint. The self-organization would increase the values of heff, and this requires energy feed. Where does it come from?

The freezing of water liberates energy: this could serve as a source of metabolic energy. More generally, phase transitions liberating heat could generate heff> h phases and highly ordered structures. Here might be a possible method to create dark matter in the TGD sense.

Findings of Emoto

An interesting application is to the findings of Masaru Emoto (see this), according to which emotional expressions of humans seem to affect water at criticality for freezing. Angry voices are claimed to create ugly patterns and friendly voices beautiful ones. The metabolic energy needed to induce the phase transition transforming ordinary matter to dark matter as an exotic phase of water would come from the latent heat liberated in freezing. By the macroscopic quantum coherence of the MB, the resulting dark parts of water's MB would be sensitive to human emotional expressions.

Could living systems utilize quantum critical phase transition liberating energy?

Wes Johnson commented on the ability of living systems to use heat as metabolic energy. Could phase transitions liberating heat produce this energy and lead to the generation of large heff phases?

  1. In the TGD Universe the ability to use heat as metabolic energy would be a characteristic of not only life but of all self-organizing systems. The distinction between living and inanimate would be only quantitative. The evolutionary aspect of self-organization would be the generation of coherence in longer scales and would be induced by the generation of large heff phases at the magnetic body, which thus becomes quantum coherent in long scales. Energy feed would generate these phases, and at criticality for a phase transition liberating heat energy (enthalpy) this is easy.

  2. Living systems are conscious in a narrow temperature range. Perhaps this relates to criticality for a phase transition liberating energy, in turn generating especially important heff phases. Water has special anomalies around the physiological temperature and looks like a two-phase system (at least). This kind of phase transition of water could be fundamental for living matter.

    This could have a direct connection with the Pollack effect (see this) creating charge separation: in TGD a part of the protons would become dark protons at magnetic flux tubes - dark nuclei providing a fundamental representation of the genetic code.

  3. Carbohydrates are carriers of metabolic energy. Could this mean that they have molecular bonds (valence bonds) with a non-standard value of Planck constant heff, and that their energy is liberated when these bonds disappear in the splitting of the bonds or even in the reduction of heff, which would be a basic element of bio-catalysis? I have indeed proposed a model for valence bonds in terms of dark flux tubes with heff> h (see this). The values of n involved would be relatively small and would correspond to the many-sheetedness of the space-time surface as a covering, for which M4 ×CP2 coordinates would be n-valued. n would increase towards the right end of the rows of the periodic table, and this would explain the different roles of the molecules at the opposite ends of the rows in biology.


Two aspects of self-organization

Note that these phase transitions producing phases with a non-standard value of heff represent evolution as a statistical increase of the dimension of the extension of rationals and rely on "big" (ordinary) state function reductions (BSFRs). This active, evolutionary aspect could be seen as the quantal aspect of self-organization.

There is also a classical, passive aspect assignable to the evolution of a subsystem by "small" state function reductions (SSFRs) serving as counterparts of weak measurements. In the TGD inspired theory of consciousness, motor-sensory duality corresponds to these two aspects: motor actions correspond to BSFRs and sensory experience to SSFRs.

  1. ZEO predicts that time reversal occurs in ordinary state function reductions (BSFRs), that these reductions occur in all scales, and that they look like ordinary classical evolutions leading to the final state smoothly and deterministically: this was discovered by Minev et al in atomic systems (see this). This would remove the conflict between classicality and non-determinism at the level of conscious experience. Quantum systems would do their best to look classical.

  2. Self-organization as the generation of structures at the space-time level (the passive aspect) can be understood in terms of zero energy ontology (ZEO) alone (see this). Self-organization (its sensory aspect) and metabolism (use of energy) could be seen as dissipation in the opposite direction of time: no separate models or mechanisms would be needed. Gradients would increase, structure would be generated. Basic biological processes at the bio-molecular level would be controlled by magnetic bodies in time reversed states. The only challenge is to understand how living matter generates the sources of metabolic energy - how living systems store energy.

See the article Some comments related to Zero Energy Ontology (ZEO).

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, November 28, 2019

Evidence for the anisotropy of the acceleration of cosmic expansion

Evidence for the anisotropy of the acceleration of cosmic expansion has been reported (see this). Thanks to Wes Johnson for the link. Anisotropy of the cosmic acceleration would fit with the hierarchy of scale dependent cosmological constants predicting a fractal hierarchy of cosmologies within cosmologies down to particle physics length scales and even below.

The phase transitions reducing the value of Λ for a given causal diamond would induce an accelerated, inflation like period as the magnetic energy of the flux tubes decays to ordinary particles. This would give a fractal hierarchy of accelerations in various scales.

See the article Cosmic string model for the formation of galaxies and stars or the chapter of "Physics in many-sheeted space-time" with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Darwinian or neutral theory of evolution or something else?

The stimulus for this posting came from an article telling that the neutral theory of evolution has been challenged by evidence for DNA selection (see this). I must admit that I had no idea what the neutral theory of evolution means. I had thought that the Darwinian view based on random mutations and the selection of the most adaptive ones is the dominating view. The debate has been about whether the Darwinian or the neutral theory of evolution is correct, or whether some new vision is needed.

Darwinian and neutral theories of evolution

  1. Adaptive evolution is the Darwinian view. Random mutations are generated, and the organisms with the most adaptive genomes survive. One can of course argue that also the recombination occurring during meiosis creating germ cells creates new genetic combinations and must be important for the evolution. Selection can be either negative (purifying), eliminating the non-adaptive ones, or positive, favoring the reproduction of the adaptive ones.

    One can argue that notions like "fight for survival" and selection do not fit with the idea of organisms as basically inanimate matter having no goals. Also the second law poses problems: no evolution should take place, just the opposite. Metabolic energy feed induces self-organization, but by the second law all gradients - of which metabolic energy feed is an example - disappear.

  2. The neutral theory of evolution was proposed by Kimura 50 years ago and gained a lot of support because of its simplicity. Point mutations of the codons of DNA would create alleles. Already in Darwinian evolution one knows that a large fraction of the mutations are neutral, having no positive or negative effect on survival. Kimura claimed that all mutations are of this kind. There would be no "fight for survival" or selection.

    The so called genetic drift, a completely random process, is possible in small populations and can lead to a counterpart of selection: it can happen that only a single allele remains, the counterpart of the winner in selection. This is a purely random, combinatorial effect, and in physics one would not call it drift.

    The first objection is that if one has several isolated small populations, the outcomes are completely random, so that in this sense there is no genetic drift. Furthermore, there is no reason why further mutations would not bring the disappeared alleles back. The second objection is that there would be no genuine evolution - how can one then speak about a theory of evolution?

    Now the feed of experimental and empirical data is huge as compared to what it was 5 decades ago, and it is known that the neutral theory fails: for instance, the varying patterns of evolution among species with different population sizes cannot be understood. It is also clear that selection and adaptation really occur, so that Darwin was right.

  3. The shortcomings of the neutral theory led Ohta to propose the nearly neutral theory of evolution. Mutations can be slightly deleterious. For large populations this leads to a purging of slightly deleterious mutations. For small populations deleterious mutations are effectively neutral and lead to genetic drift.

    There is however a further problem: why does the rate of evolution vary as observed between different lineages of organisms?

  4. One reason for the fashionability was that the model was very simple and allowed one to compute and predict. The size of the population and the rate of mutations alone are enough to predict the future of small populations. The predictions have been poor, but this has not bothered the proponents of the neutral theory of evolution.

    As an outsider I see this as a typical example of a fashionable idea: such ideas have plagued theoretical particle physics for four decades now and have led to a practically complete stagnation of the field via hegemony formation. Simple arguments show that the idea cannot be correct, but they have no effect.
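The genetic drift discussed in item 2 is easy to demonstrate with a minimal Wright-Fisher simulation (pure drift, no selection); the population size and seed below are arbitrary illustrative choices:

```python
import random

def wright_fisher(pop_size, p0, rng, max_gen=100_000):
    """Pure-drift Wright-Fisher model: each generation is a binomial
    resampling of the previous allele frequency, with no selection.
    Returns (generation, frequency) at fixation or loss."""
    count = round(p0 * pop_size)
    for gen in range(1, max_gen + 1):
        p = count / pop_size
        if p in (0.0, 1.0):          # allele lost or fixed
            return gen, p
        count = sum(1 for _ in range(pop_size) if rng.random() < p)
    return max_gen, count / pop_size

# In a small population a neutral allele fixes or disappears purely by
# chance - the random "counterpart of selection" with no selection.
rng = random.Random(42)              # arbitrary seed for reproducibility
gen, p = wright_fisher(pop_size=20, p0=0.5, rng=rng)
print(gen, p)  # fixation to 0.0 or 1.0, typically within ~2N generations
```

Run with several independent "populations" (different seeds), the surviving allele differs from run to run, which is exactly the first objection above.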

The article explains several related notions.
  1. Since 2005 it has been possible to determine the mutation rates at the level of individual sites of the genome. Only a subset of the mutations of, say, cancer cells are functionally important to the cancer, and they can be identified. This leads to selection intensity as a basic notion, which is expected to be very valuable in the attempts to find targeted cures for cancer.

  2. The neutral theory of evolution assumes that only point mutations matter. The theory was therefore completely local at the level of the genome - and certainly simple! An innocent outsider knowing a little bit about biology wonders why the recombination of maternal and paternal chromosomes in meiosis, creating the chromosomes associated with germ cells, is not regarded as important. This mechanism is non-local at the level of the genome and would naturally lead to selection at the level of individuals of the species. It has indeed been learned that genetic variation and the rate of recombination in meiosis correlate in a given region of the genome. This sounds almost obvious to the innocent novice but had to be discovered experimentally.

    One can however still try to save the neutral theory of evolution by assuming that recombination is a completely random process and that there is no selection and adaptation - contrary to the experimental facts and to the basic idea behind the notion of evolution. Recombination would bring only an additional complication.

    Besides direct purifying selection and neutral drift there would be recombination creating differences in the levels of variation across the genomic landscape. This leads to the notion of genetic hitchhiking. When beneficial alleles are closely linked to neighboring neutral mutations, selection acts on them as a unit. One speaks of linked selection. The frequencies of neutral alleles are then determined by more than genetic drift, but one can still speak of neutrality. The linkage of a hitchhiker to an allele - beneficial or not - is however random. Does genuine evolution take place at all?

  3. Most of the DNA is not expressed as proteins. It would not be surprising if this part of the DNA had an important indirect role in gene expression or were perhaps expressed in some other manner - say electromagnetically. How important a role does this part of the DNA play in evolution? There are also transposons inducing non-pointlike mutations of this part of the DNA: what is their role? There are also proposals that viruses, usually thought to be a mere nuisance, could play a decisive role in evolution by modifying the DNA of host cells.

  4. It is now known that up to 80-85 per cent of the human genome is probably affected by background selection. Moreover, height, skin color, and blood pressure are polygenic properties in the sense that hundreds or thousands of genes act in concert to determine them. This strongly suggests that pointlike mutations cannot be responsible for evolution, and that not even recombinations are enough if they are random. A control of evolution in longer scales seems to be required. This of course relates to the basic problem of molecular biology: what gives rise to the coherence of living matter? Mere bio-chemistry cannot explain this. Something else, perhaps controlling the bio-chemistry, is needed.

TGD based view about evolution

One can start by criticizing the standard view.

  1. Is the standard view (to the extent that such exists) about evolution consistent with the second law? One can even ask whether the standard view about thermodynamics, assuming a fixed arrow of time, is correct.

  2. If mutations and more general changes of the genome occur by pure chance, can they really lead to genuine evolution? The notions of selection and survival of the fittest do not conform with the view of evolution as mere standard physics. A probable motivation for the neutral theory of evolution has been the attempt to get rid of these notions: physicalism taken to the extreme.

  3. The reduction of life to bio-chemistry does not allow one to understand the coherence of organisms.

  4. One can also criticize the reduction of life to mere genetics.

    1. The genetic dogma does not tell much about morphogenesis.
    2. Is genetic determinism a realistic assumption? Clones of a bacterium are now known to have personalities, behaving differently under given conditions (see this).
    3. Most of the genome of higher organisms consists of DNA not transcribed to RNA, still interpreted as junk by some biologists. What about introns? Could there exist other forms of gene expression - say electromagnetic?


The TGD based view about evolution can be seen as a response to these criticisms, but it actually developed from a proposal for a unification of fundamental interactions and from a generalization of quantum measurement theory leading to a theory of consciousness and a generalization of quantum theory itself.

  1. TGD leads to a new view about space-time and classical fields. In particular, many-sheeted space-time and the magnetic body bring in new elements that dramatically change the views about biology.

    The notion of Maxwellian fields is modified. Unlike in Maxwellian theory, any system has a field identity - a field body, in particular a magnetic body (MB) - carrying dark matter in the TGD sense and, in a well-defined sense, at a higher evolutionary level than ordinary bio-matter. This expands the standard pairing organism-environment to the triple MB-organism-environment.

    MB can be seen as the controlling intentional agent, and its evolution would induce also the evolution of the ordinary bio-matter. MB carries dark matter as heff/h0=n phases giving rise to macroscopic quantum coherence at the level of MB. MB forces the ordinary bio-matter to behave coherently (though not quantum coherently).

    TGD also leads to a realization of the genetic code at the level of a dark analog of DNA represented as dark proton sequences (see this) - dark nuclei, which are now an essential element of the TGD based view about nuclear physics (see this). Dark photons are essential for the communications between MB and ordinary bio-matter. Also dark photons would realize the genetic code, with a codon represented as a 3-chord consisting of 3 dark photons.

    Genetic modification would take place at the level of magnetic flux tubes containing the dark analog of DNA and would induce changes of the ordinary genome, which would do its best to mimic the dark genome. In particular, the recombination occurring during meiosis would be induced by the reconnection of the flux tubes of the dark genome.

  2. The number theoretical vision about evolution, deriving from the proposal that p-adic physics for various primes combine to what I call adelic physics, is the second needed element (see this). Any system can be characterized by an extension of rationals defining its algebraic complexity. The dimension n of the extension, identifiable in terms of the effective Planck constant heff/h0=n, defines the evolutionary level as a kind of IQ. What is remarkable is that n increases in statistical sense, since the number of extensions with dimension larger than that of a given extension is infinitely larger than the number of lower-dimensional extensions. The more intelligent ones have a larger scale of quantum coherence and thus coherence of bio-matter, and survive. Evolution is a directed process forced by number theory alone.

    Quantum jumps in the sense of ZEO tending to increase n, occurring naturally in the meiosis generating germ cells, lead also to more intelligent genomes. Point mutations could be seen as something occurring at the level of ordinary matter rather than being induced by dark matter.

  3. Zero energy ontology (ZEO) is behind the generalization of quantum measurement theory solving the basic problem of standard quantum measurement theory. There are two kinds of state function reductions. "Small" state function reductions (SSFRs) are analogs of weak measurements and give rise to the life cycle of a conscious entity - self - having a so-called causal diamond (CD) as a correlate. Under SSFRs the passive boundary of the CD is unaffected, as are the members of state pairs at it: this gives rise to the "soul", the unchanging part of self. "Big" state function reductions (BSFRs) correspond to ordinary state function reductions. They change the arrow of time, and one can say that self dies and re-incarnates with a reversed arrow of time. This applies in all scales, since consciousness and cognition are predicted to be universal. In BSFRs the value of heff increases in statistical sense, and this gives rise to evolution also at the level of the genome. The reversal of the arrow of time makes it possible to see self-organization and metabolism as dissipation in the non-standard time direction, so that a generalization of thermodynamics allowing both arrows of time allows one to understand both self-organization and evolution.
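The claimed statistical increase of n can be illustrated with a deliberately crude toy model (my own sketch, not part of TGD proper): if each quantum jump embeds the current extension of rationals into a larger one containing it, the dimension n is multiplied by an integer ≥ 1, so n can only grow or stay put, giving a directed "IQ" evolution.

```python
import random

def evolve_iq(steps=50, seed=3):
    """Toy model: each 'quantum jump' embeds the current extension of
    rationals into a larger one, so its dimension n gets multiplied by
    an integer factor >= 1 (the weights below are purely illustrative)."""
    random.seed(seed)
    n = 1
    history = [n]
    for _ in range(steps):
        n *= random.choice([1, 1, 2, 3])  # arbitrary illustrative weights
        history.append(n)
    return history

h = evolve_iq()
# n is non-decreasing, and each value divides the next by construction
print(h[0], "->", h[-1])
```

The monotonicity is built in by hand here; in the text it is argued to follow statistically from the counting of extensions.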

Evolution at DNA level

A possible application would be the TGD based model for meiosis and fertilization. The starting point is that the recombinations occurring in meiosis represent a fundamental step in evolution preserving the species, whereas point mutations are mostly noise having also negative effects. There are also modifications which produce a new species. Consider first recombinations.

  1. In meiosis a BSFR for the dark proton sequences defining the dark DNA could induce reconnections of parallel maternal and paternal dark proton flux tubes, inducing recombination at the level of the ordinary genome.

  2. The resulting germ chromosomes - or rather their dark variants realized in terms of dark proton sequences - would have an arrow of time opposite to that of the ordinary chromosomes. They would be in a dormant state analogous to sleep.

  3. Fertilization involves the pairing of paternal and maternal germ chromosomes and looks almost like a time reversal of meiosis. In the proposed picture it would indeed change the arrow of time for the germ chromosomes - wake them up. The sequence defining meiosis - replication, meiosis I, division, meiosis II - would correspond to 4 BSFRs leading to germ cells having a dark genome as the time reversal of the ordinary genome.

    Remark: One can ask whether also the passive strand of ordinary DNA has arrow of time opposite to that of the active strand.

Recombinations do not change the genome dramatically and can be said to be species preserving. Big leaps in evolution change the genome more drastically - say by adding new genes - to yield what might be regarded as a new species. They represent a challenge also for the TGD based view.

The big changes should occur at the level of the magnetic body, inducing in turn modifications at the level of the ordinary genome. The addition of a portion of DNA double strand of the same length to the end of the DNA double strand could be a species changing modification. This would not change the earlier genome and could for instance add a new gene. How could this change occur?

  1. In TGD the dark genome acts as the master controlling the ordinary genome playing the role of the slave. Dark genes correspond to dark proton sequences, with possibly a subset of protons behaving like neutrons due to the presence of negatively charged bonds between two neighboring protons of the sequence. A large modification would add new dark protons to this sequence: a dark counterpart of nuclear fusion would take place.

    In water the Pollack effect would correspond to this process and would give rise to a charge separation creating negatively charged regions called exclusion zones (EZs) by Pollack. There is no reason why this process could not occur also inside cells containing pairs of maternal and paternal chromosomes.

  2. At the level of the dark magnetic body the modification of the dark double strand could be realized if it corresponds to a closed monopole flux loop having a double helical structure: the conjugate strand would carry the return flux. The addition of a piece of DNA would be induced by a reconnection gluing a shorter helical flux loop to the end of the helical loop. The chemical counterpart of the dark DNA would be formed by the pairing of dark codons with ordinary codons - a kind of transcription process.

  3. The modifications of paternal and maternal dark genomes are expected to occur independently and typically lead to different lengths of paternal and maternal DNAs. Hence the condition that the paternal and maternal modifications of germ cells are identical (same length) is too strong.

    Can the maternal and paternal DNA double strands have different lengths? This seems to be possible. The reconnection process in meiosis does not require the same lengths for the maternal and paternal genomes. In fertilization the chromosomes of the paternal and maternal gametes form pairs, and also this allows different lengths. Therefore the big leaps in evolution could correspond to additions of new pieces to the maternal and/or paternal dark genome.

This picture is of course over-simplified. Also the addition of DNA portions in the middle of the genome - say adding a new gene or part of a gene - should be possible at the level of dark matter. Also this process should occur by a reconnection process at the level of dark matter, and also now it seems that the process can occur independently for the maternal and paternal chromosomes.

See the article Darwinian or neutral theory of evolution or something else?, the longer article Getting philosophic: some comments about the problems of physics, neuroscience, and biology, or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, November 27, 2019

Multilocal viruses

I learned about a very interesting piece of strangeness in biology, known already for half a century (see this): there are viruses which can split into segments going into different host cells, replicate and produce proteins there, and self-assemble to the original virus after this.

A virus (see this) consists of DNA or RNA, a protein coat, and in some cases an outer envelope consisting of lipids and analogous to the cell membrane. Typically the DNA or RNA of a virus decomposes to short segments, each coding for a single protein. The reason for this is that RNA replication is prone to errors, and for short segments these errors are not so fatal. Also DNA can be segmented, but the segments are longer. RNA can have positive sense, in which case it can be directly translated to protein, or negative sense, in which case a replication producing positive sense RNA is needed, made possible by an enzyme contained by the virus.

The usual thinking about viruses is that the virus finds its way to a cell and then uses the genetic machinery of the cell to replicate its DNA or RNA and to produce proteins. This does not occur in the case of multipartite viruses infecting plants. The virus can split into segments infecting host cells separately. The segments of RNA and the proteins contained by the virus are thus shared by different cells, where they are replicated and translated to proteins. The outcome of the process is then brought together in some cell - which need not contain any gene segments itself - and self-assembly to the full virus can occur. Also fractured viruses can flourish and can infect some other plant.

It has been found that the full complement of most viral segments is missing from most plant cells. Proteins required for viral replication are present in cells that do not have the genome segment for producing them, so that the produced proteins must be transferred from the cell where they are produced to neighboring cells: it is thought that the so-called plasmodesmata connecting cells to a network make this possible.

In the standard view, assuming that the viral segments are completely independent systems, multi-partitioning has high risks. In this view theoretically not more than 4 segments should be possible, yet for instance 8 segments have been observed in the examples discussed. Even the flu virus decomposes into 8 RNA segments within the cell inside which it replicates. Multi-partitioning produces also problems for spreading. In the case of the FBNSV viruses mentioned in the article, an insect - an aphid - feeding on the plant spreads the virus to other plants. How can it get all 8 parts of the virus simultaneously? This is very difficult to understand if the segments are really independent.
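The difficulty can be quantified with a back-of-envelope estimate (my own illustration, not from the article): if each of N segments independently reaches a given cell with probability p, the chance that one cell receives the full complement is p^N, which collapses rapidly with N.

```python
def full_complement_prob(p, n_segments):
    """Chance that a single cell independently receives every segment."""
    return p ** n_segments

for n in (2, 4, 8):
    print(n, full_complement_prob(0.5, n))
# for p = 0.5 the chance drops from 0.25 (N = 2) to ~0.004 (N = 8)
```

Any mechanism correlating the segments - such as the flux tube network proposed below - evades this exponential suppression.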

This suggests that the view about these viruses is somehow wrong: multi-partitioning happens, but the standard view does not allow it.

One can start by asking why the multi-partitioning implying modular reproduction (something analogous to that in industry!) occurs at all. One good reason is that the host cell might not be able to recognize the segments. Also the transcription of too large a number of RNAs might be too much for the host and kill it. It seems that these viruses act as populations.

The TGD based model is based on familiar basic notions.

  1. The basic mystery of biology is the coherence of organisms. Bio-chemistry alone cannot explain it. In TGD, dark matter identified as heff=nh0 phases of ordinary matter at the magnetic flux tubes of the magnetic body (MB) of the system is quantum coherent in long scales, and this quantum coherence forces the coherence of ordinary living matter.

  2. The flux tubes of MB connect cells to larger networks (tensor networks). In particular, the segments of the virus can be connected to a network in this manner. The segments would be effectively free but their behavior would be correlated. The virus would be a multi-local entity at the level of ordinary matter but a single connected structure at the level of MB.

  3. The TGD based model for bio-catalysis and replication and the model for monopole flux tubes suggest that the phase transition increasing heff/h0=n increases the length of the flux tube. This process requires metabolic energy, since quite generally the energy of a system increases with n, which serves as a kind of IQ of the system measuring its algebraic complexity and is identifiable as the dimension of the extension of rationals assignable to the system. Multi-partitioning requires metabolic energy, presumably provided by the host cells. The components of the multi-partitioned virus are virtually independent, but the flux tube connections are not lost. There are very many possible multi-partitionings, and an individual host cell can contain several segments.

  4. If the decay of the virus to a multi-partitioned state corresponds to an ordinary state function reduction ("big" state function reduction, BSFR, in zero energy ontology, ZEO), the arrow of time changes at the level of the MB of the virus (dark matter). heff/h0=n increases in statistical sense in BSFR, so that the multi-partitioned state should have higher IQ and is thus favored by quantum TGD. One might perhaps say that when the virus is not active it does not need much IQ: IQ requires metabolic energy feed, and low IQ is the most economical choice in the dormant state. When the virus infects the host it becomes active, and the increase of n makes it multi-local at the level of ordinary matter.

    If this view is correct, the self-assembly of the virus would lead back to the dormant state with the opposite arrow of time. That the dormant state of the virus would correspond to an opposite arrow of time for the "virus self" would conform with the general view that an observer experiences a conscious entity with the opposite arrow of time as sleeping. One must of course be very cautious with interpretations.

  5. These dormant states would not be specific to viruses. Also a folded protein would be dormant. An external perturbation would provide a metabolic energy feed waking up the dormant protein; the protein would unfold and become active and intelligent.

    The same applies to multi-locality. Also a bacterial colony could be seen as a single organism, multi-local only at the level of ordinary bio-matter. When a bacterial colony suffers starvation, the bacteria form a single tightly connected structure also at the level of ordinary bio-matter. In the absence of metabolic energy feed the values of n associated with the flux tubes would be reduced and the flux tubes would shorten, causing the phenomenon.

    For cellular organisms the multi-locality at the level of ordinary bio-matter would be realized for cells, but the distances of the cells would be fixed. Also at the level of DNA, RNA, tRNA and amino-acids multi-locality would be realized, but the distances would not be fixed. In bio-catalysis the reactants are brought together, and here a heff reducing phase transition would take place, providing also the energy needed to overcome the potential wall making the reaction extremely slow otherwise. In the TGD based model for replication, transcription, and translation this flexible multi-locality is indeed assumed (see this).

  6. How sexual reproduction (see this) emerged is one of the mysteries of biology. The formation of tightly bound multi-local states of mono-cellulars would have increased the probability for lateral gene transfer between neighboring cells, and also for the replacement of mere replication with a two-step process consisting of replication followed by meiosis and fertilization as its inverse. The reconnection of flux tubes assignable to DNA is a prerequisite of this process in the TGD framework, so that the formation of states analogous to multi-cellulars would have made this process plausible.

It has been found (see this - thanks to Nikolina Bendedikovic for the link) that multicellulars have monocellular colonies as predecessors in the sense that bacteria (monocellulars) form temporary tight structures resembling multicellular embryos. The transition from loose multi-locality to a tighter one suggests itself. When the metabolic energy feed is low, bacteria form tightly bound non-multilocal structures analogous to multi-cellulars. The flux tubes shorten and metabolic energy is liberated, and also the need for metabolic energy is lower when the flux tubes have lower values of heff. Multi-cellulars would be permanently in this configuration, and their intelligence, coded by the distribution of heff values, would be realized differently.

Multi-cellulars would have been formed when these multi-cellular like bacterial colonies became permanent and began to evolve from embryos to more developed forms (see this and this). Hitherto I have assumed that multi-cellulars were formed already before the Cambrian explosion, assumed to be induced by a relatively rapid phase transition reducing the local cosmological constant by a factor 1/2 and increasing the radius of Earth by a factor 2. This transition would have brought multi-cellulars to the surface from underground oceans, giving also rise to the ordinary oceans. I have compared the underground oceans to a womb of the magnetic Mother Gaia. The principle that ontogeny recapitulates phylogeny suggests that the life of the multicellular embryo in the womb corresponds to the period of multicellular life in underground oceans.

A second possibility is that the multi-cellulars emerged from underground mono-cellulars during this transition or immediately after it. Could the emergence of the bacterial colonies to the surface, perhaps providing a smaller metabolic energy feed, have forced them to form tightly bound colonies, forcing the evolution of multi-cellulars?

See the article Multilocal viruses or the chapter Dark matter, quantum gravity, and prebiotic evolution.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Too heavy blackhole in Milky Way

The standard model for blackhole formation predicts an upper bound on the mass of the blackhole, depending also on the environment, since the available amount of matter in the environment is bounded. In the case of the Milky Way the bound is about 20 solar masses. Now, however, a blackhole like entity (BHE) with a mass of about 70 solar masses has been discovered (see this). I am grateful to Wes Johnson for the link. Also the masses of the BHEs producing the gravitational radiation in their fusion have been unexpectedly high, which suggests that the standard view about BHEs is not quite correct.

In the TGD framework the blackhole-like entities (BHEs) are volume filling flux tubes: the thickness of the flux tube is roughly the proton Compton length. Also the asymptotic states of stars are BHEs in this sense: the flux tube radius is some longer p-adic length scale coming in half octaves. The model leads to a correct lower bound for the radius and mass satisfied by neutron stars and standard blackholes. Also the masses of very light stars can be understood.

The model in its recent form does not give an upper bound for the mass. For time reversed BHEs - analogs of white holes (WHEs), possibly identifiable as quasars - the mass of the WHE comes from a tangling long cosmic string, and there is no obvious upper bound. Even galactic BHEs would correspond to WHEs having made a quantum jump to BHEs at the level of the magnetic body: in this state the flux tube forming the counterpart of the magnetic field is fed back from the environment. A breathing spaghetti.

In the standard model the mechanism for the formation of a blackhole is different, since there is no flux tube giving the dominant dark energy/dark matter contribution. Therefore the upper bound for the mass - if such exists - is expected to increase. An upper bound could come from the de-entangling of the flux tube, so that the spaghetti would straighten.

The simplest model predicts that only the flux tube mass contributes. The mass of the ordinary matter going to the BHE would transform back to the dark energy/mass of the flux tube. The process would be the time reversal of the process - making sense in zero energy ontology (see this and this) - in which the magnetic energy of the flux tube transforms to ordinary matter: the time reversal of the TGD counterpart of inflation.

See the article Cosmic string model for the formation of galaxies and stars or the chapter of "Physics in many-sheeted space-time" with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, November 25, 2019

Blackholes, quasars, and galactic blackholes

In the sequel I summarize the dramatic progress which has taken place in the understanding of blackhole like entities (BHEs) in the TGD framework. This picture allows one to see also stars as BHEs. A more detailed representation can be found in the article Cosmic string model for the formation of galaxies and stars.

I have discussed a model of quasars earlier (see this). The model is inspired by the notion of MECO and proposes that a quasar has a core region analogous to a blackhole in the sense that its radius is, apart from a numerical factor near unity, rS=2GM. This comes from mere dimensional analysis.

1. Blackholes in TGD framework

In TGD the metric of the blackhole exterior makes sense and also part of the interior is embeddable, but there is not much point in considering the TGD counterpart of the blackhole interior, which represents the failure of GRT as a theory of gravitation: the applicability of GRT ends at rS. The following picture is an attempt to combine ideas about the hierarchy of Planck constants and from the model of the solar interior (see this) deriving from a 10 year old nuclear physics anomaly.

  1. The TGD counterpart of a blackhole would be a maximally dense spaghetti formed from a monopole flux tube. Stars would be less dense spaghettis. A still open challenge is to formulate the precise conditions giving the condition rS=2GM. The fact that the condition is "stringy", with T=1/(2G) formally taking the role of string tension, encourages the spaghetti idea, with the length of the cosmic string/flux tube proportional to rS.

  2. The maximal string tension allowed by TGD is determined by the CP2 radius and the estimate for the Kähler coupling strength αK ≈ 1/137, and is roughly Tmax ∼ 10^(-7.5)/G, suggesting that in a blackhole about 10^7.5 parallel flux tubes with maximal string tension and with length of about rS give rise to the blackhole like entity. A kind of dipole core consisting of monopole flux tubes formed by these flux tubes comes to mind. The flux tubes could close to short flux tubes, or they could continue like the flux lines of a dipole magnetic field and thicken so that the energy density would be reduced.

  3. This picture conforms with the proposal that the integer n appearing in the effective Planck constant heff=n×h0 can be decomposed to a product n=m×r associated with a space-time surface which is an m-fold covering of CP2 and an r-fold covering of M4. For r=1 the m-fold covering property could be interpreted as a coherent structure consisting of m almost similar regions projecting to M4: one could say that one has a field theory in CP2 with m-valued fields represented by M4 coordinates. For m=1 each region would correspond to an r-valued field in CP2.

    This suggests that Newton's constant corresponds, apart from numerical factors, to 1/G = mℏ/R^2, where R is the CP2 radius (the radius of the geodesic circle). This gives m ∼ 10^7.5 for gravitational flux tubes. The deviations of m from this value would have an interpretation in terms of the observed deviations of the gravitational constant from its nominal value. In the fountain effect of super-fluidity the deviation could be quite large (see this).

    Smaller values of heff are assigned in the applications of TGD to the flux tubes mediating interactions other than gravitation, which are screened and should have a shorter scale of quantum coherence. Could one identify the corresponding Planck constant in terms of the factor r of m: heff = r×h0? TGD leads also to the notion of the gravitational Planck constant hgr = GMm/v0 assigned to the flux tubes mediating gravitational interactions - presumably these flux tubes do not carry monopole flux.

  4. The length scale dependent cosmological constant should characterize also blackholes, and the natural first guess is that the radius of the blackhole corresponds to the scale defined by the value of the cosmological constant. This allows one to estimate the thickness of the flux tube by a scaling argument. The cosmological constant of the Universe corresponds to the length scale L = 1/Λ^(1/2) ∼ 10^26 m, and the density ρ of dark energy corresponds to the length scale r = ρ^(-1/4) ∼ 10^(-4) m. One has r = (8π)^(1/4)×(L×lPl)^(1/2), giving the scaling law (r/r1) = (L/L1)^(1/2). By taking L1 = rS(Sun) = 3 km one obtains r1 = .7×10^(-15) m, rather near to the proton Compton length 1.3×10^(-15) m and even nearer to the proton charge radius .87×10^(-15) m. This suggests that the nuclei arrange into flux tubes with thickness of the order of proton size - a kind of giant nucleus. A neutron star would be already an analogous structure, but the flux tube tangle would not be so dense.

    Denoting the number of protons by N, the length of the flux tube would be L1 ≈ N×lp == x×rS (lp denotes the proton Compton length and lPl the Planck length below) and the mass would be N×mp. This would give x as x = (lp/lPl)^2 ∼ 10^38. Note that the ratio of the volume filled by the flux tube to the M4 volume VS defined by rS is

    Vtube/VS = (3/8)×(lp/lPl)^2×(lp/rS)^2 ∼ 10×(rS(Sun)/rS)^2 .

    The condition Vtube/VS < 1 gives a lower bound to the Schwarzschild radius of the object and therefore also to its mass: rS > 10^(1/2)×rS(Sun) and M > 10^(1/2)×M(Sun). The lower bound means that the flux tube fills the entire M4 volume of the blackhole. A blackhole would be a volume filling flux tube with the maximal mass density of protons (or rather neutrons) per length unit and therefore a natural endpoint of stellar evolution. The known lower limit for the mass of a stellar blackhole is a few stellar masses (see this) so that the estimate makes sense.

  5. An objection against this picture is posed by very low mass stars with masses below .5M(Sun) (see this), not allowed for k ≥ 107. They are formed in the burning of hydrogen, and the time to reach the white dwarf state is longer than the age of the universe. Could one give up the condition that the flux tube volume is not larger than the volume of the star? Could one have dark matter in the sense of an n2-sheeted covering over M4 increasing the flux tube volume by a factor n2?

  6. This picture does not exclude star like structures realized in terms of the analogs of protons for scaled up variants of hadron physics. M89 hadron physics would have a mass scale scaled up by a factor 512 with respect to standard hadron physics characterized by the Mersenne prime M107. The mass scale would correspond to the LHC energy scale, and there is evidence for a handful of bumps having an interpretation as M89 mesons. It is of course quite possible that M89 baryons are unstable against transforming to M107 baryons.

  7. The model for a star (see this), inspired by a 10 year old nuclear physics anomaly, led to the picture that protons form - at least in the core - dark proton sequences associated with the flux tube, and that the scaled up Compton length of the proton is rather near to the Compton length of the electron: there would be a zooming up of the proton by a factor of about 2^11 ∼ mp/me. The formation of a blackhole would mean a reduction of heff by a factor of about 2^(-11) making the dark protons and neutrons ordinary.
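The arithmetic of the scaling argument in item 4 is easy to redo. The following sketch uses only the rough input values quoted in the text, so the result agrees with the quoted r1 ≈ .7×10^(-15) m only up to rounding:

```python
import math

L  = 1e26   # length scale 1/sqrt(Lambda) of the cosmological constant, m
r  = 1e-4   # length scale rho**(-1/4) of the dark energy density, m
L1 = 3e3    # r_S for the Sun, m

# scaling law r/r1 = (L/L1)**(1/2) from the text
r1 = r * math.sqrt(L1 / L)
print(f"flux tube thickness ~ {r1:.1e} m")  # of the order of proton size
```

The result lands between 10^(-16) m and 10^(-15) m, i.e. at the proton length scale, as the text claims.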

Can one see also stars as blackhole like entities?

The assignment of blackholes to almost any physical object is very fashionable, and the universality of the flux tube structures encourages one to ask whether the stellar evolution to blackhole as flux tube tangle could involve discrete steps involving blackhole like entities but with larger Planck constant and with larger radius of flux tube.

  1. Could one regard stellar objects as blackholes labelled by various values of Planck constant heff? Note that heff is determined essentially as the dimension n of the extension of rationals (see this and this). The possible p-adic length scales would correspond to the ramified primes of the extension. p-Adic length scale hypothesis selects preferred length scales as p≈ 2^k, with prime values of k preferred. Mersennes and Gaussian Mersennes nearest to powers of 2 would be favoured.

    The most general hypothesis is that all values of k in the range [107,127] are allowed: this would give a half-octave spectrum for p-adic length scales. If only odd values of k are allowed, one obtains an octave spectrum.

  2. The counterpart of the Schwarzschild radius would be rS(k) = (L(k)/L(107))^2 rS, corresponding to the scaling of the maximal string tension proportional to 1/G by (L(107)/L(k))^2, where k is consistent with the p-adic length scale hypothesis.

    The flux tube area would be scaled up to L(k)^2 = 2^(k-107) L(107)^2, and the constant x == x(107) would scale to x(k) = 2^(k-107) x. The scaling guarantees that the condition V(tube)/VS < 1 does not change at all, so that the same lower bound on mass is obtained. Note that the argument does not give an upper bound on the mass of the star, and this conforms with the surprisingly large masses participating in the fusions of blackholes producing the gravitational radiation detected at LIGO.

  3. The favoured p-adic length scales between the p-adic length scale L(107) assignable to blackholes and L(127) corresponding to the electron Compton length assignable to the solar interior are the length scale L(113) = 8L(107) assignable to nuclei, and the length scale L(109) with p near 2^109.

    1. For k=109 (assignable to deuteron) the Schwarzschild radius would be scaled up by a factor 4 to about 12 km, to be compared with the typical radius of a neutron star, about 10 km. The masses of neutron stars are around 1.4 solar masses, which is rather near to the lower bound derived for blackholes. The neutron star could be seen as the last step in the sequence of p-adic phase transitions leading to the formation of a blackhole.

    2. Could the k=113 phase precede the neutron star phase and perhaps appear as an intermediate step in supernovae? Assuming that the flux tubes consist of nucleons (rather than nuclei), one would have rS(113) = 64 rS, giving in the case of the Sun rS(113) = 192 km.

    3. For k=127 the p-adic scaling from k=107 would give a Schwarzschild radius rS(127) ∼ 2^20 rS. For the Sun this would give rS(127) = 3× 10^9 m, roughly a factor of 4 larger than the solar photosphere radius of 7× 10^8 m. k=125 gives a correct result. This suggests that k=127 corresponds to the minimal value of temperature for ordinary fusion and to the value of dark nuclear binding energy at magnetic flux tubes.

      The evolution of stars increases the fraction of heavier elements created by hot fusion, and also the temperatures are higher for stars of later generations. This suggests that the value of k is gradually reduced in stellar evolution and the temperature increases as T ∝ 2^((127-k)/2). The Sun would be in the second or third step as far as the evolution of temperature is considered. Note that the lower bound on the radius of the star allows also larger radii, so that the allowance of smaller values of k does not lead to problems.
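The scaled Schwarzschild radii quoted in the items above follow from rS(k) = 2^(k-107) rS(107). A minimal numerical sketch, assuming rS(Sun) ≈ 3 km:

```python
# p-adic scaling of the Schwarzschild radius: r_S(k) = 2^(k-107) * r_S(107),
# since L(k)^2 = 2^(k-107) * L(107)^2. Assumes r_S(Sun) ~ 3 km.
r_S_107_m = 3.0e3  # k=107: ordinary Schwarzschild radius of the Sun, meters

def r_S(k, r_S_107=r_S_107_m):
    """Scaled Schwarzschild radius for p-adic length scale index k."""
    return 2 ** (k - 107) * r_S_107

for k, label in [(109, "compare: neutron star radius ~10 km"),
                 (113, "possible supernova intermediate step"),
                 (125, "compare: solar radius ~7e8 m"),
                 (127, "minimal ordinary-fusion temperature")]:
    print(f"k={k}: r_S(k) = {r_S(k):.2e} m  ({label})")
```

The k=109, 113, and 127 values reproduce the 12 km, 192 km, and 3×10^9 m figures quoted above, and k=125 indeed lands near the solar radius.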


2. What about blackhole thermodynamics?

Blackhole thermodynamics is part of the standard blackhole paradigm. What is the fate of this part of theoretical physics in light of the proposed model?

2.1. TGD view about blackholes

Consider first the natural picture implied by the vision about the blackhole as a space-filling flux tube tangle.

  1. The flux tubes are deformations of cosmic strings characterized by a cosmological constant, which increases in the sequence of phase transitions increasing the temperature of the stellar core. The vibrational degrees of freedom are excited and characterized by a temperature. The large number of these degrees of freedom suggests the existence of a maximal temperature known as the Hagedorn temperature, at which the heat capacity approaches infinity so that the pumping of energy does not increase the temperature anymore.

    The straightforward dimensionally motivated guess for the Hagedorn temperature suggested by the p-adic length scale hypothesis is T = xℏ/L(k), where x is a numerical factor. For blackholes as k=107 objects this would give a temperature of order 224 MeV for x=1. Hadron physics gives experimental evidence for a Hagedorn temperature of about T=140 MeV, near the pion mass and near the scale determined by ΛQCD, which would naturally relate to the hadronic value of the cosmological constant Λ.

    The actual temperature could of course be lower than the Hagedorn temperature, and it is natural to imagine that the blackhole cools down. The Hagedorn temperature and also the actual temperature would increase in the phase transition k → k-1 increasing the value of Λ(k) by a factor of 2.

  2. The overall view would be that the thermal excitations of the cosmic string die out via emissions, perhaps assignable to blackhole jets, with energy also flowing to the cosmic string, until a state function reduction decreasing the value of k occurs, and the process repeats itself.

    The naive idea is that this process eventually leads to an ideal cosmic string having Hagedorn temperature T = ℏ/R and possibly existing at a very low actual temperature: this would conform with the idea that the process is the time reversal of the evolution leading from cosmic strings to astrophysical objects as flux tube tangles. This would at least require a phase transition replacing M107 hadron physics with M89 hadron physics, and the latter with subsequent hadron physics. One must of course consider all values of k as possible options, as in the case of the evolution of the star. The hadron physics assignable to Mersenne primes and their Gaussian counterparts would only be especially stable against a phase transition increasing Λ(k).
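The p-adic scaling of the Hagedorn temperature in the k → k-1 steps can be sketched numerically; the normalization T_H(107) ≈ 224 MeV (x=1) is taken from the text, the rest is arithmetic:

```python
# Hagedorn temperature ladder T_H(k) = T_H(107) * 2^((107-k)/2), with the
# x=1 normalization T_H(107) ~ 224 MeV quoted in the text. Each k -> k-1
# phase transition doubles Lambda(k) and raises T_H by sqrt(2).
T_H_107 = 224.0  # MeV

def T_H(k):
    """Hagedorn temperature for p-adic length scale index k, in MeV."""
    return T_H_107 * 2 ** ((107 - k) / 2)

print(f"k=107: {T_H(107):.0f} MeV")      # ordinary (M107) hadron physics
print(f"k=89 : {T_H(89)/1000:.0f} GeV")  # M89 hadron physics, ~LHC scale
```

The k=89 value landing near 10^2 GeV is the same factor-512 mass scaling invoked for M89 hadron physics earlier in the text.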

2.2. What happens to blackhole thermodynamics in TGD?

Blackhole thermodynamics (see this) has produced admirable amounts of literature over the years. What is the fate of blackhole thermodynamics in this framework? It turns out that the dark counterpart of Hawking radiation makes sense if one accepts the notion of gravitational Planck constant assigned to gravitational flux tubes and depending on the masses assignable to the flux tube. The condition that the dark Hawking radiation and the flux tubes at Hagedorn temperature are in thermal equilibrium implies TB,dark = TH. The emerging prediction for TH is consistent with the value of the hadronic Hagedorn temperature.

  1. In standard blackhole thermodynamics the blackhole temperature TB, identifiable as the temperature of Hawking radiation (see this), is essentially the surface gravity at the horizon, equal to TB = κ/2π = ℏ/4π rS, and is analogous to the Hagedorn temperature as far as dimensional analysis is considered. One could think of assigning TB to the radial pulsations of the blackhole like object, but it is very difficult to understand how thermal isolation between the stringy degrees of freedom and the radial oscillation degrees of freedom could be possible.

  2. The ratio TB/TH ∼ Lp/4π rS would be extremely small for the ordinary value of Planck constant. The situation however changes if one has

    TB= hbareff/4π rS ,

    with hbareff= nhbar0=hbargr, where hbargr is gravitational Planck constant.

    The gravitational Planck constant hbargr was originally introduced by Nottale (see this and this). It is assignable to a gravitational flux tube (presumably a non-monopole flux tube) connecting dark mass MD and mass m (MD and m touch the flux tube but do not define its ends, as assumed originally) and is given by

    hbargr= GMDm/v0 ,

    where v0<c is a velocity parameter. For the Bohr orbit model of inner planets Nottale assumes MD = M(Sun) and β0 = v0/c ≈ 2^-11. For blackholes one expects that β0<1 is not too far from β0=1.

    The identification of MD is not quite clear. I have considered the problem of how v0 and MD are determined (see this and this). For the inner planets of the Sun one would have β0 ∼ 2^-11 ∼ me/mp. Note that the size of the dark proton would be that of the electron, and one could perhaps interpret 1/β0 as the ratio heff/ℏ assignable to dark protons in the Sun. This would solve the long standing problem of the identification of β0.

  3. One would obtain for the Hawking temperature TB,D of dark Hawking radiation with heff=hgr

    TB,D= (ℏgr/ℏ) TB= (1/8π β0)× (MD/M) × m .

    For k=107 blackhole one obtains

    TB,D/TH = ( ℏgr/ℏ)× TB× (L(107)/xℏ)= (1/8π β0(107))× (MD/M) × (L(107)m/xℏ) .

    For m=mp this gives

    TB,D/TH = (ℏgr/ℏ) TB× (L(107)/xℏ)= (1/8π x β0(107))× (MD/M) × (mp/224 MeV) .

    The order of magnitude of thermal energy is determined by mp. The thermal energy of dark Hawking photon would depend on m only and would be gigantic as compared to that of ordinary Hawking photon.

  4. Thermal equilibrium between flux tubes and dark Hawking radiation looks very natural physically. This would give

    TB,D/TH=1

    giving the constraint

    (ℏgr/ℏ) TB ×( L(107)/xℏ)= (1/8π x β0)× (MD/M) (mp/224 MeV)=1 .

    on the parameters. For M/MD=1 this would give xβ0≈ 1/6.0 conforming with the expectation that β0 is not far from its upper limit.

  5. If ordinary stars are regarded as blackholes in the proposed sense, one can assign dark Hawking radiation also to them. The temperature is scaled down by L(107)/L(k), and for the Sun this would give a factor L(107)/L(125) = 2^-9 if one requires that rS(k) corresponds to the solar radius. This would give

    TB(dark,k)→ (ℏgr/ℏ)× (L(107)/L(k)) TB= (2(k-107)/2/8π β0)× (MD/M) × m .

    For k=125 and MD= M this would give TB(dark,125)= m/2π.

    The condition TB,D = TH for k=125 would require the scaling of β0(107) to β0(125) = 2^-9 β0(107) ≈ 2^-11. This would give β0(107) ≈ 1/4, in turn giving x ≈ .66 implying TH ≈ 149 MeV. The replacement of mp = 1 GeV with the correct value .94 GeV improves the value. This value is consistent with the value of the hadronic Hagedorn temperature, so that there is remarkable internal consistency involved, although a detailed understanding is lacking.


  6. The flux of ordinary Hawking thermal radiation is TB^4/ℏ^3. The flux of dark Hawking photons would be TB,dark^4/ℏgr^3 = (ℏgr/ℏ) TB^4/ℏ^3 and therefore extremely low also now. In principle however the huge energies of the dark Hawking quanta might make them detectable. I have already earlier proposed that TB(ℏgr) could be assigned to gravitational flux tubes, so that thermal radiation from the blackhole would make sense as dark thermal radiation having much higher energies.

    One can however imagine a radical re-interpretation. BHE is not the thermal object emitting thermal radiation, but BHE plus gravitational flux tubes form the object carrying thermal radiation at temperature TH = TB. For this option dark Hawking radiation could play a fundamental role in quantum biology, as will be found.

  7. What about the analog of blackhole entropy given by

    SB = A/4G = π rS^2/lPl^2 ,

    where A = 4π rS^2 is the blackhole surface area? This corresponds intuitively to the holography inspired idea that the horizon decomposes into bits with area of order lPl^2.

    The flux tube picture does not support this view. One can however ask whether the volume filling property of the flux tube could effectively freeze the vibrational degrees of freedom of the flux tubes, or whether these degrees of freedom are thermally frozen for an ideal blackhole. If so, only the ends of the flux tubes at the surface or their turning points (in case they turn back) can oscillate radially. This would give an entropy proportional to the area of the surface but using the flux tube transversal area as a unit. This would give, apart from a numerical constant,

    SB= A/4L(k)2 .
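The numerical claims in items 2-5 above can be checked with a few lines of arithmetic (a sketch using only inputs quoted in the text; the physical constants are approximate):

```python
import math

# Consistency checks for the dark Hawking radiation picture.
G = 6.674e-11        # m^3 kg^-1 s^-2
hbar = 1.055e-34     # J s
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg
m_p_kg = 1.673e-27   # proton mass, kg
m_p_MeV = 940.0      # proton mass, MeV
T_H_107 = 224.0      # Hagedorn temperature for k=107, x=1, MeV

# 1. Magnitude of hbar_gr/hbar = G*M_D*m/(v0*hbar) for Sun + proton,
#    with Nottale's beta0 = 2^-11:
v0 = 2 ** -11 * c
ratio = G * M_sun * m_p_kg / (v0 * hbar)
print(f"hbar_gr/hbar ~ {ratio:.1e}")   # gigantic, of order 1e22

# 2. Thermal equilibrium T_B,dark = T_H with M_D = M gives
#    (1/(8*pi*x*beta0)) * (m_p/224 MeV) = 1, i.e.:
x_beta0 = m_p_MeV / (8 * math.pi * T_H_107)
print(f"x*beta0 = {x_beta0:.3f} ~ 1/6.0")

# 3. beta0(125) = 2^-9 * beta0(107) = 2^-11 fixes beta0(107) = 1/4,
#    hence x ~ 0.66-0.67 and T_H = x * 224 MeV ~ 149 MeV:
beta0_107 = 2 ** -11 / 2 ** -9
x = x_beta0 / beta0_107
print(f"beta0(107) = {beta0_107}, x = {x:.2f}, T_H ~ {x * T_H_107:.1f} MeV")
```

The final line reproduces the ≈149 MeV figure quoted in item 5, close to the hadronic Hagedorn temperature.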

2.3. Constraint from ℏgr/ℏ>1

Under what conditions can a mass m interact quantum gravitationally, and thus appear in hgr, for a given MD?

  1. The notion of hgr makes sense only for hgr>h. If one has hgr<h, one assumes hgr=h. An alternative would be hgr → h0 = h/6 for hgr<h0. This would give GMDm/v0 > ℏmin (ℏmin = ℏ or ℏ/6), leading to

    m>( β0ℏ/2rS(MD)) × (ℏmin/ℏ) .

    This condition is satisfied in the case of stellar blackholes for all elementary particles.

  2. One can strengthen this condition so that it would be satisfied also for the gravitational interactions of two particles with the same mass (MD=m). This would give

    m/mPl > β0^(1/2) .

    For β0=1 this would give m = mPl, which corresponds to the mass scale of a large neuron and to the size scale 10^-4 m. β0(125) = 2^-11 gives the mass scale of a cell and a size scale about 10^-5 meters. β0(127) ≈ 2^-12, corresponding to the minimum temperature making hot fusion possible, gives a length scale about 10^-6 m of the cell nucleus. A possible interpretation is that structures in cellular length scales have quantum gravitational interactions via gravitational flux tubes. Biological length scales would be in a special position from the point of view of quantum gravitation.

  3. Also interactions of structures smaller than the size of the cell nucleus with structures of size larger than the size of the cell nucleus are possible. By writing the above condition as (m/mPl)(MD/mPl) > β0, one sees that from a given solution to the condition one obtains solutions by scaling m → xm and MD → MD/x. For β0(127) ≈ 2^-11 corresponding to the scale of the cell nucleus, the atomic length scale 10^-10 m and the length scale 10^-4 m of a large neuron would correspond to each other as "mirror" length scales. There would be no quantum gravitational interactions between structures smaller than the cell nucleus. There would be a master-slave relationship: the smaller the scale of the slave, the larger the scale of the master.
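The β0=1 case of the bound m/mPl > β0^(1/2) in item 2 can be checked numerically; the size estimate assumes, purely for illustration, an object with the density of water:

```python
import math

# For beta0 = 1 the bound m/m_Pl > beta0^(1/2) gives m ~ m_Pl.
# The corresponding size scale assumes water density (an illustration only).
hbar = 1.055e-34     # J s
c = 2.998e8          # m/s
G = 6.674e-11        # m^3 kg^-1 s^-2
rho_water = 1.0e3    # kg/m^3

m_Pl = math.sqrt(hbar * c / G)        # Planck mass, ~2.2e-8 kg
L = (m_Pl / rho_water) ** (1 / 3)     # linear size at water density
print(f"m_Pl = {m_Pl:.2e} kg, L ~ {L:.1e} m")
```

The resulting size, a few times 10^-4 m, is indeed the large-neuron scale quoted in item 2.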

2.4. Quantum biology and dark Hawking radiation

The scaling formula β0(k) ∝ 1/L(k), with the flux tube thickness scale given by L(k), allows one to estimate β0(k). In this manner one obtains also biologically interesting length scales. An interesting question is whether the scales for the velocities of Ca waves (see this) and the nerve pulse conduction velocity could relate to v0.

  1. The flux tube thickness of about 10^-4 m, which corresponds to the ordinary cosmological constant (being in this sense maximal), corresponds to the p-adic length scale k=171. The scaling β0 ∝ 1/L(k) gives v0(171) ∼ 4.7 μm/s. In eggs the velocity of Ca waves varies in the range 5-14 μm/s, which roughly corresponds to the range k ∈ {171,170,169,168}.

    In other cells the Ca wave velocity varies in the range 15-40 μm/s. k=165 corresponds to 37.7 μm/s, near the upper bound 40 μm/s. The lower bound corresponds to k=168. For k=167, which corresponds to the largest Gaussian Mersenne in the series k ∈ {151,157,163,167}, the velocity is 75 μm/s.

  2. k=127 gives v0 ∼ 75 m/s. k=131 corresponds to v0 = 18 m/s. These velocities could correspond to conduction velocities for nerve pulses, in accordance with the view that the smaller the slave, the larger the master.
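The velocity scaling β0(k) ∝ 1/L(k) used in the items above amounts to v0(k) = v0(171)·2^((171-k)/2). A sketch normalized to the quoted Ca-wave anchor v0(171) ≈ 4.7 μm/s (the normalization is taken from the text; absolute values for other k depend on it):

```python
# Velocity scaling v0(k) = v0(171) * 2^((171-k)/2), normalized to the
# Ca-wave anchor v0(171) ~ 4.7 micrometers/s quoted in the text.
v0_171 = 4.7e-6  # m/s

def v0(k):
    """Scaled velocity parameter for p-adic length scale index k."""
    return v0_171 * 2 ** ((171 - k) / 2)

print(f"k=171: {v0(171)*1e6:.1f} um/s")  # lower end of Ca waves in eggs
print(f"k=165: {v0(165)*1e6:.1f} um/s")  # near the 40 um/s upper bound
```

Each step of two in k scales the velocity by a factor of 2, which is why a short ladder of k values spans the observed Ca-wave range.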

I have already earlier considered the possibility that dark Hawking radiation could have an important role in living matter. The Hawking/Hagedorn temperature assuming x=1/6.0 and k=171 has peak energy 38 meV to be compared with the membrane potential varying in the range 40-80 meV. Room temperature corresponds to 34 meV. For k=163, defining a Gaussian Mersenne, one would have a peak energy about .6 eV: the nominal value of the metabolic energy quantum is .5 eV. k=167 corresponds to .15 eV and to 8.6 μm - cell size. Even the dark photons proposed to give rise to bio-photons when transforming to ordinary photons could be seen as dark Hawking radiation: the Gaussian Mersenne k=157 corresponds to 4.8 eV in UV. Could CMB, having peak energy of .66 meV and peak wavelength of 1 mm, correspond to Hawking radiation associated with k=183? Interestingly, the cortex contains 1 mm size structures.

To sum up, these considerations suggest that biological length scales defined by flux tube thickness and cosmological length scales defined by cosmological constant are related.

See the article Cosmic string model for the formation of galaxies and stars or the chapter of "Physics in many-sheeted space-time" with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.



The relationship between p-adic time and discrete flow of geometric time defined by quantum jumps

There could be a relationship between the quantal flow of geometric time defined by SSFRs and the p-adic variants of the time coordinate, giving a reason for p-adicity.

  1. TGD predicts geometric time as a real variant and p-adic variants in extensions of various p-adic number fields induced by a given extension of rationals (adelic space-time and adelic geometric time). Real and p-adic times share discrete points in the extension of rationals considered: roots of octonionic polynomials defining space-time surfaces as roots for their "real" and "imaginary" parts in the quaternionic sense (see this). The roots of the real polynomial with rational coefficients giving the octonionic polynomial as its continuation define moments of M4 linear time assignable to special SSFRs. p-Adic times associated with the p-adic balls assignable to these points are not well-ordered. One cannot tell which of two moments of time is earlier and which later.

  2. This could relate to the corresponding lack of well-ordering for the "clock time" associated with self at a given level of the evolutionary hierarchy defined by the extension of rationals. The increase of "clock time" as the distance between the tips of CD for a sequence of small state function reductions (weak measurements) occurs only in statistical sense, and "clock time" can also decrease. The moments of time corresponding to the roots of the real polynomial define "special moments in the life of self", one might say.

    At the limit of an infinite-dimensional extension the roots of the polynomial define algebraic numbers forming a dense set in the set of reals. The cognitive representation becomes a dense set. These "special moments" need not however become dense.

  3. One can raise an interesting question inspired by self inspection. As one types text, it often happens that the letters of a word come out in the wrong order, change places, and even jump from one word to another. The experienced order of letters assignable to a sequence of SSFRs is not the same as the order of letters representing the order of the moments of geometric time. When one is tired, the phenomenon is enhanced.

    Neuroscientists can certainly propose an explanation for this. But could this be at a deeper level a quantum effect based on the above mechanism, with a description in terms of the p-adicity assignable to a prime p defining a ramified prime for the extension of rationals involved? When one is tired, the metabolic resources have petered out, and the IQs n = heff/h0 defined by the dimensions of extensions of rationals tend to be reduced for the distribution of extensions; the cognitive resolution for time becomes lower and mistakes of this kind become worse.
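The lack of well-ordering for p-adic time invoked above can be illustrated with the standard p-adic norm; this is generic number theory, not anything TGD-specific:

```python
# The p-adic norm |n|_p = p^(-v_p(n)) is an ultrametric: it measures
# divisibility by p, not magnitude, so it induces no linear ordering
# of "moments" the way the real absolute value does.
def p_adic_norm(n, p):
    """|n|_p = p^(-k), where p^k is the largest power of p dividing n."""
    if n == 0:
        return 0.0
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return float(p) ** (-k)

# Integers that are "later" in the real ordering can be p-adically
# closer to 0 than "earlier" ones:
print(p_adic_norm(3, 2))   # 1.0   : 3 is a 2-adic unit
print(p_adic_norm(8, 2))   # 0.125 : 8 is 2-adically small
```

So 8 is 2-adically much nearer to 0 than 3 is, even though 8 > 3 as real numbers: p-adic distance carries no "earlier/later" information.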

See the article Latest progress in TGD or the chapter Zero Energy Ontology and Matrices of "Towards M- matrix".


Cosmological time anomalies and zero energy ontology

Zero energy ontology (ZEO) replaces the ordinary ontology in the TGD based view about quantum states and quantum jumps (see this). It came as a surprise that the new view about time allows one to understand several time anomalies of cosmology. I have already earlier considered the strange findings of Minev et al about state function reduction in atomic systems in terms of ZEO (see this). In neuroscience, Libet's finding that volition is preceded by neural activity can be understood too. Earthquakes and volcanic eruptions also involve strange time anomalies (see this). The general explanation of the time anomalies would be ZEO plus quantum coherence up to cosmological scales made possible by dark matter in the TGD sense. This would give a completely unexpected direct connection between TGD inspired theory of consciousness and cosmology. The following considerations can be found in the article "Cosmic string model for the formation of galaxies and stars".

Consider first ZEO.

  1. In ZEO zero energy states are superpositions of space-time surfaces inside a causal diamond (CD) identified as preferred extremals of the basic action principle of TGD. CD is the Cartesian product of a causal diamond cd of M4 and of CP2. The preferred extremals analogous to Bohr orbits have boundaries - the ends of space-time - at the light-like boundaries of CD. There is a fractal hierarchy of CDs, and a given CD is an imbedding space correlate for a conscious entity - self: consciousness is universal.

  2. Zero energy states can be seen as superpositions of state pairs with members assigned to the opposite boundaries of CD. ZEO predicts that in ordinary or "big" state function reductions (BSFRs) the arrow of time of the system changes, whereas it remains unaffected in "small" state function reductions (SSFRs), which are the TGD counterpart of "weak" measurements and are associated with a sequence of unitary evolutions for the state assignable to the active boundary of CD, which also shifts farther from the passive boundary. The passive boundary is unaffected, as are the members of state pairs at it.

  3. Subjective time is identified as a sequence of SSFRs and correlates strongly with clock time, identifiable as the distance between the tips of CD and increasing in statistical sense during the sequences of SSFRs.

  4. BSFR corresponds to a state function reduction at the active boundary of CD, which becomes passive. This forces the state at the formerly passive boundary to change; the passive boundary becomes active. BSFR means the death of self and reincarnation with an opposite arrow of time. Thus the notion of life cycle is universal and life can be lived in both directions.

  5. What happens to CD in long run? There are two options.

    1. The original assumption was that the location of the formerly passive boundary is not changed. This would mean that the size of CD would increase steadily and the outcome would eventually be a cosmology: this sounds counter-intuitive. Classically, energy and other Poincare charges are conserved for a single preferred extremal; conservation could fail in BSFRs due to the fact that zero energy states cannot be energy eigenstates.

    2. The alternative view, suggested strongly by M8-H duality (see this), is that the size of CD is reduced in BSFR so that the new active boundary can be rather near to the new passive boundary. One could say that the reincarnated self experiences childhood. In this case the size of CD can remain finite and its location in M8 more or less fixed. One can say that the self associated with the CD is in a kind of Karma's cycle, living its life again and again. Since the extension of rationals can change in BSFR, and since the number of extensions larger than a given extension is infinitely larger than the number of those smaller than it, the dimension of the extension identifiable in terms of effective Planck constant increases. Since n = heff/h0 serves as a kind of IQ, one can say that the system becomes more intelligent.

Cosmic redshift but no expansion of receding objects: one further piece of evidence for TGD cosmology

"Universe is Not Expanding After All, Controversial Study Suggests" was the title of very interesting Science News article (see this) telling about study, which forces to challenge Big Bang cosmology. The title of course involved the typical exaggeration.

The idea behind the study was simple. If the Universe expands and astrophysical objects - such as stars and galaxies - also participate in the expansion, they should increase in size. The observation was that this does not happen! One however observes the cosmic redshift, so that it is too early to start burying Big Bang cosmology. This finding is however a strong objection against the strongest version of the expanding Universe. That objects like stars do not participate in the expansion was actually known already when I developed TGD inspired cosmology a quarter century ago, and the question is whether GRT based cosmology can model this fact naturally or not.

The finding supports TGD cosmology based on many-sheeted space-time. Individual space-time sheets do not expand continuously. They can however expand in a jerk-wise manner via quantum phase transitions increasing the p-adic prime characterizing the space-time sheet of the object by say a factor of two, or increasing the value of heff = n× h for it. This phase transition could change the properties of the object dramatically. If the object and its suddenly expanded variant are not regarded as states of the same object, one would conclude that astrophysical objects do not expand but only comove. The sudden expansions should be observable and happen also for Earth. I have proposed a TGD variant of the Expanding Earth hypothesis along these lines (see this).

Stars as reincarnating conscious entities

One can apply ZEO to the evolution of stars. The basic story (see this) is that the star is formed from an interstellar gas cloud, evolves, and eventually collapses to a white dwarf with a degenerate carbon-oxygen core, to a supernova, or even to a blackhole if the mass of the remnant resulting from the explosion throwing the outer layers of the star away is in the range of 3-4 solar masses. Only very massive stars end up as supernovas. The type of the star depends on the abundances of various elements in the interstellar gas from which it formed, believed to contain heavier elements produced by earlier supernovas.

There are however several anomalies challenging the standard story. There are stars older than the Universe (see this). There is also evidence that the abundances of heavier elements in the early cosmology are essentially the same as for modern stars (see this). The TGD based explanation was discussed earlier.

Karma's cycle option for the stellar evolution could explain these anomalies.

  1. Stars would be selves in Karma's cycle with their magnetic bodies reincarnating with a reversed arrow of time in a collapse to a blackhole/white hole like entity (BHE/WHE) - depending on the arrow of time. This would follow a stellar evolution leading to an asymptotic state corresponding to the maximum size of CD, followed by a collapse to BHE or WHE. Also ordinary stars would correspond to BHEs/WHEs characterized by a p-adic length scale L(k) longer than the scale L(107) assignable to GRT blackholes. In the standard time direction WHE would look like blackhole evaporation.

  2. This would allow stars older than the Universe and suggests also universal abundances. Note however that the abundances would strongly depend on the abundances of the interstellar gas and the matter produced by the magnetic energy of the flux tube. "Cold fusion" as dark fusion could produce elements heavier than Fe and the light elements Li, Be, B, whose abundances for fusion in the stellar core are predicted to be much smaller than the observed abundances in the case of old stars. The lifetimes of stars depend on their type. Also a universal age distribution of stars in stellar clusters, not depending appreciably on cosmic time, is highly suggestive. I remember even writing about this. Unfortunately I could not find the article.

To put it more generally, the hierarchy of CDs implies that the Universe decomposes effectively into sub-Universes behaving to some degree independently. The view about Karma's cycles provides a more precise formulation of the pre-ZEO idea that systems are artists building themselves as 4-D sculptures. In particular, this applies to mental images in the TGD based view about the brain.

  1. One could perhaps say that also quantum non-determinism has classical correlates. CDs would be the units for which time-reversing BSFRs are possible. Also SSFRs affecting CDs could have classical space-time correlates. M8-H duality predicts that the time evolution for a space-time surface inside CD decomposes to a sequence of deterministic evolutions glued together along t=rn hyperplanes of M4 defining special moments in the life of self, at which the new larger CD receives a new root t=rn. The non-deterministic discontinuity could be localized to the 2-D vertices represented by partonic 2-surfaces at which the ends of light-like partonic orbits meet.

  2. The M4 hyperplanes t=rn correspond to the roots of a real polynomial with rational coefficients defining the space-time surfaces at the level of M8 as roots for the real or imaginary part in quaternionic sense for the octonionic continuation of the polynomial. These moments of time could correspond to SSFRs.

  3. The finite classical non-determinism is in accordance with the classical non-determinism predicted at the limit of an infinitely large CD and vanishing cosmological constant, at which the classical action reduces to Kähler action having a huge vacuum degeneracy due to the fact that any space-time surface having a Lagrangian manifold (vanishing induced Kähler form) as CP2 projection is a vacuum extremal. The interpretation of this degeneracy in terms of 4-D spin glass degeneracy would be that at the limit of an infinitely large CD the extension of rationals approaches the algebraic numbers, the roots t=rn become dense, and the dynamics becomes non-deterministic for vacuum extremals, implying non-determinism also for non-vacuum extremals.

No time dilation for the periods of processes of quasars

There are strange findings about the time dilation of quasar dynamics challenging the standard cosmology. One expects that the farther the object is, the slower its dynamics looks as seen from Earth. Lorentz invariance implies a redshift for frequencies, and in the time domain this means the stretching of time intervals, so that the evolution of distant objects should look the slower the longer their distance from the observer is. In the case of supernovae this seems to be the case. What was studied now were quasars at distances of 6 and 10 billion light years, and the time span of the study was 28 years. Their light was redshifted by different amounts, as one might expect, but their evolution went on in exactly the same rhythm. This looks really strange.

In GRT the redshift violates conservation of four-momentum. In TGD the cosmic redshift reduces to the fact that the tangent spaces of the space-time surface for target and receiver differ by a Lorentz boost. The redshift does not mean non-conservation of four-momentum but only that the reference frames are different for target and observer. The size of the space-time sheets assignable to the systems considered must be large, of the order of the size scale L defined by the size of the recent cosmology to which one assigns the Hubble constant. In the flux tube picture this means that the flux tubes have length of order L but thickness about R = 10^-4 m - the size scale of a large neuron. Photons arrive along flux tubes connecting the distant systems. Note that CMB corresponds to a 10 times longer peak wavelength.
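The statement that the redshift arises from a Lorentz boost relating tangent spaces, with four-momentum conserved, can be illustrated with the standard special-relativistic Doppler formula (a purely kinematic sketch, not the full TGD treatment):

```python
import math

# Standard relativistic Doppler shift for a boost with velocity beta = v/c:
# 1 + z = sqrt((1+beta)/(1-beta)). Frequencies redshift because the frames
# differ by a boost, not because four-momentum fails to be conserved.
def redshift(beta):
    """Redshift z for a recession boost with velocity parameter beta."""
    return math.sqrt((1 + beta) / (1 - beta)) - 1

print(f"beta=0.5 -> z = {redshift(0.5):.3f}")  # z = sqrt(3) - 1 ~ 0.732
```

In this picture the same photon four-momentum is simply evaluated in two different frames related by the boost.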

I have already earlier discussed this time anomaly, but what I have written is just a statement of the problem and some speculations about its solution in terms of ZEO. A valuable hint is that the time anomaly appears for quasars - very heavy objects - but not for supernovae - much lighter objects. This suggests that the redshift depends on the masses of the objects considered.

  1. One considers an approximately periodic process. It is quite possible that this process is not a classical deterministic process at the space-time level, but that one has a sequence of SSFRs (weak measurements) or even BSFRs for a subsystem of the target. These processes replace the quantum superposition of space-time surfaces inside CD with a new one, and SSFRs also increase its size in statistical sense. A natural Lorentz invariant "clock time" for the target is the distance between the tips of CD - the light-cone proper time. Both M4 linear coordinates and light-cone Robertson-Walker coordinates are natural coordinates for space-time sheets with 4-D M4 projection.

    "Clock time" must be mapped to M4 linear time for some space-time sheet. The Minkowski coordinates for the CD are determined only modulo a Lorentz boost leaving the light-like boundary of the CD invariant. In general the M4 coordinates of target and observer are related by a Lorentz boost, and this gives rise to the cosmological redshift and also to gravitational redshift.

  2. The information about an SSFR or BSFR at the target must be communicated to the observer, so the space-time sheets in question must be connected by flux tubes carrying the photons. The CD must contain both systems and naturally has the cosmological size given by L, so that the flux tubes have thickness about R. The M4 time coordinate must be common to both systems. The natural system to consider is the center of mass (cm) system, in which the sum of the momenta of the two systems vanishes.
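In the picture above the redshift arises from a relative Lorentz boost between the frames of target and observer. As a minimal numerical illustration of the boost-redshift relation (the special-relativistic Doppler formula, not the full TGD expression):

```python
import math

def redshift_from_boost(beta: float) -> float:
    """Redshift z of a source receding with velocity beta = v/c,
    from the relativistic Doppler formula 1 + z = sqrt((1 + beta)/(1 - beta))."""
    return math.sqrt((1.0 + beta) / (1.0 - beta)) - 1.0

# No relative boost means no redshift; beta = 0.6 gives z = 1,
# i.e. all wavelengths and time intervals are stretched by a factor of 2.
print(redshift_from_boost(0.0))
print(redshift_from_boost(0.6))
```

The same factor (1 + z) that stretches wavelengths also stretches observed time intervals, which is why variability timescales are expected to dilate along with the light.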

Do quasars and galactic blackholes relate by time reversal in ZEO?

This picture, combined with the zero energy ontology (ZEO) based view about ordinary state function reductions changing the arrow of time and occurring even in astrophysical scales, leads to a tentative view about quasars and galactic blackholes as time reversals of each other.

  1. Quasars could be seen as analogs of white holes feeding out the mass of a cosmic string to build the galactic tangle, with part of the mass of the thickening tangle transforming to ordinary matter. They would initiate the formation of the galaxy, meaning the emergence of increasing values of heff in the hierarchy of Planck constants. The cosmic string would basically feed to ordinary matter the mass and energy liberated in the decay of magnetic energy as cosmic strings thicken to flux tubes, this energy serving in the role of metabolic energy driving self-organization.

  2. Galactic blackholes could indeed be analogs of blackholes as time reversals of quasars - a "big" (ordinary) state function reduction would transform the quasar as a white hole to a galactic blackhole. Now the system would be drawing the mass from the surroundings back to the flux tube and maybe to the cosmic string. The process could be like breathing. In zero energy ontology breathing could indeed involve a sequence of states and their time reversals.

This raises also the question whether the evolution of stars could be seen as a time reverse for the formation of blackholes: a kind of growth followed by a decay, perhaps because the values of Planck constant heff would be reduced. The climax of this evolution would correspond to maximal values of heff. The evolution of life would certainly be this kind of climax.

See the article Cosmic string model for the formation of galaxies and stars or the chapter of "Physics in many-sheeted space-time" with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.