Thursday, November 28, 2019

Evidence for the anisotropy of the acceleration of cosmic expansion

Evidence for the anisotropy of the acceleration of cosmic expansion has been reported (see this). Thanks to Wes Johnson for the link. Anisotropy of cosmic acceleration would fit with the hierarchy of scale-dependent cosmological constants predicting a fractal hierarchy of cosmologies within cosmologies down to particle physics length scales and even below.

The phase transitions reducing the value of Λ for a given causal diamond would induce an accelerated, inflation-like period as the magnetic energy of flux tubes decays to ordinary particles. This would give a fractal hierarchy of accelerations in various scales.

See the article Cosmic string model for the formation of galaxies and stars or the chapter of "Physics in many-sheeted space-time" with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Darwinian or neutral theory of evolution or something else?

The stimulus for this posting came from an article reporting that the neutral theory of evolution has been challenged by evidence for DNA selection (see this). I must admit that I had no idea what the neutral theory of evolution means. I had thought that the Darwinian view based on random mutations and selection of the most adaptive ones is the dominant view. The debate has been about whether the Darwinian or the neutral theory of evolution is correct, or whether some new vision is needed.

Darwinian and neutral theories of evolution

  1. Adaptive evolution is the Darwinian view. Random mutations are generated and the organisms with the most adaptive genomes survive. One can of course argue that also the recombination occurring during the meiosis creating germ cells generates new genetic combinations and must be important for evolution. Selection can be either negative (purifying), eliminating the non-adaptive ones, or positive, favoring the reproduction of the adaptive ones.

    One can argue that notions like "fight for survival" and selection do not fit with the idea of organisms as basically inanimate matter having no goals. Also the second law poses problems: no evolution should take place, just the opposite.
    Metabolic energy induces self-organization, but by the second law all gradients - those driving the metabolic energy feed being an example - disappear.

  2. The neutral theory of evolution was proposed by Kimura 50 years ago and gained a lot of support because of its simplicity. Point mutations of the codons of DNA would create alleles. Already in Darwinian evolution it is known that a large fraction of mutations are neutral, having no positive or negative effect on survival. Kimura claimed that essentially all mutations are of this kind. There would be no "fight for survival" or selection.

    The so-called genetic drift, a completely random process, is possible in small populations and can lead to a counterpart of selection: it can happen that only a single allele remains, the counterpart of the winner in selection. This is a purely random, combinatorial effect, and in physics one would not call it drift.

    The first objection is that if one has several isolated small populations, the outcomes are completely random, so that in this sense there is no genetic drift. Furthermore, there is no reason why further mutations would not bring the disappeared alleles back. The second objection is that there would be no genuine evolution - how can one then speak about a theory of evolution?

    Now the feed of experimental and empirical data is huge as compared to what it was 5 decades ago, and it is now known that the neutral theory fails: for instance, the varying patterns of evolution among species with different population sizes cannot be understood. It is also clear that selection and adaptation really occur, so that Darwin was right.

  3. The shortcomings of the neutral theory led Ohta to propose the nearly neutral theory of evolution. Mutations can be slightly deleterious. In large populations this leads to a purging of slightly deleterious mutations. In small populations deleterious mutations are effectively neutral and lead to genetic drift.

    There is however a further problem: why does the rate of evolution vary as observed between different lineages of organisms?

  4. One reason for the fashionability of the neutral theory was that the model was very simple and allowed one to compute and predict. The size of the population and the mutation rate alone would be enough to predict the future of small populations. The predictions have been poor, but this has not bothered the proponents of the neutral theory of evolution.

    As an outsider I see this as a typical example of a fashionable idea: these have plagued theoretical particle physics for four decades now and have led to a practically complete stagnation of the field via hegemony formation. Simple arguments show that the idea cannot be correct, but they have no effect.
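The purely combinatorial character of genetic drift discussed above is easy to demonstrate numerically. The following minimal Wright-Fisher simulation is a standard population-genetics toy model (the parameter values are my own, chosen only for illustration): in a small population a completely neutral allele ends up either fixed or lost by sampling noise alone, and the fraction of runs ending in fixation is simply the allele's initial frequency.

```python
import random

def wright_fisher(pop_size, p0, max_gen=100_000):
    """Evolve a neutral allele until it is fixed (1.0) or lost (0.0)."""
    count = round(p0 * pop_size)
    for gen in range(max_gen):
        if count in (0, pop_size):
            return count / pop_size, gen
        # Each offspring inherits the allele with probability equal to its
        # current frequency: pure sampling noise, no selection whatsoever.
        p = count / pop_size
        count = sum(1 for _ in range(pop_size) if random.random() < p)
    return count / pop_size, max_gen

random.seed(1)
outcomes = [wright_fisher(pop_size=50, p0=0.2)[0] for _ in range(1000)]
# With no selection the fixation probability is simply the initial
# frequency p0 = 0.2; the remaining runs end in loss of the allele.
print("fraction of runs ending in fixation:",
      outcomes.count(1.0) / len(outcomes))
```

Increasing pop_size makes fixation or loss take correspondingly longer, which is why drift matters mostly in small populations.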

The article explains several related notions.
  1. It has been possible to determine mutation rates at the level of individual sites of the genome since 2005. Only a subset of the mutations of, say, cancer cells are functionally important to the cancer, and they can be identified. This leads to selection intensity as a basic notion, expected to be very valuable for attempts to find targeted cancer therapies.

  2. The neutral theory of evolution assumes that only point mutations matter. The theory was therefore completely local at the level of the genome - and certainly simple! An innocent outsider knowing a little bit about biology wonders why the recombination of maternal and paternal chromosomes in meiosis, creating the chromosomes associated with germ cells, is not regarded as important. This mechanism is non-local at the level of the genome and would naturally lead to a selection at the level of individuals of the species. It has indeed been learned that the genetic variation and the rate of recombination in meiosis correlate in a given region of the genome. This sounds almost obvious to the innocent novice but had to be discovered experimentally.

    One can however still try to save the neutral theory of evolution by assuming that recombination is a completely random process and that there is no selection and adaptation - contrary to the experimental facts and the basic idea behind the notion of evolution. Recombination would bring only an additional complication.

    Besides the direct purifying selection and neutral drift there would be recombination creating differences in the levels of variation across the genomic landscape. This leads to the notion of genetic hitchhiking. When beneficial alleles are closely linked to neighboring neutral mutations, selection acts on them as a unit. One speaks of linked selection. The frequencies of neutral alleles are determined by more than genetic drift, but one can still speak of neutrality. The linkage of the hitchhiker to an allele - beneficial or not - is however random. Does genuine evolution take place at all?

  3. Most of the DNA is not expressed as proteins. It would not be surprising if this part of DNA had an important indirect role in gene expression or were perhaps expressed in some other manner - say electromagnetically. How important a role does this part of DNA have in evolution? There are also transposons inducing non-pointlike mutations of this part of DNA: what is their role? There are also proposals that viruses, usually thought to be a mere nuisance, could play a decisive role in evolution by modifying the DNA of host cells.

  4. It is now known that up to 80-85 per cent of the human genome is probably affected by background selection. Moreover, height, skin color, and blood pressure are polygenic properties in the sense that hundreds or thousands of genes act in concert to determine them. This strongly suggests that pointlike mutations cannot be responsible for evolution, and that not even recombinations are enough if they are random. A control of evolution in longer scales seems to be required. This of course relates to the basic problem of molecular biology: what gives rise to the coherence of living matter? Mere bio-chemistry cannot explain this. Something else, perhaps controlling the bio-chemistry, is needed.
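The genetic hitchhiking and linked selection discussed above can be made concrete with a deterministic two-locus toy calculation (my own sketch, not taken from the article, with arbitrarily chosen numbers): under complete linkage with no recombination, a neutral marker allele B rises to high frequency simply because part of its copies happen to sit on the same chromosome as a beneficial allele A.

```python
# Haplotype frequencies under complete linkage (no recombination).
# Locus 1 carries the selected alleles A/a, locus 2 the neutral marker B/b.
# B starts at 10% total frequency; half of its copies are linked to A.
haps = {"AB": 0.05, "aB": 0.05, "ab": 0.90}
s = 0.05  # selection coefficient acting on A only

def generation(haps):
    # Fitness depends only on the first locus; B itself is never selected.
    w = {h: 1.0 + s if h[0] == "A" else 1.0 for h in haps}
    mean_w = sum(f * w[h] for h, f in haps.items())
    return {h: f * w[h] / mean_w for h, f in haps.items()}

for _ in range(100):
    haps = generation(haps)

freq_B = haps["AB"] + haps["aB"]
print(f"neutral marker B: 0.100 initially -> {freq_B:.3f} after hitchhiking")
```

The unlinked copies of B (the aB haplotype) decline together with ab, so the rise of B is entirely due to its random linkage to A - which is exactly why one can still call B neutral.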

TGD based view about evolution

One can start by criticizing the standard view.

  1. Is the standard view (to the extent that such exists) about evolution consistent with the second law? One can even ask whether the standard view about thermodynamics assuming a fixed arrow of time is correct.

  2. If mutations and more general changes of the genome occur by pure chance, can they really lead to a genuine evolution? The notions of selection and survival of the fittest do not conform with the view of evolution as mere standard physics. A probable motivation for the neutral theory of evolution has been the attempt to get rid of these notions: physicalism taken to the extreme.

  3. The reduction of life to bio-chemistry does not allow one to understand the coherence of organisms.

  4. One can also criticize the reduction of life to mere genetics.

    1. The genetic dogma does not tell much about morphogenesis.
    2. Is genetic determinism a realistic assumption? Clones of a bacterium are now known to have personalities, behaving differently under given conditions (see this).
    3. Most of the genome of higher organisms consists of DNA not transcribed to RNA, still interpreted as junk by some biologists. What about introns? Could there exist other forms of gene expression - say electromagnetic?

The TGD based view about evolution can be seen as a response to these criticisms, but it actually developed from a proposal for a unification of fundamental interactions and from a generalization of quantum measurement theory leading to a theory of consciousness and a generalization of quantum theory itself.

  1. TGD leads to a new view about space-time and classical fields. In particular, many-sheeted space-time and the magnetic body bring in new elements dramatically changing the views about biology.

    The notion of Maxwellian fields is modified. Unlike in Maxwellian theory, any system has a field identity, the field body, in particular the magnetic body (MB) carrying dark matter in the TGD sense and in a well-defined sense at a higher evolutionary level as compared to ordinary bio-matter. This expands the standard pairing organism-environment to the triple MB-organism-environment.

    MB can be seen as the controlling intentional agent, and its evolution would induce also the evolution of the ordinary bio-matter. MB carries dark matter as heff/h0=n phases giving rise to macroscopic quantum coherence at the level of MB. MB forces the ordinary bio-matter to behave coherently (not quantum coherently).

    TGD leads also to a realization of the genetic code at the level of a dark analog of DNA represented as dark proton sequences (see this) - dark nuclei, which are now an essential element of the TGD based view about nuclear physics (see this). Dark photons are essential for the communications between MB and ordinary bio-matter. Also dark photons would realize the genetic code, with a codon represented as a 3-chord consisting of 3 dark photons.

    Genetic modification would take place at the level of the magnetic flux tubes containing the dark analog of DNA and induce changes of the ordinary genome, which would do its best to mimic the dark genome. In particular, the recombination occurring during meiosis would be induced by the reconnection of the flux tubes of the dark genome.

  2. The number theoretical vision about evolution, deriving from the proposal that the p-adic physics for various primes combine to what I call adelic physics, is a second needed element (see this). Any system can be characterized by an extension of rationals defining its algebraic complexity. The dimension of the extension, identifiable in terms of the effective Planck constant heff/h0=n, defines the evolutionary level as a kind of IQ. What is remarkable is that n increases in the statistical sense, since the number of extensions with n larger than that of a given extension is infinitely larger than the number of lower-dimensional extensions. The intelligent ones have a larger scale of quantum coherence and thus of coherence of bio-matter, and survive. Evolution is a directed process forced by number theory alone.

    Quantum jumps in the sense of ZEO tending to increase n, occurring naturally in the meiosis generating germ cells, lead also to more intelligent genomes. Point mutations could be seen as something occurring at the level of ordinary matter rather than being induced by dark matter.

  3. Zero energy ontology (ZEO) is behind the generalization of quantum measurement theory solving the basic problem of standard quantum measurement theory. There are two kinds of state function reductions. "Small" state function reductions (SSFRs) are analogs of weak measurements and give rise to the life cycle of a conscious entity - self - having a so-called causal diamond (CD) as a correlate. Under SSFRs the passive boundary of the CD is unaffected, as are the members of the state pairs at it: this gives rise to the "soul", the unchanging part of self. "Big" state function reductions (BSFRs) correspond to ordinary state function reductions. They change the arrow of time, and one can say that the self dies and re-incarnates with a reversed arrow of time. This applies in all scales, since consciousness and cognition are predicted to be universal. In BSFRs the value of heff increases in the statistical sense, and this gives rise to evolution also at the level of the genome. The reversal of the arrow of time allows one to see self-organization and metabolism as dissipation in the non-standard time direction, so that a generalization of thermodynamics allowing both arrows of time makes it possible to understand both self-organization and evolution.

Evolution at DNA level

A possible application would be a TGD based model for meiosis and fertilization. The starting point is that the recombinations occurring in meiosis represent a fundamental, species-preserving step in evolution, whereas point mutations are mostly noise having also negative effects. There are also modifications which produce a new species. Consider first recombinations.

  1. In meiosis a BSFR for the dark proton sequences defining dark DNA could induce reconnections of parallel maternal and paternal dark proton flux tubes, inducing recombination at the level of the ordinary genome.

  2. The resulting germ chromosomes - or rather their dark variants realized in terms of dark proton sequences - would have an arrow of time opposite to that of the chromosomes. They would be in a dormant state analogous to sleep.

  3. Fertilization involves the pairing of paternal and maternal germ chromosomes and looks almost like a time reversal of meiosis. In the proposed picture it would indeed change the arrow of time for the germ chromosomes - wake them up. The sequence replication - meiosis I - division - meiosis II would correspond to 4 BSFRs leading to germ cells having a dark genome as a time reversal of the ordinary genome.

    Remark: One can ask whether also the passive strand of ordinary DNA has an arrow of time opposite to that of the active strand.

Recombinations do not change the genome dramatically and can be said to be species preserving. Big leaps in evolution change the genome more drastically - say by adding new genes - to yield what might be regarded as a new species. They represent a challenge also for the TGD based view.

The big changes should occur at the level of the magnetic body, inducing in turn modifications at the level of the ordinary genome. The addition of a portion of DNA double strand of the same length to the end of the DNA double strand could be a species-changing modification. This would not change the earlier genome and could, for instance, add a new gene. How could this change occur?

  1. In TGD the dark genome acts as the master controlling the ordinary genome, which plays the role of slave. Dark genes correspond to dark proton sequences, with possibly a subset of protons behaving like neutrons due to the presence of negatively charged bonds between two neighboring protons of the sequence. A large modification would add new dark protons to this sequence: a dark counterpart of nuclear fusion would take place.

    In water the Pollack effect would correspond to this process and would give rise to a charge separation creating negatively charged regions called exclusion zones (EZs) by Pollack. There is no reason why this process could not occur also inside cells containing pairs of maternal and paternal chromosomes.

  2. At the level of the dark magnetic body the modification of the dark double strand could be realized if it corresponds to a closed monopole flux loop having a double helical structure: the conjugate strand would carry the return flux. The addition of a piece of DNA would be induced by a reconnection gluing a shorter helical flux loop to the end of the helical loop. The chemical counterpart of the dark DNA would be formed by the pairing of dark codons with the ordinary codons - a kind of transcription process.

  3. The modifications of paternal and maternal dark genomes are expected to occur independently and typically lead to different lengths of paternal and maternal DNAs. Hence the condition that the paternal and maternal modifications of germ cells are identical (same length) is too strong.

    Can the maternal and paternal DNA double strands have different lengths? This seems to be possible. The reconnection process in meiosis does not require the same lengths for the maternal and paternal genomes. In fertilization the chromosomes of the paternal and maternal gametes form pairs, and also this allows different lengths. Therefore the big leaps in evolution could correspond to additions of new pieces to the maternal and/or paternal dark genome.

This picture is of course over-simplified. Also the addition of DNA portions in the middle of the genome - say adding a new gene or part of a gene - should be possible at the level of dark matter. Also this process should occur by a reconnection process at the level of dark matter. Also now it seems that the process can occur independently for the maternal and paternal chromosomes.

See the article Darwinian or neutral theory of evolution or something else?, the longer article Getting philosophic: some comments about the problems of physics, neuroscience, and biology, or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, November 27, 2019

Multilocal viruses

I learned about a very interesting piece of strangeness in biology, known already for half a century (see this): there are viruses which can split into segments going into different host cells, replicate and produce proteins there, and self-assemble to the original virus after this.

A virus (see this) consists of DNA or RNA, a protein coat, and in some cases an outer envelope consisting of lipids and analogous to the cell membrane. Typically viruses consist of DNA or RNA decomposing to short segments, each coding for a single protein. The reason for this is that RNA replication is prone to errors, and for short segments these errors are not so fatal. Also DNA can be segmented, but the segments are longer. RNA can have positive sense, in which case it can be directly translated to protein, or negative sense, in which case a replication producing positive sense RNA is needed, made possible by an enzyme contained by the virus.

The usual thinking about viruses is that the virus finds its way to a cell and then uses the genetic machinery of the cell to replicate its DNA or RNA and produce also proteins. This does not occur in the case of multipartite viruses infecting plants. The virus can split into segments infecting host cells separately. The segments of RNA and the proteins contained by the virus are thus shared by different cells, where they are replicated and translated to proteins. The outcome of the process is then brought together in some cell, which need not contain the gene segments in it, and self-assembly to the full virus can occur. Also fractured viruses can flourish and can infect some other plant.

It has been found that the full complement of viral segments is missing from most plant cells. Proteins required for viral replication are present in cells that did not have the genome segment for producing them, so that the produced proteins must be transferred from the cell where they are produced to neighboring cells: it is thought that the so-called plasmodesmata connecting cells to a network make this possible.

In the standard view, assuming that the viral segments are completely independent systems, multi-partitioning has high risks. In this view theoretically not more than 4 segments should be possible. Yet, for instance, 8 have been observed in the examples discussed. Even the flu virus decomposes into 8 RNA segments within the cell inside which it replicates. Multi-partitioning produces also problems for spreading. In the case of the FBNSV viruses mentioned in the article, an insect - an aphid - eating FBNSV spreads the virus to plants. How can it get all 8 parts of the virus simultaneously? This is very difficult to understand if the segments are really independent.
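The difficulty can be quantified with elementary combinatorics (a back-of-the-envelope estimate of my own, assuming the segments really were picked up independently and uniformly at random): the chance that 8 independent pickups yield all 8 distinct segments is below a quarter of a per cent, and on average over 20 pickups would be needed to collect a complete set.

```python
from fractions import Fraction
from math import factorial

n = 8  # number of genome segments of the virus

# Probability that n independent, uniformly random pickups happen to
# deliver all n distinct segments: n!/n^n.
p_complete = Fraction(factorial(n), n**n)

# Coupon-collector expectation: mean number of pickups needed before
# every one of the n segments has been acquired at least once.
mean_pickups = n * sum(Fraction(1, k) for k in range(1, n + 1))

print(f"P(all {n} segments in {n} pickups) = {float(p_complete):.4f}")
print(f"mean pickups to collect all {n} segments = {float(mean_pickups):.1f}")
```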

This suggests that the view about these viruses is somehow wrong: multi-partitioning happens, and the standard view does not allow it.

One can start by asking why the multi-partitioning implying modular reproduction (something analogous to that in industry!) occurs at all. One good reason is that the host cell might not be able to recognize the segments. Also the transcription of too large a number of RNAs might be too much for the host and kill it. It seems that the viruses act as populations.

The TGD based model is based on familiar basic notions.

  1. The basic mystery of biology is the coherence of organisms. Bio-chemistry alone cannot explain it. In TGD dark matter, identified as heff=nh0 phases of ordinary matter at the magnetic flux tubes of the magnetic body (MB) of the system, is quantum coherent in long scales, and this quantum coherence forces the coherence of ordinary living matter.

  2. The flux tubes of the MB connect cells to larger networks (tensor networks). In particular, the segments of the virus can be connected to a network in this manner. The segments would be effectively free but their behavior would be correlated. The virus would be a multi-local entity at the level of ordinary matter but a single connected structure at the level of the MB.

  3. The TGD based model for bio-catalysis and replication and the model for monopole flux tubes suggest that the phase transition increasing heff/h0=n increases the length of the flux tube. This process requires metabolic energy, since quite generally the energy of a system increases with n, which serves as a kind of IQ of the system measuring its algebraic complexity and is identifiable as the dimension of the extension of rationals assignable to the system. Multi-partitioning requires metabolic energy, presumably provided by a host cell. The components of the multi-partitioned virus are virtually independent, but the flux tube connections are not lost. There are very many possible multi-partitionings, and an individual host cell can contain several segments.

  4. If the decay of the virus to a multi-partition corresponds to an ordinary state function reduction ("big" state function reduction, BSFR, in zero energy ontology, ZEO), the arrow of time changes at the level of the MB of the virus (dark matter). heff/h0=n increases in the statistical sense in BSFR, so that the multi-partitioned state should have a higher IQ and is thus favored by quantum TGD. One might perhaps say that when the virus is not active it does not need much IQ: IQ requires metabolic energy feed, and low IQ is the most economical choice in the dormant state. When the virus infects the host it becomes active, and the increase of n makes it multi-local at the level of ordinary matter.

    If this view is correct, the self-assembly of the virus would lead back to the dormant state with an opposite arrow of time. That the dormant state of the virus would correspond to an opposite arrow of time for the "virus self" would conform with the general view that an observer with an arrow of time opposite to that of a conscious entity experiences it as sleeping. One must of course be very cautious with interpretations.

  5. These dormant states would not be specific to viruses. Also a folded protein would be dormant. An external perturbation would provide a metabolic energy feed waking up the dormant protein, and the protein would unfold and become active and intelligent.

    The same applies to multi-locality. Also a bacterial colony could be seen as a single organism, multi-local only at the level of ordinary bio-matter. When a bacterial colony suffers starvation, the bacteria form a single tightly connected structure also at the level of ordinary bio-matter. In the absence of a metabolic energy feed the values of n associated with the flux tubes would be reduced, and the flux tubes would shorten, causing the phenomenon.

    For cellular organisms the multi-locality at the level of ordinary bio-matter would be realized for cells, but the distances of the cells would be fixed. Also at the level of DNA, RNA, tRNA and amino-acids multi-locality would be realized, but the distances would not be fixed. In bio-catalysis the reactants are brought together, and here a heff-reducing phase transition would take place, providing also the energy needed to overcome the potential wall otherwise making the reaction extremely slow. In the TGD based model for replication, transcription, and translation this flexible multi-locality is indeed assumed (see this).

  6. How sexual reproduction (see this) emerged is one of the mysteries of biology. The formation of tightly bound multi-local states of mono-cellulars would have increased the probability for lateral gene transfer between neighboring cells, and also for the replacement of mere replication with a two-step process consisting of replication followed by meiosis and fertilization as its inverse. The reconnection of the flux tubes assignable to DNA is a prerequisite of this process in the TGD framework, so that the formation of states analogous to multi-cellulars would have made this process plausible.

It has been found (see this - thanks to Nikolina Bendedikovic for the link) that multicellulars have monocellular colonies as predecessors in the sense that the bacteria (monocellulars) temporarily form tight structures resembling multicellular embryos. The transition from loose multi-locality to a tighter one suggests itself. When the metabolic energy feed is low, bacteria form tightly bound non-multilocal structures analogous to multi-cellulars. The flux tubes would shorten and metabolic energy would be liberated; also the need for metabolic energy is lower when the flux tubes have lower values of heff. Multi-cellulars would be permanently in this configuration, and their intelligence, coded by the distribution of heff:s, would be realized differently.

Multi-cellulars would have been formed when these multi-cellular like bacterial colonies became permanent and began to evolve from embryos to more developed forms (see this and this). Hitherto I have assumed that multi-cellulars were formed already before the Cambrian explosion, assumed to be induced by a relatively rapid phase transition reducing the local cosmological constant by a factor of 1/2 and increasing the radius of the Earth by a factor of 2. This transition would have brought the multi-cellulars to the surface from underground oceans, giving also rise to the ordinary oceans. I have compared the underground oceans to the womb of the magnetic Mother Gaia. The ontogeny recapitulates phylogeny principle suggests that the life of the multicellular embryo in the womb corresponds to the period of multicellular life in underground oceans.

A second possibility is that the multi-cellulars emerged from underground mono-cellulars during this transition or immediately after it. Could the emergence of the bacterial colonies to the surface, perhaps providing a lower metabolic energy feed, have forced them to form tightly bound colonies, forcing the evolution of multi-cellulars?

See the article Multilocal viruses or the chapter Dark matter, quantum gravity, and prebiotic evolution.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Too heavy blackhole in Milky Way

The standard model for blackhole formation predicts an upper bound on the mass of the blackhole, which depends also on the environment, since the available amount of matter in the environment is bounded. In the case of the Milky Way the bound is about 20 solar masses. Now however a blackhole like entity (BHE) with a mass of about 70 solar masses has been discovered (see this). I am grateful to Wes Johnson for the link. Also the masses of the BHEs producing the gravitational radiation in their fusion have been unexpectedly high, which suggests that the standard view about BHEs is not quite correct.
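For orientation, the Schwarzschild radii involved are easy to evaluate from rS=2GM/c2 (standard constants only, nothing TGD specific here):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def schwarzschild_radius_km(m_solar):
    """r_S = 2GM/c^2, returned in kilometers."""
    return 2 * G * m_solar * M_sun / c**2 / 1e3

# The conventional ~20 solar mass bound versus the reported ~70 solar mass object.
for m in (1, 20, 70):
    print(f"{m:3d} M_sun: r_S = {schwarzschild_radius_km(m):.1f} km")
```

The 70 solar mass object thus has a Schwarzschild radius of roughly 200 km, about 3 km per solar mass.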

In the TGD framework the blackhole-like entities (BHEs) are volume filling flux tubes: the thickness of the flux tube is roughly the proton Compton length. Also the asymptotic states of stars are BHEs in this sense: the flux tube radius is some longer p-adic length scale coming in half octaves. The model leads to a correct lower bound for the radius and mass, satisfied by neutron stars and standard blackholes. Also the masses of very light stars can be understood.

The model in its recent form does not give an upper bound for the mass. For time reversed BHEs - analogs of white holes (WHEs), possibly identifiable as quasars - the mass of the WHE comes from a tangling long cosmic string, and there is no obvious upper bound. Even galactic BHEs would correspond to WHEs that have made a quantum jump to BHEs at the level of the magnetic body: in this state the matter of the flux tube forming the counterpart of the magnetic field is fed back from the environment. A breathing spaghetti.

In the standard model the mechanism for the formation of a blackhole is different, since there is no flux tube giving the dominant dark energy/dark matter contribution. Therefore the upper bound for the mass - if such exists - is expected to increase. An upper bound could come from the de-entangling of the flux tube, so that the spaghetti would straighten.

The simplest model predicts that only the flux tube mass contributes. The mass of the ordinary matter going to the BHE would transform back to the dark energy/mass of the flux tube. The process would be the time reversal of the process, making sense in zero energy ontology (see this and this), in which the magnetic energy of the flux tube transforms to ordinary matter: a time reversal for the TGD counterpart of inflation.

See the article Cosmic string model for the formation of galaxies and stars or the chapter of "Physics in many-sheeted space-time" with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, November 25, 2019

Blackholes, quasars, and galactic blackholes

In the sequel I summarize the dramatic progress which has taken place in the understanding of blackhole like entities (BHEs) in the TGD framework. This picture allows one to see also stars as BHEs. A more detailed representation can be found in the article Cosmic string model for the formation of galaxies and stars.

I have discussed a model of quasars earlier (see this). The model is inspired by the notion of MECO and proposes that a quasar has a core region analogous to a blackhole in the sense that its radius is, apart from a numerical factor near unity, rS=2GM. This comes from mere dimensional analysis.

1. Blackholes in TGD framework

In TGD the metric of the blackhole exterior makes sense and also part of the interior is embeddable, but there is not much point in considering the TGD counterpart of the blackhole interior, which represents a failure of GRT as a theory of gravitation: the applicability of GRT ends at rS. The following picture is an attempt to combine ideas about the hierarchy of Planck constants with those from the model of the solar interior (see this) deriving from a 10 year old nuclear physics anomaly.

  1. The TGD counterpart of a blackhole would be a maximally dense spaghetti formed from a monopole flux tube. Stars would be less dense spaghettis. A still open challenge is to formulate the precise conditions giving rS= 2GM. The fact that the condition is "stringy", with T= 1/2G formally taking the role of string tension, encourages the spaghetti idea with the length of the cosmic string/flux tube proportional to rS.

  2. The maximal string tension allowed by TGD is determined by the CP2 radius and the estimate αK ≈ 1/137 for the Kähler coupling strength, and is roughly Tmax∼ 10^{-7.5}/G, suggesting that about 10^{7.5} parallel flux tubes with maximal string tension and with length of about rS give rise to a blackhole like entity. A kind of dipole core consisting of these monopole flux tubes comes to mind. The flux tubes could close to short flux tubes, or they could continue like the flux lines of a dipole magnetic field and thicken so that the energy density would be reduced.

  3. This picture conforms with the proposal that the integer n appearing in the effective Planck constant heff=n× h0 can be decomposed into a product n=m× r associated with a space-time surface which is an m-fold covering of CP2 and an r-fold covering of M4. For r=1 the m-fold covering property could be interpreted as a coherent structure consisting of m almost similar regions projecting to M4: one could say that one has a field theory in CP2 with m-valued fields represented by M4 coordinates. For m=1 each region would correspond to an r-valued field in CP2.

    This suggests that Newton's constant satisfies, apart from numerical factors, 1/G= mℏ/R^2, where R is the CP2 radius (the radius of a geodesic circle). This gives m∼ 10^{7.5} for gravitational flux tubes. The deviations of m from this value would have an interpretation in terms of the observed deviations of the gravitational constant from its nominal value. In the fountain effect of superfluidity the deviation could be quite large (see this).

    In the applications of TGD, smaller values of heff are assigned to the flux tubes mediating interactions other than gravitation, which are screened and should have a shorter scale of quantum coherence. Could one identify the corresponding Planck constant in terms of the factor r of n: heff = r×hbar0? TGD also leads to the notion of gravitational Planck constant hbargr= GMm/v0 assigned to the flux tubes mediating gravitational interactions - presumably these flux tubes do not carry monopole flux.

  4. The length scale dependent cosmological constant should characterize also blackholes, and the natural first guess is that the radius of the blackhole corresponds to the scale defined by the value of the cosmological constant. This allows one to estimate the thickness of the flux tube by a scaling argument. The cosmological constant of the Universe corresponds to the length scale L=1/Λ^{1/2}∼ 10^26 m and the density ρ of dark energy corresponds to the length scale r= ρ^{-1/4} ∼ 10^{-4} m. One has r= (8π)^{1/4}(LlPl)^{1/2}, giving the scaling law (r/r1)= (L/L1)^{1/2}. By taking L1=rS(Sun)=3 km one obtains r1= .7× 10^{-15} m, rather near to the proton Compton length 1.3× 10^{-15} m and even nearer to the proton charge radius .87× 10^{-15} m. This suggests that the nuclei arrange into flux tubes with thickness of the order of proton size, a kind of giant nucleus. A neutron star would already be an analogous structure, but the flux tube tangle would not be so dense.

    Denoting the number of protons by N, the length of the flux tube would be L1≈ Nlp== xrS (lp denotes the proton Compton length) and the mass would be Nmp. This would give x= (lp/lPl)^2 ∼ 10^38. Note that the ratio of the volume filled by the flux tube to the M4 volume VS defined by rS is

    Vtube/VS = (3/8)× (lp/lPl)^2 × (lp/rS)^2 ∼ 10× (rS(Sun)/rS)^2 .

    The condition Vtube/VS<1 gives a lower bound on the Schwarzschild radius of the object and therefore also on its mass: rS>10^{1/2}rS(Sun) and M>10^{1/2}M(Sun). At the lower bound the flux tube fills the entire M4 volume of the blackhole. A blackhole would be a volume filling flux tube with maximal mass density of protons (or rather neutrons) per unit length and therefore a natural endpoint of stellar evolution. The known lower limit for the mass of a stellar blackhole is a few solar masses (see this) so that the estimate makes sense.

  5. An objection against this picture comes from very low mass stars with masses below .5M(Sun) (see this), not allowed for k≥ 107. They are formed in the burning of hydrogen and the time to reach the white dwarf state is longer than the age of the universe. Could one give up the condition that the flux tube volume is not larger than the volume of the star? Could one have dark matter in the sense of an n2-sheeted covering over M4 increasing the flux tube volume by a factor n2?

  6. This picture does not exclude star like structures realized in terms of analogs of protons for scaled up variants of hadron physics. M89 hadron physics would have a mass scale scaled up by a factor 512 with respect to the standard hadron physics characterized by the Mersenne prime M107. The mass scale would correspond to the LHC energy scale, and there is evidence for a handful of bumps having an interpretation as M89 mesons. It is of course quite possible that M89 baryons are unstable against transforming to M107 baryons.

  7. The model for stars (see this) inspired by the 10 year old nuclear physics anomaly led to the picture that protons form, at least in the core, dark proton sequences associated with the flux tube and that the scaled up Compton length of the proton is rather near to the Compton length of the electron: there would be a zooming up of the proton by a factor of about 2^11∼ mp/me. The formation of a blackhole would mean a reduction of heff by a factor of about 2^{-11} making dark protons and neutrons ordinary.
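The scaling argument above is easy to check numerically. The following minimal Python sketch (my own illustration, not from the original; all inputs are rough order-of-magnitude values quoted in the text) reproduces the flux tube thickness for L1= rS(Sun) and the parameter x:

```python
# A sketch (not in the original): numerical check of the scaling law
# (r/r1) = (L/L1)^(1/2) relating the flux tube thickness r to the length
# scale L defined by the cosmological constant. All inputs are the rough
# order-of-magnitude values quoted in the text.

L_cosmo = 1e26   # m, L = 1/Lambda^(1/2) for the cosmological constant of the Universe
r_cosmo = 1e-4   # m, r = rho^(-1/4) for the dark energy density
rS_sun = 3e3     # m, Schwarzschild radius of the Sun, used as L1

# Scaled flux tube thickness for a solar mass blackhole like entity
r1 = r_cosmo * (rS_sun / L_cosmo) ** 0.5
print(f"r1 = {r1:.1e} m")  # of the order of the proton size ~1e-15 m

# The parameter x in L1 = N*l_p = x*rS, estimated as x ~ (l_p/l_Pl)^2
l_p = 1.3e-15    # m, proton Compton length
l_Pl = 1.6e-35   # m, Planck length
x = (l_p / l_Pl) ** 2
print(f"x ~ {x:.1e}")      # ~1e39-1e40 with these inputs; the text quotes ~1e38
```

With these inputs r1 comes out at a few times 10^{-16} m, consistent with the proton size scale quoted in the text; the exact prefactor depends on the rounding of L.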

Can one see also stars as blackhole like entities?

The assignment of blackholes to almost any physical object is very fashionable, and the universality of the flux tube structures encourages one to ask whether the stellar evolution towards a blackhole as a flux tube tangle could proceed through discrete steps involving blackhole like entities but with a larger Planck constant and a larger flux tube radius.

  1. Could one regard stellar objects as blackholes labelled by various values of the Planck constant heff? Note that heff is determined essentially as the dimension n of the extension of rationals (see this and this). The possible p-adic length scales would correspond to the ramified primes of the extension. The p-adic length scale hypothesis selects preferred length scales as p≈ 2^k, with prime values of k preferred. Mersennes and Gaussian Mersennes nearest to powers of 2 would be favoured.

    The most general hypothesis is that all values of k in the range [107,127] are allowed: this would give a half-octave spectrum for p-adic length scales. If only odd values of k are allowed, one obtains an octave spectrum.

  2. The counterpart of the Schwarzschild radius would be rS(k)= (L(k)/L(107))^2 rS, corresponding to the scaling of the maximal string tension proportional to 1/G by (L(107)/L(k))^2, where k is consistent with the p-adic length scale hypothesis.

    The flux tube area would be scaled up to L(k)^2= 2^{k-107}L(107)^2, and the constant x== x(107) would scale to x(k)=2^{k-107}x. The scaling guarantees that the ratio Vtube/VS does not change at all, so that the same lower bound on the mass is obtained. Note that the argument does not give an upper bound on the mass of the star, and this conforms with the surprisingly large masses participating in the fusions of blackholes producing the gravitational radiation detected at LIGO.

  3. The favoured p-adic length scales between the p-adic length scale L(107) assignable to blackholes and L(127) corresponding to the electron Compton length assignable to the solar interior are the p-adic length scale L(113)= 8L(107) assignable to nuclei, and the length scale L(109), which corresponds to p near a prime power of two.

    1. For k=109 (assignable to deuteron) the radius would be scaled up by a factor 4 to about 12 km, to be compared with the typical radius of a neutron star, about 10 km. The masses of neutron stars are around 1.4 solar masses, which is rather near to the lower bound derived for blackholes. The neutron star could be seen as the outcome of the last phase transition in the sequence of p-adic phase transitions leading to the formation of a blackhole.

    2. Could the k=113 phase precede neutron stars and perhaps appear as an intermediate step in supernovas? Assuming that the flux tubes consist of nucleons (rather than nuclei), one would have rS(113)= 64 rS, giving in the case of the Sun rS(113)=192 km.

    3. For k=127 the p-adic scaling from k=107 would give the Schwarzschild radius rS(127) ∼ 2^20 rS. For the Sun this would give rS(127)=3× 10^9 m, roughly a factor 4 larger than the solar photosphere radius 7× 10^8 m. k=125 gives the correct result. This suggests that k=127 corresponds to the minimal temperature for ordinary fusion and to the scale of the dark nuclear binding energy at magnetic flux tubes.

    The evolution of stars increases the fraction of heavier elements created by hot fusion, and temperatures are also higher for stars of later generations. This suggests that the value of k is gradually reduced in stellar evolution, the temperature increasing as T∝ 2^{(127-k)/2}. The Sun would be in the second or third step as far as the evolution of temperature is considered. Note that the lower bound on the radius of the star allows also larger radii, so that allowing smaller values of k does not lead to problems.
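The p-adic scaling of the Schwarzschild radius used above can be summarized in a few lines of Python (a sketch, not from the original, assuming rS(Sun)=3 km and the scaling rS(k)= 2^{k-107} rS quoted above):

```python
# A sketch (not in the original): the p-adically scaled Schwarzschild radius
# rS(k) = 2^(k-107) * rS, following from the area scaling
# L(k)^2 = 2^(k-107) * L(107)^2, evaluated for the Sun (rS = 3 km).

rS = 3e3  # m, Schwarzschild radius of the Sun

def rS_k(k):
    """Scaled counterpart of the Schwarzschild radius for p-adic index k."""
    return 2 ** (k - 107) * rS

print(rS_k(109) / 1e3)  # 12.0 km, close to a neutron star radius
print(rS_k(113) / 1e3)  # 192.0 km, a possible supernova intermediate step
print(rS_k(125))        # ~7.9e8 m, close to the solar photosphere radius 7e8 m
print(rS_k(127))        # ~3.1e9 m, a factor ~4 above the photosphere
```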

2. What about blackhole thermodynamics?

Blackhole thermodynamics is part of the standard blackhole paradigm. What is the fate of this part of theoretical physics in light of the proposed model?

2.1. TGD view about blackholes

Consider first the natural picture implied by the vision about the blackhole as a space-filling flux tube tangle.

  1. The flux tubes are deformations of cosmic strings characterized by a cosmological constant, which increases in the sequence of phase transitions raising the temperature of the stellar core. The vibrational degrees of freedom are excited and characterized by a temperature. The large number of these degrees of freedom suggests the existence of a maximal temperature known as the Hagedorn temperature, at which the heat capacity approaches infinity so that the pumping of energy does not increase the temperature anymore.

    The straightforward, dimensionally motivated guess for the Hagedorn temperature suggested by the p-adic length scale hypothesis is T= xℏ/L(k), where x is a numerical factor. For blackholes as k=107 objects this would give a temperature of order 224 MeV for x=1. Hadron physics gives experimental evidence for a Hagedorn temperature of about T=140 MeV, near to the pion mass and to the scale determined by ΛQCD, which would naturally relate to the hadronic value of the cosmological constant Λ.

    The actual temperature could of course be lower than the Hagedorn temperature, and it is natural to imagine that the blackhole cools down. The Hagedorn temperature and also the actual temperature would increase in the phase transition k→ k-1 increasing the value of Λ(k) by a factor of 2.

  2. The overall view would be that the thermal excitations of the cosmic string die out by emissions, assignable perhaps to blackhole jets, and partly by energy going back to the cosmic string, until a state function reduction decreasing the value of k occurs and the process repeats itself.

    The naive idea is that this process eventually leads to an ideal cosmic string having Hagedorn temperature T= ℏ/R and possibly existing at a very low actual temperature: this would conform with the idea that the process is the time reversal of the evolution leading from cosmic strings to astrophysical objects as flux tube tangles. This would at least require a phase transition replacing M107 hadron physics with M89 hadron physics, and the latter with subsequent hadron physics. One must of course also consider all values of k as possible options, as in the case of stellar evolution. The hadron physics assignable to Mersenne primes and their Gaussian counterparts would only be especially stable against a phase transition increasing Λ(k).

2.2. What happens to blackhole thermodynamics in TGD?

Blackhole thermodynamics (see this) has produced admirable amounts of literature over the years. What is the fate of blackhole thermodynamics in this framework? It turns out that the dark counterpart of Hawking radiation makes sense if one accepts the notion of a gravitational Planck constant assigned to the gravitational flux tube and depending on the masses assignable to the flux tube. The condition that dark Hawking radiation and flux tubes at Hagedorn temperature are in thermal equilibrium implies TB,dark= TH. The emerging prediction for TH is consistent with the value of the hadronic Hagedorn temperature.

  1. In standard blackhole thermodynamics the blackhole temperature TB, identifiable as the temperature of Hawking radiation (see this), is essentially the surface gravity at the horizon, TB= κ/2π= ℏ/4π rS, and is analogous to a Hagedorn temperature as far as dimensional analysis is concerned. One could think of assigning TB to the radial pulsations of the blackhole like object, but it is very difficult to understand how the thermal isolation between the stringy degrees of freedom and the radial oscillation degrees of freedom could be possible.

  2. The ratio TB/TH ∼ Lp/4π rS would be extremely small for the ordinary value of the Planck constant. The situation however changes if one has

    TB= hbareff/4π rS ,

    with hbareff= nhbar0=hbargr, where hbargr is the gravitational Planck constant.

    The gravitational Planck constant hbargr, originally introduced by Nottale (see this and this), is assignable to the gravitational flux tube (presumably a non-monopole flux tube) connecting dark mass MD and mass m (MD and m touch the flux tube but do not define its ends, as was assumed originally) and is given by

    hbargr= GMDm/v0 ,

    where v0<c is a velocity parameter. For the Bohr orbit model of the inner planets Nottale assumes MD= M(Sun) and β0=v0/c≈ 2^{-11}. For blackholes one expects that β0<1 is not too far from β0=1.

    The identification of MD is not quite clear. I have considered the problem of how v0 and MD are determined earlier (see this and this). For the inner planets of the Sun one would have β0∼ 2^{-11} ∼ me/mp. Note that the size of the dark proton would be that of the electron, and one could perhaps interpret 1/β0 as the heff/ℏ assignable to dark protons in the Sun. This would solve the long standing problem of the identification of β0.

  3. One would obtain for the Hawking temperature TB,D of dark Hawking radiation with heff=hgr

    TB,D= (ℏgr/ℏ) TB= (1/8π β0)× (MD/M) × m .

    For k=107 blackhole one obtains

    TB,D/TH = (ℏgr/ℏ)× TB× (L(107)/xℏ)= (1/8π β0(107))× (MD/M) × (L(107)m/xℏ) .

    For m=mp this gives

    TB,D/TH = (ℏgr/ℏ)× TB× (L(107)/xℏ)= (1/8π x β0(107))× (MD/M) × (mp/224 MeV) .

    The order of magnitude of the thermal energy is determined by mp. The thermal energy of a dark Hawking photon would depend on m only and would be gigantic compared to that of an ordinary Hawking photon.

  4. Thermal equilibrium between flux tubes and dark Hawking radiation looks very natural physically. This would give

    TB,D= TH ,

    giving the constraint

    (ℏgr/ℏ)× TB × (L(107)/xℏ)= (1/8π x β0)× (MD/M)× (mp/224 MeV)=1

    on the parameters. For M/MD=1 this would give xβ0≈ 1/6.0 conforming with the expectation that β0 is not far from its upper limit.

  5. If ordinary stars are regarded as blackholes in the proposed sense, one can assign dark Hawking radiation also to them. The temperature is scaled down by L(107)/L(k), and for the Sun this would give the factor L(107)/L(125)=2^{-9} if one requires that rS(k) corresponds to the solar radius. This would give

    TB(dark,k)= (ℏgr/ℏ)× (L(107)/L(k))× TB= (2^{-(k-107)/2}/8π β0)× (MD/M) × m .

    For k=125 and MD= M this would give TB(dark,125)= m/2π.

    The condition TB,D= TH for k=125 would require the scaling of β0(107) to β0(125)= 2^{-9}β0(107) ≈ 2^{-11}. This would give β0(107)≈ 1/4, in turn giving x ≈ .66 and implying TH≈ 149 MeV. The replacement of mp=1 GeV with the correct value .94 GeV improves the estimate. This value is consistent with the hadronic Hagedorn temperature, so that there is remarkable internal consistency involved although a detailed understanding is lacking.

  6. The flux of ordinary Hawking thermal radiation is TB^4/ℏ^3. The flux of dark Hawking photons would be TB,dark^4/ℏgr^3 = (ℏgr/ℏ)× TB^4/ℏ^3 and therefore extremely low also now. In principle however the huge energies of the dark Hawking quanta might make them detectable. I have already earlier proposed that TB(ℏgr) could be assigned to gravitational flux tubes, so that thermal radiation from the blackhole would make sense as dark thermal radiation having much higher energies.

    One can however imagine a radical re-interpretation. The BHE is not a thermal object emitting thermal radiation, but the BHE plus its gravitational flux tubes are the object carrying thermal radiation at temperature TH= TB. For this option dark Hawking radiation could play a fundamental role in quantum biology, as will be found below.

  7. What about the analog of blackhole entropy given by

    SB= A/4G= π (rS/lPl)^2 ,

    where A= 4π rS^2 is the blackhole surface area? This corresponds intuitively to the holography inspired idea that the horizon decomposes into bits with area of order lPl^2.

    The flux tube picture does not support this view. One can however ask whether the volume filling property of the flux tube could effectively freeze the vibrational degrees of freedom of the flux tubes, or whether these degrees of freedom are thermally frozen for an ideal blackhole. If so, only the ends of the flux tubes at the surface or their turning points (in case they turn back) can oscillate radially. This would give an entropy proportional to the area of the surface but using the flux tube transversal area as a unit. This would give, apart from a numerical constant,

    SB= A/4L(k)^2 .
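The numeric chain in the items above (the constraint xβ0≈ 1/6.0 and the resulting TH≈ 149 MeV) can be checked directly; the following is a sketch (my illustration, not from the original) with the proton mass and the scale ℏ/L(107)= 224 MeV as the only inputs:

```python
# A sketch (not in the original): the numeric chain behind the thermal
# equilibrium condition TB,dark = TH for a k=107 blackhole with MD = M.
from math import pi

m_p = 938.0    # MeV, proton mass (the text uses ~0.94 GeV)
E_107 = 224.0  # MeV, hbar/L(107); the Hagedorn temperature is TH = x*E_107

# Constraint (1/(8*pi*x*beta0)) * (m_p/E_107) = 1  =>  x*beta0 = m_p/(8*pi*E_107)
x_beta0 = m_p / (8 * pi * E_107)
print(f"x*beta0 = {x_beta0:.4f}")  # ~1/6.0, as quoted in the text

# beta0(125) = 2^-9 * beta0(107) = 2^-11  =>  beta0(107) = 1/4
beta0_107 = 2 ** -11 / 2 ** -9
x = x_beta0 / beta0_107
T_H = x * E_107
print(f"beta0(107) = {beta0_107}, x = {x:.2f}, TH = {T_H:.0f} MeV")
```

The result TH ≈ 149 MeV reproduces the hadronic Hagedorn temperature quoted in the text.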

2.3. Constraint from ℏgr/ℏ>1

Under what conditions can a mass m interact quantum gravitationally, and thus be assigned to ℏgr, for a given MD?

  1. The notion of ℏgr makes sense only for ℏgr>ℏ. If one has ℏgr<ℏ, one assumes ℏgr=ℏ. An alternative would be ℏgr→ ℏ0=ℏ/6 for ℏgr<ℏ0. This would give GMDm/v0>ℏmin (ℏmin=ℏ or ℏ/6), leading to

    m> (2β0ℏ/rS(MD))× (ℏmin/ℏ) .

    This condition is satisfied in the case of stellar blackholes for all elementary particles.

  2. One can strengthen this condition so that it would be satisfied also for the gravitational interactions of two particles with the same mass (MD=m). This would give

    m/mPl01/2 .

    For β0=1 this would give m=mPl, which corresponds to the mass scale of a large neuron and to the size scale 10^{-4} m. β0(125)=2^{-11} gives the mass scale of a cell and a size scale of about 10^{-5} m. β0(127)≈ 2^{-12}, corresponding to the minimum temperature making hot fusion possible, gives the length scale of about 10^{-6} m of the cell nucleus. A possible interpretation is that structures in cellular length scales have quantum gravitational interactions via gravitational flux tubes. Biological length scales would be in a special position from the point of view of quantum gravitation.

  3. Also interactions of structures smaller than the cell nucleus with structures larger than the cell nucleus are possible. By writing the above condition as (m/mPl)(MD/mPl)>β0, one sees that from a given solution to the condition one obtains new solutions by the scalings m→ xm and MD→ MD/x. For β0(127)≈ 2^{-12}, corresponding to the scale of the cell nucleus, the atomic length scale 10^{-10} m and the length scale 10^{-4} m of a large neuron would correspond to each other as "mirror" length scales. There would be no quantum gravitational interactions between structures smaller than the cell nucleus. There would be a master-slave relationship: the smaller the scale of the slave, the larger the scale of the master.
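The bound m/mPl> β0^{1/2} can be turned into a rough size scale; the sketch below (my illustration, not from the original) assumes - purely for illustration - a blob with the density of water:

```python
# A sketch (not in the original): the condition hbar_gr = G*m^2/v0 > hbar for
# two equal masses gives m/m_Pl > beta0^(1/2). The size scale is estimated by
# assuming (my assumption, for illustration) a blob with the density of water.

m_Pl = 2.18e-8  # kg, Planck mass
rho = 1.0e3     # kg/m^3, density of water (assumed)

def critical_scale(beta0):
    """Minimal mass for mutual quantum gravitational interaction and the
    linear size of a water blob with that mass."""
    m = beta0 ** 0.5 * m_Pl
    size = (m / rho) ** (1 / 3)
    return m, size

m1, s1 = critical_scale(1.0)
print(f"beta0=1: m = {m1:.2e} kg, size ~ {s1:.1e} m")  # ~1e-4 m, the large neuron scale
```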

2.4. Quantum biology and dark Hawking radiation

The scaling formula β0(k)∝ 1/L(k), with the flux tube thickness scale given by L(k), allows one to estimate β0(k). In this manner one obtains also biologically interesting length scales. An interesting question is whether the scales for the velocities of Ca waves (see this) and nerve pulse conduction velocities could relate to v0.

  1. The tube thickness of about 10^{-4} m, which corresponds to the ordinary cosmological constant (maximal in this sense), corresponds to the p-adic length scale k=171. The scaling β0∝ 1/L(k) gives v0(171)∼ 4.7 μm/s. In eggs the velocity of Ca waves varies in the range 5-14 μm/s, which roughly corresponds to the range k∈ {171,170,169,168}.

    In other cells the Ca wave velocity varies in the range 15-40 μm/s. k=165 corresponds to 37.7 μm/s, near the upper bound 40 μm/s. The lower bound corresponds to k=168. For k=167, which corresponds to the largest Gaussian Mersenne in the series k∈{151,157,163,167}, the velocity is 75 μm/s.

  2. k=127 gives v0∼ 75 m/s and k=131 corresponds to v0= 18 m/s. These velocities could correspond to the conduction velocities of nerve pulses, in accordance with the view that the smaller the slave, the larger the master.
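The internal consistency of the quoted Ca wave velocities with the scaling v0(k)∝ 1/L(k) can be checked as follows (a sketch, not from the original, anchored - my assumption - at v0(171)= 4.7 μm/s):

```python
# A sketch (not in the original): the scaling v0(k) ~ 1/L(k) = 2^(-(k-171)/2),
# anchored (my assumption) at v0(171) = 4.7 um/s quoted in the text.

def v0_um_s(k, v171=4.7):
    """Velocity parameter in um/s for p-adic length scale index k."""
    return v171 * 2 ** ((171 - k) / 2)

print(f"v0(168) = {v0_um_s(168):.1f} um/s")  # ~13, near the 15 um/s lower bound
print(f"v0(165) = {v0_um_s(165):.1f} um/s")  # ~38, near the 40 um/s upper bound
```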

I have already earlier considered the possibility that dark Hawking radiation could have an important role in living matter. The Hawking/Hagedorn temperature, assuming x=1/6.0, has for k=171 a peak energy of 38 meV, to be compared with the membrane potential varying in the range 40-80 meV. Room temperature corresponds to 34 meV. For k=163, defining a Gaussian Mersenne, one would have a peak energy of about .6 eV: the nominal value of the metabolic energy quantum is .5 eV. k=167 corresponds to .15 eV and to 8.6 μm - cell size. Even the dark photons proposed to give rise to bio-photons when transforming to ordinary photons could be seen as dark Hawking radiation: the Gaussian Mersenne k=157 corresponds to 4.8 eV in UV. Could CMB, having a peak energy of .66 meV and a peak wavelength of 1 mm, correspond to Hawking radiation associated with k= 183? Interestingly, the cortex contains 1 mm size structures.
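The quoted energy scales follow from a single 2^{(171-k)/2} ladder anchored at 38 meV for k=171; a minimal check (my illustration, not from the original):

```python
# A sketch (not in the original): the quoted dark Hawking/Hagedorn peak
# energies all lie on one ladder E(k) = E(171) * 2^((171-k)/2), anchored
# at E(171) = 38 meV.

def E_meV(k, E171=38.0):
    """Peak energy in meV for p-adic length scale index k."""
    return E171 * 2 ** ((171 - k) / 2)

print(f"E(167) = {E_meV(167):.0f} meV")      # 152 meV, quoted as .15 eV
print(f"E(163) = {E_meV(163)/1e3:.2f} eV")   # 0.61 eV, the metabolic quantum scale
print(f"E(157) = {E_meV(157)/1e3:.1f} eV")   # 4.9 eV, the UV bio-photon scale
print(f"E(183) = {E_meV(183):.2f} meV")      # 0.59 meV, near the CMB peak .66 meV
```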

To sum up, these considerations suggest that biological length scales defined by flux tube thickness and cosmological length scales defined by cosmological constant are related.

See the article Cosmic string model for the formation of galaxies and stars or the chapter of "Physics in many-sheeted space-time" with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

The relationship between p-adic time and discrete flow of geometric time defined by quantum jumps

There could be a relationship between the quantal flow of geometric time defined by SSFRs and the p-adic variants of the time coordinate, giving a reason for p-adicity.

  1. TGD predicts geometric time to have a real variant and p-adic variants in the extensions of various p-adic number fields induced by a given extension of rationals (adelic space-time and adelic geometric time). Real and p-adic times share the discrete points in the extension of rationals considered: the roots of the octonionic polynomials defining space-time surfaces as roots of their "real" and "imaginary" parts in the quaternionic sense (see this). The roots of the real polynomial with rational coefficients giving the octonionic polynomial as its continuation define moments of M4 linear time assignable to special SSFRs. The p-adic times associated with the p-adic balls assignable to these points are not well-ordered. One cannot tell about two moments of time which is earlier and which is later.

  2. This could relate to the corresponding lack of well ordering of the "clock time" associated with a self at a given level of the evolutionary hierarchy defined by the extension of rationals. The increase of "clock time", defined as the distance between the tips of the CD, occurs for a sequence of small state function reductions (weak measurements) only in a statistical sense, and "clock time" can also decrease. The moments of time corresponding to the roots of the real polynomial define "special moments in the life of self", one might say.

    At the limit of an infinite-dimensional extension the roots of the polynomial define algebraic numbers forming a dense set in the set of reals. The cognitive representation becomes a dense set. These "special moments" need not however become dense.

  3. One can raise an interesting question inspired by self inspection. As one types text, it often happens that the letters of a word end up in the wrong order, change places, and even jump from one word to another. The experienced order of the letters, assignable to a sequence of SSFRs, is not the same as the order of the letters representing the order of the moments of geometric time. When one is tired, the phenomenon is enhanced.

    Neuroscientists can certainly propose an explanation for this. But could this be at a deeper level a quantum effect based on the above mechanism, with a description in terms of the p-adicity assignable to a prime p defining a ramified prime for the extension of rationals involved? When one is tired the metabolic resources have petered out and the IQs n=heff/h0, defined by the dimensions of the extensions of rationals in the distribution of extensions, tend to be reduced: the cognitive resolution for time becomes lower and mistakes of this kind become worse.

See the article Latest progress in TGD or the chapter Zero Energy Ontology and Matrices of "Towards M- matrix".

For a summary of the earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Cosmological time anomalies and zero energy ontology

Zero energy ontology (ZEO) replaces the ordinary ontology in the TGD based view about quantum states and quantum jumps (see this). It came as a surprise that the new view about time allows one to understand several time anomalies of cosmology. I have already earlier considered the strange findings of Minev et al about state function reduction in atomic systems in terms of ZEO (see this). In neuroscience Libet's finding that volition is preceded by neural activity can be understood too. Earthquakes and volcanic eruptions also involve strange time anomalies (see this). The general explanation of the time anomalies would be ZEO plus quantum coherence up to cosmological scales made possible by dark matter in the TGD sense. This would give a completely unexpected direct connection between TGD inspired theory of consciousness and cosmology. The following considerations can be found in the article "Cosmic string model for the formation of galaxies and stars".

Consider first ZEO.

  1. In ZEO zero energy states are superpositions of space-time surfaces inside a causal diamond (CD), identified as preferred extremals of the basic action principle of TGD. CD is the Cartesian product of a causal diamond cd of M4 and of CP2. The preferred extremals, analogous to Bohr orbits, have boundaries - the ends of space-time - at the light-like boundaries of CD. There is a fractal hierarchy of CDs, and a given CD is an imbedding space correlate for a conscious entity - a self: consciousness is universal.

  2. Zero energy states can be seen as superpositions of state pairs with members assigned to the opposite boundaries of CD. ZEO predicts that in ordinary or "big" state function reductions (BSFRs) the arrow of time of the system changes, whereas it remains unaffected in "small" state function reductions (SSFRs), which are the TGD counterparts of "weak" measurements and are associated with a sequence of unitary evolutions for the state assignable to the active boundary of CD, which also shifts farther from the passive boundary. The passive boundary is unaffected, as are the members of the state pairs at it.

  3. Subjective time is identified as a sequence of SSFRs and correlates strongly with clock time, identifiable as the distance between the tips of CD and increasing in a statistical sense during the sequences of SSFRs.

  4. BSFR corresponds to a state function reduction at the active boundary of CD, which becomes passive. This forces the state at the formerly passive boundary to change, and the passive boundary becomes active. BSFR means the death of the self and reincarnation with an opposite arrow of time. Thus the notion of a life cycle is universal and life can be lived in both directions.

  5. What happens to CD in the long run? There are two options.

    1. The original assumption was that the location of the formerly passive boundary is not changed. This would mean that the size of CD would increase steadily and the outcome would eventually be of cosmological size: this sounds counter-intuitive. Classically energy and other Poincare charges are conserved for a single preferred extremal, but their conservation could fail in BSFRs due to the fact that zero energy states cannot be energy eigenstates.

    2. The alternative view, suggested strongly by M8-H duality (see this), is that the size of CD is reduced in BSFR so that the new active boundary can be rather near to the new passive boundary. One could say that the reincarnated self experiences childhood. In this case the size of CD can remain finite and its location in M8 more or less fixed. One can say that the self associated with the CD is in a kind of Karma's cycle, living its life again and again. Since the extension of rationals can change in BSFR, and since the number of extensions larger than a given extension is infinitely larger than the number of those smaller than it, the dimension of the extension, identifiable in terms of the effective Planck constant, increases. Since n= heff/h0 serves as a kind of IQ, one can say that the system becomes more intelligent.

Cosmic redshift but no expansion of receding objects: one further piece of evidence for TGD cosmology

"Universe is Not Expanding After All, Controversial Study Suggests" was the title of a very interesting Science News article (see this) telling about a study which challenges Big Bang cosmology. The title of course involved the typical exaggeration.

The idea behind the study was simple. If the Universe expands and also astrophysical objects - such as stars and galaxies - participate in the expansion, they should increase in size. The observation was that this does not happen! One however observes the cosmic redshift, so it is too early to start burying Big Bang cosmology. This finding is however a strong objection against the strongest version of the expanding Universe. That objects like stars do not participate in the expansion was actually known already when I developed TGD inspired cosmology a quarter of a century ago, and the question is whether GRT based cosmology can model this fact naturally or not.

The finding supports TGD cosmology based on many-sheeted space-time. Individual space-time sheets do not expand continuously. They can however expand in a jerk-wise manner via quantum phase transitions increasing the p-adic prime characterizing the space-time sheet of the object by say a factor of two, or increasing the value of heff=n× h for it. This phase transition could change the properties of the object dramatically. If the object and its suddenly expanded variant are not regarded as states of the same object, one would conclude that astrophysical objects do not expand but only comove. The sudden expansions should be observable and should happen also for the Earth. I have proposed a TGD variant of the Expanding Earth hypothesis along these lines (see this).

Stars as reincarnating conscious entities

One can apply ZEO to the evolution of stars. The basic story (see this) is that the star is formed from an interstellar gas cloud, evolves, and eventually collapses to a white dwarf, a degenerate carbon-oxygen core, a supernova, or even a blackhole if the mass of the remnant resulting from the explosion throwing the outer layers of the star away is in the range of 3-4 solar masses. Only very massive stars end up as supernovas. The type of the star depends on the abundances of various elements in the interstellar gas from which it formed, believed to contain heavier elements produced by earlier supernovas.

There are however several anomalies challenging the standard story. There are stars older than the Universe (see this). There is also evidence that the abundances of heavier elements in the early cosmology are essentially the same as for modern stars (see this). A TGD based explanation was discussed earlier.

Karma's cycle option for the stellar evolution could explain these anomalies.

  1. Stars would be selves in Karma's cycle, with their magnetic bodies reincarnating with a reversed arrow of time in a collapse to a blackhole/whitehole like entity (BHE/WHE) - depending on the arrow of time. Stellar evolution would lead to an asymptotic state BHE/WHE corresponding to the maximum size of the CD, followed by a collapse to BHE or WHE. Also ordinary stars would correspond to BHEs/WHEs, characterized by a p-adic length scale L(k) longer than the scale L(107) assignable to GRT blackholes. In the standard time direction a WHE would look like blackhole evaporation.

  2. This would allow stars older than the Universe and suggests also universal abundances. Note however that the abundances would depend strongly on the abundances of the interstellar gas and on the matter produced from the magnetic energy of the flux tube. "Cold fusion" as dark fusion could produce elements heavier than Fe and the light elements Li, Be, B, whose abundances from fusion in the stellar core are predicted to be much smaller than the abundances observed for old stars. The lifetimes of stars depend on their type. Also a universal age distribution of stars in stellar clusters, not depending appreciably on cosmic time, is highly suggestive. I remember even writing about this; unfortunately I could not find the article.

To put it more generally, the hierarchy of CDs implies that the Universe decomposes effectively to sub-Universes behaving to some degree independently. The view about Karma's cycles provides a more precise formulation of the pre-ZEO idea that systems are artists building themselves as 4-D sculptures. In particular, this applies to mental images in the TGD based view about the brain.
  1. One could perhaps say that also quantum non-determinism has classical correlates. CDs would be the units for which time-reversing BSFRs are possible. Also SSFRs affecting CDs could have classical space-time correlates. M8-H duality predicts that the time evolution for a space-time surface inside a CD decomposes to a sequence of deterministic evolutions glued together along hyperplanes t=rn of M4 time defining special moments in the life of self, at which the new larger CD receives a new root t=rn. The non-deterministic discontinuity could be localized to the 2-D vertices represented by partonic 2-surfaces at which the ends of light-like partonic orbits meet.

  2. The M4 hyperplanes t=rn correspond to the roots of a real polynomial with rational coefficients defining the space-time surfaces at the level of M8 as roots for the real or imaginary part, in the quaternionic sense, of the octonionic continuation of the polynomial. These moments of time could correspond to SSFRs.

  3. The finite classical non-determinism is in accordance with the classical non-determinism predicted at the limit of an infinitely large CD and vanishing cosmological constant, at which the classical action reduces to Kähler action having a huge vacuum degeneracy: any space-time surface having a Lagrangian manifold (vanishing induced Kähler form) as CP2 projection is a vacuum extremal. The interpretation of this degeneracy in terms of 4-D spin glass degeneracy would be that at the limit of an infinitely large CD the extension of rationals approaches the algebraic numbers, the roots t=rn become dense, and the dynamics becomes non-deterministic for vacuum extremals, which implies non-determinism also for non-vacuum extremals.

No time dilation for the periods of processes of quasars

There are strange findings about the time dilation of quasar dynamics challenging standard cosmology. One expects that the farther the object is, the slower its dynamics looks as seen from Earth. Lorentz invariance implies redshift for frequencies, and in the time domain this means the stretching of time intervals, so that the evolution of distant objects should look the slower the longer their distance from the observer is. In the case of supernovae this seems to be the case. What was studied now were quasars at distances corresponding to light travel times of 6 and 10 billion years, and the time span of the study was 28 years. Their light was redshifted by different amounts, as one might expect, but their evolution went on in exactly the same rhythm. This looks really strange.

In GRT the redshift violates conservation of four-momentum. In TGD the cosmic redshift reduces to the fact that the tangent spaces of the space-time surface for target and receiver differ by a Lorentz boost. Redshift does not mean non-conservation of four-momentum but only that the reference frames of target and observer are different. The size of the space-time sheets assignable to the systems considered must be large, of the order of the size scale L defined by the size of the recent cosmology to which one assigns the Hubble constant. In the flux tube picture this means that the flux tubes have length of order L but thickness of about R=10^-4 meters - the size scale of a large neuron. Photons arrive along flux tubes connecting the distant systems. Note that the CMB peak wavelength is about 10 times longer than R.
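Kinematically the redshift produced by a relative Lorentz boost is the relativistic Doppler formula 1+z = sqrt((1+β)/(1-β)) with β = v/c. A minimal sketch (the formula itself is standard special relativity; its reading as relating the tangent spaces of target and receiver is the TGD interpretation):

```python
import math

def redshift_from_boost(beta):
    """Redshift z from 1+z = sqrt((1+beta)/(1-beta)) for a relative boost beta = v/c."""
    return math.sqrt((1 + beta) / (1 - beta)) - 1

def boost_from_redshift(z):
    """Inverse: the relative boost producing redshift z."""
    r = (1 + z) ** 2
    return (r - 1) / (r + 1)

# A redshift z = 1 corresponds to a relative boost of 0.6 c; four-momentum
# is conserved throughout - only the reference frames differ.
print(boost_from_redshift(1.0))  # 0.6
```

The point of the exercise is that the entire redshift can be coded into a frame relation rather than into a non-conservation of four-momentum.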

I have already discussed this time anomaly earlier, but what I have written is just a statement of the problem and some speculations about its solution in terms of ZEO. A valuable hint is that the time anomaly appears for quasars - very heavy objects - but not for supernovae - much lighter objects. This suggests that the redshift depends on the masses of the objects considered.

  1. One considers an approximately periodic process. It is quite possible that this process is not a classical deterministic process at the space-time level but a sequence of SSFRs (weak measurements) or even BSFRs for a subsystem of the target. These processes replace the quantum superposition of space-time surfaces inside the CD with a new one, and SSFRs also increase its size in the statistical sense. A natural Lorentz invariant "clock time" for the target is the distance between the tips of the CD - the light-cone proper time. Both M4 linear coordinates and light-cone Robertson-Walker coordinates are natural coordinates for space-time sheets with 4-D M4 projection.

    "Clock time" must be mapped to M4 linear time for some space-time sheet. The Minkowski coordinates for the CD are determined only modulo Lorentz boost leaving the light-like boundary of CD invariant. In general the M4 coordinates of the target and observer are related by a Lorentz boost and this gives rise to cosmological redshift and also gravitational reshift.

  2. The information about an SSFR or BSFR at the target must be communicated to the observer, so that the space-time sheets in question must be connected by flux tubes carrying the photons. The CD must contain both systems and naturally has a cosmological size given by L, so that the flux tubes have thickness about R. The M4 time coordinate must be common to both systems. The natural system to consider is the center of mass system (cm), in which the sum of the momenta of the two systems vanishes.

Do quasars and galactic blackholes relate by time reversal in ZEO?

This picture combined with the zero energy ontology (ZEO) based view about ordinary state function reductions changing the arrow of time and occurring even in astrophysical scales leads to a tentative view about quasars and galactic blackholes as time reversals of each other.

  1. Quasars could be seen as analogs of white holes feeding the mass of a cosmic string out to build the galactic tangle; part of the mass of the thickening tangle would transform to ordinary matter. They would initiate the formation of the galaxy, meaning the emergence of increasing values of heff in the hierarchy of Planck constants. The cosmic string would basically feed in the mass and energy liberated in the decay of the magnetic energy of cosmic strings thickening to flux tubes; this energy would transform to ordinary matter and serve in the role of metabolic energy driving self-organization.

  2. Galactic blackholes could perhaps indeed be analogs of blackholes as time reversals of quasars - a "big" (ordinary) state function reduction would transform a quasar as a white hole to a galactic blackhole. Now the system would be drawing the mass back from the surroundings to the flux tube and maybe to the cosmic string. The process could be like breathing. In zero energy ontology breathing could indeed involve a sequence of states and their time reversals.

This raises also the question whether the evolution of stars could be seen as a time reversal of the formation of blackholes: a kind of growth followed by a decay, perhaps because the values of the Planck constant heff would be reduced. The climax of this evolution would correspond to maximal values of heff. The evolution of life would certainly be this kind of climax.

See the article Cosmic string model for the formation of galaxies and stars or the chapter of "Physics in many-sheeted space-time" with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Saturday, November 23, 2019

Some aspects of the TGD inspired model of blackhole like objects

This posting was inspired by a Youtube video (thanks to Howard Lipman for the link) telling about galaxies separated by millions of light-years moving in synch. The posting summarizes some ideas and results of the article Cosmic string model for the formation of galaxies and stars.

The Youtube video suggests that the Electric Universe hypothesis (EU) explains the flood of various anomalies found in cosmology. In its extreme version EU states that also gravitation and even nuclear physics reduce to electromagnetism. The view that gravity alone determines the structure of the Universe in long scales is certainly wrong on the basis of empirical findings. But the view that electromagnetism does it alone is also wrong. Both gravitation and the other standard model interactions are needed also in astrophysics and cosmology, in all length scales.

What is nice is that the SAFIRE group approaches the situation purely experimentally and tries to find how much of the nuclear physics of the Sun can be reproduced in low temperature plasma in the lab. I have written about the findings of the SAFIRE group, comparing EU to TGD, earlier (see this).

TGD view about space-time and classical fields

Also a new space-time concept is indeed needed. TGD as a modification of Einstein's vision follows solely from the requirement that classical conservation laws are not lost as in general relativity. Space-time is replaced with a 4-D surface in 8-D H=M4×CP2, and can be regarded as a generalization of the string world sheet.

Space-time surfaces are minimal surfaces with 2-D singularities: partonic 2-surfaces and 2-D string world sheets, so that a string model like theory emerges as a "sub-theory". The 2-D surfaces carry the data coding for the space-time surfaces in the strong form of holography, suggested strongly by general coordinate invariance alone.

This leads to a new view about classical fields, in particular magnetic fields.

  1. Ordinary Maxwellian magnetic fields are replaced by flux quanta with vanishing total flux, represented as the topology of the space-time surface. Besides these, monopole flux tubes having no Maxwellian correlates are predicted. They are stable and need no currents as sources. They explain magnetic fields in cosmic scales and solve the maintenance problem of the Earth's magnetic field.

    The understanding of the strange behavior of the magnetic field of Mars is the latest application: only the monopole part is present since the core has not yet split into an inner core and an outer core creating the ordinary magnetic field by its currents. The motion of monopole flux tubes induces currents (Birkeland currents in particular), which create the TGD counterparts of ordinary magnetic fields. Auroras are indeed observed also on Mars although it has a very weak ordinary magnetic field (see this).

  2. Monopole magnetic flux tubes provide carriers of dark energy, for which the identification as an exact classical correlate of dark matter is suggested by quantum classical correspondence. The flux tubes defining the counterparts of ordinary magnetic fields serve as a kind of wave guides along which gravitational interactions are mediated. Two kinds of flux tubes forming a fractal hierarchy are involved. Ordinary Einsteinian space-time emerges at the QFT limit and is not a good approximation when one wants to understand the formation of galaxies, stars and other astrophysical objects. The situation is the same in biology.

Number theoretical universality and hierarchy of Planck constants

A generalization of quantum theory emerges from the number theoretical vision motivated by the need to understand the mathematical correlates of cognition. This involves number theoretical universality, with p-adic physics as a correlate for cognition, and a hierarchy of Planck constants labelling dark matter as phases of ordinary matter making themselves visible via interactions with ordinary matter. For the application to the formation of stars and galaxies, see this.

  1. The p-adic length scale hypothesis makes a long list of correct predictions about stellar masses and provides a new view about blackhole like entities (BHEs). Even ordinary stars would have BHEs as asymptotic states, meaning that they correspond to volume filling flux tube tangles labelled by a p-adic length scale. The Schwarzschild radius of the generalized BHE would be the radius of the star. This predicts correctly the lower limits for the radii of blackholes and neutron stars, and the radius of the Sun. One can also understand why very low mass stars can have so small radii.

  2. Blackhole thermodynamics is replaced by a modification of ordinary blackhole thermodynamics, with particles at the counterparts of ordinary flux tubes at the Hawking temperature T(Hawking) and the excitations of monopole flux tubes at the Hagedorn temperature T(Hagedorn). The condition of thermal equilibrium stating that the Hawking and Hagedorn temperatures are identical - T(Hawking)=T(Hagedorn) - leads to several correct predictions, in particular a correct prediction for the hadronic Hagedorn temperature prevailing at the flux tubes of the TGD counterpart of a GRT blackhole, identified as a volume filling flux tube tangle. The Hawking temperature as Hagedorn temperature is gigantic as compared to its GRT counterpart.

    A very strong correlation between physics in short and very long scales is implied by the length scale dependent cosmological constant predicted by the twistor lift of TGD. The cosmological constant characterizes also stars and even hadrons. The length scale dependence, meaning that Λ approaches zero in long length scales, solves the standard problem of GRT based cosmology. The sign of the cosmological constant is correct. Recall that the huge size and wrong sign of the cosmological constant turned out to be lethal for superstring models.

  3. Also BHEs are predicted to have structure and dynamics. They have a lot of hair and internal degrees of freedom, since a near-Hagedorn temperature corresponds to a huge number of different thermal states. This makes it possible to understand why blackholes can be both active and passive. The standard view about BHEs is simply wrong, as already Einstein emphasized.
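The p-adic length scales mentioned above follow the convention L(k) = 2^((k-151)/2) × L(151) with the anchor L(151) ≈ 10 nm used in the TGD literature. A small sketch of the arithmetic (the numerical anchor is taken as an assumption; the point is only how L(107) lands near the proton Compton length assigned to GRT blackholes):

```python
L151 = 1.0e-8  # meters: L(151) ~ 10 nm, the anchor scale assumed here

def padic_length(k):
    """p-Adic length scale L(k) = 2**((k-151)/2) * L(151)."""
    return 2.0 ** ((k - 151) / 2.0) * L151

# L(107) comes out in the femtometer range, i.e. of the same order as
# the proton Compton length ~1.3 fm mentioned for GRT blackholes.
print(padic_length(107))  # ~2.4e-15 m
```

Each step of two in k thus doubles the length scale, which is why the hierarchy spans nuclear to astrophysical sizes with a single formula.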

Zero energy ontology and new view about time

Zero energy ontology (ZEO) leads to a new view about quantum measurement and quantum jump. ZEO predicts both arrows of time and that the arrow of time changes in the ZEO counterpart of the ordinary state function reduction but not in the counterpart of a "weak" measurement. The second prediction is that quantum jumps can occur in all scales at the level of dark matter (heff phases of ordinary matter). For instance, ZEO leads to an understanding of various time anomalies, such as the existence of stars older than the Universe and the very paradoxical fact that abundances do not seem to depend on cosmic time.

  1. The key point is that cosmology decomposes to cosmologies within cosmologies within .... These sub-cosmologies correspond to causal diamonds (CDs) with a more or less fixed location in H=M4×CP2 and define sub-Universes. This is due to the failure of complete classical determinism in TGD.

    These sub-cosmologies evolve and re-incarnate with an opposite arrow of time again and again. This happens in the astrophysical counterpart of the ordinary state function reduction, and implies local ageing of the sub-cosmology, say a star. One can say that the object is located at a fixed position in M4 as a 4-D object. Hence arbitrarily old stars can be found at arbitrarily small values of cosmological time, which is totally paradoxical in the GRT view. Obviously this is a dramatic deviation from the standard views about time.

  2. These sub-cosmologies, say stars, evolve by small state function reductions ("weak" measurements) and are replaced with a new one with a reversed arrow of time in a "big" (ordinary) state function reduction (BSFR). The arrow of time means that only the second boundary of the CD changes while the other remains unaffected, and that the distance between the tips of the CD increases in the statistical sense.

    This defines the stellar Karma's cycle but applies in all scales, even human scales: in the TGD Universe we would live again and again, and every time it gets better in the statistical sense, since big state function reductions tend to increase the dimension of the extension of rationals defining a kind of IQ. Various stages of stellar evolution correspond to different re-incarnations, and the asymptotic states reached before BSFR correspond to white dwarfs, neutron stars, carbon-oxygen cores, blackholes,...

For instance, the SAFIRE group has found evidence for the occurrence of "cold fusion" in low temperature plasma. The model involves plasmoids - one of the earliest TGD inspired ideas about quantum biology and a counterpart of the monopole flux tube (see this). The TGD based explanation involves in an essential manner gravitational flux tubes and the TGD view about nuclear physics based on the notion of dark nuclei. One cannot therefore understand plasma without including dark gravitational interactions and the gravitational Planck constant ℏgr, whose huge value reflects the fact that gravitation is a long-range, unscreened interaction, so that quantum correlation lengths, typically proportional to ℏgr, become large - of the order of the ordinary Schwarzschild radius for ordinary blackholes and of the order of the size scale of the star in the general case.
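The claim that correlation lengths proportional to ℏgr are of the order of the Schwarzschild radius can be made concrete with the Nottale-inspired definition ℏgr = GMm/v0 used in TGD: the gravitational Compton length ℏgr/(mc) = GM/(v0 c) is independent of the particle mass m, and for the particular value v0 = c/2 it equals the Schwarzschild radius 2GM/c² exactly. A sketch (v0 is a model parameter; v0 = c/2 is chosen here only to make the identity exact):

```python
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_sun = 1.989e30   # kg

def grav_compton_length(M, v0):
    """Gravitational Compton length hbar_gr/(m c) = G M / (v0 c); the test-particle mass cancels."""
    return G * M / (v0 * c)

def schwarzschild_radius(M):
    """Ordinary Schwarzschild radius 2 G M / c^2."""
    return 2.0 * G * M / c ** 2

# For the Sun both come out at about 3 km when v0 = c/2.
print(schwarzschild_radius(M_sun))           # ~2.95e3 m
print(grav_compton_length(M_sun, c / 2.0))   # ~2.95e3 m
```

The mass-independence of GM/(v0 c) is what makes ℏgr a property of the pair (source, interaction) rather than of the test particle.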

We live in a world that is quantal in all scales at the dark matter level. Quantum gravitation is everywhere, not only at the Planck length scale. The Universe is a fractal tensor network, quantum coherent in all scales. This explains also the synchronous rotation of galaxies separated by millions of light-years. Here superstring models get it completely wrong.

See the articles Cosmic string model for the formation of galaxies and stars and Comparing Electric Universe hypothesis and TGD.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, November 20, 2019

Badly behaving blackholes

There is an excellent video (thanks to Howard Lipman for the link) challenging the standard view about blackholes. In the sequel I list some arguments that I remember.

The basic theoretical objection against blackholes was due to Einstein himself: the collapse of matter to a single point is simply impossible. This objection has however been forgotten, since doing calculations is a much more pleasant activity than hard thinking, and an enormous literature has been produced based on this idealization. There is no doubt that blackhole like entities (BHEs) with a size of about the Schwarzschild radius exist, but general relativity does not allow one to say anything about the situation inside the possibly existing horizon.

TGD was born as a solution to the fundamental difficulty of GRT due to the loss of classical conservation laws. In the TGD framework BHEs correspond to volume filling flux tube tangles. Also galactic BHEs would correspond to volume filling flux tube tangles.

In the TGD framework also stars could be seen as BHEs, with the flux tube thickness characterized by a p-adic length scale as an additional parameter. GRT blackholes correspond to a flux tube thickness of about the proton Compton length. For instance, the Sun can be seen as a BHE and its size is predicted correctly (see this).

The model for BHEs makes a large number of correct predictions.

  1. The minimal radii/masses of GRT blackholes and neutron stars are predicted correctly.

  2. Ordinary blackhole thermodynamics is replaced with the thermodynamics associated with the monopole flux tubes carrying the galactic dark mass, characterized by the Hagedorn temperature, and the thermodynamics of gravitational flux tubes, characterized by the Hawking temperature but for the gravitational Planck constant h_gr, so that it is gigantic as compared to the ordinary Hawking temperature.

    In thermal equilibrium these temperatures are the same, and this predicts the hadronic string tension correctly.
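The statement that the Hawking temperature evaluated with h_gr is gigantic can be checked against the standard formula T_H = ℏc³/(8πGMk_B). A minimal sketch (the unscaled formula is standard blackhole thermodynamics; the replacement ℏ → ℏ_gr, which multiplies T_H by the huge ratio ℏ_gr/ℏ, is the TGD proposal):

```python
import math

hbar = 1.0546e-34  # J s
c = 2.998e8        # m/s
G = 6.674e-11      # m^3 kg^-1 s^-2
kB = 1.381e-23     # J/K
M_sun = 1.989e30   # kg

def hawking_temperature(M, hbar_eff=hbar):
    """Hawking temperature T = hbar_eff * c^3 / (8 pi G M k_B)."""
    return hbar_eff * c ** 3 / (8.0 * math.pi * G * M * kB)

# The ordinary Hawking temperature of a solar-mass blackhole is tiny:
print(hawking_temperature(M_sun))  # ~6.2e-8 K

# Passing hbar_eff = hbar_gr instead would scale this up by hbar_gr/hbar,
# which is the sense in which the TGD Hawking temperature is "gigantic".
```

The linearity of T_H in ℏ is the whole point: whatever value ℏ_gr takes, the temperature scales by exactly that ratio.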

Consider now the empirical objections against the BH paradigm in the light of the TGD picture.
  1. The observations by the ALMA telescope show that stars can form surprisingly near galactic BHEs (see this). For instance, 11 young stars just forming have been found at a distance of 3 light-years from the galactic BHE of the Milky Way. This should be impossible, since the intense tidal forces and UV and X ray radiation should make the condensation of stars from gas clouds impossible.

    TGD explanation: Galaxies are formed as tangles on a long thickened cosmic string responsible for galactic dark matter as dark energy. The same mechanism gives rise to stars as sub-tangles, generating at least part of the ordinary matter via the decay of the magnetic energy of the flux tube as it thickens. Ordinary matter already present could concentrate around the tangle.

    One learns from the discussion in the above link that star formation involves a bipolar flow consisting of two jets in opposite directions, believed to take care of angular momentum conservation: the star is thought to form from a rotating gas cloud (rotation would be around the flux tube) having much larger angular momentum, part of which must be carried away by jets naturally parallel to the flux tube. This also supports the view that stars are tangles along a flux tube. There are also hundreds of massive and much older stars in the vicinity of the galactic BHE.

    Note that in TGD also these stars could be seen as BHEs, but with a different p-adic length scale characterizing the thickened flux tube.

  2. "Non-hungry" BHEs are found.

    TGD explanation: In the zero energy ontology on which quantum TGD relies, one must distinguish between BHEs and their time reversals, whitehole like objects (WHEs), analogous to white holes. WHEs would not be "hungry" but would feed matter into the environment. The counterparts of jets would flow into the WHE and matter would flow out from it.

  3. The standard theoretical belief is that in a dense star cluster only a single blackhole can exist. If there are several blackholes, they start to rotate around each other and fuse to a single blackhole. A case with two blackholes has however been observed.

    TGD explanation: A possible explanation is that the objects are WHEs and their behavior is the time reversal of that of BHEs.

  4. The velocities of particles in the jets associated with galactic BHEs are near the light velocity and require extremely high energies and thus strong magnetic fields. No strong magnetic field has however been observed.

    TGD explanation: In TGD Maxwellian magnetic fields are replaced with flux tubes carrying quantized monopole flux, not possible in the Maxwellian world. Their existence makes it possible to understand the presence of magnetic fields in even cosmological scales, the maintenance problem of the Earth's magnetic field, and the recent findings about the magnetic field of Mars. Ordinary magnetic fields correspond to vanishing total flux and are indeed weak: it is these magnetic fields outside the jet which would have been measured. A galaxy is a tangle in a monopole flux tube, and this flux tube is the carrier of the very strong magnetic field associated with the jets parallel to it.

  5. Very distant galactic blackholes, with mutual distances in the scale of millions of light-years, have radio jets in the same direction. This is very difficult to understand in the standard view about cosmology.

    TGD explanation: The galactic BHEs would be associated with the same long cosmic string forming galaxies as tangles.

See the article Brief description of the model for the formation of galaxies and stars.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.