https://matpitka.blogspot.com/2018/07/

Tuesday, July 31, 2018

How did life begin?

The central question of biology is "How did life begin?", and dark variants of biomolecules suggest not only a solution to various paradoxes but also a concrete answer to this question.

The transcription machinery for rRNA, including ribozymes, and the mRNA coding for the proteins associated with ribosomes are central for translation. The DNA coding for rRNA is associated with the nucleolus (see this) at the center of the nucleus.

  1. After the emergence of the first ribosome the ribosomes of the already existing nucleus can take care of the translation of the ribosomal proteins. But how could the first ribosome emerge? This question leads to a paradox bringing to mind self-reference - the basic theme of Gödel, Escher, Bach by Douglas Hofstadter, perhaps the most fascinating and inspiring book I have ever read. The ribosomal proteins associated with the first ribosomes should have been translated using a ribosome, which did not yet exist!

  2. Could the translation of the first ribosomal proteins directly from the dark variants of these proteins solve the paradox? The idea of shadow dynamics induced by the pairing of basic biomolecules with their dark variants even allows one to ask whether replication, transcription, and translation could occur at the dark level, so that dark genes for ribosomes would be transcribed to dark ribosomal RNA and dark mRNA translated to dark AA associated with the ribosomes. These in turn would pair with ordinary ribosomal RNA and AA.

  3. But what about dark variants of ribosomes? One encounters the same paradox with them if they are needed for the translation. Could it be that dark variants of the ribosomes are not needed at all for the translation but would only give rise to ordinary ribosomes by the pairing of basic biomolecules with their dark variants? Dark DNA would pair with dark mRNA, which pairs spontaneously with dark tRNA. Once the ordinary ribosomes are generated from the dark ribosomes by pairing, they could make the translation much faster.

  4. There is however a problem. Both dark RNA and AA correspond to dark nuclear strings. Dark tRNA realized as a nuclear string in the proposed manner does not have a decomposition to dark AA and dark RNA as ordinary tRNA has. The pairing of dark tRNA and dark mRNA should give rise to dark AA and a dark nuclear string - call it X - serving as the analog for the pairing of the mRNA sequence with the "RNAs" of tRNAs in the ordinary translation.

  5. How to identify X? Could the translation be analogous to a reaction vertex in which dark mRNA and dark tRNA meet and give rise to dark AA and X? X cannot be completely trivial. Could X correspond to dark DNA?! If so, the process would transcribe dark DNA to dark RNA and translate dark RNA plus dark tRNA to dark AA plus dark DNA. This would lead to an exponential growth of dark DNA and of the other dark variants of bio-molecules (see the sketch below). This exponential growth would induce an exponential growth of the basic bio-molecules by pairing. Life would emerge! No RNA era or lipid era might be needed. All basic biomolecules or their precursors could emerge even simultaneously - presumably in the presence of lipids - but this is not the only possibility.
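
A minimal bookkeeping model makes the claimed exponential growth explicit. The sketch below is my own illustration, not part of the TGD proposal itself: it assumes an unlimited dark tRNA supply and invented per-step rates for dark transcription (DDNA → DDNA + DmRNA) and for the vertex DmRNA + DtRNA → DAA + DDNA.

    # Toy kinetics for the autocatalytic loop sketched above (Python).
    # Rates and initial amounts are illustrative assumptions only.
    k_tr, k_tl = 0.8, 0.9            # dark transcription and translation rates per step
    ddna, dmrna, daa = 1.0, 0.0, 0.0
    history = [ddna]
    for step in range(30):
        new_mrna = k_tr * ddna       # each dark DNA template yields dark mRNA
        translated = k_tl * dmrna    # each dark mRNA meets an (assumed abundant) dark tRNA
        dmrna += new_mrna - translated
        ddna += translated           # the vertex returns a dark DNA copy...
        daa += translated            # ...and a dark amino acid
        history.append(ddna)
    print(history[-1] / history[-2]) # ratio settles to a constant > 1: exponential growth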

One can take a more precise look at the situation and try to understand the emergence of bio-molecules and their basic reactions as shadows of the dark variants of bio-molecules appearing in dark particle reactions. The basic idea is that the same dark reaction can give rise to several reactions of biomolecules if a varying number of the external dark particles is paired with the corresponding bio-molecules. Under what conditions this pairing could occur is left an open question. Consider now the dark 2→ 2 reactions and the possible reactions obtained by pairing of some particles.
  1. The reaction

    DmRNA+DtRNA→ DAA + DDNA

    gives rise to the translation mRNA+tRNA → AA if the DDNA-DNA pairing does not occur in the final state but the other dark particles are paired with their ordinary variants. If only DmRNA-mRNA and DDNA-DNA pairings occur, the reaction gives the reversal mRNA → DNA of transcription.

    It should be easy to check whether this is allowed by the tensor product decomposition for the group representations associated with dark proton triplets. The same applies to the other reactions considered below.

    If this reaction is possible then also the reversal

    DAA + DDNA → DmRNA+DtRNA.

    can occur. If only DDNA-DNA and DmRNA-mRNA pairings occur, this gives rise to the transcription DNA → mRNA.
    Also the reverse translation AA → mRNA is possible.

  2. One can consider also the reaction

    DmRNA+DtRNA → DAA + DmRNA

    If all pairings except the DAA-AA pairing are present, the outcome is, instead of translation, the replication of mRNA such that the amino acid in tRNA serves as a catalyst. I have considered the possibility that this process preceded the ordinary translation: in a phase transition increasing heff the roles of AA and RNA in tRNA would have changed.

    If this reaction is possible then also its reversal

    DAA + DmRNA → DmRNA+DtRNA

    is allowed. If all pairings except DmRNA-mRNA occur, this gives rise to AA + RNA → tRNA, allowing one to generate tRNA from AA and RNA (not quite RNA).

  3. The replication of the DNA strand would correspond at the dark level to the formation of bound states by the reaction

    DDNA+DDNA→ DDNA +bound DDNA

    in which all particles are paired. The opening of the DNA double strand would correspond to the reverse of this bound state formation.

These dark particle reactions behind the shadow dynamics of life should be describable by an S-matrix, which one might call the S-matrix of life.
  1. For instance for

    DmRNA+DtRNA→ X,

    where X can be DmRNA+DtRNA (nothing happens - forward scattering) or DAA+DDNA and perhaps even DAA+DmRNA, one would have a unitary S-matrix satisfying SS† = Id, giving probability conservation ∑_n p_mn = ∑_n |S_mn|^2 = 1 as a special case. Writing S = 1 + iT, unitarity gives i(T - T†) + TT† = 0, giving additional constraints besides probability conservation.

    For

    DmRNA+DtRNA→ DAA + DDNA

    the non-vanishing elements of T are only between pairs [(DmRNA,DtRNA), (DAA,DDNA)] for which mRNA pairs with tRNA and DNA codes for AA. The unitary matrix would be coded by amplitudes t(DAA,DDNA_i(AA)) satisfying ∑_i p_i(DAA) = p(DDNA+DAA), p_i(DAA) = |t(DAA,DDNA_i(AA))|^2. Here p(DDNA+DAA) = (1-p) Br(DDNA+DAA), where p is the probability that nothing happens (forward scattering) and Br(DDNA+DAA) is the branching ratio to the DDNA+DAA channel, smaller than 1 if Br(DDNA+DmRNA) is non-vanishing. The natural interpretation of p_i(DAA) would be as the probability that DNA_i codes for the amino acid. (A toy numerical illustration of this bookkeeping follows the list below.)

  2. For the reverse reaction

    DAA + DDNA → DmRNA+DtRNA

    it is natural to assume that DtRNA corresponds to any tRNA which pairs with the RNA. The AA associated with this tRNA is always the same, but the counterpart of RNA can vary (wobbling). One can speak of the decomposition of the dark genetic code DmRNA → DtRNA → DAA into a pair of codes mapping DmRNA to DtRNA and DtRNA to DAA. There is a set tRNA_i(mRNA) of tRNAs coding for a given mRNA, and the probabilities p_i(DmRNA) sum up to ∑_i p_i(DmRNA) = (1-p) Br(DmRNA+DtRNA), where p is the probability for forward scattering and Br(DmRNA+DtRNA) is the branching fraction. The natural identification of p_i(DmRNA) is as the probability that mRNA pairs with tRNA_i.
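
The probability bookkeeping above can be illustrated with a toy unitary S-matrix. The sketch below is a generic numerical toy of mine: the channel labels merely stand in for the dark reaction channels, and the matrix is a random unitary rather than one respecting the pairing selection rules.

    # Toy "S-matrix of life" bookkeeping check (numpy/scipy).
    import numpy as np
    from scipy.linalg import expm

    channels = ["DmRNA+DtRNA", "DAA+DDNA_1", "DAA+DDNA_2", "DAA+DmRNA"]  # illustrative
    rng = np.random.default_rng(0)
    A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    H = (A + A.conj().T) / 2                        # Hermitian generator
    S = expm(1j * H)                                # S = exp(iH) is unitary by construction
    assert np.allclose(S @ S.conj().T, np.eye(4))   # S S^dagger = Id

    row = np.abs(S[0]) ** 2          # probabilities from the DmRNA+DtRNA initial state
    p = row[0]                       # forward scattering: nothing happens
    branching = row[1:] / (1 - p)    # Br(channel): these sum to 1
    print(p, dict(zip(channels[1:], branching.round(3))))
    print(row.sum())                 # probability conservation: equals 1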

A possible weak point of the proposal is the pairing: what are the conditions under which it occurs, and are different pairing patterns possible? A possible second weak point is purely group theoretic: one should check which reactions are allowed by the tensor product decompositions for the states of dark proton triplets.

See the article Getting philosophical: some comments about the problems of physics, neuroscience, and biology or the chapter Quantum Mind, Magnetic Body, and Biological Body of "TGD based view about living matter and remote mental interactions".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

More about badly behaving photons

About two years ago I wrote about a strange halving of the unit of angular momentum for photons. The article had the title Badly behaving photons and space-time as 4-surface.

Now, two years after writing those comments, I encountered a popular article (see this) telling about this strange halving of the photon angular momentum unit. I found nothing new, but my immediate reaction was that the finding could be seen as a direct proof for the heff=nh0 hierarchy, where h0 is the minimal value of Planck constant, which need not be the ordinary Planck constant h as I have often assumed in previous writings.

Various arguments indeed support h=6h0. This hypothesis would explain the strange findings about the hydrogen atom having what Mills calls hydrino states with a larger binding energy than the normal hydrogen atom (see this): the increase of the binding energy would follow from the proportionality of the binding energy to 1/heff^2. For n0=6 → n<6 the binding energy is scaled up as (6/n)^2. The values n=1,2,3 dividing 6 are preferred. A second argument supporting h=6h0 comes from the model for color vision (see this).

What is the interpretation of the ordinary photon angular momentum for n=n0=6? Quantization of angular momentum as multiples of ℏ0 reads as l = l0 ℏ0 = (l0/6) ℏ, l0 = 1,2,..., so that fractional angular momenta are possible. l0=6 gives the ordinary quantization, for which the wave function has the same value at all 6 sheets of the covering. l0=3 gives the claimed half-quantization.
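
As a minimal numerical restatement of these two claims (my own illustration, assuming h = 6h0 and the ordinary 13.6 eV hydrogen ground state):

    # Binding energy scales as 1/heff^2: from n0 = 6 to n < 6 it scales up by (6/n)^2.
    H_BINDING_EV = 13.6
    for n in (1, 2, 3):                      # divisors of 6, preferred per the text
        print(n, H_BINDING_EV * (6 / n) ** 2)

    # Angular momenta quantized as l = l0*hbar0 = (l0/6)*hbar:
    # l0 = 3 gives the claimed half-quantization, l0 = 6 the ordinary unit.
    print([l0 / 6 for l0 in range(1, 13)])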

See the article Badly behaving photons and space-time as 4-surface.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, July 30, 2018

TGD approach to the hen-or-egg problems of biology

Standard biology suffers from several hen-or-egg problems. Which came first: genes or metabolism? The problem is that genes require metabolism and metabolism requires genes! Genes-first leads to the vision of an RNA world and metabolism-first to the lipid world idea.

The emergence of the basic biomolecules is a second problem. What selected these relatively few basic molecules from the huge multitude of possible molecules? And again the hen-or-egg problems emerge. Which came first: proteins or the translation machinery producing them from RNA? Did RNA arrive before proteins, or did proteins and the RNAs necessary for the transcription and translation machinery emerge first? One can argue that ribozymes served as catalysts for RNA replication, but how did RNAs manage to emerge without a replication machinery involving ribozymes? What about DNA: did it emerge before RNA or could it have emerged from RNA? It seems that something extremely important is missing from the picture.

TGD predicts the existence of dark variants of the basic biomolecules DNA, RNA, tRNA, and amino-acids (AAs). One can ask whether something very simple could be imagined by utilizing the potential provided by the dark variants of bio-molecules, present already from the beginning and providing both genes and metabolism simultaneously.

One can start from a couple of observations which forced me to clarify some aspects of the TGD view and also to develop an alternative vision of the prebiotic period.

  1. Viruses are probable predecessors of cellular life. So-called positive-sense single-stranded RNA (ssRNA) associated with viruses can temporarily form double strands and in this state replicate just like DNA (see this). The resulting single-stranded RNA can in turn be translated to proteins by using the ribosomal machinery. RNA replication takes place in so-called viral replication complexes associated with internal cell membranes, and is catalyzed by proteins produced by both the virus and the host cell.

    Could ribozyme molecules have catalyzed RNA replication during the RNA era? For this option AA translation would have emerged later and the storage of genetic information to DNA only after that. There is however the question about the emergence of AAs and, of course, of DNA and RNA. What selected just them from the enormous variety of options?

  2. Lipid membranes are formed by a self-organization process from lipids and emerge spontaneously without the help of genetic machinery. It would be surprising if prebiotic life had not utilized this possibility. This idea leads to the notion of lipid life as a predecessor of RNA life. In this scenario metabolism would have preceded genes (see this and this).

Consider now the situation in TGD.
  1. The dark variants of DNA, RNA, AA, and tRNA would provide the analogs of genes and all basic biomolecules present from the beginning, together with lipid membranes whose existence is not a problem. They would also provide a mechanism of metabolism in which energy feed by (say) solar radiation creates so-called exclusion zones (EZs) of Pollack in water bounded by a hydrophilic substance. EZs are negatively charged regions of water giving rise to a potential gradient (the analog of a battery) storing chemically the energy provided by sunlight, and the formation of these regions gives rise to dark nuclei at magnetic flux tubes with scaled-down binding energy.

    When the p-adic length scale of these dark nuclei is reduced, binding energy is liberated as metabolic energy, so that a metabolic energy feed - giving basically rise to states with a non-standard value heff/h=n of Planck constant - is possible. For instance, processes like protein folding and muscle contraction could correspond to this kind of reduction of heff liberating energy, and also to a transformation of dark protons to ordinary protons and the disappearance of EZs.

    The cell interiors are negatively charged, and this is presumably true for the interiors of lipid membranes in general; they would therefore correspond to EZs with part of the protons at magnetic flux tubes as dark nuclei representing dark variants of the basic biomolecules. Already this could have made possible metabolism, the chemical storage of metabolic energy to a potential gradient over the lipid membrane, and also the storage of genetic information to the dark variants of biomolecules at the magnetic flux tubes formed in the Pollack effect.

  2. Biochemistry would have gradually learned to mimic the dark variants of the basic processes as a kind of shadow dynamics. Lipid membranes could have formed spontaneously in water already during the prebiotic phase, when only the dark variants of DNA, RNA, AAs and tRNA, water, lipids, and some simple bio-molecules would have been present. The dark variants of replication, transcription and translation would have been present from the beginning and would still provide the templates for these processes at the level of biochemistry.

    Dark-dark pairing would rely on resonant frequency pairing by dark photons, and dark-ordinary pairing on resonant energy pairing involving a transformation of a dark photon to an ordinary photon. The direct pairing of basic biomolecules with their dark variants by the resonance mechanism could have led to their selection, explaining the puzzle of why so few biomolecules survived.

    This is in contrast with the usual view, in which the emergence of proteins would have required the emergence of the translation machinery, in turn requiring enzymes as catalysts, so that one ends up with the hen-or-egg question: which came first, the translation machinery or proteins? In the RNA life option a similar problem emerges since RNA replication must be catalyzed by ribozymes.

  3. Gradually DNA, RNA, tRNA, and AA would have emerged by pairing with their dark variants by the resonance mechanism. The presence of lipid membranes could have been crucial in catalyzing this pairing. Later ribozymes could have catalyzed RNA replication by the above-mentioned mechanism during the RNA era: note however that the process could be only a shadow of the much simpler replication of dark DNA. One can even imagine membrane RNAs as analogs of the membrane proteins serving as receptors giving rise to ionic channels. Note however that in the TGD framework membrane proteins could have emerged very early via their pairing with the dark AA associated with the membrane. These membrane proteins and their RNA counterparts could have evolved into the transcription and translation machineries.

    DNA molecules would have emerged through pairing with dark DNA molecules. The difference between deoxyribose and ribose would correspond to the difference between dark RNA and dark DNA, manifesting as different cyclotron frequencies and energies making possible the resonant pairing of frequencies and energies. Proteins would have emerged as those proteins able to pair resonantly with the dark variants of amino-acid sequences without any pre-existing translational machinery. It is difficult to say in which order the basic biomolecules would have emerged. They could have emerged even simultaneously by resonant pairing with their dark variants.

See either the article New results in the model of bio-harmony or the article Getting philosophical: some comments about the problems of physics, neuroscience, and biology.

For a summary of earlier postings see Latest progress in TGD.

Sunday, July 29, 2018

Homeostasis and zero energy ontology

Homeostasis means that a system is able to preserve its flow equilibrium under changing conditions. This involves many-layered hierarchies of pairs of control signals with opposite effects so that the system stays in equilibrium. For instance, we could not stand without this control system, as one can easily check by using a non-living test body! In bio-chemical homeostasis the ratios of concentrations remain constant. It is not at all clear whether ordinary chemistry can explain homeostasis.

In zero energy ontology (ZEO) one can imagine a very fundamental mechanism of homeostasis.

  1. Zero energy states are pairs of ordinary 3-D states with members located at the opposite boundaries of a causal diamond (CD). Their total quantum numbers are opposite, which is only a way of saying that the conservation laws hold true. The space-time surfaces connecting the 3-surfaces are preferred extremals of the action principle.

    In quantum field theory this picture can be seen only as a book-keeping trick, and one assumes that space-time continues beyond the causal diamond. There is however no need for this in the TGD framework, although it is natural to assume that there is some largest CD beyond which space-time surfaces do not continue. CDs form a hierarchy, and the sub-CDs of this CD can be connected by minimal surfaces, which are analogs of external particles. One obtains networks analogous to twistor Grassmannian diagrams.

  2. Conscious entities (selves) correspond in ZEO to sequences of state function reductions having an interpretation as weak measurements, "small" state function reductions. In a given weak measurement the members of the zero energy state at the passive boundary of the CD are not affected: this is essentially the Zeno effect associated with repeated measurements in ordinary quantum theory. The members of the state pairs at the active boundary of the CD change, and also the temporal distance between the tips of the CD increases: this assigns a clock time to the experienced flow of time as a sequence of state function reductions.

    Eventually it becomes impossible to find observables whose measurement would leave the passive parts of the zero energy state invariant. The first "big" state function reduction changing the roles of the active and passive boundaries of the CD then takes place, and time begins to run in the opposite direction since the formerly passive boundary recedes away from the formerly active boundary, which is now stationary. The self dies and re-incarnates with an opposite arrow of time. In TGD biology these two time-reversed selves are proposed to correspond to motor actions and sensory perceptions.

    Already Fantappie realized that two arrows of time seem to be present in living matter (consider only the spontaneous assembly of bio-molecules, which looks like decay in the reversed time direction) and introduced the notion of syntropy as time-reversed entropy. For an observer with a given arrow of time, a system with the opposite arrow of time seems to break the second law: temperature and concentration gradients develop, and the system self-organizes.

  3. These two quantal time evolutions with opposite arrows of time look very much like competing control signals in homeostasis. The 4-D conscious entities corresponding to the control signals would have a finite lifetime, so that in their ensemble the effects of the signals with opposite arrows of time tend to compensate. This would give rise to homeostasis.

See the article Getting philosophical: some comments about the problems of physics, neuroscience, and biology or the chapter Quantum Mind, Magnetic Body, and Biological Body.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Which came first: genes or metabolism?

The key hen-or-egg question of biology is which came first, genes or metabolism. Both seem to need the other. The genes-first view has led to the notion of the RNA era as prebiotic biology. The metabolism-first view assumes that lipid life preceded the life controlled by the basic biomolecules. In the TGD framework this hen-or-egg question disappears.

  1. Dark variants of DNA, RNA, tRNA, and AAs are dark proton sequences representing dark nuclei with scaled-down nuclear binding energy. They come in different length scales, and their decay to dark nuclei with a shorter length scale liberates dark nuclear binding energy, which could have been used as metabolic energy during the prebiotic period and could be used so even in recent biology. Therefore genes and metabolism would have emerged simultaneously.

  2. In the so-called Pollack effect the irradiation of water bounded by a hydrophilic material creates a negatively charged exclusion zone (EZ). The positive charge must go somewhere, and the TGD based proposal is that it goes to magnetic flux tubes and forms dark proton sequences providing a representation of DNA, RNA, tRNA, AA and of the vertebrate genetic code.

    This effect occurs for several wavelengths of light, but the infrared light present in thermal radiation provides the most effective manner to generate the charge separation. One can say that the charge separation generates a battery so that the energy of light is transformed to usable chemical energy. EZs have presumably preceded the cell membrane (the cell interior has a negative charge).

    According to Pollack, protein unfolding and folding could involve the Pollack effect and its reverse. Also muscle contraction liberating energy could involve the reverse of the Pollack effect. This mechanism works also for humans, and one can even say that photosynthesis, which induces charge separation by splitting water to H+ and OH-, works also for animals but in a somewhat more complex manner. This could explain the positive effects of sunlight on well-being. Some spiritually oriented people are even claimed to survive by using only sunlight as metabolic energy: light and water would be enough.

  3. In this framework one can consider the possibility of lipid membranes living in symbiosis with the dark variants of DNA, RNA, tRNA, and AAs during the prebiotic era. Membranes would have generated charge separation by the Pollack effect in order to transform the energy of light to chemical energy usable as metabolic energy. The resulting dark nuclei at magnetic flux tubes would have given rise to a dark realization of the genetic code, realized later chemically, perhaps in terms of some simpler molecules attached to the dark variants of DNA, RNA or AA. Replication during this period could have been simply a splitting of the membrane to two pieces, perhaps induced by the splitting of the magnetic body in the manner described.

See the article Getting philosophical: some comments about the problems of physics, neuroscience, and biology or the chapter Quantum Mind, Magnetic Body, and Biological Body.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Friday, July 27, 2018

Two new findings related to high Tc super-conductivity

I learned simultaneously about two findings related to high Tc superconductivity, leading to a proposal of a general mechanism of bio-control in which a small signal can serve as a control knob inducing a phase transition producing macroscopically quantum coherent large heff phases in living matter.

1. High Tc superconductivity at room temperature and pressure

Indian physicists Kumar Thapa and Anshu Pandey have found evidence for superconductivity at ambient (room) temperature and pressure in nanostructures (see this). There are also earlier claims about room temperature superconductivity that I have discussed in my writings.

1.1 The effect

Here is part of the abstract of the article by Kumar Thapa and Anshu Pandey.

We report the observation of superconductivity at ambient temperature and pressure conditions in films and pellets of a nanostructured material that is composed of silver particles embedded into a gold matrix. Specifically, we observe that upon cooling below 236 K at ambient pressures, the resistance of sample films drops below 10^-4 Ohm, being limited by instrument sensitivity. Further, below the transition temperature, samples become strongly diamagnetic, with volume susceptibilities as low as -0.056. We further describe methods to tune the transition to temperatures higher than room temperature.

During years I have developed a TGD based model of high Tc superconductivity and of bio-superconductivity (see this and this).

Dark matter is identified as phases of ordinary matter with a non-standard value heff/h=n of Planck constant (see this) (h=6h0 is the most plausible option). Charge carriers are heff/h0=n dark macroscopically quantum coherent phases of the ordinary charge carriers at magnetic flux tubes along which the supra current can flow. The only source of dissipation relates to the transfer of ordinary particles to the flux tubes, which involves also a phase transition changing the value of heff.

This superconductivity is essential also for microtubules, which exhibit signatures for the generation of this kind of phase at critical frequencies of AC voltage serving as a metabolic energy feed providing for the charged particles the energy that they need in the heff/h0=n phase.

Large heff phases with the same parameters as the ordinary phase typically have larger energies than the ordinary phase. For instance, atomic binding energies scale like 1/heff^2, and cyclotron energies and harmonic oscillator energies quite generally like heff. A free particle in a box is however quantum critical in the sense that the energy scale E = ℏeff^2/(2mL^2) does not depend on heff if one has L ∝ heff. At the space-time level this is true quite generally for external (free) particles identified as minimal 4-surfaces. Quantum criticality means independence of various coupling parameters.
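
These scaling rules are easy to restate numerically. The following sketch of mine (arbitrary units with hbar0 = m = 1) shows the 1/heff^2 and heff scalings and the cancellation that makes the particle in a box quantum critical:

    # Scaling of characteristic energies with heff = n * h0 (illustrative units).
    def atomic_binding(n):            # E_B ~ 1/heff^2
        return 1.0 / n**2

    def cyclotron(n, B=1.0):          # E_c ~ heff
        return n * B

    def particle_in_box(n, L0=1.0):   # E = heff^2/(2 m L^2) with L ~ heff
        heff, L = n, n * L0
        return heff**2 / (2 * L**2)   # heff cancels: quantum criticality

    for n in (1, 2, 6, 12):
        print(n, atomic_binding(n), cyclotron(n), particle_in_box(n))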

What is interesting is that Ag and Au have a single valence electron. The obvious guess would be that the valence electrons become dark and form Cooper pairs in the transition to superconductivity. It is also interesting that the basic claim of layman researcher David Hudson is that ORMEs, or mono-atomic elements as he calls them, include also gold. These claims are of course not taken seriously by academic researchers. In the language of quantum physics the claim is that ORMEs behave like macroscopic quantum systems. I decided to play with the thought that the claims are correct, and this hypothesis served later as one of the motivations for the hypothesis about dark matter as large heff phases: this hypothesis follows from adelic physics (see this), which is a number theoretical generalization of ordinary real number based physics.

The TGD explanation of high Tc superconductivity and its biological applications strongly suggests that a feed of "metabolic" energy is a prerequisite of high Tc superconductivity quite generally. The natural question is whether experimenters might have found something suggesting that an external energy feed - usually seen as a prerequisite for self-organization - is involved with high Tc superconductivity. During the same day I got a FB link to another interesting finding related to high Tc superconductivity in cuprates, suggesting a positive answer to this question!

1.2 The strange observation of Brian Skinner about the effect

After writing the above comments I learned from a popular article (see this) about an objection (see this) challenging the claimed discovery (see this). The claimed finding received a lot of attention, and physicist Brian Skinner at MIT decided to test the claims. At first the findings looked quite convincing to him. He however decided to look at the noise in the measured values of the volume susceptibility χV. χV relates the magnetic field B in the superconductor to the external magnetic field Bext via the formula B = (1+χV)Bext (in units with μ0=1 one has Bext=H, where usually H is used).

For diamagnetic materials χV is negative since they tend to repel external magnetic fields. For superconductors one has χV=-1 in the ideal situation. The situation is not however ideal, and a stepwise change of χV from χV=0 to some negative value satisfying |χV| < 1 serves as a signature of high Tc superconductivity. Both the superconducting and the ordinary phase would be present in the sample.

Figure 3a of the authors' article gives χV as a function of temperature for some values of Bext, with the color of the curve indicating the value of Bext. Note that χV depends on Bext, whereas in a strictly linear situation it would not do so. There is indeed a transition at the critical temperature Tc=225 K reducing χV=0 to a negative value in the range χV ∈ [-0.05, -0.06] having no visible temperature dependence but decreasing somewhat with Bext.

The problem is that the fluctuations of χV for the green curve (Bext=1 Tesla) and the blue curve (Bext=0.1 Tesla) have the same shape, the blue curve being only shifted downward relative to the green one (the shift corresponds to somewhat larger diamagnetism for the lower value of Bext). If I have understood correctly, the finding applies only to these two curves and to one sample corresponding to Tc=256 K. The article reports superconductivity with Tc varying in the range [145,400] K.

The pessimistic interpretation is that this part of the data is fabricated. A second possibility is that a human error is involved. The third interpretation would be that the random looking variation with temperature is not a fluctuation but represents a genuine temperature dependence: this possibility looks infeasible but can be tested by repeating the measurements or simply by looking at whether it is present in the other measurements.

1.3 TGD explanation of the effect found by Skinner

One should understand why the effect found by Skinner occurs only for certain pairs of magnetic field strengths Bext and why the shape of the pseudo fluctuations is the same in these situations.

Suppose that Bext is realized as flux tubes of fixed radius. The magnetization is due to the penetration of the magnetic field into the ordinary fraction of the sample as flux tubes. Suppose that the superconducting flux tubes are assignable to 2-D surfaces as in high Tc superconductivity. Could the fraction of superconducting flux tubes with a non-standard value of heff depend on the magnetic field and temperature in a predictable manner?

The pseudo fluctuation should have the same shape as a function of temperature for the two values of the magnetic field involved but not for other pairs of magnetic field strengths.

  1. Concerning the selection of only preferred pairs of magnetic fields, the de Haas-van Alphen effect gives a clue. As the intensity of the magnetic field is varied, one observes the so-called de Haas-van Alphen effect (see this), used to deduce the shape of the Fermi sphere: the magnetization and some other observables vary periodically as a function of 1/B. In particular, this is true for χV.

    The value of the period P is

    P_H-A ≡ 1/B_H-A = 2π e/(ℏ S_e) ,

    where S_e is the extremal Fermi surface cross-sectional area in the plane perpendicular to the magnetic field, which can be interpreted as the area of the electron orbit in momentum space (for an illustration see this).

    The de Haas-van Alphen effect can be understood in the following manner. As B increases, the cyclotron orbits contract. For certain increments of 1/B the (n+1):th orbit is contracted to the n:th orbit, so that the sets of the orbits are identical for values of 1/B appearing periodically. This causes the periodic oscillation of, say, the magnetization. From this one learns that the electrons rotating at the magnetic flux tubes of Bext are responsible for the magnetization.


  2. One can get a more detailed theoretical view about the de Haas-van Alphen effect from the article of Lifshitz and Kosevich (see this). In a reasonable approximation one can write

    P = e ℏ/(m_e E_F) = [4α/(3^{2/3} π^{1/3})] × [1/B_e] , B_e ≡ e/a_e^2 = x^{-2} × 16 Tesla ,

    a_e = (V/N)^{1/3} = x a , a = 10^{-10} m .

    Here N/V corresponds to the valence electron density, assumed to form a free Fermi gas with Fermi energy E_F = ℏ^2 (3π^2 N/V)^{2/3}/(2m_e). a = 10^{-10} m corresponds to the atomic length scale and α ≈ 1/137 is the fine structure constant. For P one obtains the approximate expression

    P ≈ 0.15 x^2 Tesla^{-1} .

    If the difference Δ(1/Bext) for Bext=1 Tesla and Bext=0.1 Tesla corresponds to a k-multiple of P, one obtains the condition

    k x^2 ≈ 60 .

  3. Suppose that Bext,1=1 Tesla and Bext,2=0.1 Tesla differ by a period P of the de Haas-van Alphen effect. This would predict the same value of χV for the two field strengths, which is not true. The formula used for χV however holds true only inside a given flux tube: call this value χV,H-A.

    The fraction f of flux tubes penetrating into the superconductor can depend on the value of Bext, and this could explain the deviation. f can depend also on the temperature. The simplest guess is that the two effects separate:

    χV = χV,H-A(B_H-A/Bext) × f(Bext,T) .

    Here χV,H-A has the period P_H-A as a function of 1/Bext, and f characterizes the fraction of penetrated flux tubes.

  4. What could one say about the function f(Bext,T)? B_H-A = 1/P_H-A has the dimensions of a magnetic field and depends on 1/Bext periodically. The dimensionless ratio Ec,H-A/T of the cyclotron energy Ec,H-A = ℏ e B_H-A/m_e to the thermal energy T, together with Bext itself, could serve as the arguments of f(Bext,T) so that one would have

    f(Bext,T) = f1(Bext) f2(x) ,

    x = T/Ec,H-A(Bext) .

    One can consider also the possibility that Ec,H-A is a cyclotron energy with ℏeff = n ℏ0 and therefore larger than otherwise. For heff=h and Bext=1 Tesla one would have Ec = 0.8 K, which is of the same order of magnitude as the variation scale of the pseudo fluctuation. For instance, periodicity as a function of x might be considered.

    If Bext,1=1 Tesla and Bext,2=0.1 Tesla differ by a period P one would have

    χV(Bext,1,T)/χV(Bext,2,T) =f1(Bext,1)/f1(Bext,2)

    independently of T. For arbitrary pairs of magnetic fields this does not hold true. This property and also the predicted periodicity are testable.
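
The predicted signature can be made concrete with a synthetic test. In the sketch below (my own; all functional forms and numbers are invented placeholders, with P = 0.15 Tesla^-1 taken from the estimate above) the ratio χV(Bext,1,T)/χV(Bext,2,T) is T-independent exactly when 1/Bext,1 and 1/Bext,2 differ by a multiple of the period:

    # Synthetic check of the factorized susceptibility ansatz described above.
    import numpy as np

    P = 0.15                                  # dHvA period in 1/Tesla for x = 1
    def chi_HA(inv_B): return -0.05 + 0.01 * np.sin(2 * np.pi * inv_B / P)
    def E_cHA(B):      return 100.0 * (1.0 + 0.5 * np.sin(2 * np.pi / (B * P)))
    def f1(B):         return 1.0 + 0.1 * np.log(B)
    def chi_V(B, T):   return chi_HA(1.0 / B) * f1(B) * np.exp(-T / E_cHA(B))

    T = np.linspace(150, 250, 5)              # temperatures in Kelvin
    B1 = 1.0
    B2 = 1.0 / (1.0 / B1 + 60 * P)            # 60 periods away: B2 = 0.1 Tesla (k x^2 = 60)
    B3 = 0.5                                  # a generic field, not period-matched
    print(chi_V(B1, T) / chi_V(B2, T))        # constant in T: the predicted signature
    print(chi_V(B1, T) / chi_V(B3, T))        # varies with T for a generic pair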


2. Transition to high Tc superconductivity involves positive feedback

The discovery of positive feedback in the transition to high Tc superconductivity is described in the popular article "Physicists find clues to the origins of high-temperature superconductivity" (see this). Haoxian Li et al at the University of Colorado at Boulder and the Ecole Polytechnique Federale de Lausanne have published a paper on their experimental results, obtained by using ARPES (Angle Resolved Photoemission Spectroscopy), in Nature Communications (see this).

The article reports the discovery of a positive feedback loop that greatly enhances the superconductivity of cuprate superconductors. The abstract of the article is here.

Strong diffusive or incoherent electronic correlations are the signature of the strange-metal normal state of the cuprate superconductors, with these correlations considered to be undressed or removed in the superconducting state. A critical question is if these correlations are responsible for the high-temperature superconductivity. Here, utilizing a development in the analysis of angle-resolved photoemission data, we show that the strange-metal correlations don’t simply disappear in the superconducting state, but are instead converted into a strongly renormalized coherent state, with stronger normal state correlations leading to stronger superconducting state renormalization. This conversion begins well above Tc at the onset of superconducting fluctuations and it greatly increases the number of states that can pair. Therefore, there is positive feedback––the superconductive pairing creates the conversion that in turn strengthens the pairing. Although such positive feedback should enhance a conventional pairing mechanism, it could potentially also sustain an electronic pairing mechanism.

The explanation of the positive feedback in the TGD framework could be the following. The formation of dark electrons requires "metabolic" energy. The combination of dark electrons to Cooper pairs however liberates energy. If the liberated energy is larger than the energy needed to transform the electrons to their dark variants, it can transform more electrons to the dark state so that one obtains a spontaneous transition to high Tc superconductivity. The condition for positive feedback could serve as a criterion in the search for materials allowing high Tc superconductivity.
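
A toy energy-balance iteration (my own sketch; the two energies are illustrative numbers, not material parameters) makes the runaway condition explicit: if pairing liberates more than the cost of darkening the two electrons of a pair, the dark population grows geometrically.

    # Toy bookkeeping for the feedback loop: darkening one electron costs E_dark,
    # binding two dark electrons into a Cooper pair liberates E_pair.
    E_dark, E_pair = 1.0, 2.5        # runaway requires E_pair > 2 * E_dark
    dark = 2.0                       # seed dark electrons from an initial energy feed
    for step in range(8):
        liberated = (dark / 2) * E_pair      # all dark electrons pair up
        dark = liberated / E_dark            # liberated energy darkens new electrons
        print(step, round(dark, 2))          # grows by the factor E_pair/(2*E_dark)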

The mechanism could be fundamental in TGD inspired quantum biology. The spontaneous occurrence of the transition would make it possible to induce large scale phase transitions by using a very small signal acting therefore as a kind of control knob. For instance, it could apply to bio-superconductivity in the TGD sense, and also to the transition of protons to dark proton sequences giving rise to dark analogs of nuclei with a scaled-down nuclear binding energy at magnetic flux tubes explaining the Pollack effect. This transition could be also essential in the TGD based model of "cold fusion" based also on the analog of the Pollack effect. It could be also involved with the TGD based model for the finding of a macroscopic quantum phase of microtubules induced by AC voltage at critical frequencies (see this).

See the article Two new findings related to high Tc super-conductivity or the chapter Quantum criticality and dark matter of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, July 17, 2018

How to describe family replication phenomenon gauge theoretically?

In the TGD framework the family replication phenomenon is described topologically (see this). The problem is to modify the gauge theory approach of the standard model to describe the family replication phenomenon at the QFT limit.

1. Identification of elementary particles

1.1 Original picture

The original view about the family replication phenomenon assumed that fermions correspond to a single boundary component of the space-time surface (a liquid bubble is a good analogy), thus characterized by the genus g telling the number of handles attached to a sphere to obtain the bubble topology.

  1. Ordinary bosons would correspond to the g=0 (spherical) topology, and the absorption/emission of a boson would correspond to a 2-D topological sum in either time direction. This interpretation conforms with the universality of the ordinary electroweak and color interactions.

  2. The genera of a particle and an antiparticle would have formally opposite signs, and the total genus would be conserved in the reaction vertices. This makes sense if the annihilation of a fermion and an anti-fermion to a g=0 boson means that the fermion turns backwards in time emitting the boson. The vertex is essentially a 2-D topological sum at criticality between two manifold topologies. In the vertex the 2-surface would therefore be a singular manifold. The analogy to closed string emission in string models is obvious.

1.2 The recent vision

Later the original picture was replaced with a more complex identification.

  1. Fundamental particles - partons - serving as building bricks of elementary particles are partonic 2-surfaces identified as throats of wormhole contacts at which the Euclidian signature of the induced metric of the wormhole contact changes to Minkowskian. The orbit of a partonic 2-surface corresponds to a light-like 3-surface at which the Minkowskian signature of the induced metric changes to Euclidian, and carries fermion lines defining the boundaries of string world sheets. Strings connect different wormhole throats and mean a generalization of the notion of a point like particle, leading to the notion of tensor network (see this).

    Elementary particles are pairs of two wormhole contacts. Both fermions and bosons are pairs of string like flux tubes at parallel space-time sheets, connected at their ends by CP2 sized wormhole contacts having an Euclidian signature of the induced metric. A non-vanishing monopole flux loop runs around the extremely flattened rectangular loop connecting the wormhole throats at both space-time sheets and traverses through the contacts.

  2. The throats of the wormhole contacts are characterized by the genus given by the number g of handles attached to a sphere to get the topology. If the genera ga, gb of the opposite throats of a given wormhole contact are the same, one can assign a genus to it: g=ga=gb. This can be defended by the fact that the distance between the throats is given by the CP2 length scale and is thus extremely short, so that ga≠ gb implies strong gradients and, by the Uncertainty Principle, a mass of the order of the CP2 mass.

    If the genera of the two wormhole contacts are the same, g1=g2, one can assign the genus g to the particle. This assumption is more questionable if the distance between the contacts is of the order of the Compton length of the particle. The most general assumption is that all genera can be different.

  3. There is an argument for why only the 3 lowest fermion generations are observed (see this). Assume that the genus g is the same for all 4 throats. For g=0,1,2 the partonic 2-surfaces are always hyper-elliptic, thus allowing a global conformal Z2 symmetry. Only these 3 2-topologies would be realized as elementary particles, whereas higher generations would be either very heavy or analogous to many-particle states with a continuum mass spectrum. For the latter option the g=0 and g=1 states could be seen as vacuum and single particle states whereas the g=2 state could be regarded as a 2-particle bound state. The absence of bound n-particle states with n>2 implies a continuous mass spectrum.

  4. Fundamental particles would have a wave function in the conformal moduli space associated with the genus (Teichmueller space). For fundamental fermions the wave function would be strongly localized to a single genus. For ordinary bosons one would have maximal mixing with the same amplitude for the appearance of the wormhole throat topology for all genera g=0,1,2. For the two other u(3)g neutral bosons in the octet one would have different mixing amplitudes; the charge matrices would be orthogonal, and the universality of the couplings to ordinary fermions would be broken for them. The evidence for the breaking of universality (see this) is indeed accumulating, and exotic u(3)g neutral gauge bosons giving effectively rise to two additional boson families could explain this.

2. Two questions related to bosons and fermions

What about gauge bosons and Higgs, whose quantum numbers are carried by a fermion and an anti-fermion (or actually a superposition of fermion-anti-fermion pairs)? There are two options.

  1. Option I: The fermion and anti-fermion for an elementary boson are located at the opposite throats of a wormhole contact, as indeed assumed hitherto. This would explain the point-likeness of elementary bosons. u(3)g charged bosons having different genera at the opposite throats would have vanishing couplings to ordinary fermions and bosons. Together with the large mass of ga≠ gb wormhole contacts this could explain why ga≠ gb bosons and fermions are not observed and would put the Cartan algebra of u(3)g in a physically preferred position. Ordinary fermions would effectively behave as a u(3)g triplet.

  2. Option II: The fermion and anti-fermion for an elementary boson are located at the throats of different wormhole contacts, making them non-point like string like objects. For hadron like stringy objects, in particular the graviton, the quantum numbers would necessarily reside at both ends of the flux tube if one assumes that a single wormhole throat carries at most one fermion or anti-fermion. For this option also ordinary fermions could couple to (probably very massive) exotic bosons with different genera at the second end of the flux tube.

There are also two options concerning the representation of u(3)g assignable to fermions, corresponding to the su(3)g triplet 3 and to 8 ⊕ 1.

Option I: Since only the wormhole throat carrying the fermionic quantum numbers is active, and since fundamental fermions naturally correspond to u(3)g triplets, one can argue that the wormhole throat carrying the fermion quantum numbers determines the fermionic u(3)g representation, which should therefore be 3 for fermions and 3bar for anti-fermions.

At the fundamental level also bosons would be in the tensor products of these representations, and the many-sheeted description would use these representations. Also the description of graviton-like states involving fermions at all 4 wormhole throats would be natural in this framework. At the gauge theory limit the sheets would be identified, and in the most general case one would need U(3)g× U(3)g× U(3)g× U(3)g with the factors assignable to the 4 throats.

  1. The description of weak massivation as weak confinement based on the neutralization of weak isospin requires a pair of left and right handed neutrinos, with νL and νbarR or their CP conjugates located at the opposite throats of the passive wormhole contact associated with the fermion. Already this in principle requires 4 throats at the fundamental level. The right-handed neutrino however carries vanishing electro-weak quantum numbers, so that it is effectively absent at the QFT limit.

  2. Why should fermions be localized and su(3)g neutral bosons delocalized with respect to genus? If g labels the states of the u(3)g triplet 3, the localization of fermions looks natural, and the mixing for bosons occurs only in the Cartan algebra of u(3)g: only u(3)g neutral states can mix.

Option II: Also elementary fermions belong to 8 ⊕ 1. The simplest assumption is that both fermions and bosons having g1≠ g2 have a large mass. In any case, g1≠ g2 fermions would couple only to u(3)g charged bosons. Also for this option ordinary bosons with a unit charge matrix for u(3)g would couple in a universal manner.
  1. The model for CKM mixing (see this) would be modified in a trivial manner. The mixing of ordinary fermions would correspond to different topological mixings of the three su(3)g neutral fermionic states for U and D type quarks and for charged leptons and neutrinos. One could reduce the model to the original one by assuming that fermions do not correspond to the generators Id, Y, and I3 of su(3)g but to their linear combinations giving localization to a single value of g in good approximation: they would correspond to the diagonal elements e_aa, a=1,2,3, corresponding to g=0,1,2.

  2. p-Adic mass calculations (see this) assuming a fixed genus for each fermion predict an exponential sensitivity of the mass on the genus of the fermion. In the general case this prediction would be lost, since one would have a weighted average over the masses of different genera with g=2 dominating exponentially. The above recipe would cure also this problem. Therefore it seems that one cannot distinguish between the two options allowing g1≠ g2. The differences emerge only when all 4 wormhole throats are dynamical, and this is the case for graviton-like states (spin 2 requires all 4 throats to be active).

The conclusion seems to be that the two options are more or less equivalent for light fermions. In the case of exotic fermions, expected to be extremely heavy, the 8 ⊕ 1 option looks more natural. At this limit, however, the QFT limit need not make sense anymore.

3. Reaction vertices

Consider next the reaction vertices for the option in which particles correspond to string like objects identifiable as pairs of flux tubes at opposite space-time sheets, carrying monopole magnetic fluxes and with their ends connected by wormhole contacts.

  1. A reaction vertex looks like a simultaneous fusing of two open strings along their ends at the given space-time sheets. The string ends correspond to wormhole contacts which fuse together completely. The vertex is a generalization of the Y-shaped 3-vertex of a Feynman diagram. Also the 3-surfaces assignable to the particles meet in the same manner in the vertex. The partonic 2-surface at the vertex would be a non-singular manifold, whereas the partonic orbit would be a singular manifold in analogy with the Y-shaped portion of a Feynman diagram.

  2. In the most general case the genera of all four throats involved can be different. Since the reaction vertex corresponds to a fusion of wormhole contacts characterized in the general case by (g1,g2), one must have (g1,g2)=(g3,g4). The rule would correspond in the gauge theory description to the condition that the quark and antiquark su(3)g charges are opposite at both throats in order to guarantee charge conservation as the wormhole contact disappears.

  3. One has effectively pairs of open strings fusing along their ends, and the situation is analogous to that in open string theory, described there in terms of Chan-Paton factors. This suggests that a gauge theory description makes sense at the QFT limit.

    1. If g is the same for all 4 throats, one can characterize the particle by its genus. The intuitive idea is that fermions form a triplet representation of the u(3)g assignable to family replication. In the bosonic sector one would have only u(3)g neutral bosons. This approximation is expected to be excellent.

    2. One could allow g1≠ g2 for the wormhole contacts but assume the same g for opposite throats. In this case one would have U(3)g× U(3)g as a dynamical gauge group, with the U(3)g factors associated with the different wormhole contacts. String like bosonic objects (hadron like states) could therefore be seen as a nonet of u(3)g. Fermions could be seen as a triplet.

      Apart from the topological mixing inducing CKM mixing, fermions correspond in good approximation to a single genus, so that the neutral members of the u(3)g nonet, which are superpositions over several genera, must mix to produce states for which the mixing of genera is small. One might perhaps say that the topological mixing of genera and the mixing of u(3)g neutral bosons are anti-dual.

    3. If all throats can have a different genus, one would have U(3)g× U(3)g× U(3)g× U(3)g as a dynamical gauge group, with the U(3)g factors associated with the different wormhole throats. Also fermions could be seen as nonets. This option is probably rather academic.

4. What would the gauge theory description of family replication phenomenon look like?

For the most plausible option the bosonic states would involve a pair of fermion and anti-fermion at the opposite throats of a wormhole contact. Bosons would be characterized by the adjoint representation of u(3)g = su(3)g ⊕ u(1)g obtained as the tensor product of the fermionic triplet representations 3 and 3bar.

  1. u(1)g would correspond to the ordinary gauge bosons coupling to the ordinary fermion generations in the same universal manner, giving rise to the universality of the electroweak and color interactions.

  2. The remaining gauge bosons would belong to the adjoint representation of su(3)g. One indeed expects symmetry breaking: the two neutral gauge bosons would be light, whereas the charged bosons would be extremely heavy, so that it is not clear whether the QFT limit makes sense for them.

    Their charge matrices Qgi would be orthogonal with each other (Tr(QgiQgj)=0, i≠ j) and with the unit u(1)g charge matrix Q0 ∝ Id (Tr(Qgi)=0) assignable to the ordinary gauge bosons. These charge matrices act on fermions and correspond to the fundamental representation of su(3)g. They are expressible in terms of the Gell-Mann matrices λi (see this).

How to describe family replication for gauge bosons in a gauge theory framework? A minimal extension of the gauge group containing the product of the standard model gauge group and U(3)g does not look promising, since it would bring in additional generators and additional exotic bosons with no physical interpretation. This extension would be analogous to the extension of the product SU(2)× SU(3) of the spin group SU(2) and Gell-Mann's SU(3) to SU(6). The same is true about the separate extensions of U(2)ew and SU(3)c.
  1. One could start from an algebra formed as a tensor product of the standard model gauge algebra g = su(3)c× u(2)ew and an algebraic structure formed somehow from the generators of u(3)g. The generators would be

    Ji,a= Ti ⊗ Ta ,

    where i labels the standard model Lie-algebra generators and a labels the generators of u(3)g.

    This algebra should be a Lie-algebra and reduce to the same as that associated with the standard model gauge group, with the generators Tb effectively replacing complex numbers as coefficients. A mathematician would probably say that the standard model Lie algebra is extended to a module with coefficients given by the u(3)g Lie algebra generators in the fermionic representation, but with the Lie algebra product of u(3)g replaced by a product consistent with the standard model Lie-algebra structure, in particular with the Jacobi-identities.

  2. By writing explicitly the commutators and Jacobi identities one obtains that the product must be symmetric, Ta• Tb = Tb• Ta, and must satisfy the conditions Ta• (Tb• Tc) = Tb• (Tc• Ta) = Tc• (Ta• Tb), since these terms appear as coefficients of the double commutators appearing in the Jacobi-identities

    [J_{i,a},[J_{j,b},J_{k,c}]] + [J_{j,b},[J_{k,c},J_{i,a}]] + [J_{k,c},[J_{i,a},J_{j,b}]] = 0 .

    Commutativity reduces the conditions to an associativity condition for the product •. For the sub-algebra u(1)g^3 these conditions are trivially satisfied.

  3. In order to understand the conditions in the fundamental representation of su(3), one can consider the su(3)g product defined by the anti-commutator in the matrix representation provided by the Gell-Mann matrices λa (see this and this):

    {λa,λb} = (4/3) δ_ab Id + 2 d_abc λc , Tr(λa λb) = 2 δ_ab , d_abc = (1/4) Tr({λa,λb} λc) .

    d_abc is totally symmetric under the exchange of any pair of indices, so that the product defined by the anti-commutator is both commutative and associative. The product extends to u(3)g by defining the anti-commutator of Id with λa in terms of the matrix product. The product is consistent with the su(3)g symmetries so that these dynamical charges are conserved. For complexified generators this means that a generator and its conjugate have a non-vanishing coefficient of Id.

    Remark: The direct sum u(n) ⊕ u(n)s formed by the Lie-algebra u(n) and its copy u(n)s endowed with the anti-commutator product • defines a super-algebra when one interprets the anti-commutator of u(n)s elements as an element of u(n).

  4. Could the su(3) associated with 3 fermion families be somehow special? This is not the case. The conditions can be satisfied for all groups SU(n), n≥ 3, in the fundamental representation, since they all allow the completely symmetric structure constants d_abc as well as completely symmetric higher structure constants d_abc... with up to n indices. This follows from the associativity of the symmetrized tensor product: ((Adj⊗ Adj)S⊗ Adj)S = (Adj⊗ (Adj⊗ Adj)S)S for the adjoint representation.
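
The ingredients used above are easy to check numerically. The following numpy sketch of mine verifies, in the standard conventions, the anti-commutator decomposition of the Gell-Mann matrices and the total symmetry of d_abc:

    # Check {la, lb} = (4/3) delta_ab Id + 2 d_abc lc and the symmetry of d_abc.
    import numpy as np
    from itertools import permutations

    l = np.zeros((8, 3, 3), dtype=complex)
    l[0] = [[0,1,0],[1,0,0],[0,0,0]];    l[1] = [[0,-1j,0],[1j,0,0],[0,0,0]]
    l[2] = [[1,0,0],[0,-1,0],[0,0,0]];   l[3] = [[0,0,1],[0,0,0],[1,0,0]]
    l[4] = [[0,0,-1j],[0,0,0],[1j,0,0]]; l[5] = [[0,0,0],[0,0,1],[0,1,0]]
    l[6] = [[0,0,0],[0,0,-1j],[0,1j,0]]; l[7] = np.diag([1,1,-2]) / np.sqrt(3)

    # d_abc = (1/4) Tr({la, lb} lc), built from the two cyclic traces
    d = np.real(np.einsum('aij,bjk,cki->abc', l, l, l)
                + np.einsum('bij,ajk,cki->abc', l, l, l)) / 4

    print(all(np.allclose(d, d.transpose(p)) for p in permutations(range(3))))
    print(all(np.allclose(l[a] @ l[b] + l[b] @ l[a],
                          (4/3) * np.eye(3) * (a == b)
                          + 2 * np.einsum('c,cij->ij', d[a, b], l))
              for a in range(8) for b in range(8)))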

To sum up, the QFT description of the family replication phenomenon with the extension of the standard model gauge group would bring into the theory the commutative and associative algebra of u(3)g as a new mathematical element. In the case of ordinary fermions and bosons, and also in the case of u(3)g neutral bosons, the formalism would however be a rather trivial modification of the intuitive picture.

See the article Topological description of family replication and evidence for higher gauge boson generations.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Sunday, July 15, 2018

Two problems with a common solution: the problem of two Hubble constants and the enormous value of the cosmological constant

The discrepancy between the two determinations of the Hubble constant has led to a suggestion that new physics might be involved (see this).

  1. The Planck observatory deduces the Hubble constant H giving the expansion rate of the Universe from CMB data something like 360,000 years after the Big Bang, that is from the properties of the cosmos in long length scales. Riess's team deduces H from data in short length scales by starting from the galactic length scale, identifying standard candles (Cepheid variables), using these to deduce a distance ladder, and deducing the recent value of H(t) from the redshifts.

  2. The result from short length scales is 73.5 km/s/Mpc and from long length scales 67.0 km/s/Mpc deduced from CMB data. At short length scales the Universe appears to expand faster. These results differ too much from each other: the ratio of the values is about 1.1. A mere 10 percent discrepancy leads to conjectures about new physics: cosmology has become a rather precise science!

TGD could provide this new physics. I have considered this problem already earlier but have not found a really satisfactory understanding. The following represents a new attempt in this respect.
  1. The notions of length scale and fractality are central in TGD inspired cosmology. Many-sheeted space-time forces one to consider space-time always in some length scale, and p-adic length scales define the length scale hierarchy closely related to the hierarchy of Planck constants heff/h0=n related to dark matter in the TGD sense. Parameters such as the Hubble constant depend on the length scale, and the measured values differ because the measurements are carried out in different length scales.

  2. The new physics should relate to some deep problem of recent-day cosmology. The cosmological constant Λ certainly fits the bill. By theoretical arguments Λ should be huge, making it impossible even to speak about recent-day cosmology; the observed Λ is incredibly small.

  3. TGD predicts a hierarchy of space-time sheets characterized by p-adic length scales L(k), so that the cosmological constant Λ depends on the p-adic length scale as Λ ∝ 1/GL(k)^2, where p ≈ 2^k is the p-adic prime characterizing the size scale of the space-time sheet defining the sub-cosmology. The p-adic length scale evolution of the Universe involves a sequence of phase transitions increasing the value of L(k). Long scales L(k) correspond to much smaller values of Λ.

  4. The vacuum energy contribution to the mass density, proportional to Λ, goes like 1/L^2(k), which is roughly 1/a^2, where a is the light-cone proper time defining the "radius" a = R(t) of the Universe in the Robertson-Walker metric ds^2 = dt^2 - R^2(t)dΩ^2. As a consequence, at long length scales the contribution of Λ to the mass density decreases rather rapidly.

    One must however compare this contribution to the density ρ of ordinary matter. During the radiation dominated phase ρ goes like 1/a^4 from T ∝ 1/a, and for small values of a radiation dominates over vacuum energy. During the matter dominated phase one has ρ ∝ 1/a^3, and also now matter dominates. During the predicted cosmic string dominated asymptotic phase one has ρ ∝ 1/a^2, and the vacuum energy density gives a contribution, due to Kähler magnetic energy, which could be comparable to or even larger than the dark energy due to the volume term in the action.

  5. The mass density is the sum ρm + ρd of the densities of matter and dark energy, and one has ρm + ρd ∝ H^2. Λ ∝ 1/L^2(k) implies that the contribution of dark energy at long length scales is considerably smaller than in the recent cosmology. In the Planck determination of H it is however assumed that the cosmological constant is indeed a constant. The value of H at long length scales is therefore under-estimated, so that also the standard model extrapolation from long to short length scales gives a too low value of H. This is just what the discrepancy between the determinations of H performed in the two different length scales demonstrates.
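
To make the logic of point 5 concrete, here is a toy numerical sketch of my own. The Friedmann relation H^2 ∝ ρm + ρΛ and the density fractions Ωm ≈ 0.3, ΩΛ ≈ 0.7 are standard; the suppression factor for Λ at long scales is a free illustrative parameter, not a TGD prediction.

```python
# Toy illustration: if the dark energy contribution is suppressed at long
# (CMB) length scales, the H characterizing those scales is genuinely smaller
# than the locally measured value. Flat Friedmann cosmology: H^2 ~ rho_m + rho_L.
import math

H_short = 73.5                 # km/s/Mpc, short length scale (distance ladder) value
Omega_m, Omega_L = 0.3, 0.7    # standard present-day density fractions

rho_m = Omega_m * H_short**2   # units in which 8*pi*G/3 = 1
rho_L = Omega_L * H_short**2

# Hypothetical suppression of the Lambda contribution at long p-adic scales.
for f in (1.0, 0.9, 0.76):
    H_long = math.sqrt(rho_m + f * rho_L)
    print(f"Lambda suppressed to {f:.2f}: H = {H_long:.1f} km/s/Mpc")
# Suppression to about 0.76 reproduces the Planck value of 67 km/s/Mpc.
```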

A couple of remarks are in order.
  1. The twistor lift of TGD suggests an alternative parameterization of the vacuum energy density as ρvac = 1/L^4(k1), where k1 is roughly the square root of k. This gives rise to a pair of short and long p-adic length scales. The order of magnitude of 1/L(k1) is roughly the same as that of the CMB temperature T: 1/L(k1) ∼ T (a rough numeric check follows these remarks). Clearly, the parameters 1/T and R correspond to a pair of p-adic length scales. The fraction of dark energy density becomes smaller during the cosmic evolution identified as length scale evolution, with the largest scales corresponding to the earliest times. During the matter dominated era the mass density, going like 1/a^3, would dominate over dark energy for small enough values of a. The asymptotic cosmology should be cosmic string dominated, predicting ρ ∝ 1/Ga^2. This does not lead to a contradiction, since the Kähler magnetic contribution rather than the one due to the cosmological constant dominates.

  2. There are two kinds of cosmic strings: for the first type only the volume action is non-vanishing, while for the second type both the Kähler and volume actions are non-vanishing, but the contribution of the volume action decreases as a function of the length scale.
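
The coincidence 1/L(k1) ∼ T mentioned in the first remark can be checked at the order-of-magnitude level from standard numbers. This is my own rough estimate, not from the original posting: the energy scale ρvac^(1/4) of the observed dark energy density is a few meV, while the CMB temperature corresponds to about 0.23 meV, so the two scales agree within roughly an order of magnitude.

```python
# Order-of-magnitude comparison of rho_vac^(1/4) with the CMB temperature.
# Standard values; the conversion to natural units uses hbar*c.
hbar_c = 1.973e-7          # eV * m
eV = 1.602e-19             # J
rho_crit = 8.3e-10         # J/m^3, critical mass density times c^2 (H ~ 70 km/s/Mpc)
Omega_L = 0.7              # dark energy fraction

rho_vac = Omega_L * rho_crit / eV * hbar_c**3    # in eV^4
print(f"rho_vac^(1/4) = {1e3 * rho_vac**0.25:.2f} meV")    # about 2.3 meV
print(f"T_CMB         = {1e3 * 2.725 * 8.617e-5:.2f} meV") # about 0.23 meV
```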

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Friday, July 13, 2018

Further evidence for the third generation of weak bosons


Matt Strassler had a blog posting about an interesting finding from old IceCube data revealed on Thursday (July 12, 2018) by the IceCube team. The conclusion supports the view that so-called blazars, thin jets of high energy particles suggested to emerge as matter falls into a giant black hole, might be sources of high energy neutrinos. In the TGD framework one could also think that blazars originate from cosmic strings containing dark matter and energy. Blazars themselves could be associated with cosmic strings thickened to magnetic flux tubes. The channeling to flux tubes would make possible the observation of the particles emerging from the source, whatever it might be.

Only the highest energy cosmic neutrinos can enter the IceCube detector located deep under the ice. IceCube has already earlier discovered a new class of cosmic neutrinos with extremely high energies: Matt Strassler wrote a posting about this two years ago (see this): the energies of these neutrinos were around a PeV. I have commented on this finding from the TGD point of view (see this).

Last year one of these blazars flared brightly, producing high energy neutrinos and photons: the neutrinos and photons came from the same position in the sky and occurred during the same period. The IceCube detector detected a collision of one (!) ultrahigh energy neutrino with a proton, generating a muon. The debris produced in the collision contained also photons, which were detected. The IceCube team decided to check whether old data could contain earlier neutrino events assignable to the same blazar and found a dramatic burst of neutrinos in the 2014-2015 data during a period of 150 days associated with the same flare; the number of neutrinos was 20 instead of the expected 6-7. Therefore it seems that ultrahigh energy neutrinos can be associated with blazars.

By looking at the article of the IceCube team (see this) one learns that the neutrino energies are of the order of a few PeV (peta electron volts); a PeV makes 1 million GeV (the proton has a mass of about 1 GeV). What kind of mechanism could create these monsters in the TGD Universe? TGD suggests scaled variants of both electroweak physics and QCD, and the obvious candidate would be the decays of weak bosons of a scaled variant of electroweak physics. I have already earlier considered a possible explanation in terms of weak bosons of a scaled-up variant of weak physics characterized by the Mersenne prime M61 = 2^61 - 1 (see this).

  1. TGD "almost-predicts" the existence of three families of electroweak bosons and gluons. Their coupling matrices to fermions must be orthogonal. This breaks the universality of both electroweak and color interactions. Only the ordinary electroweak bosons can couple in the same manner to the 3 fermion generations. There are indeed indications for the breaking of universality in both the quark and lepton sectors coming from several sources such as B meson decays, the muon anomalous anomalous (this is not a typo!) magnetic moment, and the finding that the value of the proton radius is different depending on whether ordinary atoms or muonic atoms are used to deduce it (see this).

  2. The scaled variant of the W boson could decay to an electron and a monster neutrino having the same energies to an excellent approximation. Also the Z0 boson could decay to a neutrino-antineutrino pair. The essentially monochromatic energy spectrum of the neutrinos would serve as a unique signature of the decaying weak boson. One might hope to observe two kinds of monster neutrinos with a mass difference of the order of the scaled-up W-Z mass difference. The relative mass difference would be the same as for the ordinary W and Z - about 10 per cent - and thus of the order of .1 PeV.

One can look at the situation quantitatively using the p-adic length scale hypothesis and the assumption that Mersenne primes and Gaussian Mersennes define preferred p-adic length scales assignable to copies of hadron physics and electroweak physics (a numerical sketch follows the list).
  1. Ordinary electroweak gauge bosons correspond in the TGD framework to the Mersenne prime Mk = 2^k - 1, k = 89. The mass scale is 90 GeV, roughly 90 proton masses.

  2. The next generation corresponds to the Gaussian Mersenne prime MG,79 = (1+i)^79 - 1. There is indeed evidence for a second generation weak boson corresponding to MG,79 (see this). The predicted mass scale is obtained by scaling the weak boson mass scale of about 100 GeV by the factor 2^((89-79)/2) = 2^5 = 32, giving about 3 TeV, and is correct.

  3. The next generation would correspond to the Mersenne prime M61. The mass scale 90 GeV of ordinary weak physics is now scaled up by a factor 2^((89-61)/2) = 2^14 ≈ 16,000. This gives a mass scale of about 1.5 PeV, which is the observed mass scale of the neutrino monsters detected by IceCube. Also the earlier monster neutrinos have the same mass scale. This suggests that the PeV neutrinos are indeed produced in decays of W(61) or Z(61).
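
The scalings in the list are simple to tabulate. The sketch below is my own; the W and Z masses are the standard ones, and the scaling rule m(k) = m(89) × 2^((89-k)/2) is the one used above.

```python
# p-adic scaling of the weak boson mass scale: m(k) = m(89) * 2^((89-k)/2).
m_W, m_Z = 80.4, 91.2   # GeV, ordinary W and Z masses (k = 89)

for k in (89, 79, 61):
    s = 2 ** ((89 - k) / 2)
    print(f"k = {k}: factor 2^{(89 - k) // 2} = {s:.0f}, "
          f"W ~ {m_W * s:.3g} GeV, Z ~ {m_Z * s:.3g} GeV, "
          f"Z - W ~ {(m_Z - m_W) * s:.2g} GeV")
# k = 61: factor 2^14 = 16384 gives W ~ 1.3 PeV and Z ~ 1.5 PeV with a mass
# difference ~ 0.2 PeV, matching the PeV scale of the IceCube neutrinos.
```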

See chapter New Particle Physics Predicted by TGD: Part I of "p-Adic physics".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, July 11, 2018

How does brain predict future?

Quanta Magazine is a real treasure trove. This time the gem was titled "To Make Sense of the Present, Brains May Predict the Future" (see this). The article gives links to various research articles: here I mention only the article "Neural Prediction Errors Distinguish Perception and Misperception of Speech" by Blank et al (see this).

According to the article, the brain acts as a prediction machine, comparing predictions with what happened and modifying the predictions accordingly. Sensory perception would not be a mere 3-D time = constant sensory snapshot, as was believed in the last century, but would also include a prediction of the future based on it; the brain would modify the prediction by using the difference between the prediction and reality.

In the TGD framework one can go even further (see this). Sensory organs are the seats of sensory mental images constructed by repeated signalling between the brain (maybe also the magnetic body) and the sensory organ using dark photons propagating forth and back with maximal signal velocity and contributing a virtual part to the sensory input. Nerve pulses would create synaptic bridges connecting flux tubes into longer flux tubes acting as waveguides along which the dark photons propagate. The sensory mental image would be essentially the self-organization pattern nearest to the actual sensory input. The percept itself would be an artwork, a caricature selecting and emphasizing the features of the sensory input important for survival.

The term predictive coding used for the process reveals that the view about how the brain achieves this relies on the computational paradigm. This is one possible view. Personally I cannot regard classical computation as a plausible option. A more neutral view relies on the rather obvious assumption that temporal sequences of associations give rise to predictions. But how does this happen?

Neuroscientists speculate about deep connections between emotions and learning: the dopaminergic neurons are indeed very closely related to the neural reward system. If the difference between the predicted and the actually perceived is large, the reward is small - one might also call it punishment. "Surprise" would be a rather neutral word to express it: a big discrepancy causes a big surprise. The comparison of the predicted with what really happened would be essential. This was one of the first predictions of TGD and might apply to simple emotions, but - as I have proposed - emotions such as the experience of beauty, compassion, or love need not be mere reactions.

The finding suggests a connection with the ideas about the fundamental role of emotions in learning. I have already developed this theme in this article.

  1. The first finding, made for snails (see this), was that RNA somehow codes the experience and induces an epigenetic change at the level of DNA, in turn inducing a change in behavior. The popular article "Scientists Sucked a Memory Out of a Snail and Stuck It in Another Snail" tells about the finding (see this).

    This led to a TGD based model relying on the notion of bio-harmony, a music of dark photon triplets representing 3-chords, predicting the genetic code correctly. Music expresses and creates emotions: the same would happen already at the RNA level. DNA would get into the same mood by resonating with the 3-chords of the RNA music and changing its harmony/mood coded by the resonance frequencies of the nuclei, which would change slightly. An epigenetic change would take place as a consequence and change the genetic expression, in turn changing the behaviour.

    This brings in something genuinely new: the TGD based view about dark matter and the realizations of the genetic code by dark proton sequences defining the dark analogs of DNA, RNA, tRNA, and amino-acids at the magnetic flux tubes of the magnetic body of the living system.

    It must be emphasized that the magnetic body is 4-D and corresponds to a preferred extremal connecting two 3-surfaces at the boundaries of a causal diamond. Hence the basic objects are deterministic time evolutions, analogous to programs or behavioral patterns. The sequence of associations assignable to a percept could be seen as a space-time surface, a predicted space-time evolution.


  2. Just a couple of days before writing this I learned about slime molds (see this), which are monocellular organisms that, contrary to expectations, learn new behaviours. A nervous system is therefore not necessary for learning. Emotional RNA could be at work also here.

  3. RNA would naturally also be behind learning in the CNS as a change of synaptic strengths generating effectively different synchronously firing neuron groups representing mental images and new sequences of associations providing predictions. The mismatch between the prediction and the real percept would be represented in terms of dopamine concentration, and this would in turn generate at the RNA level an emotion, negative for a mismatch, inducing a corresponding DNA emotion generating an epigenetic change, in turn changing the synaptic strengths, in turn changing the prediction as a temporal sequence of associations, in turn changing the behavior! A long sequence of causations!

Also the speculated unification of motor control and sensory perception is mentioned in the popular article. In sensory perception the internal environment as a model for the external environment is updated; in motor action it is the external environment. Is there a connection with the arrow of time? Motor action would be perception of a changing environment in which one's own biological body is part of the environment. In the TGD framework sensory perception and motor action would be time reversals of each other at the level of sensory mental images. This view is allowed by ZEO and encouraged by Libet's discovery that a volitional act is preceded by neural activity by a fraction of a second.

Motor action would be generated by a negative energy signal to the geometric past, which would correspond to mental images with a reversed arrow of time in the TGD inspired theory of consciousness. This duality would mean that in the opposite time direction motor action would be a perception of, say, a hand moving in the desired direction! The counterpart of predictive coding would take care of the comparisons and modify the predicted "sensory percept" so that it corresponds to reality. This sounds strange, but maybe motor action is just passive perception from the point of view of the time reversed self!

See the article Emotions as sensory percepts about the state of magnetic body?.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, July 10, 2018

How do slime molds learn?

Quanta Magazine is a treasure trove of popular articles about hot topics in basic research, and biology and neuroscience are the hottest topics now. The article "Slime Molds Remember — but Do They Learn?" about the learning of slime molds (see this) serves as a good example of the pleasant surprises popping up on a weekly basis.

  1. The popular article tells that slime molds are monocellulars - for a long time believed to belong to the fungi - but are actually somewhat like amoebas. They have neither neurons nor brains. The neuroscientific dogma says that neurons are necessary for learning, so that slime molds should not learn. They should only adapt by selecting behaviors from a genetically inherited repertoire. The same would be true of plants, which are however also known to learn.

    For a physicist these beliefs look strange. Animals, plants, and also slime molds share the basic aspects of what it is to be alive; why should some of them be unable to learn? The research of biologist Audrey Dussutour and her team described in the article shows that slime molds are indeed able to learn.

  2. Conditioning is the basic mechanism of learning, which by definition leads to the creation of a new kind of behavior rather than the selection of some behavior from an existing repertoire, as happens in adaptation. Typically the conditioning is created by associating an unpleasant sensory stimulus such as an electric shock with some other stimulus, which can be pleasant, say information about the presence of food. This leads to avoidance behavior, and eventually the mere presence of food can induce the avoidance behavior.

  3. It was found that the slime mold learns a habit of avoiding the unpleasant stimulus - habituation is said to take place. Habituation generates new behavior and is not mere adaptation. For instance, habituation can mean ceasing to notice a stimulus like a smell if it is not dangerous or important for survival. In the experiments the slime molds were conditioned to avoid noxious substances (having a bitter "taste"), and they remembered the behavior after a year of physiologically disruptive enforced sleep, as the technical term expresses it.

  4. The central nervous system has been believed to be responsible for habituation, since neurons receive and process the sensory stimuli, build a kind of cognitive representation of them, and generate the motor response. Neuroscientists believe that learning means a strengthening of synaptic contacts, eventually giving rise to a learned motor response to a sensory stimulus via a sequence of associations.

    Against this background the ability of slime molds to learn looks mysterious. How do they perceive the stimulus, how do they process it, how do they respond to it? We actually know little about cognition and learning: we know a lot about the neural correlates of cognition but not what cognition is.

Forgetting for a moment the question about what cognition is, one can just ask what could lead to the change of behaviour of the slime mold. Some time ago I learned about another fascinating finding related to learning from the article "Scientists Sucked a Memory Out of a Snail and Stuck It in Another Snail" (see this). What was found was that one can take RNA from a snail that has been conditioned by some painful stimulus and transfer the conditioning to another snail by scattering the RNA on its brain neurons! The same can be achieved by feeding the snail with the conditioned snail. RNA must somehow represent memories. If this is true for the snail, it can be true also for the slime mold.

Usually learning is assigned to cognition regarded as a kind of linguistic cognition. One speaks also of emotional intelligence: could learning be based on emotions? The TGD based model for emotions (see this), inspired by the model of music harmony (see this and this) leading to a model of the genetic code predicting the vertebrate code correctly, relies on this idea and leads to a model for what learning could be also in the case of slime molds.

  1. Music expresses and creates emotions coded in its harmony (think of the major and minor scales as simple examples). This could be true in a much more general sense. Not only music made of sound but also music made of light - dark photons in the TGD framework - could realize these functions of music. DNA would have a representation in terms of a collection of 3-chords made of three dark photons with frequencies in the proportions allowed by the harmony.

  2. The model of harmony based on icosahedral and tetrahedral geometries predicts a large number of harmonies representing emotional states, moods. The music of light makes possible communication between DNA, RNA, amino-acids (AAs), even tRNAs, and their dark variants DDNA, DRNA, DAA, and DtRNA. Communications are possible if the 3-chords can resonate note by note: the ideal situation occurs if the harmony defining the mood is the same in the sender and the receiver (a toy illustration follows this list). Empaths are those who experience also the sufferings of other people. Moods can be transferred from RNA to DNA, where they can induce an epigenetic change leading to a change in behavior.

  3. The painful conditioning of the snail would induce a new mood in the RNA of the snail (probably a rather depressive one!), this would in turn infect the DNA of the snail (strong emotions are infective), and the mood of the DNA would induce the epigenetic change leading to the avoidance behavior (see this and this). Emotions would be behind the learning, and learning would take place at the DNA level as epigenetic changes changing the gene expression. Habituation would involve epigenetic changes, whereas adaptation would involve only the activation of appropriate inherited genes.
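
As a toy illustration of the "resonance note by note" condition in the second point, consider the sketch below. It is entirely my own: the chords, frequencies, and tolerance are made-up illustrative parameters, not outputs of the bio-harmony model.

```python
# Toy model: two 3-chords (frequency triples) resonate if each pair of
# corresponding notes matches within a relative tolerance. If sender and
# receiver share the same harmony ("mood"), resonance succeeds note by note.
def resonates(chord_a, chord_b, tol=0.01):
    return all(abs(fa - fb) / fb < tol
               for fa, fb in zip(sorted(chord_a), sorted(chord_b)))

sender   = (264.0, 330.0, 396.0)   # a 3-chord in the sender's harmony
receiver = (264.5, 329.5, 396.2)   # nearly the same mood: resonance succeeds
other    = (264.0, 317.0, 396.0)   # a different harmony: one note is detuned

print(resonates(sender, receiver))  # True  - communication possible
print(resonates(sender, other))     # False - moods differ, no resonance
```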

It must be added that TGD also leads to a vision about the role of neurons differing in many respects from the neuroscientific view, although agreeing with the basic facts and explaining quite a number of anomalies (see this).
  1. The notion of the magnetic body (MB) containing dark matter as heff/h0=n phases of ordinary matter is central. Networks having as nodes objects consisting of ordinary matter (molecules, organelles, organs, even organisms) connected by flux tubes containing dark matter would give rise to both cellular and neuronal networks. A magnetic flux tube connecting two nodes would serve as a correlate of attention and as a communication pathway using supra currents or dark photons. Also classical signals can propagate along it.

  2. The primary function of nerve pulse activity at the level of the CNS would not be communication between neurons but the building of communication pathways from flux tubes along which dark photon signals can propagate with maximal signal velocity. The situation would be the same as in mobile phone connections: the communication pathway is created first, and only then do the communications at light velocity begin. Synaptic transmission would build a bridge between otherwise non-connected flux tubes. This would give rise to long waveguides. Dark photons transforming to ordinary photons would yield bio-photons, which have remained mysterious in standard bio-chemistry since their spectrum is not consistent with the discrete spectrum of lines that would be produced if they were generated in molecular transitions.

  3. Sensory experiences would be basically at the level of sensory organs, and sensory percepts would involve pattern recognition with a repeated feedback signal from the brain, leading to a standard percept nearest to the sensory input. The new view about time provided by zero energy ontology allows one to circumvent the counter-argument inspired by the phantom leg phenomenon.

  4. Nerve pulse patterns would frequency-modulate the generalized Josephson frequencies assignable to the membrane proteins acting as Josephson junctions and generating dark Josephson radiation as part of EEG propagating to the MB of the system. Thus nerve pulse patterns would code information, but this information would be sent to the MB.

  5. It is quite possible that the proposed RNA level mechanism is the microscopic mechanism behind the strengthening of synaptic connections believed to underlie neuron level learning, although also here new findings suggest that the situation is not quite what it has been believed to be (see this).

This did not say anything about cognition yet. TGD leads also to a view about the mathematical correlates of cognition requiring a profound generalization of the mathematical structure of theoretical physics. The real number field is tailor-made for the description of the sensory world, but how should one describe the correlates of cognition? Here p-adic number fields come to the rescue, and in the TGD framework one ends up with a unification of real physics and its p-adic analogs into what I call adelic physics (see this and this).

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Sunday, July 08, 2018

Two different values for the metallicity of Sun and heating of solar corona: two puzzles with a common solution?

The solar corona could also be a seat of dark nucleosynthesis, and there are indications that this is the case (see this). The metallicity of a stellar object gives important information about its size, age, temperature, brightness, etc. The problem is that measurements give two widely different values for the metallicity of the Sun depending on how one measures it. One obtains 1.3 per cent from the absorption lines of the radiation from the Sun and 1.8 per cent from solar seismic data. Solar neutrinos also give the latter value. What could cause the discrepancy?

Problems do not in general appear alone. There is also a second old problem: what is the origin of the heating of the solar corona? Where does the energy needed for the heating come from?

The TGD proposal is based on a model, which emerged initially as a model for "cold fusion" (not really fusion) in terms of dark nucleosynthesis, which produces dark scaled-up variants of ordinary nuclei as dark proton sequences with a much smaller binding energy. This can happen even in living matter: the Pollack effect, involving the irradiation by IR light of water bounded by a gel phase, creates negatively charged regions from which part of the protons go somewhere. They could go to magnetic flux tubes and form dark nuclei. This could explain the reported transmutations in living matter not taken seriously by academic nuclear physicists.

The TGD proposal is that the protons transform to dark proton sequences at magnetic flux tubes with a non-standard value of Planck constant heff/h0=n - dark nuclei with scaled-up size. Dark nuclei can transform to ordinary nuclei by heff → h (h = 6h0 is the most plausible option) and liberate almost all the nuclear binding energy in the process. The outcome would be "cold fusion".

This leads to a vision about pre-stellar evolution. First came dark nucleosynthesis, which heated the system and eventually led to a temperature at which ordinary nuclear fusion started. This process could occur also outside stellar cores - say in the interiors of planets - and a considerable part of nuclei could be created outside stars.

A good candidate for the site of dark nucleosynthesis would be the solar corona. Dark nucleosynthesis could heat the corona and also create metals there. They would absorb the radiation coming from the solar core and reduce the measured effective metallicity to 1.3 per cent.

See the chapter Non-locality in quantum theory, in biology and neuroscience, and in remote mental interactions: TGD perspective or the article Morphogenesis in TGD Universe.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Saturday, July 07, 2018

Morphogenesis and metabolism in astrophysical scales?

The proposed general picture has interesting implications for the TGD view about stars and planets. Minimal surfaces have a vanishing mean curvature vector Hk defined by the trace of the second fundamental form. The principal curvatures sum up to zero, and the surface looks locally like a saddle surface. This strongly suggests that one cannot have (spherically symmetric) closed 3-surfaces obtained by taking two almost-copies of a 3-surface with boundary and gluing them together along the boundaries, as the requirement that there are no boundaries demands. Could stars and planets be flow equilibria analogous to soap bubbles, for which a pressure difference is necessary and is provided by an external energy feed (blowing the bubble), so that when the energy feed ceases, the bubble collapses? The analogy with the stellar dynamics leading eventually to a collapse to a blackhole is obvious.

Morphogenesis and metabolic energy feed in astrophysical scales as explanations for puzzling findings?

The analogy with morphogenesis could allow one to build a more coherent picture from several puzzling observations related to TGD made over the years.

  1. One cannot obtain an imbedding of the Schwarzschild exterior metric without the presence of a long range induced gauge field behaving like 1/r^2. Any object with a long range gravitational field must also have an electroweak gauge charge. The charge can be made arbitrarily small but must be non-vanishing. The natural guess was that em charge - closely related to Kähler charge - is in question. If a flow equilibrium analogous to a soap bubble is in question, the charge must be Kähler charge, with the energy momentum currents of the Kähler field feeding the energy needed to prevent gravitational collapse.

  2. During the 1990s I did a considerable amount of work in attempts to construct spherically symmetric solutions of the field equations using only Kähler action but failed. In this case the field equations state the vanishing of the divergences of the energy-momentum and color currents. All known extremals of both Kähler action and its twistor lift, involving also a volume term analogous to a cosmological term, are minimal surfaces and extremals of both Kähler action and the volume term.

    The failure to discover extremals which are not minimal surfaces might be simply due to the fact that they are not simple. One can however ask whether there actually are any radially symmetric stationary extremals of Kähler action at all. Could the volume term be needed to stabilize them?

  3. 4-surfaces with a vanishing induced Kähler field are necessarily minimal surfaces. The vanishing of the induced Kähler field is however not necessary: in fact, all known non-vacuum extremals of Kähler action are minimal surfaces. The known repertoire of minimal surfaces includes cosmic strings, massless extremals representing radiation, and CP2 type extremals with a Euclidian signature of the induced metric representing elementary particles. For these the Kähler action is present, but the minimal surface field equations give the extremal property separately in the volume and Kähler degrees of freedom.

    Cosmic strings would dominate in the very early cosmology before space-time as a 4-surface with a 4-D M4 projection had emerged. The vision is that the thickening of their M4 projection during the cosmic expansion generated Kähler magnetic flux tubes carrying magnetic monopole fluxes. The thickening of cosmic strings need not leave them minimal surfaces, but one expects that this is true approximately.

    The feed of energy and particles from the flux tubes (suggesting that they are not minimal surfaces) would have generated visible matter and led to the formation of stars. The flux tubes would take the role of the inflaton field in the standard approach. Flux tubes would also have a second role: they would carry the quanta of gravitational and gauge fields and would thus be mediators of the various interactions.

    Dark matter, identified as phases with a non-standard value of Planck constant heff/h0=n having a purely number theoretical origin in adelic physics, would reside at magnetic flux tubes, and the general vision about TGD inspired biology is that it controls the ordinary biomatter, which would involve a metabolic energy feed as a stabilizer of the flow equilibrium. This picture suggests a generalization.

  4. The vision about dark nucleosynthesis, which emerged from the model of "cold fusion", has led to the proposal that dark nucleosynthesis preceded ordinary nucleosynthesis. Dark proton sequences were generated first by the analog of the Pollack effect at magnetic flux tubes, suffering also weak decays producing states involving dark neutrons. These states decayed to dark nuclei with a smaller value of heff/h0=n, and eventually this process led to the formation of ordinary nuclei. The process liberated practically all the nuclear binding energy, heated the system, and eventually led to ordinary nuclear fusion occurring in the cores of stars.

    In living systems dark nuclei realized as dark proton sequences would realize the dark analogs of DNA, RNA, amino-acids, and tRNA and would provide the fundamental realization of the genetic code. This picture predicts a hierarchy of dark nuclear physics and dark realizations of the genetic code and the analogs of the basic biomolecules. Could biology be replaced by a hierarchy of "biologies" in a more general sense?

  5. In the generalized biology stellar cores would provide the metabolic energy, realized basically as the energy flow associated with the Kähler field in the stellar core, making it possible to realize the star as an analog of a cell membrane, a flow equilibrium. Also a flow of Kähler charge, presumably in the radial direction, would be involved if the energy momentum current of the induced Kähler field is non-vanishing, and could relate to the mass loss of stars.

    Even in the case of planets dark nucleosynthesis could provide a radial energy flow guaranteeing stability. Nucleosynthesis could have occurred inside planets and have produced heavier nuclei. The standard picture about stars as the providers of heavier elements, with supernova explosions generating the elements heavier than Fe, could be wrong.

  6. This picture conforms with what we know about dark matter. Dark matter would consist of heff/h0=n phases of ordinary matter at magnetic flux tubes. If also the magnetic flux tubes are minimal surfaces in a good approximation, the gravitational degrees of freedom assignable to the volume action, as an analog of the Einstein-Hilbert action and stringy action, would not interact appreciably with the Kähler degrees of freedom except in the events in which dark energy and matter are transformed to ordinary matter. These events could be induced by collisions of magnetic flux tubes. The energy exchange would be present only in systems not representable as minimal surfaces. Dark matter in the TGD sense has a key role in TGD inspired quantum biology.

Blackhole collapse as an analog of biological death?

Before one can say something interesting about blackholes in this framework, one must look more precisely at what cosmic strings are. There are two kinds of cosmic strings identifiable as preferred extremals of the form X2× Y2 ⊂ M4× CP2, where X2 is a minimal surface.

  1. Y2 can be a homologically non-trivial complex sub-manifold of CP2 for which the second fundamental form vanishes identically. The induced Kähler form is non-vanishing and defines a monopole flux. Both the Kähler and volume terms (the latter formally a cosmological constant term) contribute to the energy density, but the energy momentum currents and also the tensors have vanishing divergences, so that there is no energy flux between the gravitational and Kähler degrees of freedom.

  2. Y2 can also be a homologically trivial geodesic sphere for which the Kähler form and therefore the Kähler energy density vanish identically. In this case only the cosmological constant Λ represents a non-vanishing contribution to the energy, so that energy transfer between the gravitational and Kähler degrees of freedom is trivially impossible.

What could happen in blackhole collapse?
  1. The blackhole is no longer able to produce "metabolic energy" and preserve the spherically symmetric configuration. The outcome of the blackhole collapse could be a highly folded flux tube very near to a minimal surface, or perhaps even a cosmic string. The latter option is not however necessary.

  2. Is this string homologically non-trivial, having a large string tension, or homologically trivial and almost vacuum for small values of Λ? The huge mass density of the blackhole does not favour the latter option. This leaves under consideration only the homologically non-trivial cosmic strings or their deformations to flux tubes.

    The string tension of the cosmic string is estimated to be a fraction of order 10^-7 of the effective string tension of order 1/G determined by the blackhole mass, which is proportional to the Schwarzschild radius. Therefore the cosmic string should be a spaghetti-like structure inside the horizon having a length of about 10^7 times the radius of the blackhole (a back-of-the-envelope check follows this list). Note that TGD also predicts a second horizon below the Schwarzschild horizon: the signature of the induced metric becomes Euclidian at this horizon, and this could explain the echoes claimed to be associated with the observed blackhole formation.

  3. One could say that a Big Bang starting from homologically non-trivial cosmic strings would end with a Big Crunch ending with similar objects.
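
The length estimate in the second point can be checked with a back-of-the-envelope computation. The sketch below is my own; the 10 solar mass blackhole is an arbitrary illustrative choice, and the tension fraction 10^-7 is the estimate quoted above.

```python
# Back-of-the-envelope: length of cosmic string needed to store a blackhole
# mass M when the string tension is ~1e-7 of the effective tension ~ c^2/G.
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units

M = 10 * M_sun                 # illustrative 10 solar mass blackhole
r_s = 2 * G * M / c**2         # Schwarzschild radius
mu = 1e-7 * c**2 / G           # string mass per unit length, 1e-7 * 1/G in natural units
L = M / mu                     # string length needed to store the whole mass

print(f"r_s = {r_s:.2e} m, L = {L:.2e} m, L/r_s = {L / r_s:.1e}")
# L/r_s ~ 5e6, i.e. of order 1e7: a spaghetti-like structure inside the horizon.
```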

Living systems are conscious, and there is indeed a strong analogy to the TGD inspired theory of consciousness. One could say that a particular sub-cosmology corresponds to a conscious entity (many-sheeted space-time predicts a Russian doll hierarchy of them), which repeatedly lives and dies and re-incarnates with an opposite arrow of time.
  1. In zero energy ontology (ZEO) a key role is played by causal diamonds (CDs) carrying the analogs of initial and final states at their boundaries. The M4 projection of a CD is the intersection of future and past directed light-cones. The shape of the CD strongly suggests a Big Bang followed by a Big Crunch.

  2. The TGD inspired theory of consciousness predicts that conscious entities - selves - correspond to a generalized Zeno effect. A self is identified as a sequence of "small" state function reductions (weak measurements) gradually increasing the size of the CD by shifting the active boundary of the CD farther away from the passive boundary, which is not changed (Zeno effect).

    The states at the active boundary are affected, unlike those at the passive boundary. The self dies when the first "big" state function reduction to the active boundary occurs and the roles of the active and passive boundaries are exchanged. The arrow of geometric time, identified via the distance between the tips of the CD, changes, and the CD starts to grow in the opposite time direction. The evolution of the self is a sequence of births and deaths, each death followed by a re-incarnation.

  3. In the astrophysical context this evolution would be a sequence of lives beginning with a Big Bang and ending with a Big Crunch, with subsequent evolutions taking place in opposite time directions - somewhat like breathing. This breathing would take place in all scales and gradually lead to the development of sub-Universes as the size of the CD increases.

  4. In ZEO the first big state function reduction to the active boundary of the CD occurs when all weak measurements have been done and there are no observables commuting with the observables whose eigenstates the states at the passive boundary are. The self dies and reincarnates.

    One can also try to build a classical view about what happens. Measurement always involves a measurement interaction generating entanglement. Could the transfer of quantum numbers and conserved quantities (also color charges besides Poincare charges) between the Kähler and volume degrees of freedom define the measurement interaction in practice? When this transfer vanishes, there is no measurement interaction and no further measurements are possible. Also metabolism ceases, and the self dies in the biological sense.

See the chapter Non-locality in quantum theory, in biology and neuroscience, and in remote mental interactions: TGD perspective or the article Morphogenesis in TGD Universe.

See also the article Getting philosophical: some comments about the problems of physics, neuroscience, and biology.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.