Sunday, October 29, 2017

Summary of the model of dark nucleosynthesis

The books of Steven Krivit (see Hacking the Atom, Fusion Fiasco, and Lost History) have been of enormous help in polishing the details of the model of dark nucleosynthesis explaining the mysterious aspects of what has been called cold fusion or LENR (low energy nuclear reactions). Here is a summary of the resulting picture.

Summary of the model of dark nucleosynthesis

Recall the basic ideas behind dark nucleosynthesis.

  1. Dark nuclei are produced as dark proton sequences at magnetic flux tubes, with the distance between dark protons for heff/h = 2^11 (approximately the proton/electron mass ratio) very near to the electron Compton length. This makes possible the formation of at least light elements when the dark nuclei transform to ordinary ones and liberate almost the entire nuclear binding energy.

  2. More complex nuclei can also form as nuclei of nuclei, in which ordinary nuclei and sequences of dark protons reside at magnetic flux tubes. In particular, the basic rule (A,Z) → (A+1,Z+1) of the Widom-Larsen model is satisfied, although dark beta decays would break this rule.

    In this case the transformation to ordinary nuclei produces heavier nuclei, even those heavier than Fe. This mechanism could make possible the production of heavy nuclei outside stellar interiors. Also dark beta decays can be considered. They would be fast: the idea is that the Compton length of weak bosons is scaled up, and within a region of the size scale of this Compton length weak interactions have essentially the same strength as electromagnetic interactions, so that weak decays are fast and lead to dark isotopes stable against weak interactions.

  3. The transformation of dark nuclei to ordinary nuclei liberates almost all of the nuclear binding energy. This energy could induce the fission of the daughter nucleus and the emission of neutrons causing the decay of ordinary nuclei, at least those heavier than Fe.

  4. Also the dark weak process e^- + p → n + ν, liberating an energy of the order of the electron mass, could kick a neutron out of the dark nucleus. This process would be the TGD counterpart of the corresponding process in WL but with a very different physical interpretation. This mechanism could explain the production of neutrons at a rate about 8 orders of magnitude slower than predicted by the cold fusion model.

  5. The magnetic flux tubes containing dark nuclei form a positively charged system attracted by negatively charged surfaces. The cathode is where the electrons usually flow. The electrons can generate a negative surface charge, which attracts the flux tubes, so that the flux tubes end up at the cathode surface and dark ions can enter the surface. Also ordinary nuclei from the cathode could enter the flux tube temporarily, so that more complex dark nuclei consisting of dark protons and nuclei are formed. Dark nuclei can also leak out of the system if the flux tube ends at some negatively charged surface other than the cathode.

The findings described in the books of Krivit, in particular the production of neutrons and tritium, allow one to sharpen the view about dark nucleosynthesis.
  1. The simplest view about dark nucleosynthesis is as a formation of dark proton sequences in which some dark protons transform by beta decay (emission of a positron) to neutrons. The objection is that this decay is kinematically forbidden if the masses of the dark proton and neutron are the same as those of the ordinary proton and neutron (the n-p mass difference is 1.3 MeV). Only dark proton sequences would be stable.

    The situation changes if also the n-p mass difference scales by the factor 2^-11. The spectra of dark and ordinary nuclei would be essentially identical. For the scaled-down n-p mass difference, neutrons would be produced most naturally in the process e^- + p → n + ν for dark nuclei, proceeding via dark weak interactions. The dark neutron would receive a large recoil energy of about m_e ≈ .5 MeV and the dark nucleus would decay. The electrons inducing the neutron emission could come from the negatively charged surface of the cathode after the flux tube has attached to it. The rate for e^- + p → n + ν is very low for the ordinary value of Planck constant. The ratio n/T ∼ 10^-8 allows one to deduce information about heff/h: a good guess is that a dark weak process is in question.

  2. Tritium and other isotopes would be produced as several magnetic flux tubes connect to a negatively charged hot spot of the cathode. A reasonable assumption is that the ordinary binding energy gives rise to an excited state of the ordinary nucleus. This can induce the fission of the final state nucleus, and also neutrons can be produced. Also scaled-down variants of pions can be emitted, in particular the pion with mass of 17 MeV (see this).

  3. The ordinary nuclear binding energy, minus the n-p mass difference 1.3 MeV multiplied by the number of neutrons, would be released in the transformation of dark nuclei to ordinary ones. The table below gives the total binding energies and liberated energies for some of the lightest stable nuclei (a numerical check follows this list).


    The ordinary nuclear binding energies EB for light nuclei and the energies ΔE liberated in the dark → ordinary transition.

    Element   4He     3He     T      D
    EB/MeV    28.28   7.72    8.48   2.57
    ΔE/MeV    25.70   6.41    5.8    1.27


    Gamma rays are not wanted in the final state. For instance, for the transformation of dark 4He to an ordinary one, the liberated energy would be about 25.7 MeV. If the final state nucleus is in an excited state unstable against fission, the binding energy can go to the kinetic energy of the final state and no gamma ray pairs are observed. If two 17 MeV pions π_113 are emitted, one or both must be on mass shell and decay weakly. The decay of an off-mass-shell π_113 could however proceed via dark weak interactions and be fast, so that the rate for this process could be considerably higher than for the emission of two gamma rays.
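
As a sanity check of the arithmetic, the following minimal Python sketch recomputes the ΔE row of the table above from the quoted binding energies, using the rule ΔE = EB − (number of neutrons) × 1.3 MeV stated in item 3; the EB values are those quoted in the table, not values from a nuclear data library.

    # Recompute the Delta E row: Delta E = E_B - n_neutrons * 1.3 MeV.
    nuclei = {              # name: (E_B / MeV as quoted, number of neutrons)
        '4He': (28.28, 2),
        '3He': (7.72, 1),
        'T':   (8.48, 2),
        'D':   (2.57, 1),
    }
    for name, (eb, nn) in nuclei.items():
        print(name, round(eb - nn * 1.3, 2))
    # 4He 25.68, 3He 6.42, T 5.88, D 1.27 - the Delta E row up to rounding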

The relationship of dark nucleosynthesis to ordinary nucleosynthesis

One can raise interesting questions about the relation of dark nucleosynthesis to ordinary nucleosynthesis.

  1. The temperature at the solar core is about 1.5×10^7 K, corresponding to an energy of about 2.25 keV. This temperature is obtained by the scaling factor 2^-11 from 5 MeV, which is the binding energy scale for ordinary nuclei. That this temperature corresponds to the binding energy scale of dark nuclei might not be an accident (a numerical check follows this list).

    That the temperature in the stellar core is of the same order of magnitude as the dark nuclear binding energy is a highly intriguing finding and encourages one to ask whether dark nuclear fusion could be the key step in the production of ordinary nuclei.

    Could dark nucleosynthesis in this sense occur also during pre-stellar evolution and thus proceed differently from the usual p-p cycle involving fusion processes? The resulting ordinary nuclei would undergo only ordinary nuclear reactions and decouple from the dark dynamics. This does not exclude the possibility that the resulting ordinary nuclei form nuclei of nuclei with dark protons: this seems to occur also in nuclear transmutations.

  2. There would be two competing effects. The higher the temperature, the less stable the dark nuclei and the longer the dark nuclear strings. At lower temperatures dark nuclei are more stable but transform to ordinary nuclei, decoupling from the dark dynamics. The liberated nuclear binding energy however raises the temperature and makes dark nuclei less stable, so that the production of ordinary nuclei in this manner would slow down.

    At what stage do ordinary nuclear reactions begin to dominate over dark nucleosynthesis? The conservative and plausible looking view is that the p-p cycle is indeed at work in stellar cores and replaced dark nucleosynthesis when dark nuclei became thermally unstable.

    The standard view is that the solar temperature makes possible tunnelling through the Coulomb wall and thus ordinary nuclear reactions. The temperature is a few keV and surprisingly small as compared to the height of the Coulomb wall E_c ∼ Z1Z2e^2/L, L the size of the nucleus. There are good reasons to believe that this picture is correct. The coincidence of the two temperatures would make possible the transition from dark nucleosynthesis to ordinary nucleosynthesis.

  3. What about dark nuclear reactions? Could they occur as reconnections of long magnetic flux tubes? For ordinary nuclei, reconnections of short flux tubes would take place (recall the view about nuclei as two-sheeted structures). For ordinary nuclei, reactions at energies so low that the phase transition to the dark phase (somewhat analogous to the de-confinement phase transition in QCD) is not energetically possible would occur in the nuclear length scale.

  4. An interesting question is whether dark nucleosynthesis could provide a new manner to achieve ordinary nuclear fusion in the laboratory. The system would heat itself to the temperatures required by ordinary nuclear fusion, as it would do also during pre-stellar evolution and when a nuclear reactor forms spontaneously (see the Oklo reactor).
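
The numerical coincidences used in items 1 and 2 above are easy to check. The following minimal Python sketch compares k_B T at the quoted solar core temperature with the scaled-down binding energy 5 MeV × 2^-11, and estimates the Coulomb wall for two protons; the nuclear size L = 3 fm is an illustrative assumption, not a number from the text.

    k_B = 8.617e-5                 # Boltzmann constant in eV/K
    T_core = 1.5e7                 # solar core temperature in K
    print(k_B * T_core)            # ~1.3e3 eV: keV scale, same order as
    print(5e6 * 2**-11)            # ~2.4e3 eV: 5 MeV scaled by 2^-11

    # Coulomb wall E_c ~ Z1*Z2*e^2/L for two protons, using
    # e^2/(4*pi*eps_0) = 1.44 MeV*fm and an assumed L = 3 fm:
    print(1.44 / 3)                # ~0.5 MeV, far above the keV thermal scale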

This is only a rough overall view and it would be unrealistic to regard it as a final one: one can indeed imagine variations. But even in its present rough form it seems to be able to explain all the weird looking aspects of CF/LENR/dark nucleosynthesis. To pick one particularly interesting question: how significantly could dark nucleosynthesis contribute to the generation of elements heavier than Fe (and also of lighter elements)? It is assumed that the heavier elements are generated in the so-called r-process involving the creation of neutrons fusing with nuclei. One option is that the r-process accompanies supernova explosions, but SN1987A did not provide support for this hypothesis: the characteristic em radiation accompanying the r-process was not detected. Quite recently the observation of gravitational waves from the fusion of two neutron stars was accompanied by visible radiation, a so-called kilonova (see this), and the radiation accompanying the r-process was reported. Therefore this kind of collision generates at least part of the heavier elements.

See the article Cold fusion again or the chapter of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy" with the same title. See also the article Cold fusion, low energy nuclear reactions, or dark nuclear synthesis?.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, October 26, 2017

Dark Matter Day and my wish for Christmas present

October 31st is Dark Matter Day. This is a natural continuation of my 67th birthday festivities on October 30;-).

Maybe the date of Dark Matter Day involves symbolism. A century ago the world was also in a critical state. Atomic physics, nuclear physics, and quantum theory were developing and revolutionizing the world view. There was a lot of political turmoil. The October revolution in Russia took place precisely 100 years ago, on October 25. Maybe also the results deduced from GW170817 and published during October motivated the choice of the date. The results eliminate a large class of models trying to explain dark matter without dark matter.

I too have spent a lot of time developing a vision about dark matter as heff = n×h phases of ordinary matter. The vision involves magnetic flux tubes carrying dark matter and would apply in all scales. Also in quantum biology dark matter would play a key role. Galactic dark matter and also dark energy would reside at cosmic strings: this predicts the velocity spectrum of distant stars without any further assumptions. One also ends up with a rather detailed fractal vision about the formation of galaxies and larger scale structures in terms of cosmic strings. Galaxies would be arranged along long cosmic strings like pearls in a necklace: such linear structures were observed long ago. The pearls could be knotted regions of the long cosmic string having the constant density of dark matter indeed observed in galactic cores.

A side comment about GW170817 is in order. Neutrinos were not observed. A possible explanation inspired by SN1987A is that they move along different space-time sheets than photons and gravitational radiation. Estimating from the SN1987A time lag between gamma rays and neutrinos and from the distances of SN1987A and GW170817, and assuming the same Δc/c, one gets as a first estimate that the lag is 118 days. The neutrinos would travel 4 months (of length 30 days) minus 2 days. For details see this.
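
The 118-day estimate follows from simple linear scaling: for the same Δc/c the lag is proportional to the distance. The sketch below reproduces the order of magnitude; the distances and the SN1987A lag used here are assumed illustrative values, not numbers given in the text.

    # Lag scales linearly with distance for fixed Delta c / c.
    d_sn1987a = 51.4e3      # pc, distance of SN1987A (assumed value)
    d_gw170817 = 40.0e6     # pc, distance of GW170817 (assumed value)
    lag_sn_hours = 3.6      # assumed gamma/neutrino lag for SN1987A

    lag_gw_days = (lag_sn_hours / 24) * d_gw170817 / d_sn1987a
    print(round(lag_gw_days))    # ~117 days, the order of the quoted 118 days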

If I managed to count correctly from my web calendar, the neutrino signal should arrive on December 14. It would be a nice Christmas present, but is there any hope that any astrophysicist believes in Santa Claus? Or who knows: perhaps astrophysicists remember SN1987A and are eagerly waiting for the Christmas present!

October is by the way "Lokakuu" in Finnish. "Loka" translates to "dirt". As an academic loser whose work fails to satisfy all imaginable criteria for science (as a couple of Finnish colleagues so eloquently expressed it), I sometimes feel that my birth month was an omen. The next month is "Marraskuu". "Marras" translates to "death"! But then comes "Joulukuu": "Joulu" translates to "Christmas". I keep my fingers crossed, try to behave, and have already written a letter to Santa Claus, the contents of which should be clear from the above.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

The lost history from TGD perspective

The third volume in "Explorations in Nuclear Research" is about the lost history (see this): roughly the period 1910-1930, during which there was not yet any sharp distinction between chemistry and nuclear physics. After 1930 experimentation became active, using radioactive sources and particle accelerators to make nuclear reactions possible. The lost history suggests that the methods used determine to an unexpected degree which findings are accepted as real. After 1940 hot fusion as a possible manner to liberate nuclear energy became a topic of study, but we are still waiting for the commercial applications.

One can say that the findings about nuclear transmutations during the period 1912-1927 became lost history, although most of these findings were published in highly respected journals and also received media attention. The interested reader can find in the book detailed stories about the persons involved. This also allows one to peek into the kitchen side of science and to realize that the written history can contain surprising misidentifications of the milestones in the history of science. The author discusses in detail an example of this: Rutherford is generally regarded as the discoverer of the first nuclear transmutation, but even Rutherford himself did not make this claim.

It is interesting to look at what the vision about anomalous nuclear effects based on dark nucleosynthesis can say about the lost history and whether these findings can provide new information to tighten up the TGD based model, which is only qualitative. Therefore I go through the list given in the beginning of the book from the perspective of dark nucleosynthesis.

Before continuing it is good to first recall the basic ideas behind dark nucleosynthesis.

  1. Dark nuclei are produced as dark proton sequences at magnetic flux tubes, with the distance between dark protons for heff/h = 2^11 (approximately the proton/electron mass ratio) very near to the electron Compton length. This makes possible the formation of at least light elements when the dark nuclei transform to ordinary ones and liberate almost the entire nuclear binding energy.

  2. More complex nuclei can also form, in which ordinary nuclei and sequences of dark protons reside at magnetic flux tubes. In particular, the basic rule (A,Z) → (A+1,Z+1) of the Widom-Larsen model is satisfied, although dark beta decays would break this rule.

    In this case the transformation to ordinary nuclei produces heavier nuclei, even those heavier than Fe. This mechanism could actually make possible the production of heavy nuclei outside stellar interiors. Also dark beta decays can be considered. They would be fast: the idea is that the Compton length of weak bosons is scaled up, and within a region of the size scale of this Compton length weak interactions have essentially the same strength as electromagnetic interactions, so that weak decays are fast and lead to dark isotopes stable against weak interactions.

  3. The transformation of dark nuclei to ordinary nuclei liberates almost all of the nuclear binding energy. The large liberated energy could lead to a decay of the daughter nucleus and the emission of neutrons causing the decay of ordinary nuclei, at least those heavier than Fe.

    Remark: Interestingly, the dark binding energy is of the order of a few keV and happens to be of the same order of magnitude as the thermal energy of nuclei in the interior of the Sun. Could dark nuclear physics play some role in the nuclear fusion in the solar core?

  4. The magnetic flux tubes containing dark nuclei form a positively charged system attracted by negatively charged surfaces. The cathode is where the electrons usually flow. The electrons can generate a negative surface charge, which attracts the flux tubes, so that the flux tubes end up at the cathode surface and dark ions can enter the surface. Also ordinary nuclei from the cathode could enter the flux tube temporarily, so that more complex dark nuclei consisting of dark protons and nuclei are formed. Dark nuclei can also leak out of the system if the flux tube ends at some negatively charged surface other than the cathode.

Production of noble gases and tritium

During the period 1912-1914 several independent scientists discovered the production of the noble gases 4He, neon (Ne), and argon (Ar) using high voltage electrical discharges in vacuum or through hydrogen gas at low pressures in cathode-ray tubes. Also an unidentified element with mass number 3 was discovered. It was later identified as tritium. Two of the researchers were Nobel laureates. In 1922 two researchers at the University of Chicago reported production of 4He. Sir Joseph John Thomson explained the production of 4He using the occlusion hypothesis. I understand occlusion as a contamination of the tungsten wire by 4He. The question is why not also by hydrogen.

Why would noble gases have been produced? It is known that noble gases tend to stay near surfaces. In one experiment it was found that 4He production stopped after a few days; maybe a kind of saturation was achieved. This suggests that isotopes with relatively high mass numbers were produced from dark proton sequences (possibly also containing neutrons resulting from dark weak decays). The resulting noble gases were caught near the electrodes and therefore only their production was observed.

Production of 4He in the experiments of Wendle and Irion

In 1922 Wendle and Irion published results from a study of exploding current wires. Their arrangement involved a high voltage of about 3×10^4 V and dielectric breakdown through an air gap between the electrodes, producing a sudden current peak in a wire made of tungsten (W, with (Z,A) = (74,186) for the most abundant isotope) at a temperature of about T = 2×10^4 C, which corresponds to a thermal energy 3kT/2 of about 3 eV. Production of 4He was detected.
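
The quoted thermal energy is a one-line check: at T ≈ 2×10^4 C one has 3k_BT/2 of a few eV. A minimal sketch:

    k_B = 8.617e-5                 # Boltzmann constant in eV/K
    T = 2e4 + 273.15               # ~2e4 C in kelvin
    print(1.5 * k_B * T)           # ~2.6 eV, i.e. "about 3 eV" as quoted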

Remark: The temperature at the solar core is about 1.5×10^7 K, corresponding to an energy of about 2.25 keV, 3 orders of magnitude higher than the temperature used. This temperature is obtained by the scaling factor 2^-11 from 5 MeV, which is the binding energy scale for ordinary nuclei. That this temperature corresponds to the binding energy scale of dark nuclei might not be an accident.

The interpretation of the experimentalists was that the observed 4He came from the decay of tungsten, made unstable by the high temperature. This explanation is of course not consistent with what we know about nuclear physics. No error in the experimental procedure was found. Three attempts to replicate the experiment of Wendle and Irion were made, with a negative result. The book discusses these attempts in detail and demonstrates that they were not faithful to the original experimental arrangement.

Rutherford explained the production of 4He in terms of the 4He occlusion hypothesis of Thomson. In the explosion the 4He contaminant would have been liberated. But why just a helium contamination, why not hydrogen? By the above argument one could argue that 4He as a noble gas could indeed form stable contaminants.

80 years later Urutskoev repeated the experiment with exploding wires and observed besides 4He also other isotopes. The experiments of Urutskoev demonstrated that there are 4 peaks in the production rate of elements as a function of atomic number Z. Furthermore, the amount of mass assignable to the transmuted elements is nearly the mass lost from the cathode. Hence also cathode nuclei should end up at the flux tubes.

How could dark nucleosynthesis explain the findings? The simplest model relies on a modification of the occlusion hypothesis: a hydrogen contaminant was present, and the formation of dark nuclei from the protons of hydrogen at flux tubes took place in the exploding wire. The nuclei of noble gases tended to remain in the system and 4He was observed.

Production of Au and Pt in arc discharges in Mercury vapor

In 1924 the German chemist Miethe, better known as the discoverer of 3-color photography, found trace amounts of gold (Au) and possibly platinum (Pt) in a mercury (Hg) vapor photography lamp. Scientists in Amsterdam repeated the experiment but using lead (Pb) instead of Hg and observed production of Hg and thallium (Tl). The same year a prominent Japanese scientist, Nagaoka, reported production of Au and of something having the appearance of Pt. Nagaoka used an electric arc discharge between tungsten (W) electrodes bathed in a dielectric liquid "laced" with liquid Hg.

The nuclear charges and mass numbers for the isotopes involved are given in the table below.

The nuclear charge and mass number (Z,A) for the most abundant isotopes of W, Pt, Au, Hg, Tl, and Pb.

Element   W          Pt         Au         Hg         Tl         Pb
(Z,A)     (74,186)   (78,195)   (79,197)   (80,202)   (81,205)   (82,208)

Could dark nucleosynthesis explain the observations? Two mechanisms for producing heavier nuclei can be considered, both relying on the formation of dark nuclei from the nuclei of the electrode metal and dark protons, and on their subsequent transformation to ordinary nuclei.

  1. Dark nuclei are formed from the metal associated with the cathode and dark protons. In Nagaoka's experiment this metal is W with (Z,A) = (74,186). Assuming that also dark beta decays are possible, this would lead to the generation of heavier beta-stable elements such as Au with (Z,A) = (79,197) or its stable isotopes. Unfortunately, I could not find what the electrode metal used in the experiments of Miethe was.

  2. In the experiments of Miethe the nuclei of Hg transmuted to Au ((80,202) → (79,197)) and to Pt ((80,202) → (78,195)). In the Amsterdam experiment Pb transmuted to Hg ((82,208) → (80,202)) and to Tl ((82,208) → (81,205)). This suggests that these nuclei resulted from the decay of Hg (Pb), induced by the nuclear binding energy liberated in the transformation of dark nuclei, formed from the nuclei of the cathode metal and dark protons, to ordinary nuclei. Part of the liberated binding energy could have induced the fission of the dark nuclei. The decay of dark nuclei could also have liberated neutrons, absorbed by the Hg (Pb) nuclei and inducing the decay to lighter nuclei. Thus also an analog of the r-process could have been present.

Paneth and Peters' H→ 4He transmutation

In 1926 the German chemists Paneth and Peters pumped hydrogen gas into a chamber with finely divided palladium powder and reported the transmutation of hydrogen to helium. This experiment resembles the "cold fusion" experiment of Pons and Fleischmann in 1989. The explanation would be the formation of dark 4He nuclei consisting of dark protons and their transformation to ordinary 4He nuclei.

See the chapter Cold fusion again or the article with the same title. See also the article Cold fusion, low energy nuclear reactions, or dark nuclear synthesis?.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Wednesday, October 25, 2017

Some TGD inspired comments related to quantum measurement theory

In the following I give some TGD inspired comments on quantum measurement theory, stimulated by FB discussions.

Does the analog of repeated second quantization take place at the level of WCW?

The world of classical worlds (WCW) is the basic structure of quantum TGD. It can be said to be the space of 3-surfaces consisting of pairs of (not necessarily connected) 3-surfaces at the boundaries of a causal diamond (CD), connected by a not necessarily connected 4-surface. The 4-surface defines the interaction between the states associated with the 3-surfaces. The state associated with a given 3-surface corresponds to a WCW spinor, and one has modes of WCW spinor fields. WCW decomposes to sub-WCWs assignable to CDs and effectively the universe reduces to a CD.

The key idea is that the WCW spinor fields are purely classical spinor fields. No second quantization is performed for them. Second quantization of induced spinor fields at the space-time level is however carried out, and the gamma matrices of WCW, anticommuting to its Kähler metric, are linear combinations of fermionic oscillator operators.

The classicality of WCW spinor fields looks somewhat problematic.

  1. The classicality of WCW spinor fields has implications for quantum measurement theory. State function reduction involves the reduction of entanglement between systems at different points of space-time, and therefore also many-particle states and second quantization are involved. However, second quantization does not take place at the level of WCW, and it seems that entanglement between two 3-surfaces is not possible. Therefore measurements at the WCW level should correspond to localizations not involving a reduction of entanglement. Measurements could not be interpreted as measurements of the universal observable defined by the density matrix of a subsystem. This looks problematic.

  2. At the space-time level second quantization is the counterpart for the formation of many-particle states. Particles are pointlike and one of the outcomes is entanglement between pointlike particles. Since a point of WCW is essentially a point-like particle extended to a 3-surface, one would expect that second quantization in some sense takes place at the level of WCW, although the theory is formally purely classical.

  3. Also the hierarchy of infinite primes suggests an infinite hierarchy of second quantizations. Could it have a counterpart at the level of WCW: can a WCW spinor field be second quantized and classical simultaneously?

Could the counterpart of the hierarchy of infinite primes and second quantization be realized automatically at the WCW level? One can indeed interpret the measurements at WCW either as localizations or as reductions of entanglement between states associated with different points of WCW. The point is that the disjoint union of 3-surfaces X3 and Y3 can be regarded either as a pair (X3,Y3) of 3-surfaces in WCW×WCW or as a 3-surface Z3 = X3 ∪ Y3 ⊂ WCW. The general identity behind this duality is WCW = WCW×WCW = ... = WCW^n = ... .

One could think of the situation in terms of (X3,Y3) ∈ WCW×WCW, in which case one can speak of entanglement between the WCW spinor modes associated with X3 and Y3, reduced by the measurement of the density matrix. The second interpretation is as a localization of the wave function of Z3 = X3 ∪ Y3 ∈ WCW.

About the notion of observable

In ordinary quantum theory observables are hermitian operators and their eigenvalues representing the values of observables are real.

In TGD, using the M4×CP2 picture, the gauge coupling strengths are complex and therefore also the classical Noether charges are complex. This should be the case also for quantum observables. Total quantum numbers could still be real but single particle quantum numbers complex. I have proposed that this is true for conformal weights and talked about conformal confinement.

Also in the ordinary twistor approach virtual particles are on mass shell and thus massless, but complex. The same is expected in TGD for 8-momenta, so that one obtains particles massive in the 4-D sense but massless in the 8-D sense: this is absolutely crucial for the generalization of the twistor approach to the 8-D context. Virtual momenta could be massless in the 8-D sense but complex, while total momenta would be real. This would apply to all quantal charges, which for the Cartan algebra are identical with the classical Noether charges.

I also learned a very interesting fact about normal operators, for which the operator and its hermitian conjugate commute. As the author mentions, this simple fact has remained unknown even to professionals. One can assign to a normal operator real and imaginary parts, which commute as hermitian operators, so that - according to standard quantum measurement theory - they can be measured simultaneously.

For instance, the complex values of various charges predicted by the twistor lift of TGD would therefore in principle be allowed even without the assumption that the total charges are real (total charges as hermitian operators). Combining the two ideas, one would have that single particle charges are complex and represented by normal operators, while total charges are real and represented by hermitian operators.
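
The statement about normal operators is easy to verify numerically. The following minimal sketch builds a random normal operator N = U D U† with complex eigenvalues and checks that its hermitian real and imaginary parts commute, so that both are simultaneously measurable in the standard sense:

    import numpy as np

    rng = np.random.default_rng(0)

    # A normal operator: N = U D U^dagger with complex diagonal D
    # and unitary U (obtained from a QR decomposition).
    X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    U, _ = np.linalg.qr(X)
    D = np.diag(rng.normal(size=4) + 1j * rng.normal(size=4))
    N = U @ D @ U.conj().T

    # Real and imaginary parts as hermitian operators.
    A = (N + N.conj().T) / 2
    B = (N - N.conj().T) / (2j)

    print(np.allclose(N @ N.conj().T, N.conj().T @ N))  # True: N is normal
    print(np.allclose(A @ B, B @ A))                    # True: [A, B] = 0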

What does the amplification process in quantum measurement mean?

Quantum measurement involves an amplification process amplifying the outcome of state function reduction at the single particle level to a macroscopic effect. This aspect of quantum measurement theory is poorly understood at the fundamental level and is usually thought to be inessential for the calculation of the predictions of quantum theory.

The intuitive expectation is that the amplification is made possible by criticality - I would suggest quantum criticality - and involves the analog of a phase transition generated by a seed. This is like the change of direction of a single spin in a magnet at criticality inducing a change of the magnetization direction.

Quantum criticality involves long range fluctuations and correlations, for which heff/h = n serves as a mathematical description in terms of adelic physics in the TGD framework. Long range correlations would make possible the classical macroscopic state characterizing the pointer. This large heff/h = n aspect would naturally correspond to the presence of an intelligent observer: heff indeed closely relates to the description of not only sensory but also cognitive aspects of existence and has a number theoretic interpretation as a measure for what might be called the IQ of the system.

If this is the case, one cannot build a proper quantum measurement theory in the framework of standard quantum mechanics, which is unable to say anything interesting about cognition and the observer. A theory of consciousness is required for this, and ZEO based quantum measurement theory is also a theory of consciousness.

Zero energy ontology and the Afshar experiment

The Afshar experiment challenges the Copenhagen and many-universe interpretations, and it is interesting to look at how it can be understood in zero energy ontology (ZEO).

Consider first the experimental arrangement of Afshar.

  1. A modification of the double slit experiment is in question. One replaces the screen with a lens, which directs photons from slit 1 to detector 1' and from slit 2 to detector 2'. The lens thus selects the photon path, that is the slit through which the photon came.

    The detected pattern of clicks at the detectors consists of two peaks: this means particle behavior. One can say that at the single photon level either detector/path/slit is selected.

  2. One adds a grid of obstacles at the nodes (zeros) of the interference pattern at an imagined screen behind the lens. The photons entering the points of the grid are absorbed. Since the grid is at the nodes of the interference pattern, this does not affect the detected pattern when both slits are open, but it affects the pattern when either slit is closed (the grid points are not nodes anymore). This in turn means wave like behavior. This conflicts with the principle of complementarity stating that either of these behaviors is realized, but not both.
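
As a toy illustration of item 2 (not a simulation of Afshar's actual optics), the following sketch computes the two-slit far-field intensity 4 cos^2(φ/2) and checks that at its nodes the intensity vanishes when both slits are open but not when one slit is closed; the slit separation and wavelength are arbitrary illustrative values.

    import numpy as np

    lam, d = 1.0, 10.0                         # wavelength and slit separation
    theta = np.linspace(-0.2, 0.2, 20001)
    phi = 2 * np.pi * d * np.sin(theta) / lam  # relative phase of the two paths

    both = np.abs(1 + np.exp(1j * phi))**2     # two slits open: 4*cos(phi/2)^2
    one = np.ones_like(phi)                    # one slit closed: flat in this toy

    nodes = np.isclose(np.cos(phi / 2), 0.0, atol=1e-3)  # zeros of the pattern
    print(both[nodes].max())   # ~0: wires at the nodes intercept no intensity
    print(one[nodes].max())    # 1.0: with one slit closed the wires absorb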

Consider the analysis of the situation in the usual positive energy ontology, assuming that state function reduction occurs at the detectors.
  1. The photon wave function Ψ in the region between the slits and the lens is a superposition of two parts: Ψ = Ψ1 + Ψ2, with Ψi assignable to slit i = 1, 2. The lens guides Ψ1 to detector 1' and Ψ2 to detector 2'. State function reduction occurs and Ψ is projected to Ψ1 or Ψ2. Either detector 1' or 2' fires and the photon path is selected.

    It however seems that the state function reduction - the choice of the path/slit - can occur only in the region in front of the grid. In the region between the slits and the grid one should still have Ψ1 + Ψ2, since for Ψi alone the grid would have an effect on the outcome. This effect is however absent. This does not fit with the Copenhagen interpretation demanding that the path of the photon is selected also behind the grid. This is the problem.

  2. What about the interpretation in zero energy ontology (ZEO)? After state function reduction - detection at detector 1', say - the time evolution between the opposite boundaries of the CD is replaced with a time reversed one. To explain the observations of Afshar (no deterioration of the pattern at the detector caused by the grid), one must have a time evolution in which the photons coming from the detectors in the reversed time direction have wave functions which vanish at the points of the grid. This determines the "initial" values for the reversed time evolution: they are most naturally given at the grid, so that the grid corresponds naturally to a surface at the boundary of the CD in question. This is of course not the only choice, since one can use the determinism of the classical field equations to choose the intersection with the CD differently. If time reversal symmetry holds true, the final state in the geometric past corresponds to a signal coming from slit 1 (in the case considered as an example). There would be no problem! The Afshar experiment would be the first laboratory experiment selecting between the Copenhagen interpretation and ZEO based quantum measurement theory.

See the article Some comments related to quantum measurement theory according to TGD or the chapter About the nature of time of "TGD inspired theory of consciousness".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, October 23, 2017

Some layman considerations related to the fundamentals of mathematics

I am not a mathematician and should therefore refrain from consideration of anything related to the fundamentals of mathematics. In discussions with Santeri Satama I could not avoid the temptation to break this rule. I feel, however, that I must confess my sins, and in the following I will do so.

  1. Gödel's problematics is shown to have a topological analog in the real topology, which however disappears in the p-adic topology. This raises the question whether the replacement of the arithmetics of natural numbers with that of p-adic integers could allow one to avoid Gödel's problematics.

  2. From the point of view of TGD, number theory looks more fundamental than set theory and inspires the question whether the notion of algebraic number could emerge naturally from TGD. There are two ways to understand the emergence of algebraic numbers: the hierarchy of infinite primes, in which ordinary primes are the starting point, and the arithmetics of Hilbert spaces, with tensor product and direct sum replacing the usual arithmetic operations. Extensions of rationals also give rise to cognitive variants of n-D spaces.

  3. The notion of the empty set looks artificial from the point of view of a physicist, and a possible cure is to take arithmetics as a model. Natural numbers would be analogous to nonempty sets, and integers would correspond to pairs of sets (A,B), A ⊂ B or B ⊂ A, with the equivalence (A,B) ≡ (A∪C, B∪C). The empty set would correspond to pairs (A,A). In the quantum context the notion of being a member of a set, a ∈ A, suggests a generalization: being an element of a set would generalize to being a single particle state, which in general is de-localized over the set. Subsets would correspond to many-particle states. The basic operation would be the addition or removal of an element, represented in terms of oscillator operators. The order of the elements of a set does not matter: this would generalize to bosonic and fermionic many-particle states, and even braid statistics can be considered. In the bosonic case one can have multiple points - a kind of Bose-Einstein condensate.

  4. One can also start from a finite-D Hilbert space and identify the set as the collection of labels for the states. In the infinite-D case there are two cases, corresponding to separable and non-separable Hilbert spaces. The condition that the norm of the state is finite without infinite normalization constants forces the selection of a de-localized discrete basis in the case of a continuous set like the reals. This inspires the question whether the axiom of choice should be given up. One possibility is that one can have only states localized to a finite, or at least discrete, set of points, which correspond to points with coordinates in an extension of rationals.

1. Geometric analog for Gödel's problematics

Gödel's problematics involves statements which cannot be proved to be true or false, or which are simultaneously true and false. This problematics also has a purely geometric analog in terms of the set theoretic representation of Boolean algebras when the real topology is used, but not when the p-adic topology is used.

The natural idea is that Boolean algebra is realized in terms of open sets such that the negation of a statement corresponds to the complement of the set. In p-adic topologies open sets are simultaneously also closed and there are no boundaries: this makes them - and more generally Stone spaces - ideal for realizing Boolean algebra set theoretically. In the real topology the complement of an open set is closed and therefore not open, and one has a problem.
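
The clopen character of p-adic balls, which is what makes the set theoretic realization of Boolean algebra work, can be made concrete: a ball is a residue class, and its complement is again a finite union of such balls. A minimal sketch for p = 3, restricted to small integers:

    def vp(n, p):
        """p-adic valuation of a nonzero integer."""
        v = 0
        while n % p == 0:
            n //= p
            v += 1
        return v

    def norm_p(n, p):
        """p-adic norm |n|_p, with |0|_p = 0."""
        return 0.0 if n == 0 else p ** -vp(n, p)

    p = 3
    # Ball |x|_3 <= 1/3 among 0..26: the residue class 0 mod 3.
    ball = [x for x in range(27) if norm_p(x, p) <= 1 / 3]
    # Its complement decomposes into balls around 1 and around 2:
    comp = [x for x in range(27) if norm_p(x, p) > 1 / 3]
    print(ball)   # multiples of 3
    print(comp)   # residue classes 1 and 2 mod 3 - again unions of clopen balls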

Could one circumvent the problem somehow?

  1. If one replaces open sets with their closures (the closure of an open set includes also its boundary, which does not belong to the open set) and the closed complements of open sets, the analog of the Boolean algebra would consist of closed sets. The closure of an open set and the closure of its open complement - a statement and its negation - share the common boundary. The statement and its negation would be simultaneously true at the boundary. This strange situation reminds one of Russell's paradox, but in geometric form.

  2. If one replaces the closed complements of open sets with their open interiors, one has only open sets. Now the boundary - say a sphere - would represent a statement about which one cannot say whether it is true or false. This would look like a Gödelian sentence, but represented geometrically.

    This leads to an already familiar conclusion: p-adic topology is natural for the geometric correlates of cognition, in particular Boolean cognition. Real topology is natural for the geometric correlates of sensory experience.


  3. Gödelian problematics is encountered already in the arithmetics of natural numbers, although the naturals have no boundary in the discrete topology. The discrete topology does not however allow the well-ordering of natural numbers crucial for the definition of natural number. In the induced real topology one can order them and can speak of boundaries of subsets of naturals. The ordering of natural numbers by size reflects the ordering of reals: it is very difficult to think about the discrete without implicitly bringing in the continuum.

    For p-adic integers the induced topology is p-adic. Is Gödelian problematics absent in a p-adic Boolean logic in which a set and its complement are both open and closed? If this view is correct, p-adic integers might replace naturals in the axiomatics of arithmetics. The new element would be that most p-adic integers are of infinite size in the real sense. One has a natural division of them into cognitively representable ones, finite also in the real sense, and non-representable ones, infinite in the real sense. Note however that rationals have a periodic pinary expansion and can be represented as pairs of finite natural numbers.

In algebraic geometry the Zariski topology, in which closed sets correspond to algebraic surfaces of various dimensions, is natural. Open sets correspond to their complements and are of the same dimension as the imbedding space. Also now one encounters an asymmetry. Could one say that algebraic surfaces characterize "representable" (= "geometrically provable"?) statements as elements of the Boolean algebra, and their complements the non-representable ones? A 4-D space-time (as a possibly associative/co-associative) algebraic variety in 8-D octonionic space would be an example of a representable statement. Finite unions and intersections of algebraic surfaces would form the set of representable statements. This new-to-me notion of representability is somehow analogous to provability or demonstrability.

2. Number theory from quantum theory

Could one define or at least represent the notion of number using the notions of quantum physics? A natural starting point is the hierarchy of extensions of rationals defining the hierarchy of adeles. Could one obtain rationals and their extensions from the simplest possible quantum theory, in which one just constructs many-particle states by adding or removing particles using creation and annihilation operators?

2.1 How to obtain rationals and their extensions?

Rationals and their extensions are fundamental in TGD. Can one give a quantal construction of them?

  1. One should construct rationals first. Suppose one starts from the notion of finite prime as something God-given. At the first step one constructs infinite primes as analogs of many-particle states in a supersymmetric arithmetic quantum field theory. Ordinary primes label states of fermions and bosons. Infinite primes as the analogs of free many-particle states correspond to rationals in a natural manner.

  2. One obtains also analogs of bound states, which are mappable to irreducible polynomials whose roots define algebraic numbers. This would give a hierarchy of algebraic extensions of rationals. At higher levels of the hierarchy one obtains also analogs of prime polynomials with more than one variable. One might say that algebraic geometry has a quantal representation. This might be very relevant for the physical representability of basic mathematical structures.

2.2 Arithmetics of Hilbert spaces

The notions of prime and divisibility, and even basic arithmetics, emerge also from the tensor product and direct sum for Hilbert spaces. Hilbert spaces with prime dimension do not decompose to tensor products of lower-dimensional Hilbert spaces. One can even perform a formal generalization of the dimension of a Hilbert space so that it becomes a rational and even an algebraic number.

Some years ago I indeed played with this thought, but at that time I did not have in mind a reduction of number theory to the arithmetics of Hilbert spaces. If this really makes sense, numbers could be replaced by Hilbert spaces, with product and sum identified as tensor product and direct sum!

Finite-dimensional Hilbert spaces represent the analogs of natural numbers. The analogs of integers could be defined as pairs (m,n) of Hilbert spaces with the spaces (m,n) and (m+r,n+r) identified (this space would have dimension m−n). This identification would hold true also at the level of states. Hilbert spaces with negative dimension would correspond to pairs with (m−n) < 0: the canonical representatives for m and −m would be (m,0) and (0,m). Rationals can be defined as pairs (m,n) of Hilbert spaces with pairs (m,n) and (km,kn) identified. These identifications would give rise to a kind of gauge conditions, and the canonical representatives for m and 1/m are (m,1) and (1,m).
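
A minimal sketch of this arithmetic of dimensions, with tensor product and direct sum acting on dimensions and the pair equivalences implemented as canonical representatives (the function names are mine, purely illustrative):

    from math import gcd

    # Naturals as dimensions: tensor product multiplies, direct sum adds.
    def tensor(m, n): return m * n      # dim of H_m tensor H_n
    def dsum(m, n):   return m + n      # dim of H_m direct-sum H_n

    # A "prime" Hilbert space admits no tensor decomposition.
    def is_prime_dim(m):
        return m > 1 and all(m % k for k in range(2, int(m ** 0.5) + 1))

    # Integers as pairs (m, n) with (m, n) ~ (m+r, n+r):
    def canon_int(m, n):
        return (m - n, 0) if m >= n else (0, n - m)

    # Rationals as pairs (m, n) with (m, n) ~ (k*m, k*n):
    def canon_rat(m, n):
        g = gcd(m, n)
        return (m // g, n // g)

    print(tensor(3, 5), dsum(3, 5))   # 15 8
    print(is_prime_dim(7))            # True
    print(canon_int(2, 5))            # (0, 3): the integer -3
    print(canon_rat(6, 4))            # (3, 2): the rational 3/2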

What about Hilbert spaces for which the dimension is an algebraic number? Algebraic numbers allow a description in terms of continued fractions and the Stern-Brocot (S-B) tree (see this and this), which contains a given rational number exactly once. The S-B tree allows one to see information about algebraic numbers as constructible by an algorithm with a finite number of steps, which is allowed if one accepts abstraction as a basic aspect of cognition. An algebraic number could be seen as a periodic continued fraction defining an infinite path in the S-B tree. Each node along this path would correspond to a rational having a Hilbert space analog. A Hilbert space with algebraic dimension would correspond to this kind of path in the space of Hilbert spaces with rational dimension. Transcendentals allow identification as non-periodic continued fractions and could correspond to non-periodic paths, so that also they could have Hilbert space counterparts.
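
The periodicity of the expansion and the rational nodes along the S-B path can be made concrete. The sketch below computes the continued fraction of √2, which is [1; 2, 2, 2, ...], and its convergents - the rationals whose Hilbert space analogs would label the nodes of the path:

    import math
    from fractions import Fraction

    def cf(x, n):
        """First n continued-fraction coefficients of x (floating point)."""
        out = []
        for _ in range(n):
            a = math.floor(x)
            out.append(a)
            if x == a:
                break
            x = 1 / (x - a)
        return out

    def convergents(coeffs):
        """Rational convergents p/q along the Stern-Brocot path."""
        res, (p0, q0), (p1, q1) = [], (1, 0), (0, 1)
        for a in coeffs:
            p0, q0, p1, q1 = a * p0 + p1, a * q0 + q1, p0, q0
            res.append(Fraction(p0, q0))
        return res

    print(cf(math.sqrt(2), 8))               # [1, 2, 2, 2, 2, 2, 2, 2]
    print(convergents(cf(math.sqrt(2), 6)))  # 1, 3/2, 7/5, 17/12, 41/29, 99/70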

2.3 How to obtain the analogs of higher-D spaces?

Algebraic extensions of rationals allow a cognitive realization of spaces with arbitrary dimension, identified as the algebraic dimension of the extension of rationals.

  1. One can obtain n-dimensional spaces (in the algebraic sense) with integer valued coordinates from n-D extensions of rationals. Now the n-tuples defining numbers of the extension and differing by permutations are not equivalent, so that one obtains an n-D space rather than an n-D space divided by the permutation group Sn. This is enough at the level of cognitive representations and could explain why we are able to imagine spaces of arbitrary dimension although we cannot represent them cognitively.

  2. One obtains also the Galois group, and the orbits of a set A of points of the extension under the Galois group G as G(A). One obtains also discrete coset spaces G/H and the like. These do not have any direct analog in set theory. The hierarchy of Galois groups would bring in discrete group theory automatically. The basic machinery of quantum theory emerges elegantly from the number theoretic vision.

  3. In the octonionic approach to quantum TGD one obtains also a hierarchy of extensions of rationals, since space-time surfaces correspond to zero loci for RE or IM of octonionic polynomials obtained by algebraic continuation from real polynomials with coefficients in an extension of rationals (see this).

3. Could quantum set theory make sense?

In the following my viewpoint is that of a quantum physicist fascinated by number theory and willing to reduce set theory to what could be called quantum set theory. It would follow from physics as generalised number theory (adelic physics) and have ordinary set theory as a classical correlate.

  1. From the point of view of quantum physics, set theory and the notion of number based on set theory look like somewhat artificial constructs. A nonempty set is a natural concept, but the empty set - and the set having the empty set as an element, used as the basic building brick in the construction of natural numbers - looks weird to me.

  2. From the TGD point of view it would seem that number theory plus some basic pieces of quantum theory might be more fundamental than set theory. Could set theory emerge as a classical correlate for the quantum number theory already considered, and could a quantal set theory make sense?

3.1 Quantum set theory

What could quantum set theory mean? Suppose that the number theory-quantum theory connection really works. What about set theory? Or perhaps its quantum counterpart, having ordinary set theory as a classical correlate?

  1. A purely quantal input to the notion of set would be the replacement of points with delocalized states in the set. A generic single particle quantum state as the analog of an element of a set would not be localized to a single element of the set. The condition that the state has a finite norm implies, in the case of a continuous set like the reals, that one cannot have completely localized states. This would give a quantal limitation to the axiom of choice. One can have any discrete basis of state functions in the set, but one cannot pick up just one point, since this state would have infinite norm.

    The idea of allowing only, say, rationals is not needed, since there is an infinite number of different choices of basis. Finite measurement resolution is however unavoidable. An alternative option is the restriction of the domains of wave functions to a discrete set of points. This set can be chosen in very many manners; points with coordinates in an extension of rationals are very natural and would define a cognitive representation.

  2. One can construct also the analogs of subsets as many-particle states. The basic operation would be the addition/removal of a particle from the quantum state, represented by the action of a creation/annihilation operator.

    Bosonic states would be invariant under permutations of single particle states, just like a set is the equivalence class for a collection of elements (a1,...,an) such that any two permutations are equivalent. Quantum set theory would however bring in something new: the possibility of fermionic statistics. A permutation would change the state by a phase factor −1. One would have fermionic and bosonic sets. For bosonic sets one could have multiple elements ("Bose-Einstein condensation"): in the theory of surfaces this could allow multiple copies of the same surface. Even braid statistics is possible. The phase factor in a permutation could be complex. Even non-commutative statistics can be considered (a toy illustration follows this list).

    Many-particle states formed from particles which are not identical are also possible, and now the different particle types can be ordered. One obtains n-tuples decomposing to an ordered K-tuple of ni-tuples, which consist of identical particles and are quantum sets. One could talk about K-sets as a generalization of the set, as analogs of classical sets with K-colored elements. Group theory would enter into the picture via permutation groups, and braid groups would bring in braid statistics. Braid strands would have K colors.
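
As promised above, a toy illustration of the set/statistics analogy: bosonic states forget the order and allow multiplicity (a multiset), while fermionic antisymmetrization introduces the permutation sign and kills any state with a repeated label - the analog of a set with no repeated elements. The representation of states as tuples of labels is mine, purely illustrative.

    def bosonic(state):
        """Symmetrized state: order forgotten, multiplicity kept (a multiset)."""
        return tuple(sorted(state))

    def fermionic(state):
        """Antisymmetrized state: sign of the sorting permutation, or None
        if a label repeats (Pauli exclusion - the analog of a set)."""
        s, sign = list(state), 1
        if len(set(s)) < len(s):
            return None
        for i in range(len(s)):                 # bubble sort, counting swaps
            for j in range(len(s) - 1 - i):
                if s[j] > s[j + 1]:
                    s[j], s[j + 1] = s[j + 1], s[j]
                    sign = -sign
        return sign, tuple(s)

    print(bosonic(('b', 'a', 'a')))   # ('a', 'a', 'b'): Bose-Einstein multiplicity
    print(fermionic(('b', 'a')))      # (-1, ('a', 'b')): odd permutation
    print(fermionic(('a', 'a')))      # None: no repeated elements in a fermionic set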

3.2 How to obtain classical set theory?

How could one obtain classical set theory?

  1. Many-particle states represented algebraically are detected in the lab as sets: this is quantum classical correspondence. This remains to me one of the really mysterious looking aspects in the interpretation of quantum field theory. For some reason it is usually not mentioned at all in popularizations. The reason is probably that popularization deals typically with wave mechanics but not quantum field theory, unless it is about the Higgs mechanism, which is the weakest part of quantum field theory!

  2. From the point of view of quantum theory the empty set would correspond to the vacuum. It is not observable as such. Could the situation change in the presence of a second state representing the environment? Could the fundamental sets always be non-empty and correspond to states with non-vanishing particle number? Natural numbers would correspond to eigenvalues of an observable telling the cardinality of the set. Could representable sets be like natural numbers?

  3. Usually integers are identified as pairs of natural numbers (m,n) such that the integer corresponds to m−n. Could the set theoretic analog of an integer be a pair (A,B) of sets such that A is a subset of B or vice versa? Note that this does not allow pairs with disjoint members. (A,A) would correspond to the empty set. This would give rise to sets (A,B) and their "antisets" (B,A) as analogs of positive and negative integers.

    One can argue that antisets are not physically realizable. Sets and antisets would have as analogs two quantizations in which the roles of the oscillator operators and their hermitian conjugates are exchanged. The operators annihilating the ground state are called annihilation operators. Only one of these realizations is possible, but not both simultaneously.

    In ZEO one can ask whether these two options correspond to the positive and negative energy parts of zero energy states, or to states with state function reduction at either boundary of the CD, identified as correlates for conscious entities with opposite arrows of geometric time (generalized Zeno effect).

  4. The cardinality of a set, the number of elements in the set, could correspond to the eigenvalue of an observable measuring particle number. Many-particle states consisting of bosons or fermions would be analogs of sets, since the ordering does not matter. Also braid statistics would be possible.

    What about cardinality as a p-adic integer? In the p-adic context one can assign to an integer m the integer −m = m×(p−1)×(1+p+p^2+...). This is infinite as a real integer but finite as a p-adic integer. Could one say that the antiset of an m-element set, as the analog of a negative integer, has cardinality −m = m(p−1)(1+p+p^2+...)? This number does not have a cognitive representation, since it is not finite as a real number, but it is cognizable (a small numerical illustration follows this list).

    One could argue that negative numbers are cognizable but not cognitively representable as cardinalities of sets. This representation must be distinguished from cognitive representations as points of the imbedding space with coordinates in an extension of rationals. Could one say that antisets, and the empty set as its own antiset, can be cognized but cannot be cognitively represented?
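
The formula −m = m(p−1)(1+p+p^2+...) can be checked digit by digit: truncating the geometric series at k digits gives precisely −m mod p^k. A minimal sketch:

    def padic_digits(n, p, k):
        """First k base-p digits of the p-adic expansion of the integer n."""
        n %= p ** k            # for n = -m this equals p^k - m
        digits = []
        for _ in range(k):
            digits.append(n % p)
            n //= p
        return digits

    print(padic_digits(3, 5, 6))    # [3, 0, 0, 0, 0, 0]
    print(padic_digits(-3, 5, 6))   # [2, 4, 4, 4, 4, 4]: -3 = 2 + 4*5 + 4*25 + ...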

A nasty mathematician would ask whether I can really start from a Hilbert space of state functions and deduce from it the underlying set. The elements of the set itself should emerge from this as analogs of completely localized single particle states labelled by the points of the set. In the case of a finite-dimensional Hilbert space this is trivial. The number of points in the set would be equal to the dimension of the Hilbert space. In the case of an infinite-D Hilbert space the set would have an infinite number of points.

Here one has two views about an infinite set. One has both separable (infinite-D in the discrete sense: a particle in a box with discrete momentum spectrum) and non-separable (infinite-D in the real sense: a free particle with continuous momentum spectrum) Hilbert spaces. In the latter case the completely localized single particle states would be represented by delta functions divided by infinite normalization factors. They are routinely used in Dirac's bra-ket formalism, but problems emerge in quantum field theory.

A possible solution is that one weakens the axiom of choice and accepts that only discrete point sets (possibly finite) are cognitively representable, and one has wave functions localized to a discrete set of points. A stronger assumption is that these points have coordinates in an extension of rationals, so that one obtains number theoretical universality and adeles. This is the TGD view and conforms also with the identification of hyper-finite factors of type II1 as the basic algebraic objects in TGD based quantum theory, as opposed to wave mechanics (type I) and quantum field theory (type III). They are infinite-D but allow an excellent approximation as finite-D objects.

This picture could relate to the notion of non-commutative geometry, where the set emerges as the spectrum of an algebra: the points of the spectrum label the ideals of the integer elements of the algebra.

See the article Some layman considerations related to the fundamentals of mathematics.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

What is the IQ of a neutron star?

" Humans and Supernova-Born Neutron Stars Have Similar Structures, Discover Scientists" is the title of a popular article about the finding that neutron stars and eukaryotic (not only human) cells contain geometrically similar structures. In cells the cytoplasma between cell nucleus and cell membrane contains a complex highly folded membrane structure known as endoplasmic reticulum (ER). ER in turn contains stacks of evenly spaced sheets connected by helical ramps. They resemble multistory parking garages (see the illustration of the popular article). These structures are referred to as parking places for ribosomes, which are the machinery for the translation of mRNA to amino-acids. The size scale of these structures must be in the range 1-100 microns.

Computer simulations for neutron stars predict geometrically similar structures, whose size is however a million times larger and must therefore be in the range 1-100 meters. The soft condensed-matter physicist Greg Huber from U.C. Santa Barbara and the nuclear physicist Charles Horowitz from Indiana University have worked together to explore the shapes (see this and this).

The physical principles leading to these structures look quite different. On the nuclear physics side one has the strong and electromagnetic interactions at the microscopic level, and in the model used they give rise to these geometric structures in macroscopic scales. In living matter the model assumes basically entropic forces, and the basic variational principle is the minimization of the free energy of the system - the second law of thermodynamics for a system coupled to a thermal bath at constant temperature. The proposal is that some deeper principle might be behind these intriguing structural similarities.

In the TGD framework one is forced to question whether the basic principles behind these models are really fundamental and to consider deeper reasons for the geometric similarity. One ends up challenging even the belief that neutron stars are just dead matter.

  1. In the TGD framework space-time, identified as a 4-D surface in H=M4× CP2, is a many-sheeted fractal structure. In TGD these structures are topological structures of space-time itself as a 4-surface rather than of the distribution of matter in a topologically trivial, almost empty Minkowski space.

    TGD space-time is also fractal, characterized by the hierarchy of p-adic length scales assignable to primes near powers of two and by a hierarchy of Planck constants. Zero energy ontology (ZEO) predicts also a hierarchy of causal diamonds (CDs) as regions inside which space-time surfaces are located.

    The usual length scale reductionism is replaced with fractality, and the fractality of the many-sheeted space-time could explain the structural similarity of structures with widely different size scales.


  2. Dark matter is identified as a hierarchy of phases of ordinary matter labelled by the value heff=n× h of Planck constant. In adelic physics heff/h=n has a purely number theoretic interpretation as a measure for the complexity of an extension of rationals - the hierarchy of dark matters would correspond to the hierarchy of these extensions, and evolution corresponds to the increase of this complexity. It would be the dark matter at the flux tubes of the magnetic body of the system that would make the system living and intelligent. This would be true for all systems, not only for those that we regard as living systems. Perhaps even for neutron stars!

  3. In adelic physics (see this) p-adic physics for various primes as physics of cognition and ordinary real number based physics are fused together. One has a hierarchy of adeles defined by extensions of rational numbers (not only algebraic extensions but also those using roots of e). The higher the complexity of the extension, the larger the number of common points shared by reals and p-adics: they correspond to space-time points with coordinates in the extension of rationals defining the adele. These common points are identified as cognitive representations, something in the intersection of the cognitive and the sensory. The larger the number of points, the more complex the cognitive representations. Adeles thus define an evolutionary hierarchy.

    The points of the space-time surface defining the cognitive representation are excellent candidates for the carriers of fundamental fermions, since many-fermion states allow an interpretation in terms of a realization of Boolean algebra (see the toy sketch below). If so, then the complexity of the cognitive representation characterized by heff/h increases with the density of fundamental fermions! The larger the density of matter, the higher the intelligence of the system, if this view is correct!
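
A toy sketch (mine, not TGD's actual formalism) of the statement that many-fermion states realize a Boolean algebra: each fermion mode at a point of the cognitive representation is one bit, a Fock state is a bit string, and the Boolean operations act bitwise on occupation numbers.

    from itertools import product

    N_MODES = 3  # fermion modes at points of the cognitive representation

    # Each many-fermion state is an occupation-number tuple, i.e. a Boolean word;
    # there are 2^N such states.
    fock_states = list(product((0, 1), repeat=N_MODES))

    def AND(s, t):  # mode occupied in both states
        return tuple(a & b for a, b in zip(s, t))

    def OR(s, t):   # mode occupied in at least one of the states
        return tuple(a | b for a, b in zip(s, t))

    def NOT(s):     # particle-hole conjugation flips every occupation number
        return tuple(1 - a for a in s)

    print(len(fock_states), "many-fermion states = Boolean words of length", N_MODES)
    s, t = (1, 0, 1), (0, 1, 1)
    print(AND(s, t), OR(s, t), NOT(s))  # (0, 0, 1) (1, 1, 1) (0, 1, 0)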

This view inspires interesting speculative questions.
  1. In TGD inspired theory of consciousness conscious entities form a fractal hierarchy accompanying geometric fractal hierarchies. Could the analogies between neutron stars and cells be much deeper than merely geometric? Could neutron stars be super-intelligent systems possessing structures resembling those inside cells? What about TGD counterparts of blackholes? For blackhole like structures the fermionic cognitive representation would contain even more information per volume than that for a neutron star. Could blackholes be super-intelligences instead of mere cosmic trashbins?

    Living systems metabolize. The interpretation is that the metabolic energy allows to increase the value of heff/h and to generate the negentropic entanglement crucial for cognition. Also blackholes "eat" matter from their environment: is the reason the same as in the case of the living cell?

    Living systems communicate using flux tubes connecting them and serving also as correlates of attention. In the TGD framework flux tubes emanate from all physical systems, in particular from stars and blackholes, and mediate gravitational interactions. In fact, flux tubes replace the wormholes of the ER-EPR correspondence in the TGD framework - or more precisely, wormholes replace the flux tubes when one translates to the GRT framework.

  2. Could also blackhole like structures possess the analog of the endoplasmic reticulum, which replaces the cell membrane with an entire network of membranes in the interior of the cell? An interpretation as a minimal surface is very natural in the TGD framework. Could the predicted space-time sheet with Euclidian signature of the induced metric within the blackhole like structure serve as the analog of the cell nucleus? In fact, all systems - even elementary particles - possess a space-time sheet with Euclidian signature: this sheet is analogous to the line of a Feynman diagram. Could the space-time sheet assignable to the cell nucleus have Euclidian signature of the induced metric? Could the cell membrane be analogous to a blackhole horizon?

  3. What about genetic code? In TGD inspired biology the genetic code could be realized already at the level of dark nuclear physics in terms of strings of dark protons: also ordinary nuclei are identified as strings of nucleons. The biochemical representation would be only a secondary representation, and biochemistry would be a kind of shadow of the deeper dynamics of dark matter and magnetic flux tubes. Dark 3-proton states correspond naturally to DNA, RNA, tRNA and amino-acids, and dark nuclei to polymers of these states (see this).

    Could neutron stars containing dark matter as dark nuclei indeed realize the genetic code? This view about dark matter leads also to the proposal that so called cold fusion could actually correspond to dark nucleosynthesis, such that the resulting dark nuclei with rather small nuclear binding energy transform to ordinary nuclei and liberate most of the ordinary nuclear binding energy in the process (see this). Could dark nucleosynthesis produce elements heavier than Fe and also part of the lighter elements outside stellar interiors? Could this happen also in the fusion of neutron stars to a neutron star like entity, as the recent simultaneous detection of gravitational waves (the GW170817 event) and em radiation from this kind of fusion suggests (see this)?

  4. How can one understand a cell (or any system) as a trashbin like structure maximizing its entropy on the one hand and as an intelligent system on the other hand? This can make sense in the TGD framework, where the amount of conscious information, negentropy, is measured by the sum of p-adic variants of entanglement entropies and is negative(!) thanks to the properties of the p-adic norm (see the sketch below). Neutron stars, blackholes and cells would be entropic objects if one limits the consideration to the real sector of adeles, but in the p-adic sectors they would carry conscious information. The sum of the real and p-adic entropies tends to be negative. A living cell would be a very entropic object in the real sense but very negentropic in the p-adic sense: even more, the sum of the negative p-adic negentropies associated with cognition in adelic physics would overcome this entropy (see this).
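
The sign claim is easy to check numerically. Below is a minimal sketch of the number-theoretic entanglement entropy in which log P_k of the Shannon formula is replaced by the logarithm of the p-adic norm |P_k|_p; for rational probabilities whose denominators are powers of p the result is negative, i.e. the entanglement carries information. The definition follows the text; the example probabilities are my own choice.

    from fractions import Fraction
    from math import log

    def p_adic_norm(x, p):
        """|x|_p = p**(-n) for x = p**n * a/b with a and b coprime to p."""
        n, num, den = 0, x.numerator, x.denominator
        while num % p == 0:
            num //= p; n += 1
        while den % p == 0:
            den //= p; n -= 1
        return Fraction(p) ** (-n)

    def shannon_entropy(probs):
        return -sum(float(P) * log(float(P)) for P in probs)

    def p_adic_entropy(probs, p):
        # log P_k replaced by log |P_k|_p: can be negative, i.e. negentropic.
        return -sum(float(P) * log(float(p_adic_norm(P, p))) for P in probs)

    probs = [Fraction(1, 4)] * 4           # maximally entangled 4-dim toy case
    print(shannon_entropy(probs))          # +1.386...: entropic in the real sense
    print(p_adic_entropy(probs, p=2))      # -1.386...: negentropic 2-adically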

See the article Cold fusion again.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Saturday, October 21, 2017

Some comments about GW170817

The observation of GW170817 was perhaps the event of the year in astronomy. Both gravitational waves and electromagnetic radiation from the collision of two neutron stars fusing to a single object were detected. The event occurred at a distance of order 130 Mly (the size scale of large voids). This was a treasure trove of information.

The first piece of information relates to the question about the synthesis of elements heavier than Fe. It is assumed that the heavier elements are generated in the so called r-process involving creation of neutrons fusing with nuclei. One option is that the r-process accompanies supernova explosions, but SN1987A did not provide support for this hypothesis: the characteristic em radiation accompanying the r-process was not detected. GW170817 generated also em radiation, a so called kilonova (see this), and the em radiation accompanying the r-process was reported. Therefore this kind of collision would generate at least part of the heavier elements. In the TGD framework also so called dark nucleosynthesis, occurring outside stellar interiors and explaining so called nuclear transmutations, which are by now a rather well-established phenomenon, would contribute to the generation of heavier elements (and also the lighter ones) (see this).

The second piece of information was that in GW170817 both gravitational waves and a gamma ray signal were detected, and the difference between the arrival times was about 1.7 seconds: gamma rays arrived slightly after the gravitational waves. From this one concludes that the difference between the effective propagation velocities of gravitational and em waves is extremely small.

Note that a similar difference between the neutrino signal and the gamma ray signal was measured for SN1987A. The gamma rays from SN1987A even arrived in two separate pulses. In this case the delay was longer, and a possible TGD explanation is that the signals arrived along different space-time sheets (one can certainly tailor also other explanations).

  1. In the recent case it would seem that gravitons and photons arrived along the same space-time sheet (magnetic flux tubes), or at least that the difference in effective light velocity was extremely small if the sheets were different. Perhaps this is the case for all exactly massless particles. In the case of SN1987A the neutrino burst was observed 3 hours after the gamma ray burst.

  2. From the distance of about .17 Mly one can estimate Δ c/c. If Δ c/c has the same value for GW170817, the neutrino burst for it should have arrived after 2846 hours, making 118 days (day = 24 hours); see the back-of-the-envelope estimate below. This would explain why neutrinos were not detected in the case of GW170817. The standard explanation has been that the direction was such that the neutrino pulse was too weak to be detected. In any case, if colleagues took TGD seriously, they would be eagerly waiting for the arrival of the neutrino pulse!
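
A back-of-the-envelope check of the scaling (my own round numbers: 3 hours for SN1987A at about .17 Mly and 130 Mly for GW170817). The result has the same order of magnitude as the ~118 days quoted above; the exact figure is sensitive to the input distances.

    # Neutrino delay scaling, assuming Delta c / c is the same for both events.
    d_sn1987a = 0.17e6    # SN1987A distance in light years (~0.17 Mly)
    d_gw170817 = 130e6    # GW170817 distance in light years (~130 Mly)
    dt_sn1987a = 3.0      # neutrino vs gamma delay for SN1987A, in hours

    # The delay accumulates linearly with the distance travelled.
    dt_gw170817 = dt_sn1987a * d_gw170817 / d_sn1987a
    print(round(dt_gw170817), "hours =", round(dt_gw170817 / 24), "days")
    # -> about 2300 hours, i.e. roughly 96 days: the same order of magnitude
    #    as the ~118 days quoted in the text.
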
So called modified gravity theories claim that dark matter and dark energy are not real (for instance MOND, suggesting a more or less ad hoc modification of gravitation at very small accelerations, and Verlinde's model, which has received a lot of attention recently). A certain class of these models predicts a breaking of the Equivalence Principle: gravitons would couple only to the metric created by ordinary matter, whereas ordinary matter would couple to the metric created by both dark and ordinary matter, as in GRT.

Although this kind of models look hopelessly ad hoc (at least to me), they have the right to be shown wrong, and GW170817 did it (see this). The point is that the coupling to dark matter besides ordinary matter implies that gamma rays experience an additional delay and arrive later than gravitons coupling only to the ordinary matter. This causes a so called Shapiro delay of about 1000 days, much longer than the observed 1.7 seconds. Thus these models are definitely excluded. I do not know what this means for MOND and Verlinde's model.

There is an amazing variety of MOND like models out there to be killed, and another article about what GW170817 managed to do in this respect can also be found (see this). Theoretical physics is drowning in a flood of ad hoc models: this is true also in particle physics, where great narratives have been dead for four decades now. GW170817 looks therefore like a godly intervention similar to what happened with the tower of Babel.

There is a popular article titled "Seeing One Example Of Merging Neutron Stars Raises Five Incredible Question" (see this) telling that GW170817 seems to be a very badly behaving guy challenging the GRT based models for the collisions of neutron stars. Something very fishy seems to be going on, and this might be the chance for TGD to challenge the GRT based models.

  1. The observed rate for these events is 10 times higher than the naive estimate (suggesting that the colliding objects were connected by a flux tube making it possible for them to find each other, somewhat as biomolecules find each other in the molecular soup).

  2. The mass ejected from the object was much larger than predicted. The signal in the UV and optical parts of the spectrum should have lasted about one day; it lasted for two days before getting dimmer.

  3. The final state should have been a blackhole or a magnetar collapsing rapidly into a blackhole. It was however a supermassive neutron star with a mass of about 2.74 solar masses. The upper limit for a non-rotating neutron star is about 2.5 solar masses, so that the outcome should have been a blackhole without any ejecta!

    The TGD view about blackholes differs from that of GRT. The core region of all stars (actually of all physical objects, including elementary particles) involves a space-time sheet for which the signature of the induced metric is Euclidian. The signature changes at a light-like 3-surface somewhat analogous to a blackhole horizon. For blackhole like entities there is also a Schwarzschild horizon above this horizon. Could this model provide a better description of the outcome of the fusion?

  4. Why were the gamma ray bursts so strong and seen in so many directions instead of within a cone of angular width about 10-15 degrees? Although the gamma ray burst was about 30 degrees from the line of sight, it was seen.

    Heavier elements cannot be produced by fusion in stellar interiors since the process requires energy. The r-process in the fusions of neutron stars has been proposed as the mechanism, and the radiation spectrum from GW170817 is consistent with this proposal. In the TGD framework also the so called dark nucleosynthesis proposed to explain nuclear transmutations (or "cold fusion" or low energy nuclear reactions (LENR)) could contribute (see this). This mechanism would produce more energy than ordinary nuclear fusion: when a dark proton sequence (dark nucleus) transforms to an ordinary nucleus, almost the entire nuclear binding energy is liberated (see the crude comparison below). Could the mechanism producing the heavier elements be dark nuclear fusion also in the fusion of neutron stars? This would also have produced more energy than expected.
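
To get a feel for the energetics, here is a crude comparison using standard textbook numbers (my own illustration): a dark proton sequence with negligible binding energy transforming to an ordinary medium-mass nucleus would liberate roughly the full binding energy of about 8 MeV per nucleon, whereas ordinary D+T fusion yields 17.6 MeV per 5 participating nucleons.

    # Crude energy-per-nucleon comparison with textbook binding energies.
    E_B_PER_NUCLEON = 8.0   # MeV; typical binding energy per nucleon (max ~8.8 for Fe-56)
    E_DT = 17.6             # MeV released in D + T -> He-4 + n
    NUCLEONS_DT = 5         # nucleons participating in the D-T reaction

    dark = E_B_PER_NUCLEON         # dark nucleus -> ordinary nucleus: ~full binding energy
    ordinary = E_DT / NUCLEONS_DT  # ~3.5 MeV per nucleon
    print("dark nucleosynthesis: ~%.1f MeV/nucleon" % dark)
    print("ordinary D-T fusion:  ~%.1f MeV/nucleon" % ordinary)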

See the article LIGO and TGD or the chapter Quantum astrophysics of "Physics in Many-sheeted Space-time".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.



Sunday, October 15, 2017

From RNA world to RNA-tRNA world to RNA-DNA-tRNA world to DNA-RNA-protein world: how it went?

I told already earlier how the transition from RNA world to RNA-tRNA world to DNA-RNA-protein world might have taken place in the TGD Universe. Last night I realized a more detailed mechanism for the last step of the transition, relying on the TGD based general model of bio-catalysis based on heff=n×h phases of ordinary matter at dark magnetic flux tubes. It also became clear that the DNA-RNA-tRNA world very probably preceded the transition to the last world in the sequence. Therefore I glue below the appropriately modified earlier posting.

I encountered a highly interesting work related to the emergence of the RNA world: warmly recommended. For a popular article see this.

First some basic terms for the possible reader of the article. Three key enzyme activities are involved in the process, which is believed to lead to the formation of longer RNA sequences able to replicate.

  1. A ribozyme is a piece of RNA acting as a catalyst. In the RNA world RNA had to serve also as the catalyst. In the DNA world proteins took over this task, but their production requires DNA and the transcription-translation machinery.

  2. RNA ligase promotes a fusion of RNA fragments to a longer one in the presence of ATP, which transforms to AMP and diphosphate and gives metabolic energy presumably going to the fusion. In TGD Universe this would involve the generation of an atom (presumably hydrogen) with a non-standard value of heff=n×h having a smaller binding energy scale, so that ATP is needed. These dark bonds would be involved with all bio-catalytic processes.

  3. RNA polymerase promotes the polymerization of RNA from building bricks. It looks to me like a special kind of ligase adding only a single nucleotide to an existing sequence. In TGD Universe heff=n×h atoms would be involved, as would magnetic flux tubes carrying the dark analog of DNA with codons replaced by dark proton triplets.

  4. RNA recombinase promotes the exchange of pieces of the same length between RNA strands. Topologically this corresponds to two reconnections occurring at the points defining the ends of the piece. In TGD Universe these reconnections would occur for magnetic flux tubes containing the dark variant of DNA and would induce the corresponding processes at the level of chemistry.

Self-ligation should take place: RNA strands would serve as ligases for the generation of longer RNA strands. The smallest RNA sequence exhibiting self-ligation activity was found to be a 40-nucleotide RNA, shorter than expected. It had the lowest efficiency but the highest functional flexibility to ligate substrates to itself. R18 - the established RNA polymerase model - had the highest efficiency and the highest selectivity.

What I can say about the results is that they give support for the notion of RNA world.

The work is related to the vision about the RNA world proposed to precede the DNA-RNA-protein world. Why I found it so interesting is that it relates to one particular TGD inspired glimpse of what happened in primordial biology.

In TGD Universe it is natural to imagine three worlds: RNA world, RNA-tRNA world, and DNA-RNA-protein world. For an early, rather detailed version of the idea about the transition from RNA world to DNA-RNA-protein world - not yet realizing the RNA-tRNA world as an intermediate step - see this.

  1. The RNA world would contain only RNA. Protein enzymes would not be present, and RNA itself should catalyze the processes needed for the polymerization, replication, and recombination of RNA. Ribozymes are the RNA counterparts of enzymes: in the beginning RNA would itself act as the ribozymes catalyzing these processes.

  2. One can also try to imagine an RNA-tRNA world. The predecessors of tRNA molecules, containing just a single amino-acid, could have catalyzed the fusion of an RNA nucleotide to a growing RNA sequence in accordance with the genetic code. Amino-acid sequences would not have been present at this stage, since there would be no machinery for their polymerization.

  3. One can consider a transition from this world to the DNA-RNA-tRNA world. This would involve the storage of genetic information to DNA, from which it would have been transcribed by using a polymerase consisting of RNA. This phase would have required the presence of a cell membrane like structure, since DNA is stabilized inside membranes or at them. The transition to this world should have involved reverse transcription catalyzed by an RNA based reverse transcriptase. Being a big evolutionary step, this transition should involve a phase transition increasing the value of heff=n × h.

  4. My earlier proposal has been that a transition from the RNA world to the DNA-RNA-protein world took place. The transition could also have taken place from the DNA-RNA-tRNA world to a world containing also amino-acid sequences, and have led to a rapid evolution of catalysis based on amino-acid sequences.

    The amino-acid sequences originating from the tRNA that originally catalyzed RNA replication stole the place of RNA sequences as the end products of RNA replication. The ribosome started to function as a translator of RNA sequences to amino-acid sequences rather than replicating them to RNAs! The roles of protein and RNA changed! Instead of the RNA in tRNA, the amino-acid in tRNA joined to the sequence! The existing machinery started to produce amino-acid sequences!

    Presumably the modification of the ribosome or tRNA involved an addition of protein parts to the ribosome, which led to a quantum critical situation in which the roles of proteins and RNA polymers could change temporarily. When protein production became possible even temporarily, the produced proteins began to modify the ribosome further to become even more favorable for the production of proteins.

    But how to produce the RNA sequences? The RNA replication machinery was stolen in the revolution. DNA had to do it via transcription to mRNA! DNA had to emerge before the revolution or at the same time, and make possible the production of RNA via transcription of DNA to mRNA. The most natural option corresponds to "before", that is the DNA-RNA-tRNA world. DNA could have emerged during the RNA-tRNA era together with reverse transcription of RNA to DNA, with RNA sequences defining ribozymes acting as the reverse transcriptase. This would have become possible after the emergence of the predecessor of the cell membrane. After that step DNA sequences and amino-acid sequences would have been able to make the revolution together, so that RNA as the master of the world was forced to become a mere servant!

    The really science fictive option would be the identification of the reverse transcription as time reversal of transcription. In zero energy ontology (ZEO) this option can be considered at least at the level of dark DNA and RNA providing the template of dynamics for ordinary matter.

How could the copying of an RNA strand to its conjugate strand, catalysed by the amino-acid of tRNA, have transformed to a translation of RNA to an amino-acid sequence? Something certainly changed.
  1. The change must have occurred most naturally to tRNA or - less plausibly - to the predecessor of the ribosome machinery. A change in the chemical structure of tRNA is not a plausible option. Something more than chemistry is required, and in TGD Universe dark matter localized at magnetic flux tubes is the natural candidate.

  2. Evolution corresponds in TGD Universe to a gradual increase of heff=n × h. A dramatic evolutionary step indeed took place. The increase of the value of heff for some structural element of tRNA could have occurred, so that the catalysis of amino-acid sequences instead of RNA sequences started to occur.

  3. The general model for bio-catalysis in TGD Universe involves a contraction of magnetic flux tubes by a reduction of heff, bringing together the reacting molecules associated with the flux tubes: this explains the magic looking ability of biomolecules to find each other in the dense molecular soup. The reduction of heff for some dark atom(s) of some reacting molecule(s) to a smaller value temporarily liberates energy, allowing to kick the reactants over a potential wall so that the reaction can occur (atomic binding energies scale as 1/heff2). After that the liberated energy is absorbed and the ordinary atom transforms back to a dark atom.

    In the recent case heff associated with a dark atom (or atoms) of tRNA could have increased, so that the liberated binding energy would have increased and allowed to overcome a higher potential wall than before. If the potential wall to be overcome in the fusion of an additional amino-acid to a growing protein is higher than that in the fusion of an additional RNA nucleotide to a growing RNA sequence, this model could work.

  4. The activation energy for the addition of an amino-acid should thus be larger than that for an RNA nucleotide. A calculated estimate for the activation energy for the addition of an amino-acid is 63.2 eV. An estimate for the activation energy for the addition of an RNA nucleotide in the temperature range 37-13 C is in the range 35.6-70.2 eV. An estimate for the activation energy for the addition of a DNA nucleotide is 58.7 eV. The value in the case of RNA would be considerably smaller than that in the case of amino-acids at physiological temperature, and for DNA the activation energy would be somewhat smaller than for amino-acids. This is consistent with the proposed scenario. I am not able to decide how reliable these estimates are.

The natural first guess is that the dark atoms are hydrogen atoms. It is however not at all clear whether "ordinary" hydrogen atoms correspond to heff/h=n=1.
  1. Randell Mills has proposed the notion of hydrino atom to explain anomalous energy production and EUV radiation in the 10-20 nm range taking place in certain electrolytic systems and having no chemical explanation. The proposal of Mills is that the hydrogen atom can make, in the presence of a catalyst, a transition to a lower energy state with a reduced size. I have already earlier considered some TGD inspired models for the hydrino. The resemblance with the claimed cold fusion suggests that the energy production in the two cases might involve the same mechanism.

    I have considered two models for the findings (see this). The first model is a variant of the cold fusion model that might explain the energy production and the observed radiation in the EUV energy range. The second model is a variant of the hydrino atom assuming that the ordinary hydrogen atom corresponds to heff/h=nH>1 and that a catalyst containing hydrogen atoms with a lower value nh<nH could induce a phase transition transforming hydrogen atoms to hydrinos with the binding energy spectrum scaled up by the factor (nH/nh)2 and the radii scaled down by (nh/nH)2. The findings of Mills favour the value nH=6.

  2. Suppose that the transition is analogous to photon emission so that it occurs between states with Δ J=1. There are two simple options: either the direction of the electron spin changes but the orbital angular momentum remains unaffected, or the orbital angular momentum of the electron changes by Δ L=1 but the spin direction does not change.

    The simplest assumption is that the principal quantum numbers in the initial and final states are ni=1 and nf≥ ni. Assume first an initial state with (nHi,ni=1) having Li=0 and a final state with (nHf,nf≥ ni).

  3. Consider the energy difference between the initial state with (nHi,ni=1) and the final state with (nHf,nf). The initial binding energy is the ordinary binding energy of the thought-to-be hydrogen atom in the ground state: Ei= Ef(nHf/nHi)2 ≈ 13.6 eV. Here Ef denotes the final state ground state binding energy. The final state binding energy is Efnf= Ef/nf2.

    The liberated energy, defining the order of magnitude for the activation energy (a thermodynamical quantity), is given by

    Δ E= Efnf-Ei= Ef/nf2- Ef(nHf/nHi)2= Ei[(nHi/nHf)2/nf2-1].

    The condition Δ E > 0 gives

    nHi/nHf >nf .

    For nHi/nHf=nf one has Δ E=0. For instance, this occurs for (nHi,nHf)∈ {(2,1),(6,3),(6,2)} with nf=2, 2, and 3 respectively. The condition Δ E>0 gives nHi> 2.

  4. Consider first ni=nf=1, for which the spin direction of the electron changes if the transition is analogous to photon emission. By putting nf=1 in the above equation one obtains a formula for the transition energy in this case. For instance, (nHi,ni)=(6,1)→ (nHf,nf)=(3,1) would correspond to Δ E=40.8 eV, perhaps assignable to RNA polymerization, and the transition (nHi,ni)=(7,1)→ (nHf,nf)=(3,1) to Δ E= 60.4 eV, perhaps assignable to amino-acid polymerization and DNA polymerization. Note that nH=6 is supported by the findings of Mills.

  5. The table below gives the liberated energies Δ E for transitions (nHi,ni=1)→ (nHf,nf=2) in some cases.


    (nHi,ni)   (nHf,nf)   Δ E/eV
    (3,1)      (1,2)       17.0
    (4,1)      (1,2)       40.8
    (4,1)      (2,2)        0.0
    (5,1)      (1,2)       71.4
    (5,1)      (2,2)        7.7
    (6,1)      (1,2)      109.0
    (6,1)      (2,2)       17.0


    The transitions (4,1)→ (1,2) resp. (5,1)→ (1,2) might give rise to the activation energies associated with RNA resp. amino-acid polymerization. The sketch after this list reproduces these values from the formula above.

  6. If the ordinary hydrogen atom - and atoms in general - corresponded to heff/h=n=1, the liberated energies would be below the ground state energy E0=13.6 eV of the hydrogen atom and considerably below the above estimates. For heavier atoms the binding energy scale would be Z2-fold and already for carbon with Z=6 a factor 36 higher. It is difficult to obtain Δ E in the scale suggested by the estimates for the activation energies.
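
The numbers above all follow from the single formula Δ E= Ei[(nHi/nHf)2/nf2-1] with Ei=13.6 eV. A minimal sketch reproducing them (7.65 eV rounds to the 7.7 eV and 108.8 eV to the 109.0 eV of the table):

    E_I = 13.6  # eV; ground state binding energy of the ordinary hydrogen atom

    def delta_E(n_Hi, n_Hf, n_f):
        """Liberated energy in the transition (n_Hi, n_i=1) -> (n_Hf, n_f)."""
        return E_I * ((n_Hi / n_Hf) ** 2 / n_f ** 2 - 1)

    transitions = [(6, 3, 1), (7, 3, 1),                 # the n_f = 1 examples
                   (3, 1, 2), (4, 1, 2), (4, 2, 2),      # the table entries
                   (5, 1, 2), (5, 2, 2), (6, 1, 2), (6, 2, 2)]
    for n_Hi, n_Hf, n_f in transitions:
        print("(%d,1) -> (%d,%d): %.2f eV"
              % (n_Hi, n_Hf, n_f, delta_E(n_Hi, n_Hf, n_f)))
    # -> 40.80, 60.44, 17.00, 40.80, 0.00, 71.40, 7.65, 108.80, 17.00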

One could try to test whether tRNA could be modified to a state in which RNA is translated to RNA sequences rather than to proteins. This would require a reduction of heff=n× h for the dark atom in question.

See the article From RNA world to RNA-tRNA world to RNA-DNA-tRNA world to DNA-RNA-protein world: how it went? or the chapter Evolution in Many-Sheeted Space-Time of "Genes and Memes".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, October 12, 2017

Could the precursors of perfectoids emerge in TGD?

The work of Peter Scholze based on the notion of perfectoid has raised a lot of interest in the community of algebraic geometers. One application of the notion relates to the attempt to generalize algebraic geometry by replacing polynomials with analytic functions satisfying suitable restrictions. Also in TGD this kind of generalization might be needed at the level of M4× CP2, whereas at the level of M8 algebraic geometry might be enough. The notion of perfectoid as an extension of the p-adic numbers Qp allowing all pm:th roots of the p-adic prime p is central and provides a powerful technical tool when combined with its dual, which is a function field with characteristic p.

Could perfectoids have a role in TGD? The infinite-dimensionality of the perfectoid is in conflict with the vision about the finiteness of cognition. For other p-adic number fields Qq, q≠ p, the extension containing the p:th roots of p would however be finite-dimensional even in the case of a perfectoid. Furthermore, one has an entire hierarchy of almost-perfectoids allowing pm:th roots of p-adic numbers up to a finite m. The larger the value of m, the larger the number of points in the extension of rationals used, and the larger the number of points in the cognitive representations consisting of points with coordinates in the extension of rationals. The emergence of almost-perfectoids could be seen in the adelic physics framework as an outcome of evolution forcing the emergence of increasingly complex extensions of rationals.
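
One way to make the dimension counting concrete: the polynomial x^(pm)-p is Eisenstein at p and hence irreducible, so adjoining a pm:th root of p gives an extension of degree pm, which grows without bound along the almost-perfectoid hierarchy. A small sketch (the helper function is my own illustration):

    def eisenstein_at_p(coeffs, p):
        """Eisenstein criterion for an integer polynomial [a_0, ..., a_n]."""
        a0, middle, an = coeffs[0], coeffs[1:-1], coeffs[-1]
        return (an % p != 0
                and all(a % p == 0 for a in middle)
                and a0 % p == 0 and a0 % (p * p) != 0)

    p, m = 3, 2
    degree = p ** m                           # degree of x**(p**m) - p
    coeffs = [-p] + [0] * (degree - 1) + [1]  # coefficients of x**9 - 3
    print(eisenstein_at_p(coeffs, p), "-> extension degree", degree)
    # Eisenstein at p implies irreducibility, so the degree of the extension
    # obtained by adjoining p**(1/p**m) is p**m: it grows rapidly with m.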

See the article Could the precursors of perfectoids emerge in TGD?.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, October 09, 2017

What does cognitive representability really mean?

I had a debate with Santeri Satama about the notion of number, leading to the question of what cognitive representability of a number could mean. This inspired the writing of an article discussing the notion of cognitive representability. Numbers in the extensions of rationals are assumed to be cognitively representable in terms of points common to real and various p-adic space-time sheets (the correlates for the sensory and the cognitive). One allows the extensions of p-adics induced by the extension of rationals in question and the hierarchy of adeles defined by them.

One can however argue that algebraic numbers do not allow a finite representation as rational numbers do. A weaker condition is that the coding of the information about the algorithm producing the cognitively representable number contains a finite amount of information, although it might take an infinite time to run the algorithm (say one containing infinite loops). Furthermore, cognitive representations in the TGD sense are also sensory representations allowing to represent algebraic numbers geometrically (21/2 as the diagonal of the unit square). The Stern-Brocot tree associated with continued fractions indeed allows to identify rationals as finite paths connecting the root of the S-B tree to the rational in question (see the sketch below). Quadratic irrationals can be identified as eventually periodic infinite paths, so that a finite amount of information specifies the path. Transcendental numbers would correspond to infinite non-periodic paths. A very close analogy with chaos theory suggests itself.
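
A minimal sketch of the path construction for rationals, via the standard mediant descent (equivalently, the continued fraction expansion); the function is my own illustration:

    from fractions import Fraction

    def stern_brocot_path(q):
        """Finite L/R path from the root 1/1 of the Stern-Brocot tree to q > 0."""
        lo, hi = (0, 1), (1, 0)   # the fractions 0/1 and 1/0 bracketing the tree
        path = []
        while True:
            med = Fraction(lo[0] + hi[0], lo[1] + hi[1])  # mediant of the bracket
            if q == med:
                return "".join(path)
            if q < med:
                path.append("L"); hi = (med.numerator, med.denominator)
            else:
                path.append("R"); lo = (med.numerator, med.denominator)

    print(stern_brocot_path(Fraction(3, 7)))  # -> "LLRR": rationals give finite paths
    # 2**(1/2) would give the eventually periodic infinite path R LLRR LLRR ...,
    # illustrating how a finite amount of information specifies a quadratic irrational.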

See the article What does cognitive representability really mean?

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.