https://matpitka.blogspot.com/

Thursday, January 02, 2025

A new experimental demonstration for the occurrence of low energy nuclear reactions

I learned of highly interesting new experimental results related to low energy nuclear reactions (LENR) from a popular article published in New Energy Times (see this), which gives a rather detailed view of what is involved. There is also a research article by Iwamura et al with the title "Anomalous heat generation that cannot be explained by known chemical reactions produced by nano-structured multilayer metal composites and hydrogen gas" published in Japanese Journal of Applied Physics (see this).

Note that LENR replaces the earlier term "cold fusion", which became a synonym for pseudoscience since standard nuclear physics does not allow these effects. In practice, the effects studied are however the same. LENR is often discussed in terms of Widom-Larsen theory (see this), based on the assumption that the fundamental step in the process is not a strong interaction but a weak interaction: an electron with a large effective mass (in the condensed matter sense) combines with a proton to produce a neutron, which is very nearly at rest and is therefore able to get near the target nucleus. The assumption that an electron has a large effective mass and is very nearly at rest can be challenged. Also, the detailed mechanisms producing the observed nuclear transmutations are not understood in the model.

1. Experiments of the Tohoku group

Consider first the experimental arrangement and results.

  1. The target consists of alternating layers: 6 Cu layers of thickness 2 nm and 6 Ni layers of thickness 14 nm. The total thickness of this part is about 100 nm. Below this layer structure is a bulk consisting of Ni with a thickness of 10^5 nm. The temperature of the hydrogen gas is varied during the experiment in the range 610-925 degrees Celsius. This range is below the melting temperatures of Cu (1085 C) and Ni (1455 C).
  2. The target is in a chamber, pressurized by feeding hydrogen gas, which is slowly absorbed by the target. Typically this takes 16 hours. In the second phase, when the hydrogen is fully absorbed, air is evacuated from the chamber and the heaters are switched on. During this phase excess heat is produced: for instance, in the first cycle the heating power was 19 W and the excess heat was 3.2 W and lasted for about 11 hours. At the end of the second phase the heating is turned off and the cycle is restarted.

    The experiment ran for a total of 166 hours, the input electric energy was 4.8 MJ and the net thermal energy output was .76 MJ.

  3. The figure of the popular article (see this) summarizes the temporal progress of the experiment and the pressures and temperatures involved. Pressures are below 250 Pa: note that one atmosphere corresponds to 101325 Pa.

    The energy production is about 10^9 J per gram of hydrogen fuel. A rough estimate gives a thermal energy production of about 10 keV per hydrogen atom (see the numerical check after this list). Note that the thermal energy associated with the highest temperature used (roughly 1000 K) is about .1 eV. In hot nuclear fusion the energy gain is roughly 300 times higher, about 3 MeV per nucleon. The ratio of the power gain to the input power is typically below 16 percent in a given phase of the experiment.

  4. The second figure (see this) represents the depth profiles in the range 0-250 nm for the abundances of Ni-, Cu-, C-, Si- and H- ions in the initial and final situations for an experiment in which an excess heat of 9 W was generated. The original layered structure has smoothed out, which suggests that melting has occurred. This cannot be due to the feed of heating power alone: the melting of Ni requires a temperature above 1455 C.

    Earlier experiments were carried out in the absorption phase. The recent experiments were performed in the desorption phase, and the heat production was higher. The proposal is that desorption being a faster process than absorption could somehow explain this.
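
A minimal numerical cross-check of the figures quoted above (the inputs are just the reported numbers, nothing more):

```python
# Back-of-the-envelope check of the reported energy figures.

AVOGADRO = 6.022e23      # atoms per mole; 1 g of hydrogen is ~1 mole of atoms
EV_PER_J = 6.242e18

# Energy per hydrogen atom from ~10^9 J per gram of fuel.
e_per_atom_eV = 1e9 / AVOGADRO * EV_PER_J
print(f"energy per H atom ~ {e_per_atom_eV/1e3:.0f} keV")    # ~10 keV

# Thermal energy scale at the highest temperature used (~1000 K).
K_B_EV = 8.617e-5        # Boltzmann constant in eV/K
print(f"k_B * 1000 K ~ {K_B_EV*1000:.2f} eV")                # ~0.09 eV

# Overall gain over the 166 h run: 0.76 MJ out vs 4.8 MJ electric in.
print(f"output/input ~ {0.76/4.8:.0%}")                      # ~16 %
```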

The Tohoku group has looked for changes in the abundances of elements and for unusual isotopic ratios after the experiments. Iwamura reports that they have seen many unusual accumulations.
  1. However, the most prevalent is an unusually high percentage of the element oxygen showing up below the surface of the multilayer composite, within the outer areas of the bulk.

    Pre-experiment analysis for the presence of oxygen concentration, after fabrication of the multilayer composite, has indicated a concentration of 0.5 to a few percent down to 1,000 nm from the top surface. The Tohoku group has observed many accumulations of oxygen in post-experimental analyses exceeding 50 percent in specific areas.

    Iwamura says that once the multilayer is fabricated, there is no way for atmospheric oxygen to leak below the top surface, at least beyond the first few nanometers. As a cross-check, researchers looked for nitrogen (which would suggest contamination from the atmosphere) but they detected no nitrogen in the samples.

  2. The Coulomb wall makes the low energy reactions of protons with the nuclei of the target extremely slow. If one assumes that the Widom-Larsen model is a correct way to overcome the Coulomb wall, it is natural to ask what kinds of stable end products the reactions p + Ni and p + Cu, made possible by the Widom-Larsen mechanism, could yield (a small bookkeeping script follows this list). The most abundant isotope of Ni has charge and mass number (Z,A=Z+N) = (28,58) (see this), and Ni has stable isotopes with A ∈ {58,60,61,62,64}. The reaction Ni+p could lead from the stable Ni isotope (28,62) resp. (28,64) to the stable Cu isotope (29,63) resp. (29,65).

    Cu has (Z,A) = (29,63) (see this) and stable isotopes with A ∈ {63,65}. The reaction Cu+p could lead from (Z,A) ∈ (29,{63,65}) to (Z,A) ∈ (30,{64,66}). This could be followed by alpha decay to (Z,A) ∈ (28,{60,62}). Iron has 4 stable isotopes with A ∈ {54,56,57,58}; 60Fe is a radionuclide with a half life of 2.6 million years decaying to 60Ni. The alpha particle could in turn induce the transmutation of C to O.
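
The proton-capture bookkeeping of the last two items can be checked with a few lines of code. This is only a sketch: the stable-isotope sets are standard nuclear data hardcoded by hand, and no reaction rates are modeled.

```python
STABLE = {
    28: {58, 60, 61, 62, 64},   # Ni
    29: {63, 65},               # Cu
    30: {64, 66, 67, 68, 70},   # Zn
}
NAMES = {28: "Ni", 29: "Cu", 30: "Zn"}

def p_capture_to_stable(z):
    """Single proton captures (Z,A) -> (Z+1,A+1) landing on a stable isotope."""
    return [(f"{a}{NAMES[z]}", f"{a+1}{NAMES[z+1]}")
            for a in sorted(STABLE[z]) if a + 1 in STABLE.get(z + 1, set())]

for z in (28, 29):
    print(f"{NAMES[z]} + p:", p_capture_to_stable(z))
# Ni + p: [('62Ni', '63Cu'), ('64Ni', '65Cu')]  -- as stated in the text
# Cu + p: [('63Cu', '64Zn'), ('65Cu', '66Zn')]
```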

2. Theoretical models

    Krivit has written a 3-part book "Hacking the Atom: Explorations in Nuclear Research" about LENR (see this, this, and this). I have written an article (see this) about "cold fusion"/LENR in the TGD framework inspired by this book.

    The basic idea of Widom-Larsen theory (WL, see this) is as follows. First, a heavy surface electron is created by electromagnetic radiation in the LENR cells. This heavy electron binds with a proton to form an ultra-low momentum (ULM) neutron and a neutrino, so that a weak reaction would be basically in question. The heaviness of the surface electron implies that the kinetic tunnelling barrier due to the Uncertainty Principle is very low and allows the electron and proton to get very near to each other so that the weak transition p+e → n+ν can occur. The neutron has no Coulomb barrier and has a very low momentum so that it can be absorbed by a target nucleus at a high rate.

    The difference between the neutron and proton masses is m_n - m_p ≈ 2.5 m_e. The final state neutron produced in p+e → n+ν is almost at rest. One can argue that at the fundamental level ordinary kinematics should be used. The straightforward conclusion would be that the energy of the electron must be about 2.5 m_e, so that it would be relativistic.

    The second criticism relates to the heaviness of the surface electron. I did not find any support on the web for heavy electrons in Cu and Ni. The Wikipedia article (see this) and a web search suggest that heavy electrons quite generally involve f electrons, which are absent in Cu and Ni.

    I also found a second model involving heavy electrons but no weak interactions (see this). Heavy electrons would catalyze nuclear transmutations. There would be three systems involved: electron, proton and nucleus. There would be no formation of an ultra-low momentum neutron. Instead, an electron would form a bound state of nuclear size with a proton. Although Coulomb attraction is present, the Uncertainty Principle prevents the tunnelling of ordinary electrons to a nuclear distance. It is argued that a heavy electron has a much smaller quantum size and can tunnel to this distance. After this, the electron is kicked out of the system, and by energy conservation its energy is compensated by the generation of binding energy between the proton and the nucleus so that a heavier nucleus is formed. The same objection applies to both the Widom-Larsen model and this model.

    What about the TGD based model derived to explain the electrolysis based "cold fusion" (see this)? The findings indeed allow one to sharpen the TGD based model for "cold fusion", based on the generation of dark nuclei as dark proton sequences with binding energies in the keV range instead of the MeV range. One can understand what happens by starting from 3 mysteries.

    1. The final state contains negatively charged Ni-, Cu-, C-, Si-, O-, and H- ions. What causes their negative charge? In particular, the final state target contains O- ions although there is no oxygen present in the target in the initial state!
    2. A further mystery is that the Pollack effect requires water. Where could the water come from?
    Could O2 and H2 molecules present in the chamber in the initial state give rise to oxygen ions in the final state? Could the spontaneously occurring reaction 2H2+O2 → 2H2O in the H2 pressurized chamber, liberating an energy of about 4 eV, generate the water in the target volume so that the Pollack effect, induced by heating, could take place for the water? Note that the reverse of this reaction occurs in photosynthesis. The Pollack effect would transform ordinary protons to dark protons and generate a negatively charged exclusion zone involving Ni-, Cu-, C-, Si-, O-, and H- ions in the final state. The situation would effectively reduce to that in the electrolyte systems studied in the original "cold fusion" experiments.

    The spontaneous transformation of dark nuclei to ordinary ones would liberate essentially all of the ordinary nuclear binding energy. It is of course not obvious whether the transformation to ordinary nuclei is needed to explain the heat production: it is however necessary to explain the nuclear transmutations, which are not discussed in the article of the Tohoku group. The resulting dark nuclei could be rather stable, and the X-ray counterpart of the emission of gamma rays could explain the heating. That the gamma rays of ordinary nuclear physics have not been observed in "cold fusion" is the killer objection against "cold fusion" based on standard nuclear physics. In TGD, gamma rays would be replaced by X rays in the keV range, which is also the average thermal energy produced per hydrogen atom.

3. TGD inspired models of "cold fusion"/LENR or whatever it is

    TGD suggests dark fusion (see this and this) as the mechanism of "cold fusion". One can consider two models explaining these phenomena in the TGD Universe. Both models rely on the hierarchy of Planck constants heff = n×h (see this, this, this, this) explaining dark matter as ordinary matter in heff = n×h phases emerging at quantum criticality. Large heff implies scaled-up Compton lengths and other quantal lengths, making quantum coherence possible at longer scales than usual.

    The hierarchy of Planck constants heff = n×h now has a rather strong theoretical basis and reduces to number theory (see this). Quantum criticality would be essential for the phenomenon and could explain the critical doping fraction of the cathode by D nuclei. Quantum criticality could also help to explain the difficulties in replicating the effect.

3.1 Simple modification of WL does not work

    The first model is a modification of WL and relies on dark variants of weak interactions. In this case LENR would be an appropriate term.

    1. Concerning the rate of the weak process e+p → n+ν, the situation changes if heff is large enough, and rather large values are indeed predicted. heff could be large also for weak gauge bosons in the situation considered. Below their Compton length, weak bosons are effectively massless, and this scale would be scaled up by a factor n = heff/h to almost atomic scale. This would make weak interactions as strong as electromagnetic interactions and long ranged below the Compton length, so that the transformation of proton to neutron would be a fast process. After that, a nuclear reaction sequence initiated by neutrons would take place as in WL. There is no need to assume that the neutrons are ultraslow, but the electron mass remains a problem. Note that also the proton mass could be higher than normal, perhaps due to Coulomb interactions.
    2. As such this model does not solve the problem related to the too small electron mass. Nor does it solve the problem posed by gamma ray production.

3.2 Dark nucleosynthesis

    Also the second TGD inspired model involves the heff hierarchy. Now LENR is not an appropriate term: the most interesting things would occur at the level of dark nuclear physics, which is now a key part of TGD inspired quantum biology.

    1. One piece of inspiration comes from the exclusion zones (EZs) of Pollack (see this), which are negatively charged regions (see this and this). Also the work of the group of Prof. Holmlid (see this and this), not yet included in the book of Krivit, was of great help. The TGD proposal (see this) is that the protons causing the ionization go to magnetic flux tubes, which have an interpretation in terms of space-time topology in the TGD Universe. At the flux tubes they have heff = n×h and form dark variants of nuclear strings, which are basic structures also for ordinary nuclei.
    2. The sequences of dark protons at the flux tubes would give rise to dark counterparts of ordinary nuclei, proposed to be also nuclear strings but with a dark nuclear binding energy whose natural unit is MeV/n, n = heff/h, rather than MeV. The most plausible interpretation is that the field body/magnetic body of the nucleus has heff = n×h and is scaled up in size. n = 2^11 is favoured by the fact that by Holmlid's experiments the distance between dark protons should be about the electron Compton length.

      Besides protons, also deuterons and even heavier nuclei can end up at the magnetic flux tubes. They would however preserve their size, and only the distances between them would be scaled to about the electron Compton length on the basis of the data provided by Holmlid's experiments (see this and this).

      The reduced binding energy scale could solve the problem caused by the absence of gamma rays: instead of gamma rays one would have much less energetic photons, say X rays assignable to n = 2^11 ≈ m_p/m_e (see the numerical sketch at the end of this subsection). For infrared radiation the energy of the photons would be about 1 eV, and the nuclear energy scale would be reduced by a factor of about 10^-6-10^-7: one cannot exclude this option either. In fact, several options can be imagined since the entire spectrum of heff is predicted. This prediction is testable.

      Large heff would also induce quantum coherence in a scale between the electron Compton length and the atomic size scale.

    3. The simplest possibility is that the protons are just added to the growing nuclear string. In each addition one has (A,Z) → (A+1,Z+1). This is exactly what happens in the mechanism proposed by Widom and Larsen, whose simplest reaction sequences already explain the spectrum of end products reasonably well.

      In WL the addition of a proton is a four-step process. First, e+p → n+ν occurs at the surface of the cathode. This requires a large electron mass renormalization and a fine tuning of the electron mass to be very nearly equal to, but higher than, the n-p mass difference.

      There is no need for these questionable assumptions of WL in TGD. Even the assumption that weak bosons correspond to a large heff phase might not be needed, but cannot be excluded without further data. The implication would be that the dark proton sequences decay rather rapidly to beta stable nuclei if a dark variant of p → n is possible.

    4. EZs and the accompanying flux tubes could be created also in an electrolyte: perhaps in the region near the cathode, where bubbles are formed. For flux tubes leading from the system to the external world, most of the fusion products as well as the liberated nuclear energy would be lost. This could partially explain the poor replicability of the claims about energy production. Some flux tubes could however end at the surface of the catalyst under some conditions. Even in this case the particles emitted in the transformation to ordinary nuclei could be such that they leak out of the system, and Holmlid's findings indeed support this possibility.

      If there are negatively charged surfaces present, the flux tubes can end on them since the positively charged dark nuclei at the flux tubes, and therefore the flux tubes themselves, would be attracted by these surfaces. The most obvious candidate is the catalyst surface, to which electronic charge waves were assigned by WL. One can wonder whether Tesla observed in his experiments the leakage of dark matter to various surfaces of the laboratory building. In the collision with the catalyst surface, dark nuclei would transform to ordinary nuclei releasing all of the ordinary nuclear binding energy. This could create the reported craters at the surface of the target and cause heating. One cannot of course exclude that nuclear reactions take place between the reaction products and the target nuclei. It is quite possible that most dark nuclei leave the system.

      It was in fact Larsen who realized that there are electronic charge waves propagating along the surfaces of some catalysts, and for good catalysts such as gold they are especially strong. This suggests that electronic charge waves play a key role in the process. The proposal of WL is that, due to the positive electromagnetic interaction energy, the dark protons of dark nuclei could have a rest mass higher than that of the neutron (just as in ordinary nuclei) so that the reaction e+p → n+ν would become possible.

    5. Spontaneous beta decays of protons could take place inside dark nuclei just as they occur inside ordinary nuclei. If the weak interactions are as strong as the electromagnetic interactions, dark nuclei could rapidly transform to beta stable nuclei containing neutrons: this is also a testable prediction. Also the dark strong interactions would proceed rather fast, and the dark nuclei at the magnetic flux tubes could be stable in the final state. If dark stability means the same as ordinary stability, then also the isotope-shifted nuclei would be stable. There is evidence that this is the case.
    Neither CF nor LENR is an appropriate term for the TGD inspired option. One would not have ordinary nuclear reactions: nuclei would be created as dark proton sequences, and the nuclear physics involved has a considerably smaller energy scale than usual. This mechanism could allow at least the generation of nuclei heavier than Fe, which is not possible inside stars, so that supernova explosions would not be needed to achieve this. The observation that transmuted nuclei are observed in four bands of nuclear charge Z irrespective of the catalyst used suggests that the catalyst itself does not determine the outcome.

    One can of course wonder whether even "transmutation" is an appropriate term now. Dark nucleosynthesis, which could in fact be the mechanism of ordinary nucleosynthesis outside stellar interiors to explain how elements heavier than iron are produced, might be a more appropriate term.
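
As a rough numerical illustration of the proposed scales, here is a sketch assuming heff/h = n = 2^11, the value suggested above; the per-nucleon binding energies used are generic textbook values, not predictions of the model:

```python
# Scale arithmetic for dark nuclei with h_eff/h = n = 2**11.

n = 2**11                   # = 2048, close to m_p/m_e = 1836
proton_compton_fm = 1.32    # ordinary proton Compton wavelength ~1.32 fm

# Dark proton Compton length scaled up by n: close to the electron
# Compton length (~2.4 pm), as the Holmlid data are argued to require.
print(f"scaled proton Compton length ~ {n * proton_compton_fm / 1e3:.1f} pm")

# Nuclear binding-energy scale reduced by 1/n: MeV -> keV.
for e_mev in (1.0, 7.0):    # generic per-nucleon binding energies
    print(f"{e_mev} MeV / n = {1e3 * e_mev / n:.1f} keV")
# Dark 'gamma rays' would thus land in the keV X-ray range, matching the
# ~10 keV heat per hydrogen atom estimated for the Tohoku experiment.
```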

3.3 The TGD based model and the findings of Iwamura et al

    The presence of Ni-, Cu-, C-, Si- and H- ions in the target is an important guideline. LENR involves negatively charged surfaces at which the presence of electrons is thought to catalyze transmutations: the WL model relies on this idea. The question concerns the ionization mechanism.

    1. The appearance of Si- in the entire target volume could be understood in terms of melting. It is difficult to understand its appearance as being due to nuclear transmutations.
    2. What is remarkable is the appearance of O-. The Coulomb wall makes it very implausible that the absorption of an ordinary alpha particle in LENR could induce the transmutation of C to O.

      Could the oxygen be produced by dark fusion? It is difficult to see why oxygen should have such a preferred role as a reaction product in dark fusion favouring light nuclei.

      Could the oxygen enter the target during the first phase, when the pressurized hydrogen gas is present together with air, as the statement that air was evacuated after the first stage suggests? Iwamura has however stated that nitrogen, also present in air, is not detected in the target, so that a leakage of O into the target looks implausible. Could the leakage of oxygen rely on a less direct mechanism?

    3. Oxygen resp. hydrogen appears as O2 resp. H2 molecules, with binding energies of 5.912 eV resp. 4.51 eV. Therefore the reaction 2H2+O2 → 2H2O could occur during the pressurization phase. The energy liberated in this reaction is estimated to be about 4.88 eV (see this).
    4. What is remarkable is that water plays a key role in the Pollack effect, interpreted as the formation of dark proton sequences. The Pollack effect generates exclusion zones as negatively charged regions, and the Ni-, Cu-, C-, Si- and H- ions would serve as a signature of these regions. In the "cold fusion" based on electrolysis the water would be present from the beginning, but now it would be generated by the proposed mechanism.

      The difference between the bonding energy of OH and the binding energy of O- is about .33 eV in the absence of electric fields and corresponds to the thermal energy at a temperature of 630 C. This suggests that the heating replaces the IR photons of the ordinary Pollack effect as the energy source inducing the formation of dark protons and of exclusion zones consisting of negative ions.

    5. In fact, the Pollack effect suggests a deep connection between computers, quantum computers and living matter based on the notion of the OH-O- + dark proton qubit and its generalizations (see this).
    6. The earlier TGD based model for "cold fusion" as dark fusion suggests that the value of heff for dark protons is such that their Compton length is of the order of the electron Compton length. Dark proton sequences as dark nuclei would spontaneously decay to ordinary nuclei and produce the heat. In TGD, also ordinary nuclei form nuclear strings as monopole flux tubes (see this).

      TGD assigns a large value of heff to systems having strong enough long range classical gravitational and electric fields (see this and this). For gravitational fields the gravitational Planck constant is very large, and the gravitational Compton length is one half of the Schwarzschild radius of the system with large mass (Sun or Earth). In biology, charged systems such as DNA, cells and the Earth itself involve a large negative charge and therefore a large electric Planck constant proportional to the total charge of the system. The Pollack effect generates negatively charged exclusion zones, which could be characterized by the gravitational or electric Planck constant. In the recent case, the electric Compton length of dark protons should be of the order of the electron Compton length so that heff/h ≈ m_p/m_e ≈ 2^11 is suggestive.

3.4 Summary

    In the TGD based model, the reaction 2H2+O2 → 2H2O transforms the situation to that appearing in electrolysis, and the Pollack effect would also now be the basic mechanism producing dark nuclei as dark proton sequences transforming spontaneously to ordinary nuclei. Whether this mechanism is involved should be tested.

    The TGD based model predicts much more than is reported in the article of Iwamura et al. A spectrum of light nuclei, containing at least alpha particles, should be produced in the process, but there is no information about this spectrum in the article.

    1. The article reports only the initial and final state concentrations of Ni-, Cu-, C-, O-, and H- but does not provide information about all the nuclei produced by transmutations. Melting has very probably occurred for Ni and Cu.
    2. The heat production rate is higher during the desorption phase than during the absorption phase. The TGD explanation would be that the dark proton sequences have reached their full length during desorption and can produce more nuclei as they decay.
    3. The finding that the maximum energy production per hydrogen atom is roughly 1/100 of the binding energy scale of nuclei forces one to challenge dark fusion as a reaction mechanism. The explanation could be that the creation of dark nuclei from hydrogen atoms is the rate limiting step. If roughly 1 percent of the hydrogen atoms generate dark protons, the rate of heat production could be understood (see the sketch after this list).
    4. The basic prediction of the Widom-Larsen model about (A,Z) → (A+1,Z+1) → ... follows trivially from the TGD inspired model, in which dark nuclei, with a binding energy scale much lower than that of ordinary nuclei and a Compton length of the order of the electron Compton length, are formed as sequences of dark protons, deuterons or even heavier nuclei, which then transform to ordinary nuclei and liberate the nuclear binding energy. This occurs at negatively charged surfaces (that of the cathode, for instance) since they attract the positively charged flux tubes. On the other hand, the negative surface charge could be generated in the Pollack effect for the water molecules, generating an exclusion zone and dark protons at the monopole flux tubes.

      The energy scale of the dark variants of gamma rays liberated in dark nuclear reactions is considerably smaller than that of ordinary gamma rays, since it is scaled down from a few MeV to a few keV, which indeed corresponds to the thermal energy liberated per hydrogen atom. This could explain why gamma rays are not observed. The questionable assumptions of the Widom-Larsen model are not needed.

      The maximum length of the dark nucleon sequences determines how heavy nuclei can emerge. The minimum length corresponds to a single alpha particle, which could induce nuclear transformations such as the transmutation of C to O. Part of the dark nuclei could escape from the target volume and remain undetected. Dark nuclei could also directly interact with the target nuclei, in particular Ni and Cu.
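
A sketch of the rate-limiting-step estimate in item 3, assuming a representative ~1 MeV of liberated ordinary binding energy per nucleon (an assumption made here for illustration):

```python
e_obs_keV = 10.0        # observed heat per hydrogen atom (estimate above)
e_lib_keV = 1000.0      # ~1 MeV per nucleon liberated when a dark proton
                        # becomes part of an ordinary nucleus (assumed)

print(f"required dark-proton fraction ~ {e_obs_keV / e_lib_keV:.0%}")  # ~1 %
```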

    See A new experimental demonstration for the occurrence of low energy nuclear reactions or the chapter Cold Fusion Again.

    For a summary of the earlier postings see Latest progress in TGD.

    For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, December 30, 2024

From blackholes to time reversed blackholes: comparing the views of time evolution provided by general relativity and TGD

The TGD inspired very early cosmology is dominated by cosmic strings. Zero energy ontology (ZEO) suggests that this is the case also for the very late cosmology, or the one emerging after a "big" state function reduction (BSFR) changing the arrow of time. Could this be a counterpart for the general relativity based vision of black holes as the endpoint of evolution? Now they would also be starting points of a time evolution, a mini cosmology with a reversed arrow of time. This would conform with the TGD explanation for stars and galaxies older than the Universe and with the picture produced by JWST.

First some facts about zero energy ontology (ZEO) are needed.

  1. In ZEO, space-time surfaces satisfy almost deterministic holography and have their ends at the boundaries of a causal diamond (CD), which is the intersection of future and past directed light-cones and can have a varying scale. The 3-D states at the passive boundary of the CD are unaffected in "small" state function reductions (SSFRs) but change at the active boundary. In the simplest scenario the CD itself suffers a scaling in SSFRs.
  2. In BSFRs the roles of the passive and active boundaries of the CD change. The self defined by the sequence of SSFRs "dies" and reincarnates with an opposite arrow of geometric time. The hierarchy of effective Planck constants predicted by TGD implies that BSFRs occur even in cosmological scales, and this could happen even for blackhole-like objects in the TGD counterpart of evaporation.
Also some basic ideas related to TGD based astrophysics and cosmology are in order.
  1. I have suggested that the counterpart of the GRT blackhole in TGD, which I call a blackhole-like object (BH), is a maximally dense volume-filling flux tube tangle (see this). Actually an entire hierarchy of BHs with quantized string tension is predicted (see this): ordinary BHs would correspond to flux tubes consisting of nucleons (nucleons correspond to the Mersenne prime M107 in TGD) and would be essentially giant nuclei.

    M89 hadron physics and the corresponding BHs are in principle also possible and have a string tension which is 512 times higher than that of the flux tubes associated with ordinary blackholes. Surprisingly, they could play a key part in solar physics.

  2. The very early cosmology of TGD (see this) corresponds to the region near the passive boundary of the CD, which would be cosmic string dominated. The upper limit for the temperature would be the Hagedorn temperature. Cosmic strings are 4-D objects, but their CP2 projection is extremely small so that they look like strings in M4.

    The geometry of the CD strongly suggests a scaled down analog of the big bang at the passive boundary and of a big crunch at the active boundary, the big crunch being the time reversal of the big bang induced by a BSFR. This picture should also apply to the evolution of BHs. Could one think that a gas of cosmic strings evolves to a BH or to several of them?

  3. In ZEO, the situation at the active future boundary of the CD after the BSFR should be similar to that at the passive boundary before it. This requires that the evaporation of the BH at the active boundary occurs as an analog of the big bang and gives rise to a gas of flux tubes as an analog of the cosmic string dominated cosmology. Symmetry would be achieved between the boundaries of the CD.
  4. In general relativity, the fate of all matter is to end up in blackholes, which possibly evaporate. What about the situation in TGD? Does all matter end up in a tangle formed by volume filling flux tubes, which evaporates to a gas of flux tubes in an analog of the Big Bang?

    The holography = holomorphy vision states that space-time surfaces can be constructed as roots of pairs (f1,f2) of analytic functions of 3 complex coordinates and one hypercomplex coordinate of H=M4× CP2. By holography the data would reside at certain 3-surfaces. The 3-surfaces at either end of the causal diamond (CD), the light-like partonic orbits, and lower-dimensional surfaces are good candidates in this respect.

    Could the matter at the passive boundary of CDs consist of monopole flux tubes which in TGD form the building bricks of blackhole-like objects (BHs) and could the BSFR leading to the change of the arrow of geometric time transform the BH at the active boundary of CD to a gas of monopole flux tubes? This would allow a rather detailed picture of what space-time surfaces look like.

Black hole evaporation as an analog of time reversed big bang can be a completely different thing in TGD than in general relativity.
  1. Let's first see whether a naive generalization of the GRT picture makes sense.
    1. The temperature of a black hole is T = ℏ/(8πGM) (in units with c = k_B = 1). For the ordinary ℏ it would be extremely low, so that the black hole radiation would consist of extremely low-energy quanta (a numerical sketch follows this list).
    2. If ℏ is replaced by ℏ_gr = GMm/β0, the situation changes completely: one obtains T = m/(8πβ0), so that the temperature associated with massive particles of mass m is essentially m. Each particle would have its own relativistic temperature. What about photons? They could have a very small mass in p-adic thermodynamics.
    3. If m=M, we get T = M/(8πβ0). This temperature seems completely insane. I have developed a quantum model of the blackhole as a quantum system, and in this situation the notion of temperature does not make sense.
  2. Since the counterpart of the blackhole would be a flux tube-like object, the Hagedorn temperature TH is a more natural guess for the counterpart of the evaporation temperature and also for the blackhole temperature. In fact, the ordinary M107 BH would correspond to a giant nucleus as a nuclear string. Also an M89 BH can be considered. A straightforward dimensionally motivated guess for the Hagedorn temperature, suggested by the p-adic length scale hypothesis, is TH = xℏ/L(k), where x is a numerical factor. For blackholes as k=107 objects this would give a temperature of order 224 MeV for x=1. Hadron physics gives experimental evidence for a Hagedorn temperature of about T=140 MeV, near the pion mass and near the scale determined by ΛQCD, which would be naturally related to the hadronic value of the cosmological constant Λ.
  3. One can of course ask whether BH evaporation in this sense is just the counterpart of the formation of a supernova. Could the genuine BH be an M89 blackhole formed as an M107 nuclear string transforms to an M89 nuclear string and then evaporates in a mini Big Bang? Could the volume filling property of the M107 flux tube make possible the touching of string portions, inducing the transition to M89 hadron physics just as this touching is proposed to do in the process corresponding to the formation of QCD plasma in TGD (see this)?
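
A sketch comparing the two temperature estimates of item 1; the choice β0 = 1 is arbitrary and made only for illustration:

```python
import math

HBAR, G, C, KB = 1.055e-34, 6.674e-11, 2.998e8, 1.381e-23
M_SUN = 1.989e30
M_PROTON_MEV = 938.3

# Ordinary Hawking temperature T = hbar c^3 / (8 pi G M k_B).
T_hawking = HBAR * C**3 / (8 * math.pi * G * M_SUN * KB)
print(f"Hawking T for a solar mass blackhole ~ {T_hawking:.1e} K")  # ~6e-8 K

# With hbar -> hbar_gr = GMm/beta_0, the M-dependence cancels and each
# particle of mass m gets a relativistic temperature T = m/(8 pi beta_0).
beta0 = 1.0
print(f"T per proton with hbar_gr ~ {M_PROTON_MEV/(8*math.pi*beta0):.0f} MeV")
```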
For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, December 28, 2024

Discussions about life and consciousness with Krista Lagus, Dwight Harris, Bill Ross, and Erik Jensen

There was a very entertaining and idea rich discussion with Krista Lagus (see this), Dwight Harris, Bill Ross, and Erik Jensen at the FB page of Krista Lagus (see this) and also at my own FB page (see this).

Since the discussion stimulated several interesting observations and helped to add details to the formulation of my own views, I thought that it might be a good idea to summarize some of the basic points in order to not forget the new observations.

A. The topics discussed with Krista Lagus

We had an interesting discussion with Krista Lagus and I had an opportunity to explain the basic ideas of TGD inspired quantum view of consciousness and biology.

A.1 Touching

The discussion started from the post of Krista Lagus in which she talked about "quantum touch" meaning that physical touch would involve quantum effects, a kind of fusion of two quantum systems leading to entanglement.

In the attempt to understand what could happen in touch, the basic question is what living objects are. One of the key characteristics of a living system is coherence in the scale of the organism. Standard quantum theory does not support quantum coherence above molecular scales. What makes the living system coherent so that it is more than a mere molecular soup?

The guess is that quantum coherence at some, yet unidentified, level induces the coherence at the level of biomatter. Natural candidates are classical electromagnetic and gravitational fields. TGD leads to a new view of space-time and to the notion of many-sheeted space-time with non-trivial topology in all scales. We see this non-trivial topology with bare eyes: the physical objects that we see around us correspond to space-time sheets. Also long ranged classical fields have space-time correlates as topological field quanta: magnetic and electric flux quanta. One can speak of electric and magnetic bodies and also of gravitational magnetic bodies, and they accompany biological bodies.

The second new TGD based element is that the field bodies carry phases of ordinary matter with non-standard and possibly very large values of the effective Planck constant. The quantum coherence of the field bodies induces the coherence of biomatter. Even the gravitational Planck constants of the Earth and the Sun and the electric Planck constants of systems varying from DNA to the Earth are involved and can be very large.

Also field bodies can touch each other, and this would give rise to remote mental interactions. Also biological bodies can touch each other, and this gives rise to a fusion of them as conscious entities, even at the scale of organisms. Basically this touch would occur at the level of field bodies at the scale of the organisms. The formation of a monopole flux tube pair between two systems, be they molecules, organisms or their field bodies, would be a universal mechanism of touch. U-shaped tentacles of the two systems would reconnect (by touching) and form a flux tube pair connecting the systems. This mechanism would be behind biocatalysis and the immune system.

A.2 Could statistical averaging have an analog at the level of conscious experience?

Krista Lagus criticized my proposal that the statistical averaging crucial for the applications of quantum theory could have an analog at the level of conscious experience.

The motivation for statistical averaging at the level of the sensory experiences of *subselves* determining mental images is that in its absence the sensory experience would fluctuate wildly, being determined by the outcome of a single SFR. The averaging would do at the level of conscious experience the same as it does in quantum computations. At the level of the self this would not occur.

A subself should correspond to an ensemble of nearly identical subselves or to a temporal sequence of states of a single subself (an ensemble as a time crystal made possible by the classical non-determinism behind "small" SFRs). I remember that there is evidence that this kind of temporal averaging takes place, and I have even written about it. A good example is provided by a sensory organ defining a subself consisting of sensory receptors as its subselves. Color summation would reflect this kind of conscious averaging.

There is however an objection against this argument. There are two kinds of SFRs: "big" and "small" ones. In "big" SFRs, as counterparts of ordinary quantum measurements, the state of the system can change radically, and in the TGD Universe also the arrow of geometric time changes. In "small" ones, whose sequence is the TGD counterpart of a sequence of repeated measurements of the same observables, the changes are small, and one might argue that in this case the argument requiring a conscious counterpart for the statistical averaging does not hold true.

A.3 The importance of geometry

What follows is my reaction to an earlier comment of Krista Lagus mentioning the importance of geometry. Certainly the environment is important. A highly symmetric geometry favours resonances, and if classical electromagnetic fields are involved, this can be important.

Suppose that the creation of a connection to the electric and gravitational magnetic bodies is a prerequisite for achieving a higher level of consciousness (the alpha rhythm of 10 Hz and the gamma rhythm of 40 Hz are examples). This would require the generation of particles with a large value of heff serving as a measure of algebraic complexity and of the scale of quantum coherence.

Suppose that the formation of OH-O- + dark proton qubits occurs, takes dark protons to the gravitational magnetic body, and involves the Pollack effect and its generalizations. This creates negatively charged regions as exclusion zones and strong electric fields so that also electric bodies become important (note that DNA has a constant density of negative charge). Large negative electric charges increase the value of the electric Planck constant. I had not noticed this, a trivial fact as such, earlier.

Quartz crystals are believed to induce altered states of consciousness, and I have a personal experience of this. One could think that dark protons are transferred from quartz crystals (quartz is everywhere) to gravitational magnetic bodies and generate a negative charge producing large scale electric quantum coherence. The geometry of holy buildings involves sharp tips (church towers). Also pyramids have sharp tips and a highly symmetric geometry favouring resonances. In the presence of charge, these tips involve strong electric fields characterized by a large electric Planck constant (something new) generating quantum coherence (see this).

There is experimental evidence of strange phenomena associated with these kinds of buildings: the latest case was in Finland this Christmas, a dynamical light pillar associated with a church. So-called UFOs, often assigned to lines of tectonic activity, are a second familiar example and could be seen as a plasmoid life form. Also crop circles involve light balls.

A.4 Bioharmony

In the TGD framework a model for music harmony led to the notion of icosahedral harmony. The icosahedron has 12 vertices and 20 triangular faces. Also the octahedron and the tetrahedron have triangular faces. A 12-note scale can be represented as a Hamiltonian cycle on the icosahedron, and the cycles can be classified by their symmetries: there are 3 types of them, and by combining 3 Hamiltonian cycles one obtains a union of three harmonies with 20+20+20 3-chords identified as the vertex triplets of the triangular faces. The surprising finding was that one can identify these 3-chords as 60 DNA codons and that the numbers of triangles related by the symmetries of the cycle correspond to the numbers of DNA codons coding for a given amino acid. The addition of a tetrahedron gives the tetrahedral cycle and altogether 64 DNA codons.
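
The combinatorics of the icosahedron is easy to verify by brute force. The sketch below builds the icosahedron from its golden-ratio coordinates, checks the counts 12/30/20 for vertices/edges/triangular faces, and finds one Hamiltonian cycle, i.e. one candidate 12-note scale; the classification of cycles by symmetry and the music-theoretic labeling are not modeled:

```python
from itertools import combinations, product

PHI = (1 + 5 ** 0.5) / 2
# 12 vertices: cyclic permutations of (0, +-1, +-phi).
verts = [v for s1, s2 in product((1, -1), repeat=2)
         for v in ((0, s1, s2 * PHI), (s1, s2 * PHI, 0), (s2 * PHI, 0, s1))]

def d2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# With this embedding, edges join vertices at squared distance exactly 4.
edges = {frozenset(p) for p in combinations(range(12), 2)
         if abs(d2(verts[p[0]], verts[p[1]]) - 4) < 1e-9}
faces = [t for t in combinations(range(12), 3)
         if all(frozenset(q) in edges for q in combinations(t, 2))]
print(len(verts), len(edges), len(faces))        # 12 30 20

def hamilton(path):
    """Backtracking search for a Hamiltonian cycle starting from vertex 0."""
    if len(path) == 12:
        return path if frozenset((path[-1], 0)) in edges else None
    for nxt in range(12):
        if nxt not in path and frozenset((path[-1], nxt)) in edges:
            found = hamilton(path + [nxt])
            if found:
                return found
    return None

print(hamilton([0]))   # a 12-vertex cycle: one candidate 12-note scale
```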

The interpretation would be that the chords correspond to triplets of dark photons providing a realization of the genetic code. Music represents and induces emotions, so the bioharmonies would represent moods at the molecular level. DNA would give a 6-qubit code and also represent emotions.

What did not look nice was that both the icosahedron and the tetrahedron were needed. Much later I realized that the hyperbolic 3-space H^3, central in TGD (the 3-surface of Minkowski space with constant light-cone proper time, or a 3-surface with constant cosmic time), allows a completely unique tessellation (lattice) consisting of tetrahedrons, octahedrons and icosahedrons: usually only one Platonic solid is possible. This tessellation appears in various scales, and the genetic code could be a universal manner to represent information, realized in many scales in biology and also outside biology.

To get an idea of the evolution of these ideas see for instance this, this, this, and this.

B. Dwight Harris and parity violation

There was a comparison of views about consciousness and biology with Dwight Harris. One of his key ideas is the importance of chiral selection, and this stimulated a concrete idea about how chiral selection could be forced by the holography=holomorphy principle of TGD, in the same way as the matter antimatter asymmetry would be forced by it (see this).

Dwight Harris assumes that parity violation is necessary for conscious memory. In TGD, this is not the case. The notion of conscious memory is an essential element of theories of quantum consciousness but standard QM does not allow it. In TGD the situation changes due to the replacement of the standard ontology of QM with what I call zero energy ontology.

Parity violation manifesting itself as chiral selection is however essential for biological life. Chiral selection is a mystery in the standard model, in which parity violation is extremely small above the intermediate gauge boson Compton length Lw of order 10^-17 meters. Weak bosons are however effectively massless below Lw, where parity violation is large.

In the TGD framework the situation changes since a large value of the effective Planck constant scales up the scale of parity violation. Dark weak bosons are massless below the scale (heff/h)×Lw, so that for large enough heff the parity violation can be large in scales of the order of the cell size and even longer. This could explain the chiral selection (a scaling sketch follows below).
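
A sketch of the required scaling, assuming the ordinary weak scale Lw ~ 10^-17 m quoted above:

```python
L_W = 1e-17   # ordinary weak boson Compton length in meters

for name, scale_m in (("cell, ~10 um", 1e-5), ("organism, ~1 m", 1.0)):
    print(f"{name}: need n = h_eff/h ~ {scale_m / L_W:.0e}")
# n ~ 1e12 for cellular scales, n ~ 1e17 for organism scales.
```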

A highly interesting question is whether the chiral selection caused by parity violation is essential for all forms of life: TGD indeed predicts that life is present in all scales and biological life is only one particular special case. Why would parity violation be unavoidable? Why would the states with different chiralities have different energies?

Or could the explanation be at a much deeper level and based on the holography=holomorphy principle? Could this principle allow only one of the two chiralities? Complex conjugation is analogous to reflection. Could different chiralities be like an analytic function f(z) and its conjugate f*(z*), so that for a given space-time region only one of these is possible? The glove cannot be simultaneously left- and right-handed, and the option with the minimal energy is selected.

I have already earlier proposed that holography=holomorphy principle forces matter antimatter asymmetry in the sense that the space-time surface decomposes to regions containing preferentially matter or antimatter (see for instance this).

C. Bill Ross and G-quadruplexes

The discussion with Bill Ross made me conscious of the notion of G-quadruplexes, which I have certainly encountered before but without any conscious response. I tried to get from the Wikipedia article (see this) some idea of what is involved.

There can be 4 separate short G-rich DNA strands; 2 DNA strands, which are looped so that there are 4 parallel strands; or a single strand which is doubly looped, giving again locally 4 parallel strands. A kind of tangle is formed. Planar tetrads, G-tetrads, are formed between the 4 strand portions, each contributing a G letter. For instance, the telomere regions contain these structures and they have biological functions. G-quadruplexes appear also elsewhere and seem to be important in the control of the transcription of DNA.

I also understood that the K+ cations stabilizing the system sit between the G-tetrads, which can be formed in G-rich regions of DNA such as telomeres. There are pairs of cations. 2 negative carbonyls would mean 2 C=O- groups. A Cooper pair of electrons comes to mind, as Bill Ross suggested, and these kinds of pairs are associated with aromatic rings. Bill Ross was especially interested in what TGD could say about the situation.

This is what I can say. Since phosphates containing O- are there, and a base pair of the DNA strands corresponds to 3+3=6 dark qubits per codon, one would have a doubling of the OH-O- + dark proton qubits, that is 6+6=12 qubits per codon quadruplet.

This could increase the quantum computational power possibly associated with the codon sequences. In quantum computation, N qubits correspond to a 2^N-dimensional state space, so that doubling the number of qubits squares the dimension of the state space, although the state function reductions halting the computation give rise to a state characterized by N bits, in a basis which depends on the observables measured.
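
The counting is elementary but worth making explicit, under the 6 → 12 qubit assumption made above:

```python
for n_qubits in (6, 12):
    print(f"{n_qubits} qubits -> state space dimension 2**{n_qubits} = {2**n_qubits}")
# 6 qubits -> 64 = the number of DNA codons; 12 qubits -> 4096 = 64**2,
# so doubling the number of qubits squares the dimension.
```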

There is an interesting analogy with the beta sheets of proteins. Each amino-acid would define a qubit associated with its COOH part. N parallel protein folds would increase the number of qubits by a factor of N. Could the beta sheets of proteins perform quantum computations?

D. Erik Jensen and the two meanings of purification

Erik Jensen talked about purification as a process of purifying the mind. It is amusing that purification and distillation are also terms used in quantum computation for the generation of pure states from mixed states, which are non-pure because they are entangled with the environment so that a density matrix must be used to describe them. I learned just yesterday that without the so-called magic states produced by distillation (see this), quantum computation could do only what classical computation can do. Meditation is usually seen as an attempt to reduce the attachment to the external world. Could the physical meaning of this be that it makes genuine quantum computational processes possible! Enlightenment as a transition from classical to quantum computation!;-)

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, December 26, 2024

Is dark energy needed at all?

I received links to two very interesting ScienceDaily articles from Mark McWilliams. The first article (see this) discusses dark energy. The second article (see this) discusses the Hubble tension. Both articles relate to the article "Cosmological foundations revisited with Pantheon+" published by Lane ZG et al in Monthly Notices of the Royal Astronomical Society (see this).

On the basis of their larger than expected redshifts, supernovae in distant galaxies appear to be farther away than they should be, and this inspires the notion of dark energy explaining an accelerated expansion. The article proposes a different explanation based on giving up the Friedmann cosmology and with it the notion of dark energy. This argument would also resolve the Hubble tension discussed in the second popular article. The argument goes as follows.

  1. Gravitation slows down the clocks. The clocks tick faster inside large voids than at their boundaries where galaxies reside. When light passes through a large void it ages more than if it passed through the same region with an average mass density.
  2. As the light from distant supernovae arrives it spends a longer time inside the voids than in passing through galaxies. This would mean that the redshift for supernovae in distant galaxies appears to be larger so that they are apparently farther away. Apparently the expansion accelerates.
  3. This is also argued to explain the Hubble tension meaning that the cosmic expansion rate characterized by the Hubble constant, for the objects in the early Universe is smaller than for the nearby objects.
This looks to me like a nice argument. The model forces us to give up the standard Robertson-Walker cosmology. Qualitatively the proposal conforms with the notion of many-sheeted space-time predicting a Russian doll cosmology defined by space-time sheets condensed on larger space-time sheets. For the general view of TGD inspired cosmology see this.

I have written two articles about what I call magnetic bubbles and used the term "mini big bang" (see this and this). Supernova explosion would be one example of a mini big bang. Also planets would be created in mini big bangs.

But what about the galactic dark matter? TGD predicts an analog of dark energy: the Kähler magnetic and volume energy of cosmic strings and of the monopole flux tubes, which thicken and generate ordinary matter in the process. This energy is identifiable as galactic dark matter. No dark matter halo is predicted, only the cosmic strings and the monopole flux tubes with a much smaller string tension appearing in all scales, even in biology.

It should be noticed that TGD also predicts phases of ordinary matter with a non-standard value of the Planck constant behaving like dark matter. The transformation of ordinary matter to these kinds of phases would explain the gradual disappearance of baryonic (and also leptonic) matter. These phases are absolutely essential in TGD inspired quantum biology and reside at the field bodies of organisms, which have a much larger size than the organism itself; their quantum coherence induces the coherence of biomatter.

See the article The blackhole that grew too fast, why Vega has no planets, and is dark energy needed at all? or the chapter About the recent TGD based view concerning cosmology and astrophysics.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, December 25, 2024

Oceans around quasars and the origin of life

One of the many astonishing recent findings in astrophysics is the discovery of 10 trillion oceans of water circling the supermassive blackhole of a quasar (see this). Despite being 300 trillion times less dense than the Earth's atmosphere, the water vapour is five times hotter and hundreds of times denser than the gas found in typical galaxies. The density ρ of the Earth's atmosphere is about 1/800 of that of water.

Consider first the average density of these oceans circling quasars.

  1. The number density n(H2O) of water molecules in condensed matter at room temperature is about n(H2O) = .5×10^29 molecules/m^3. The density of the atmosphere therefore corresponds to n_atm = n(H2O)/800 ≈ .6×10^26 water molecules/m^3. The average number density of H2O molecules in the oceans accompanying quasars, 300 trillion times smaller, is n = n_atm/(3×10^14) ≈ 2×10^11 molecules/m^3 (the arithmetic is redone in the sketch after this list). The edge of a cube containing a single water molecule would be L = 1/n^{1/3} ≈ 2×10^-4 m. This is the size scale of a neuron. A blob of water at the normal density having Planck mass also has a size of about 10^-4 m. Could this have some deeper meaning?
  2. Could the water molecules be dark or involve dark protons assignable to gravitational monopole flux tubes? At the surface of the Earth the monopole flux tubes give rise to the "endogenous" magnetic field explaining the findings of Blackman and others about quantal effects of ELF radiation on vertebrate brains. They would carry a magnetic field of .2 Gauss and would have a magnetic length (2ℏ/eB)^{1/2} = 5.6 μm serving as an estimate for the radius of the flux tube. The assumption that the local density of water equals the average density could of course be wrong: one could also consider the formation of water blobs.
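
A sketch redoing the density arithmetic of item 1 with the exponents written out (small factor-of-a-few differences from the rounded numbers quoted above are immaterial):

```python
n_water = 0.5e29             # H2O molecules per m^3 in liquid water
n_atm = n_water / 800        # atmosphere ~1/800 of water's density
n_quasar = n_atm / 3e14      # vapour ~300 trillion times thinner than air

L = n_quasar ** (-1 / 3)     # edge of a cube holding one molecule
print(f"n ~ {n_quasar:.1e} molecules/m^3, L ~ {L:.1e} m")
# L comes out in the 10^-4 m range, the size scale of a neuron.
```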
The average temperature of the evaporated water is about -17 degrees Celsius and not far from the physiological temperature of about 36 degrees Celsius. What could this mean?
  1. The diffuse ionized gas (DIG) constitutes the largest fraction of the total ionized interstellar matter in star-forming galaxies. It is still unclear whether the ionization is driven predominantly by the ionizing radiation of hot massive stars, as in H II regions (in which ions are protons), or whether additional sources of ionization have to be considered.
  2. TGD inspired new physics suggests a molecular ionization in which the ionization energies are much lower than in atomic ionization. The TGD based model (see this, this, this, this, and this) of the Pollack effect (see this) is central in the TGD based model of life. The Pollack effect occurs in the physiological temperature range and is induced by photons in the IR and visible range, which kick protons to the gravitational magnetic body of the system, where they become dark protons with a non-standard value of the effective Planck constant. TGD leads to generalizations of the Pollack effect (see this). The most recent view of life forms, relying on the notion of the OH-O- qubit (see this), predicts that any cold plasma can have life-like properties.
A more detailed formulation of this proposal is in terms of PAHs (see this). A list of the basic properties of PAHs can be found here. TGD suggests that the so-called space scent could be induced by the IR radiation from PAHs (see this).
  1. PAHs (polycyclic aromatic hydrocarbons) are assigned with the unidentified infrared bands (UIBs) and could induce the Pollack effect. The IR radiation could also be induced by the reverse of the Pollack effect.
  2. The properties of PAHs have led to the PAH world hypothesis stating that PAHs are the predecessors of the recent basic organic molecules. For instance, the mutual distances of the aromatic molecules appearing as basic building bricks are the same as the distances of DNA base pairs.
  3. The so-called Unidentified Infrared Bands (UIBs) of radiation around the IR energies E ∈ {.11, .20, .375} eV arriving from interstellar space are proposed to be produced by PAHs. The UIBs can be mimicked in the laboratory in reactions associated with photosynthesis producing PAHs.
  4. PAHs are detected in interstellar space. The James Webb telescope found that PAHs existed in the very early cosmology, 1 billion years before they should have been possible in the standard cosmology! Furthermore, PAHs exist in regions where there are no stars and no star formation (see this).
In TGD inspired quantum biology, the transitions OH → O- + dark proton at the gravitational monopole flux tubes, having an interpretation as flips of a quantum gravitational qubit, play a fundamental role (see this) and would also involve the Pollack effect. The difference between the bonding energy of OH and the binding energy of O- is .33 eV and is slightly above the thermal energy of .15 eV of a photon at the physiological temperature. Note that the energies of the UIBs are just in the range important for the transitions flipping the OH → O- qubits (see the wavelength check below).
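
A quick consistency check, using only λ[μm] = 1.2398/E[eV]: the quoted UIB energies correspond to the familiar PAH infrared bands, and the .33 eV OH → O- scale falls in the same window:

```python
HC_EV_UM = 1.2398   # h*c in eV*micrometers

for e_ev in (0.11, 0.20, 0.375, 0.33):
    print(f"E = {e_ev:5.3f} eV  ->  lambda = {HC_EV_UM / e_ev:4.1f} um")
# 0.110 eV -> 11.3 um, 0.200 eV -> 6.2 um, 0.375 eV -> 3.3 um: the
# classical PAH/UIB bands. The OH -> O- scale 0.33 eV sits at ~3.8 um.
```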

Could the IR radiation from PAHs at these energies induce these transitions, and could the reversals of the OH → O- qubit liberate energy heating the water so that its temperature is 5 times higher than that of the environment? Note that the density of the water is hundreds of times higher than that of the gas in typical galaxies and could make possible a thermal equilibrium of the water vapour. This leads one to ask whether the water around quasars could have life-like properties.

See the article Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life? or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Unlike in wave mechanics, chaos is possible in quantum TGD

Sabine Hossenfelder has an interesting YouTube video about chaos, titled "Scientists Uncover Hidden Pattern in Quantum Chaos" (see this).

Standard quantum mechanics cannot describe chaos. The TGD view allows its description. In the geometric degrees of freedom, quantum states correspond to quantum superpositions of space-time surfaces in H=M4×CP2 satisfying holography, which makes it possible to avoid the path integral spoiled by horrible divergences. The holography = holomorphy principle allows one to construct space-time surfaces as roots of a pair (f1,f2) of analytic functions of 3 complex coordinates and one hypercomplex coordinate. These surfaces are minimal surfaces and satisfy the field equations for any general coordinate invariant action constructible in terms of the induced gauge fields and metric.

The iterations (f1,f2) → (g1(f1,f2),g2(f1,f2)) give rise to a transition to chaos in the 2 complex-dimensional sense, and as a special case one obtains analogs of Julia and Mandelbrot fractals when one assumes that only g1 differs from the identity. Hence chaos emerges because point-like particles are replaced with 3-surfaces, in turn replaced by space-time surfaces obeying the holography = holomorphy principle. A drastically simplified numerical sketch of this iteration is given below.
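
As an illustration, consider a toy model in which only g1 differs from the identity and is taken to be the quadratic map g1(f1,f2) = f1^2 + c. At each point of a 2-D slice of the space of function pairs one then recovers the ordinary escape-time computation of the Mandelbrot set; the map and the grid are illustrative choices, not anything fixed by TGD.

# Toy version of the iteration (f1,f2) -> (g1(f1,f2), f2) with g1 = f1^2 + c.
# The escape time measures how fast the iteration diverges at a given c.

def escape_time(f1, c, max_iter=100, bound=2.0):
    # Iterate g1: f1 -> f1^2 + c until |f1| exceeds the bound.
    for n in range(max_iter):
        if abs(f1) > bound:
            return n
        f1 = f1 * f1 + c
    return max_iter

# Crude ASCII picture of the resulting Mandelbrot-type set.
for im in range(20, -21, -2):
    row = ""
    for re in range(-40, 21):
        c = complex(re / 20.0, im / 20.0)
        row += "#" if escape_time(0.0, c) == 100 else "."
    print(row)
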

See for instance the articles About some number theoretical aspects of TGD and About Langlands correspondence in the TGD framework.

Tuesday, December 24, 2024

Speed of thought is 10 Hz: what does this mean?

The popular article "Scientists Quantified The Speed of Human Thought, And It's a Big Surprise" (see this) tells about the article "The unbearable slowness of being: Why do we live at 10 bits/s?" by Zheng and Meister (see this). The speed of human thought would be 1 step per .1 seconds. This time interval corresponds to the 10 Hz alpha rhythm.

The conclusion is rather naive and reflects a failure to realize that consciousness has a hierarchical structure. This failure is one of the deep problems of neuroscience and also of quantum theories of consciousness. Although the physical world has a hierarchical structure, and although the structure of consciousness should reflect this, it seems to be impossible to realize that it indeed does so!

Only a very small part of this hierarchical structure is conscious to us. Conscious entities, selves, have subselves (associated with physical subsystems), which they experience as mental images. Also the subselves have subselves, which are sub-subselves of us. The hierarchy continues downwards and upwards, and the upward direction predicts collective levels of consciousness.

We do not experience these sub-subselves as separate entities but only their statistical average. This makes possible the statistical determinism of mental images so that they do not fluctuate randomly. This conforms with the fact that there is a large number of sensory receptors. For instance, this statistical averaging explains the summation of visual colors.

This applies also to cognition and to quantum computation-like processes in which the outcomes are sub-subselves giving rise to a cognitive mental image as a conscious average. This averaging applies also in the time direction since zero energy ontology predicts a slight failure of classical determinism. Averaging, the basic operation giving rise to predictions in quantum theoretic computations, would thus have a counterpart at the level of conscious experience.

See the article Some objections against TGD inspired view of qualia and the chapter General Theory of Qualia.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, December 21, 2024

How could p-adicization and hyper-finite factors relate?

Factors of type I are von Neumann algebras acting on an ordinary Hilbert space with a discrete countable basis. Also hyperfinite factors of type II1 (HFFs in the sequel) play a central role in quantum TGD (see this and this). HFFs replace the problematic factors of type III encountered in algebraic quantum field theories. Note that von Neumann himself regarded factors of type III as pathological.

HFFs have rather unintuitive properties, which I have summarized earlier (see this and this).

  1. The Hilbert spaces associated with HFFs do not have a discrete basis, and one could say that their dimension corresponds to the cardinality of reals. However, the dimension of the Hilbert space identified as the trace Tr(Id) of the unit operator is finite and can be taken equal to 1.
  2. HFFs have subfactors, and the inclusions of sub-HFFs as analogs of tensor factors give rise to subfactors whose relative dimension is smaller than 1, defining a fractal dimension. For Jones inclusions these dimensions are known and form a discrete set of algebraic numbers. In the TGD framework, the included tensor factor allows an interpretation in terms of a finite measurement resolution. The inclusions give rise to quantum groups and their representations as analogs of coset spaces. The allowed values are illustrated numerically below.
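
For concreteness, Jones' classification states that the index of an inclusion below 4 belongs to the discrete set 4cos^2(π/n), n ≥ 3; these are the algebraic numbers referred to above. The first values are easy to list:

# Allowed Jones indices below 4: 4*cos(pi/n)^2 for n = 3, 4, 5, ...
from math import cos, pi

for n in range(3, 11):
    print(n, round(4 * cos(pi / n) ** 2, 6))
# n=3 gives 1, n=4 gives 2, n=5 gives the golden ratio value (3+sqrt(5))/2,
# and the values accumulate at 4 as n grows.
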
p-Adic numbers represent a second key notion of TGD.
  1. p-Adic number fields emerged in p-adic mass calculations (see this, this, this and this). Their properties led to the proposal that they serve as correlates of cognition. All p-adic number fields are possible and can be combined to form an adele: the outcome is what could be called adelic physics (see this and this).
  2. Also the extensions of p-adic number fields induced by the extensions of rationals are involved and define a hierarchy of extensions of adeles. The ramified primes for a given polynomial define preferred p-adic primes. For a given space-time region the extension is assignable to the coefficients of a pair of polynomials or even to the Taylor coefficients of two analytic functions defining the space-time surface as their common root.
  3. The inclusion hierarchies for the extensions of rationals, accompanied by the inclusion hierarchies of Galois groups for extensions of extensions of ..., are analogous to the inclusion hierarchies of HFFs.
Before discussing how p-adic and real physics relate, one must summarize the recent formulation of TGD based on the holography = holomorphy correspondence.
  1. The recent formulation of TGD allows one to identify space-time surfaces in the imbedding space H=M4× CP2 as common roots of a pair (f1,f2) of generalized holomorphic functions defined in H. If the Taylor coefficients of fi are in an extension of rationals, the conditions defining the space-time surfaces make sense also in an extension of p-adic number fields induced by this extension. As a special case this applies when the functions fi are polynomials. For completely general Taylor coefficients of the generalized holomorphic functions fi, p-adicization is not possible. The Taylor series of fi must also converge in the p-adic sense. For instance, this is the case for exp(x) only if the p-adic norm of x is smaller than 1 (see the numerical sketch after this list).
  2. The notion of the Galois group can be generalized when the roots are not anymore points but 4-D surfaces (see this). However, the notion of a ramified prime becomes problematic. It makes sense if one allows 4 polynomials (P1,P2,P3,P4) instead of two. The roots of the 3 polynomials (P1,P2,P3) give rise to 2-surfaces as string world sheets, and the simultaneous roots of (P1,P2,P3,P4) can be regarded as roots of the fourth polynomial and are identified as physical singularities, identifiable as vertices (see this).

    Also the maps defined by analytic functions g in the space of function pairs (f1,f2) generate new space-time surfaces. One can assign a Galois group and ramified primes to g if it is a polynomial P with coefficients in an extension of rationals. The composition of polynomials Pi defines inclusion hierarchies with increasing algebraic complexity, and as a special case one obtains iterations, an approach to chaos, and 4-D analogs of Mandelbrot fractals.
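
As a numerical illustration of the p-adic convergence condition of item 1, the valuation of the terms x^n/n! of exp(x) can be computed from Legendre's formula v_p(n!) = (n - s_p(n))/(p-1), where s_p(n) is the base-p digit sum of n. A minimal sketch:

# Valuation of the terms x^n/n! of exp(x), using Legendre's formula
# v_p(n!) = (n - s_p(n))/(p - 1), with s_p(n) the base-p digit sum of n.

def digit_sum(n, p):
    s = 0
    while n > 0:
        s += n % p
        n //= p
    return s

def term_valuation(n, p, v_x):
    # p-adic valuation of x^n/n! when v_p(x) = v_x
    return n * v_x - (n - digit_sum(n, p)) // (p - 1)

p = 5
for v_x in (0, 1):
    print(v_x, [term_valuation(n, p, v_x) for n in (1, 5, 25, 125)])
# v_x = 1: valuations 1, 4, 19, 94 grow without bound, so the series
# converges p-adically; v_x = 0: valuations 0, -1, -6, -31 do not.
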

Consider now the relationship between real and p-adic physics.
  1. The connection between real and p-adic physics is defined by the common points of reals and p-adic numbers, defining a discretization at the space-time level and therefore a finite measurement resolution. This correspondence generalizes to the level of space-time surfaces and defines a highly unique discretization depending only on the pinary cutoff for the algebraic integers involved. The discretization is not completely unique since the choice of the generalized complex coordinates of H is not completely unique, although the symmetries of H make it highly unique.
  2. This picture leads to a vision in which reals and the various p-adic number fields and their extensions induced by extensions of rationals form a gigantic book whose pages meet at the back of the book at the common points belonging to rationals and their extensions.
What it means to be a point "common" to reals and p-adics is not quite clear. These common numbers belong to an algebraic extension of rationals inducing that of the p-adic numbers. Since a discretization is in question, one can require that these common numbers have a finite pinary expansion in powers of p. For points with coordinates in an algebraic extension of rationals and having p-adic norm equal to 1, a direct identification is possible. In the general case, one can consider two options for the correspondence between the p-adic discretization and its real counterpart.
  1. The real number and the number in the extension have the same finite pinary expansion. This correspondence is however highly irregular and not continuous at the limit when an infinite number of powers of p is allowed.
  2. The real number and its p-adic counterpart are related by the canonical identification I. The coefficients of the units of the algebraic extension are finite real integers and are mapped to p-adic numbers by x_R = I(x_p) = ∑_n x_n p^{-n} → x_p = ∑_n x_n p^n. The inverse of I has the same form. This option is favored by the continuity of I as a map from p-adics to reals at the limit of an infinite number of pinary digits.

    Canonical identification has several variants. In particular, rationals m/n, such that m and n have no common divisors and have finite pinary expansions, can be mapped to their p-adic counterparts and vice versa by using the map m/n → I(m)/I(n). This map generalizes to algebraic extensions of rationals. A minimal implementation of I is sketched below.
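
The following Python sketch implements the canonical identification for finite integers together with the variant I(m/n) = I(m)/I(n); Fraction keeps the rational arithmetic exact, and the digit convention (least significant pinary digit first) is an implementation choice.

# Canonical identification I: sum x_n p^n -> sum x_n p^(-n) for finite
# integers, and the variant I(m/n) = I(m)/I(n) for rationals.
from fractions import Fraction

def I(m, p):
    # Map a non-negative integer to the sum of its base-p digits times p^(-n).
    total, n = Fraction(0), 0
    while m > 0:
        total += Fraction(m % p, p**n)
        m //= p
        n += 1
    return total

def I_rational(m, n, p):
    # The variant for rationals m/n with no common divisors.
    return I(m, p) / I(n, p)

p = 3
print(I(10, p))             # 10 = 1 + 0*3 + 1*9 -> 1 + 1/9 = 10/9
print(I_rational(10, 7, p)) # (10/9)/(5/3) = 2/3, since I(7) = 1 + 2/3 = 5/3
# On finite integers I is a bijection: it simply reflects the pinary digits.
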

The detailed properties of the canonical identification deserve a summary.
  1. For finite integers I is a bijection. At the limit when an infinite number of pinary digits is allowed, I is a surjection from p-adics to reals but not a bijection. The reason is that the pinary expansion of a real number is not unique. In analogy with 1 = .999... for decimal numbers, the pinary expansion [(p-1)/p] ∑_{k≥0} p^{-k} equals the real unit 1. The inverse images of these two expansions under canonical identification correspond to x_p = 1 and y_p = (p-1)p ∑_{k≥0} p^k; y_p has p-adic norm 1/p and an infinite pinary expansion. More generally, the real numbers x = ∑_{n<N} x_n p^{-n} + x_N p^{-N} and y = ∑_{n<N} x_n p^{-n} + (x_N-1) p^{-N} + p^{-N-1}(p-1) ∑_{k≥0} p^{-k} coincide, so that at the limit of an infinite number of pinary digits the inverse of I is two-valued for finite real integers if one allows both representations. For rationals formed from finite integers there are 4 inverse images of I(m/n) = I(m)/I(n).
  2. One can consider 3 kinds of p-adic numbers. p-Adic integers correspond to finite ordinary integers with a finite pinary expansion. p-Adic rationals are ratios of finite integers and have a periodic pinary expansion. p-Adic transcendentals correspond to reals with a non-periodic pinary expansion. For real transcendentals with an infinite non-periodic pinary expansion the p-adic valued inverse image is unique since x_R does not have a largest pinary digit.
  3. Negative reals are problematic from the point of view of canonical identification. The reason is that p-adic numbers are not well-ordered, so that the notion of a negative p-adic number is not well-defined unless one restricts the consideration to finite p-adic integers n and their negatives -n = (p-1)(1-p)^{-1} n = (p-1)(1+p+p^2+...) n. As far as discretizations are considered, this restriction is very natural. The images of n and -n under I would correspond to the same real integer but represented differently. This does not make sense.

    Should one modify I so that the p-adic -n is mapped to the real -n? This would work also for the rationals. The p-adic counterpart of a real with an infinite and non-periodic pinary expansion and its negative would then correspond to the same p-adic number. An analog of a compactification of the real line to a p-adic circle would take place. (A numerical check of the series representation of -n is given below.)
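
The series representation of -n is easy to verify numerically: the truncation (p-1)(1+p+...+p^K) n equals -n modulo p^{K+1}, so the series converges to -n in the p-adic sense.

# Check that (p-1)(1 + p + ... + p^K) * n agrees with -n modulo p^(K+1).
p, n = 5, 7
for K in (0, 1, 2, 5, 10):
    partial = (p - 1) * sum(p**k for k in range(K + 1)) * n
    assert (partial + n) % p**(K + 1) == 0
print("series converges to -n p-adically")
# Indeed partial = (p^(K+1) - 1)*n, so partial + n = n*p^(K+1) = 0 mod p^(K+1).
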

Both hyperfinite factors and p-adicization allow a description of a finite measurement resolution. Therefore a natural question is whether the strange properties of hyperfinite factors, in particular the fact that the dimension D of the Hilbert space equals the cardinality of reals on one hand and a finite number (D=1 in the convention used) on the other hand, could have a counterpart in the p-adic sector. What is the cardinality of p-adic numbers defined in terms of canonical identification? Could it be finite?
  1. Consider finite real integers x = ∑_{n=0}^{N-1} x_n p^n with x = 0 excluded. Each pinary digit has p values, so the total cardinality of numbers of this kind is p^N - 1. These real integers correspond to two kinds of p-adic integers in canonical identification, so that the total number is 2p^N - 2. One must also include zero, so that the total cardinality is M = 2p^N - 1. Identify M as a p-adic integer: its p-adic norm equals 1.
  2. As a p-adic number, M corresponds to M_p = 2p^N + (p-1)(1+p+p^2+...) = p^N + p^{N+1} + (p-1)(1+p+... - p^N). One can also write M_p = p^N + p^{N+2} + (p-1)(1+p+... - p^N - p^{N+1}). One can continue in this way and obtains at the limit N → ∞ the representation M_p = p^N(1+p+...) + (p-1)(1+p+...+p^{N-1}). The first term has a vanishing p-adic norm. The canonical image of this number equals p at the limit N → ∞. The cardinality of p-adic numbers in this sense would be that of the corresponding finite field! Does this have some deep meaning or is it only number theoretic mysticism?
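
The limiting value p of the canonical image can also be checked directly: the base-p digits of M = 2p^N - 1 are N copies of p-1 followed by a single 1, so that I(M) = (p-1)∑_{k<N} p^{-k} + p^{-N} = p - p^{1-N} + p^{-N}, which approaches p as N grows. A quick numerical confirmation:

# Canonical image of M = 2p^N - 1 approaches p as N grows.
from fractions import Fraction

def I(m, p):
    # canonical identification of a finite integer: reflect the base-p digits
    total, n = Fraction(0), 0
    while m > 0:
        total += Fraction(m % p, p**n)
        m //= p
        n += 1
    return total

p = 5
for N in (1, 2, 4, 8):
    print(N, float(I(2 * p**N - 1, p)))   # 4.2, 4.84, 4.9936, ... -> 5
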
For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.