https://matpitka.blogspot.com/2024/

Monday, December 16, 2024

Has Google managed to reach the critical value for the error rate of a single qubit?

Google claims to have achieved something marvellous with its quantum computer called Willow. This claim is however combined with a totally outlandish claim about parallel universes created in quantum computers, which has generated a lot of cognitive dissonance among professionals during the last week. They have not yet forgotten the earlier, equally absurd claim about the creation of wormholes in quantum computers.

The Quanta Magazine article "Quantum Computers Cross Critical Error Threshold" (see this) tells what has been achieved but does not resolve the cognitive dissonance. I already commented on the claims of Google in a blog posting (see this).

Now I encountered an excellent article "Ask Ethan: Does quantum computation occur in parallel universes?" (see this), which thoroughly analyzes the basics of quantum computation and what Google has achieved. I recommend it to anyone seriously interested in quantum computation.

The really fantastic achievement is the ability to reduce the error rate of the physical qubits, forming the grid that defines the logical qubit, below the critical value of 0.1 percent, which guarantees that for larger grids of physical qubits the error rate of the logical qubit decreases exponentially. This achievement is more than enough! But why do they claim that this implies parallel universes? This claim is totally absurd and leads me to ask whether the claimed achievement is really true. How can one trust professionals who do not seem to understand the basic notions of quantum mechanics?

Taking the basic claim seriously, one can of course ask whether such a low error rate is actually theoretically possible in standard quantum mechanics or whether it requires new physics. These qubits are rather stable, but are they so stable in standard QM?

I have been talking about this kind of new physics for two decades now. This new physics would play a key role in quantum biology and could be important also in condensed matter physics and even in chemistry. It is implied by the predicted hierarchy of effective Planck constants heff labelling phases of ordinary matter with quantum scales scaled up by heff/h. This makes long-scale temporal and spatial quantum coherence possible, can reduce the error rate, and could provide a solution to the basic problems listed in the article. The latest proposal along these lines is that classical computers and quantum computers could be fused to what might be regarded as conscious computers sharing several life-like features with biomatter (see this). The situation is now different since the temperature is very low and the chip is superconducting.

One learns from the video describing the Willow chip (see this) that the lifetime of a logical qubit is T ≈ 100 μs. This time is surprisingly long: can one really understand this in ordinary quantum mechanics? One can try this in the TGD framework.

  1. The energy of a qubit flip must be as small as possible but above the thermal energy. Energy economics suggests that the Josephson energy E = ZeV of electrons in a Josephson junction is above the thermal energy at the temperatures considered but not much larger. For superconducting quantum computers (see this) the temperature is about 10^-2 K, which corresponds to an energy scale of μeV.
  2. The formula f = ZeV/heff gives a rough estimate for the quantum coherence time of a superconducting qubit as T = heff/ZeV. For heff = h this gives T ≈ 3 ns for the quantum coherence time of a single qubit. The value heff ≈ 3.3×10^4 would be needed to increase T from its naive estimate of 3 ns to the required 100 μs (a numerical check is sketched after this list).

    I have proposed that these relatively small values of heff (as compared to the values of the gravitational Planck constant) can appear in electrically charged systems. The general criterion applying to all interactions is that the value of heff is such that the perturbation series in powers of, say, Z1Z2e^2/ℏeff for the electromagnetic interactions of charges Z1 and Z2 converges.

    In the recent case, the value of heff could correspond to the electric counterpart of the gravitational Planck constant, having the form ℏem = Z1Z2e^2/β0, where β0 = v0/c is a velocity parameter (see this). Z1 could correspond to a large charge and Z2 to a small charge, say that of a Cooper pair. For instance, DNA, having a constant charge density per unit length, would have a rather large value of ℏem. The presence of an electronic Cooper pair condensate could give rise to the needed large electric charge making possible the needed value of ℏeff = ℏem ≈ 3.3×10^4 ℏ.
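
The estimate above is easy to check numerically. The following sketch assumes a Josephson energy of about 1.4 μeV, a value chosen only to reproduce the μeV scale and the 3 ns estimate quoted above; it is not fixed by the text.

    # Back-of-the-envelope check of the coherence time estimate T = heff/ZeV
    # for a superconducting qubit. The Josephson energy is an assumed value
    # of order 1 μeV, just above the thermal energy at the chip temperature.
    h_eV_s = 4.135667e-15      # Planck constant in eV*s
    k_B = 8.617333e-5          # Boltzmann constant in eV/K

    T_chip = 1e-2              # chip temperature, ~10^-2 K
    print(f"thermal energy kT = {k_B * T_chip * 1e6:.2f} micro-eV")   # ~0.86 micro-eV

    E_J = 1.4e-6               # assumed Josephson energy ZeV, in eV
    T_coh = h_eV_s / E_J       # coherence time estimate for heff = h
    print(f"T(heff = h) = {T_coh * 1e9:.1f} ns")                      # ~3 ns

    T_target = 100e-6          # reported logical qubit lifetime, 100 μs
    print(f"required heff/h = {T_target / T_coh:.2e}")                # ~3.4e4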

See the article Has Google managed to reach the critical value for the error rate of a single qubit? or the chapter Are Conscious Computers Possible in TGD Universe?.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Tuesday, December 10, 2024

Has Google discovered something new or is the quantum computation bubble collapsing?

On Facebook there was a link to a popular article "Google says its new quantum chip indicates that multiple universes exist", see this. In the article the notions of multiverse and parallel universe are confused. Google should use its money to raise the standards of hyping. Quantum computation assumes a superposition of quantum states. One can of course call the superposed quantum states of the computer parallel universes, but why?

Just yesterday, I saw a less optimistic video, The quantum computation collapse has begun, by Sabine Hossenfelder.

What should one conclude from this? Has the collapse of the bubble started? Or has Google discovered something unexpected? What could this new something be?

Error correction is the basic problem of quantum computation. It can be achieved by adding qubits serving as check qubits, but one also needs check qubits for these check qubits, so that one ends up with a never-ending sequence of error corrections. One should however leave some qubits also for the computation!

Could something be missing from quantum physics itself? I have been explaining for more than two decades what this something might be.

  1. The number theoretic vision of TGD predicts an entire hierarchy of phases of ordinary matter characterized by an effective Planck constant, which can have arbitrarily large values. In particular, the quantum coherence associated with classical electromagnetic and gravitational fields makes quantum coherence possible even in astrophysical scales, and this solves the problems due to the fragility of quantum coherence.
  2. Another prediction is what I call zero energy ontology. ZEO predicts the possibility of intelligent and conscious computers as a fusion of classical and quantum computers. Trial and error would be a universal quantum mechanism of learning and problem solving. This would force evolution as emergence of phases with increasing value of the effective Planck constant.

    The phenomenon of life would be much more general than thought: quartz crystals, plasmas, and biomatter, quite generally any cold plasma, would have the same basic mechanism giving rise to qubits, which under certain circumstances can make the system a living and conscious entity.

See for instance the article Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life?.

Addition: Immediately after writing this post I encountered a Quanta Magazine article "Quantum Computers Cross Critical Error Threshold" telling what has been achieved at Google. On the basis of the earlier scandal raised by the claims related to wormholes constructed in quantum computers, it is better to take a cautious attitude. The message is as follows. A grid of physical qubits accompanied by measurement qubits codes for a single logical qubit. If the error rate for a physical qubit is below a critical value of about 0.1 percent, the error rate of the logical qubit decreases exponentially with the grid size. Google reports that this critical error threshold has been reached. A rough illustration of this scaling is sketched below.
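
For orientation, the following sketch illustrates the standard surface code scaling, in which the logical error rate behaves roughly as A(p/p_th)^((d+1)/2) for code distance d. The numerical values of A, p and p_th are invented for the illustration and are not Google's measured numbers.

    # Illustrative surface code scaling: below the threshold p_th the logical
    # error rate shrinks by a roughly constant factor each time the code
    # distance d grows by 2; above the threshold it grows instead.
    def logical_error(p, p_th=1e-3, A=0.1, d=3):
        """Rough textbook estimate of the logical error rate of a distance-d code."""
        return A * (p / p_th) ** ((d + 1) / 2)

    for d in (3, 5, 7, 9):
        print(d, f"{logical_error(p=5e-4, d=d):.2e}")
    # With p half the threshold value, each step d -> d+2 halves the logical error.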

If the reduction of the error rate below this limit is impossible in standard quantum physics, one can ask whether new quantum physics is involved. TGD predicts a hierarchy of effective Planck constants heff labelling phases of ordinary matter behaving like dark matter, and a large enough heff might make it possible to reduce the error rate below the critical value of 0.1 percent.

See the article Has Google managed to reach the critical value for the error rate of a single qubit? or the chapter Are Conscious Computers Possible in TGD Universe?.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, December 09, 2024

The mystery of life's origin deepens

Sabine Hossenfelder told about a new study, which deepens the mystery of life's origin (see this). The key notion is LUCA, the last universal common ancestor, whose genome should be common to all life forms, which in the most general case include archaea, prokaryotes (bacteria), and eukaryotes (plants, fungi and animals).

The newest study gives a considerably larger number than the previous estimates.

  1. LUCA would have had 2,657 genes and about 2.7 million base pairs, to be compared with about 3 billion base pairs of humans. LUCA would have lived about 4.2 billion years ago.
  2. The proteins coded by the genes of LUCA suggest that hydrogen was important in the metabolism of LUCA. Presumably LUCA lived near volcanoes. LUCA also had a rather complex metabolic circuitry and the genome suggests that it was part of an ecosystem. LUCA would have been about 10 μm in size, which is also the size of a cell nucleus, and it had a genome but no nucleus.
  3. An interesting side observation is that 2,657 is a prime and forms a twin prime pair together with 2,659 (a quick check is sketched after this list). Maybe number theory is deeply involved with the genome.
  4. The earlier estimate for the gene number of LUCA by Bill Martin's team (see this) left only 355 genes from the original 11,000 candidates, and the team argues that these 355 definitely belonged to LUCA and can tell us something about how LUCA lived.
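
The primality claim in point 3 is easy to verify; a minimal check is given below.

    # Check that 2657 is prime and that (2657, 2659) is a twin prime pair.
    def is_prime(n: int) -> bool:
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    print(is_prime(2657), is_prime(2659))   # True True, so 2657 and 2659 are twin primes
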
The problem is that there are two widely different candidates for LUCA, and the new candidate seems to be too complex if one assumes a single evolutionary tree.

The mystery of Cambrian Explosion

The Cambrian Explosion represents a long-standing mystery of evolutionary biology (see the book "Wonderful Life" by Stephen Gould). The basic mystery is that highly evolved multicellular life forms emerged suddenly in the Cambrian Explosion about 0.5 billion years ago. There are much older fossils of unicellular life forms, archaea and prokaryotes, which would have lived at the surface of the Earth as separate evolutionary lineages.

The TGD based solution of the mystery of the Cambrian Explosion does not involve ETs bringing multicellular life to the Earth (see this).

  1. In the TGD Universe, quantum gravitation is possible in arbitrarily long scales and cosmic expansion is replaced by a sequence of quantum phase transitions occurring in astrophysical scales as very rapid local expansions between which there is no expansion.
  2. The life on Earth could have evolved in two ways and as three separate evolutionary trees. Multicellular life forms possible for sexually reproducing eukaryotes would have evolved in the underground oceans, where they were shielded from meteor bombardments and cosmic rays. There are indications that underground oceans and underground life are present on Mars and possibly also some other places in the solar system.
  3. In the Cambrian Explosion, identified as a short-lasting rapid local cosmic expansion, the radius of the Earth would have increased by a factor of two. This hypothesis was originally inspired by the observation of Adams (see this) that the continents seem to fit nicely together if the radius of the Earth is taken to be 1/2 of its recent value. This hypothesis would generalize the continental drift theory of Wegener. Rather highly developed photosynthesizing multicellular life forms would have burst to the surface of the Earth from the underground oceans, and the oceans were formed (see this, this, this, and this).
The TGD proposal for the solution of the LUCA mystery relies on the solution of the mystery of the Cambrian explosion. Bacteria and archaea would have evolved at the surface of the Earth, and eukaryotes, having a cell nucleus and reproducing sexually, in the underground oceans. Bacteria and archaea would have evolved from a counterpart of LUCA having a much smaller genome, and eukaryotes would have evolved from an archaeon of maximal size, which became the nucleus of the first eukaryote, LUCA.

See the article Some mysteries of the biological evolution from the TGD point of view and the chapter Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life?.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Sunday, December 08, 2024

The perplexing findings about the asteroid Ryugu from the TGD perspective

Anton Petrov told in a Youtube video "Shocking Discovery of Earth Bacteria Inside Ryugu Asteroid Samples + Other Updates" (see this) of highly interesting recent discoveries, which might provide very strong direct evidence for the TGD view of quantum biology.

The motivation for studying asteroids is that they could have been very important in planetary formation. The Panspermia hypothesis suggests that they could have also brought life to the Earth. These findings provide a test for the TGD view of life which now suggests a very general basic mechanism for the emergence of life (see this).

Some basic facts about Ryugu are in order. Consider first the origin of Ryugu.

  1. The surface of Ryugu is very young and has an age of 8.9 ± 2.5 million years. The composition of Ryugu shows that its material has been at a rather high temperature of about 1000 °C and presumably near the Sun. Eventually Ryugu would have left the inner solar system, and its composition suggests that it has been very near the Kuiper belt at a distance of 30-55 AU.
  2. The asteroid that arrived near the Earth from outer space must have been in complete darkness for a long period. The object giving rise to Ryugu could have originated far from Jupiter, possibly near the Kuiper belt. Some compounds in Ryugu can only form near the Kuiper belt. A larger object of radius about 100 km could have suffered a collision near the Earth and produced Ryugu with a size of 10 km.
  3. Currently Ryugu orbits the Sun at a distance of 0.96-1.41 AU once every 16 months (474 days; semi-major axis 1.19 AU). Note that the distance of Mars from the Sun is about 1.5 AU. Its orbit has an eccentricity of 0.19 and an inclination of 6 degrees with respect to the ecliptic.
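
The quoted orbital elements are mutually consistent; a quick Kepler's third law check (using only the quoted semi-major axis and eccentricity) is sketched below.

    # Kepler's third law check for Ryugu: the period in years is a**1.5 with a in AU.
    a_AU = 1.19                       # quoted semi-major axis
    e = 0.19                          # quoted eccentricity

    period_days = a_AU ** 1.5 * 365.25
    print(f"orbital period = {period_days:.0f} days")          # ~474 days, i.e. about 16 months

    print(f"perihelion = {a_AU * (1 - e):.2f} AU, aphelion = {a_AU * (1 + e):.2f} AU")
    # ~0.96 AU and ~1.42 AU, close to the quoted 0.96-1.41 AU range
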
The circumstances at Ryugu are favorable for life.
  1. The highest temperature on the Ryugu asteroid reaches 100 degrees C, while the coldest regions sit at about room temperature. Temperatures also change depending on the solar distance of the asteroid, lowering as Ryugu moves further away from the Sun. This would mean that the circumstances at Ryugu become favourable for life as it passes Earth. The lowering of the temperature at a large distance would not be fatal.
  2. Hydration is essential for life. The required dehydration reaction temperature decreases with increasing substitution of the hydroxy-bearing carbon: primary alcohols 170-180 degrees C; secondary alcohols 100-140 degrees C; tertiary alcohols 25-80 degrees C. Primary/secondary/tertiary refers to the position of the -OH substitution at the carbon atom.
  3. Ryugu contains liquid water and also carbonated water. Coral-like inorganic crystals are present. The sample contained carbon-rich molecules, amino acids, components of RNA, hydrated compounds, and ammonium!
  4. It has also been found that Ryugu contains phosphorus rich samples. Phosphorus plays a central role in metabolism and in the "dark" realization of the genetic code in TGD. The abstract of the article (see this) summarizes the findings.

    Parent bodies of C-type asteroids may have brought key volatile and organic-rich compounds to the terrestrial planets in the early stages of the Solar System. At the end of 2020, the JAXA Hayabusa2 mission successfully returned samples from Ryugu, providing access to a primitive matter that has not suffered terrestrial alteration. Here we report the discovery of a peculiar class of grains, up to a few hundreds of micrometres in size, that have a hydrated ammonium-magnesium-phosphorus (HAMP)-rich composition. Their specific chemical and physical properties point towards an origin in the outer Solar System, beyond most snow lines, and their preservation along Ryugu history. These phosphorus-rich grains, embedded within an organic-rich phyllosilicate matrix, may have played a major role when immersed in primitive terrestrial water reservoirs. In particular, in contrast to poorly soluble calcium-rich phosphates, HAMP grains favour the release of phosphorus-rich and nitrogen-rich ionic species, to enter chemical reactions. HAMP grains may have thus critically contributed to the reaction pathways of organic matter towards a biochemical evolution.

The panspermia hypothesis states that Ryugu and similar objects could have served as a source of life on Earth.
  1. The overpopulation problem is a theoretical objection against the panspermia hypothesis: no new forms of life are possible since no niches are left untouched.
  2. There is also a second objection against the panspermia hypothesis as an explanation of these findings about Ryugu. It has been claimed that the Ryugu sample was contaminated by terrestrial microorganisms (see this). Nitrogen dioxide NO2 is used in sterilization meant to remove, kill, or deactivate all forms of life present in fluid or on a specific surface. Life forms of Earth should not be able to colonize samples under extremely sterile conditions. If contamination occurred, its mechanism is unknown.

    The Ryugu samples contained terrestrial microbes and they evolved with time. Their DNA has not yet been identified. They resemble bacilli, which are found everywhere on the Earth.

  3. Microfossils have been found in meteorites (see this). They have been found also in Ryugu, but only at its surface, and were reported to be new fossils. The reason could be that microbes have survived only at the surface of Ryugu, where they receive the solar light necessary for photosynthesis. The contamination hypothesis states that terrestrial organisms might by some unknown mechanism have contaminated the surface of Ryugu and produced the microfossils.
Neither the panspermia hypothesis nor contamination looks plausible in the TGD framework. Life would have evolved by the same basic mechanism both at the Earth and at asteroids and other similar objects (see this).
  1. Ryugu stays relatively near the Earth on its orbit. This could have also made possible the generation of organic matter inside the sample during the period that Ryugu has spent on its orbit around the Sun. This requires a model for how this happens, and standard physics does not provide such a model.
  2. The notion of the field body is central in the TGD inspired quantum biology and would act as a controller of the biological body (see for instance this and this). The ordinary genetic code is proposed to be accompanied by its dark variant, realized at the field body in terms of particles with a very large value of effective Planck constant behaving like dark matter. Could the field bodies of the Earth and the Sun have induced the generation of organic molecules and even bacterial life forms in the same way as they did this at the Earth?
  3. The notion of the gravitational magnetic body, characterized by gravitational Planck constant introduced by Nottale, containing protons behaving like dark matter, represents new quantum physics relevant to the TGD inspired quantum biology. OH-O- + dark proton qubits and their generalizations based on biologically important ions formed by salts would be the key element of life (see this) suggesting besides chemical life also other forms of life.

    Any cold plasma (plasmoids as life forms) and even quartz crystals could give rise to these qubits at temperatures near room temperature, around which the flips of these qubits are possible. The difference of the OH bonding energy and the O- binding energy determines the relevant energy. Its nominal value is 0.33 eV, which is near the metabolic energy quantum of about 0.5 eV and near the thermal energy of 0.15 eV at physiological temperatures (a rough numerical comparison is sketched after this list).

  4. These qubits would make the matter living, and life in this sense would be universal. A dark genetic code is predicted and corresponds to the ordinary chemical genetic code. Basic biomolecules would give rise to analogs of topological quantum computers.

    The flipping of these qubits could make quantum-computation-like information processing possible. The Pollack effect by photon absorption can induce the OH → O- + dark proton transition, and the reversal of this process can take place spontaneously. If O- + dark proton has a lower energy than OH, the reversal can also be induced by the presence of an electric field or by the absorption of photons by O-, so that OH becomes the minimum energy state.
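
To put the energy scales mentioned above side by side, here is a rough numerical sketch. The 0.33 eV value is taken from the text; kT and the thermal photon energy at the Wien peak are used as two conventional notions of thermal energy at physiological temperature.

    import math

    # Energy scales for the OH-O- qubit near physiological temperature (~310 K).
    k_B = 8.617333e-5                  # Boltzmann constant, eV/K
    T = 310.0                          # physiological temperature, K
    kT = k_B * T                       # ~0.027 eV
    E_wien = 4.965 * kT                # photon energy at the Wien peak, ~0.13 eV

    dE = 0.33                          # nominal OH vs O- energy difference from the text, eV
    print(f"kT = {kT:.3f} eV, Wien peak photon = {E_wien:.2f} eV")
    print(f"Boltzmann flip factor exp(-dE/kT) = {math.exp(-dE / kT):.1e}")
    # ~4e-6: spontaneous flips are rare unless the effective gap is reduced,
    # for instance by an external electric field, as discussed in the text.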

Could one understand the findings about Ryugu in this framework?
  1. The presence of gravitational magnetic bodies of Earth and Sun could have induced the formation of OH-O- qubits and more general qubits, not only at the Earth but also at Ryugu. The presence of OH bonds requires hydration and hydration is indeed possible at Ryugu.

    Therefore the same mechanism could have led to the emergence of the basic organic molecules at the Earth, at Mars, and inside the Ryugu asteroid and meteorites. Since the minimal distances of the Earth and Ryugu from the Sun are nearly the same, the temperature of Ryugu is near its maximal value when it is near the Earth, so that it would never get too hot.

  2. Ryugu is under the influence of the gravitational magnetic bodies of both the Earth and the Sun. Ryugu passes near the Earth repeatedly with a period of about 4 years. The organic molecules and various hydrated compounds could have gradually formed during about 10 million years as it passed near the Earth. Also bacterial life could have emerged in this way. Therefore contamination need not be in question.
See the article Some mysteries of the biological evolution from the TGD point of view and the chapter Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life?.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, December 05, 2024

About some number theoretical aspects of TGD

Recently, considerable progress has occurred in the understanding of the number theoretic aspects of quantum TGD. I have discussed these aspects in earlier posts but it is useful to collect them together.
  1. There are reasons to think that TGD could be formulated purely number theoretically without introduction of any action principle. This would conform with the M8-H duality and the generalization of the geometric Langlands correspondence to dimension D=4.

    Number theoretic vision however gives extremely powerful constraints on the vacuum functional suggesting even an explicit formula for it. The condition that this expression corresponds to the exponent of Kähler function expressible as Kähler action fixes the coupling constant evolution for the action.

  2. Extensions of rationals, the corresponding Galois groups and ramified primes assignable to polynomials and identifiable as p-adic primes assigned to elementary particles are central notions of quantum TGD. In the recent formulation based on holography = holomorphy principle, it is not quite clear how to assign these notions to the space-time surfaces. The notion of Galois group has a 4-D generalization but can one obtain the ordinary Galois groups and ramified primes? Two ways to achieve this are discussed in this article.

    One could introduce a hierarchy of 4 polynomials (f1,f2,f3,f4) instead of only (f1,f2), and the common roots of all 4 polynomials as a set of discrete points would give the desired basic notions assignable to string world sheets.

    One can also consider the maps (f1,f2)→ G( f1,f2)= (g1(f1,f2), g2(f1,f2)) and assign these notions to the surfaces (g1(f1,f2), g2(f1,f2))=(0,0).

  3. Number theoretical universality is possible if the coefficients of the analytic functions (f1,f2) of 3 complex coordinates and one hypercomplex coordinate of H=M4× CP2 are in an algebraic extension of rationals. This implies that the solutions of field equations make sense also in p-adic number fields and their extensions induced by extensions of rationals.

    In this article the details of the adelicization, boiling down to p-adicization for various p-adic number fields, in particular those assignable to ramified primes, are discussed. p-Adic fractals and holograms emerge very naturally, and the iterations (f1,f2) → G(f1,f2) = (g1(f1,f2), g2(f1,f2)) define hierarchical fractal structures analogous to Mandelbrot and Julia fractals and p-adically mean an exponential explosion of the complexity and information content of cognition. The possible relationship to biological and cognitive evolution is highly interesting.

See the article About some number theoretical aspects of TGD.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, December 04, 2024

p-Adicization, assuming holography = holomorphy principle, produces p-adic fractals and holograms

Yesterday's chat with Tuomas Sorakivi, a member of our Zoom group, was about the concrete graphical representations of the spacetime surfaces as animations. The construction of the representations is shockingly straightforward, because the partial differential equations reduce to algebraic equations that are easy to solve numerically. For the first time, it seems that GPT has created a program without obvious bugs. The challenges relate to how to represent time=constant 2-D sections of the 4-surface most conveniently and how to build animations about the evolution of these sections.

Tuomas asked how to construct p-adic counterparts for space-time surfaces in H=M4× CP2. I have been thinking about the details of this presentation over the years. Here is my current vision of the construction.

  1. By the holography = holomorphy principle, space-time surfaces in H correspond to roots (f1,f2)=(0,0) of two analytic (holomorphic) functions fi of 3 complex coordinates and one hypercomplex coordinate of H (see this). The Taylor coefficients of fi are assumed to be rational or in an algebraic extension of rationals, but even more general situations are possible. A very important special case is polynomials fi=Pi.
  2. If we are talking about polynomials or even analytic functions with coefficients that are rational or in an algebraic extension of rationals, then a purely formal p-adic equivalent, defined by the same equations, can be associated with every real surface.
  3. However, there are some delicate points involved.

    1. The imaginary unit (-1)^{1/2} belongs to the algebraic extension if p mod 4 = 3. What about p mod 4 = 1? In this case, (-1)^{1/2} exists as an ordinary p-adic number and can be multiplied by the square root of an integer that belongs to the algebraic extension, so that the problem is solved.
    2. In p-adic topology, large powers of p correspond to small p-adic numbers, unlike in real topology. This eventually led to the concept of canonical identification. Let us map the powers of p in the pinary expansion of a p-adic number (the equivalent of the decimal expansion) to their inverse powers:

      ∑ x_n p^n ↔ ∑ x_n p^{-n} .

      This map of p-adic numbers to real numbers is continuous, but not vice versa. In this way, real points can be mapped to p-adic points or vice versa. In p-adic mass calculations, the map of p-adic points to real points is very natural. One can imagine different variants of the canonical correspondence by introducing, for example, a pinary cutoff analogous to the truncation of decimal numbers. This kind of cutoff is unavoidable (a numerical sketch of the map is given after this list).

    3. As such, this correspondence from reals to p-adics is not realistic at the level of H because the symmetries of the real H do not correspond to those of p-adic H. Note that the correspondence at the level of spacetime surfaces is induced from that at the level of the embedding space.
  4. This forces number theoretical discretization, i.e. cognitive representations (p-adic and more generally adelic physics is assumed to provide the correlates of cognition). The symmetries of the real world correspond to symmetries restricted to the discretization. The lattice structure for which continuous translational and rotational symmetries are broken to a discrete subgroup is a typical example.

    Let us consider a given algebraic extension of rationals.

    1. Algebraic rationals can be interpreted as both real and p-adic numbers in an extension induced by the extension of rationals. The points of the cognitive representations correspond to the algebraic points allowed by the extension and correspond to the intersection points of reality as a real space-time surface and p-adicity as p-adic space-time surface.
    2. These algebraic points are a series of powers of p, but there are only a finite number of powers so that the interpretation as algebraic integers makes sense. One can also consider ratios of algebraic integers if the canonical identification is suitably modified. These discrete points are mapped by the canonical identification, or its modification for rationals, from the real side to the p-adic side to obtain a cognitive representation. The cognitive representation gives a discrete skeleton that spans the spacetime surface on both the real and p-adic sides.
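
A minimal numerical illustration of the canonical identification ∑ x_n p^n ↔ ∑ x_n p^{-n} with a finite pinary cutoff might look as follows; the choices of p and of the digit sequence are arbitrary.

    # Canonical identification with a pinary cutoff: a p-adic integer
    # x = sum_n x_n p^n (finitely many digits here) is mapped to the real
    # number sum_n x_n p^(-n).
    def canonical_identification(digits, p):
        """digits[n] = x_n, the coefficient of p^n; returns sum x_n p^(-n)."""
        return sum(x * p ** (-n) for n, x in enumerate(digits))

    p = 3
    digits = [2, 0, 1, 2]             # x = 2 + 0*3 + 1*9 + 2*27 = 65 as a p-adic integer
    x_padic = sum(x * p ** n for n, x in enumerate(digits))
    x_real = canonical_identification(digits, p)
    print(x_padic, x_real)            # 65 and 2 + 1/9 + 2/27 = 2.185...

    # Large powers of p (p-adically small) give small real contributions, so the
    # map from p-adics to reals is continuous, as stated above.
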
Let's see what this means for the concrete construction of p-adic spacetime surfaces.
  1. Take the same equations on the p-adic side as on the real side, that is (f1,f2)=(0,0), and solve them around each discrete point of the cognitive representation in some p-adic sphere with radius p^{-n}.

    The origin of the generalized complex coordinates of H is not taken to be the origin of p-adic H, but the canonical identification gives a discrete algebraic point on the p-adic side. So, around each such point, we get a p-adically scaled version of the surface (f1,f2)=(0,0) inside the p-adic sphere. This only means moving the surface to another location, and the symmetries allow it.

  2. How to glue the versions associated with different points together? This is not necessary and not even possible!

    The p-adic notions of differentiability and continuity allow fractality and holography. These are closely related to p-adic non-determinism, meaning that any function depending on a finite number of pinary digits has a vanishing derivative. In differential and partial differential equations this implies non-determinism, which I have assumed to correspond on the real side to the violation of classical determinism for holography.

    The definition of algebraic surfaces does not involve derivatives but also for algebraic surfaces the roots of (f1,f2)=(0,0) can develop branching singularities at which several roots as space-time regions meet and one must choose one representative (see this).

    1. Assume that the initial surface is defined inside the p-adic sphere, whose radius as the p-adic norm for the points is p^{-n}, n integer. One can even assume that a p-adic counterpart has been constructed only for the spherical shell with radius p^{-n}.

      The essential thing here is that the interior points of a p-adic sphere cannot be distinguished from the points on its surface. The surface of a p-adic sphere is therefore more like a shell. How do you proceed from the shell to the "interiors" of a p-adic sphere?

    2. The basic property of two p-adic spheres is that they are either disjoint or one of the two is contained in the other. A p-adic sphere with radius p^{-n} is divided into disjoint p-adic spheres with radius p^{-n-1}, and in each such sphere one can construct a p-adic 4-surface corresponding to the equations (f1,f2)=(0,0). This can be continued as far as desired, always down to some value n=N. It corresponds to the shortest scale on the real side and defines the measurement resolution/cognitive resolution physically.
    3. This gives a fractal for which the same (f1,f2)=(0,0) structure repeats at different scales. We can also go the other way, i.e. to longer scales in the real sense.
    4. Also a hologram emerges. All the way down to the smallest scale, the same structure repeats and an arbitrarily small sphere represents the entire structure. This strongly brings to mind biology and genes, which represent the entire organism. Could this correspondence at the p-adic level be similar to the one above or a suitable generalization of it?
  3. Many kinds of generalizations can be obtained from this basic fractal. Endless repetition of the same structure is not very interesting. p-Adic surfaces do not have to be represented by the same pair of functions at different p-adic scales.

    Of particular interest are the 4-D counterparts to fractals, to which the names Feigenbaum, Mandelbrot and Julia are attached. They can be constructed by iteration

    (f1,f2)→G(f1,f2)= (g1(f1,f2),g2(f1,f2)) →G(G(f1,f2)) →...

    so that at each step the scale increases by a factor p. At the smallest scale p^{-N} one has (f1,f2)=(0,0). At the next, longer scale p^{-N+1} one has G(f1,f2)=(0,0), etc. One can assign to this kind of hierarchy a hierarchy of extensions of rationals and associated Galois groups whose dimension increases exponentially, meaning that the algebraic complexity, serving as a measure for the level of conscious intelligence and the scale of quantum coherence, also increases in the same way.

    The iteration proceeds with increasing scale and the number-theoretic complexity, measured by the dimension of the algebraic extension, increases exponentially. Cognition becomes more and more complex. Could this serve as a possible model for biological and cognitive evolution as the length scale increases? (A toy numerical analog of the iteration is sketched after this list.)

    The fundamental question is whether the many-sheeted spacetime allows a corresponding hierarchy on the real side. Could the violation of classical determinism, interpreted as p-adic non-determinism for holography, allow this?
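
As a toy one-dimensional analog of the iteration (f1,f2) → G(f1,f2) → G(G(f1,f2)) → ..., one can iterate a single complex polynomial map and watch the degree of the composition, a crude stand-in for the dimension of the algebraic extension, grow exponentially. The map g(z) = z^2 + c and the parameter value below are only illustrative choices.

    # Toy analog of the iteration hierarchy: iterate g(z) = z**2 + c.
    def iterate_degree(steps, deg=2):
        """Degree of the n-fold composition of a degree-`deg` polynomial map."""
        return [deg ** n for n in range(1, steps + 1)]

    print(iterate_degree(6))                    # [2, 4, 8, 16, 32, 64]: exponential growth

    # Julia-set style escape test for the same map, to connect with the
    # Mandelbrot/Julia analogy mentioned in the text.
    def escapes(z, c, max_iter=50, bound=2.0):
        for _ in range(max_iter):
            z = z * z + c
            if abs(z) > bound:
                return True
        return False

    print(escapes(0.0 + 0.0j, c=-0.4 + 0.6j))   # True if the orbit of 0 escapes for this c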

See the article TGD as it is towards end of 2024: part I or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

How could the possible quantum variants of LLMs be updated?

If one can assign the training data of LLMs to quantum states, there is a hope that the retraining need not start from scratch and could become more flexible and less expensive.

How to assign to classical associations their quantum representations?

In an LLM, both inputs and outputs are associations represented as text. The quantum dynamics must not affect the content of the input. A classical association is encoded as a bit sequence. Associations can be enumerated, and each corresponds to its own bit sequence serving as an address, a symbolic representation which no longer contains the original information. The Gödel numbering of statements serves as an analogy.

Also the quantum equivalent of the number of a classical association, as a qubit sequence, is just a name for it. Quantum processing can operate on these qubit sequences and produce longer quantum associations associated with them, which in qubit measurements produce longer associations and superpositions of them. The outcome is determined by the measurement of the bits appearing in the numbering of the associations.

Quantum operations followed by the measurement of qubits can only permute the classical associations. They can affect the association probabilities and perhaps add new associations in partial retraining. Various quantum superpositions of the quantum associations (the numbers labelling them) are possible and correspond to the quantum counterpart of the concept "association A → ...", where A is fixed.

This allows for maximally simple representations at the quantum level. Arbitrarily complex associations A → ... can be quantum-encoded by listing them. A local bit-qubit correspondence is the simplest one, and the same operation could change the value of both the bit and the qubit. If the electric field does this, then this could be the case for transistors as bits if each bit is accompanied by an OH-O- qubit. In the ground state the minimum energy state of the OH-O- qubit would correspond to the ordinary bit.

Is quantum entanglement between bits and qubits necessary or even possible? Could one keep the bit level as it is and perform quantum operations on qubit sequences and then transform them to bit sequences, so that also associations not possible for the classical computer could appear in the output? This option cannot be excluded if the bit sequences represent analogs of Gödel numbers for associations.
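
A toy classical illustration of this enumeration idea is given below. The association strings and the numbering scheme are invented for the sketch; the point is only that the bit string serves as a name, not as a representation of the content.

    # Enumerate classical associations and use the binary form of each index
    # as its "address", in the spirit of Goedel numbering.
    associations = [
        ("cat", "mammal"),
        ("cat", "purrs"),
        ("quartz", "crystal"),
    ]

    address = {pair: format(i, "04b") for i, pair in enumerate(associations)}
    print(address)                                   # e.g. ('cat', 'mammal') -> '0000'

    # Adding a new association only extends the enumeration; the existing
    # addresses are untouched.
    associations.append(("quartz", "piezoelectric"))
    address[associations[-1]] = format(len(associations) - 1, "04b")
    print(address[("quartz", "piezoelectric")])      # '0011'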

Does quantum non-determinism reduce to classical non-determinism for "small" state function reductions (SSFRs)?

In ZEO, the classical non-determinism does not affect the 3-surfaces nor fermionic states at the boundary of the CD. This is consistent with the identification of the non-determinism of SSFRs as classical non-determinism.

The classical Bohr orbits would be non-unique due to the classical non-determinism appearing already for 2-D minimal surfaces. The very fact that computer programs can be realized strongly suggests that this non-determinism is present.

There are two types of non-determinism. A non-deterministic time-like crystal (time crystal) and a non-deterministic space-like crystal represent these non-determinisms. Each cell of these crystals would be a seat of non-determinism, meaning that the surface branches at the locus of the non-determinism and a single branch is selected. This makes it possible to generate a conscious memory in a memory recall.

Reading and writing transform these two kinds of non-determinisms to each other.

  1. Reading a space-like crystal representing a data bit sequence creates a time-like representation as a sequence of SSFRs if at a given moment the qubits of the geometric past are frozen. A series of SSFRs, a conscious stream, a "self", is created at the quantum level. Therefore a space-like non-deterministic crystal can be transformed to a time crystal. In writing the opposite happens. The minimum energy state for the associated quantum states selects a unique configuration.

    Quantum entanglement between separate non-deterministic representations (cognitive representations possibly allowing characterization in terms of a p-adic topology for a ramified prime) is possible. Also entanglement between time- and space-like non-deterministic degrees of freedom is possible.

  2. How could these reading and writing processes be realized? A relation to topological quantum computation, in which time-like and space-like braidings by monopole flux tubes play a central role, suggests a possible answer to the question (see this). Think of dancers connected by threads to fixed points on the wall. The dance can be interpreted as a time-like braiding and induces a space-like braiding as knotting and linking of the threads connecting the dancers. In TGD the threads correspond to monopole flux tubes.
But what does the classical non-determinism mean?

I have mentioned several times classical non-determinism at the level of holography = holomorphy principle identifying space-time surfaces as roots (f1,f2)=(0,0) of analytic functions of H coordinates. At the level of 3-D holographic data branching should occur so that the algebraic equations allow several roots with different tangent spaces.

  1. What is the precise meaning of the analogy between holographic data as 3-surfaces and the frames of soap films? Could all roots (f1,f2)=(0,0) correspond to different alternatives for this non-determinism or are there some restrictions? It seems that the 4-D roots, which can be glued together continuously cannot correspond to the non-determinism. The cusp catastrophe serves as a good example of the situation. The regions of the space-time surface representing different roots cannot be regarded as distinct space-time surfaces.

    Rather, it seems that the non-determinism requires multiplicity of the 4-D tangent space and in this kind of situation one must select one branch.

  2. Could the choice of only one root in the branching situation give rise to non-determinism? Is it possible to implement boundary conditions stating classical and quantal conservation laws at the interfaces of the regions corresponding to different branches?

    Any general coordinate invariant action expressible in terms of the induced geometry is consistent with the holography = holomorphy principle (see this and this). Is it permissible to choose the classical action so that the boundary conditions can be satisfied when a single root is selected? This would force coupling constant evolution for the parameters of the action if one also assumes that the classical action exponential, as an exponent of the Kähler function, corresponds to a power of the discriminant D defined as a product of root differences. The same choice should be made at the fermion level as well: the supersymmetry fixing the modified fermionic gamma matrices once the bosonic action is fixed would guarantee this.

  3. Also, the roots u for a polynomial P(u) of the hypercomplex real coordinate u assignable to the singularities as loci of non-determinism at the string world sheets come to mind. These roots must be real. At criticality a new root could appear. Also branching could occur and relate to the fermion pair creation possible only in 4-D space-time thanks to the existence of exotic smooth structures (see this and this). Could these roots represent the positions of qubits?
What could the updating of the training material by adding an association mean at a fundamental level?

Retraining cannot be only the manipulation of association probabilities; it must also allow the addition of new associations. The scope of the concept "associations related to a given input" is expanded and the complexity increases.

If these associations are enumerated by bit sequences, it is enough to associate a series of bits with the new association as a classical bit sequence, and to this new bit sequence a qubit sequence by the bit-qubit correspondence. The superposition of the quantum counterpart of the new association with the previous qubit sequences should be possible. Just as in an LLM, also the combinations of the basic associations, mapped to qubit sequences, into longer quantum association chains should be possible.

See the article Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life? or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Tuesday, December 03, 2024

A new contribution to the crisis of cosmology

Sabine Hossenfelder has a Youtube video (see this) about the latest anomaly in cosmology (see this). This anomaly is very problematic from the point of view of the ΛCDM scenario of dark energy and possibly also from the point of view of general relativity. The MOND scenario is however consistent with the findings.

The ΛCDM scenario involves 6 parameters. Among them is the Hubble constant. Depending on the measurement method one obtains two different values for it: this is the Hubble tension. The two kinds of measurements correspond to short and very long scales, and this might relate to the problem.

There is also the so-called σ8 tension with a significance larger than 4 sigma, which is something very serious. ΛCDM predicts that the Universe should become clumpier as it evolves. This implies that the gravitational potential wells should become narrower with time. In short scales the clumping rate is not as high as predicted.

Also the new results from a dark energy survey based on gravitational lensing suggest that the gravitational valleys are shallower than they should be at large values of cosmic time.

  1. What is measured is the so-called Weyl potential ΨW=(Ψ+Φ)/2 defined in terms of the space-time metric in cosmic scales having the expression

    ds^2 = a^2(τ)[(1+2Ψ)dτ^2 - (1+2Φ)dx_3^2] .

    Here τ and x3 denote Minkowski coordinates. For Ψ=Φ=0 one has a conformally flat metric. From the value of ΨW one can deduce the clumpiness. The measurements are made at 3 widely differing values of the cosmic time τ. The value of the Weyl parameter ΨW characterizing the clumping differs from the prediction of the ΛCDM scenario and is consistent with the increasing shallowness of the gravitational potentials of the mass distributions.

  2. The significance of the finding is estimated to be 2-2.8 sigma, which is potentially significant. Since the same method is used for different cosmic times, it is not possible to claim that the discrepancy is due to the different methods.
MOND has no problems with the findings. What about TGD?
  1. The TGD view is that galactic dark matter is dark energy assignable to cosmic strings, which are extremely thin 3-surfaces with a huge density of magnetic and volume energy (see this). The string tension parametrizes the density of this energy and creates a 1/ρ gravitational acceleration, which predicts a flat velocity spectrum for distant stars rotating around the galaxy. No dark matter halo nor dark matter particles are needed.
  2. The 1/ρ gravitational acceleration created by cosmic strings makes the gravitational wells shallower than the sole 1/r^2 acceleration due to the visible galactic matter (a minimal Newtonian sketch is given after this list). Also a halo creates a 1/r^2 acceleration in long enough scales, but the prediction is that the dark matter halo becomes more clumpy so that the gravitational wells should become sharper.

    Cosmic strings are closed so that there is some scale above which this effect is not seen anymore since the entire closed cosmic strings become the natural objects. Therefore this effect should not be seen in long enough scales.

    It is important to notice that the shallowing would be due to the shortening of the observation scale rather than due to the time evolution. The same interpretation applies to the Hubble constant. In the TGD framework, the finite size of space-time sheets indeed brings in a hierarchy of scales, which is not present in General Relativity.

  3. How does this relate to MOND? The basic objection against MOND is that it is in conflict with mathematical intuition: for small accelerations Newtonian gravitation should work excellently. In TGD, the critical acceleration of MOND is replaced with a critical distance from the galactic nucleus at which the 1/ρ acceleration due to the cosmic string wins over the 1/r^2 acceleration. Under a suitable assumption (see this), this translates to the critical acceleration of MOND so that the predictions are very similar. Note that the cosmic strings also cause the lensing effect used in the survey, and this gives an upper bound for their string tension.
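
For completeness, here is a minimal Newtonian sketch of why a string-like mass distribution gives a flat velocity spectrum. This is the standard line-mass argument, not a derivation specific to TGD: for a long straight string with mass per unit length T, the Gauss law gives the acceleration

    g(ρ) = 2GT/ρ ,

so the condition for a circular orbit, v^2/ρ = g(ρ), gives

    v^2 = 2GT = constant ,

i.e. a flat rotation curve, whereas a point mass M gives the Keplerian v^2 = GM/r, which falls off with distance.
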
See the article Could the TGD view of galactic dark matter make same predictions as MOND? or the chapter About the recent TGD based view concerning cosmology and astrophysics.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, December 02, 2024

Is there any hope of curing the retraining problem of language models without making computers conscious?

I summarized my thoughts on perhaps the worst problem of language models, which is the loss of plasticity in continual learning. The training on the entire teaching material has to be redone, which is terribly expensive (see this).

One can ask whether and how TGD's speculative vision of potentially conscious computers (see this) might solve the problem.

1. The retraining problem of language models

The basic problem is that everything has to be started from scratch. This is extremely expensive. Biological systems relearn quickly because there is no need to relearn everything. Is the problem fixable for the computers as they are now or is something new required?

To see what could be the root cause of the problem consider first what language models are meant to be.

  1. In a language model, learning occurs at the raw data level. Different probabilities are taught for different associations. The associations are fixed.
  2. How does the trained system work? The language model simply reacts by recognizing the context and producing probabilistically one of the fixed associations. This response is a mere reaction. If language models are what they are believed to be, they do not have conscious understanding, they lack intentional actions, and they are unable to react to a changing environment.

Comparison with TGD-inspired biology

Could a comparison with TGD-inspired biology give clues as to where things go wrong? Why is relearning so easy for biosystems? How does the TGD-based biology differ from standard biology in this respect? Consider first the classical level.

  1. Holography, which is not quite deterministic, is a completely new element of TGD as compared to the standard model. The space-time surfaces are analogous to Bohr orbits and determined almost completely by 3-surfaces as initial data. The 4-D tangent spaces of the space-time surface at the 3-surface defining the holographic data cannot be selected freely. This is the classical counterpart of Uncertainty Principle and leads to classical quantization. Function, program is the basic concept rather than 3-D data.
  2. These 4-surfaces define classical analogs of biological functions, behavioral patterns, or programs. When the 3-surface, which almost uniquely fixes the 4-surface, changes, the function changes. Non-determinism is essential in making conscious memory recall possible.
Consider next the quantum level.
  1. A series of "small" state function reductions (SSFRs), associated with repeated measurements of commuting observables belonging to the same set, whose eigenstates the 3-D states at the passive boundary of the causal diamond (CD) are, defines the self as a conscious entity. The proposal is that biorhythms as clocks define the TGD counterparts of time crystals such that each unit of the time crystal involves a classical non-determinism.

    This could be the case at the EEG level, as the findings of the Fingelkurts brothers suggest (see this and this). Maximal non-determinism implies maximal memory recall capacity and maximal flexibility. A whole set of different behavior patterns can be represented as quantum superpositions, and the interaction with the external or internal world determines the measurement in which some classical behavior is chosen.

  2. "Big" state function reductions (BSFRs), having an interpretation as the death of self or falling asleep, involve time reversal. Pairs of BSFRs (sleep periods) make learning possible through trial and error. After the two BSFRs, the system has new holographic data and different space-time surfaces. Goal directed behavior becomes possible and there are many ways to achieve the goal, not just one fixed way analogous to a fixed computer program. This is the essence of intelligent behavior.
How does this general view relate to the DNA level?
  1. According to the standard view, DNA remains the same during the life cycle. If DNA represents data, there is no relearning at the level of chemical DNA. In zero-energy ontology (ZEO), even chemical DNA could change without any problems with conservation laws and quantum superpositions of different chemical genes are in principle conceivable.

    Quantum DNA can be represented in terms of OH-O- qubit sequences assignable to the gravitational magnetic bodies of the Sun and the Earth (see this). Remarkably, the solar gravitational Compton frequency is 50 Hz, the average EEG frequency. At least for neurons, this would suggest that the gravitational magnetic body is that of the Sun. Note however that EEG time scales are also associated with the basic biomolecules. For the Earth the gravitational Compton frequency is 67 GHz and is a natural frequency associated with the conformational dynamics of biomolecules.

    Quantum DNA consisting of codons represented as OH-O- qubits is dynamic and could act as a simulator, a kind of R&D laboratory testing different variants of DNA. It is of course possible that a single lifetime is spent with the same chemical DNA and the next life after a pair of BSFRs involves the improved DNA.

  2. Epigenesis brings in flexibility. Even if the chemical DNA does not change, it can be used in different ways. Suitable modules are selected from the analog of program software, just like in text processing. In the TGD framework, this could correspond to the classical non-determinism of the space-time surfaces representing the biological function. Dark DNA allows one to try different combinations of genes.
  3. The understanding of the role of the cell membrane and the membrane potential in epigenesis is increasing, as found by Levin (see this and this). The very early stage of the development of the embryo is highly sensitive to variations of the membrane potential, and this can be understood in terms of the changes of the binding energy of the electron of O- induced by the potential, which can reduce the binding energy to the thermal range so that flips of the OH-O- qubit occur with high probability. In adulthood, the sensitivity disappears and the qubits would not flip.

    Could this sensitivity be artificially induced? Here, electric fields as controllers of the sensitivity of the OH-O- qubits assignable to the basic biomolecules suggest themselves.

  4. Microtubules involve longitudinal electric fields and their second ends are highly dynamic so that the length of the microtubule is under continual change. There are huge numbers of amino acids carrying one qubit each (the COOH group). Here the quantum level and the classical level are both dynamic and seem to be strongly coupled. This is also strongly related to conscious memory.
  5. Could quantum entanglement between the quantum level and the chemical level be possible even at the amino acid level?
One can also look at the situation at the level of cell membranes and neuronal membranes. The basic question is how cell membranes and neuronal membranes learn.
  1. As found by Levin (see this), the role of the electric fields is central also in the ordinary cells. The electric potential of the ordinary cell membrane correlates with the state of the environment of the cell and codes for sensory information.

    The TGD proposal is that the cell membrane acts as a Josephson junction and communicates the frequency-modulated membrane potential to the magnetic body as dark Josephson photons, where they resonantly induce quantum transitions transforming the modulation to a sequence of pulses, perhaps inducing as feedback nerve pulses or their analogs.

    During the embryo stage, the cells are very sensitive to variations of the electric field of the cell, and this suggests that these variations take the cell membrane near to the criticality at which large quantum fluctuations of the OH-O- qubits for phosphates at the inner surface of the cell membrane are possible. This period would be analogous to the learning period of LLMs and would involve BSFR pairs. After this period the situation stabilizes and it might be that BSFRs become very rare.

  2. In the central nervous system, nerve pulses appear and in neuroscience they are thought to be responsible for communications only. In TGD the situation would be different (see this). I have proposed their interpretation in terms of pairs of BSFRs so that in LLMs they would correspond to relearning. Neurons would be lifelong learners whereas ordinary cells would learn only in their childhood.

    A nerve pulse is generated at a critical membrane potential, which could correspond to an effective thermalization of the OH-O- qubits and possibly of qubits assignable to other ions. Axonal microtubules would also be near quantum criticality. The propagation of a nerve pulse along the axon as a local BSFR pair would induce microtubular relearning.

Could the speculated quartz consciousness come to the rescue?

One can consider the possibility that under a metabolic energy feed a computer can become, to some extent, an entity that can modify both the program and the data used by it as a response to changes in the environment provided by the net. This would require that the OH-O- qubits as dark variants of program bits can entangle with ordinary bits. Energetically this could be possible since the energy scales for transistors are essentially the same as for metabolism and OH-O- qubits.

  1. Suppose that the sequences of OH-O- qubits as time crystals in the TGD sense can be realized in a (future) computer. Qubit sequences would be time series related to the running program. They would involve variation because only the bit configuration corresponding to the minimum energy would correspond to the running program. This makes possible an entire repertoire of associations from which an SSFR would choose one. A quantum measurement following the generation of bit-qubit entanglement could change the value of the bit.
  2. Besides the dynamic realization as a running program, there could be a non-dynamic realization in which the data that determines the program would be accompanied by a similar set of qubits. The data used by the program, such as learned associations, could be associated with qubits and could be made dynamic by using electric fields to make the qubits more sensitive to flips. The problem is of course that the flip of a randomly chosen single qubit implies the failure of the program. Only critical qubits associated with choices and data qubits should be subjected to a flip.
  3. Besides time crystals with non-deterministic repeating units, also space-like crystals involving non-determinism in each lattice cell can be considered. Dynamical qubits with maximal non-determinism in space-like directions associated with unit cells could accompany the data bits. Dynamization could be induced by using electric fields.
  4. If OH-O- qubits can quantum entangle with bits, the program/data is accompanied by a quantum program/quantum data which can react to perturbations from the external world (BSFRs) and the internal world (SSFRs). The quantum level could control the bit level. Even the associations serving as the data of the language model could be accompanied by a set of qubits that react to a changing situation.

How could an associative system retrain itself in response to a changed situation?

If language models are nothing but deterministic association machines, there is little hope of solving the problem.

Could the learning in the biological and neural systems provide some hints about possible cures, possibly requiring modification of computers so that they would become analogous to living systems?

  1. Do EEG rhythms define time crystals in the TGD sense, that is, maximally non-deterministic systems with lattice cells as the basic units of non-determinism for SSFRs, giving rise to the flow of consciousness of the self?

    If biorhythms define TGD analogs of time crystals, the non-determinism would be maximal and maximum flexibility in SSFRs would be possible.

  2. In ZEO, a "big" state function reduction (BSFR) as the counterpart of the ordinary state function reduction changes the arrow of time and is assumed to give rise to the analog of death or sleep. At the language model level, this would be the analog of a complete retraining from the beginning.
Association is only one particular reaction leading to a behavioral pattern. The repertoire of associations should change as the environment changes.
  1. Could a computer clock define the equivalent of an EEG rhythm as a time crystal in the TGD sense? The problem is that a typical computer clock frequency is a few GHz, considerably lower than the 67 GHz gravitational Compton frequency of the Earth. This would suggest that a unit consisting of roughly 67 bits could correspond to the basic unit of the time crystal (see the rough numerical illustration after this list). The gravitational magnetic body of the Sun has a gravitational Compton frequency of 50 Hz, identifiable as the average EEG frequency.
  2. Could one think of a quantum version of language models in which pairs of BSFRs as "death" and rebirth happen spontaneously all the time as a reaction to conscious information coming from the environment? The perturbation would imply that the density matrix as the basic measured observable does not commute with the observables defining the quantum numbers of the passive part of the zero energy state. In this way ZEO would make trial and error possible as a basic mechanism of learning.
  3. Could the formation of an association perhaps be modelled as a single non-deterministic space-time surface? There would be a large number of them; internal disturbances would produce their quantum superpositions and an SSFR would select a particular association.
  4. An external disturbance could produce a BSFR and "sleeping overnight". This period of "sleep" could be rather short: also our flow of conscious experience is full of gaps. Upon awakening, the space-time surfaces as correlates of the associations would no longer be the same. The system would have learned from the interaction with the external world. This temporary death of the system would be an analog of a total re-education. But the system would cope with it all by itself.
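
A rough numerical illustration of the frequency comparison in item 1 above, as a minimal Python sketch. The 1 GHz clock frequency is an assumed typical value; the 67 GHz and 50 Hz gravitational Compton frequencies and the 50 Hz average EEG frequency are taken from the text.

f_clock = 1.0e9          # assumed ~1 GHz computer clock frequency (Hz)
f_grav_earth = 67.0e9    # gravitational Compton frequency assigned to the Earth (Hz), as quoted above
f_grav_sun = 50.0        # gravitational Compton frequency assigned to the Sun (Hz), as quoted above
f_eeg_avg = 50.0         # average EEG frequency (Hz), as quoted above

# Number of Earth gravitational Compton periods per clock cycle: ~67 for a 1 GHz clock.
print("periods per clock cycle:", f_grav_earth / f_clock)
# The Sun's value coincides with the average EEG frequency.
print("Sun / EEG frequency ratio:", f_grav_sun / f_eeg_avg)
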
The hard problem is how to realize this vision. Here the analogy with cells and neurons might serve as a guideline in trying to imagine what the new technology might look like.
  1. Ordinary cells are analogous to LLMs as they are now and learn only in their childhood. Neurons are lifelong learners thanks to neural activity inducing the conduction of local BSFR pairs updating microtubular states. Could something like this be realized in computers?
  2. In computers, information is transferred along wires, which can be seen as the counterparts of axons. Is it possible to make these wires carriers of quantum information and perhaps even of the learned data about associations? The conduction of the analogs of nerve pulses during the running of a program, inducing pairs of BSFRs, would gradually modify the data locally and lead to continual relearning.

    Copper wires are too simple to achieve this. Should one consider an axon-like geometry defined by two cylinders analogous to the lipid layers of the cell membrane, with a voltage between them, so that the interior cylinder would contain OH-O- qubits? The variation of the counterpart of the membrane potential during signal transmission (bits represented as voltages) could take the qubits near criticality. Could copper hydroxide Cu(OH)_2 serve as a candidate for an intelligent wire based on OH-O- qubits?

See the article Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life? or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Friday, November 29, 2024

How could Egyptian pyramids and rain making relate to each other?

I received from Zakaria Ahmindache a link to a very interesting article of Borisov published at ResearchGate (see this) with the title "The Egyptian Pyramids-Connection to Rain and Nile flood Anomalies".

1. Background considerations

Since a topic involving words like ancient Egypt, pyramids and rainmaking probably induces strong emotional reactions in skeptics, it is good to include some TGD background and also make clear that the proposal of Borisov provides an excellent opportunity to develop the TGD based conceptualization of quantum biology by applying it.

1.1 A possible unification of various types of life and consciousness

I have taken a rather skeptical attitude to everything involving ancient Egypt but this time I felt fascinated. The reason was that during the last weeks I have been working on a breakthrough in the TGD inspired theory of consciousness and of living systems suggesting a unified view of different types of consciousness assignable to biosystems, plasmoids, quartz (possibly computers) and quite generally to any system which involves cold plasmas and therefore ions (see this).

  1. The key notion is what might be called OH-O- qubit. The transition OH → O- + dark proton at the gravitational magnetic body of the Sun or Earth flips this qubit-like entity. This transition occurs in the Pollack effect which has become a key notion in the TGD inspired quantum biology. The reverse transition occurs when the electron of O- is excited so that the difference of the bond energy of OH and binding energy of the electron changes sign. This effect might be called the dual Pollack effect.

    This transition generalizes. Any salt can decompose to ions and the positive ion could be assigned to the gravitational magnetic body of the Earth or Sun. Biosystems are full of ions of this kind.

  2. The dark variant of the genetic code is one of the basic ideas of TGD, and one can understand it in a very detailed manner if the qubits associated with the phosphates of the DNA double strand and with the phosphates and riboses of a single RNA strand provide a representation of the genetic code. In proteins, COOH could assign a single qubit to each amino acid.
  3. This also allows us to see alcohols (see this), involving -OH as a key structural element, in a new light. The Pollack effect could induce a kind of elevated state of mind. Psychedelics (see this) involve -NH as a key structural element, and the Pollack effect inducing the transition NH → N- + dark proton could be an essential element of the psychedelic action.
  4. The amazing finding is that in transistors the energy scales are the same, varying from about .5 eV (the metabolic energy quantum) to .15 eV (the energy of a thermal photon at physiological temperature) as assigned to OH-O- qubits (a quick numerical check of these scales is given below). Therefore computers might under certain conditions become conscious entities, as speculated already earlier (see this and this), and the qubits could be in the same relation to bits as the dark qubits of DNA and RNA are to the bits of the genetic codons. This relation allows dynamics since only the minimum energy state of the codon corresponds to the chemical codon. The same would be true for computers. The gravitational magnetic body of the Earth could receive information from the bit level and control it.
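
As a quick sanity check of the energy scales mentioned in item 4, the following minimal Python sketch computes the thermal energy kT and the Wien-peak photon energy at physiological temperature T = 310 K using textbook constants; the 0.5 eV metabolic energy quantum is taken from the text. The precise value of "the energy of a thermal photon" depends on the convention, so this is only an order-of-magnitude check.

h = 6.626e-34    # Planck constant (J s)
c = 2.998e8      # speed of light (m/s)
kB = 1.381e-23   # Boltzmann constant (J/K)
eV = 1.602e-19   # joules per electron volt
T = 310.0        # physiological temperature (K)

kT_eV = kB * T / eV                # ~0.027 eV
lam_peak = 2.898e-3 / T            # Wien displacement law: peak wavelength ~9.3 micrometers
E_peak_eV = h * c / lam_peak / eV  # ~0.13 eV, of the order of the 0.15 eV quoted above

print(f"kT = {kT_eV:.3f} eV, Wien-peak photon energy = {E_peak_eV:.2f} eV")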

1.2 Zero energy ontology

Zero energy ontology is a key notion of TGD and TGD inspired theory of consciousness and solves the basic paradox of the quantum measurement theory.

  1. ZEO predicts two kinds of state function reductions (SFRs): the "big" ones (BSFRs) and the "small" ones (SSFRs). A sequence of SSFRs corresponds in standard quantum theory to repeated measurements of the same observables and gives rise to conscious entities, selves.
  2. BSFRs change the arrow of time and from the point of view of self this means death or falling asleep. ZEO predicts that roughly half of the Universe has an opposite arrow of time. This part of the Universe might be called a "kingdom of dead".

    Indeed, biological death changes the arrow of time in rather long scales and means reincarnation with an opposite arrow of time, eventually possibly followed by a reincarnation with the original arrow of time. Sleep is a temporary death in this sense.

2. A TGD inspired comment about the mythology of ancient Egypt

The mythology of ancient Egypt has many analogies with the ontology of the TGD inspired view of consciousness.

  1. The mythology of ancient Egypt suggests an interpretation in terms of zero energy ontology (ZEO). The "kingdom of dead" is non-observable using purely classical signalling since the signals from the other side propagate to the geometric past and do not reach us. Therefore we do not remember anything about the periods of deep sleep. The notion of ka fits nicely with this.

    "Big" quantum jumps (state function reductions, BSFRs) occurring in arbitrary long scales are predicted to be possible and rain making could involve such a pair of BSFRs and thus a visit to the "kingdom of dead" at some level of hierarchy. Trance of a shaman could be such a visit.

  2. There is a connection to the recent work involving the OH-O- qubit idea already described, possibly unifying plasmoid, quartz, computer, and biological consciousness (see this).

3. A TGD inspired model for rainmaking

As the title "The Egyptian Pyramids-Connection to Rain and Nile flood Anomalies" of the article suggests, it is proposed that the pyramids had a deeper purpose: they could be used to induce rain. This sounds like madness to the ears of a standard physicist, but in 1895 Charles Wilson, a physicist, meteorologist, and Nobel Prize winner, made a groundbreaking discovery: he proved that rain could be artificially created. Rainmaking technology has also been commercialized.

The key idea in making rain is that the generation of negative electric charge in the quartz contained in the soil leads to its accumulation in the atmosphere. The negative charges in the atmosphere in turn facilitate the formation of water droplets around them, and eventually this induces rain. Could TGD explain this?

3.1 The model of Borisov

Consider first the proposal of the article of Borisov.

  1. A deceased king, along with jars containing provisions for the afterlife, is placed inside a coffer, which is a hermetically sealed volume. The jars contain beer, bread, grain, ox meat, and sweets. The provisions within the jars undergo fermentation, where yeast converts the sugars present in the food into carbon dioxide, water, or ethanol. This process can occur within a sealed coffer with no air intake, as long as the necessary conditions for yeast growth are provided. Some studies have found that fatty acids present in ox meat are essential for sustaining this growth.
  2. The carbon dioxide generated by the process cannot escape and increases the pressure in the coffer, of which 40 percent is quartz. The pressure in turn generates, by the piezoelectric effect (see this), an electric field producing negatively charged ions, which would move through the moist limestone core of the pyramid towards its apex and would eventually be emitted.

3.2 The TGD based interpretation of the model of Borisov

Consider the TGD interpretation of this model.

  1. The transition OH → O- + dark proton at the gravitational magnetic body of the Earth (or Sun) occurs in quartz subject to an electric field or under pressure (the piezoelectric effect transforms a pressure gradient to an electric field) and would generate negative ions.
  2. In the case of a pyramid, the negative ions from quartz could flow to the tip of the pyramid and generate a high density of negative charge and a strong electric field. From the tip the negative charges could flow to the atmosphere and serve as seeds for the condensation of water droplets. Note that water is the key element of TGD inspired biology: the Pollack effect would generate negatively charged exclusion zones and dark protons at the gravitational magnetic body.
  3. The presence of electric fields changes the energies of the electrons of O- and, by driving the difference of the bond energy and the binding energy near the thermal energy, can make the system very sensitive to transitions between the states of the OH-O- qubits. Quartz is piezoelectric, so that pressure gradients generate an electric field and have the same effect.
  4. The TGD interpretation is that the electric fields increase the sensitivity of quartz to the generation of O- ions plus dark protons at the gravitational monopole flux tubes. To some extent the system would become living.

    Fermentation (see this) creates alcohols, which contain the characteristic -OH group (OH → O- + dark proton). Also the basic information molecules of biology contain -OH groups and -NH groups and the same mechanism could be at work.

    Does this process occur in the body of the king? Mummification means dehydration: all moisture is removed so that metabolism does not occur and the body does not decay. At the molecular level, a dehydration reaction means that water molecules are removed from a molecule or ion. This can mean the removal of OH groups (see this). The basic information molecules contain -OH groups and -NH groups. This would suggest that in the mummified body the analog of the Pollack effect producing O- and N- qubits is not possible.

  5. Could some kind of collective consciousness assignable to quartz and water in the atmosphere wake up during rainmaking and induce the rain as a pair of macroscopic BSFRs? This would have no explanation in the framework of standard physics and in this sense would be literally a miracle, which we however experience every night and morning.
See the article About long range electromagnetic quantum coherence in TGD Universe or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, November 28, 2024

Does the universality of the holography-holomorphy principle make the notion of action unnecessary in the TGD framework?

It is gradually becoming clear that in the TGD framework the holography-holomorphy principle could make the notion of an action defining the space-time surfaces unnecessary at the fundamental level. Only the Dirac action for the second quantized free spinors of H and the induced Dirac action would be needed. Geometrization of physics would reduce to its algebraic geometrization, and number theoretical universality would allow a description of the correlates of cognition. The four-dimensionality of space-time surfaces would be essential in making the theory non-trivial by allowing the identification of vertices for fermion pair creation in terms of defects of the standard smooth structure of the space-time surface, turning it into an exotic smooth structure.

Holography=holomorphy as the basic principle

The holography=holomorphy principle allows one to solve the field equations for the space-time surfaces exactly by reducing them to algebraic equations.

  1. Two functions f1 and f2 depending on the generalized complex coordinates of H=M4×CP2 are needed to solve the field equations. These functions depend on the two complex coordinates ξ1 and ξ2 of CP2, the complex coordinate w of M4, and the hypercomplex coordinate u for which the coordinate curves are light-like. If the functions are polynomials, denote them f1==P1 and f2==P2.

    Assume that the Taylor coefficients of these functions are rational or in an extension of rationals, although even this is not necessary.

  2. f1=0 defines a 6-D surface in H and so does f2=0. This is because the condition gives two real conditions (both the real and the imaginary part of fi vanish). These 6-D surfaces are interpreted as analogs of the twistor bundles of M4 and CP2: they have a 2-sphere as fiber. This is a physically motivated assumption, which might require an additional condition stating that ξ1 and ξ2 are functions of w. This would define a map taking the twistor sphere of the twistor space of M4 to the twistor sphere of the twistor space of CP2 or vice versa. The map need not be a bijection but would be single-valued.

    The conditions f1=0 and f2=0 give a 4-D space-time surface as the intersection of these surfaces, identifiable as the base space of both twistor bundle analogs.

  3. The obtained equations are algebraic equations, not partial differential equations. Solving them numerically is child's play because they are completely local. TGD is solvable both analytically and numerically. The importance of this property cannot be overstated.
  4. However, a discretization is needed, which can be number-theoretic and defined by an extension of rationals. This is not necessary if one is interested only in the geometry and forgets the aspects related to algebraic geometry and number theory.
  5. Once these algebraic equations have been solved at the discretization points, a discretization for the spacetime surface has been obtained.

    The task is to assign to this discretization a space-time surface as a differentiable surface. Standard methods can be found for this task: what is needed is a method producing a surface for which the second partial derivatives exist, because they appear in the curvature tensor.

    An analogy is the graph of a function for which the (x,y) pairs are known in a discrete set. One can connect these points, for example, with straight line segments to obtain a continuous curve. A polynomial fit gives rise to a smooth curve (see the small interpolation sketch after this list).

  6. It is good to start with, for example, second-degree polynomials P1 and P2 of the generalized complex coordinates of H.
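
The 1-D analogy of item 5 can be made concrete with a minimal Python sketch: from a discrete set of (x,y) pairs one can build either a piecewise-linear curve (continuous but without second derivatives) or a cubic-spline interpolant with continuous second derivatives, which is the kind of smoothness the curvature tensor requires. The sample data below is arbitrary.

import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0.0, 1.0, 6)          # discretization points
y = np.sin(2 * np.pi * x)             # sampled values (placeholder data)

linear = lambda t: np.interp(t, x, y) # piecewise-linear interpolant: continuous, not C^2
spline = CubicSpline(x, y)            # cubic spline: continuous second derivatives

t = 0.37
print(linear(t), spline(t), spline(t, 2))   # value, value, and second derivative at t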

How could the solution be constructed in practice?

For simplicity, let's assume that f1==P1 and f2==P2 are polynomials.

  1. First, one can solve for instance the equation P2(u,w,ξ1,ξ2)=0, giving for example ξ2(u,w,ξ1) as its root. Any of the complex coordinates w, ξ1 or ξ2 is a possible choice, and these choices can correspond to different roots as space-time regions; all must be considered to get the full picture. A completely local ordinary algebraic equation is in question, so that the situation is infinitely simpler than for second order partial differential equations. This miracle is a consequence of holomorphy.
  2. Substitute ξ2(u,w,ξ1) into P1 to obtain the algebraic function P1(u,w,ξ1,ξ2(u,w,ξ1)) = Q1(u,w,ξ1).
  3. Solve ξ1 from the condition Q1=0. Now one is dealing with the roots of an algebraic function, but the standard numerical solution is still infinitely easier than for partial differential equations.

    After this, the discretization must be completed to a space-time surface using some method that produces a surface for which the second partial derivatives are continuous. A minimal symbolic sketch of the elimination-and-root-finding steps is given below.
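
The following is a minimal symbolic sketch of the procedure with sympy. The second-degree polynomials P1 and P2 are arbitrary illustrative choices, not taken from the TGD literature, and the elimination of ξ2 is done with a resultant, which is equivalent to solving P2=0 for ξ2 and substituting the root into P1.

import sympy as sp

u, w, xi1, xi2 = sp.symbols('u w xi1 xi2')

# Example second-degree polynomials in the generalized complex coordinates.
P1 = xi1**2 + w*xi2 + u
P2 = xi2**2 + w*xi1 + u*w

# Steps 1-2: eliminate xi2. The resultant vanishes exactly when P1 and P2 have a
# common xi2 root, so Q1 plays the role of P1(u, w, xi1, xi2(u, w, xi1)).
Q1 = sp.resultant(P1, P2, xi2)

# Step 3: the equation is completely local, so pick a discretization point (u0, w0)
# and solve the ordinary algebraic equation Q1 = 0 for xi1.
u0, w0 = sp.Rational(1, 2), sp.Integer(1)
xi1_roots = sp.Poly(Q1.subs({u: u0, w: w0}), xi1).nroots()

# Recover the corresponding xi2 values from P2 = 0.
for r1 in xi1_roots:
    for r2 in sp.solve(P2.subs({u: u0, w: w0, xi1: r1}), xi2):
        print("(xi1, xi2) =", (r1, r2))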

Algebraic universality

What is so remarkable is that the surfaces (f1,f2)=(0,0) are solutions to the variation of any action, provided the action is general coordinate invariant and depends only on the induced geometry: the induced metric and the tensors associated with it, such as the curvature tensor, and the induced gauge fields and the tensors associated with them. The reason is that complex analyticity implies that in the field equations only contractions of complex tensors of different types appear. The second fundamental form (exterior curvature), defined by the covariant derivatives of the tangent vectors of the space-time surface, is a complex tensor of type (2,0)+(0,2), and the tensors contracted with it, such as the inverse of the induced metric appearing in its trace, are of type (1,1). The result is identically zero. The holography-holomorphy principle thus provides a nonlinear analog of massless field equations, and the 4-surfaces can be interpreted as trajectories of particles that are 3-surfaces instead of point particles, i.e. as generalizations of geodesics. Geodesics are indeed 1-D minimal surfaces. We obtain a geometric version of the field-particle duality.
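
The contraction argument can be written out in one line; the following is a minimal LaTeX sketch assuming only the index types stated above, namely that the contracted tensor S^{αβ} (for example the inverse induced metric) is of type (1,1) and the second fundamental form T_{αβ} is of type (2,0)+(0,2):

  S^{\alpha\beta} T_{\alpha\beta} = S^{i\bar{j}} T_{i\bar{j}} + S^{\bar{j}i} T_{\bar{j}i} = 0 ,
  \qquad \text{since } T_{i\bar{j}} = T_{\bar{j}i} = 0 \text{ for a } (2,0)+(0,2) \text{ tensor.}

Every term in the contraction pairs a holomorphic index of S with an antiholomorphic index of T (or vice versa), and all such mixed components of T vanish by assumption.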

Number-theoretical universality

If the coefficients of the functions f1 and f2 are in an extension of rationals, number-theoretical universality is obtained. The solution in the real case can also be interpreted as a solution in the p-adic cases p=2,3,5,7,... when one allows the extension of the p-adic number fields induced by the extension of rationals.

p-adic variants of the space-time surfaces are cognitive representations of the real surfaces. The so-called ramified primes are in a special position: they can be identified as the prime factors of the discriminant. A prime number is now a prime of an algebraic extension. This makes adelic physics possible as a geometric correlate of cognition. Cognition itself is assignable to quantum jumps.

Is the notion of action needed at all at the fundamental level?

The universality of the space-time surfaces solving the field equations determined by the holography=holomorphy principle forces one to ask whether the notion of action is completely unnecessary. Does restricting the geometry to algebraic geometry and number theory replace the action principle completely? This could be the case.

  1. The vacuum functional exp(K), where the Kähler function K corresponds to the classical action, could be identified as the discriminant D associated with a polynomial. It would therefore be determined entirely by number theory as a product of differences of the roots of a polynomial P or, in fact, of any analytic function. The problem is that the space-time surfaces are determined as roots of two analytic functions f1 and f2, rather than only one.
  2. Could one define 2-surfaces by allowing a third analytic function f3 so that the roots of (f1,f2,f3)=(0,0,0) would be 2-D surfaces? One can solve the 3 complex coordinates of H as functions of the hypercomplex coordinate u, whereas its dual remains free. One would have a string world sheet with a discrete set of roots for the 3 complex coordinates whose values depend on time. By adding a fourth function f4 and substituting the 3 complex coordinates, f4=0 would allow as its roots values of the coordinate u. Only real roots would be allowed. A possible interpretation of these points of the space-time surface would be as loci of singularities at which the minimal surface property, i.e. holomorphy, fails.

    Note that for quadratic equations ax^2+bx+c=0 the discriminant is D = b^2-4ac, which equals a^2(r1-r2)^2; more generally the discriminant is the product of the squared differences of the roots (up to a power of the leading coefficient). This definition applies also when f1 and f2 are not polynomials. The assumptions that some power of D corresponds to exp(K) and that K corresponds to the action imply additional conditions on the coupling constants appearing in the action, i.e. the coupling constant evolution. (A small check of the quadratic case is given after this list.)

  3. This is not yet quite enough. The basic question concerns the construction of the interaction vertices for fermions. These vertices reduce to the analogs of gauge theory vertices in which the induced fermion current assignable to the volume action is contracted with the induced gauge boson field.

    The volume action is a unique choice in the sense that in this case the modified gamma matrices, defined as contractions of the canonical momentum currents of the action with the gamma matrices of H, reduce to the induced gamma matrices, which anticommute to the induced metric. For a general action this is not the case.

    The vertex for fermion pair creation corresponds to a defect of the standard smooth structure of the space-time surface and means that it becomes an exotic smooth structure. These defects emerge only in dimension D=4 and make this dimension unique. In TGD, bosons are bound states of fermions and antifermions, so that this also gives the vertices for the emission of bosons.

    For graviton emission one obtains an analogous vertex involving the second fundamental form at the partonic orbit. The second fundamental form would have a delta function singularity at the vertex and vanish elsewhere. If the field equations hold also at the vertex, the action must contain an additional term, say the Kähler action. Could the singularity of the second fundamental form correspond to the defect of the standard smooth structure?

  4. If this view is correct, number theory and algebraic geometry combined with the geometric vision would make the notion of action unnecessary at the fundamental level. Geometrization of physics would be replaced by its algebraic geometrization. Action would however remain a useful tool at the QFT limit of TGD.
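
The quadratic case mentioned in item 2 can be checked with a few lines of sympy; this is a small sanity check under standard conventions, with no TGD-specific input: for ax^2+bx+c the discriminant b^2-4ac equals a^2(r1-r2)^2, i.e. the product of squared root differences up to a power of the leading coefficient.

import sympy as sp

a, b, c, x = sp.symbols('a b c x')
r1, r2 = sp.solve(a*x**2 + b*x + c, x)   # the two roots of the quadratic

lhs = b**2 - 4*a*c                       # the textbook discriminant
rhs = a**2 * (r1 - r2)**2                # leading coefficient squared times squared root difference
print(sp.simplify(lhs - rhs))            # prints 0
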
See the article TGD as it is towards end of 2024: part II or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

How to assign ordinary Galois groups and ramified primes to the space-time surfaces in holography=holomorphy vision?

The holography=holomorphy vision allows one to reduce the construction of space-time surfaces to finding the roots of a pair (f1,f2) of analytic functions of one hypercomplex coordinate and 3 complex coordinates of H=M4×CP2. This allows iteration, the basic operation of chaos theory. One can consider general maps (f1,f2)→ G(f1,f2)=(g1(f1,f2),g2(f1,f2)) and iterate them. The special case (g1(f1,f2),g2(f1,f2))=(g1(f1),g2(f2)) gives iterations of functions gi of a single complex variable appearing in the construction of Mandelbrot and Julia fractals.

Extensions of rationals, Galois groups, and ramified primes assignable to polynomials of a single complex variable are central in the number theoretic vision. It is however not completely clear how they should emerge from the holography=holomorphy vision.

  1. If the functions gi==Pi are polynomials vanishing at the origin (0,0) (this is not a necessary condition), the surfaces (f1,f2)=(0,0) are roots of (P1(f1,f2),P2(f1,f2))=(0,0). Besides these roots, there are roots for which (f1,f2) does not vanish. One can solve the roots f2=h(f1) from P2(f1,f2)=0 and substitute into P1(f1,f2)=0 to get P1(f1,h(f1))==(P1°h)(f1)=0. Its roots are algebraic numbers if the coefficients of P1 are in an extension of rationals. One can assign to these roots a discriminant, ramified primes, and a Galois group (a minimal computational illustration is given after this list). This is just what the phenomenological number theoretical picture requires.
  2. In the earliest approach to M8-H duality, summarized in (see this, this, and this), polynomials P of a single complex coordinate played a key role. Although this approach was a failure, it added to the number theoretic vision Galois groups and ramified primes as prime factors of the discriminant of P, identified as p-adic primes in p-adic mass calculations. Note that in the general case the ramified primes are primes of algebraic extensions of rationals: the simplest case corresponds to Gaussian primes, and Gaussian Mersenne primes indeed appear in the applications of TGD (see this and this).

    The problem was how to assign a Galois group and ramified primes to the space-time surfaces as 4-D roots of (f1,f2)=(0,0). One can indeed define a counterpart of the Galois group as analytic flows permuting the various 4-D roots of (f1,f2)=(0,0) (see this). Since the roots are 4-D surfaces, it is far from clear whether there exists a definition of the discriminant as an analog of the product of root differences. It is also unclear what the notion of prime could mean.

    However, the ordinary Galois group plays a key role in the number theoretic vision: can one identify it? The physics inspired proposal has been that the ordinary Galois group can be assigned to the partonic 2-surfaces, so that the points of the partonic 2-surface as roots of a polynomial give rise to the Galois group and ramified primes. An alternative identification of the ordinary Galois group and ramified primes would be in terms of (P1(f1,f2),P2(f1,f2))=(0,0).
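
As a minimal computational illustration of the last step of item 1, the following sympy lines compute, for a polynomial with rational coefficients, the discriminant and the ramified primes as its prime factors. The cubic below is an arbitrary example, not one appearing in the TGD literature.

import sympy as sp

x = sp.symbols('x')
P = sp.Poly(x**3 - x - 1, x)              # an arbitrary example polynomial

D = sp.discriminant(P)                    # discriminant; equals -23 for this cubic
ramified = sorted(sp.factorint(abs(D)))   # prime factors of |D|, i.e. the ramified primes

print("discriminant:", D)                 # -23
print("ramified primes:", ramified)       # [23]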

See the article Space-time surfaces as numbers, Turing and Gödel, and mathematical consciousness or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.