https://matpitka.blogspot.com/2024/12/

Monday, December 30, 2024

From blackholes to time reversed blackholes: comparing the views of time evolution provided by general relativity and TGD

The TGD inspired very early cosmology is dominated by cosmic strings. Zero energy ontology (ZEO) suggests that this is also the case for the very late cosmology, that is, the cosmology emerging after a "big" state function reduction (BSFR) changing the arrow of time. Could this be a counterpart of the general relativity based vision of black holes as the endpoint of evolution? Now they would also be starting points of time evolutions as mini cosmologies with a reversed time direction. This would conform with the TGD explanation for stars and galaxies older than the Universe and with the picture produced by JWST.

First some facts about zero energy ontology (ZEO) are needed.

  1. In ZEO, space-time surfaces satisfy almost deterministic holography and have their ends at the boundaries of a causal diamond (CD), which is the intersection of future and past directed light-cones and can have a varying scale. The 3-D states at the passive boundary of the CD are unaffected in "small" state function reductions (SSFRs) but change at the active boundary. In the simplest scenario the CD itself suffers a scaling in SSFRs.
  2. In BSFRs the roles of the passive and active boundaries of the CD change. The self defined by the sequence of SSFRs "dies" and reincarnates with an opposite arrow of geometric time. The hierarchy of effective Planck constants predicted by TGD implies that BSFRs occur even in cosmological scales, and this could happen even for blackhole-like objects in the TGD counterpart of evaporation.
Also some basic ideas related to TGD based astrophysics and cosmology are in order.
  1. I have suggested that the counterpart of the GRT black hole in TGD, which I call a blackhole-like object (BH), is a maximally dense volume-filling flux tube tangle (see this). Actually, an entire hierarchy of BHs with quantized string tension is predicted (see this): ordinary BHs would correspond to flux tubes consisting of nucleons (these correspond to the Mersenne prime M107 in TGD) and would be essentially giant nuclei.

    M89 hadron physics and the corresponding BHs are in principle also possible and have a string tension scale which is 512 times higher than that of the flux tubes associated with ordinary blackholes. Surprisingly, they could play a key part in solar physics.

  2. The very early cosmology of TGD (see this) corresponds to the region near the passive boundary of the CD, which would be cosmic string dominated. The upper limit for the temperature would be the Hagedorn temperature. Cosmic strings are 4-D objects, but their M4 projection is extremely thin so that they look like strings in M4.

    The geometry of the CD strongly suggests a scaled-down analog of the big bang at the passive boundary and of the big crunch at the active boundary, the latter as a time reversal of the big bang induced by a BSFR. Should this picture also apply to the evolution of a BH? Could one think that a gas of cosmic strings evolves to a BH or to several of them?

  3. In ZEO, the situation at the active future boundary of the CD after the BSFR should be similar to that at the passive boundary before it. This requires that the evaporation of the BH at the active boundary occurs as an analog of the big bang and gives rise to a gas of flux tubes as an analog of the cosmic string dominated cosmology. Symmetry between the boundaries of the CD would be achieved.
  4. In general relativity, the fate of all matter is to end up in blackholes, which possibly evaporate. What about the situation in TGD? Does all matter end up in a tangle formed by volume-filling flux tubes, which then evaporates to a gas of flux tubes in an analog of the Big Bang?

    Holography = holomorphy vision states that space-time surfaces can be constructed as roots of pairs (f1,f2) of analytic functions of 3 complex coordinates and one hypercomplex coordinate of H=M4× CP2. By holography the data would reside at certain 3-surfaces. The 3-surfaces at either end of the causal diamond (CD), the light-like partonic orbits, and lower-dimensional surfaces are good candidates in this respect.

    Could the matter at the passive boundary of CDs consist of monopole flux tubes, which in TGD form the building bricks of blackhole-like objects (BHs), and could the BSFR leading to the change of the arrow of geometric time transform the BH at the active boundary of the CD to a gas of monopole flux tubes? This would allow a rather detailed picture of what the space-time surfaces look like.
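The factor 512 quoted above for M89 hadron physics follows from the p-adic length scale hypothesis, under which mass scales behave as 2^(-k/2). A minimal arithmetic check (Python, written for this note):

```python
# p-adic length scale hypothesis: mass scale m(k) ~ 2^(-k/2) in fixed units,
# so the ratio of the M89 to the M107 mass scale is 2^((107-89)/2).
k_ordinary = 107  # Mersenne prime exponent of ordinary hadron physics
k_heavy = 89      # Mersenne prime exponent of the conjectured M89 hadron physics

ratio = 2 ** ((k_ordinary - k_heavy) / 2)
print(ratio)  # 512.0
```

This only checks the bookkeeping of the exponents; the identification of the two hadron physics with the Mersenne primes M107 and M89 is the blog's own hypothesis.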

Black hole evaporation as an analog of time reversed big bang can be a completely different thing in TGD than in general relativity.
  1. Let's first see whether a naive generalization of the GRT picture makes sense.
    1. The temperature of a black hole is T=ℏ/(8πGM) (in units with c=k_B=1). For the ordinary ℏ it is extremely low, so that blackhole radiation would consist of extremely low-energy quanta.
    2. If ℏ is replaced by ℏgr=GMm/β0, the situation changes completely: one obtains T=m/β0 apart from the numerical factor. The temperature associated with massive particles of mass m is essentially m, so that each particle species would have its own relativistic temperature. What about photons? They could have a very small mass in p-adic thermodynamics.
    3. If m=M, one obtains T=M/β0. This temperature looks completely insane. I have developed a quantum model of the blackhole as a quantum system, and in this situation the notion of temperature does not make sense.
  2. Since the counterpart of the black hole would be a flux-tube-like object, the Hagedorn temperature TH is a more natural guess for the counterpart of the evaporation temperature and also of the blackhole temperature. In fact, the ordinary M107 BH would correspond to a giant nucleus as a nuclear string. Also an M89 BH can be considered. A straightforward dimensionally motivated guess for the Hagedorn temperature, suggested by the p-adic length scale hypothesis, is TH= xℏ/L(k), where x is a numerical factor. For blackholes as k=107 objects this would give a temperature of order 224 MeV for x=1. Hadron physics gives experimental evidence for a Hagedorn temperature of about T=140 MeV, near the pion mass and near the scale determined by ΛQCD, which would be naturally related to the hadronic value of the cosmological constant Λ.
  3. One can of course ask whether BH evaporation in this sense is just the counterpart of the formation of a supernova. Could the genuine BH be an M89 blackhole formed as an M107 nuclear string transforms to an M89 nuclear string and then evaporates in a mini Big Bang? Could the volume-filling property of the M107 flux tube make possible the touching of string portions inducing the transition to M89 hadron physics, just as it is proposed to do in the process corresponding to the formation of QCD plasma in TGD (see this)?
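The temperature estimates in item 1 can be made concrete. Below is a sketch (Python, standard CODATA constants, β0 set to 1 purely for illustration) of the ordinary Hawking temperature T = ℏc³/(8πGMk_B) and of its modification when ℏ is replaced by ℏgr = GMm/(β0 c); the 8π prefactor is kept here although the text above drops it:

```python
import math

hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2
k_B = 1.380649e-23       # J/K
M_sun = 1.98892e30       # kg

# Ordinary Hawking temperature T = hbar c^3 / (8 pi G M k_B).
def hawking_T(M):
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

print(f"T(solar-mass BH) = {hawking_T(M_sun):.2e} K")   # ~6e-8 K: absurdly cold

# Replacing hbar by hbar_gr = G M m / (beta0 c) cancels G and M, leaving
# k_B T = m c^2 / (8 pi beta0): each particle species gets its own
# near-relativistic temperature, as stated in the text.
beta0 = 1.0
m_p_c2_MeV = 938.272     # proton rest energy in MeV
kT_MeV = m_p_c2_MeV / (8 * math.pi * beta0)
print(f"k_B T for protons = {kT_MeV:.1f} MeV")          # tens of MeV
```

The second number illustrates why the naive generalization gives relativistic temperatures for massive particles; smaller β0 would raise it further.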
For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, December 28, 2024

Discussions about life and consciousness with Krista Lagus, Dwight Harris, Bill Ross, and Erik Jensen

There was a very entertaining and idea-rich discussion with Krista Lagus (this), Dwight Harris, Bill Ross, and Erik Jensen at the FB page of Krista Lagus (see this) and also at my own FB page (see this).

Since the discussion stimulated several interesting observations and helped to add details to the formulation of my own views, I thought that it might be a good idea to summarize some of the basic points so as not to forget the new observations.

A. The topics discussed with Krista Lagus

We had an interesting discussion with Krista Lagus and I had an opportunity to explain the basic ideas of TGD inspired quantum view of consciousness and biology.

A.1 Touching

The discussion started from the post of Krista Lagus in which she talked about "quantum touch" meaning that physical touch would involve quantum effects, a kind of fusion of two quantum systems leading to entanglement.

In the attempt to understand what could happen in touch, the basic question is what living objects are. One of the key characteristics of a living system is coherence in the scale of the organism. Standard quantum theory does not support quantum coherence above molecular scales. What makes the living system coherent so that it is more than a mere molecular soup?

The guess is that quantum coherence at some, yet unidentified, level induces the coherence at the level of biomatter. Natural candidates are classical electromagnetic and gravitational fields. TGD leads to a new view of space-time and to the notion of many-sheeted space-time with non-trivial topology in all scales. We see this non-trivial topology with bare eyes: the physical objects that we see around us correspond to space-time sheets. Also classical long-range fields have space-time correlates as topological field quanta: magnetic and electric flux quanta. One can speak of electric and magnetic bodies and also of gravitational magnetic bodies, and they accompany biological bodies.

The second new TGD based element is that the field bodies carry phases of ordinary matter with non-standard, possibly very large, values of the effective Planck constant. The quantum coherence of the field bodies induces the coherence of biomatter. Both the gravitational Planck constants for the Earth and the Sun and the electric Planck constants for systems varying from DNA to the Earth are involved and can be very large.

Also field bodies can touch each other, and this would give rise to remote mental interactions. Also biological bodies can touch each other, and this gives rise to a fusion of them as conscious entities, even in the scale of organisms. Basically this touch would occur at the level of field bodies in the scale of organisms. The formation of a monopole flux tube pair between two systems, be they molecules, organisms, or their field bodies, would be a universal mechanism of touch. U-shaped tentacles of the two systems would reconnect (by touching) and form a flux tube pair connecting the systems. This mechanism would be behind biocatalysis and the immune system.

A.2 Could statistical averaging have an analog at the level of conscious experience?

Krista Lagus criticized my proposal that the statistical averaging crucial for the applications of quantum theory could have an analog at the level of conscious experience.

The motivation for statistical averaging at the level of the sensory experiences of *subselves* determining mental images is that in its absence the sensory experience would fluctuate strongly if the outcome of a single SFR determined it. The averaging would do at the level of conscious experience the same as ensemble averaging does in quantum theory. At the level of self this would not occur.

A subself should correspond to an ensemble of nearly identical subselves or to a temporal sequence (an ensemble as a time crystal, made possible by the classical non-determinism behind "small" SFRs) of states of a single subself. I remember that there is evidence that this kind of temporal averaging takes place and I have even written about it. A good example is provided by a sensory organ defining a subself consisting of sensory receptors as its subselves. Color summation would reflect this kind of conscious averaging.
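The stabilizing effect of ensemble averaging invoked here is just the 1/√N law for fluctuations of a mean. A toy illustration (Python; not a model of receptors, only of the statistics, with random numbers standing in for single-SFR outcomes):

```python
import random
import statistics

random.seed(0)

def mean_of_ensemble(n_receptors):
    # Each "receptor" outcome is a random number in [0, 1),
    # mimicking the outcome of a single SFR.
    return statistics.fmean(random.random() for _ in range(n_receptors))

# The spread of the ensemble average over repeated trials
# shrinks like 1/sqrt(N) as the ensemble size N grows.
for n in (1, 100, 10000):
    trials = [mean_of_ensemble(n) for _ in range(200)]
    print(n, round(statistics.stdev(trials), 4))
```

A large receptor ensemble thus yields a nearly non-fluctuating average, which is the point of the color-summation example above.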

There is however an objection against this argument. There are two kinds of SFRs: "big" and "small" ones. In "big" SFRs, the counterparts of ordinary quantum measurements, the state of the system can change radically, and also the arrow of geometric time changes in the TGD Universe. In "small" ones, whose sequence is the TGD counterpart of a sequence of repeated measurements of the same observables, the changes are small, and one might argue that in this case the argument requiring a conscious counterpart for the statistical averaging does not hold true.

A.3 The importance of geometry

What follows is my reaction to an earlier comment of Krista Lagus mentioning the importance of geometry. Certainly the environment is important. A highly symmetric geometry favours resonances, and if a classical electromagnetic field is involved, this can be important.

Suppose that the creation of a connection to the electric and gravitational magnetic bodies is a prerequisite for achieving a higher level of consciousness (the alpha rhythm of 10 Hz and the gamma rhythm of 40 Hz are examples). This would require the generation of particles with a large value of heff serving as a measure for algebraic complexity and for the scale of quantum coherence.

Suppose that the formation of OH-O- + dark proton qubits occurs, taking dark protons to the gravitational magnetic body, and involves the Pollack effect and its generalizations. This creates negatively charged regions as exclusion zones and strong electric fields, so that also electric bodies become important (note that DNA has a constant density of negative charge). Large negative electric charges increase the value of the electric Planck constant. I had not noticed this, a trivial fact as such, earlier.

Quartz crystals are believed to induce altered states of consciousness, and I have a personal experience of this. One could think that dark protons are transferred from quartz crystals (quartz is everywhere) to gravitational magnetic bodies and generate a negative charge giving rise to large-scale electric quantum coherence. The geometry of holy buildings involves sharp tips (church towers). Also pyramids have sharp tips and a highly symmetric geometry favouring resonances. In the presence of charge, these tips carry strong electric fields characterized by a large electric Planck constant (something new) generating quantum coherence (see this).

There is experimental evidence of strange phenomena associated with this kind of buildings: the latest case was in Finland this Christmas, a dynamical light pillar associated with a church. So-called UFOs, assigned with lines of tectonic activity and possibly representing a plasmoid life form, are a second familiar example. Also crop circles involve light balls.

A.4 Bioharmony

In the TGD framework a model for music harmony led to the notion of icosahedral harmony. The icosahedron has 12 vertices and 20 triangular faces. Also the octahedron and the tetrahedron have triangular faces. A 12-note scale can be represented as a Hamiltonian cycle at the icosahedron, and the cycles can be classified by their symmetries: there are 3 types of them, and by combining 3 Hamiltonian cycles one obtains a union of three harmonies with 20+20+20 3-chords defined by the triangular faces. The surprising finding was that one can identify these 3-chords as 60 DNA codons and that the numbers of triangles related by the symmetry of the cycle correspond to the numbers of DNA codons coding for a given amino acid. The addition of a tetrahedron gives the tetrahedral cycle, and this gives 64 DNA codons.
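The face and codon bookkeeping of this paragraph can be checked in a few lines (a sketch in Python; the Hamiltonian-cycle classification itself is not reproduced here, only the counting):

```python
# Vertex/edge/face counts of the two Platonic solids used in the bioharmony model.
icosahedron = {"V": 12, "E": 30, "F": 20}
tetrahedron = {"V": 4, "E": 6, "F": 4}

# Sanity check: Euler's formula V - E + F = 2 holds for both solids.
for solid in (icosahedron, tetrahedron):
    assert solid["V"] - solid["E"] + solid["F"] == 2

# Three Hamiltonian cycles give three harmonies of 20 3-chords each
# (one chord per triangular face), i.e. 60 chords; the tetrahedron's
# 4 faces bring the total to the 64 DNA codons.
chords = 3 * icosahedron["F"] + tetrahedron["F"]
print(chords)  # 64
```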

The interpretation would be that the chords correspond to triplets of dark photons providing a realization of the genetic code. Music represents and induces emotions, so that the bioharmonies would represent moods at the molecular level. DNA would give a 6-qubit code and also represent emotions.

What did not look nice was that both the icosahedron and the tetrahedron were needed. Much later I realized that the hyperbolic 3-space H^3, central in TGD (a 3-surface of Minkowski space with constant light-cone proper time, or a 3-surface with constant cosmic time), allows a completely unique tessellation (lattice) consisting of tetrahedrons, octahedrons and icosahedrons: usually only one Platonic solid is possible. This tessellation appears in various scales, and the genetic code could be a universal manner to represent information, realized in many scales in biology and also outside biology.

To get an idea about the evolution of these ideas see for instance this, this, this, and this.

B. Dwight Harris and parity violation

There was a comparison of views about consciousness and biology with Dwight Harris. One of his key ideas is the importance of chiral selection, and this stimulated a concrete idea about how chiral selection could be forced by the holography = holomorphy principle of TGD, in the same way as matter antimatter asymmetry would be forced by it (see this).

Dwight Harris assumes that parity violation is necessary for conscious memory. In TGD, this is not the case. The notion of conscious memory is an essential element of theories of quantum consciousness but standard QM does not allow it. In TGD the situation changes due to the replacement of the standard ontology of QM with what I call zero energy ontology.

Parity violation, manifesting itself as chiral selection, is however essential for biological life. Chiral selection is a mystery in the standard model, in which parity violation is extremely small above the intermediate gauge boson Compton length Lw of order 10^-17 meters. Weak bosons are however effectively massless below Lw, where parity violation is large.

In the TGD framework the situation changes, since a large value of the effective Planck constant scales up the scale of parity violation. Dark weak bosons are effectively massless below the scale (heff/h)×Lw, so that for large enough heff the parity violation can be large even in scales of order cell size and longer. This could explain chiral selection.
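The order of magnitude of heff/h needed for this mechanism can be read off directly: the dark weak scale (heff/h)×Lw must reach the cell size. A rough estimate (Python; Lw from the text's quoted order of magnitude, the cell size of 10 μm an assumption made here for illustration):

```python
L_w = 1e-17        # weak boson Compton length in meters (order of magnitude from the text)
L_cell = 1e-5      # typical cell size in meters (assumed here for illustration)

# heff/h required to scale the effectively massless weak-boson regime up to cell size.
heff_ratio = L_cell / L_w
print(f"heff/h ~ {heff_ratio:.0e}")
```

So ratios of order 10^12 would be needed for cell-scale parity violation; longer biological scales would require correspondingly larger values.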

A highly interesting question is whether the chiral selection caused by parity violation is essential for all forms of life: TGD indeed predicts that life is present in all scales and that biological life is only one particular special case. Why would parity violation be unavoidable? Why would the states with different chiralities have different energies?

Or could the explanation be at a much deeper level and based on the holography = holomorphy principle? Could this principle allow only one of the two chiralities? Complex conjugation is analogous to reflection. Could different chiralities be like an analytic function f(z) and its conjugate f*(z*)? For a given space-time region only one of these is possible: the glove cannot be simultaneously left- and right-handed, and the option with the minimal energy is selected.

I have already earlier proposed that holography=holomorphy principle forces matter antimatter asymmetry in the sense that the space-time surface decomposes to regions containing preferentially matter or antimatter (see for instance this).

C. Bill Ross and G-quadruplexes

The discussion with Bill Ross made me conscious of the notion of G-quadruplexes, which I have certainly encountered before but without any conscious response. I tried to get from the Wikipedia article (see this) some idea of what is involved.

There can be 4 separate short G-rich DNA strands; 2 DNA strands, which are looped so that there are 4 parallel strands; or a single strand, which is doubly looped, giving again locally 4 parallel strands. A kind of tangle is formed. Planar G-tetrads are formed between the 4 strand portions, each involving a G letter from each strand. For instance, the telomere regions contain these structures, which have biological functions. G-quadruplexes appear also elsewhere and seem to be important in the control of the transcription of DNA.

I also understood that the K+ cations stabilizing the system sit between the G-tetrads, which can be formed in G-rich regions of DNA such as telomeres. There are pairs of cations. 2 negative carbonyls would mean 2 C=O- groups. A Cooper pair of electrons comes to mind, as Bill Ross suggested, and these kinds of pairs are associated with aromatic rings. Bill Ross was especially interested in what TGD could say about the situation.

This is what I can say. Since phosphates containing O- are present, and a base pair of DNA strands corresponds to 3+3=6 dark proton qubits, one would have a doubling of the OH-O- + dark proton qubits, that is 6+6=12 qubits per codon quadruplet.

This could increase the quantum computational power possibly associated with the codon sequences. In quantum computation, N qubits correspond to a 2^N-dimensional state space, so that doubling the number of qubits from N to 2N increases the dimension of the state space by a factor 2^N, although the state function reductions halting the computation give rise to a state characterized by classical bits in a basis which depends on the observables measured.

There is an interesting analogy with the beta sheets of proteins. Each amino acid would define a qubit associated with its COOH part. N parallel protein folds would increase the number of qubits by a factor N. Could the beta sheets of proteins perform quantum computations?
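The dimension counting above can be made explicit: 6 qubits per codon give a 2^6 = 64-dimensional space, matching the 64 codons, and doubling to 12 qubits raises this to 2^12 = 4096 = 64², a factor 2^6. A sketch (Python):

```python
# State-space dimension for N qubits is 2**N.
def dim(n_qubits: int) -> int:
    return 2 ** n_qubits

n_codon = 6            # OH-O- + dark proton qubits per codon (from the text)
n_quadruplex = 12      # doubled count for the 4-stranded region

print(dim(n_codon))       # 64, matching the number of DNA codons
print(dim(n_quadruplex))  # 4096 = 64**2: doubling the qubit count squares the dimension
```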

D. Erik Jensen and the two meanings of purification

Erik Jensen talked about purification as a process of purifying the mind. It is amusing that purification and distillation are terms used in quantum computation for the generation of pure states from mixed states, which are non-pure because of entanglement with the environment, so that a density matrix must be used to describe them. I learned just yesterday that without the so-called magic states produced by distillation (see this), quantum computation could do only what classical computation can do. Meditation is usually seen as an attempt to reduce attachment to the external world. Could the physical meaning of this be that it makes genuine quantum computational processes possible! Enlightenment as a transition from classical to quantum computation!;-)

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, December 26, 2024

Is dark energy needed at all?

I received links to two very interesting ScienceDaily articles from Mark McWilliams. The first article (see this) discusses dark energy. The second article (see this) discusses the Hubble tension. Both articles relate to the article "Cosmological foundations revisited with Pantheon+" published by Lane ZG et al in the Monthly Notices of the Royal Astronomical Society (see this).

On the basis of their larger than expected redshifts, supernovae in distant galaxies appear to be farther away than they should be, and this inspires the notion of dark energy explaining accelerated expansion. The article proposes a different explanation, based on giving up the Friedmann cosmology together with the notion of dark energy. This would also resolve the Hubble tension discussed in the second popular article. The argument goes as follows.

  1. Gravitation slows down clocks. Clocks tick faster inside large voids than at their boundaries, where galaxies reside. When light passes through a large void, it therefore ages more than if it passed through the same region with an average mass density.
  2. Light from distant supernovae spends a longer time inside voids than in passing through galaxies. The redshift for supernovae in distant galaxies therefore appears to be larger, so that they seem to be farther away and the expansion apparently accelerates.
  3. This is also argued to explain the Hubble tension, meaning that the cosmic expansion rate, characterized by the Hubble constant, is smaller for objects in the early Universe than for nearby objects.
This looks to me like a nice argument. The model forces us to give up the standard Robertson-Walker cosmology. Qualitatively the proposal conforms with the notion of many-sheeted space-time predicting a Russian doll cosmology defined by space-time sheets condensed on larger space-time sheets. For the general view of TGD inspired cosmology see this.

I have written two articles about what I call magnetic bubbles, using the term "mini big bang" (see this and this). A supernova explosion would be one example of a mini big bang. Also planets would be created in mini big bangs.

But what about galactic dark matter? TGD predicts an analog of dark energy as the Kaehler magnetic and volume energy of cosmic strings and of the monopole flux tubes generated as the strings thicken and produce ordinary matter; this energy is identifiable as galactic dark matter. No dark matter halo is predicted, only cosmic strings and monopole flux tubes with a much smaller string tension appearing in all scales, even in biology.

It should be noticed that TGD also predicts phases of ordinary matter with a non-standard value of Planck constant behaving like dark matter. The transformation of ordinary matter to these kinds of phases would explain the gradual disappearance of baryonic (and also leptonic) matter. These phases are absolutely essential in TGD inspired quantum biology: they reside at the field bodies of organisms, which have a much larger size than the organism itself, and their quantum coherence induces the coherence of biomatter.

See the article The blackhole that grew too fast, why Vega has no planets, and is dark energy needed at all? or the chapter About the recent TGD based view concerning cosmology and astrophysics.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, December 25, 2024

Oceans around quasars and the origin of life

One of the many astonishing recent findings in astrophysics is the discovery of 10 trillion oceans' worth of water circling the supermassive black hole of a quasar (see this). Despite being 300 trillion times less dense than the Earth's atmosphere, the water vapour is five times hotter and hundreds of times denser than gas found in typical galaxies. The density ρ of the Earth's atmosphere is about 1/800 of that of water.

Consider first the average density of these oceans circling quasars.

  1. The number density n(H2O) of water molecules in condensed matter at room temperature is about n(H2O) = .5×10^29 molecules/m^3. Therefore the density of the atmosphere corresponds to natm = .4×10^29 water molecules/m^3. The average number density of H2O molecules in the oceans accompanying quasars is therefore n = 10^-15×natm/3 ≈ .4×10^13 molecules/m^3. The edge of a cube containing a single water molecule would be L = 1/n^{1/3} = .5×10^-4 m. This is the size scale of a neuron. A blob of water at the normal density has Planck mass and a size of about 10^-4 m. Could this have some deeper meaning?
  2. Could the water molecules be dark or involve dark protons assignable with gravitational monopole flux tubes? At the surface of the Earth the monopole flux tubes give rise to the "endogenous" magnetic field, explaining the findings of Blackman and others about quantal effects of ELF radiation on vertebrate brains. They would carry a magnetic field of .2 Gauss and would have a magnetic length (2ℏ/eB)^{1/2} = 5.6 μm serving as an estimate for the radius of the flux tube. The assumption that the local density of water equals the average density could of course be wrong: one could also consider the formation of water blobs.
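The order-of-magnitude chain in item 1 can be rechecked from first principles (Python; the dilution factors 1/800 and 3×10^14 are taken from the text, the rest are standard constants — the intermediate values differ from the text's by modest factors, but the final ~10^-4 m scale survives):

```python
u = 1.66054e-27          # atomic mass unit, kg
rho_water = 1.0e3        # density of liquid water, kg/m^3
m_H2O = 18 * u           # mass of a water molecule

n_water = rho_water / m_H2O      # ~3.3e28 molecules/m^3 in liquid water
n_atm = n_water / 800            # atmosphere is ~1/800 as dense (text's factor)
n_quasar = n_atm / 3e14          # "300 trillion times less dense" than the atmosphere

L = n_quasar ** (-1 / 3)         # edge of the cube containing one molecule
print(f"n_quasar ~ {n_quasar:.1e} /m^3, L ~ {L:.1e} m")  # L of order 1e-4 m
```

So one molecule per cube of roughly 0.1 mm edge, which is indeed the neuron-size and Planck-mass-water-blob scale invoked in the text.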
The average temperature of the evaporated water is about -17 degrees Celsius, not far from the physiological temperature of about 36 degrees Celsius. What could this mean?
  1. The diffuse ionized gas (DIG) constitutes the largest fraction of the total ionized interstellar matter in star-forming galaxies. It is still unclear whether the ionization is driven predominantly by the ionizing radiation of hot massive stars, as in H II regions (in which ions are protons), or whether additional sources of ionization have to be considered.
  2. TGD inspired new physics suggests molecular ionization in which the ionization energies are much lower than for atomic ionization. The TGD based model (see this, this, this, this, and this) of the Pollack effect (see this) is central in the TGD based view of life. The Pollack effect occurs in the physiological temperature range and is induced by photons in the IR and visible range, which kick protons to the gravitational magnetic body of the system, where they become dark protons with a non-standard value of the effective Planck constant. TGD leads to generalizations of the Pollack effect (see this). The most recent view of life forms, relying on the notion of the OH-O- qubit discussed in (see this), predicts that any cold plasma can have life-like properties.
A more detailed formulation of this proposal is in terms of PAHs (see this). A list of the basic properties of PAHs can be found here. TGD suggests that the so-called space scent could be induced by the IR radiation from PAHs (see this).
  1. PAHs (polycyclic aromatic hydrocarbons) are assigned with the unidentified infrared bands (UIBs) and could induce the Pollack effect. The IR radiation could also be induced by the reverse of the Pollack effect.
  2. The properties of PAHs have led to the PAH world hypothesis stating that PAHs are predecessors of the recent basic organic molecules. For instance, the distances between the aromatic molecules appearing as basic building bricks are the same as the distances between DNA base pairs.
  3. The so-called Unidentified Infrared Bands (UIBs) of radiation around the IR energies E ∈ {.11, .20, .375} eV arriving from interstellar space are proposed to be produced by PAHs. The UIBs can be mimicked in the laboratory in reactions associated with photosynthesis producing PAHs.
  4. PAHs are detected in interstellar space. The James Webb telescope found that PAHs existed in the very early cosmology, 1 billion years before they should be possible in the standard cosmology! Furthermore, PAHs exist in regions where there are no stars and no star formation (see this).
In TGD inspired quantum biology, the transitions OH → O- + dark proton at the gravitational monopole flux tube, having an interpretation as a flip of a quantum gravitational qubit, play a fundamental role (see this) and would also involve the Pollack effect. The difference between the bonding energy of OH and the binding energy of O- is .33 eV, slightly above the thermal photon energy of .15 eV at physiological temperature. Note that the energies of the UIBs are just in the range important for the transitions flipping the OH → O- qubits.

Could the IR radiation from PAHs at these energies induce these transitions, and could the reversals of the OH → O- qubit liberate energy heating the water so that its temperature is 5 times higher than that of the environment? Note that the density of the water is hundreds of times higher than that of the gas in typical galaxies, which could make a thermal equilibrium of the water vapour possible. This leads one to ask whether the water around quasars could have life-like properties.

See the article Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life? or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Unlike in wave mechanics, chaos is possible in quantum TGD

Sabine Hossenfelder has an interesting YouTube video about chaos, titled "Scientists Uncover Hidden Pattern in Quantum Chaos" (see this).

Standard quantum mechanics cannot describe chaos; the TGD view allows its description. In the geometric degrees of freedom, quantum states correspond to quantum superpositions of space-time surfaces in H=M4×CP2 satisfying holography, which makes it possible to avoid a path integral spoiled by horrible divergences. The holography = holomorphy principle allows the construction of space-time surfaces as roots of a pair (f1,f2) of analytic functions of 3 complex coordinates and one hypercomplex coordinate. These surfaces are minimal surfaces and satisfy the field equations for any general coordinate invariant action constructible in terms of the induced gauge fields and metric.

The iterations (f1,f2) → (g1(f1,f2), g2(f1,f2)) give rise to a transition to chaos in the 2 complex-dimensional sense, and as a special case one obtains analogs of Julia and Mandelbrot fractals when one assumes that only g1 differs from the identity. Hence chaos emerges because point-like particles are replaced with 3-surfaces, in turn replaced by space-time surfaces obeying the holography = holomorphy principle.
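As a loose one-complex-dimensional analogy of an iteration in which only g1 differs from the identity, the familiar escape-time iteration z → z² + c shows how repeated composition of a holomorphic map generates a fractal boundary between bounded and escaping behaviour (a Python sketch illustrating the mechanism, not TGD dynamics):

```python
def escape_time(c: complex, max_iter: int = 50) -> int:
    """Number of iterations of z -> z**2 + c before |z| exceeds 2
    (returns max_iter if the orbit stays bounded)."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return i
    return max_iter

# c = 0 stays bounded forever, c = 1 escapes after a few steps:
# the boundary between the two behaviours is the fractal Mandelbrot set.
print(escape_time(0))  # 50 (never escapes)
print(escape_time(1))
```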

See for instance the articles About some number theoretical aspects of TGD and About Langlands correspondence in the TGD framework.

Tuesday, December 24, 2024

Speed of thought is 10 Hz: what does this mean?

The popular article "Scientists Quantified The Speed of Human Thought, And It's a Big Surprise" (see this) tells about the article "The unbearable slowness of being: Why do we live at 10 bits/s?" by Zheng and Meister (see this). The speed of human thought would be 1 step per 0.1 seconds. This time interval corresponds to the 10 Hz alpha rhythm.

The conclusion is rather naive and reflects the failure to realize that consciousness is a hierarchical structure. This failure is one of the deep problems of neuroscience and also of quantum theories of consciousness. Although the physical world has a hierarchical structure and although the structure of consciousness should reflect this, it seems impossible to realize that it indeed does so!

Only a very small part of this hierarchical structure is conscious to us. Conscious entities, selves, have subselves (associated with physical subsystems), which they experience as mental images. Subselves in turn have their own subselves, which are our sub-subselves. The hierarchy continues downwards and upwards, and the upward direction predicts collective levels of consciousness.

We do not experience these sub-subselves as separate entities but only as their statistical average. This makes possible the statistical determinism of mental images so that they do not fluctuate randomly, and it conforms with the fact that there is a large number of sensory receptors. For instance, this statistical averaging explains the summation of visual colors.

This applies also to cognition and quantum computation-like processes, in which the outcomes are sub-subselves giving rise to a cognitive mental image as a conscious statistical average. The averaging applies also in the time direction since zero energy ontology predicts a slight failure of classical determinism. Averaging, the basic operation giving rise to predictions in quantum theory, would thus have a counterpart at the level of conscious experience.

See the article Some objections against TGD inspired view of qualia and the chapter General Theory of Qualia.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, December 21, 2024

How could p-adicization and hyper-finite factors relate?

Factors of type I are von Neumann algebras acting in an ordinary Hilbert space allowing a discrete enumerable basis. Also hyperfinite factors of type II1 (HFFs in the sequel) play a central role in quantum TGD (see this and this). HFFs replace the problematic factors of type III encountered in algebraic quantum field theories. Note that von Neumann himself regarded factors of type III as pathological.

HFFs have rather unintuitive properties, which I have summarized elsewhere (see this and this).

  1. The Hilbert spaces associated with HFFs do not have a discrete basis and one could say that the dimension of Hilbert spaces associated with HFFs corresponds to the cardinality of reals. However, the dimension of the Hilbert space identified as a trace Tr(Id) of the unit operator is finite and can be taken equal to 1.
  2. HFFs have subfactors, and the inclusion of sub-HFFs as analogs of tensor factors gives rise to subfactors with dimension smaller than 1, defining a fractal dimension. For Jones inclusions these dimensions are known and form a discrete set of algebraic numbers. In the TGD framework, the included tensor factor allows an interpretation in terms of a finite measurement resolution. The inclusions give rise to quantum groups and their representations as analogs of coset spaces.
p-Adic numbers represent a second key notion of TGD.
  1. p-Adic number fields emerged in p-adic mass calculations (see this, this, this and this). Their properties led to the proposal that they serve as correlates of cognition. All p-adic number fields are possible and can be combined to form an adele, and the outcome is what could be called adelic physics (see this and this).
  2. Also the extensions of p-adic number fields induced by the extensions of rationals are involved and define a hierarchy of extensions of adeles. The ramified primes for a given polynomial define preferred p-adic primes. For a given space-time region the extension is assignable to the coefficients for a pair of polynomials or even Taylor coefficients for two analytic functions defining the space-time surface as their common root.
  3. The inclusion hierarchies for the extensions of rationals accompanied by inclusion hierarchies of Galois groups for extensions of extensions of .... are analogous to the inclusion hierarchies of HFFs.
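In ordinary algebraic number theory the ramified primes of a polynomial can be read off from its discriminant; a minimal illustration for a monic quadratic x² + bx + c, where the candidates are the primes dividing D = b² − 4c (for the number field itself this is in general a superset; the 4-D generalization discussed in the text is of course far beyond this sketch):

```python
# Ramified prime candidates of x^2 + b x + c: the primes dividing the
# polynomial discriminant D = b^2 - 4c.
def prime_factors(n):
    """Set of prime factors of |n| by trial division."""
    n, out, d = abs(n), set(), 2
    while d * d <= n:
        while n % d == 0:
            out.add(d)
            n //= d
        d += 1
    if n > 1:
        out.add(n)
    return out

def ramified_candidates(b, c):
    return prime_factors(b * b - 4 * c)

print(ramified_candidates(0, -5))  # x^2 - 5: D = 20 -> {2, 5}
print(ramified_candidates(0, 1))   # x^2 + 1: D = -4 -> {2}
```

For these primes the roots of the polynomial collide modulo p, which is the elementary analog of the degeneration of roots invoked in the text.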
Before discussing how p-adic and real physics relate, one must summarize the recent formulation of TGD based on holography = holomorphy correspondence.
  1. The recent formulation of TGD allows one to identify space-time surfaces in the imbedding space H=M4× CP2 as common roots of the pair (f1,f2) of generalized holomorphic functions defined in H. If the Taylor coefficients of fi are in an extension of rationals, the conditions defining the space-time surfaces make sense also in the extensions of p-adic number fields induced by this extension. As a special case this applies when the functions fi are polynomials. For completely general Taylor coefficients of the generalized holomorphic functions fi, p-adicization is not possible: the Taylor series for fi must also converge in the p-adic sense. For instance, this is the case for exp(x) only if the p-adic norm of x is smaller than 1.
  2. The notion of Galois group can be generalized when the roots are not anymore points but 4-D surfaces (see this). However, the notion of ramified prime becomes problematic. It makes sense if one allows 4 polynomials (P1,P2,P3,P4) instead of two. The roots of 3 polynomials (P1,P2,P3) give rise to 2-surfaces as string world sheets, and the simultaneous roots of (P1,P2,P3,P4) can be regarded as roots of the fourth polynomial, identifiable as physical singularities serving as vertices (see this).

    Also the maps defined by analytic functions g in the space of function pairs (f1,f2) generate new space-time surfaces. One can assign a Galois group and ramified primes to g if it is a polynomial P in an extension of rationals. The composition of polynomials Pi defines inclusion hierarchies with increasing algebraic complexity, and as a special case one obtains iterations, an approach to chaos, and 4-D analogs of Mandelbrot fractals.

Consider now the relationship between real and p-adic physics.
  1. The connection between real and p-adic physics is defined by common points of reals and p-adic numbers defining a discretization at the space-time level and therefore a finite measurement resolution. This correspondence generalizes to the level of the space-time surfaces and defines a highly unique discretization depending only on the pinary cutoff for the algebraic integers involved. The discretization is not completely unique since the choice of the generalized complex coordinates for H is not completely unique although the symmetries of H make it highly unique.
  2. This picture leads to a vision in which reals and various p-adic number fields and their extensions induced by rationals form a gigantic book in which pages meet at the back of the book at the common points belonging to rationals and their extensions.
What it means for a point to be "common" to reals and p-adics is not quite clear. These common numbers belong to an algebraic extension of rationals inducing that of p-adic numbers. Since a discretization is in question, one can require that these common numbers have a finite pinary expansion in powers of p. For points with coordinates in an algebraic extension of rationals and having p-adic norm equal to 1, a direct identification is possible. In the general case, one can consider two options for the correspondence between the p-adic discretization and its real counterpart.
  1. The real number and the number in the extension have the same finite pinary expansions. This correspondence is however highly irregular and not continuous at the limit when an infinite number of powers of p are allowed.
  2. The real number and its p-adic counterpart are related by the canonical identification I. The coefficients of the units of the algebraic extension are finite real integers, and the p-adic number x_p = ∑ x_n p^n is mapped to the real number x_R = I(x_p) = ∑ x_n p^(-n). The inverse of I has the same form. This option is favored by the continuity of I as a map from p-adics to reals at the limit of an infinite number of pinary digits.

    Canonical identification has several variants. In particular, rationals m/n, such that m and n have no common divisors and have finite pinary expansions, can be mapped to their p-adic counterparts and vice versa by using the map m/n → I(m)/I(n). This map generalizes to algebraic extensions of rationals.
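For finite integers the canonical identification I: ∑ x_n p^n → ∑ x_n p^(-n) and the rule m/n → I(m)/I(n) can be computed directly; the helper below is a plain illustration of the map, not part of the TGD formalism.

```python
from fractions import Fraction

def canonical_id(n: int, p: int) -> Fraction:
    """Canonical identification I: sum x_k p^k -> sum x_k p^(-k)
    for a finite non-negative integer n with base-p digits x_k."""
    result, k = Fraction(0), 0
    while n > 0:
        n, digit = divmod(n, p)
        result += Fraction(digit, p ** k)
        k += 1
    return result

# I(3) with p = 2: 3 = 1 + 1*2 -> 1 + 1/2 = 3/2
print(canonical_id(3, 2))                        # 3/2
# Rationals via the variant m/n -> I(m)/I(n)
print(canonical_id(3, 2) / canonical_id(5, 2))   # I(3)/I(5) = (3/2)/(5/4) = 6/5
```

Exact rational arithmetic (Fraction) is used so that the identities can be checked without floating-point noise.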

The detailed properties of the canonical identification deserve a summary.
  1. For finite integers I is a bijection. At the limit when an infinite number of pinary digits is allowed, I is a surjection from p-adics to reals but not a bijection. The reason is that the pinary expansion of a real number is not unique. In analogy with 1 = .999... for decimal numbers, the pinary expansion [(p-1)/p] ∑_{k≥0} p^(-k) equals the real unit 1. The inverse images of these two expansions under canonical identification are x_p = 1 and y_p = (p-1)p ∑_{k≥0} p^k; y_p has p-adic norm 1/p and an infinite pinary expansion. More generally, the real expansions x = ∑_{n<N} x_n p^(-n) + x_N p^(-N) and y = ∑_{n<N} x_n p^(-n) + (x_N - 1) p^(-N) + (p-1) ∑_{k>N} p^(-k) represent the same real number, so that at the limit of an infinite number of pinary digits the inverse of I is two-valued for finite real integers if one allows both representations. For rationals formed from finite integers there are 4 inverse images for I(m/n) = I(m)/I(n).
  2. One can consider 3 kinds of p-adic numbers. p-Adic integers correspond to finite ordinary integers with a finite pinary expansion. p-Adic rationals are ratios of finite integers and have a periodic pinary expansion. p-Adic transcendentals correspond to reals with a non-periodic pinary expansion. For real transcendentals with an infinite non-periodic pinary expansion the p-adic valued inverse image is unique since x_R does not have a last pinary digit.
  3. Negative reals are problematic from the point of view of canonical identification. The reason is that p-adic numbers are not well-ordered, so that the notion of a negative p-adic number is not well-defined unless one restricts the consideration to finite p-adic integers and their negatives, given by -n = (p-1)(1 + p + p^2 + ...)n. As far as discretizations are considered, this restriction is very natural. The images of n and -n under I would correspond to the same real integer represented in two different ways, which does not make sense.

    Should one modify I so that the p-adic -n is mapped to the real -n? This would work also for the rationals. The p-adic counterpart of a real with an infinite and non-periodic pinary expansion and its negative would then correspond to the same p-adic number. An analog of a compactification of the real axis to a p-adic circle would take place.
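The representation -n = (p-1)(1 + p + p² + ...)n can be checked by truncating to N pinary digits: modulo p^N the negative integer -n has digits that are eventually all p-1. A small stdlib sketch:

```python
def padic_digits(n: int, p: int, N: int) -> list:
    """First N p-adic digits of the integer n (negative n allowed),
    read off from the representative n mod p^N."""
    x = n % p ** N  # Python's % always returns a non-negative representative
    digits = []
    for _ in range(N):
        x, d = divmod(x, p)
        digits.append(d)
    return digits

print(padic_digits(-1, 5, 6))  # [4, 4, 4, 4, 4, 4]: -1 = 4(1 + 5 + 25 + ...)
print(padic_digits(-3, 5, 6))  # [2, 4, 4, 4, 4, 4]
```

The trailing run of digits p-1 is the p-adic signature of a negative integer, which is what makes a well-defined sign possible for this restricted class.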

Both hyperfinite factors and p-adicization allow a description of a finite measurement resolution. A natural question is therefore whether the strange properties of hyperfinite factors, in particular the fact that the dimension D of the Hilbert space equals the cardinality of reals on one hand and a finite number (D=1 in the convention used) on the other hand, could have a counterpart in the p-adic sector. What is the cardinality of p-adic numbers defined in terms of canonical identification? Could it be finite?
  1. Consider finite real integers x = ∑_{n=0}^{N-1} x_n p^n with x = 0 excluded. Each pinary digit has p values, so the total number of integers of this kind is p^N - 1. These real integers correspond to two kinds of p-adic integers in canonical identification, so that the total number is 2p^N - 2. One must also include zero, so that the total cardinality is M = 2p^N - 1. Identify M as a p-adic integer. Its p-adic norm equals 1.
  2. As a p-adic number, M = 2p^N - 1 has the pinary expansion M_p = (p-1)(1 + p + ... + p^(N-1)) + p^N. Its canonical image is I(M) = (p-1) ∑_{n=0}^{N-1} p^(-n) + p^(-N) = p - (p-1)p^(-N), which approaches p at the limit N → ∞. The cardinality of p-adic numbers in this sense would be that of the corresponding finite field! Does this have some deep meaning or is it only number theoretic mysticism?
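The claim that the canonical image of M = 2p^N - 1 approaches p can be checked numerically; the small helper below just implements the map ∑ x_k p^k → ∑ x_k p^(-k) on base-p digits.

```python
from fractions import Fraction

def canonical_image(n: int, p: int) -> Fraction:
    """Canonical identification: sum x_k p^k -> sum x_k p^(-k)."""
    result, k = Fraction(0), 0
    while n > 0:
        n, d = divmod(n, p)
        result += Fraction(d, p ** k)
        k += 1
    return result

p = 3
for N in (2, 5, 10):
    M = 2 * p ** N - 1
    print(N, float(canonical_image(M, p)))  # approaches p = 3 from below
```

The exact value p - (p-1)p^(-N) converges to p geometrically in N, matching the text's limit.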
For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, December 16, 2024

Has Google managed to reach the critical value for the error rate of a single qubit?

Google claims to have achieved something marvellous with its quantum computer called Willow. This claim is however combined with a totally outlandish claim about parallel universes created in quantum computers, and this has generated a lot of cognitive dissonance among professionals during the last week. They have not yet forgotten the earlier, equally absurd claim about the creation of wormholes in quantum computers.

The Quanta Magazine article "Quantum Computers Cross Critical Error Threshold" (see this) tells what has been achieved but does not resolve the cognitive dissonance. I already commented on the claims of Google in a blog posting (see this).

Now I encountered an excellent article "Ask Ethan: Does quantum computation occur in parallel universes?" (see this) analyzing thoroughly the basics of quantum computation and what Google has achieved. I recommend it to anyone seriously interested in quantum computation.

The really fantastic achievement is the ability to reduce the error rate of the physical qubits forming the grid defining the logical qubit below the critical value of 0.1 percent, which guarantees that for larger grids of physical qubits the error rate of the logical qubit decreases exponentially. This achievement is more than enough! But why do they claim that this implies parallel universes? This claim is totally absurd and leads me to ask whether the claimed achievement is really true. How can one trust professionals who do not seem to understand the basic notions of quantum mechanics?
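The exponential suppression referred to above is usually expressed by the surface-code scaling heuristic: the logical error rate behaves like A (p/p_th)^((d+1)/2) for code distance d, physical error rate p and threshold p_th. The numbers below are purely illustrative assumptions, not Google's measured values.

```python
# Surface-code scaling heuristic: logical error rate ~ A * (p/p_th)^((d+1)/2).
# A, p and p_th below are illustrative assumptions, not measured data.
def logical_error_rate(p: float, p_th: float, d: int, A: float = 0.1) -> float:
    return A * (p / p_th) ** ((d + 1) // 2)

p, p_th = 0.001, 0.01  # physical rate one order of magnitude below threshold
for d in (3, 5, 7):
    print(d, logical_error_rate(p, p_th, d))
```

With p/p_th = 0.1 each increase of the distance by 2 suppresses the logical error rate by another factor of 10, which is the exponential gain the article describes; above threshold (p > p_th) adding qubits makes things worse.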

Taking the basic claim seriously, one can of course ask whether the slow error rate is actually theoretically possible in standard quantum mechanics or does it require new physics. These qubits are rather stable but are they so stable in standard QM?

I have been talking about this kind of new physics for two decades now. This new physics would play a key role in quantum biology and could be important also in condensed matter physics and even in chemistry. It is implied by the predicted hierarchy of effective Planck constants heff labelling phases of ordinary matter with quantum scales scaled up by heff/h. This makes possible temporal and spatial quantum coherence in long scales, and it could reduce the error rate and provide a solution to the basic problems listed in the article. The latest proposal along these lines is that classical computers and quantum computers could be fused to what might be regarded as conscious computers sharing several life-like features with biomatter (see this). The situation is now however different since the temperature is very low and the chip is superconducting.

One learns from the video describing the Willow chip (see this) that the lifetime of a logical qubit is T ≈ 100 μs. This time is surprisingly long: can one really understand this in ordinary quantum mechanics? One can try this in the TGD framework.

  1. The energy of a qubit flip must be as small as possible but above the thermal energy. Energy economics suggests that the Josephson energy E = ZeV of electrons in a Josephson junction is above the thermal energy at the temperatures considered but not much larger. For superconducting quantum computers (see this) the temperature is about 10^-2 K, which corresponds to an energy scale of order μeV.
  2. The formula f = ZeV/heff gives a rough estimate for the quantum coherence time of a superconducting qubit as T = heff/ZeV. For heff = h this gives T ≈ 3 ns for the quantum coherence time of a single qubit. The value heff ≈ 3.3×10^4 h would be needed to increase T from this naive estimate to the required T = 100 μs.

    I have proposed that these relatively small values of heff (as compared to the values of the gravitational Planck constant) can appear in electrically charged systems. The general criterion, applying to all interactions, is that the value of heff is such that the perturbation series, in powers of Z1Z2e^2/ℏeff for the electromagnetic interactions of charges Z1 and Z2, converges.

    In the recent case, the value of heff could correspond to the electric counterpart of the gravitational Planck constant, having the form ℏem = Z1Z2e^2/β0, where β0 = v0/c is a velocity parameter (see this). Z1 could correspond to a large charge and Z2 to a small charge, say that of a Cooper pair. For instance, DNA, having a constant charge density per unit length, would have a rather large value of ℏem. The presence of an electronic Cooper pair condensate could give rise to the large electric charge making possible the needed value ℏeff = ℏem ≈ 3.3×10^4 ℏ.
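The order-of-magnitude estimate T = heff/ZeV in the list above is easy to check numerically. The Josephson energy scale of 1 μeV used below is an assumption consistent with the 10^-2 K temperature mentioned in the text, so the resulting times agree with the text's figures only up to a factor of order one.

```python
# Order-of-magnitude check of T = h_eff/(Z e V).
# E ~ 1 micro-eV is an assumed Josephson energy scale, not a measured value.
H_PLANCK = 6.62607015e-34   # J s
EV = 1.602176634e-19        # J per eV

E = 1e-6 * EV               # assumed qubit flip energy ~ 1 micro-eV
T_single = H_PLANCK / E     # coherence time estimate for h_eff = h
print(f"T for h_eff = h : {T_single * 1e9:.1f} ns")   # a few ns

T_target = 100e-6           # observed logical-qubit lifetime ~ 100 micro-s
ratio = T_target / T_single # required h_eff/h
print(f"required h_eff/h: {ratio:.2e}")               # a few times 10^4
```

With E = 1 μeV one gets T ≈ 4 ns and heff/h ≈ 2×10^4, the same order of magnitude as the 3 ns and 3.3×10^4 quoted in the text.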

See the article Has Google managed to reach the critical value for the error rate of a single qubit? or the chapter Are Conscious Computers Possible in TGD Universe?.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Tuesday, December 10, 2024

Has Google discovered something new or is the quantum computation bubble collapsing?

On Facebook there was a link to a popular article "Google says its new quantum chip indicates that multiple universes exist", see this. In the article the notions of multiverse and parallel universe are confused. Google should use its money to raise the standards of hyping. Quantum computation assumes a superposition of quantum states. One can of course call the superposed quantum states of the computer parallel universes, but why?

Just yesterday I saw a less optimistic video, The quantum computation collapse has begun, by Sabine Hossenfelder.

What should one conclude from this? Has the collapse of the bubble started? Or has Google discovered something unexpected? What could this new something be?

Error correction is the basic problem of quantum computation. It can be achieved by adding qubits serving as check qubits, but these check qubits in turn require their own check qubits, so that one ends up with a never-ending sequence of error corrections. One should however leave some qubits for the computation itself!

Could something be missing from quantum physics itself? I have been explaining for more than two decades what this something might be.

  1. Number theoretic vision of TGD predicts an entire hierarchy of phases of ordinary matter characterized by an effective Planck constant, which can have arbitrarily large values. In particular, the quantum coherence associated with classical electromagnetic and gravitational fields makes possible quantum coherence in even astrophysical scales and this solves the problems due to the fragility of quantum coherence.
  2. Another prediction is what I call zero energy ontology. ZEO predicts the possibility of intelligent and conscious computers as a fusion of classical and quantum computers. Trial and error would be a universal quantum mechanism of learning and problem solving. This would force evolution as emergence of phases with increasing value of the effective Planck constant.

    The phenomenon of life would be much more general than thought: quartz crystals, plasmas, and biomatter, quite generally any cold plasma, would share the same basic mechanism giving rise to qubits, which under certain circumstances can make the system a living and conscious entity.

See for instance the article Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life?.

Addition: Immediately after writing this post I encountered the Quanta Magazine article "Quantum Computers Cross Critical Error Threshold" telling about what has been achieved at Google. On the basis of the earlier scandal raised by the claims related to wormholes constructed in quantum computers, it is better to take a cautious attitude. The message is as follows. A grid of physical qubits accompanied by measurement qubits codes for a single logical qubit. If the error rate of a physical qubit is below a critical value of about 0.1 percent, the error rate of the logical qubit decreases exponentially with the grid size. Google reports that this critical error threshold has been reached.

If the reduction of the error rate below this limit is impossible in standard quantum physics, one can ask whether new quantum physics is involved. TGD predicts a hierarchy of effective Planck constants heff labelling phases of ordinary matter behaving like dark matter, and a large enough heff might make it possible to reduce the error rate below the critical value of 0.1 percent.

See the article Has Google managed to reach the critical value for the error rate of a single qubit? or the chapter Are Conscious Computers Possible in TGD Universe?.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, December 09, 2024

The mystery of life's origin deepens

Sabine Hossenfelder told about a new study, which deepens the mystery of life's origin (see this). The key notion is LUCA, the last universal common ancestor, whose genome should be common to all life forms: archaea, bacteria (prokaryotes), and eukaryotes (plants, fungi and animals).

The newest study gives a considerably larger number than the previous estimates.

  1. LUCA would have had 2,657 genes and about 2.7 million base pairs, to be compared with about 3 billion base pairs of humans. LUCA would have lived about 4.2 billion years ago.
  2. The proteins coded by the genes of LUCA suggest that hydrogen was important in its metabolism. Presumably LUCA lived near volcanoes. LUCA also had a rather complex metabolic circuitry, and the genome suggests that it was part of an ecosystem. LUCA is about 10 μm in size, which is also the size of a cell nucleus; it has a genome but no nucleus.
  3. An interesting side observation is that 2,657 is a prime and forms a twin prime pair with 2,659. Maybe number theory is deeply involved with the genome.
  4. The earlier estimate for the gene number of LUCA by Bill Martin's team (see this) left only 355 genes from the original 11,000 candidates, and they argue that these 355 definitely belonged to LUCA and can tell us something about how LUCA lived.
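The side observation about the twin primes in the list above is easy to verify:

```python
# Check that 2657 and 2659 are both prime, i.e. a twin prime pair.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(2657), is_prime(2659))  # True True
```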
The problem is that there are two widely different candidates for the LUCA and the new candidate seems to be too complex if one assumes a single evolutionary tree.

The mystery of Cambrian Explosion

The Cambrian Explosion represents a long-standing mystery of evolutionary biology (see the book "Wonderful Life" by Stephen Gould). The basic mystery is that highly evolved multicellular life forms emerged suddenly in the Cambrian Explosion about 0.5 billion years ago. There are much older fossils of monocellular life forms, archaea and bacteria, which would have lived at the surface of the Earth as separate evolutionary lineages.

The TGD based solution of the mystery of the Cambrian Explosion does not involve ETs bringing multicellular life to the Earth (see this).

  1. In the TGD Universe, quantum gravitation is possible in arbitrarily long scales and cosmic expansion is replaced by a sequence of quantum phase transitions occurring in astrophysical scales as very rapid local expansions between which there is no expansion.
  2. The life on Earth could have evolved in two ways and as three separate evolutionary trees. Multicellular life forms, possible for sexually reproducing eukaryotes, would have evolved in underground oceans, where they were shielded from meteor bombardments and cosmic rays. There are indications that underground oceans and underground life are present on Mars and possibly also in some other places in the solar system.
  3. In the Cambrian Explosion, identified as a short-lasting rapid local cosmic expansion, the radius of the Earth would have increased by a factor of two. This hypothesis was originally inspired by the observation of Adams (see this) that the continents seem to fit nicely together if the radius of the Earth is taken to be 1/2 of its recent value. The hypothesis would generalize the continental drift theory of Wegener. Rather highly developed photosynthesizing multicellular life forms would have burst to the surface of the Earth from the underground oceans, and the oceans were formed (see this, this, this, and this).
The TGD proposal for the solution of the LUCA mystery relies on the solution of the mystery of the Cambrian explosion. Bacteria and archaea would have evolved at the surface of the Earth and eukaryotes having a cell nucleus and reproducing sexually in the underground oceans. Bacteria and archaea would have evolved from a counterpart of LUCA having a much smaller genome and eukaryotes would have evolved from an archaea with maximum size, which became the nucleus of the first eukaryote, LUCA.

See the article Some mysteries of the biological evolution from the TGD point of view and the chapter Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life?.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Sunday, December 08, 2024

The perplexing findings about the asteroid Ryugu from the TGD perspective

Anton Petrov told in a Youtube video "Shocking Discovery of Earth Bacteria Inside Ryugu Asteroid Samples + Other Updates" (see this) about highly interesting recent discoveries, which might provide very strong direct evidence for the TGD view of quantum biology.

The motivation for studying asteroids is that they could have been very important in planetary formation. The Panspermia hypothesis suggests that they could have also brought life to the Earth. These findings provide a test for the TGD view of life which now suggests a very general basic mechanism for the emergence of life (see this).

Some basic facts about Ryugu are in order. Consider first the origin of Ryugu.

  1. The surface of Ryugu is very young and has an age of 8.9 +/- 2.5 million years. The composition of Ryugu shows that its material has been at a rather high temperature, about 1000 degrees C, presumably near the Sun. Eventually Ryugu would have left the inner solar system, and its composition suggests that it has been very near the Kuiper belt at a distance of 30-55 AU.
  2. The asteroid that arrived near the Earth from outer space must have been for a long period in complete darkness. The object giving rise to Ryugu could have originated far from Jupiter, possibly near the Kuiper belt. Some compounds in Ryugu can only form near the Kuiper belt. A larger object of radius about 100 km could have suffered a collision near Earth and produced Ryugu with a size of 10 km near Earth.
  3. Currently Ryugu orbits the Sun at a distance of 0.96-1.41 AU once every 16 months (474 days; semi-major axis of 1.19 AU). Note that the distance of Mars from the Sun is about 1.5 AU. The orbit has an eccentricity of 0.19 and an inclination of 6 degrees with respect to the ecliptic.
The circumstances at Ryugu are favorable for life.
  1. The highest temperature on the Ryugu asteroid reaches 100 degrees C, while the coldest regions sit at about room temperature. Temperatures also change depending on the solar distance of the asteroid, lowering as Ryugu moves further away from the Sun. This would mean that the circumstances at Ryugu become favourable for life as it passes Earth. The lowering of the temperature at a large distance would not be fatal.
  2. Hydration is essential for life. The required dehydration reaction temperature decreases with increasing substitution of the hydroxy-containing carbon: primary alcohols 170-180 degrees C, secondary alcohols 100-140 degrees C, tertiary alcohols 25-80 degrees C. Primary/secondary/tertiary refers to the position of the -OH substitution on the carbon atom.
  3. Ryugu contains liquid water and also carbonated water. Coral-like inorganic crystals are present. The samples contained carbon-rich molecules, amino acids, components of RNA, hydrated compounds, and ammonium!
  4. It has also been found that Ryugu contains phosphorus rich samples. Phosphorus plays a central role in metabolism and in the "dark" realization of the genetic code in TGD. The abstract of the article (see this) summarizes the findings.

    Parent bodies of C-type asteroids may have brought key volatile and organic-rich compounds to the terrestrial planets in the early stages of the Solar System. At the end of 2020, the JAXA Hayabusa2 mission successfully returned samples from Ryugu, providing access to a primitive matter that has not suffered terrestrial alteration. Here we report the discovery of a peculiar class of grains, up to a few hundreds of micrometres in size, that have a hydrated ammonium-magnesium-phosphorus (HAMP)-rich composition. Their specific chemical and physical properties point towards an origin in the outer Solar System, beyond most snow lines, and their preservation along Ryugu history. These phosphorus-rich grains, embedded within an organic-rich phyllosilicate matrix, may have played a major role when immersed in primitive terrestrial water reservoirs. In particular, in contrast to poorly soluble calcium-rich phosphates, HAMP grains favour the release of phosphorus-rich and nitrogen-rich ionic species, to enter chemical reactions. HAMP grains may have thus critically contributed to the reaction pathways of organic matter towards a biochemical evolution.

The panspermia hypothesis states that Ryugu and similar objects could have served as a source of life on Earth.
  1. The overpopulation problem is a theoretical objection against the panspermia hypothesis: no new forms of life would be possible since no niches are left untouched.
  2. There is also a second objection against the panspermia hypothesis as an explanation of these findings about Ryugu. It has been claimed that the Ryugu sample was contaminated by terrestrial microorganisms (see this). Nitrogen dioxide NO2 is used in sterilization, meant to remove, kill, or deactivate all forms of life present in a fluid or on a specific surface. Life forms of Earth should not be able to colonize samples under such extremely sterile conditions. If contamination occurred, its mechanism is unknown.

    The Ryugu samples contained terrestrial microbes, and they evolved with time. Their DNA has not yet been identified. They resemble bacilli, which are found everywhere on the Earth.

  3. Microfossils have been found in meteorites (see this). They have been found also in Ryugu, but only at its surface, and were reported to be new fossils. The reason could be that microbes have survived only at the surface of Ryugu, where they receive the solar light necessary for photosynthesis. The contamination hypothesis states that terrestrial organisms might, by some unknown mechanism, have contaminated the surface of Ryugu and produced the microfossils.
Neither the panspermia hypothesis nor contamination looks plausible in the TGD framework. Life would have evolved by the same basic mechanism both on the Earth and on asteroids and other similar objects (see this).
  1. Ryugu stays relatively near the Earth on its orbit. This could also have made possible the generation of organic matter inside the sample during the period that Ryugu has spent on its orbit around the Sun. This requires a model for how this happens, and standard physics does not provide such a model.
  2. The notion of the field body is central in the TGD inspired quantum biology: the field body would act as a controller of the biological body (see for instance this and this). The ordinary genetic code is proposed to be accompanied by its dark variant, realized at the field body in terms of particles having a very large value of effective Planck constant and behaving like dark matter. Could the field bodies of the Earth and the Sun have induced the generation of organic molecules and even bacterial life forms in the same way as they did this at the Earth?
  3. The notion of the gravitational magnetic body, characterized by the gravitational Planck constant introduced by Nottale and containing protons behaving like dark matter, represents new quantum physics relevant to the TGD inspired quantum biology. OH-O- + dark proton qubits and their generalizations, based on biologically important ions formed by salts, would be the key element of life (see this), suggesting besides chemical life also other forms of life.

    Any cold plasma (plasmoids as life forms) and even quartz crystals could give rise to these qubits at temperatures near room temperature, around which the flips of these qubits are possible. The difference of the OH bonding energy and the O- binding energy determines the relevant energy. Its nominal value is .33 eV, which is near the metabolic energy quantum of about .5 eV and near the thermal energy of .15 eV at physiological temperatures.

  4. These qubits would make matter living, and life in this sense would be universal. A dark genetic code is predicted and corresponds to the ordinary chemical genetic code. Basic biomolecules would give rise to analogs of topological quantum computers.

    The flipping of these qubits would make quantum-computation-like information processing possible. The Pollack effect, induced by photon absorption, can cause the transition OH → O- + dark proton, and the reversal of this process can take place spontaneously. If O- + dark proton has a lower energy than OH, the transition can also be induced by the presence of an electric field or by absorption of photons by O-, so that OH becomes the minimum energy state.

Could one understand the findings about Ryugu in this framework?
  1. The presence of gravitational magnetic bodies of Earth and Sun could have induced the formation of OH-O- qubits and more general qubits, not only at the Earth but also at Ryugu. The presence of OH bonds requires hydration and hydration is indeed possible at Ryugu.

    Therefore the same mechanism could have led to the emergence of the basic organic molecules at the Earth, at Mars, and inside the Ryugu asteroid and meteorites. Since the minimal distances of the Earth and Ryugu from the Sun are nearly the same, the temperature of Ryugu is near its maximal value when it is near the Earth, so that the temperature would never get too hot.

  2. Ryugu is under the influence of the gravitational magnetic bodies of both the Earth and the Sun. Ryugu passes near the Earth repeatedly with a period of 4 years. The organic molecules and various hydrated compounds could have gradually formed during about 10 million years as it passed near the Earth. Also bacterial life could have emerged in this way. Therefore contamination need not be in question.
See the article Some mysteries of the biological evolution from the TGD point of view and the chapter Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life?.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, December 05, 2024

About some number theoretical aspects of TGD

Recently a considerable progress has occurred in the understanding of number theoretic aspects of quantum TGD. I have discussed these aspects in earlier posts but it is useful to collect them together.
  1. There are reasons to think that TGD could be formulated purely number theoretically without the introduction of any action principle. This would conform with M8-H duality and the generalization of the geometric Langlands correspondence to dimension D=4.

    Number theoretic vision however gives extremely powerful constraints on the vacuum functional, suggesting even an explicit formula for it. The condition that this expression corresponds to the exponent of the Kähler function, expressible as the Kähler action, fixes the coupling constant evolution for the action.

  2. Extensions of rationals, the corresponding Galois groups and ramified primes assignable to polynomials and identifiable as p-adic primes assigned to elementary particles are central notions of quantum TGD. In the recent formulation based on holography = holomorphy principle, it is not quite clear how to assign these notions to the space-time surfaces. The notion of Galois group has a 4-D generalization but can one obtain the ordinary Galois groups and ramified primes? Two ways to achieve this are discussed in this article.

    One could introduce a hierarchy of 4 polynomials (f1,f2,f3,f4) instead of only (f1,f2); the common roots of all 4 polynomials, as a set of discrete points, would give the desired basic notions assignable to string world sheets.

    One can also consider the maps (f1,f2)→ G( f1,f2)= (g1(f1,f2), g2(f1,f2)) and assign these notions to the surfaces (g1(f1,f2), g2(f1,f2))=(0,0).

  3. Number theoretical universality is possible if the coefficients of the analytic functions (f1,f2) of 3 complex coordinates and one hypercomplex coordinate of H=M4× CP2 are in an algebraic extension of rationals. This implies that the solutions of field equations make sense also in p-adic number fields and their extensions induced by extensions of rationals.

    In this article the details of the adelicization, boiling down to p-adicization for various p-adic number fields, in particular those assignable to ramified primes, are discussed. p-Adic fractals and holograms emerge very naturally: the iterations (f1,f2)→ G(f1,f2)= (g1(f1,f2), g2(f1,f2)) define hierarchical fractal structures analogous to Mandelbrot and Julia fractals, and p-adically mean an exponential explosion of the complexity and information content of cognition. The possible relationship to biological and cognitive evolution is highly interesting.
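The role of ramified primes can be made concrete with a small computation. For a polynomial with rational coefficients, the candidate ramified primes of the extension defined by its roots are the primes dividing the discriminant of the polynomial. A minimal Python sketch (the cubic x^3 - x - 1 is my own illustrative choice, not a polynomial taken from the article):

```python
def discriminant_cubic(a, b, c, d):
    """Discriminant of the cubic a*x^3 + b*x^2 + c*x + d."""
    return (18*a*b*c*d - 4*b**3*d + b**2*c**2
            - 4*a*c**3 - 27*a**2*d**2)

def prime_factors(n):
    """Set of prime factors of |n| by trial division."""
    n = abs(n)
    factors = set()
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors.add(p)
            n //= p
        p += 1
    if n > 1:
        factors.add(n)
    return factors

# For x^3 - x - 1 the discriminant is -23, so 23 is the only
# candidate ramified prime of the corresponding extension.
disc = discriminant_cubic(1, 0, -1, -1)
print(disc, prime_factors(disc))  # -23 {23}
```

The same trial-division factoring applies to discriminants of higher-degree polynomials, computed for instance as resultants.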

See the article About some number theoretical aspects of TGD.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, December 04, 2024

p-Adicization, assuming holography = holomorphy principle, produces p-adic fractals and holograms

Yesterday's chat with Tuomas Sorakivi, a member of our Zoom group, was about the concrete graphical representations of the spacetime surfaces as animations. The construction of the representations is shockingly straightforward, because the partial differential equations reduce to algebraic equations that are easy to solve numerically. For the first time, it seems that GPT has created a program without obvious bugs. The challenges relate to how to represent time=constant 2-D sections of the 4-surface most conveniently and how to build animations about the evolution of these sections.
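The reduction to algebraic equations can be sketched with a toy example. Take the hypothetical pair f1(z,w) = w - z^2 and f2(z,w) = z^3 - w - 1 (my own choice, not one used in the chat): on the common root (f1,f2)=(0,0), w is fixed by z, and z satisfies a single polynomial equation that plain Newton iteration solves.

```python
def newton_root(f, df, z0, steps=60):
    """Plain Newton iteration for a complex root of f."""
    z = z0
    for _ in range(steps):
        z = z - f(z) / df(z)
    return z

# Eliminating w from (f1, f2) = (0, 0) with f1 = w - z**2 and
# f2 = z**3 - w - 1 leaves z**3 - z**2 - 1 = 0; w = z**2 then
# gives a point of the surface section.
f = lambda z: z**3 - z**2 - 1
df = lambda z: 3*z**2 - 2*z
z = newton_root(f, df, 1.5 + 0j)
w = z**2
print(z.real)  # ~1.4656
```

A time=constant 2-D section would be obtained by repeating such a root search over a grid of the remaining coordinates.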

Tuomas asked how to construct p-adic counterparts for space-time surfaces in H=M4× CP2. I have been thinking about the details of this construction over the years. Here is my current vision of it.

  1. By holography = holomorphy principle, space-time surfaces in H correspond to roots (f1,f2)=(0,0) for two analytic (holomorphic) functions fi of 3 complex coordinates and one hypercomplex coordinate of H (see this). The Taylor coefficients of fi are assumed to be rational or in an algebraic extension of rationals, but even more general situations are possible. A very important special case is polynomials fi=Pi.
  2. If we are talking about polynomials, or even analytic functions with coefficients that are rational or in an algebraic extension of rationals, then a purely formal p-adic equivalent, defined by the same equations, can be associated with every real surface.
  3. However, there are some delicate points involved.

    1. The imaginary unit (-1)1/2 belongs to the algebraic extension if p modulo 4 = 3. What about p modulo 4 = 1? In this case (-1)1/2 exists as an ordinary p-adic number, and it can be multiplied by the square root of an integer which is only in the algebraic extension, so that the problem is solved.
    2. In p-adic topology, large powers of p correspond to small p-adic numbers, unlike in real topology. This eventually led to the concept of canonical identification: the powers of p in the pinary expansion of a p-adic number (the analog of the decimal expansion) are mapped to their inverse powers,

      ∑ xnpn ↔ ∑ xnp-n.

      This map of p-adic numbers to real numbers is continuous, but its inverse is not. In this way, p-adic points can be mapped to real points or vice versa. In p-adic mass calculations, the map of p-adic points to real points is very natural. One can imagine different variants of the canonical correspondence by introducing, for example, a pinary cutoff analogous to the truncation of decimal numbers. This kind of cutoff is unavoidable.

    3. As such, this correspondence between reals and p-adics is not realistic at the level of H, because the symmetries of the real H do not correspond to those of the p-adic H. Note that the correspondence at the level of space-time surfaces is induced from that at the level of the embedding space.
  4. This forces number theoretical discretization, i.e. cognitive representations (p-adic and more generally adelic physics is assumed to provide the correlates of cognition). The symmetries of the real world correspond to symmetries restricted to the discretization. The lattice structure for which continuous translational and rotational symmetries are broken to a discrete subgroup is a typical example.

    Let us consider a given algebraic extension of rationals.

    1. Algebraic rationals can be interpreted as both real and p-adic numbers in an extension induced by the extension of rationals. The points of the cognitive representations correspond to the algebraic points allowed by the extension and correspond to the intersection points of reality as a real space-time surface and p-adicity as p-adic space-time surface.
    2. These algebraic points are series of powers of p with only a finite number of powers, so that the interpretation as algebraic integers makes sense. One can also consider ratios of algebraic integers if the canonical identification is suitably modified. These discrete points are mapped from the real side to the p-adic side by the canonical identification, or its modification to the rational case, to obtain a cognitive representation. The cognitive representation gives a discrete skeleton that spans the space-time surface on both the real and p-adic sides.
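The canonical identification with a pinary cutoff is easy to sketch in code; the truncation parameter k plays the role of the cutoff analogous to the truncation of decimal numbers (a minimal sketch, with function names of my own choosing):

```python
def pinary_digits(m, p, k):
    """First k pinary digits x_0, x_1, ... of a non-negative integer m."""
    digits = []
    for _ in range(k):
        digits.append(m % p)
        m //= p
    return digits

def canonical_identification(m, p, k=20):
    """Map sum x_n p^n to sum x_n p^(-n), truncated at k pinary digits."""
    return sum(x * float(p)**(-n)
               for n, x in enumerate(pinary_digits(m, p, k)))

# p = 3: the integer 10 = 1 + 0*3 + 1*9 is mapped to 1 + 0/3 + 1/9.
print(canonical_identification(10, 3))  # ~1.1111 (= 1 + 1/9)
```

Small p-adic numbers (high powers of p) land near the original value on the real side, while p-adically large integers are compressed into the unit interval scale, which is the continuity property mentioned above.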
Let's see what this means for the concrete construction of p-adic spacetime surfaces.
  1. Take the same equations on the p-adic side as on the real side, that is (f1,f2)=(0,0), and solve them around each discrete point of the cognitive representation in some p-adic sphere with radius p-n.

    The origin of the generalized complex coordinates of H is not taken to be the origin of p-adic H; instead, the canonical identification gives a discrete algebraic point on the p-adic side. So, around each such point, we get a p-adically scaled version of the surface (f1,f2)=(0,0) inside the p-adic sphere. This only means moving the surface to another location, and the symmetries allow it.

  2. How to glue the versions associated with different points together? This is not necessary and not even possible!

    The p-adic notions of differentiability and continuity allow fractality and holography. These are closely related to p-adic non-determinism: any function depending on a finite number of pinary digits has a vanishing derivative. In differential and partial differential equations this implies non-determinism, which I have assumed to correspond, on the real side, to the violation of complete classical determinism for holography.

    The definition of algebraic surfaces does not involve derivatives but also for algebraic surfaces the roots of (f1,f2)=(0,0) can develop branching singularities at which several roots as space-time regions meet and one must choose one representative (see this).

    1. Assume that the initial surface is defined inside a p-adic sphere whose radius, defined as the p-adic norm of its points, is p-n, with n an integer. One can even assume that a p-adic counterpart has been constructed only for the spherical shell with radius p-n.

      The essential thing here is that the interior points of a p-adic sphere cannot be distinguished from the points on its surface. The surface of a p-adic sphere is therefore more like a shell. How do you proceed from the shell to the "interiors" of a p-adic sphere?

    2. The basic property of two p-adic spheres is that they are either disjoint or one is contained inside the other. A p-adic sphere with radius p-n is divided into disjoint p-adic spheres with radius p-n-1, and in each such sphere one can construct a p-adic 4-surface corresponding to the equations (f1,f2)=(0,0). This can be continued as far as desired, down to some value n=N, which corresponds to the shortest scale on the real side and defines the measurement resolution/cognitive resolution physically.
    3. This gives a fractal for which the same (f1,f2)=(0,0) structure repeats at different scales. We can also go the other way, i.e. to longer scales in the real sense.
    4. Also a hologram emerges. All the way down to the smallest scale, the same structure repeats and an arbitrarily small sphere represents the entire structure. This strongly brings to mind biology and genes, which represent the entire organism. Could this correspondence at the p-adic level be similar to the one above or a suitable generalization of it?
  3. Many kinds of generalizations can be obtained from this basic fractal. Endless repetition of the same structure is not very interesting. p-Adic surfaces do not have to be represented by the same pair of functions at different p-adic scales.

    Of particular interest are the 4-D counterparts to fractals, to which the names Feigenbaum, Mandelbrot and Julia are attached. They can be constructed by iteration

    (f1,f2)→G(f1,f2)= (g1(f1,f2),g2(f1,f2)) →G(G(f1,f2)) →...

    so that at each step the scale increases by a factor p. At the smallest scale p-N one has (f1,f2)=(0,0). At the next, longer scale p-N+1 one has G(f1,f2)=(0,0), etc. One can assign to this kind of hierarchy a hierarchy of extensions of rationals and associated Galois groups, whose dimension increases exponentially, meaning that algebraic complexity, serving as a measure for the level of conscious intelligence and the scale of quantum coherence, also increases in the same way.

    The iteration proceeds with increasing scale, and the number-theoretic complexity, measured by the dimension of the algebraic extension, increases exponentially. Cognition becomes more and more complex. Could this serve as a possible model for biological and cognitive evolution as the length scale increases?

    The fundamental question is whether the many-sheeted space-time allows a corresponding hierarchy on the real side. Could the violation of classical determinism, interpreted as p-adic non-determinism for holography, allow this?
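The sphere-by-sphere refinement of the solutions resembles Hensel lifting, in which a root modulo p is refined one pinary digit, that is one p-adic sphere of radius p-n-1, at a time. A minimal sketch for a simple root; the example x^2 = 2 in the 7-adics is my own illustration, not taken from the text.

```python
def hensel_lift(f, df, r, p, n):
    """Lift a simple root r of f mod p to a root mod p^n (Hensel's lemma)."""
    pk = p
    for _ in range(n - 1):
        # Each Newton-like step fixes the next pinary digit of the root,
        # i.e. localizes it inside a smaller p-adic sphere.
        t = (-f(r) // pk) * pow(df(r), -1, p) % p
        r = r + t * pk
        pk *= p
    return r

f = lambda x: x**2 - 2           # sqrt(2) exists as an ordinary 7-adic number
df = lambda x: 2*x
r = hensel_lift(f, df, 3, 7, 5)  # start from 3, since 3**2 = 2 mod 7
print(r, f(r) % 7**5)  # 4567 0
```

The successive values 3, 10, 108, 2166, 4567 agree modulo increasing powers of 7, which is exactly the p-adic convergence the construction above exploits.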

See the article TGD as it is towards end of 2024: part I or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

How the possible quantum variants of LLMs could be updated?

If one can assign the training data of LLMs to quantum states, there is a hope that the retraining need not start from scratch and could become more flexible and less expensive.

How to assign to classical associations their quantum representations?

In LLMs both inputs and outputs are associations represented as text. The quantum dynamics must not affect the content of the input. A classical association is encoded as a bit sequence. Associations can be enumerated, and each corresponds to its own bit sequence serving as an address: a symbolic representation that no longer contains the original information. The Gödel numbering of statements serves as an analogy.

Also the quantum equivalent of the number of a classical association, as a qubit sequence, is just a name for it. Quantum processing can operate on these qubit sequences and produce longer quantum associations, which in qubit measurements yield longer associations and superpositions of them. The outcome is determined by the measurement of the bits appearing in the numbering of the associations.
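The classical side of this numbering can be sketched directly; the class and method names below are my own, and the bit strings are pure addresses carrying none of the content, in the spirit of Gödel numbering:

```python
# Hypothetical registry: each classical association (input -> output text)
# gets an integer index; the index's bit sequence serves as its address
# and carries none of the original information, like a Goedel number.
class AssociationRegistry:
    def __init__(self):
        self._index = {}   # association -> number
        self._table = []   # number -> association

    def register(self, assoc):
        """Assign the next free number to a new association (idempotent)."""
        if assoc not in self._index:
            self._index[assoc] = len(self._table)
            self._table.append(assoc)
        return self._index[assoc]

    def address_bits(self, assoc, width=8):
        """Bit-sequence address of an association."""
        return format(self.register(assoc), f"0{width}b")

    def lookup(self, bits):
        """Recover the association from its address."""
        return self._table[int(bits, 2)]

reg = AssociationRegistry()
bits = reg.address_bits(("cat", "mouse"))
print(bits)  # 00000000, the address of the first registered association
```

Adding a new association during retraining then amounts to registering one more number; the addresses of all existing associations are unchanged.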

Quantum operations followed by the measurement of qubits can only permute the classical associations. They can affect the association probabilities and perhaps add new associations in partial retraining. Various quantum superpositions of the quantum associations (of the numbers labelling them) are possible and correspond to the quantum counterpart of the concept "association A→ ...", where A is fixed.

This allows maximally simple representations at the quantum level. Arbitrarily complex associations A→ ... can be quantum-encoded by listing them. A local bit-qubit correspondence is the simplest one, and the same operation could change the value of both the bit and the qubit. If an electric field does this, then this could be the case for transistors as bits if each bit is accompanied by an OH-O- qubit. In the ground state, the minimum energy state of the OH-O- qubit would correspond to the ordinary bit.

Is quantum entanglement between bits and qubits necessary or even possible? Could one keep the bit level as it is, perform quantum operations on the qubit sequences, and then transform them to bit sequences, so that also associations not possible for the classical computer could appear in the output? This option cannot be excluded if the bit sequences represent analogs of Gödel numbers for associations.

Does quantum non-determinism reduce to classical non-determinism for "small" state function reductions (SSFRs)?

In ZEO, the classical non-determinism affects neither the 3-surfaces nor the fermionic states at the boundary of the CD. This is consistent with the identification of the non-determinism of SSFRs as classical non-determinism.

The classical Bohr orbits would be non-unique due to the classical non-determinism, which appears already for 2-D minimal surfaces. The very fact that computer programs can be realized strongly suggests that this non-determinism is present.

There are two types of non-determinism: a non-deterministic time-like crystal (time crystal) and a non-deterministic space-like crystal represent them. Each cell of these crystals would be a seat of non-determinism, meaning that the surface branches at the locus of the non-determinism and a single branch is selected. This makes it possible to generate a conscious memory in a memory recall.

Reading and writing transform these two kinds of non-determinisms to each other.

  1. Reading a space-like crystal representing a data bit sequence creates a time-like representation as a sequence of SSFRs if, at a given moment, the qubits of the geometric past are frozen. A series of SSFRs, a conscious stream, a "self", is created at the quantum level. Therefore a space-like non-deterministic crystal can be transformed to a time crystal. In writing the opposite happens. The minimum energy state for the associated quantum states selects a unique configuration.

    Quantum entanglement between separate non-deterministic representations (cognitive representations, possibly allowing characterization in terms of a p-adic topology for a ramified prime) is possible. Also entanglement between time-like and space-like non-deterministic degrees of freedom is possible.

  2. How could these reading and writing processes be realized? A relation to topological quantum computation, in which time-like and space-like braidings by monopole flux tubes play a central role, suggests a possible answer (see this). Think of dancers connected by threads to fixed points on a wall. The dance can be interpreted as a time-like braiding and induces a space-like braiding as knotting and linking of the threads connecting the dancers. In TGD the threads correspond to monopole flux tubes.
But what does the classical non-determinism mean?

I have mentioned several times classical non-determinism at the level of holography = holomorphy principle identifying space-time surfaces as roots (f1,f2)=(0,0) of analytic functions of H coordinates. At the level of 3-D holographic data branching should occur so that the algebraic equations allow several roots with different tangent spaces.

  1. What is the precise meaning of the analogy between holographic data as 3-surfaces and the frames of soap films? Could all roots (f1,f2)=(0,0) correspond to different alternatives for this non-determinism or are there some restrictions? It seems that the 4-D roots, which can be glued together continuously cannot correspond to the non-determinism. The cusp catastrophe serves as a good example of the situation. The regions of the space-time surface representing different roots cannot be regarded as distinct space-time surfaces.

    Rather, it seems that the non-determinism requires multiplicity of the 4-D tangent space and in this kind of situation one must select one branch.

  2. Could the choice of only one root in the branching situation give rise to non-determinism? Is it possible to implement boundary conditions stating classical and quantal conservation laws at the interfaces of the regions corresponding to different branches?

    Any general coordinate invariant action expressible in terms of the induced geometry is consistent with holography = holomorphy principle (see this and this). Is it permissible to choose the classical action so that boundary conditions can be satisfied when a single root is selected? This would force a coupling constant evolution for the parameters of the action if one also assumes that the classical action exponential, as an exponent of the Kähler function, corresponds to a power of the discriminant D defined as a product of root differences. The same choice should be made at the fermionic level as well: the supersymmetry fixing the modified fermionic gamma matrices, once the bosonic action is fixed, would guarantee this.

  3. Also, the roots u for a polynomial P(u) of the hypercomplex real coordinate u assignable to the singularities as loci of non-determinism at the string world sheets come to mind. These roots must be real. At criticality a new root could appear. Also branching could occur and relate to the fermion pair creation possible only in 4-D space-time thanks to the existence of exotic smooth structures (see this and this). Could these roots represent the positions of qubits?
What could the updating of the training material by adding an association mean at a fundamental level?

Retraining cannot be only the manipulation of association probabilities; it must also allow the addition of new associations. The scope of the concept "associations related to a given input" is expanded and the complexity increases.

If the associations are enumerated by bit sequences, it is enough to assign to the new association a new classical bit sequence, and to this bit sequence a qubit sequence by the bit-qubit correspondence. The superposition of the quantum counterpart of the new association with the previous qubit sequences should be possible. Just as in LLMs, combinations of the basic associations, mapped to qubit sequences, into longer quantum association chains should be possible.

See the article Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life? or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.