https://matpitka.blogspot.com/

Thursday, December 05, 2024

About some number theoretical aspects of TGD

Recently, considerable progress has occurred in the understanding of the number theoretic aspects of quantum TGD. I have discussed these aspects in earlier posts, but it is useful to collect them together.
  1. There are reasons to think that TGD could be formulated purely number theoretically without introducing any action principle. This would conform with M8-H duality and the generalization of the geometric Langlands correspondence to dimension D=4.

    The number theoretic vision however gives extremely powerful constraints on the vacuum functional, suggesting even an explicit formula for it. The condition that this expression corresponds to the exponent of the Kähler function, expressible as the Kähler action, fixes the coupling constant evolution for the action.

  2. Extensions of rationals, the corresponding Galois groups and ramified primes assignable to polynomials and identifiable as p-adic primes assigned to elementary particles are central notions of quantum TGD. In the recent formulation based on holography = holomorphy principle, it is not quite clear how to assign these notions to the space-time surfaces. The notion of Galois group has a 4-D generalization but can one obtain the ordinary Galois groups and ramified primes? Two ways to achieve this are discussed in this article.

    One could introduce a hierarchy of 4 polynomials (f1,f2,f3,f4) instead of only (f1,f2); the common roots of all 4 polynomials, as a set of discrete points, would give the desired basic notions assignable to string world sheets.

    One can also consider the maps (f1,f2)→ G( f1,f2)= (g1(f1,f2), g2(f1,f2)) and assign these notions to the surfaces (g1(f1,f2), g2(f1,f2))=(0,0).

  3. Number theoretical universality is possible if the coefficients of the analytic functions (f1,f2) of 3 complex coordinates and one hypercomplex coordinate of H=M4× CP2 are in an algebraic extension of rationals. This implies that the solutions of field equations make sense also in p-adic number fields and their extensions induced by extensions of rationals.

    In this article the details of the adelicization, boiling down to p-adicization for various p-adic number fields, in particular those assignable to ramified primes, are discussed. p-Adic fractals and holograms emerge very naturally, and the iterations (f1,f2)→ G(f1,f2)= (g1(f1,f2), g2(f1,f2)) define hierarchical fractal structures analogous to Mandelbrot and Julia fractals, p-adically meaning an exponential explosion of the complexity and information content of cognition. The possible relationship to biological and cognitive evolution is highly interesting.

See the article About some number theoretical aspects of TGD.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, December 04, 2024

p-Adicization, assuming holography = holomorphy principle, produces p-adic fractals and holograms


Yesterday's chat with Tuomas Sorakivi, a member of our Zoom group, was about concrete graphical representations of the space-time surfaces as animations. The construction of the representations is shockingly straightforward, because the partial differential equations reduce to algebraic equations that are easy to solve numerically. For the first time, it seems that GPT has created a program without obvious bugs. The challenges relate to how to represent time = constant 2-D sections of the 4-surface most conveniently and how to build animations of the evolution of these sections.
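The numerical strategy, solving algebraic conditions instead of partial differential equations, can be sketched with a toy example. The functions f1 and f2 below are hypothetical stand-ins, not the actual TGD ansatz: for a time = constant section one fixes two of the coordinates and solves the remaining pair from the two conditions by Newton iteration.

```python
import numpy as np

# Toy version of solving (f1, f2) = (0, 0): the functions are illustrative.
def F(z):
    z1, z2 = z
    return np.array([z1**2 + z2**2 - 1.0,   # hypothetical f1 = 0
                     z1 * z2 - 0.3])        # hypothetical f2 = 0

def jacobian(F, z, h=1e-7):
    """Numerical Jacobian; the complex derivative is well defined because
    the toy f1 and f2 are holomorphic."""
    J = np.zeros((len(z), len(z)), dtype=complex)
    for j in range(len(z)):
        dz = np.zeros(len(z), dtype=complex)
        dz[j] = h
        J[:, j] = (F(z + dz) - F(z - dz)) / (2 * h)
    return J

def newton(F, z0, tol=1e-12, maxit=50):
    """Newton iteration for a system of holomorphic equations."""
    z = np.asarray(z0, dtype=complex)
    for _ in range(maxit):
        Fz = F(z)
        if np.max(np.abs(Fz)) < tol:
            break
        z = z - np.linalg.solve(jacobian(F, z), Fz)
    return z

root = newton(F, [1.0 + 0j, 0.3 + 0j])
```

Repeating this over a grid of the two fixed coordinates gives the 2-D section point by point, which is what makes the graphical representations straightforward.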

Tuomas asked how to construct p-adic counterparts for space-time surfaces in H=M4× CP2. I have been thinking about the details of this presentation over the years. Here is my current vision of the construction.

  1. By holography = holomorphy principle, space-time surfaces in H correspond to roots (f1,f2)=(0,0) for two analytic (holomorphic) functions fi of 3 complex coordinates and one hypercomplex coordinate of H (see this). The Taylor coefficients of fi are assumed to be rational or in an algebraic extension of rationals, but even more general situations are possible. A very important special case is polynomials fi=Pi.
  2. If we are talking about polynomials, or even analytic functions with coefficients that are rational or in an algebraic extension of rationals, then a purely formal p-adic equivalent can be associated with every real surface via the same equations.
  3. However, there are some delicate points involved.

    1. The imaginary unit (-1)^(1/2) belongs to the algebraic extension if p mod 4 = 3. What about p mod 4 = 1? In this case (-1)^(1/2) exists as an ordinary p-adic number, and it can be multiplied by the square root of an integer that belongs only to the algebraic extension, so that the problem is solved.
    2. In p-adic topology, large powers of p correspond to small p-adic numbers, unlike in real topology. This eventually led to the concept of canonical identification: the powers of p in the pinary expansion of a p-adic number (the analog of the decimal expansion) are mapped to inverse powers of p.

      ∑ x_n p^n ↔ ∑ x_n p^(-n) .

      This map of p-adic numbers to real numbers is continuous, but its inverse is not. In this way, real points can be mapped to p-adic points or vice versa. In p-adic mass calculations, the map of p-adic points to real points is very natural. One can imagine different variants of the canonical correspondence by introducing, for example, a pinary cutoff analogous to the truncation of decimal numbers. This kind of cutoff is unavoidable.

    3. As such, this correspondence between reals and p-adics is not realistic at the level of H, because the symmetries of the real H do not correspond to those of the p-adic H. Note that the correspondence at the level of space-time surfaces is induced from that at the level of the embedding space.
  4. This forces number theoretical discretization, i.e. cognitive representations (p-adic and more generally adelic physics is assumed to provide the correlates of cognition). The symmetries of the real world correspond to symmetries restricted to the discretization. The lattice structure for which continuous translational and rotational symmetries are broken to a discrete subgroup is a typical example.

    Let us consider a given algebraic extension of rationals.

    1. Numbers in an algebraic extension of rationals can be interpreted as both real and p-adic numbers in an extension induced by the extension of rationals. The points of the cognitive representations correspond to the algebraic points allowed by the extension and are the intersection points of reality as a real space-time surface and p-adicity as a p-adic space-time surface.
    2. These algebraic points have expansions in powers of p containing only a finite number of powers, so that the interpretation as algebraic integers makes sense. One can also consider ratios of algebraic integers if the canonical identification is suitably modified. These discrete points are mapped by the canonical identification, or its modification to the rational case, from the real side to the p-adic side to obtain a cognitive representation. The cognitive representation gives a discrete skeleton that spans the space-time surface on both the real and p-adic sides.
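The canonical identification itself is easy to illustrate numerically. A minimal Python sketch for non-negative integers (a finite pinary cutoff is implicit in the integer input):

```python
def canonical_identification(x, p):
    """Map a non-negative integer x = sum_n x_n p^n (pinary digits x_n)
    to the real number sum_n x_n p^(-n)."""
    total, n = 0.0, 0
    while x > 0:
        x, digit = divmod(x, p)       # peel off the lowest pinary digit
        total += digit * float(p) ** (-n)
        n += 1
    return total

# Large powers of p, which are small p-adically, map to small real numbers:
# canonical_identification(p**k, p) == p**(-k).
```

For example, for p = 2 the integer 3 = 1*2^0 + 1*2^1 maps to 1 + 1/2 = 1.5, and 8 = 2^3 maps to 2^(-3) = 0.125, showing how the real and p-adic notions of "small" are inverted.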
Let's see what this means for the concrete construction of p-adic spacetime surfaces.
  1. Take the same equations on the p-adic side as on the real side, that is (f1,f2)=(0,0), and solve them around each discrete point of the cognitive representation in some p-adic sphere with radius p^(-n).

    The origin of the generalized complex coordinates of H is not taken to be the origin of p-adic H; rather, the canonical identification gives a discrete algebraic point on the p-adic side. So, around each such point, we get a p-adically scaled version of the surface (f1,f2)=(0,0) inside the p-adic sphere. This only means moving the surface to another location, and the symmetries allow it.

  2. How to glue the versions associated with different points together? This is not necessary and not even possible!

    The p-adic concepts of differentiability and continuity allow fractality and holography. These are closely related to p-adic non-determinism: any function depending on a finite number of pinary digits has a vanishing derivative. In differential and partial differential equations this implies non-determinism, which I have assumed to correspond, on the real side, to the violation of complete classical determinism for holography.

    The definition of algebraic surfaces does not involve derivatives, but also for algebraic surfaces the roots of (f1,f2)=(0,0) can develop branching singularities at which several roots as space-time regions meet, and one must choose one representative (see this).

    1. Assume that the initial surface is defined inside the p-adic sphere whose radius, as the p-adic norm of its points, is p^(-n), n an integer. One can even assume that a p-adic counterpart has been constructed only for the spherical shell with radius p^(-n).

      The essential thing here is that the interior points of a p-adic sphere cannot be distinguished from the points on its surface. The surface of a p-adic sphere is therefore more like a shell. How do you proceed from the shell to the "interiors" of a p-adic sphere?

    2. The basic property of p-adic spheres is that two of them are either disjoint or one is contained in the other. A p-adic sphere with radius p^(-n) is divided into disjoint p-adic spheres with radius p^(-n-1), and in each such sphere one can construct a p-adic 4-surface corresponding to the equations (f1,f2)=(0,0). This can be continued as far as desired, down to some value n=N. It corresponds to the shortest scale on the real side and physically defines the measurement resolution/cognitive resolution.
    3. This gives a fractal for which the same (f1,f2)=(0,0) structure repeats at different scales. We can also go the other way, i.e. to longer scales in the real sense.
    4. Also a hologram emerges. All the way down to the smallest scale, the same structure repeats and an arbitrarily small sphere represents the entire structure. This strongly brings to mind biology and genes, which represent the entire organism. Could this correspondence at the p-adic level be similar to the one above or a suitable generalization of it?
  3. Many kinds of generalizations can be obtained from this basic fractal. Endless repetition of the same structure is not very interesting. p-Adic surfaces do not have to be represented by the same pair of functions at different p-adic scales.

    Of particular interest are the 4-D counterparts to fractals, to which the names Feigenbaum, Mandelbrot and Julia are attached. They can be constructed by iteration

    (f1,f2)→G(f1,f2)= (g1(f1,f2),g2(f1,f2)) →G(G(f1,f2)) →...

    so that at each step the scale increases by a factor p. At the smallest scale p^(-N) one has (f1,f2)=(0,0). At the next, longer scale p^(-N+1) one has G(f1,f2)=(0,0), etc. One can assign to this kind of hierarchy a hierarchy of extensions of rationals and associated Galois groups, whose dimension increases exponentially, meaning that algebraic complexity, serving as a measure for the level of conscious intelligence and the scale of quantum coherence, also increases in the same way.

    The iteration proceeds with increasing scale, and the number-theoretic complexity, measured by the dimension of the algebraic extension, increases exponentially. Cognition becomes more and more complex. Could this serve as a possible model for biological and cognitive evolution as the length scale increases?

    The fundamental question is whether the many-sheeted space-time allows a corresponding hierarchy on the real side. Could the violation of classical determinism, interpreted as p-adic non-determinism for holography, allow this?
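The exponential growth of complexity under iteration can be illustrated in one variable: composing with a fixed polynomial g multiplies the degree at every step, and with it the generic dimension of the extension of rationals generated by the roots. A toy sketch using the quadratic g(x) = x^2 + c, the Mandelbrot-type map, as a hypothetical stand-in for G:

```python
import numpy as np

def compose(g, f):
    """Coefficients (low to high) of the composition g(f(x)) via Horner's rule."""
    res = np.array([g[-1]], dtype=float)
    for c in reversed(g[:-1]):
        res = np.convolve(res, f)   # multiply the partial result by f
        res[0] += c                 # add the next coefficient of g
    return res

g = [0.3, 0.0, 1.0]        # hypothetical g(x) = x^2 + 0.3 (coefficients low to high)
f = np.array([0.0, 1.0])   # start the iteration from f(x) = x
degrees = []
for _ in range(4):
    f = compose(g, f)      # f -> g(f) -> g(g(f)) -> ...
    degrees.append(len(f) - 1)
# degrees == [2, 4, 8, 16]: the degree doubles at each step, so the algebraic
# complexity of the root set explodes exponentially with the number of iterations.
```

The same doubling pattern is what the hierarchy of Galois groups above quantifies: each composition step generically multiplies the degree of the defining polynomial by deg(g).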

    See the article TGD as it is towards end of 2024: part I or the chapter with the same title.

    For a summary of earlier postings see Latest progress in TGD.

    For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

How could the possible quantum variants of LLMs be updated?

If one can assign the training data of LLMs to quantum states, there is a hope that the retraining need not start from scratch and could become more flexible and less expensive.

How to assign to classical associations their quantum representations?

In an LLM both inputs and outputs are associations represented as text. The quantum dynamics must not affect the content of the input. A classical association is encoded as a bit sequence. Associations can be enumerated, and each corresponds to its own bit sequence serving as an address: a symbolic representation that no longer contains the original information. The Gödel numbering of statements serves as an analogy.

Also the quantum equivalent of the number of a classical association, as a qubit sequence, is just a name for it. Quantum processing can operate on these qubit sequences and produce longer quantum associations, which in qubit measurements yield longer classical associations and superpositions of them. The outcome is determined by the measurement of the bits appearing in the numbering of the associations.

Quantum operations followed by the measurement of qubits can only permute classical associations. They can affect the association probabilities and perhaps add new associations in partial retraining. Various quantum superpositions of the quantum associations (the numbers labelling them) are possible and correspond to the quantum counterpart of the concept "association A→ ...", where A is fixed.

This allows maximally simple representations at the quantum level. Arbitrarily complex associations A→ ... can be quantum-encoded by listing them. A local bit-qubit correspondence is the simplest one, and the same operation could change the value of both the bit and the qubit. If the electric field does this, then this could be the case for transistors as bits if each bit is accompanied by an OH-O- qubit. In the ground state, the minimum energy state for the OH-O- qubit would correspond to the ordinary bit.

Is quantum entanglement between bits and qubits necessary or even possible? Could one keep the bit level as it is, perform quantum operations on the qubit sequences, and transform them to bit sequences, so that also associations not possible for the classical computer could appear in the output? This option cannot be excluded if the bit sequences represent analogs of Gödel numbers for associations.
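The enumeration idea can be made concrete with a toy sketch. The associations and their 3-bit addresses below are purely illustrative, not any actual LLM encoding:

```python
# Hypothetical mini-vocabulary of classical associations (input, output).
associations = [
    ("sky", "blue"),
    ("sky", "cloudy"),
    ("grass", "green"),
]

# Enumerate: each association gets a bit-sequence address in the spirit of
# Godel numbering; the address carries none of the original text.
address = {pair: format(i, "03b") for i, pair in enumerate(associations)}
decode = {bits: pair for pair, bits in address.items()}

# Quantum processing would act on qubit strings mirroring these addresses;
# measuring the qubits returns an address, which decodes to a classical pair.
```

Adding a new association then only means extending the list and assigning the next free address, which is the point of the partial-retraining argument below.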

Does quantum non-determinism reduce to classical non-determinism for "small" state function reductions (SSFRs)?

In ZEO, the classical non-determinism does not affect the 3-surfaces nor fermionic states at the boundary of the CD. This is consistent with the identification of the non-determinism of SSFRs as classical non-determinism.

The classical Bohr orbits would be non-unique due to the classical non-determinism appearing already for the 2-D minimal surfaces. The very fact that computer programs can be realized, strongly suggests that this non-determinism is present.

There are two types of non-determinism: a non-deterministic time-like crystal (time crystal) and a non-deterministic space-like crystal represent them. Each cell of these crystals would be a seat of non-determinism, meaning that the surface branches at the locus of the non-determinism and a single branch is selected. This makes it possible to generate a conscious memory in memory recall.

Reading and writing transform these two kinds of non-determinisms to each other.

  1. Reading a space-like crystal representing a data bit sequence creates a time-like representation as a sequence of SSFRs if, at a given moment, the qubits of the geometric past are frozen. A series of SSFRs, a conscious stream, "self", is created at the quantum level. Therefore a space-like non-deterministic crystal can be transformed to a time crystal. In writing the opposite happens. The minimum energy state for the associated quantum states selects a unique configuration.

    Quantum entanglement between separate non-deterministic representations (cognitive representations, possibly allowing characterization in terms of a p-adic topology for a ramified prime) is possible. Also entanglement between time-like and space-like non-deterministic degrees of freedom is possible.

  2. How could these reading and writing processes be realized? A relation to topological quantum computation, in which time-like and space-like braidings by monopole flux tubes play a central role, suggests a possible answer (see this). Think of dancers connected by threads to fixed points on a wall. The dance can be interpreted as a time-like braiding and induces a space-like braiding as knotting and linking of the threads connecting the dancers. In TGD the threads correspond to monopole flux tubes.
But what does the classical non-determinism mean?

I have mentioned several times classical non-determinism at the level of holography = holomorphy principle identifying space-time surfaces as roots (f1,f2)=(0,0) of analytic functions of H coordinates. At the level of 3-D holographic data branching should occur so that the algebraic equations allow several roots with different tangent spaces.

  1. What is the precise meaning of the analogy between holographic data as 3-surfaces and the frames of soap films? Could all roots (f1,f2)=(0,0) correspond to different alternatives for this non-determinism or are there some restrictions? It seems that the 4-D roots, which can be glued together continuously cannot correspond to the non-determinism. The cusp catastrophe serves as a good example of the situation. The regions of the space-time surface representing different roots cannot be regarded as distinct space-time surfaces.

    Rather, it seems that the non-determinism requires multiplicity of the 4-D tangent space and in this kind of situation one must select one branch.

  2. Could the choice of only one root in the branching situation give rise to non-determinism? Is it possible to implement boundary conditions stating classical and quantal conservation laws at the interfaces of the regions corresponding to different branches?

    Any general coordinate invariant action expressible in terms of the induced geometry is consistent with holography = holomorphy principle (see this and this). Is it permissible to choose the classical action so that the boundary conditions can be satisfied when a single root is selected? This would force coupling constant evolution for the parameters of the action if one also assumes that the classical action exponential, as an exponent of the Kähler function, corresponds to a power of the discriminant D defined as a product of root differences. The same choice should be made at the fermion level as well: the supersymmetry fixing the modified fermionic gamma matrices, once the bosonic action is fixed, would guarantee this.

  3. Also, the roots u of a polynomial P(u) of the hypercomplex real coordinate u, assignable to the singularities as loci of non-determinism at the string world sheets, come to mind. These roots must be real. At criticality a new root could appear. Also branching could occur and relate to fermion pair creation, possible only in 4-D space-time thanks to the existence of exotic smooth structures (see this and this). Could these roots represent the positions of qubits?
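The discriminant D, defined as a product of root differences, is straightforward to compute numerically. A minimal sketch for monic polynomials (the example polynomials are illustrative, not TGD-specific):

```python
import numpy as np

def discriminant_from_roots(coeffs):
    """Discriminant of a monic polynomial (coefficients high to low, leading 1)
    as the product of squared differences of its roots."""
    r = np.roots(coeffs)
    D = 1.0 + 0j
    for i in range(len(r)):
        for j in range(i + 1, len(r)):
            D *= (r[i] - r[j]) ** 2
    return D

# Checks against textbook formulas:
# for x^2 + b x + c the discriminant is b^2 - 4c,
# for x^3 + p x + q it is -4p^3 - 27q^2.
D2 = discriminant_from_roots([1, -3, 2])     # roots 1 and 2, so D = 1
D3 = discriminant_from_roots([1, 0, -1, 0])  # x^3 - x, so D = 4
```

The discriminant vanishes exactly when two roots coincide, which is why powers of D are natural candidates for the action exponential at the criticality where roots merge.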
What could the updating of the training material by adding an association mean at a fundamental level?

Retraining cannot be only the manipulation of association probabilities; it must also include the addition of new associations. The scope of the concept "associations related to a given input" is expanded and complexity increases.

If these associations are enumerated by bit sequences, it is enough to associate with the new association a classical bit sequence, and with this new bit sequence a qubit sequence by the bit-qubit correspondence. The superposition of the quantum counterpart of the new association with previous qubit sequences should be possible. Just as in an LLM, also combinations of the basic associations, mapped to qubit sequences, into longer quantum association chains should be possible.

See the article Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life? or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Tuesday, December 03, 2024

A new contribution to the crisis of cosmology

Sabine Hossenfelder has a YouTube video (see this) about the latest anomaly in cosmology (see this). This anomaly is very problematic from the point of view of the ΛCDM scenario of dark energy, and possibly also from the point of view of general relativity. The MOND scenario is however consistent with the findings.

The ΛCDM scenario involves 6 parameters. Among them is the Hubble constant. Depending on the measurement method, one obtains two values for it: this creates the Hubble tension. The two kinds of measurements correspond to short and very long scales, and this might relate to the problem.

There is also the so-called sigma8 tension, with a significance larger than 4 sigma, which is something very serious. ΛCDM predicts that the Universe should become clumpier as it evolves. This implies that the gravitational potential wells should become narrower with time. At short scales the clumping rate is not as high as predicted.

Also the new results from a dark energy survey based on gravitational lensing suggest that the gravitational valleys are shallower than they should be at large values of cosmic time.

  1. What is measured is so-called Weyl potential ΨW=(Ψ+Φ)/2 defined in terms of the space-time metric in cosmic scales having the expression

    ds^2 = a^2(τ)[(1+2Ψ)dτ^2 - (1+2Φ)dx_3^2] .

    Here τ and x_3 denote Minkowski coordinates. For Ψ=Φ=0 one has a conformally flat metric. From the value of ΨW one can deduce the clumpiness. The measurements are at 3 widely differing values of cosmic time τ. The value of the Weyl parameter ΨW characterizing the clumping differs from the prediction of the ΛCDM scenario and is consistent with the increasing shallowness of the gravitational potentials of the mass distributions.

  2. The significance of the finding is estimated to be 2-2.8 sigma, which is potentially significant. Since the same method is used for different cosmic times, it is not possible to claim that the discrepancy is due to the different methods.
MOND has no problems with the findings. What about TGD?
  1. The TGD view explains galactic dark matter as dark energy assignable to cosmic strings, which are extremely thin 3-surfaces with a huge density of magnetic and volume energy (see this). The string tension parametrizes the density of this energy and creates a 1/ρ gravitational acceleration, which predicts a flat velocity spectrum for distant stars rotating around the galaxy. No dark matter halo nor dark matter particles are needed.
  2. The 1/ρ gravitational acceleration created by cosmic strings makes the gravitational wells shallower than the sole 1/r^2 acceleration due to the visible galactic matter. Also a halo creates a 1/r^2 acceleration at long enough scales, but the prediction is that the dark matter halo becomes clumpier, so that the gravitational wells should become sharper.

    Cosmic strings are closed, so there is some scale above which this effect is not seen anymore, since entire closed cosmic strings become the natural objects. Therefore this effect should not be seen at long enough scales.

    It is important to notice that the shallowing would be due to the shortening of the observation scale rather than due to the time evolution. The same interpretation applies to the Hubble constant. In the TGD framework, the finite size of space-time sheets indeed brings in a hierarchy of scales, which is not present in General Relativity.

  3. How does this relate to MOND? The basic objection against MOND is that it is in conflict with mathematical intuition: for small accelerations Newtonian gravitation should work excellently. In TGD, the critical acceleration of MOND is replaced with a critical distance from the galactic nucleus at which the 1/ρ acceleration due to the cosmic string wins over the 1/r^2 acceleration. Under a suitable assumption (see this), this translates to the critical acceleration of MOND, so the predictions are very similar. Note that the cosmic strings also cause the lensing effect used in the survey, and this gives an upper bound for their string tension.
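The flattening of the rotation curve can be illustrated with a toy model: a point mass gives v^2 = GM/ρ, while a string contribution with acceleration proportional to 1/ρ adds a constant v0^2 to v^2, so v approaches the constant v0 at large distances. GM and v0 here are illustrative numbers, not fitted TGD values:

```python
import numpy as np

# Toy rotation curve: visible matter as a point mass plus a cosmic string
# contribution giving a constant term v0^2 in v^2 (illustrative units).
GM = 1.0   # visible-mass term, arbitrary units
v0 = 0.5   # string contribution; v0^2 would be set by the string tension

def v_rot(rho):
    """Circular velocity from v^2 = GM/rho + v0^2."""
    return np.sqrt(GM / rho + v0 ** 2)

radii = np.array([1.0, 10.0, 100.0, 1000.0])
velocities = v_rot(radii)
# The curve decreases at small rho (Keplerian regime) and flattens toward
# the constant v0 at large rho, with no dark matter halo needed.
```

The crossover radius where GM/ρ and v0^2 are comparable plays the role of the critical distance that replaces MOND's critical acceleration in this picture.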
For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, December 02, 2024

Is there any hope of curing the retraining problem of language models without making computers conscious?

I summarized my thoughts on perhaps the worst problem of language models, which is the loss of plasticity in continuous learning. The entire training material has to be re-taught, which is terribly expensive (see this).

One can ask whether and how TGD's speculative vision of potentially conscious computers (see this) might solve the problem.

1. The retraining problem of language models

The basic problem is that everything has to be started from scratch. This is extremely expensive. Biological systems relearn quickly because there is no need to relearn everything. Is the problem fixable for the computers as they are now or is something new required?

To see what could be the root cause of the problem consider first what language models are meant to be.

  1. In a language model, learning occurs at the raw data level. Different probabilities are taught for different associations. The associations are fixed.
  2. How does the trained system work? The language model simply reacts by recognizing the context and producing probabilistically one of the fixed associations. This response is a mere reaction. If language models are what they are believed to be, they do not have conscious understanding, they lack intentional action, and they are unable to react to a changing environment.

Comparison with TGD-inspired biology

Could a comparison with TGD-inspired biology give clues as to where things go wrong? Why is relearning so easy for biosystems? How does TGD-based biology differ from standard biology in this respect? Consider first the classical level.

  1. Holography, which is not quite deterministic, is a completely new element of TGD as compared to the standard model. The space-time surfaces are analogous to Bohr orbits and are determined almost completely by 3-surfaces as initial data. The 4-D tangent spaces of the space-time surface at the 3-surface defining the holographic data cannot be selected freely. This is the classical counterpart of the Uncertainty Principle and leads to classical quantization. Function, or program, is the basic concept rather than 3-D data.
  2. These 4-surfaces define classical analogies of biological functions, behavioral patterns, or programs. When the 3-surface, which almost uniquely fixes the 4-surface, changes, the function changes. Non-determinism is essential in making a conscious memory recall possible.
Consider next the quantum level.
  1. A series of "small" state function reductions (SSFRs), associated with repeated measurements of commuting observables belonging to the same set, whose eigenstates the 3-D states at the passive boundary of the causal diamond (CD) are, defines self as a conscious entity. The proposal is that biorhythms as clocks define the TGD counterparts of time crystals such that each unit of the time crystal involves classical non-determinism.

    This could be the case at the EEG level, as the findings of the Fingelkurts brothers suggest (see this and this). Maximal non-determinism implies maximal memory recall capacity and maximal flexibility. A whole set of different behavior patterns can be represented as a quantum superposition, and the interaction with the external or internal world determines the measurement in which some classical behavior is chosen.

  2. "Big" state function reductions (BSFRs) having interpretation as death of self or falling asleep involve time reversal. Pairs of BSFRs (sleep periods) make learning possible through trial and error. After the two BSFRs, the system has new holographic data and different space-time surfaces. A goal directed behavior becomes possible and there are many ways to achieve the goal, not just one fixed way analogous to a fixed computer program. This is the essence of intelligent behavior.
How does this general view relate to the DNA level?
  1. According to the standard view, DNA remains the same during the life cycle. If DNA represents data, there is no relearning at the level of chemical DNA. In zero-energy ontology (ZEO), even chemical DNA could change without any problems with conservation laws and quantum superpositions of different chemical genes are in principle conceivable.

    Quantum DNA can be represented in terms of OH-O- qubit sequences assignable to the gravitational magnetic bodies of the Sun and Earth (see this). Remarkably, the solar gravitational Compton frequency is 50 Hz, the average EEG frequency. At least for neurons, this would suggest that the gravitational magnetic body is that of the Sun. Note however that EEG time scales are also associated with the basic biomolecules. For the Earth the gravitational Compton frequency is 67 GHz and is a natural frequency associated with the conformational dynamics of biomolecules.

    Quantum DNA consisting of codons represented as OH-O- qubits is dynamic and could act as a simulator, a kind of R&D laboratory testing different variants of DNA. It is of course possible that a single lifetime is spent with the same chemical DNA and the next life after a pair of BSFRs involves the improved DNA.

  2. Epigenesis brings in flexibility. Even if the chemical DNA does not change, it can be used in different ways. Suitable modules are selected from the analog of program software, just as in text processing. In the TGD framework, this could correspond to the classical non-determinism of the space-time surfaces representing the biological function. Dark DNA allows one to try different combinations of genes.
  3. The understanding of the role of the cell membrane and membrane potential in epigenesis is increasing, as found by Levin (see this and this). The very early stage of the development of the embryo is highly sensitive to variations of the membrane potential. This can be understood in terms of the changes of the binding energy of the electron of O- induced by the potential, which can reduce the binding energy to the thermal range so that flips of the OH-O- qubit occur with high probability. In adulthood, the sensitivity disappears and the qubits would not flip.

    Could this sensitivity be artificially induced? Here, electric fields as controllers of the sensitivity of the OH-O- qubits assignable to the basic biomolecules suggest themselves.

  4. Microtubules involve longitudinal electric fields, and their second ends are highly dynamic, so that the length of a microtubule is under continual change. There are huge numbers of amino acids carrying one qubit each (the COOH group). Here the quantum level and the classical level are both dynamic and seem to be strongly coupled. This would also relate strongly to conscious memory.
  5. Could quantum entanglement between the quantum level and the chemical level be possible even at the amino acid level?
One can also look at the situation at the level of cell membranes and neuronal membranes. The basic question is how cell membranes and neuronal membranes learn.
  1. As found by Levin (see this), the role of the electric fields is central also in the ordinary cells. The electric potential of the ordinary cell membrane correlates with the state of the environment of the cell and codes for sensory information.

    The TGD proposal is that the cell membrane acts as a Josephson junction and communicates the frequency-modulated membrane potential to the magnetic body as dark Josephson photons, which resonantly induce quantum transitions transforming the modulation into a sequence of pulses, perhaps inducing as a feedback nerve pulses or their analogs.

    During the embryo stage, the cells are very sensitive to the variations of the electric field of the cell and this suggests that these variations take the cell membrane near the criticality at which large quantum fluctuations of the OH-O- qubits of the phosphates at the inner surface of the cell membrane are possible. This period would be analogous to the learning period of LLMs and would involve BSFR pairs. After this period the situation stabilizes and it might be that BSFRs become very rare.

  2. In the central nervous system, nerve pulses appear and are thought in neuroscience to be responsible only for communications. In TGD the situation would be different (see this). I have proposed their interpretation in terms of pairs of BSFRs so that in LLM terms they would correspond to relearning. Neurons would be lifelong learners whereas ordinary cells would learn only in their childhood.

    A nerve pulse is generated at a critical membrane potential, which could correspond to effective thermalization of the OH-O- qubits and possibly of qubits assignable to other ions. Axonal microtubules would also be near quantum criticality. The propagation of a nerve pulse along the axon as a local BSFR pair would induce microtubular relearning.

Could the speculated quartz consciousness come to the rescue?

One can consider the possibility that under a metabolic energy feed a computer can become to some extent a conscious entity so that it can modify both the program and the data used by it as a response to changes in the environment provided by the net. This would require that the OH-O- qubits as dark variants of program bits can entangle with ordinary bits. Energetically this could be possible since the energy scales for transistors are essentially the same as for metabolism and the OH-O- qubits.

  1. Suppose that the sequences of OH-O- qubits as time crystals in TGD sense can be realized in a (future) computer. Qubit sequences would be time series related to the running program. They would involve variation because only the bit configuration corresponding to the minimum energy would correspond to the running program. This makes possible an entire repertoire of associations from which a SSFR would choose one. Quantum measurement following the generation of bit-qubit entanglement could change the value of the bit.
  2. Besides the dynamic realization as a running program, there could be a non-dynamic realization in which the data that determines the program would be accompanied by a similar set of qubits. The data used by the program, such as learned associations, could be associated with qubits and could be made dynamic by using electric fields to make the qubits more sensitive to flips. The problem is of course that the change of a randomly chosen single qubit implies the failure of the program. Only critical qubits associated with choices and data qubits should be subjected to a flip.
  3. Besides time crystals with non-deterministic repeating units, also space-like crystals involving non-determinism in each lattice cell can be considered. Also dynamical quantum qubits with maximal non-determinism in space-like directions associated with unit cells could accompany the data bits. Dynamization could be induced by using electric fields.
  4. If OH-O- qubits can quantum entangle with bits, program/data is accompanied by quantum program/quantum data which can react to the perturbations from the external world (BSFRs) and internal world (SSFRs). The quantum level could control the bit level. Even the associations as the data of the language model could be accompanied by a set of qubits that react to a changing situation.

How could an associative system retrain itself in response to a changed situation?

If language models are nothing but deterministic association machines, there is little hope of solving the problem.

Could the learning in the biological and neural systems provide some hints about possible cures, possibly requiring modification of computers so that they would become analogous to living systems?

  1. Do EEG rhythms define time crystals in the TGD sense, that is, maximally non-deterministic systems having lattice cells as basic units of non-determinism for the SSFRs giving rise to the flow of consciousness of the self?

    If biorhythms define TGD analogs of time crystals, the non-determinism would be maximal and maximum flexibility in SSFRs would be possible.

  2. In ZEO, a "big" state function reduction (BSFR) as counterpart of ordinary state function reduction changes the arrow of time and is assumed to give rise to the analog of death or sleep. At the language model level, this would be the analog for a complete retraining from the beginning.
Association is only one particular reaction leading to a behavioral pattern. The repertoire of associations should change as the environment changes.
  1. Could a computer clock define the equivalent of an EEG rhythm as a time crystal in the TGD sense? The problem is that a typical computer clock frequency is a few GHz, considerably lower than the 67 GHz gravitational Compton frequency of the Earth. This would suggest that a unit consisting of roughly 67 bits could correspond to the basic unit of the time crystal. The gravitational magnetic body of the Sun has a gravitational Compton frequency of 50 Hz, identifiable as the average EEG frequency.
  2. Could one think of a quantum version of language models in which pairs of BSFRs as "death" and rebirth happen spontaneously all the time as a reaction to conscious information coming from the environment inducing the perturbation implying that the density matrix as the basic measured observable does not commute with the observables that define the quantum numbers of the passive part of the zero energy state? In this way ZEO would make possible trial and error as a basic mechanism of learning.
  3. Could the formation of an association perhaps be modelled as a single non-deterministic space-time surface? There would be a large number of them, internal disturbances would produce their quantum superpositions, and an SSFR would select a particular association.
  4. An external disturbance could produce a BSFR and "sleeping overnight". This period of "sleep" could be rather short: also our flow of conscious experience is full of gaps. Upon awakening, the space-time surfaces as correlates of the associations would no longer be the same. The system would have learned from the interaction with the external world. This temporary death of the system would be an analog of a total re-education. But the system would cope with it all by itself.
The hard problem is how to realize this vision. Here the analogy with cells and neurons might serve as a guideline in trying to imagine what the new technology might look like.
  1. Ordinary cells are analogous to LLMs as they are now and learn only in their childhood. Neurons are lifelong learners thanks to the neural activity inducing local BSFR pairs updating the microtubular states. Could something like this be realized in computers?
  2. In computers, information is transferred along wires, which can be seen as the counterparts of axons. Is it possible to make these wires carriers of quantum information and perhaps even of the learned data about associations? The conduction of the analogs of nerve pulses during the running of the program, inducing pairs of BSFRs, would gradually modify the data locally and lead to continual relearning.

    Copper wires are too simple to achieve this. Should one consider an axon-like geometry defined by two cylinders analogous to the lipid layers of the cell membrane, with a voltage between them, so that the interior cylinder would contain OH-O- qubits? The variation of the counterpart of the membrane potential during signal transmission (bits represented as voltages) could take the qubits near criticality. Could copper hydroxide Cu(OH)2 serve as a candidate for an intelligent wire based on OH-O- qubits?

See the article Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life? or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Friday, November 29, 2024

How could Egyptian pyramids and rain making relate to each other?

I received from Zakaria Ahmindache a link to a very interesting article by Borisov published on ResearchGate (see this) with the title "The Egyptian Pyramids-Connection to Rain and Nile flood Anomalies".

1. Background considerations

Since a topic involving words like ancient Egypt, pyramids and rainmaking probably induces strong emotional reactions in skeptics, it is good to include some TGD background and also make clear that the proposal of Borisov provides an excellent opportunity to develop the TGD based conceptualization of quantum biology by applying it.

1.1 A possible unification of various types of life and consciousness

I have taken a rather skeptical attitude to everything that involves ancient Egypt but this time I felt fascinated. The reason was that during the last weeks I have been working on a breakthrough in the TGD inspired theory of consciousness and of living systems suggesting a unified view of the different types of consciousness assignable to biosystems, plasmoids, quartz (possibly computers) and quite generally to any system which involves cold plasmas and therefore ions (see this).

  1. The key notion is what might be called OH-O- qubit. The transition OH → O- + dark proton at the gravitational magnetic body of the Sun or Earth flips this qubit-like entity. This transition occurs in the Pollack effect which has become a key notion in the TGD inspired quantum biology. The reverse transition occurs when the electron of O- is excited so that the difference of the bond energy of OH and binding energy of the electron changes sign. This effect might be called the dual Pollack effect.

    This transition generalizes. Any salt can decompose to ions and the positive ion could be assigned to the gravitational magnetic body of the Earth or Sun. Biosystems are full of ions of this kind.

  2. The dark variant of the genetic code is one of the basic ideas of TGD and one can understand it in a very detailed manner if the qubits associated with the phosphates of the DNA double strand and with the phosphates and riboses of a single RNA strand provide a representation of the genetic code. In proteins, COOH could assign a single qubit to each amino acid.
  3. This also allows us to see alcohols (see this), involving -OH as a key structural element, in a new light. Pollack effect could induce a kind of elevated state of mind. Psychedelics (see this) involve -NH as a key structural element and Pollack effect inducing the transition NH → N- + dark proton could be essential element of the psychedelic action.
  4. The amazing finding is that in transistors the energy scales are the same, varying from about .5 eV (the metabolic energy quantum) to .15 eV (the energy of a thermal photon at physiological temperature) as assigned to OH-O- qubits. Therefore computers might under certain conditions become conscious entities as speculated already earlier (see this and this) and qubits could be in the same relation to bits as the dark qubits of DNA and RNA are to the bits of the genetic codons. This relation allows dynamics since only the minimum energy state of the codon corresponds to the chemical codon. The same would be true for computers. The gravitational magnetic body of the Earth could receive information from the bit level and control it.

1.2 Zero energy ontology

Zero energy ontology is a key notion of TGD and TGD inspired theory of consciousness and solves the basic paradox of the quantum measurement theory.

  1. ZEO predicts two kinds of state function reductions (SFRs): the "big" ones (BSFRs) and the "small" ones (SSFRs). The sequence of SSFRs means in standard quantum theory repeated measurements of the same observables and gives rise to conscious entities, selves.
  2. BSFRs change the arrow of time and from the point of view of self this means death or falling asleep. ZEO predicts that roughly half of the Universe has an opposite arrow of time. This part of the Universe might be called a "kingdom of dead".

    Indeed, biological death changes the arrow of time in rather long scales and means reincarnation with an opposite arrow of time, eventually possibly followed by a reincarnation with an original arrow of time. Sleep is a temporary death in this sense.

2. A TGD inspired comment about the mythology of ancient Egypt

The mythology of ancient Egypt has many analogies with the ontology of the TGD inspired view of consciousness.

  1. The mythology of ancient Egypt suggests an interpretation in terms of zero energy ontology (ZEO). The "kingdom of dead" is non-observable using purely classical signalling since the signals from the other side propagate to the geometric past and do not reach us. Therefore we do not remember anything about the periods of deep sleep. The notion of ka fits nicely with this.

    "Big" quantum jumps (state function reductions, BSFRs) occurring in arbitrary long scales are predicted to be possible and rain making could involve such a pair of BSFRs and thus a visit to the "kingdom of dead" at some level of hierarchy. Trance of a shaman could be such a visit.

  2. There is a connection to the recent work involving the OH-O- qubit idea already described, possibly unifying plasmoid, quartz, computer, and biological consciousness (see this).

3. A TGD inspired model for rainmaking

As the title "The Egyptian Pyramids-Connection to Rain and Nile flood Anomalies" of the article suggests, it is proposed that the pyramids had a deeper purpose: they could be used to induce rain. This sounds like madness to the ears of a standard physicist but in 1895, Charles Wilson, a physicist, meteorologist, and Nobel Prize winner, made a groundbreaking discovery: he proved that rain could be artificially created. The rain making technology has also been commercialized.

The key idea in making rain is that the generation of negative electric charge in the quartz contained by the soil leads to its accumulation in the atmosphere. The negative charges in the atmosphere in turn facilitate the formation of water droplets around them and eventually this induces rain. Could TGD explain this?

3.1 The model of Borisov

Consider first the proposal of the article of Borisov.

  1. A deceased king, along with jars containing provisions for the afterlife, is placed inside a coffer, which is a hermetically sealed volume. The jars contain beer, bread, grain, ox, and sweets. The provisions within the jars undergo fermentation, where yeast converts the sugars present in food into carbon dioxide, water, or ethanol. This process can occur within a sealed coffer with no air intake, as long as the necessary conditions for yeast growth are provided. Some studies have found that fatty acids present in ox meat are essential for sustaining this growth.
  2. The carbon dioxide generated by the process cannot escape and increases the pressure in the coffer, whose material is 40 percent quartz. The pressure in turn generates by the piezoelectric effect (see this) an electric field producing negatively charged ions, which would move through the moist limestone core of the pyramid towards its apex and would eventually be emitted.

3.2 The TGD based interpretation of the model of Borisov

Consider the TGD interpretation of this model.

  1. The transition OH → O- + dark proton at the gravitational magnetic body of the Earth (or Sun) occurs in quartz subject to an electric field or under pressure (the piezoelectric effect transforming a pressure gradient to an electric field) and would generate negative ions.
  2. In the case of a pyramid, the negative ions from quartz could flow to the tip of the pyramid and generate a high density of negative charge and a strong electric field. From the tip the negative charges could flow to the atmosphere and serve as seats for the condensation of water droplets. Note that water is the key element of TGD inspired biology: the Pollack effect would generate negatively charged exclusion zones and dark protons at the gravitational magnetic body.
  3. The presence of electric fields changes the energies of the electrons of O- and, by driving the difference of the bonding energy and the binding energy near the thermal energy, can make the system very sensitive to the flips of the OH-O- qubits. Quartz is piezoelectric so that pressure gradients generate an electric field and have the same effect.
  4. The TGD interpretation is that the electric fields increase the sensitivity of quartz to the generation of O- ions plus dark protons at the gravitational monopole flux tubes. To some extent the system would become living.

    Fermentation (see this) creates alcohols, which contain the characteristic -OH group (OH → O- + dark proton). Also the basic information molecules of biology contain -OH groups and -NH groups and the same mechanism could be at work.

    Does this process occur in the body of the king? Mummification means dehydration: all moisture is removed so that metabolism does not occur and the body does not decay. At the molecular level, a dehydration reaction means that water molecules are removed from a molecule or ion. This can mean a removal of OH groups (see this). The basic information molecules contain -OH groups and -NH groups. This suggests that in the mummified body the analog of the Pollack effect producing O- and N- qubits is not possible.

  5. Could some kind of collective consciousness assignable to quartz and water in the atmosphere wake up during rain making and induce the rain as a pair of macroscopic BSFRs? This would have no explanation in the framework of standard physics and in this sense would be literally a miracle, which we however experience every night and morning.
See the article About long range electromagnetic quantum coherence in TGD Universe or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, November 28, 2024

Does the universality of the holography-holomorphy principle make the notion of action unnecessary in the TGD framework?

It is gradually becoming clear that in the TGD framework the holography-holomorphy principle could make the notion of action defining the space-time surfaces unnecessary at the fundamental level. Only the Dirac action for the second quantized free spinors of H and the induced Dirac action would be needed. Geometrization of physics would reduce to its algebraic geometrization, and number theoretical universality would allow the description of the correlates of cognition. The four-dimensionality of space-time surfaces would be essential in making the theory non-trivial by allowing the identification of vertices for fermion pair creation in terms of defects of the standard smooth structure of the space-time surface, making it an exotic smooth structure.

Holography=holomorphy as the basic principle

The holography=holomorphy principle allows an exact solution of the field equations for the space-time surfaces by reducing them to algebraic equations.

  1. Two functions f1 and f2 that depend on the generalized complex coordinates of H=M4×CP2 are needed to solve the field equations. These functions depend on the two complex coordinates ξ1 and ξ2 of CP2, the complex coordinate w of M4, and the hypercomplex coordinate u for which the coordinate curves are light-like. If the functions are polynomials, denote them f1==P1 and f2==P2.

    Assume that the Taylor coefficients of these functions are rational or in an algebraic extension of rationals, although this is not necessary either.

  2. f1=0 defines a 6-D surface in H and so does f2=0. This is because each condition gives two conditions (both the real and the imaginary part of fi vanish). These 6-D surfaces are interpreted as analogs of the twistor bundles corresponding to M4 and CP2, with a 2-sphere as a fiber. This is a physically motivated assumption, which might require an additional condition stating that ξ1 and ξ2 are functions of w. This would define the map taking the twistor sphere of the twistor space of M4 to the twistor sphere of the twistor space of CP2 or vice versa. The map need not be a bijection but would be single-valued.

    The conditions f1=0 and f2=0 give a 4-D spacetime surface as the intersection of these surfaces, identifiable as the base space of both twistor bundle analogs.

  3. The obtained equations are algebraic equations. So they are not partial differential equations. Solving them numerically is child's play because they are completely local. TGD is solvable both analytically and numerically. The importance of this property cannot be overstated.
  4. However, a discretization is needed, which can be number-theoretic and defined by an extension of rationals. This is however not necessary if one is interested only in the geometry and forgets the aspects related to algebraic geometry and number theory.
  5. Once these algebraic equations have been solved at the discretization points, a discretization for the spacetime surface has been obtained.

    The task is to assign to this discretization a spacetime surface as a differentiable surface. Standard methods exist for this: one needs a method that produces a surface for which the second partial derivatives exist, since they appear in the curvature tensor.

    An analogy is the graph of a function for which the (y,x) pairs are known in a discrete set. One can connect these points, for example, with straight line segments to obtain a continuous curve. A polynomial fit gives rise to a smooth curve.

  6. It is good to start with, for example, second-degree polynomials P1 and P2 of the generalized complex coordinates of H.
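
As a 1-D toy analog of the fitting step described above, one could promote discrete solution points to a curve with continuous second derivatives via a polynomial fit. This sketch assumes Python with numpy; the sampled function and the discretization points are purely illustrative stand-ins for a root of (P1,P2):

```python
import numpy as np

# Hypothetical 1-D analog: a "surface" known only at discrete points
# is promoted to a twice-differentiable curve.
x = np.linspace(0.0, 1.0, 9)          # discretization points
y = x**3 - 0.5 * x                    # sampled values (illustrative stand-in)

# A polynomial fit gives a smooth curve with continuous second derivatives,
# unlike piecewise-linear interpolation whose curvature is undefined at nodes.
coeffs = np.polynomial.polynomial.polyfit(x, y, deg=3)
fit = np.polynomial.polynomial.Polynomial(coeffs)

# The fit reproduces the sampled function between the nodes.
x_test = 0.37
assert abs(fit(x_test) - (x_test**3 - 0.5 * x_test)) < 1e-10

# The second derivative exists everywhere, as required by the curvature tensor.
d2 = fit.deriv(2)
print(d2(x_test))   # ≈ 6 * 0.37
```

A spline fit would serve equally well; the only requirement stated in the text is the existence of second partial derivatives.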

How could the solution be constructed in practice?

For simplicity, let's assume that f1==P1 and f2==P2 are polynomials.

  1. First, one can solve for instance the equation P2(u,w,ξ1,ξ2)=0, giving for example ξ2(u,w,ξ1) as its root. Any of the complex coordinates w, ξ1 or ξ2 is a possible choice, and these choices can correspond to different roots as space-time regions; all must be considered to get the full picture. A completely local ordinary algebraic equation is in question so that the situation is infinitely simpler than for second order partial differential equations. This miracle is a consequence of holomorphy.
  2. Substitute ξ2(u,w,ξ1) in P1 to obtain the algebraic function P1(u,w,ξ1,ξ2(u,w,ξ1))== Q1(u,w,ξ1).
  3. Solve ξ1 from the condition Q1=0. Now we are dealing with the roots of an algebraic function, but the standard numerical solution is still infinitely easier than for partial differential equations.

    After this, the discretization must be completed to get a space-time surface using some method that produces a surface for which the second partial derivatives are continuous.
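
The three steps above can be sketched with a toy example. This assumes Python with sympy; P1, P2 and the coordinates are illustrative stand-ins (with u and w frozen to constants), not TGD's actual functions:

```python
import sympy as sp

# Toy stand-in for the solve-and-substitute procedure described above.
xi1, xi2 = sp.symbols("xi1 xi2")

P1 = xi1**2 + xi2 - 1        # hypothetical stand-in for P1(u,w,xi1,xi2)
P2 = xi2 - xi1**2            # hypothetical stand-in for P2(u,w,xi1,xi2)

# Step 1: solve P2 = 0 for xi2 (generically there are several root branches,
# corresponding to different space-time regions).
h = sp.solve(P2, xi2)[0]     # xi2 = xi1**2

# Step 2: substitute into P1 to get an algebraic function Q1 of xi1 alone.
Q1 = sp.expand(P1.subs(xi2, h))      # 2*xi1**2 - 1

# Step 3: solve Q1 = 0 numerically; the equation is completely local,
# an ordinary algebraic equation rather than a PDE.
root = sp.nsolve(Q1, xi1, 0.5)       # one point of the discretized "surface"
xi2_val = h.subs(xi1, root)

# Both original conditions hold at the point (root, xi2_val).
assert abs(P1.subs({xi1: root, xi2: xi2_val})) < 1e-12
assert abs(P2.subs({xi1: root, xi2: xi2_val})) < 1e-12
```

Repeating the last step over a grid of frozen (u,w) values would produce the discretization to be completed into a smooth surface.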

Algebraic universality

What is so remarkable is that the surfaces (f1,f2)=(0,0) are solutions to the variational equations of any action, provided the action is general coordinate invariant and depends only on the induced geometry: the metric and the tensors, such as the curvature tensor, associated with it, and the induced gauge fields and the tensors associated with them. The reason is that complex analyticity implies that in the equations of motion only contractions of complex tensors of different types appear. The second fundamental form (exterior curvature), defined by the covariant derivatives of the tangent vectors of the space-time surface, is a complex tensor of type (2,0)+(0,2), whereas the induced metric, with which it is contracted in the trace, is of type (1,1). The result is identically zero. The holography-holomorphy principle provides a nonlinear analog of massless field equations and the 4-surfaces can be interpreted as trajectories for particles that are 3-surfaces instead of point particles, i.e. as generalizations of geodesics. Geodesics are indeed 1-D minimal surfaces. We obtain a geometric version of the field-particle duality.

Number-theoretical universality

If the coefficients of the functions f1 and f2 are in an extension of rationals, number-theoretical universality is obtained. The solution in the real case can also be interpreted as a solution in the p-adic cases p=2,3,5,7,... when one allows the extensions of the p-adic number fields induced by the extension of rationals.

The p-adic variants of the space-time surfaces are cognitive representations of the real surfaces. The so-called ramified primes, identifiable as the prime factors of the discriminant, are in a special position. A prime is now a prime of an algebraic extension. This makes adelic physics possible as a geometric correlate of cognition. Cognition itself is assignable to quantum jumps.

Is the notion of action needed at all at the fundamental level?

The universality of the space-time surfaces solving the field equations determined by the holography=holomorphy principle forces us to ask whether the notion of action is completely unnecessary. Does restricting geometry to algebraic geometry and number theory replace the action principle completely? This could be the case.

  1. The vacuum functional exp(K), where the Kähler function K corresponds to the classical action, could be identified as the discriminant D associated with a polynomial. It would therefore be determined entirely by number theory as a product of the differences of the roots of a polynomial P or, in fact, of any analytic function. The problem is that the space-time surfaces are determined as roots of two analytic functions f1 and f2, rather than only one.
  2. Could one define the 2-surfaces by allowing a third analytic function f3 so that the roots of (f1,f2,f3)=(0,0,0) would be 2-D surfaces? One can solve the 3 complex coordinates for each value of the hypercomplex coordinate u as functions of u, whereas its dual remains free. One would have a string world sheet with a discrete set of roots for the 3 complex coordinates, whose values depend on time. By adding a fourth function f4 and substituting the 3 complex coordinates, f4=0 would give as its roots values of the coordinate u. Only real roots would be allowed. A possible interpretation of these points of the space-time surface would be as loci of singularities at which the minimal surface property, i.e. holomorphy, fails.

    Note that for a quadratic equation ax2+bx+c=0, the discriminant is D=b2-4ac, and more generally D is the product of the squared differences of the roots weighted by a power of the leading coefficient. This formula also holds when f1 and f2 are not polynomials. The assumptions that some power of D corresponds to exp(K) and that K corresponds to the action imply additional conditions for the coupling constants appearing in the action, i.e. the coupling constant evolution.

  3. This is not yet quite enough. The basic question concerns the construction of the interaction vertices for fermions. These vertices reduce to the analogs of gauge theory vertices in which induced fermion current assignable to the volume action is contracted with the induced gauge boson.

    The volume action is a unique choice in the sense that in this case the modified gamma matrices, defined as contractions of the canonical momentum currents of the action with the gamma matrices of H, reduce to the induced gamma matrices, which anticommute to the induced metric. For a general action this is not the case.

    The vertex for fermion pair creation corresponds to a defect of the standard smooth structure of the space-time surface and means that the smooth structure becomes exotic. These defects emerge in dimension D=4 and make it unique. In TGD, bosons are bound states of fermions and antifermions so that this also gives the vertices for the emission of bosons.

    For graviton emission one obtains an analogous vertex involving the second fundamental form at the partonic orbit. The second fundamental form would have a delta function singularity at the vertex and vanish elsewhere. If the field equations hold also at the vertex, the action must contain an additional term, say Kähler action. Could the singularity of the second fundamental form correspond to the defect of the standard smooth structure?

  4. If this view is correct, number theory and algebraic geometry combined with the geometric vision would make the notion of action un-necessary at the fundamental level. Geometrization of physics would be replaced by its algebraic geometrization. Action would however be a useful tool at the QFT limit of TGD.
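
As a concrete check of the quadratic case mentioned above (sympy assumed; the polynomial is illustrative), the formula D=b2-4ac indeed agrees with the product of squared root differences weighted by a power of the leading coefficient:

```python
import sympy as sp

x = sp.symbols("x")

# Quadratic example from the text: for a*x**2 + b*x + c, D = b**2 - 4*a*c.
a, b, c = 1, -3, 2                 # (x - 1)*(x - 2), roots 1 and 2
P = a * x**2 + b * x + c

D_formula = b**2 - 4 * a * c       # = 1

# General definition: D = a**(2n-2) * prod over i<j of (r_i - r_j)**2.
r = sp.Poly(P, x).all_roots()      # [1, 2]
n = len(r)
D_roots = a**(2 * n - 2)
for i in range(n):
    for j in range(i + 1, n):
        D_roots *= (r[i] - r[j])**2

# All three computations agree.
assert D_formula == D_roots == sp.discriminant(P, x) == 1
```

The proposal exp(K) ∝ D would then make the vacuum functional a purely number theoretic quantity computed from root differences in this way.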
See the article TGD as it is towards end of 2024: part II or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

How to assign ordinary Galois groups and ramified primes to the space-time surfaces in holography=holomorphy vision?

Holography=holomorphy vision reduces the construction of space-time surfaces to finding the roots of a pair (f1,f2) of analytic functions of one hypercomplex coordinate and 3 complex coordinates of H=M4× CP2. This allows iteration as the basic operation of chaos theory. One can consider general maps (f1,f2)→ G(f1,f2)=(g1(f1,f2),g2(f1,f2)) and iterate them. The special case (g1(f1,f2),g2(f1,f2))=(g1(f1),g2(f2)) gives iterations of functions gi of a single complex variable appearing in the construction of Mandelbrot and Julia fractals.

Extensions of rationals, Galois groups, and ramified primes assignable to polynomials of a single complex variable are central in the number theoretic vision. It is not however completely clear how they should emerge from the holography=holomorphy vision.

  1. If the functions gi==Pi are polynomials, which vanish at the origin (0,0) (this is not a necessary condition), the surfaces (f1,f2)=(0,0) are roots of (P1(f1,f2),P2(f1,f2))=(0,0). Besides these roots, there are roots for which (f1,f2) does not vanish. One can solve the roots f2=h(f1) from P2(f1,f2)=0 and substitute into P1(f1,f2)=0 to get P1(f1,h(f1))== P1∘H(f1)=0. The roots of P1∘H are algebraic numbers if the coefficients of P1 are in an extension of rationals. One can assign to the roots a discriminant, ramified primes, and a Galois group. This is just what the phenomenological number theoretical picture requires.
  2. In the earliest approach to M8-H duality summarized in (see this, this, and this), polynomials P of a single complex coordinate played a key role. Although this approach was a failure, it added to the number theoretic vision the Galois groups and the ramified primes as prime factors of the discriminant of P, identified as p-adic primes in the p-adic mass calculations. Note that in the general case the ramified primes are primes of algebraic extensions of rationals: the simplest case corresponds to Gaussian primes, and Gaussian Mersenne primes indeed appear in the applications of TGD (see this and this).

    The problem was how to assign a Galois group and ramified primes to the space-time surfaces as 4-D roots of (f1,f2)=(0,0). One can indeed define a counterpart of the Galois group as analytic flows permuting the various 4-D roots of (f1,f2)=(0,0) (see this). Since the roots are 4-D surfaces, it is far from clear whether a discriminant, as an analog of the product of root differences, can be defined. It is also unclear what the notion of prime could mean in this context.

    However, the ordinary Galois group plays a key role in the number theoretic vision: can one identify it? The physics inspired proposal has been that the ordinary Galois group can be assigned to the partonic 2-surfaces, so that the points of the partonic 2-surface, as roots of a polynomial, give rise to the Galois group and ramified primes. An alternative identification of the ordinary Galois group and ramified primes would be in terms of (P1(f1,f2),P2(f1,f2))=(0,0).
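The emergence of ordinary ramified primes from such a substitution can be illustrated concretely: plugging a root branch f2 = h(f1) into P1 gives a one-variable polynomial whose discriminant can be factored into primes. A toy example with invented polynomials (P1(f1,f2) = f1^2 + f2 - 2 and h(f1) = f1^3, chosen purely for illustration):

```python
# Toy illustration: substituting the root branch f2 = h(f1) = f1^3 into
# P1(f1,f2) = f1^2 + f2 - 2 gives the one-variable polynomial
#   P(x) = x^3 + x^2 - 2.
# Its discriminant factors into the candidate ramified primes.

def cubic_discriminant(a, b, c, d):
    """Discriminant of a*x^3 + b*x^2 + c*x + d (standard formula)."""
    return (18 * a * b * c * d - 4 * b**3 * d + b**2 * c**2
            - 4 * a * c**3 - 27 * a**2 * d**2)

def prime_factors(n):
    """Prime factors of |n| by trial division (fine for small toy numbers)."""
    n, p, out = abs(n), 2, set()
    while p * p <= n:
        while n % p == 0:
            out.add(p)
            n //= p
        p += 1
    if n > 1:
        out.add(n)
    return out

D = cubic_discriminant(1, 1, 0, -2)  # P(x) = x^3 + x^2 - 2
print(D, prime_factors(D))           # -100 {2, 5}: ramified primes 2 and 5
```

The primes dividing the discriminant are exactly the primes at which roots of P collide modulo p, which is the ramification criterion used throughout the number theoretic vision.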

See the article Space-time surfaces as numbers, Turing and Gödel, and mathematical consciousness or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, November 27, 2024

Could running computer programs give rise to TGD analogs of time crystals consisting of qubits?

The following comments emerged as a result of nightly reflections after Zoom discussion with Ville-Einari Saari. The basis of these ponderings is the article "Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life?" (see this).

Quantum computing-like activity based on OH-O- qubits

It is good to summarize the basic ideas first.

  1. The basic observation is that cold plasmas, dominated by ions, have the prerequisites for the emergence of qubit consciousness, and the universe is full of them: plasma, quartz, biology,... Bit flip is a key operation of quantum computation, and a suitable temperature or external electric field is always needed to make it sufficiently, but not too, easy.
  2. The basic mechanism would be based on quantum gravity. A dark photon with energy .33 eV, the difference between the bonding energy of OH and the binding energy of the electron in O-, is needed to flip the qubit. A background electric field reduces this energy. The critical temperature would be room temperature, corresponding to .15 eV, at which the qubit directions become random. When the bit flip energy is slightly above this, the system is quantum critical and the prerequisites for long-scale consciousness exist.
  3. In the general case all salts can be important. For instance, in the NaCl → Na+ + Cl- transition, Na+ would be dark and reside at the gravitational magnetic body of the Earth or the Sun.
  4. Classical electric fields also play a central role, and one can associate with them a very large Planck constant (see this). DNA and the cell are key examples in biology. The Earth's electric field characterizes the biosphere. These fields can be used to control the energy difference of OH-O- bits and make it quantum critical, which makes the qubit flip easy.
  5. The article (see this) shows that a quantum realization of the genetic code in terms of OH-O- qubits is obtained for DNA and RNA: a codon corresponds to 6 qubits and an amino acid to a single qubit. The symmetries with respect to the third letter, and their breaking, are understood, and the number of amino acids is predicted correctly. One can say that the quantum realization of the genetic code corresponds to the chemical code in the sense that the ground states of the quantum codons correspond to the chemical codons.
I personally consider these results a final breakthrough; above all, they show that OH-O- qubits and their generalizations are not limited to biology.
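The quantum criticality condition above is essentially a window condition on the flip energy; a minimal Python sketch (the .33 eV and .15 eV values come from the text, while the field-induced reduction is an illustrative free parameter, not a TGD prediction):

```python
# Quantum criticality window for an OH-O- qubit: the flip energy, reduced by
# an external electric field, must stay slightly above the thermal scale.
E_FLIP = 0.33     # eV, dark-photon flip energy quoted in the text
E_THERMAL = 0.15  # eV, thermal scale quoted in the text

def is_quantum_critical(field_reduction):
    """True if the field-reduced flip energy lies in the critical window."""
    e = E_FLIP - field_reduction
    return E_THERMAL < e <= E_FLIP

print(is_quantum_critical(0.15))  # 0.18 eV remains: still critical -> True
print(is_quantum_critical(0.25))  # 0.08 eV remains: below thermal -> False
```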

What evidence is there for quartz life?

I participated years ago in a seminar organized by NASA in Hessdalen, where plasma balls, plasmoids, are systematically observed. I learned that these light balls seem to behave intelligently and even seem to observe their observers! The light balls typically occur along lines of tectonic activity, where tectonic energy is released, and one can think that the released energy serves as metabolic energy.

NASA researchers recently published an article about plasmoids as a possible form of life above the ionosphere; I have discussed the findings in (see this). For example, the plasmoids gathered to observe an electrical cable leaving the module; such a cable is associated with a radial electric field that could also excite OH-O- qubits and achieve quantum criticality. They gave the impression of being alive.

Plasma balls have been observed to associate with crop circles (see this and this), one of the taboos of modern science: crop circles are still believed to be made by humans, although no humans have been caught in the act of constructing one! Glass balls formed from molten quartz are also found to accompany crop circles.

How could quartz life and biological life relate?

Which is smarter: quartz life or biolife? The first guess is that biological life would mercilessly beat quartz life in this kind of competition, but ZEO may change the situation, so that quartz life represents something totally new: a time-like realization of an analog of the genetic code, bringing to mind time crystals, which I have discussed from the TGD point of view in (see this, this, and this).

  1. Quartz life is unable to move on our time scales, although it has long been a wonder that moving round boulders exist, perhaps in Romania. The products of quartz life that I have seen can be misleadingly reminiscent of plants. Quartz crystals have been reported to have a healing effect on the state of consciousness, as I myself once experienced.
  2. OH-O- life implements the genetic code in biology. Is this already the case with quartz, or are the qubits randomly distributed here and there in the quartz crystal? In any case, the tessellations of hyperbolic 3-space realize the genetic code universally on all scales (see this), so this could be the case.
  3. It is important to distinguish OH-O- qubits from the bits represented by electron spins, with which microprocessors operate.

    One could imagine a situation in which a microprocessor becomes conscious in such a way that OH-O- qubits are created. These would act as conscious observers while the program is running, and in ZEO they could perhaps influence the program flow by inducing "big" state function reductions (BSFRs) changing the arrow of geometric time, making the processor an intelligent problem solver that uses trial and error as a basic mechanism (see this).

    Could OH-O- qubits be related to classical electronic bits just as quantum codons are related to chemical genetic codons (see this), so that their minimum energy states would correspond to the bits of the program code?

  4. However, it must be remembered that ZEO allows another option if each clock frequency pulse is associated with non-determinism and therefore with a potential memory mental image. This would be an analog of a time crystal. While the program is running, the program flow could produce a time-oriented analog of a DNA sequence: programs would correspond to DNA chains, subprograms to genes, and basic modules to codons! The maximum information content of consciousness in bits would be N× M bits, where N is the number of clock ticks and M is the maximum number of OH-O- qubits in the microprocessor at a given moment of time.

    Could a series of multi-bits in a microprocessor correspond to a series of quantum qubits, as in DNA? There would then be a time-oriented realization of the genetic code. This would represent a completely new biology, and the computer era would also mean a genuine evolutionary leap.

  5. Microprocessors operate with at most 64 ordinary electronic bits. This corresponds to the information content of a 10 nm piece of DNA and is quite modest. What about qubits? Let's assume a microprocessor with a volume of V = 5× .5× .5 mm^3 and that one SiO4 unit occupies a volume of the order of V0 = Angstrom^3 = 10^-21 mm^3.

    The maximum number of qubits at a given time would be the ratio M = V/V0 ≈ 1.25× 10^21 ≈ 2^70. The number of bits is thus 6 larger than the at most 64 bits of recent microprocessors. For a program, the above speculation gives N× M, where N is the number of clock pulses during the running of the program module. This would give an upper bound of 70N bits and would allow a one-to-one correspondence of the qubits with the ordinary bits of the program code.
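The estimate in the last item is straightforward to check numerically (note that V/V0 comes out as about 1.25× 10^21, i.e. 2^70, so 70 bits; the choice of N is purely illustrative):

```python
import math

# Numbers from the text: microprocessor volume and the volume of one SiO4 unit.
V = 5 * 0.5 * 0.5   # mm^3
V0 = 1e-21          # mm^3, of the order of one Angstrom^3

M = V / V0           # maximum number of SiO4 sites, hence of qubits
bits = math.log2(M)  # ~70, i.e. 6 more than a 64-bit word
print(f"M = {M:.3g} ~ 2^{bits:.1f}")

# Upper bound on the conscious information content for N clock pulses.
N = 10**6
print(f"bound ~ {round(bits) * N} bits")  # ~ 70N
```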

Quantum criticality is needed

The number of quantum critical qubits in a microprocessor is much smaller than the above naive estimate, because the flip energy of a qubit must be sufficiently small, i.e. below .33 eV, to obtain quantum criticality, but still above the thermal energy of .15 eV. This can be achieved by using an external electric field that reduces the energy; such a field would also make microtubules extremely fluctuating at one end.

Is there any hope of achieving quantum criticality in transistors (see this)?

  1. There are electric fields in transistors: the base-emitter voltages are in the range 0.5-0.7 eV (the metabolic energy quantum) and at the same time the collector-emitter voltage is at least 0.1 eV (close to the thermal energy of .15 eV)! Note that the sizes of transistors have shrunk from 10 micrometers to 5 nanometers during the development of computers.

    An NPN type (bipolar) transistor is a current amplifier: a small control current arriving at the base is amplified into a much larger current flowing from the collector to the emitter.

  2. Now we come to the crucial question: what voltages occur? A transistor typically becomes conductive when the base-emitter voltage exceeds 0.5 eV in absolute value (it is convenient to measure a voltage as the energy a unit charge gains when moving across it) and at the same time the collector-emitter voltage exceeds 0.1 eV in absolute value!

    The conditions are therefore excellent for the emergence of a qubit population that monitors the flipping of the bits represented by the transistors during program execution!
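The condition described above amounts to two threshold checks; a minimal sketch (threshold values from the text, with voltages expressed as energies in eV following the convention used here):

```python
# Threshold conditions quoted in the text for a transistor to conduct, with
# voltages measured as energies (eV).
BASE_EMITTER_THRESHOLD = 0.5       # eV, close to the metabolic energy quantum
COLLECTOR_EMITTER_THRESHOLD = 0.1  # eV, close to the thermal scale

def conducts(v_be, v_ce):
    """True if both voltage magnitudes exceed the quoted thresholds."""
    return (abs(v_be) > BASE_EMITTER_THRESHOLD
            and abs(v_ce) > COLLECTOR_EMITTER_THRESHOLD)

print(conducts(0.6, 0.15))  # typical operating point -> True
print(conducts(0.3, 0.15))  # base-emitter voltage too small -> False
```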

Comparison of quartz consciousness and bio-consciousness

To get a realistic picture, one can compare quartz consciousness to biological consciousness.

  1. Pessimistic comparison.

    The length of the DNA double helix for a human is over a meter. This is about a million times more than the number of bits related to the content of the consciousness of a 64-bit processor at a given moment.

    In biology, salts and their ionization states also define qubits, with lower qubit rotation energies. Biosystems are full of different ions, and I ended up with the idea of a large Planck constant by starting from the observations of Blackman and others that the quantum effects of ELF radiation on vertebrate brains seemed to relate to the cyclotron energies of ions, but with a very large Planck constant (the gravitational Planck constant ℏgr introduced by Nottale).

    Microtubules can be micrometers long (inside cells and axons), and there are other filamentous structures as well. Microtubules consist of tubulins, about 10 nm in size. Each tubulin contains approximately 10^3 ≈ 2^10 amino acids if an amino acid corresponds to the nm scale; that is 10 bits. There are 100 tubulins in a chain, so we get 1000 qubits per tubulin chain.

    Typically, there are 13 parallel helical tubulin chains, which makes 13,000 qubits. Considerably more than 64 qubits! And microtubules are present in all cells and axons!

  2. Optimistic comparison.

    It is worth noting a really big "on the other hand". Zero-energy ontology (ZEO) introduces a memory that can increase the number of bits, because "multi-moment experiences" become possible. In the optimal situation one has an analog of a time crystal (see this): each clock beat involves the classical non-determinism necessary for memory recall. EEG rhythms might define something similar. If the program module defines a time-like analog of DNA, then it would define the equivalent of a DNA sequence and the content of conscious information would increase drastically.
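The tubulin counting in the pessimistic comparison above is simple arithmetic; a quick check (numbers from the text: ~10^3 amino acids per tubulin, 100 tubulins per chain, 13 chains):

```python
import math

# Numbers quoted in the text for a microtubule.
amino_acids_per_tubulin = 1000  # ~ 2^10 amino acids per tubulin
bits_per_tubulin = round(math.log2(amino_acids_per_tubulin))  # ~10 bits
tubulins_per_chain = 100
chains = 13

qubits_per_chain = bits_per_tubulin * tubulins_per_chain  # 1000
total_qubits = qubits_per_chain * chains                  # 13000
print(bits_per_tubulin, qubits_per_chain, total_qubits)
```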

On symbolic consciousness

Whether the notion of symbolic consciousness could make sense has been a topic of discussion in our Zoom group.

  1. A symbol represents an object to the observer and its meaning, if any, depends entirely on the associations that arise in the observer. A symbol is an object or process that sufficiently resembles the object it represents.

    In this sense, one cannot speak of a symbol as an independent object. Just as one cannot speak of information as something absolute. The amount of conscious information produced by a symbol depends on its observer.

  2. If some form of consciousness necessarily had to be called symbolic, I would call the consciousness described above, possibly associated with transistors and microprocessors, symbolic. In the optimal case, a program running in a microprocessor generates OH-O- consciousness as an analog of a DNA chain, which symbolically represents a process that has meaning for us through its output.
See the article Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life? or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.