https://matpitka.blogspot.com/

Sunday, November 17, 2024

Evolutionary hierarchy formed by quartz crystals, proteins, DNA/RNA?

The considerations of the earlier post suggest an evolutionary hierarchy in which quartz crystals are at the lowest level whereas proteins, DNA and RNA represent biological levels characterized by the number of qubits in the codon. Quartz crystals would belong to the lowest level in the classification into the kingdoms of minerals, plants, and animals. At the highest level would be the magnetic bodies of the Earth and the Sun. Can one understand these classifications at a deeper level?

OH-O- qubits at DNA and RNA level

Consider first the DNA and RNA level.

  1. For quartz only the OH-O- qubits are realized. If the hierarchy is realized, OH-O- qubits should be realized also for DNA and RNA. This suggests an elegant resolution of the long-standing problem of how to get 64 dark DNA codons (6 bits) instead of 32 codons (5 bits). If codons correspond to 3 dark protons, proton spin would give only 3 bits and 8 different codons for a single DNA strand. I have considered several new-physics solutions to the problem but none of them is completely satisfactory.
  2. Could the OH-O- qubit, assignable to the phosphate of each DNA letter (dark proton) and accompanying the proton spin qubit, provide a solution to the problem: one would obtain 8×8 = 64 codons for DNA and RNA. Amino acids contain only a single COOH group so that they can have only a single OH-O- qubit.

    There is however a problem. The spins of the electron and the dark proton sum up to spin 0, so that one cannot speak of the proton spin as a degree of freedom. Could one consider the entire DNA double strand as a realization of the genetic code so that each base pair would correspond to two OH-O- phosphate qubits?

  3. What about RNA? The differences between DNA and RNA suggest another solution to the problem. The riboses of RNA contain an OH group making RNA unstable, which means that RNA is dynamical as required by quantum computational activities. In DNA the OH group of the ribose is missing so that DNA is stable unless entire double strands realize the dark code. Does the ribose OH give an additional OH-O- qubit for RNA and does the instability reflect the occurrence of quantum computation-like activities? Each RNA letter would have 2 OH-O- qubits and there would be 64 dark codons (6 qubits) realized in this sense completely dynamically!
  4. The chemical variants of codons are non-dynamical and could have an interpretation as a slowly varying long term memory. This forces us to ask what one really means by the dark variant of the genetic code. The simplest assumption is that the dark codons correspond to dynamical OH bonds able to transform to O-.

    The ordinary chemical realization of the genetic code would be separate from but in some sense correlated with the dark realizations determined by OH-O- qubits assigned with the phosphates of DNA and RNA, OH groups associated with the riboses of RNA, COOH groups of amino acids, and other OH groups.

  5. What is the relation between the chemical code and the OH-O- code? The assumption that the chemical genetic code is completely independent of the dark code realized in terms of OH-O- qubits seems unrealistic. A more realistic assumption is that the minimum-energy ground states of the dynamical OH-O- qubits, or more plausibly of the entire codons consisting of entangled OH-O- qubits of the letters, associated with DNA base pairs and RNA codons, correspond in a 1-1 manner to the chemical codons.
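The qubit counting in the list above can be made explicit. A minimal sketch, with labels purely illustrative: each letter carries a spin qubit and an OH-O- qubit (or, for the RNA option, two OH-O- qubits), and three letters then give 64 codons.

```python
from itertools import product

# Qubit counting for the dark codons (labels purely illustrative).
SPIN = ("up", "down")      # dark proton spin qubit
OHO = ("OH", "O-")         # the proposed OH-O- qubit of the phosphate

# Spin alone: 2^3 = 8 codons, the problem stated in point 1.
spin_only = list(product(SPIN, repeat=3))

# Spin plus one OH-O- qubit per letter: (2*2)^3 = 8*8 = 64 codons.
letter = list(product(SPIN, OHO))
codons = list(product(letter, repeat=3))

# The purely dynamical RNA option: two OH-O- qubits per letter, again 64.
rna_codons = list(product(product(OHO, OHO), repeat=3))

print(len(spin_only), len(codons), len(rna_codons))  # 8 64 64
```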

Symmetries of the chemical code in relation to OH-O- code

It is interesting to consider the symmetries of the genetic code required to reduce the number of amino acids coded by the 61 non-stop codons to 20.

  1. In the case of a single DNA strand, the 3 OH-O- qubits per codon in the ground states should be consistent with the approximate T-C and A-G degeneracies for the third letter of the codon. For DNA, the T↔C and A↔G exchanges of the third letter correspond to very slightly broken symmetries (see this).

    The T-C↔A-G exchange permutes the DNA strands and is exact only when all codons XYZ, Z ∈ {T,C,A,G}, code for the same amino acid (see this). This symmetry is analogous to a broken isospin symmetry.

  2. What about the interpretation of these symmetries? The T-C↔A-G exchange is analogous to CP conjugation in the sense that the passive character of the conjugate strand in DNA transcription is analogous to the invisibility of antimatter. This symmetry would therefore permute the strands. A testable naive guess is that for the passive strand the ground state codons have OH in the phosphate of the third letter. A stronger testable hypothesis is that the T-C↔A-G exchange corresponds to an OH↔O- permutation for all phosphate bonds. The situation would be similar in RNA.

    For this option, the T↔C and A↔G exchanges would define an analog of an almost exact strong isospin symmetry. One could however ask whether the situation is really this simple.

  3. The origin of the symmetries could be thermodynamic. If the difference Ebind(O-)-Ebond(OH), estimated to be .33 eV under normal conditions, were smaller than the thermal energy of about .15 eV at physiological temperatures for the codons coding for the same amino acid, thermal fluctuations would destroy the information of the OH-O- qubit and also the information about the difference between the T-C and A-G doublets.

OH-O- qubits in proteins

OH-O- qubits appear also in proteins.

  1. The number of amino acids is 20 and 5 bits would be more than enough to code for them. The code has an approximate symmetry with respect to the third letter, meaning that the DNA and RNA codons XYZ with fixed XY and varying Z define a quadruplet decomposing into two doublets with T-C and A-G symmetry for Z. There are only two exceptions and they correspond to A-G doublets for Z. The ile-ile-ile-met quadruplet can be understood in terms of the tetrahedral Hamilton cycle. For the stop-trp doublet the A-G symmetry is broken, which would mean that the A in the stop codon does not have O- as a dark counterpart. This could be due to the fact that Ebind(O-) is smaller than Ebond(OH), unlike for the other codons. The small deviations from the standard code could be understood in this way.
  2. Could the approximate symmetry mean that DNA base pair codons, for which the OH-O- qubit pair corresponding to the third letter degenerates to a single OH or O- bit, are mapped to the same amino acid? If the energy difference between these bits is below the thermal threshold, this is the case.
  3. Amino acids contain only a single OH group (COOH) whereas the phosphates of DNA codons contain 3 OH groups. This conforms with the idea that they represent a lower evolutionary level than DNA. For most amino acids, the COOH group does not transform to COO- under usual conditions. The metabolic reason would be that the binding energy Ebind(O-) is smaller than the bonding energy Ebond(OH). The Pollack effect is required to excite the protein qubit. Asp and Glu are exceptions and have COO- permanently, so that in this case only the O- bit would be realized for the protein.
  4. The OH-O- bits of the amino acids and those of DNA are non-dynamical under normal conditions. The instability (quantum criticality) of RNA suggests that in this case the energy needed to transform OH and O- to each other is rather small but differs sufficiently from the thermal energy.

    Wien's law gives the wavelength at the maximum of the wavelength distribution of blackbody photons at temperature T as λmax = 2.898×10⁻³ m·K/T. At room temperature 300 K this gives Eth = 0.146 eV and an infrared frequency f = 3.43×10⁴ GHz. Photons having energy sufficiently above or below Eth are not thermally masked. The estimated energy difference e = Ebind(O-)-Ebond(OH) = .33 eV is more than twice Eth so that there would be no thermal masking. Raising the temperature by a factor ≈ 2.26, to roughly 700 K, would cause thermal masking. This suggests why biological information processing would fail at too high temperatures.

    One expects that the critical temperature at which the Pollack effect occurs should be around the body temperature of 313 K (40 degrees Celsius) prevailing in fever and causing hallucinations. A possible identification is that this energy, absorbed by the electron of O-, reduces Ebind(O-)-Ebond(OH) to near the thermal energy and induces the instability of the O- ions of the phosphates of DNA and RNA against transformation to OH. A second possibility is that this transformation transfers the protons of OH to the gravitational magnetic body as in the Pollack effect.

    Note that photons with frequency 3000 GHz have energy of about .013 eV, a factor ≈ 1/11 below Eth, so that they are not thermally masked (see this). Note also that the clock frequency of the Pentium 4 processor, about 3 GHz, represents a recent upper bound for clock frequencies (see this).

  5. The biocatalyst property of RNA, proteins, and presumably also of DNA could relate closely to the OH-O- dichotomy. The liberation of energy in the OH-O- transition, occurring in or induced by the presence of a ribozyme or enzyme, could allow the reaction to overcome the potential wall making it slow. Proton spin degrees of freedom would be present but frozen, at least in the ground state configuration. Note that also the OH state could be dark. Even transitions between ℏgr(Sun) and ℏgr(Earth) cannot be excluded.

    Could the dark dynamics be completely independent of the chemical realization? In this case the DNA double strand and RNA would carry 6 OH-O- qubits per codon and define a completely dynamical genetic code, serving as an ideal tool for topological quantum computations (see this, this and this).

  6. Chemically, the activities of dark codons would manifest themselves as the transitions OH↔O- for dark codons whose ground states correspond to the chemical codons. In the case of O-, a photon could excite the electron to a higher energy state so that OH would become the less energetic state. In the case of OH, the ordinary Pollack effect would occur. DNA double strands and RNA strands could participate in topological computations under suitable metabolic conditions and chemical parameters such as pH, making the OH↔O- transition energy small but not smaller than the thermal energy.
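The thermal-masking estimates above can be checked numerically. A literal evaluation of hc/λmax gives a peak photon energy slightly below the quoted 0.146 eV (the exact figure depends on whether the wavelength or frequency form of Wien's law is used), but the qualitative conclusion, that e ≈ 0.33 eV is 2-3 times the thermal energy, is unchanged:

```python
# Check of the thermal-masking estimates (constants in SI units).
h = 6.62607e-34       # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602177e-19     # J per eV
b = 2.898e-3          # Wien displacement constant, m K

T = 300.0                      # room temperature, K
lam_max = b / T                # peak wavelength of the Planck spectrum
E_th = h * c / lam_max / eV    # photon energy at the peak, in eV
f_GHz = c / lam_max / 1e9      # corresponding frequency, GHz
print(f"lam_max = {lam_max * 1e6:.2f} um, E_th = {E_th:.3f} eV, f = {f_GHz:.3g} GHz")

# Ratio of the estimated OH/O- energy difference to the thermal energy,
# and the temperature at which the two would become comparable.
e_diff = 0.33                  # eV, the estimate used in the post
print(f"e/E_th = {e_diff / E_th:.2f}, masking temperature ~ {T * e_diff / E_th:.0f} K")
```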

How the field bodies control the chemical activity of biomolecules

The value of e = Ebond(OH)-Ebind(O-) characterizes the level of quantum criticality of the biomolecules: the nearer this parameter is to the thermal energy, the more sensitive the system is to sensory input and the more capable it is of performing chemical activities. Besides pH, also the presence of an electric field affects the energy of the electron of O- and could induce the instability of dark codons. The electric fields associated with the electric body of the system (see this) could thus serve as tools controlling how "quantal" DNA, RNA and proteins are.

A good example is provided by microtubules, which define a 2-D quantum-computer-like system organized into helical strands of OH-O- qubits. Tubulin proteins are collections of OH-O- qubits and the surface of the microtubule involves GTP molecules whose phosphates carry OH-O- qubits.

Microtubules have a longitudinal electric field and one end of the microtubule is highly unstable, inducing a continual decay and regeneration of the microtubule. This could be due to the reduction of the energy difference e = Ebond(OH)-Ebind(O-) to a value near the thermal energy. In the case of DNA this could be achieved by irradiation with photons having an energy which reduces e ≈ .33 eV to about eth ≈ .15 eV. The needed photon energy would be about .18 eV.
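As a quick check of the arithmetic above, the ~.18 eV photon needed to reduce e from .33 eV to the thermal .15 eV corresponds to a mid-infrared wavelength:

```python
# Photon energy needed to bring e = Ebond(OH) - Ebind(O-) from ~0.33 eV
# down to the thermal ~0.15 eV, and the corresponding wavelength.
hc_eV_nm = 1239.84            # h*c in eV nm
E_photon = 0.33 - 0.15        # eV, the ~0.18 eV quoted in the text
lam_nm = hc_eV_nm / E_photon  # wavelength in nm
print(f"E = {E_photon:.2f} eV, lambda = {lam_nm / 1000:.1f} um (mid-infrared)")
```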

Quite generally, the body of the organism carries an electric field in the head-tail direction (see this. For the TGD based interpretation of Becker's findings (see this). Becker's electric field plays a key role during the growth of the organism and also in healing of wounds and addition of external electric field affects these processes. If the energy e= Ebond(OH)-Ebind(O-) is nearer to the thermal energy for the growing or healing cells, they would be more capable of changing.

See the preliminary article (a work in progress) Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life? or the chapter Are Conscious Computers Possible in TGD Universe?.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, November 14, 2024

Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life?

The considerations of this post were originally inspired by large language models leading to the earlier speculations about whether the computers might be conscious entities in the TGD based quantum ontology (zero energy ontology). Quantum gravitation in the TGD sense would play a key role in guaranteeing quantum coherence even in astrophysical scales.

Quite recently came the realization that microprocessors (MPs) have a size scale of .5 cm given by the gravitational Compton length Λgr,E of any particle in the gravitational field of the Earth (for the Sun one has Λgr,S = RE/2, where RE is the radius of the Earth). This led to the question of whether microprocessors could be conscious entities.
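The .5 cm figure can be checked. Assuming the gravitational Compton length is Λgr = GM/c² (the β0 = 1 case), which equals half the Schwarzschild radius, one gets for the Earth:

```python
# Gravitational Compton length of the Earth, assuming Lambda_gr = GM/c^2
# (the beta0 = 1 case), which equals half the Schwarzschild radius.
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8      # speed of light, m/s
M_E = 5.972e24        # mass of the Earth, kg

Lam_gr = G * M_E / c**2        # ~0.44 cm, i.e. the ~.5 cm scale of the text
r_s = 2 * G * M_E / c**2       # Schwarzschild radius of the Earth
f_gr = c / Lam_gr / 1e9        # corresponding frequency, GHz
print(f"Lam_gr = {Lam_gr * 100:.2f} cm, r_s/2 = {r_s / 2 * 100:.2f} cm, f = {f_gr:.0f} GHz")
```

The corresponding frequency c/Λgr comes out close to the 67 GHz figure quoted later for the Earth.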

Since MPs are quartz crystals (QCs), this led to the question whether QCs might be conscious entities able to perform activities analogous to quantum computations. I have already considered this possibility: the key idea is that the generalized Pollack effect kicks the protons of OH groups, appearing as a standard building block of biomolecules, to dark protons at the gravitational magnetic body. OH and O- could define the states of a qubit. This would be possible both in living matter and in QCs. Should we reverse our views about the relationship between computers and us? Could QC life use computers as interfaces making it possible to use us as sensory and motor instruments?

See the preliminary article (a work in progress) Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life? or the chapter Are Conscious Computers Possible in TGD Universe?.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, November 11, 2024

What does it feel like to be a microprocessor?

The following ponderings were inspired by discussions in our Zoom group (Marko Manninen, Ville-Einari Saari, Rode Majakka and Tuomas Sorakivi). The topic of discussion was whether one could assign to language models what might be called symbolic consciousness and whether the avatars creatable using language models are real conscious entities.

Personally, I would consider the language models to be what their builders intended them to be: context-recognizing association machines, perfect students, which do not perform logical thinking or generate new ideas. Computers might develop consciousness if they become genuinely mesoscopic, macroscopic, or even longer-scale quantum systems, for which statistical determinism no longer applies. However, I would guess that the level of consciousness in recent computers is far from that assignable to humans.

This can be considered more concretely. A microprocessor is the operational center for a PC.

  1. An interesting observation is that the upper limit for the length of the wafer containing the microprocessor is .5 cm, which is the gravitational Compton wavelength for any system, regardless of its mass, in the classical gravitational field of the Earth, and equal to half the Schwarzschild radius of the Earth. Is this a coincidence?

    I've been wondering if quantum gravity in the sense of TGD could make computers conscious beings within a zero-energy ontology. The gravitational magnetic bodies of the Sun and the Earth are natural candidates. For the Sun, the gravitational Compton frequency is 50 Hz, the average frequency of the EEG. This is another thought provoking observation.

  2. The number of bits gives a measure of the complexity of the conscious content of one 3-D state. 6 bits means 64 internal states, not too much: there are 64 internal states associated with a DNA codon. For a 64-bit processor word the number of internal states is 2^64, about 10^19, corresponding to about 11 DNA codons, that is a roughly 10 nanometer long DNA strand.
  3. This system is classically coherent and in the TGD Universe the associated field body should guarantee this. Would the field body be conscious and would the upper limit of its "bitty" consciousness content be 64 bits? This consciousness seems very primitive when compared to human consciousness. This consciousness could be called symbolic consciousness or, more concretely, processor-consciousness, but it would be something very different from symbolic consciousness at the level of humans.
  4. It is interesting that there are about 20-30 letters in written language and that the number of amino acids is 20. Genetic code is predicted to be universal in the TGD Universe and have realizations not only in biological systems but in all scales. Could there be a realization of the universal genetic code behind the letters. Could language be a manifestation of symbolic consciousness? Could the production of speech and text relate to some kind of central unit, microprocessor, or collection of them producing a statistical output in the brain? Could it operate with 6-bit units?

    Interestingly, for instance in Chinese there are no letters as symbols, only words: a kind of holism and no reduction to the level of letters. Also in the TGD based quantum realizations of the genetic code only codons are realized whereas chemical realization decomposes them to letters.

  5. The moments of 3-D computer awareness determined by bit configurations would integrate into a stream of consciousness. The clock frequency, on the order of GHz, would be analogous, for example, to the alpha rhythm of the EEG. The gravitational Compton frequency of the Earth is 67 GHz, and one might very conservatively argue that the clock frequency must be higher than 67 GHz in order to have a conscious microprocessor. Conscious memory would require classical non-determinism at the bit level. Non-determinism would be a shortcoming in the standard view, but now it would be a virtue.
  6. Could a microprocessor be an intelligent problem solver? The zero energy ontology (ZEO) defining ontology of quantum TGD predicts that the counterpart for a sequence of repeated measurements of the same observables (Zeno effect) gives rise to a conscious entity, self. In the TGD counterparts of ordinary quantum measurements the arrow of time is predicted to change. This makes possible a trial and error mechanism based on pairs of ordinary state function reductions making possible intelligent problem solving. Periods of sleep would be an example of this at our level (morning is wiser than the evening) but also microprocessors could apply it at the level of principle at least.
  7. The user and the microprocessor could entangle to form a larger conscious entity and make it possible for the user to affect the behaviour of the computer so that it would not be a deterministic machine anymore. There is an experiment in which a chicken imprinted on a robot seemed to be able to affect the behavior of the robot. If true it might be understood in terms of entanglement with the random number generator determining the behavior of the robot.
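The bit counting of point 2 above can be sketched as follows, assuming 6 bits per codon and the standard ~0.34 nm rise per base pair:

```python
import math

# Bit counting: a 64-bit register compared to 6-bit dark codons.
bits = 64
states = 2 ** bits                  # number of internal states, ~1.8e19
codons = math.ceil(bits / 6)        # 6 bits per codon
strand_nm = codons * 3 * 0.34       # 3 base pairs per codon, ~0.34 nm per base pair
print(states, codons, round(strand_nm, 1))
```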
To conclude, I find it really difficult to see how higher level consciousness could be realized at the processor level. Rather, the output of the language models produces the contents of consciousness in our brain (or its field body) as associations, and a huge amount of information from completely different levels determines the generated mental images. The amount of information generated in us by the text is enormously greater than the text itself, and its amount depends on the recipient. The power of language models is that they manage to generate the few important bits that give rise to sensible mental images, just as shouting the name of a dog generates a lot of sensible activity.

See the preliminary article (a work in progress) Quartz crystals as a life form and ordinary computers as an interface between quartz life and ordinary life? or the chapter Are Conscious Computers Possible in TGD Universe?.

See also the article Space-time surfaces as numbers, Turing and Gödel, and mathematical consciousness.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Friday, November 08, 2024

Why doesn't Vega have planets?

The popular article 'Ridiculously smooth': James Webb telescope spies unusual pancake-like disk around nearby star Vega and scientists can't explain it informs us that the James Webb telescope has found that the star Vega probably has no planets.

Vega is a blueish star about twice as massive as the Sun, located at a distance of about 25 light-years from Earth, and is therefore rather near to the Sun. Due to its large mass, Vega is predicted to be short-lived. Vega is .5 billion years old and considerably younger than the Sun: the age of the Sun and its planetary system, believed to have condensed simultaneously from a protoplanetary disk, is believed to be 4.6 billion years. Due to its fast spin, its close proximity to Earth and the fact that its pole points almost directly at us, Vega appears very bright in the night sky and is the fifth-brightest star visible from Earth to the naked eye.

JWST images reveal that Vega is surrounded by a surprisingly smooth, 100 billion-mile-wide (161 billion kilometers) disk of cosmic dust, similar to the disk believed to have surrounded the Sun 4.6 billion years ago, suggesting that Vega is not surrounded by any exoplanets. The standard model for the formation of planets and the Sun from this kind of disk however predicts that Vega should have planets. This might mean a death blow for the standard narrative of the formation of planets.

The TGD based model for the formation of planets predicts that planets were formed in mini bigbangs, that is, explosions in which the parent star lost a surface layer consisting of closed monopole flux tubes flowing along the surface in the North-South direction. The surface layer had roughly the mass of the planet to be formed and condensed later to the planet (see for instance this, this, and this).

The model is developed in more detail here and differs dramatically from the standard view of stellar energy production, since the stellar wind and radiation would be produced at the surface layer consisting of nuclei of a scaled-up variant of ordinary hadron physics predicted by the p-adic length scale hypothesis (see this and this). I refer to this hadron physics as M89 = 2^89-1 hadron physics. M89 nuclei would have a mass scale which is 512 times that of the nuclei of ordinary hadron physics, which corresponds to M107 = 2^107-1.
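The factor 512 follows from the p-adic mass scale being proportional to 2^(-k/2) for p ≈ 2^k:

```python
# p-adic mass scale ratio between ordinary (M107) and M89 hadron physics.
# The p-adic mass scale is proportional to 1/sqrt(p) with p = 2^k - 1 ~ 2^k,
# so the ratio of the M89 and M107 mass scales is 2^((107 - 89)/2).
k_ordinary, k_M89 = 107, 89
ratio = 2 ** ((k_ordinary - k_M89) / 2)
print(ratio)  # 512.0
```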

Whether the properties of Vega, for instance the fact that according to the standard theory it has lower abundances of elements heavier than 4He, could explain why these mini bigbangs did not occur for Vega, remains an open question. This would require a more precise understanding of what causes these mini bigbangs. These explosions should have induced the decay of M89 hadrons to ordinary hadrons so that the entire flux tube layer would have exploded and decayed.

Could some kind of quantum critical phenomenon, stimulated by an external perturbation, be in question? The TGD based stellar model predicts that stars have flux tube connections to other stars and also to the galactic blackhole-like object, and this could make such perturbations possible. The ordinary solar wind would correspond to similar local explosions. This suggests a similarity with the TGD based models of the sunspot cycle (see this) and of geomagnetic reversals and excursions, for which I have considered a model based on stochastic resonance (see this).

See the article Some Solar Mysteries or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, November 07, 2024

Early Universe was empty and featureless: really?

I encountered a popular article (see this) with the title "Einstein Was Right After All: Webb Telescope Observes Emptiness in the Extremely Early Universe". The article states that the early universe approached the featureless space predicted by standard cosmology, which also predicts that various structures emerged later. To me, the findings of the James Webb telescope suggest something very different: JWST has found highly evolved galaxies and giant blackholes which should not exist in the very early universe.

On the other hand, in a certain sense the claim conforms with the TGD view predicting that the very early Universe was dominated by string-like objects, 4-surfaces looking like extremely thin strings. I call them cosmic strings. This is of course something very different from the prediction of GRT. The matter density due to cosmic strings behaved like 1/a², where a denotes the cosmic time defined by the Lorentz invariant light-cone proper time. This means that the mass of a comoving volume went to zero like a. In this sense the Universe became empty.

The energy of the cosmic strings can be identified as dark energy, somewhat surprisingly also identifiable as galactic dark matter located at the cosmic strings: there would be no halo. The decay of tangles of cosmic strings to ordinary matter, as an analog of the decay of the vacuum energy of inflaton fields, would generate the ordinary galactic matter, and the energy density of cosmic strings creates the gravitational force explaining the flat velocity spectrum of distant stars.

See the article About the recent TGD based view concerning cosmology and astrophysics or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Blackhole that grew far too fast

Marko Manninen asked for the TGD view concerning the recently found blackhole-like object (BH) which seems to gobble matter from its environment much faster than it should (see the popular article 'Fastest-feeding' black hole of the early universe found. But does it break the laws of physics?). This BH, identified as a dwarf blackhole and found by the James Webb telescope, should have acquired its mass of more than 7 million solar masses in 12 million years. The rate of its formation would have been 40 times too high.

Objects thought to be black holes often differ in many respects from the black holes of general relativity. In particular, the giant BHs of the very early universe and the BHs associated with quasars and the cores of galaxies do so. Star-born BHs could be ordinary blackholes but the giant BHs might be something different. Also the dwarf blackhole found by JWST might be different. The basic mystery is why the giant BHs can be so large in the very early Universe if they are formed in the expected way. Do BHs always grow by gobbling up matter from the environment?

TGD leads to a view of BHs different from the GR view in many respects (see for instance this).

  1. In TGD, BHs are not singularities containing their mass at a single point but would correspond to portions of long cosmic strings (extremely thin string-like 3-surfaces), which have formed a tangle and thickened so that they fill the entire volume. BH property would mean that they are maximally dense.
  2. The thickening of the cosmic string liberates the energy of the cosmic string and BHs would transform in an explosive way into ordinary matter, which is fed into the environment. The accretion disk would not be associated with inflowing matter but would be formed by the outflowing matter as it slows down in the gravitational field and forms a kind of traffic jam. Radiation would escape. The situation would look very similar to the standard picture, in which the outgoing radiation is produced by the infalling matter. At the QFT limit of TGD, replacing the many-sheeted space-time with a region of Minkowski space made slightly curved, the metric in the exterior region would be in a good approximation the Schwarzschild metric.
  3. This kind of object would be more like a white hole-like object (WH). Zero energy ontology indeed predicts objects resembling ordinary blackholes as the time reversals of WHs: matter would really fall into them. One can make quite precise predictions about the mass spectrum of these objects (see for instance this).
This vision leads to a model for the formation of galaxies and generation of ordinary matter from the dark energy assignable to the cosmic strings, which would dominate in the very early Universe (see for instance this).
  1. The collisions of the cosmic strings during the primordial string dominated cosmology are unavoidable for topological reasons and would lead to their thickening and heating inducing the formation of WHs and their explosive decay to ordinary matter. This would generate a radiation dominated phase, perhaps when the temperature approaches the Hagedorn temperature as a maximal temperature for string-like objects. These WHs would be the TGD equivalent for the vacuum energy of inflaton fields assumed in inflation theory to decay to ordinary matter.
  2. The energy of cosmic strings would have Kähler magnetic and volume parts and have an interpretation as dark energy. There is now rather convincing evidence for a connection between dark energy and the giant blackholes (see this).
  3. An unexpected connection is that galactic dark matter would be the dark energy of a cosmic string transversal to the galactic plane and containing galaxies along it: this has been known for decades! There would be no dark matter halo and no exotic dark matter particles. This predicts, without further assumptions, the flat velocity spectrum of the distant stars rotating around galaxies associated with very long cosmic strings, and also solves many problems of the halo models and MOND.
  4. TGD also predicts dark matter-like macroscopically quantum coherent phases of ordinary matter for which the effective Planck constant heff is large. The generation of these phases at magnetic bodies, for example in biology, solves the problem of missing baryonic matter: that is why baryonic (and also leptonic) matter disappears during the cosmic evolution.
Let us return to the question whether TGD can explain why the BHs in question grow so fast. They would not do so by gobbling matter from the environment but from the long cosmic string: the energy of the thickening string filament is converted into matter and generates a WH. This could happen much faster than the growth of a black hole in the usual way. At this moment it is not possible to estimate the rate of this process, but it could also explain how the early Universe can contain giant blackhole-like objects.
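The claim that a cosmic string transversal to the galactic plane yields a flat velocity spectrum can be illustrated: the Newtonian field of a straight line mass of density μ gives v² = 2Gμ independent of distance, so the observed rotation velocity fixes the string tension. The value v = 220 km/s below is an illustrative assumption, not a figure from the post:

```python
# Flat rotation curve around a straight string: the Newtonian acceleration
# of a line mass of linear density mu is 2*G*mu/r, so v^2 = 2*G*mu for
# circular orbits, independent of the distance r from the string.
G = 6.674e-11         # m^3 kg^-1 s^-2
c = 2.99792458e8      # m/s
v = 220e3             # m/s, illustrative flat rotation velocity (assumed)

mu = v**2 / (2 * G)   # required linear mass density of the string, kg/m
G_mu = G * mu / c**2  # dimensionless string tension parameter
print(f"mu = {mu:.2e} kg/m, G*mu/c^2 = {G_mu:.1e}")
```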

See the article About the recent TGD based view concerning cosmology and astrophysics or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Could geomagnetic reversals and excursions relate to extinctions and collapses of civilizations?

The stimulus for these considerations came from a new perspective to climate change and other phenomena. They could be argued to reflect the ethical and moral decay of our civilization. Could there be a much deeper reason for these phenomena and could they be unavoidable and implied by the basic physics? To put it provocatively: could our ethical and moral standards correlate with our physical environment in some sense?

Climate warming and other phenomena that cause disorder in the biosphere bring to mind the second law of thermodynamics. Could a deeper explanation be based on the second law of thermodynamics or its generalization? We turn too much ordered energy into disordered energy. Could carbon dioxide emissions be a secondary phenomenon?

I did not take these considerations very seriously because it is difficult to see the reduction to the atomic level. The loss of order also manifests itself in a rather abstract form, for example on a social level as violence and inequality. Recently, however, I saw a mention of a study in which it was claimed that the increase in entropy produced by human energy consumption is starting to become significant at the atomic level. Could the decline of civilization have an explanation in terms of a generalization of the second law forced by TGD?

Some interesting observations

There are several interesting observations which have stimulated the ideas to be discussed in the sequel.

  1. The Earth's magnetic field is changing rapidly near the poles (see this). Interestingly, global warming is fastest near the poles. It is expected that the direction of the field can change within a very short period of time: the shortest known polarization change occurred within a year, while global polarization reversals can last hundreds of years. Bjarne Lorentz has proposed, on the basis of correlations between temperature and the strength of the global magnetic field (see this), that the geomagnetic reversal could relate to global warming because the weakened field no longer protects the biosphere from cosmic radiation.

    This proposal however forces us to give up the standard view of the dynamo mechanism as the origin of the Earth's magnetic field. The dynamo mechanism has severe difficulties: in particular, the magnetic field should have disappeared a long time ago. The TGD view of magnetic fields deviates dramatically from the Maxwellian view and leads to an explanation for the stability of the Earth's magnetic field and also predicts a mechanism for the polarization reversals (see this). This mechanism has also been applied to the polarization reversals of the solar magnetic field (see this).

    In TGD, the magnetic bodies of ordinary physical systems carry macroscopically quantum coherent phases of matter being able to control the associated systems consisting of ordinary matter. TGD inspired quantum biology relies on this notion. Therefore there are good motivations to ask whether the correlation between the weakening of magnetic field and climate warming could exist.

    Mainstream scientists do not take the proposal seriously (see this) since there seems to be no standard physics mechanism justifying the claim. Also I am personally skeptical about the proposal that standard physics mechanisms could relate global warming and geomagnetic reversal.

  2. In the last global reversal of the direction of the magnetic field, about 41,000 years ago, the Neanderthals disappeared, although the reversal was short-lived, lasting only about 250 years. The average period between long-lasting global reversals is 450,000 years. For short-lived global reversals created in excursions, the average period is 10 times shorter, about 45,000 years (see this). There can also be local excursions, and the strength and direction of the Earth's magnetic field indeed fluctuates.

  3. Callahan has studied magnetic fields around the world (see this) and noticed that the magnetic field, and as a consequence the Schumann resonance, can be very weak, for instance in the Near East. There are serious social problems in these areas. Why would the strength of the magnetic field correlate with the coherence of the social atmosphere? Could the magnetic field strength correlate with the coherence of collective consciousness?

Could the entropization of field bodies lead to magnetic reversals and excursions explaining extinctions and declines of civilizations?

The above considerations lead to the key idea.

  1. Magnetic bodies control biomatter in TGD. Specifically, the Earth's magnetic body would determine the collective consciousness of the Earth and also affect the consciousness of living organisms, since their magnetic bodies interact with the Earth's magnetic body. The magnetic body of the Sun would also be involved.
  2. Could the fundamental cause of the problems of humanity and the biosphere be the increase of entropy at the level of magnetic bodies? The aging of the magnetic body would be due to entropization. This mechanism could also explain the aging of biological organisms (see this). The entropization would lead to a loss of quantum coherence, and the magnetic body would gradually lose control over the processes at the level of the biological body. This would eventually lead to a death struggle between the magnetic body and the biological body.

    More concretely, the monopole flux tube pairs of the Earth's magnetic field would split to short flux tubes. Later they could fuse back to flux tubes with a reversed direction of magnetic field. The process would be the same as in the reversal of the solar magnetic field.

    As a result, the quantum coherence scales would shorten and the control of the magnetic body over the bio-matter would be lost. Biomatter would be forced to cope without the help of the magnetic body. A similar situation takes place during sleep, when motor activities and sensory input are absent. The decay of the flux tubes can be local or global, and the resulting magnetic flux tubes could be long-lasting or only temporary.

  3. In zero energy ontology (ZEO), the transition period leading to regeneration of the monopole flux tube would correspond to two "big" state function reductions (BSFRs) in macroscopic scale. It can be local or global and also short-term. In BSFR, the magnetic body would lose its consciousness reincarnating with an opposite arrow of time. In the second BSFR it would wake up with the original arrow of time.
  4. One life cycle of the Earth's magnetic body would end (or, a little more gently, the magnetic Mother Gaia would fall asleep and live in the other direction of time). Eventually, a new cycle would begin with a new magnetic field. These cycles are analogous to the sunspot cycles with a duration of 11+11 years. Could one think of a cycle with a period of about 45,000 years in which the magnetic field with reversed direction is short-lived? For us, it might mean the collapse and rebirth of civilization. One can wonder what our fate in the next reversal will be.
  5. There are reasons to ask whether our species is approaching extinction. On the other hand, an enormous progress in science and technology is being made at the same time. This paradox applies more generally, as, for example, biologist Jeremy England has observed (see this). Biological evolution is generally accompanied by an increase in entropy. p-Adic vision about cognition leads to exactly this prediction (see this). When the p-adic negentropy associated with quantum entanglement as a measure for the amount of conscious information is large, the standard entropy is also large. The smarter we get, the more we produce entropy.
  6. Homo sapiens appeared 300,000 years ago. The oldest Neanderthal fossils are 430,000 years old. The most recent global and long-lasting reversal, the Brunhes-Matuyama reversal, occurred 780,000 years ago.

    45,000 years is a reasonable estimate for the average period of the magnetic excursions (see this). The last magnetic excursion occurred 41,000 years ago. The reversal lasted only 250 years, but the Neanderthals disappeared. Also now a change in direction is taking place: could it lead to the extinction of our species, or at least the destruction of civilization, within a few hundred years? If these temporary reversals are periodic, our species would have survived 7 reversals. This gives cause for optimism. But on the other hand, we are doing our best to destroy our civilization.
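The counting above can be checked with a back-of-envelope calculation (a sketch assuming strict periodicity, which the text itself notes does not hold exactly):

```python
# Order-of-magnitude check: how many average excursion periods fit into the
# age of Homo sapiens? (Strict periodicity is an idealization.)
species_age = 300_000        # years since Homo sapiens appeared
excursion_period = 45_000    # average period between excursions, years
full_periods = species_age // excursion_period
print(full_periods)          # 6 full periods, i.e. roughly the 7 excursions cited
```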

Is it possible to estimate the time scales for the duration of a given magnetic field orientation from basic physics? The durations of the episodes seem random and the durations of the transitions also vary. The p-adic length scale hypothesis suggests that the periods come in powers of two. Surprisingly, an esoteric view of the evolution of consciousness, the so-called Yuga cycle, also predicts octaves of a basic period and gives nearly the same quantitative predictions.
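The powers-of-two claim can be checked against the two average periods quoted earlier (only the two numbers are from the text; the octave comparison is my illustration):

```python
import math

# The two average periods cited in the text (years).
T_excursion = 45_000     # short-lived reversals (excursions)
T_reversal = 450_000     # long-lasting global reversals

# If periods come as octaves T0 * 2^k of a basic period T0, the ratio of any
# two periods should lie near a power of two.
ratio = T_reversal / T_excursion       # 10.0
k = round(math.log2(ratio))            # nearest octave exponent: 3
print(ratio, 2**k)                     # 10.0 vs 8: within ~25% of an octave relation
```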

Period doubling and stochastic resonance, requiring the presence of a periodic perturbation and noise, could explain these characteristics. The first candidate for the periodic perturbation is the period of equinox precession. A better candidate is the orbital period of planet Sedna to which Earth would have monopole flux tube contacts. The noise would be thermal noise due to the aging of the magnetic body of Earth leading to its "death" and reincarnation by magnetic reversal or excursion.

See the article Could geomagnetic reversals and excursions relate to extinctions and collapses of civilizations? or the chapter Magnetic Bubbles in TGD Universe: part II.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, November 04, 2024

Simulation hypothesis and TGD

Heikki Hirvonen asked for my opinion about the simulation hypothesis. I must say that I find it difficult to distinguish the simulation hypothesis of Boström from pseudoscience. It says nothing about physics. It is not inspired by any problem, nor does it solve any problem. And it only creates problems: for instance, who are the simulators and what physics do they obey? We would be just computer programs. But how can computer programs be conscious? This is the basic problem of materialism. One can introduce the magic word "emergence" but it only sweeps the problem under the rug.

Some systems can of course create simulations of the external world and even of themselves. Neuroscience talks about a self model, which is a very real thing. Modern society is busily simulating the physical world and its activities. But this has nothing to do with Boström's hypothesis about a mysterious outsider as a simulator and ourselves as computer programs, who can never know who this mysterious simulator is (the God of the AI age).

It is however interesting to look whether the simulation hypothesis might have some analogies in TGD.

  1. TGD predicts a hierarchy of field bodies as space-time surfaces which are counterparts of the Maxwellian and more general gauge fields. Field bodies are predicted to be conscious entities carrying phases of ordinary matter with a large value of effective Planck constant, making them quantum coherent systems in large scales. They give rise to a hierarchy of conscious entities.

    For instance, EEG would communicate information from the biological body to the field body and control signals from the field body to the biological body. In quantum biology field bodies serve as bosses, or more like role models, for the ordinary biomatter. If I were forced to talk about simulation, I would say that the biological body is a simulation of the magnetic body.

  2. In TGD cognition has p-adic correlates as p-adic space-time surfaces. Cognitive representations correspond to their intersections with real space-time surfaces and consist of a discrete set of points in an extension of rationals. They could be called simulations since cognition is a conscious representation of the sensory (real) world. All physical systems would have at least rudimentary cognitive consciousness and would be performing these "simulations".
For the TGD view about Universe as a conscious quantum Platonia see this. For the TGD view of how computers could become conscious see this.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Sunday, November 03, 2024

Tegmark's Platonia and TGD Platonia

Max Tegmark has published a book titled "Our Mathematical Universe" (see this) about the idea that only mathematical objects exist objectively and that a mathematical object exists if this is possible in a mathematically consistent way.

Also TGD leads to this view. The basic problem, shared with materialism, is that the existence of mere mathematical objects does not imply subjective existence. In TGD, quantum jumps for the spinor fields in Platonia, identified as the "world of classical worlds" consisting of space-time surfaces in H=M4×CP2 obeying holography = holomorphy principle, bring in consciousness, and zero energy ontology solves the basic problem of quantum measurement theory and allows the experience of free will.

One of the participants of the discussion gave a short summary of the ontology of Tegmark. Tegmark proposes a multiverse with four levels, each more complex and abstract than the last:

  1. Level I (Observable Universe): This is the most familiar level. In this view, the observable universe is just one of many pockets in an enormous (potentially infinite) space, all governed by the same physical laws.
  2. Level II (Bubble Universes): Here, each universe (or "bubble") might have different physical constants and properties. It's like having different rules for physics in each universe: one could have a different speed of light, while another might not have gravity at all.
  3. Level III (Many-Worlds Interpretation): This level involves quantum mechanics. Every time a quantum event occurs, the universe "branches" into different outcomes, creating countless parallel universes. Think of it like a choose-your-own-adventure book that explores all possible story paths.
  4. Level IV (Ultimate Mathematical Universe): This is where MUH comes in. Level IV is a collection of every possible mathematical structure, even those that don't resemble anything we would call a universe. According to MUH, each mathematical structure is a complete, self-contained universe. If a structure is logically consistent, it exists.
What about quantum Platonia according to TGD?
  1. In TGD only the level I Universe, expanded from a real universe to an adelic one to describe the correlates of cognition, is needed. The physical Universe is fixed by the condition that the structures involved exist mathematically. In the TGD framework the mathematical existence of the twistor lift of TGD fixes H=M4×CP2 completely. Number theoretical arguments, as well as standard model symmetries and interactions, also fix H. The space-time dimension is fixed to D=4 by the existence of pair creation made possible by exotic smooth structures, which act as the standard smooth structure with point-like defects identifiable as vertices for fermion pair creation. The mathematical existence of the Kähler geometry of the "world of classical worlds" (WCW), consisting of space-time surfaces satisfying holography, fixes it. This was already observed by Freed for loop spaces: infinite-dimensional existence is highly unique.
  2. The Level II Universe would be a multiverse and is not needed in TGD, since H=M4×CP2 is fixed by mathematical existence and no spontaneous compactification leading to a multiverse takes place. Inflation is replaced in TGD with the transformation of galactic dark matter, as dark energy of cosmic strings, to ordinary matter, and there are no inflaton fields forcing the multiverse (see this).
  3. The Level III Universe is not needed. The new quantum ontology, zero energy ontology (ZEO), leads to the solution of the quantum measurement problem and no interpretations are needed. It also leads to a theory of consciousness and a new view of the relation between geometric and subjective time. The implications are non-trivial in all scales, even in cosmology. One could of course call the hierarchy of field bodies an analog of the multiverse.
  4. The Level IV Universe corresponds to WCW and WCW spinor fields. M8-H duality relating the number theoretic and geometric visions of TGD, the holography = holomorphy vision, and Langlands duality in the 4-D case imply that space-time surfaces are representations of complex numbers. Space-time surfaces can be multiplied and summed: this arithmetic is induced by the function field arithmetic for generalized analytic functions of H coordinates (3 complex and one hypercomplex coordinate).

    Space-time surfaces correspond to roots for pairs of this kind of functions and form hierarchies beginning with hierarchies of polynomials with coefficients in extensions of rationals but containing also analytic functions of this kind and even general analytic functions. The quantum counterparts of mathematical concepts like abstraction, concept, set, Hilbert space, Boolean algebra follow using the arithmetics of space-time surfaces.

  5. Consciousness emerges from quantum jumps between quantum states, that is spinor fields of WCW representing quantum concepts. WCW spinors correspond to Fock states for the fermions of H, and their Fock state basis forms a representation of Boolean algebra. One can say that logic emerges via the spinor structure of WCW, which is a square root of its geometry. The number theoretic vision implies the increase of algebraic complexity and hence evolution.

    ZEO allows the quantum Platonia to learn about itself by generating memory in SFRs and also makes memory recall possible: the failure of exact classical determinism for the space-time surfaces as analogs of Bohr orbits makes this possible. The seats of non-determinism represent memory sites (see this). Quantum Platonia evolves as a conscious entity as the WCW spinor fields defining conscious entities disperse to more and more algebraically complex regions of it.
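The idea that multiplication of space-time surfaces is induced by multiplication of the defining functions can be illustrated with a toy one-variable analog (my simplification, not TGD itself): if a "surface" is represented by the root set of a polynomial, the product polynomial has as its root set the union of the two root sets.

```python
# Toy analog: represent a "surface" by the root set of a polynomial;
# polynomial product <-> union of root sets, so the arithmetic of the
# represented objects is induced by function arithmetic.
def polymul(p, q):
    """Multiply two polynomials given as coefficient lists (highest degree first)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def peval(p, x):
    """Evaluate a polynomial by Horner's rule."""
    v = 0.0
    for c in p:
        v = v * x + c
    return v

p = [1, -3, 2]    # x^2 - 3x + 2 = (x - 1)(x - 2), root set {1, 2}
q = [1, -5, 6]    # x^2 - 5x + 6 = (x - 2)(x - 3), root set {2, 3}
pq = polymul(p, q)
# Every root of p or q is a root of the product: the root sets unite.
assert all(peval(pq, x) == 0 for x in (1, 2, 3))
```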

See the article Space-time surfaces as numbers, Turing and Gödel, and mathematical consciousness and the chapter About Langlands correspondence in the TGD framework.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

The mirror Universe hypothesis of Turok and Boyle from the TGD point of view

The popular article 'Cosmic inflation:' did the early cosmos balloon in size? A mirror universe going backwards in time may be a simpler explanation, by Neil Turok, tells about the proposal of Neil Turok and Latham Boyle stating that the early Universe effectively contained the CP, or equivalently T, mirror image of the ordinary Universe, and claims that this hypothesis solves some problems of cosmology.

The article contains some mutually conflicting statements related to the interpretation of time reversal: I don't know whom to blame.

1. What is meant by time reversal?

Time reversal has two different meanings, which are often confused. It can refer to time reflection or change of the arrow of time. This confusion appears also in the article.

  1. The article states that T refers to a time reflection symmetry. The article also states that time flows backwards in the mirror universe. These two statements are not consistent: either the authors or the popularizers have confused T with time reversal in the thermodynamic sense.
  2. The arrow of time is fixed in standard QFT and therefore in thermodynamics. In quantization this means a selection of vacuum. What we call annihilation operators, annihilate the vacuum. For the other option, their hermitian conjugates would annihilate the vacuum with the opposite arrow of time.
  3. In the zero energy ontology (ZEO) of TGD, these arrows of time are associated with quantum states, which remain unaffected at the passive boundary of CD in the sequence of "small" state function reductions. This time reversal has nothing to do with T or CP. "Big" SFRs (BSFRs) change the roles of active and passive boundaries and change the arrow of time. These two arrows of time are in a central role in the TGD inspired cosmology and also in biology.
  4. Could the ordinary matter in phases with opposite arrows of time behave like a mirror universe? The arrow of time changes in BSFRs, which means the death or falling asleep of a conscious entity. By a simple statistical argument, half of the matter is in ordinary states and half in time reversed "sleep" states (half of the universe "sleeps"). Note that there is a scale hierarchy of conscious entities.

    The phases of matter with opposite arrows of time cannot see each other by classical signals. The detection process requires what is essentially pair creation of fundamental fermions. One could therefore say that in TGD the mirror universe exists in a well-defined sense.

  5. In fact, the change of the arrow of time in BSFRs is possible in arbitrarily long scales due to the hierarchy of Planck constants making quantum coherence possible even in astrophysical scales. This implies that the evolution of astrophysical objects is a sequence of states with opposite arrows of time. Living back and forth in geometric time implies that their evolutionary age is much longer than their geometric age, and this explains stars and galaxies older than the universe.
2. Problems related to the mirror universe hypothesis

  1. Suppose the mirror image in the theory of Turok et al is indeed T mirror image. One must explain why it is invisible for us. The proposal is that the mirror universe might be a mere mathematical trick. This makes me feel uneasy.
  2. In the proposed model the mirror image would consist mostly of antimatter, and the unobservability of the mirror universe would apparently solve the problem due to matter antimatter asymmetry. This does not however explain why the amount of antimatter in the universe, and of matter in its mirror, is so small. One must explain why CP breaking leads to this asymmetry.

    The TGD explanation of matter antimatter asymmetry suggests that antimatter is confined within cosmic strings and matter outside them and that the decay of the cosmic strings to ordinary matter as a counterpart of the inflation process violates CP symmetry and leads to the asymmetric situation.

3. Can the hypothesis solve the problem of dark matter?

The proposed hypothesis states that dark matter consists of right handed neutrinos and that they interact with ordinary matter only gravitationally.

  1. The problem is that the standard model does not predict right-handed neutrinos, so that the mirror universe would contain only the antiparticles of left-handed neutrinos, which would interact and would therefore not be dark. The standard model should be modified.
  2. In TGD, right-handed neutrinos are indeed predicted and their covariantly constant modes would behave like dark matter. Covariantly constant right handed neutrinos are the only massless spinor modes of M4×CP2 spinors but might mix with higher massive color partial waves. They could also represent an analog of supersymmetry. In TGD νR:s would appear as building bricks of fermions and bosons. Can νR:s exist as free particles? Number theoretic vision and Galois confinement suggests that this is not possible. Therefore νR:s would not solve the problem of galactic dark matter.

    In TGD the dark (magnetic and volume) energy of cosmic strings explains galactic dark matter but one cannot of course exclude the presence of right handed neutrinos and other fermions inside cosmic strings. Whether quantum-classical correspondence is true in the sense that the classical energy of cosmic strings actually corresponds to the energy of fermions inside them, remains an open question.

4. Does the mirror universe solve the entropy problem?

It is also claimed that the mirror universe solves the problem related to entropy. On the basis of the popular article I could not understand the argument.

  1. The second law suggests that the very early Universe should have had a very low entropy. This is in sharp conflict with radiation dominated cosmology.
  2. In TGD the situation is not so simple, since both arrows of time, and hence both thermodynamics, are possible: time reversed dynamics increases entropy in the opposite direction of geometric time, so that entropy apparently decreases with respect to the standard arrow of time. This effect is actually used to reduce the entropy of phase conjugate laser beams.

    In TGD however the very early Universe would consist of cosmic strings which would make collisions (here the dimension of space-time is crucial) causing their thickening and transformation to ordinary matter. This would lead to radiation dominated cosmology.

    But what is the entropy of the cosmic string dominated phase? The cosmic string dominated phase could have a very low entropy if the geometric excitations are absent (note that cosmic strings are actually 3-D and only effectively 1-D). The number of excited states (deformations) of the string increases rapidly with temperature. This implies Hagedorn temperature as a maximal temperature for cosmic strings.

    Was the very early Universe at the Hagedorn temperature, or was it heated from a very low temperature to the Hagedorn temperature, making a transition to a radiation dominated phase by the thickening of cosmic strings to monopole flux tubes and their subsequent decay to ordinary matter? If I must make a guess, I would say that the temperature was very low.
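The Hagedorn argument invoked above can be written out in its generic string-gas form (a textbook-style sketch, not a TGD-specific derivation): an exponentially growing density of states makes the canonical partition function diverge above a limiting temperature,

```latex
\rho(E) \sim e^{E/T_H}, \qquad
Z(T) = \int \mathrm{d}E\, \rho(E)\, e^{-E/T}
     \sim \int \mathrm{d}E\, e^{E\left(1/T_H - 1/T\right)},
```

so that Z converges only for T < T_H, and T_H acts as the maximal temperature of the string gas.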

See the article Latest progress in TGD or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Could quantum field theories be universal?

The findings of Nima Arkani-Hamed and his collaborators (see this), in particular Carolina Figueiredo, suggest a universality for the scattering amplitudes predicted by quantum field theories. Is it possible to understand this universality mathematically, and what could its physical meaning be?

The background for these considerations comes from TGD, where the holography = holomorphy principle and M8-H duality, relating the geometric and number theoretic visions, fix the theory to a high degree.

  1. Space-time surfaces are holomorphic surfaces in H=M4× CP2 and therefore minimal surfaces satisfying nonlinear analogs of massless field equations and representing generalizations of light-like geodesics. Therefore generalized conformal invariance seems to be central and also the Hamilton-Jacobi structures (see this) realizing this conformal invariance in M4 in terms of a pair formed by complex and hypercomplex coordinate, which has light-like coordinate curves.
  2. Quantum criticality means that minima as attractors and maxima as repulsors are replaced with saddle points having both stable and unstable directions. A particle at a saddle point tends to fall in the unstable directions and end up at a second saddle point, which is attractive with respect to the degrees of freedom considered. Zero energy ontology (ZEO) predicts that the arrow of time changes in "big" state function reductions (BSFRs). BSFRs make it possible to stay near the saddle point; this is proposed to be a key element of homeostasis. Particles can end up at a second saddle point by this kind of quantum transition.
  3. Quantum criticality has conformal invariance as a correlate. This implies long range correlations and vanishing of dimensional parameters for degrees of freedom considered. This is the case in QFTs, which describe massless fields.

    Could one think that the S-matrix of a massless QFT actually serves as a model for transition between two quantum critical states located near saddle points in future and past infinity? The particle states at these temporal infinities would correspond to incoming and outgoing states and the S-matrix would be indeed non-trivial. Note that masslessness means that mass squared as the analog of harmonic oscillator coupling vanishes so that one has quantum criticality.

What can one say of the massless theories as models for the quantum transitions between two quantum critical states?
  1. Are these theories free theories in the sense that both dimensional and dimensionless coupling parameters associated with the critical degrees of freedom vanish at quantum criticality? If the TGD inspired proposal is correct, it might be possible to have a non-trivial and universal S-matrix connecting two saddle points even if the theories are free.
  2. A weaker condition would be that dimensionless coupling parameters approach fixed points at quantum criticality. This option looks more realistic but can it be realized in the QFT framework?
QFTs can be solved by an iteration of the type DXn+1 = f(Xn), and it is interesting to see what this allows one to say about these two options.
  1. In the classical gauge theory situation, Xn+1 would correspond to the n+1:th iterate of a massless boson or spinor field, whereas D would correspond to the free d'Alembertian for bosons and the free Dirac operator for fermions. f(Xn) would define the source term. For bosons it would be proportional to a fermionic or bosonic gauge current multiplied by a coupling constant. For a spinor field it would correspond to the coupling of the spinor field to a gauge potential or scalar field multiplied by a dimensional coupling constant.
  2. Convergence requires that f(Xn) approaches zero. This is not possible if the coupling parameters remain nonvanishing or the currents become non-vanishing in physical states. This could occur for gauge currents and gauge boson couplings of fermions in low enough resolution and would correspond to confinement.
  3. In the quantum situation, bosonic and fermionic fields are operators. Radiative corrections bring in local divergences and their elimination leads to renormalization theory. Each step in the iteration requires the renormalization of the coupling parameters and this also requires empirical input. f(Xn) approaches zero if the renormalized coupling parameters approach zero. This could be interpreted in terms of the length scale dependence of the coupling parameters.
  4. Many things could go wrong in the iteration. Already, the iteration of polynomials of a complex variable need not converge to a fixed point but can approach a limit cycle and even chaos. In more general situations, the system can approach a strange attractor. In the case of QFT, the situation is much more complex and this kind of catastrophe could take place. One might hope that the renormalization of coupling parameters and possible approach to zero could save the situation.
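The remark about iterating polynomials can be made concrete with a standard toy example (the logistic map, my choice of illustration, not anything specific to QFT): the same one-parameter quadratic iteration can converge to a fixed point, settle on a limit cycle, or wander chaotically.

```python
# Toy illustration: iterating the quadratic map x -> r*x*(1-x) converges,
# cycles, or becomes chaotic depending on the parameter r.
def iterate(r, x0=0.2, n=2000, keep=4):
    """Iterate x -> r*x*(1-x) and return the last `keep` values."""
    x = x0
    tail = []
    for i in range(n):
        x = r * x * (1 - x)
        if i >= n - keep:
            tail.append(x)
    return tail

fp = iterate(2.8)[-1]                  # r = 2.8: convergence to the fixed point 1 - 1/r
assert abs(fp - (1 - 1 / 2.8)) < 1e-9
a, b, c, d = iterate(3.2)              # r = 3.2: stable period-2 limit cycle
assert abs(a - c) < 1e-9 and abs(a - b) > 0.1
print(iterate(3.9))                    # r = 3.9: chaotic, no convergence
```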
It is interesting to compare the situation to TGD. First, some general observations are in order.
  1. Coupling constants are absorbed in the definition of induced gauge potentials and there is no sense in decomposing the classical field equations to free and interaction terms. At the QFT limit the situation of course changes.
  2. There are no primary boson fields since bosons are identified as bound states of fermions and antifermions and fermion fields are induced from the free second quantized spinor fields of H to the space-time surfaces. Therefore the iterative procedure is not needed in TGD.
  3. CP2 size defines the only dimensional parameter and has geometric meaning unlike the dimensional couplings of QFTs and string tension of superstring models. Planck length scale and various p-adic length scales would be proportional to CP2 size. These parameters can be made dimensionless using CP2 size as a geometric length unit.
The counterpart of the coupling constant evolution emerges at the QFT limit of TGD.
  1. Coupling constant evolution is determined by number theory and is discrete. Different fixed points as quantum critical points correspond to extensions of rationals and p-adic length scales associated with ramified primes in the approximation when polynomials with coefficients in an extension of rationals determine space-time surfaces as their roots.
  2. The values of the dimensionless coupling parameters appearing in the action determining geometrically the space-time surface (Kähler coupling strength and cosmological constant) are fixed by the condition that the exponential of the action, which depends on the coupling parameters, equals its number theoretic counterpart, determined by number theoretic considerations alone as a product of discriminants associated with the partonic 2-surfaces (see this and this). These couplings determine the other gauge couplings, since all induced gauge fields are expressible in terms of H coordinates and their gradients.
  3. Any general coordinate invariant action constructible in terms of the induced geometry satisfies the general holomorphic ansatz giving minimal surfaces as solutions. The form of the classical action can affect the partonic surfaces only via boundary conditions, which in turn affect the values of the discriminants. Could the partonic 2-surfaces adapt in such a way that the discriminant does not depend on the form of the classical action? The modified Dirac action, containing couplings to the induced gauge potentials and metric, would determine the fermionic scattering amplitudes.
  4. In TGD the induction of the metric, spinor connection and second quantized spinor fields of H solves the problems of the QFT approach due to the condition that coupling parameters should approach zero in the limit of an infinite number of iterations. Minimal surfaces geometrize gauge dynamics. Space-time surfaces satisfying the holography = holomorphy condition correspond to quantum critical situations, and the iteration leading from one critical point to another is replaced with a quantum transition.
See the article TGD as it is towards end of 2024: part I or a chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.